Google Developer Day 2008

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In the past, I’ve been accused of writing too much Microsoft-focused content on this blog and, in my defence, this blog advertises itself as follows:

“Originally created as a place for me to store some notes, this blog comments on my daily encounters with technology and aims to share some of this knowledge with fellow systems administrators and technical architects across the ‘net. Amazingly, it’s become quite popular!”

My daily encounters with technology… well, as I’m an infrastructure architect who (mostly) works with Microsoft products, that would explain the volume of Microsoft stuff around here… but in order to be credible (and retain some objectivity) when I’m talking about Microsoft products, I’m also interested in what their competitors are doing. That’s why I’m also a Mac user and I dabble with Linux from time to time; my website uses an open source CMS (WordPress), running on Linux, Apache, MySQL and PHP (classic LAMP); I keep an eye on what VMware is up to; and, as well as using a bunch of Google products on the web I recently started using Google Apps for e-mail, calendar and contacts.

Since the Microsoft-Yahoo! merger-that-wasn’t, I’ve become increasingly interested in Microsoft’s online offerings and consequently I’m also watching the dominant force in Internet search as they expand into other areas online – that’s why I spent today at the Google Developer Day 2008. Aside from being an opportunity to visit the new Wembley Stadium (I do think they should have incorporated the iconic twin towers from the old stadium somewhere in the new structure), it’s a chance for me to find out a little about the technologies that Google is pushing right now. I feel a bit of a fraud as I’m not really a developer but I answered the registration form truthfully and Google accepted me here, so I guess that’s OK!

Over the course of the day, I noted some brief (and sometimes frivolous) highlights from the various sessions – think of it as a microblog in one post. Where I understand enough of the dev stuff, I’ll follow up with more detail later…

Stage at Google Developer Day 2008

[08:20] Right from the off, it’s been a positive experience. After arriving at the venue almost an hour before registration was due to commence, I was allowed in, invited to have a coffee and some breakfast, and a really helpful guy went and found me my delegate badge. Now I’m sitting here enjoying the free Wi-Fi (and grabbing one of the few seats that’s situated next to a floorbox so I can keep my notebook PC’s battery charged during the keynote).

Google Developer Day 2008 - rooms named after classic arcade games

[08:55] As I sat in the “Space Invaders” room waiting for the keynote session to begin, I was thinking that only Google would name the session rooms after classic computer games. Now it all makes sense… I just heard that the keynote will include the first public demo of the Android phone!

[09:10] Someone just changed the SSID on the Wi-Fi and I lost my connection mid-post… arghhh!

[09:30] I now have the rest of my delegate pack… including a snazzy gift-wrapped parcel…

Green parcel from Google Developer Day 2008

containing…

Little green Google man from Google Developer Day 2008
Little green man USB key from Google Developer Day 2008

A little green man… hang on… he’s removed his head – what’s he doing inside my Mac?

(It’s OK, he’s just giving me a copy of all the materials I might need to make the most of today).

[09:59] Why can’t Microsoft events be this much fun?

[10:00] The keynote is about to start…

[10:25] This keynote has lots of slides, few words, lots of pictures. I like it. Whatever the opposite of death by PowerPoint is, this is it.

[10:30] Mike Jennings is performing the first European demo of Android – the open source mobile stack.

Android demo at Google Developer Day 2008

[10:50] The keynote was an overview of what Google is doing to help people develop for the web. Highlights were:

  • The theme for today is client-cloud-connectivity:
    • Making the client more powerful.
    • Making the cloud more accessible.
    • Connecting pervasively.
  • Google Chrome is 100% open source (based on WebKit and the V8 JavaScript engine), designed to support today’s rich Internet applications.
  • Gears is a browser plugin to enable web application functionality that was previously only available on the desktop.
  • Google has two types of API – the various data APIs and those which provide AJAX functionality – both are designed to make Google services programmatically accessible.
  • Google App Engine allows organisations to run their applications on Google’s infrastructure in an attempt to overcome the financial and administrative hurdles associated with traditional computing.
  • Android provides a mobile application stack.
  • Google Web Toolkit (GWT) allows applications to be written in Java and run in cross-browser compiled JavaScript.
  • OpenSocial provides a family of APIs for connecting social websites.

Android at Google Developer Day 2008

[11:10] Hoping to learn more about Android in Mike Jennings’ session “An introduction to Android”…

[11:15] There’s no code in this session… I should be able to cope then ;-)

[11:25] Mike seems a nice guy but he’s clearly learning this deck as he goes…

[11:30] Into Q&A already?!

[11:50] 35 minutes to go and the Q&A is getting hard for the presenter… what’s interesting to me is that this Google-led presentation has degenerated into a group of developers and users feeding back to Google on things like security, usability, and other common considerations for mobile application development that don’t seem to have been considered. Some of the questions are tough… but that should be expected given the forum.

[12:00] He’s desperate to end this session (twice now he’s asked how much longer to go on for…). Poor guy – I feel really sorry for him the way this session has gone but there was nothing here that shouldn’t have been expected. Hopefully Google has a better idea of the state of the mobile market than this session would indicate.

[12:05] There’s a guy on the front row writing a book: Professional Android Application Development (to be published by Wrox with a November 2008 release date).

[12:20] It seemed to me that Mike was strangled by the Google PR machine but, thanks to his great sense of humour, he still managed to end the session on a high note. Key points were:

  • Based on a poll of the room, around 50% of people have more than one mobile handset; 25% of people have no land-line at home; and there was no-one here that does not have a mobile. This should be caveated heavily – this was a room full of geeks – but it is nevertheless an interesting study.
  • Android is an open mobile handset project: an open development model; open to the industry (free to carriers/manufacturers/enthusiasts); open to the developer with the ability to integrate at a deep level in the stack (e.g. replacing the dialler).
  • The Android runtime environment is implemented in Java running on a Linux kernel. Some classes are unavailable (i.e. those that are not relevant to mobile computing).
  • Android should be expected during the 4th quarter of 2008.
  • Google appears unprepared for the questions that will be asked of any new platform around security, usability, upgradability – or even why people will choose Android over more established competition. Maybe they are prepared but, to quote Mike Jennings, “these kind of questions are over my pay grade”.

[12:25] Ooo! Curly Wurlys on the snack table!

[12:30] I like geek t-shirts – I just saw one which said “Gears – we power the Tubes”

[12:35] In this session Aaron Boodman will be talking about Google Gears… let’s hope that he is allowed to say more than Mike Jennings was.

[13:10] Great session – gave me just enough to learn something about the APIs that Gears provides. Key points were:

  • Gears is a browser extension which provides JavaScript APIs for web application development, available for Internet Explorer (5 or later), Mozilla Firefox (1.5 or later), Windows Mobile, Chrome (which is built on Gears) and now Safari. Android will support Gears (at the moment it just has a stub API).
  • Gears is now a year old and has dropped its Google prefix.
  • Gears is not just about offline access to web applications although the initial implementation was about a database, local server and worker pool.
  • APIs include desktop shortcuts, file system, binary object access and geolocation.

[13:15] I’ve just managed to sneak a quick peek outside at the stadium itself – it’s very impressive. We’ve been asked not to use any photos that identify Wembley Stadium for commercial purposes but this is just a personal snapshot (actually, it’s five of them, stitched together in Photoshop CS3).

The new Wembley Stadium

(Someone seems to have stolen half the pitch…)

[13:30] Fooling around whilst waiting for lunch…

Me at Google Developer Day 2008

[14:50] I thought that my web access was fast here… I just ran a speed test and I’m getting about 14Mbps! This is the best Internet access I’ve ever had at a conference.

[14:55] Looking around the delegates it seems that Macs are pretty common among developers who follow Google technologies! I reckon I’ve seen 2-3 MacBooks for every PC laptop here today (and several of the PCs I saw were running Linux)… as someone who lives primarily in the Microsoft world, this is an interesting experience.

[15:00] Ryan Boyd is just starting to talk about mashing up Google APIs… hopefully I can keep up!

[16:10] That was hard work but I just about held in there… Ryan demonstrated a number of APIs working together, including example code. A few points to note:

  • AtomPub is used to define feeds (mostly for blog syndication), made up of entries containing additional information.
  • Four methods are applied to feeds (create, retrieve, update, delete) and these relate to the equivalent HTTP methods (POST, GET, PUT, DELETE) – see the sketch after this list.
  • Standard HTTP status codes are returned.
  • Google has extended AtomPub to provide:
    • A data model.
    • Batch operations.
    • Authentication (client login with username and password, AuthSub or OAuth).
    • Alternate output formats for non-Atom data (e.g. RSS, KML, JSON).
  • The OAuth Playground is a good place to understand how OAuth authentication works – AuthSub is similar in some ways and has been around longer but OAuth is a standardised implementation and should grow over time.
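
To make the CRUD-to-HTTP mapping a little more concrete, here’s a minimal sketch in Python (my own illustration – not code from the session) using the third-party requests library, with a placeholder feed URL and authentication token:

```python
# Rough sketch of the AtomPub CRUD-to-HTTP mapping described above.
# Assumptions: the third-party "requests" library is installed, and the feed
# URL and authentication token below are placeholders, not working values.
import requests

FEED_URL = "https://www.google.com/calendar/feeds/default/private/full"  # hypothetical example feed
HEADERS = {
    "Content-Type": "application/atom+xml",
    # One of the authentication schemes mentioned above (client login, AuthSub
    # or OAuth) would normally supply this value; it is a placeholder here.
    "Authorization": 'AuthSub token="PLACEHOLDER"',
}

NEW_ENTRY = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Example entry</title>
  <content type="text">Created via AtomPub</content>
</entry>"""

# Retrieve = GET: fetch the feed (an Atom document made up of entries).
# Non-Atom output formats were typically requested with a query parameter
# (e.g. ?alt=json) rather than a different URL.
feed = requests.get(FEED_URL, headers=HEADERS)
print(feed.status_code)  # standard HTTP status codes are returned, e.g. 200

# Create = POST: add a new entry to the feed; the server replies with the
# location of the newly created entry.
created = requests.post(FEED_URL, data=NEW_ENTRY, headers=HEADERS)
entry_url = created.headers.get("Location")

# Update = PUT and delete = DELETE act on the individual entry's URL.
if entry_url:
    requests.put(entry_url, data=NEW_ENTRY, headers=HEADERS)
    requests.delete(entry_url, headers=HEADERS)
```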

[16:20] My little green man now has some blue and red playmates.

[16:25] Next up, Google Web Toolkit (GWT): the technical advantage, presented by Sumit Chandel. This will also be developer heavy (this is a developer day after all!) so I may struggle again…

[16:35] Just noticed that quite a few people are using sub-notebook PCs here…

[16:50] And I’ve never seen as many stickers on PCs as I have today… maybe that’s a dev thing too?!

[17:15] Into Q&A now, I won’t understand the answers but to summarise the key points from the GWT session:

  • GWT allows developers to write AJAX applications more quickly, compiling Java into optimised JavaScript and employing techniques such as deferred binding to ensure that only those elements that are required for the local browser implementation are used.
  • Browser quirks are no longer a problem – GWT handles these for all supported browsers.
  • With GWT, there are no more memory leaks! A bold statement and actually there may be some where JavaScript native interface (JSNI) calls are made but there should be none for pure GWT applications (read more in Joel Webber’s article on DOM events, memory leaks and you).
  • GWT adds history support to AJAX applications with its implementation of really simple history (RSH).
  • GWT enables code reuse through design patterns.
  • Faster application development is accommodated using IDEs such as Eclipse and other Java tools but, specifically, GWT allows for debugging in bytecode.

[17:20] Just swapped my evaluation form for a t-shirt… my kids will love the Google icons on the front!

Google Developer Day 2008 T-Shirt

[17:45] Google has a new UK developer blog – and they just showed us a cool wrap-up video from the day – hopefully that will be on YouTube later. [Update: here it is, courtesy of YouTube]:

[17:50] Look! A Googler – complete with lab-coat!

Google employee with labcoat at Google Developer Day 2008

[17:55] Mmm… beer!

[17:55] And the fun continues… with giant Chess, Connect 4, Jenga, arcade games (including Pacman and Space Invaders), Mega Blocks… and… somewhat bizarrely, a PHP Elephant!

PHP Elephant

[18:15] Whilst we were chatting, Tim Anderson made a very valid point that I hadn’t considered whilst I was getting excited about technology – Google is an advertising company and, unlike Microsoft or any of the other vendors that I enjoy a relationship with, they don’t need to sell software – they just want people to use their search (and other services) and, if their vision of the web continues to develop, the ad revenues should keep on rolling in too.

[18:20] Just looked out of the window and saw that the turf is slowly returning to Wembley’s pitch. Only about a quarter missing now!

[18:35] Now that is a good use for the presentation projectors… Wii Sports/Guitar Hero II!

Playing games on the projectors after Google Developer Day 2008

[18:55] Mmm… pizza!

[20:00] I really should head home now!

I’ve really enjoyed this event – a fantastic opportunity to learn more about Google’s developer tools and APIs and, who knows, I may even get around to implementing some of them here (if this site ever gets its long awaited AJAX overhaul). From chatting with the event organisers, I learned that this was the second annual Google Developer Day in the UK and there were just over 500 people here today. Google is looking to run more events as their portfolio expands – possibly even some smaller, more focused, events but, for me, this was the perfect balance between a conference (for which my employer is unlikely to support attendance, based on recent experience) and the shorter events – providing a small amount of information on a wide variety of topics.

Hopefully I’ll be at next year’s GDD too. As for the Microsoft posts… normal service will be resumed at 9am tomorrow.

Active Directory design considerations: part 2 (forest and domain design)

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Having set the scene for this series of posts, the first area to examine is Active Directory forest and domain design.

Bearing in mind the key principle that requirements should dictate design, and that the solution should be kept as simple as possible, AD designers should look to consolidate wherever possible: a single forest (with a single domain) should be the starting point, after which any requirements for scaling out can be considered.

Reasons for implementing multiple forests include:

  • Multiple schemas (to avoid application conflicts).
  • Resource forests (deliberate isolation).
  • Distrust of forest administrators (autonomy).
  • Legal regulations around application/data access.
  • Requirements to be disconnected for long periods (e.g. on a ship).

Forest design models

Single organisational forest

The single organisational forest is the starting point. In this model, users, computers and applications are all in the same forest, providing a simple Active Directory. One major advantage of having a simple AD is that many application designs will also be simplified (e.g. Exchange Server or MOSS) and delegation of administration is still possible; however, it is absolutely essential that forest-level administrators are trusted.

To mitigate the risk of rogue administrators, many organisations rely on detection (auditing and monitoring security logs – flagging any events after the fact). In many cases the effort of implementing an extra forest outweighs the risk of an exploit from a rogue administrator. Other mitigation steps include keeping highly privileged groups (e.g. Enterprise Admins and Domain Admins) empty (or at least down to a minimal number of users) and closely monitoring membership as well as implementing two-factor authentication for highly privileged accounts.

Multiple organisation forest model

The multiple organisation forest model is applicable where there are distinct business groups that require limited sharing of resources whilst retaining autonomy and isolation. In this model users, computers and applications all exist within their respective forests and a trust (1 or 2 way, as appropriate) is established, with selective authentication to control the rights granted from one forest to the other.

This model can be costly and often causes additional complexity (e.g. if Exchange Server is used in the two organisations, then identity management tools may be required for calendar and contact information).

Shared resource forest model

According to Microsoft, the shared resource forest model is gaining in popularity as it provides flexibility as organisations are created and merged but still require some sharing of resources. Users and computers exist in the appropriate account forests and trusts are created as necessary to access application(s) in a separate resource forest.

With this model, an application such as Exchange Server would be installed into the resource forest (as a single organisation) and the users in the account forests would see the global address list from the resource forest, avoiding the need for directory synchronisation tools.

Potential downsides of this approach are the extra servers that will be required and the corresponding management overhead; however it is flexible and is commonly deployed.

Shared account forest model

The shared account forest model is similar to the shared resource forest model except that a common account forest is used for all users and computers, with various resource forests deployed for restricted access to data and applications and corresponding trust relationships with the account forest. With this model, users can log on anywhere but some control is exercised over their access to applications and data.

This model might also be used in an extranet scenario – for example MOSS in an extranet forest but with access provided to internal accounts using a forest trust or through ADFS.

Considerations for domain design

Having decided on the overall forest structure, domain design needs to be considered and this is also simplified where a single domain exists within each forest (this is the most straightforward, and hence least expensive, option to implement, manage and recover). Multiple domains may need to be considered:

  • Where there is a large number of frequently changing attributes.
  • To reduce replication.
  • To control replication over slow links.
  • To preserve legacy Active Directory structures.

With Windows Server 2008, it is no longer necessary to implement a separate domain where an alternative password policy is required (e.g. PIN access for mobile users) as Active Directory Domain Services supports fine-grained password policies. Note that these policies are not applied at an organisational unit (OU) level but through group membership or at an individual user level. To aid when troubleshooting application of multiple policies, Microsoft recommends that security groups are used for policy application and users added to groups accordingly.

A domain is a replication boundary but, whereas network links were often poor in the Windows 2000 era, these days bandwidth is more plentiful and controls may be exercised over replication. Microsoft considers that the only real hard limit is the maximum number of domain controllers, which was around 1200 under Windows Server 2003 due to the limitations of sysvol replication using the file replication service (FRS). With Windows Server 2008 this is no longer a concern, once the domain has been switched to use DFS-R for replication.

In short, there are very few technical reasons for separate domains; however this may be influenced by political concerns.

Forest and domain functional levels

Forest and domain functional levels can drive requirements for domain design, with consideration due to migration vs. an in-place upgrade. On the face of it, in-place upgrades seem simple, but the health of the existing AD needs to be considered. If the domain has been upgraded previously from Windows 2000 to 2003, there may be older groups in place which do not use linked value replication, or there may be issues around strict replication consistency.

The basic changes at each level are:

  • Windows Server 2003 interim forest functional level:
    • Linked value replication.
    • Different replication compression ratios.
    • Improved knowledge consistency checker.
  • Windows Server 2003 forest functional level:
    • Forest trusts (and selective authentication).
    • Deactivation of attributes within the schema.
    • Domain renaming.
    • Read only domain controllers (requires Windows Server 2008, plus schema updates).
  • Windows Server 2008 domain functional level:
    • Fine-grained password policies.
    • DFS-R for sysvol.
    • Last interactive logon information.

Domain naming

Domain naming ought to be the simple part of the design; however it is often heavily influenced by politics. Whilst domain renames are possible, they are generally not advised due to the potential impact on other applications.

For each domain, there are two names to consider – NetBIOS and DNS.

The NetBIOS name must not exceed a maximum length of 15 characters and must be unique on the network.

Meanwhile, Microsoft recommends that the DNS name does not duplicate an existing Internet domain name and that it is registered with InterNIC (to prevent future conflicts – this also means that once-common naming conventions such as .local are no longer recommended).

In general, the NetBIOS and the domain portion of the DNS names should be made to match one another as many tools expect one to be derived from the other; however single label names should not be used as they cannot be registered and may cause issues with certain applications (Microsoft knowledge base article 300684 has more details). Also, the name should not represent a business unit or division (as this is likely to change over time).
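
Those naming rules are simple enough to express as a quick sketch – this is just my own illustration of the checks described above (the 15-character NetBIOS limit, no single-label DNS names, and the NetBIOS name matching the first DNS label), not a Microsoft tool:

```python
# Sketch of the domain naming checks described above; the example names are
# made up for illustration.
def check_domain_names(netbios_name, dns_name):
    """Return a list of problems with a proposed NetBIOS/DNS name pair."""
    problems = []
    if len(netbios_name) > 15:
        problems.append("NetBIOS name exceeds 15 characters")
    labels = dns_name.split(".")
    if len(labels) < 2:
        problems.append("single-label DNS names should not be used")
    if labels[0].upper() != netbios_name.upper():
        problems.append("NetBIOS name does not match the first DNS label")
    return problems

print(check_domain_names("CONTOSO", "contoso.com"))  # [] - no problems
print(check_domain_names("CORP", "corp"))            # flags the single-label name
```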

Summary

After following the advice in this article, the forest and domain structure, level and naming should all be clear.

In the next post in this series, I’ll take a look at organisational unit design.

Active Directory design considerations: part 1 (introduction)

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks back, I wrote a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series (Introduction, Remote offices, Controlling network access, Virtualisation, Security, High availability and data centre consolidation).

Session 2 of the MCS Talks series looked at Active Directory (AD), so I’m kicking off a new series of posts here based on the information from that webcast, supplemented where appropriate with my own experiences.

The original webcast on which this series was based was presented by Andrew Hill and Rob Lowe (who are both consultants with Microsoft Consulting Services in the UK) and they stressed that there are 6 tenets to AD design which are inextricably linked:

  • Complexity.
  • Cost.
  • Fault tolerance.
  • Performance.
  • Scalability.
  • Security.

The main point that they wanted to make was to let requirements dictate design (to avoid over-complicating the solution) and that is the focus in each of the posts that will make up this series.

The rest of this series will examine key design considerations for forest/domain design, organisational unit structure, group policy objects, security groups, domain controller placement, site topology, domain controller configuration and DNS. Two important areas that have not been included though are backup/recovery of AD (I’m reading a book on AD disaster recovery and will post my review soon) and delegation of administration. Also, some previous knowledge is assumed – this is not an introduction to Active Directory.

Microsoft has also provided a collection of AD design resources on the MCS Talks blog.

Using Google Apps for e-mail and contact management

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A couple of weeks back, I wrote about how I’d switched a big chunk of my home/small business IT to Google Apps and was using it as part of a solution to keep my work and personal calendars separate, but in sync.

Calendar is all very well, but e-mail is still my main communications tool. So how have I found the switch to Google Mail and how do I keep my contacts in sync between devices? Actually, it’s been remarkably straightforward but I have learnt a few things along the way and this post describes the way I have things working.

Switching to GMail was as simple as updating the MX records for my domain but having done so, I needed to get my various devices working together – that means home and work computers as well as my iPhone.
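
As a quick sanity check after making the DNS change, something like the following sketch (my own illustration, using the third-party dnspython package – not part of Google’s instructions) shows which mail servers a domain is advertising:

```python
# Look up a domain's MX records to confirm that mail is now routed to Google.
# Assumes dnspython 2.x (pip install dnspython); the domain is a placeholder.
import dns.resolver

domain = "example.com"  # substitute your own domain
answers = dns.resolver.resolve(domain, "MX")

for record in sorted(answers, key=lambda r: r.preference):
    print(record.preference, record.exchange)

# After the switch, the lowest-preference record should be something like
# ASPMX.L.GOOGLE.COM, with the alternative Google hosts as backups.
```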

My home computer is a Mac, so I simply enabled IMAP access on my Google Apps mail account and made sure I followed Google’s recommended client settings for setting up Apple Mail. There’s not much more to say really – Google provides an IMAP service and Apple provides an IMAP client.
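
Because it really is just standard IMAP, anything that speaks the protocol can talk to the mailbox – here’s a minimal sketch using Python’s built-in imaplib (my own illustration rather than anything Google documents; the server name and port are the published Google Mail IMAP settings, the credentials are placeholders):

```python
# Minimal sketch: connecting to Google Mail over IMAP using only the Python
# standard library. A Google Apps account signs in with the full e-mail
# address; the credentials below are placeholders.
import imaplib

conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)    # SSL on port 993
conn.login("you@yourdomain.com", "your-password")  # placeholder credentials

# List the folders the server exposes - this is where the [Google Mail]
# Sent Mail, Drafts and Trash folders mentioned below appear alongside the
# client's own folders.
status, folders = conn.list()
for folder in folders:
    print(folder.decode())

# Select the inbox read-only and report how many messages it holds.
status, data = conn.select("INBOX", readonly=True)
print("Messages in INBOX:", data[0].decode())

conn.logout()
```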

Apple Mail

On a Windows PC I would have used Windows Mail/Outlook Express (depending on the version of Windows) or Outlook to achieve the same thing. Even so, on the Windows PC that I use for work, I have Google Chrome installed, so I set myself up with a Chrome application shortcut for my Google Apps e-mail account. It’s only webmail, but GMail is dripping with AJAX and so highly functional and very usable.

Google Mail as a Chrome application

With my PCs set up, that left the iPhone. Again, Google publishes advice for configuring IMAP with the iPhone (as well as recommended client settings) and I followed it.

Folder list in Mail on the iPhone

I’m still a little confused about what is being saved where – my iPhone mail application has a Sent folder with some items in, but there’s another one called Sent Mail underneath [Google Mail] – similarly, I have two Drafts folders – as well as both Trash and Deleted Messages. None of that really matters though as all my mail seems to be in the Google Mail account (automagically… I’m not going to get too hung up on the details). Push e-mail would be nice (at the moment I have to tell the phone to periodically check for e-mail) but I’m sure Google will add that feature in time – the important thing is that it seems to work.

I tend to use the iPhone’s built-in mail application most of the time but the iPhone interface to GMail is pretty good too and has the advantage that it groups messages by conversation, rather than using the traditional approach of showing individual messages.

Mail on the iPhone
Google Mail on the iPhone

With e-mail working, I turned my attention to my contacts. Google Mail was doing a good job of identifying the people I’d sent e-mail to and creating associated contacts but I wanted to make sure that I had the same contact list available natively on the Mac and the iPhone. No problem – the Mac OS X Address Book application includes Google Contact syncing although I’m a little confused why I have it enabled in both the Address Book application and in iTunes (Contact Sync uses iTunes for synchronisation). Then, Address Book and iTunes worked together to make the contacts available on the iPhone (regardless of the Google part of the solution).

It’s worth noting that I didn’t think the address book synchronisation was working, but signing out of Google Mail (and then back in again) seemed to force a refresh of the contact information inside Google Mail.

Importantly, Google Mail’s contact functionality does not destroy information stored for contacts that it doesn’t know what to do with. For example, I’ve followed Jaka Jančar’s advice for adding Skype usernames to the OS X Address Book and Google Mail just ignores the extra information.

That just left bringing all of my legacy e-mail into my Google Apps mailbox. I haven’t been brave enough to do that yet (actually, it needs a lot of consolidation first) but I will do it eventually – and, when I do, I’ll be sure to blog about how it went…

Virtualisation as an enabler for cloud computing

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In my summary of the key messages from Microsoft’s virtualisation launch last week, I promised a post about the segment delivered by Tom Bittman, Gartner’s VP and Chief of Research for Infrastructure and Operations, who spoke about how virtualisation is a key enabler for cloud computing.

Normally, if I hear someone talking about cloud computing, they are either predicting the death of traditional operating systems (notably Microsoft Windows), or they are a vendor (perhaps even Microsoft) with their own view on the way that things will work out, or they are trying to provide an artificial definition of what cloud computing is and is not.

Then, there are people like me – infrastructure architects who see emerging technologies blurring the edges of the traditional desktop and web hosting models – technologies like Microsoft Silverlight (taking the Microsoft.NET Framework to the web), Adobe AIR (bringing rich internet applications to the desktop) and Google Gears (allowing offline access to web applications). We’re excited by all the new possibilities, but need to find a way through the minefield… which is where we end up going full circle and returning to conversations with product vendors about their vision for the future.

What I saw in Bittman’s presentation was an analyst, albeit one who was speaking at a Microsoft conference, talking in broad terms about cloud computing and how it is affected by virtualisation. No vendor allegiance, just telling it as it is. And this is what he had to say:

When people talk about virtualisation, they talk about saving money, power and space – and they talk about “green IT” – but virtualisation is more than that. Virtualisation is an enabling technology for the transformation of IT service delivery, a catalyst for changing architectures, processes, cultures, and the IT marketplace itself. And, through these changes, it enables business transformation.

Virtualisation is a hot topic but it’s also part of something much larger – cloud computing. But rather than moving all of our IT services to the Internet, Gartner sees virtualisation as a means to unlock cloud computing, so that internal IT departments deliver services to business units in a manner that is more “cloud like”.

Bittman explained that in the past, our component-oriented approach has led to the management of silos for resource management, capacity planning and performance management.

Gartner: Virtualising the data centre - from silos to clouds

Then, as we realise how much these silos are costing, virtualisation is employed to drive down infrastructure costs and increase flexibility – a layer-oriented approach with pools of resource, and what he refers to as “elasticity” – the ability to “do things” much more quickly. Even that is only part of the journey though – by linking the pools of resource to the service level requirements of end users, an automated service-oriented approach can be created – an SoA in the form of cloud computing.

At the moment internal IT is still evolving, but external IT providers are starting to deliver service from the cloud (e.g. Google Apps, salesforce.com, etc.) – and that’s just the start of cloud computing.

Rather than defining cloud computing, Bittman described some of the key attributes:

  1. Service orientation.
  2. Utility pricing (either subsidised, or usage-based).
  3. Elasticity.
  4. Delivered over the Internet.

The first three of these are the same whether the cloud is internal or external.

Gartner: Virtualisation consolidation and deconsolidation

Virtualisation is not really about consolidation. It’s actually the decoupling of components that were previously combined – the application, operating system and hardware – to provide some level of abstraction. A hypervisor is just a service provider for compute resource to a virtual machine. Decoupling is only one part of what’s happening though as the services may be delivered in different ways – what Gartner describes as alternative IT delivery models.

Technology is only one part of this transformation of IT – one of the biggest changes is the way in which we view IT as we move from buying components (e.g. a new server) to services (including thinking about how to consume those services – internally or from the cloud) and this is a cultural/mindset change.

Pricing and licensing also changes – no longer will serial numbers be tied to servers but new, usage-based, models will emerge.

IT funding will change too – with utility pricing leading to a fluid expansion and contraction of infrastructure as required to meet demands.

Speed of deployment is another change – as virtualisation allows for faster deployment and business IT users see the speed in which they can obtain new services, demand will also increase.

Management will be critical – processes for management of service providers and tools as the delivery model flexes based on the various service layers.

And all of this leads towards cloud computing – not outsourcing everything to external providers, but enabling strategic change by using technologies such as virtualisation to allow internal IT to function in a manner which is more akin to an external service, whilst also changing the business’ ability to consume cloud services.

Working with raw digital camera images

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Recently, I went along to one of the meetings of my local camera club, which is something I’ve been meaning to do for a while but somehow never got around to. At the meeting, one of the members (Andy Gailer) gave a really interesting presentation on working with raw images. I’ve repeated most of the highlights here, adding a few notes of my own along the way.

Technical details

Raw images are exactly that – the raw pixel data that is captured by a digital sensor. So, in order to understand the use of camera raw, it helps to understand a little bit about the technology that creates the image.

Probably the most important part of a digital camera is the sensor that converts available light into electrical signals. Two types of sensor are commonly used: charge-coupled device (CCD); and complementary metal oxide semiconductor (CMOS). CCD is a more mature technology but CMOS is gaining popularity as it can be implemented using fewer components, requires less power and provides the data more quickly.

Regardless of the technology in use, digital camera sensors consist of an array of photodiodes (or “pixels”) collecting photons (minute amounts of energy which combine to make light) and each pixel is fitted with a microlens to focus light into the sensor site. The number of photons collected in each pixel is converted into an electrical charge and this charge is converted into a voltage, amplified, and converted to a digital value to be processed into a digital image, either in camera or, if a raw image is used, using a computer. It’s important to understand that, in the same way that a bucket can only hold so much water, a pixel can only hold a certain amount of light.

Sensors also come in a variety of sizes. A “full frame” 35mm sensor is the same size as a frame of 35mm film (24×36mm) but a compact digital camera will have a much smaller sensor. My Canon Digital Ixus 70 has a 5.75×4.31mm sensor but my Nikon D70 DSLR has a 23.7×15.5mm (Nikon DX) sensor. The Canon squeezes 7,077,888 pixels onto that tiny sensor whereas the Nikon only has 6,016,000 pixels, but each one of the Ixus 70’s pixels is significantly smaller than the D70’s and this will affect the image quality – that’s why not all megapixels are equal.
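
To put some rough numbers on that, here’s a back-of-an-envelope sketch (the horizontal pixel counts are my assumption of the cameras’ native resolutions – 3072×2304 and 3008×2000 – which multiply out to the totals above):

```python
# Rough comparison of photosite (pixel) size for the two cameras mentioned
# above. The pixel counts across the sensor are assumed native resolutions
# (3072x2304 for the Ixus 70, 3008x2000 for the D70).

def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Approximate width of a single photosite in micrometres."""
    return sensor_width_mm / pixels_across * 1000

ixus70 = pixel_pitch_um(5.75, 3072)  # compact: roughly 1.9 um per pixel
d70 = pixel_pitch_um(23.7, 3008)     # DSLR: roughly 7.9 um per pixel

print(f"Ixus 70 pixel pitch: {ixus70:.1f} um")
print(f"Nikon D70 pixel pitch: {d70:.1f} um")
print(f"Each D70 photosite has roughly {(d70 / ixus70) ** 2:.0f}x the area")
```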

Bayer sensor pattern

The quality of the image will also be affected by the way colour is captured – each pixel can only capture one of three colours as it has a red, green or blue filter over the top, usually arranged in a pattern known as the Bayer mask, with twice as many green filters as red or blue (the pattern is designed to mimic the way that we see colour). On top of the Bayer filter is an infrared filter, then an antialiasing filter (to reduce moiré) and each of these various filters steadily reduces the overall quality of the image.

An alternative sensor design (the Foveon X3) employs an arrangement that is similar to the coloured emulsion layers in photographic film, where the red, green and blue photosites are placed on top of one another (different colours of light penetrate to different depths in the sensor), meaning that every pixel location captures all three colours, but this sensor type is relatively uncommon and also suffers from poor low-light sensitivity.

Bayer sensor filtering

The resulting data from the sensor consists of three channels of photographic data – red, green and blue – but, with the exception of the Foveon sensor, each channel is incomplete because the mask means that only certain pixels will be activated for a given colour. During raw conversion (either in-camera, or on the computer), a process known as demosaicing is used in an attempt to fill in the missing pixel data, based on the comparative brightness of the surrounding pixels, and then sharpening is applied to counteract the effect of so many filters on the sensor.
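
To make demosaicing a little more concrete, here’s a toy sketch of the simplest (bilinear) approach – my own simplification, assuming NumPy and an RGGB Bayer layout; real raw converters use far more sophisticated algorithms:

```python
# Toy bilinear demosaicing of an RGGB Bayer mosaic - a deliberate
# simplification to show how missing colour samples can be filled in from
# neighbouring pixels. Assumes NumPy; edge pixels come out slightly dark
# because of the zero padding.
import numpy as np

def demosaic_bilinear(mosaic):
    """mosaic: 2-D array of sensor values in an RGGB Bayer layout.
    Returns an (H, W, 3) RGB image with the missing samples interpolated."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)

    # Which photosites carry which colour in an RGGB layout.
    masks[0::2, 0::2, 0] = True   # red
    masks[0::2, 1::2, 1] = True   # green (on red rows)
    masks[1::2, 0::2, 1] = True   # green (on blue rows)
    masks[1::2, 1::2, 2] = True   # blue

    # Interpolation weights: red/blue are known at 1 site in 4, green at 1 in 2.
    kernel_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    kernel_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    for c, kernel in ((0, kernel_rb), (1, kernel_g), (2, kernel_rb)):
        channel = np.where(masks[:, :, c], mosaic, 0.0)
        padded = np.pad(channel, 1, mode="constant")
        out = np.zeros((h, w))
        # 3x3 weighted sum so each missing sample becomes an average of its
        # known neighbours (known samples pass through unchanged).
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
        rgb[:, :, c] = out
    return rgb

# Example: a tiny synthetic 8x8 mosaic of 12-bit values.
rgb = demosaic_bilinear(np.random.randint(0, 4096, (8, 8)).astype(float))
print(rgb.shape)  # (8, 8, 3)
```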

I mentioned earlier that each sensor site (or pixel) can only capture a finite amount of light, expressed as a number of levels.

The number of bits used in the analogue to digital conversion process determines how many tonal levels can be recorded, with 8 bits representing 256 levels, 12 bits 4,096 levels, 14 bits 16,384 levels and 16 bits 65,536 levels. It’s important to understand that a sensor records light in a linear fashion, so reducing the amount of light falling on the sensor by one stop (EV) will halve the number of levels of light that can be recorded. Equally, if the light is doubled, eventually the pixel will be full and the resulting effect is blown highlights.

Similarly, as the light levels drop, an effect known as posterisation (or colour banding) becomes visible, particularly in areas such as shadow detail, or the sky.

Even a few stops can make a huge difference to the number of light levels that the sensor can determine and so it is generally recommended to expose as far to the right of the histogram as possible without clipping (I’ll describe the histogram in a follow-up post). Because human vision is not linear, during raw conversion a tonal curve (including a gamma correction) is applied to the image to make it more pleasing on the eye.

The table below shows the difference between an image recorded as an 8-bit (gamma encoded) JPEG and others recorded as a 12-bit or 14-bit (linear encoded) raw file:

Stop                         8-bit JPEG (gamma-encoded)   12-bit raw (linear)   14-bit raw (linear)
1st stop (brightest tones)   69 levels                    2048 levels           8192 levels
2nd stop (bright tones)      50 levels                    1024 levels           4096 levels
3rd stop (mid-tones)         37 levels                    512 levels            2048 levels
4th stop (dark tones)        27 levels                    256 levels            1024 levels
5th stop (darkest tones)     20 levels                    128 levels            512 levels

Even though the logarithmic scale used for the gamma-encoded image does not fall off as sharply as the linear scale for the raw image, the overall number of discernible light levels is reduced in the JPEG (partly due to the 8-bit nature of the file format), whereas the raw files retain more detail, allowing for some exposure compensation to be applied post-capture. In addition, due to the lossy compression that is inherent with a JPEG, further image quality is sacrificed each time the image is saved.
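
The raw (linear) columns in that table are simply repeated halving – each stop down contains half as many levels as the stop above. Here’s the arithmetic as a quick sketch (my own illustration; the gamma-encoded JPEG column follows a different curve, so it isn’t reproduced here):

```python
# The linear-encoded columns in the table above are just repeated halving:
# the brightest stop uses half of all available levels, the next stop half of
# what remains, and so on.
def levels_per_stop(bits, stops=5):
    total = 2 ** bits  # e.g. 4096 levels for a 12-bit file
    return [total // (2 ** stop) for stop in range(1, stops + 1)]

print("12-bit raw:", levels_per_stop(12))  # [2048, 1024, 512, 256, 128]
print("14-bit raw:", levels_per_stop(14))  # [8192, 4096, 2048, 1024, 512]
```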

Colour spaces

Colour spaces are another consideration, with each space defining the range of visible colours (or gamut) that may be represented in an image. Which colour space is “best” is often a personal consideration but it’s important to note that we can neither see, nor print all of the available colours; however, by storing the maximum possible amount of information, there is more scope for making changes later without degrading image quality. For print work, Adobe RGB may be a good colour space but for on-screen work (where the display device has a smaller gamut), sRGB may be more appropriate. I have now switched the default setting in my Nikon D70 to Adobe RGB 1998 but in reality it makes very little difference as the colour space can be altered later.

JPEG or raw?

For a JPEG image, the following process is applied to every image by the camera (a rough sketch of some of these steps follows the list):

  1. RGB information from sensor is converted to colour data.
  2. Tone curve applied to convert linear-encoded data to gamma encoding.
  3. White balance set.
  4. Contrast adjusted.
  5. Colour saturation increased.
  6. Sharpening applied.
  7. 12/14-bit native file compressed to lossy 8-bit JPEG.
  8. Image is recorded to memory card.
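
Here’s the promised sketch of a few of those steps (white balance, the tone curve and the reduction to 8 bits) applied to linear sensor data – my own simplification, assuming NumPy, with placeholder white-balance gains and a generic gamma of 1/2.2 rather than any camera’s real parameters; note that white balance is applied while the data is still linear, so it appears before the tone curve here:

```python
# Deliberately simplified sketch of steps 3, 2 and 7 from the list above,
# applied to linear 12-bit sensor data. The white-balance gains and the gamma
# value are illustrative placeholders, not any camera's real parameters.
import numpy as np

def develop_to_jpeg_values(linear_rgb_12bit):
    """linear_rgb_12bit: (H, W, 3) array of linear sensor values, 0..4095."""
    x = linear_rgb_12bit / 4095.0  # normalise to the 0..1 range

    # White balance - scale each channel by a gain (placeholder values).
    wb_gains = np.array([2.0, 1.0, 1.5])  # R, G, B multipliers
    x = np.clip(x * wb_gains, 0.0, 1.0)

    # Tone curve - a plain gamma correction standing in for the camera's own
    # curve (sRGB-style encoding uses a gamma of roughly 1/2.2).
    x = x ** (1 / 2.2)

    # Reduce to 8 bits per channel (256 levels) ready for JPEG compression.
    return np.round(x * 255).astype(np.uint8)
```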

By shooting raw, no data is lost from the sensor and a better tonal quality is retained. Images can be reprocessed years later for better (or alternative) results; however some raw processing software will be required.

Adobe Camera Raw is a free download and allows all of the adjustments that a camera would normally make to be applied to an image (and more), under the control of the photographer. It integrates with other Adobe applications (e.g. Bridge and Photoshop) for image organisation and editing. At first, the interface can be daunting – but the controls are organised in order of significance (left to right and top to bottom) and many may be ignored. Adobe’s white paper on understanding Adobe Photoshop Camera Raw 4 is also worth a read.

Adobe Camera Raw 4.0

There is one significant drawback with raw image capture though – even though the sensor data is captured in the same way, most camera manufacturers (particularly Canon and Nikon) record the data using a proprietary format. This is why software such as Adobe Camera Raw is constantly updated for new cameras; however it’s also a risk that one day those raw images will become obsolete. There is a potential solution, using Adobe’s Digital Negative (.DNG) format but adoption by manufacturers has been slow and, for many photographers, conversion from a proprietary raw format to DNG is an extra step in the workflow.

Working with raw images

Andy gave some good advice for working with raw images and I’ve added a few tips of my own to Andy’s advice:

  • At the capture stage:
    • Just because you can edit later, don’t rely on it – take your best shots with proper settings – particularly focus and exposure.
    • Get the brightest possible shot without clipping – use the camera’s histogram function and expose to the right.
    • Check the shot in-camera by zooming in on parts of the image on the LCD.
  • Back on the computer, organise the files:
    • Download images to the PC.
    • Sort, organise, tag, rank and caption as desired.
    • Add metadata.
    • Back up the images to a separate storage location.
    • Automate repetitive tasks (e.g. renaming and captioning).
  • Process the raw images:
    • Process for maximum quality.
    • Adjust colour balance.
    • Crop, straighten and sharpen (if required – and only if no more editing is to be performed).
    • Save converted files at 16-bit and back up to offline storage.
  • Edit (if required):
    • Apply any image enhancements, clean up flaws, etc.
    • Perform any creative enhancements.
    • Apply batch actions.
    • Prepare for output (printing or web) – if sharpening is required, this should be the last action on the image before saving.
  • Archive:
    • Establish and implement an archival plan.
    • Save files on external devices and media for easy access and retrieval – consider off site storage.

Further reading

Further information may be found in the following articles:

Credits

Based on a presentation by Andy Gailer. The Bayer filter images used in this post are licensed under the GNU free documentation license and the colour space diagram is ©Jeff Schewe, used with permission (images from Wikipedia).

Deleting multiple RSS feeds in Outlook 2007

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I have two mailboxes at work and one is permanently diverted to the other – every now and again I have to go in and clear it out (as a copy of every inbound message is left in the first mailbox) and it looks like I should do it more often (I was within a few KB of having inbound mail bounced until I logged in this evening…).

I wondered what was filling my Inbox so I checked out the folder sizes and found that the biggest culprits were RSS feeds from Outlook 2007’s integration with the Internet Explorer (IE) 7 RSS reader (even though the computer still runs IE6 in order to access some legacy web applications – so there is no Outlook to IE integration, as described in Microsoft knowledge base article 920234 – the mailbox has been accessed previously on a machine with Outlook 2007 and IE7 installed and, as Tim Anderson noted a couple of years back, Outlook copies feed contents from the local machine to the mailbox and then keeps it synchronised).

As I read my feeds in Google Reader, I decided to remove them from Outlook – but how (other than individually)? Thanks to Jaap Steinvoorte’s post on deleting RSS feeds in Outlook 2007, I found the answer in the Outlook Account Settings, on the RSS Feeds tab, where there is a big remove button. The same approach can be applied to SharePoint Lists, Internet Calendars and Public Calendars.

Unfortunately, the cached content is still retained and, as RSS Feeds is a special folder, it can’t be deleted… unless you use a downlevel client, as Daniel Moth suggests – I used OWA on an Exchange Server 2003 server.

Sure, deleting the entire folder is overkill but it seems to be the only way other than inducing carpal tunnel syndrome through repetitive mouse/keyboard clicks and the end result is a considerably less full mailbox.

Windows Quick Launch toolbar tips

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

After last night’s post on Live Mesh, which included a screenshot of my desktop, Garry Martin dropped me a line to ask about the large icons in my Quick Launch Toolbar.

I can’t claim credit for discovering this but it’s a tip I heard Paul Thurrott describe on a Windows Weekly podcast a while back and it doesn’t seem to be very well known.

Large icons in the Quick Launch Toolbar

First of all, I changed the height of my Taskbar by clicking and dragging on the top edge. Next, I unlocked the Taskbar and arranged the toolbars so that the Quick Launch toolbar is visible above the row of taskbar buttons. Finally, I right-clicked on the divider to the left of the Quick Launch toolbar and selected large icons from the View menu.

Also, as Paul mentions in his More Windows Vista Tips article, the Windows key and number keys can be used together to launch the applications that are linked from the Quick Launch bar (the first 10 of them anyway) – for example, in the screenshot above, Windows key+5 would launch Outlook and Windows key+0 would launch Notepad.

My system is running Windows Server 2008 but this tip also applies to Windows Vista, Windows Server 2003 and Windows XP (I didn’t try any earlier versions of Windows).

Windows Management Tools from Quest

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I spent most of yesterday with Quest Software, as they explained the various tools that they have that can help to expand and extend off-the-shelf infrastructure products from companies like Microsoft, Oracle and Sun.

If you’ve performed a large infrastructure migration or implementation, the chances are that you’ve come across Quest at some point (and I knew they had grown rapidly in recent years) but I hadn’t realised just how many tools they had available.

We spent 4 hours talking about Windows Management tools (without even touching on Application Management or Database Management) so clearly there is too much there for a blog post but it’s worth taking a look at their website some time.

In case you hadn’t already seen where Microsoft is heading…

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

For 33 years, Microsoft’s vision has been “A computer on every desk and in every home” [“running Microsoft software”]. But that was the vision with Bill Gates in charge and he is now quoted as saying:

“We’ve really achieved the ideal of what I wanted Microsoft to become”

[Bill Gates, June 2008]

Now that Microsoft is under new management the vision has changed. Microsoft’s Chief Operating Officer, Kevin Turner, outlined the new vision in his speech at the recent virtualisation strategy day:

“Create experiences that combine the magic of software with the power of Internet services across a world of devices.”

[Kevin Turner, 8 September 2008]

In the same presentation, Turner spoke of the $8bn that the company will invest in research and development this year, across “4 pillars of innovation breadth”:

  • Desktop: Windows Vista; Internet Explorer; Desktop Optimisation Pack; Microsoft Office System.
  • Enterprise: SQL Server 2008 enterprise database; Windows Server 2008 infrastructure; Visual Studio 2008 development lifecycle; BizTalk business process management; System Center management; Dynamics ERP/CRM; Exchange and OCS unified communications; SharePoint portal, workflow and document management; PerformancePoint business intelligence.
  • Entertainment and devices: Xbox 360; Zune; Mediaroom; Windows Mobile; Games; Surface.
  • Software plus Services: Microsoft Online (business productivity suite – Exchange Server, SharePoint Server, Live Meeting, Communications Server – and Dynamics CRM Online); Live Services (Xbox Live, Live Search, Windows Live, Office Live, Live Mesh).

There are two main points to note in this strategy: enterprise is the fastest growing area in terms of revenue and profit; and the deliberate split between enterprise and consumer online services.

As I outlined in a recent post looking at software as a service (SaaS) vs. software plus services (S+S), there is a balance between on premise computing and cloud computing. Microsoft sees three models, with customer choice at the core (and expects most customers to select a hybrid of two or three models, rather than the fully-hosted SaaS model):

  • Customer hosted, supported and managed.
  • Partner-led, using partner expertise.
  • Microsoft-hosted.

One more key point… last year, Microsoft SharePoint Server became the fastest growing server product in the history of the company and Turner thinks that virtualisation could grow even faster. Only time will tell.