MVP = Mark’s Very Pleased

I’ve just heard that my Microsoft Most Valuable Professional (MVP) Award nomination for 2009 was successful and I can now say I’m an MVP for Virtual Machine technology.

Thank you to everyone who reads, links to and comments on this blog as, without your support, I wouldn’t write this stuff and therefore wouldn’t be getting the recognition from Microsoft that I have.

For those of you who skip over the Microsoft-focused content, don’t worry – it doesn’t mean that it will all be Microsoft from now on – I’ll still continue to write about whatever flavour of technology I find interesting at any given time, and I’ll still be trying to remain objective!

Microsoft Virtualization: part 1 (introduction)

Sitting at Microsoft’s London offices waiting for this evening’s Microsoft Virtualization User Group (MVUG) event to start reminded me that I still haven’t written up my notes on the various technologies that make up Microsoft’s virtualisation portfolio. It’s been three months since I spent a couple of days in Reading learning about this, and tonight was a great recap – along with some news (some of which I can’t write about just yet – wait for PDC is all I can say! – and some of which I can).

A few weeks back, in my post on virtualisation as an infrastructure architecture consideration, I highlighted that Microsoft’s virtualisation strategy is much broader than just server virtualisation or virtual desktop infrastructure, and introduced the following diagram, based on one which appears in many Microsoft slide decks:

Microsoft view of virtualisation

At the heart of the strategy is System Center and, whereas VMware will highlight a number of technical weaknesses in the Microsoft products (some of which are of little consequence in reality), this is where Microsoft’s strength lies – especially with System Center Virtual Machine Manager (SCVMM) 2008 just about to be released (more on that soon) – as management is absolutely critical to successful implementation of a virtualisation strategy.

Over the next few days I’ll discuss the various components included in this diagram and highlight some of the key points about the various streams: server; desktop; application; and presentation virtualisation – as well as how they are all brought together in System Center.

In case you hadn’t already seen where Microsoft is heading…

For 33 years, Microsoft’s vision has been “A computer on every desk and in every home” [“running Microsoft software”]. But that was the vision with Bill Gates in charge and he is now quoted as saying:

“We’ve really achieved the ideal of what I wanted Microsoft to become”

[Bill Gates, June 2008]

Now that Microsoft is under new management the vision has changed. Microsoft’s Chief Operating Officer, Kevin Turner, outlined the new vision in his speech at the recent virtualisation strategy day:

“Create experiences that combine the magic of software with the power of Internet services across a world of devices.”

[Kevin Turner, 8 September 2008]

In the same presentation, Turner spoke of the $8bn that the company will invest in research and development this year, across “4 pillars of innovation breadth”:

  • Desktop: Windows Vista; Internet Explorer; Desktop Optimisation Pack; Microsoft Office System.
  • Enterprise: SQL Server 2008 enterprise database; Windows Server 2008 infrastructure; Visual Studio 2008 development lifecycle; BizTalk business process management; System Center management; Dynamics ERP/CRM; Exchange and OCS unified communications; SharePoint portal, workflow and document management; PerformancePoint business intelligence.
  • Entertainment and devices: Xbox 360; Zune; Mediaroom; Windows Mobile; Games; Surface.
  • Software plus Services: Microsoft Online (business productivity suite – Exchange Server, SharePoint Server, Live Meeting, Communications Server – and Dynamics CRM Online); Live Services (Xbox Live, Live Search, Windows Live, Office Live, Live Mesh).

There are two main points to note in this strategy: that enterprise is the fastest-growing area in terms of revenue and profit; and that there is a deliberate split between enterprise and consumer online services.

As I outlined in a recent post looking at software as a service (SaaS) vs. software plus services (S+S), there is a balance between on-premise computing and cloud computing. Microsoft sees three models, with customer choice at the core (and expects most customers to select a hybrid of two or three models, rather than the fully-hosted SaaS model):

  • Customer hosted, supported and managed.
  • Partner-led, using partner expertise.
  • Microsoft-hosted.

One more key point… last year, Microsoft SharePoint Server became the fastest growing server product in the history of the company and Turner thinks that virtualisation could grow even faster. Only time will tell.

Microsoft infrastructure architecture considerations: part 7 (data centre consolidation)

Over the last few days, I’ve written a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series. Just to summarise, the posts so far have been:

  1. Introduction.
  2. Remote offices.
  3. Controlling network access.
  4. Virtualisation.
  5. Security.
  6. High availability.

In this final infrastructure architecture post, I’ll outline the steps involved in building an infrastructure for data centre consolidation. In this example the infrastructure is a large cluster of servers to run a virtual infrastructure; however, many of the considerations would be the same for a non-clustered or a physical solution:

  1. The first step is to build a balanced system – whilst it’s inevitable that there will be a bottleneck at some point in the architecture, by designing a balanced system the workloads can be mixed to even out the overall demand – at least, that’s the intention. Using commodity hardware, it should be possible to provide a balance between cost and performance using the following configuration for the physical cluster nodes:
    • 4-way quad core (16 core in total) Intel Xeon or AMD Opteron-based server (with Intel VT/AMD-V and NX/XD processor support).
    • 2GB RAM per processor core minimum (4GB per core recommended).
    • 4Gbps Fibre Channel storage solution.
    • Gigabit Ethernet NIC (onboard) for virtual machine management and migration/cluster heartbeat.
    • Quad-port gigabit Ethernet PCI Express NIC for virtual machine access to the network.
    • Windows Server 2008 x64 Enterprise or Datacenter edition (Server Core installation).
  2. Ensure that Active Directory is available (at least one physical DC is required in order to get the virtualised infrastructure up and running).
  3. Build the physical servers that will provide the virtualisation farm (16 servers).
  4. Configure the SAN storage.
  5. Provision the physical servers using System Center Configuration Manager (again, a physical server will be required until the cluster is operational) – the servers should be configured as a 14 active/2 passive node failover cluster.
  6. Configure System Center Virtual Machine Manager for virtual machine provisioning, including the necessary PowerShell scripts and the virtual machine repository (a provisioning sketch follows this list).
  7. Configure System Center Operations Manager (for health monitoring – both physical and virtual).
  8. Configure System Center Data Protection Manager for virtual machine snapshots (i.e. use snapshots for backup).
  9. Replicate snapshots to another site within the SAN infrastructure (i.e. provide location redundancy).
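
To make step 6 a little more concrete, here’s a minimal sketch of the kind of script that SCVMM provisioning typically involves. The server, host, template and path names are purely illustrative assumptions, and the cmdlets are those from the SCVMM 2008 PowerShell snap-in as I recall them – check the exact names and parameters with Get-Command in your own environment:

    # Illustrative SCVMM 2008 provisioning sketch - all names and paths are assumptions
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"

    # Connect to the (hypothetical) VMM server that manages the cluster
    Get-VMMServer -ComputerName "scvmm01.example.com"

    # Pick a template from the virtual machine repository (library)
    $template = Get-Template | Where-Object { $_.Name -eq "W2K8-Standard-Template" }

    # Choose a cluster node to host the new virtual machine
    $vmHost = Get-VMHost -ComputerName "hv-node01.example.com"

    # Create the virtual machine from the template on the chosen node
    New-VM -Template $template -Name "APP-VM01" -VMHost $vmHost -Path "C:\VMs"

In practice, scripts like this would be wrapped in a loop fed from a request queue or a simple input file, so that provisioning becomes repeatable rather than a manual task.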

This is a pretty high-level view, but it shows the basic steps in order to create a large failover cluster with the potential to run many virtualised workloads. The basic principles are there and the solution can be scaled down or out as required to meet the needs of a particular organisation.

The MCS Talks series is still running (and there are additional resources to complement the first session on infrastructure architecture). I also have some notes from the second session on core infrastructure that are ready to share so, if you’re finding this information useful, make sure you have subscribed to the RSS feed!

Microsoft infrastructure architecture considerations: part 6 (high availability)

In this instalment of the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, I’ll look at some of the architecture considerations relating to providing high availability through redundancy in the infrastructure.

The whole point of high availability is ensuring that there is no single point of failure. In addition to hardware redundancy (RAID on storage, multiple power supplies, redundant NICs, etc.) consideration should be given to operating system or application-level redundancy.

For some applications, redundancy is inherent:

  • Active Directory uses a multiple-master replicated database.
  • Exchange Server 2007 offers various replication options (local, clustered or standby continuous replication).
  • SQL Server 2008 has enhanced database mirroring.

Other applications may be more suited to the provision of redundancy in the infrastructure – either using failover clusters (e.g. for SQL Server 2005, file and print servers, virtualisation hosts, etc.) or with network load balancing (NLB) clusters (e.g. ISA Server, Internet Information Services, Windows SharePoint Services, Office Communications Server, read-only SQL Server, etc.) – in many cases the choice is made by the application vendor as some applications (e.g. ISA Server, SCOM and SCCM) are not cluster-friendly.
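
As a rough illustration of the NLB side of this, the sketch below uses the NetworkLoadBalancingClusters PowerShell module that ships with Windows Server 2008 R2 and later (on earlier releases NLB is configured with nlb.exe or the GUI); the host names, interface name and cluster address are assumptions:

    # Illustrative NLB configuration - names and addresses are assumptions
    Import-Module NetworkLoadBalancingClusters

    # Create a load-balanced cluster for a web tier on the local node's "LAN" interface
    New-NlbCluster -InterfaceName "LAN" -ClusterName "web-nlb" -ClusterPrimaryIP 192.168.1.50

    # Join a second node to the cluster
    Add-NlbClusterNode -NewNodeName "web02" -NewNodeInterface "LAN"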

Failover clustering (the new name for Microsoft Cluster Service) is greatly improved in Windows Server 2008, with simplified support (no more cluster hardware compatibility list – replaced by a cluster validation tool, although the hardware is still required to be certified for Windows Server 2008), support for more nodes (the maximum is up from 8 to 16), support for multiple-subnet geoclusters and IPv6, as well as new management tools and enhanced security.
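
To show how the validation tool fits into a cluster build, here’s a minimal sketch using the FailoverClusters PowerShell module (introduced with Windows Server 2008 R2 – on Windows Server 2008 RTM the equivalent steps use the Validate a Configuration wizard or cluster.exe); node names and the administrative IP address are assumptions:

    # Illustrative failover cluster build - node names and addresses are assumptions
    Import-Module FailoverClusters

    # Run the validation tests (this replaces the old cluster hardware compatibility list)
    Test-Cluster -Node "node01", "node02"

    # If validation passes, create the cluster with a static administrative IP address
    New-Cluster -Name "sql-cluster" -Node "node01", "node02" -StaticAddress 192.168.1.60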

In the final post in this series, I’ll take a look at how to build an infrastructure for data centre consolidation.

Microsoft infrastructure architecture considerations: part 5 (security)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the infrastructure architecture considerations relating to security.

The main security challenges which organisations are facing today include: management of access rights; provisioning and de-provisioning (with various groups of users – internal, partners and external); protecting the network boundaries (as there is a greater level of collaboration between organisations); and controlling access to confidential data.

Most organisations today need some level of integration with partners and the traditional approach has been one of:

  • NT Trusts (rarely used externally) – not granular enough.
  • Shadow accounts with matching usernames and passwords – difficult to administer.
  • Proxy accounts shared by multiple users – with no accountability and a consequential lack of security.

Federated rights management is a key piece of the “cloud computing” model and allows for two organisations to trust one another (cf. an NT trust) but without the associated overheads – and with some granularity. The federated trust is loosely coupled – meaning that there is no need for a direct mapping between users and resources – instead an account federation server exists on one side of the trust and a resource federation server exists on the other.

As information is shared with customers and partners, traditional location-based methods of controlling information (firewalls, access control lists and encryption) have become ineffective. Users e-mail documents back and forth, paper copies are created as documents are printed, online data storage has become available and portable data storage devices have become less expensive and more common with increasing capacities. This makes it difficult to set a consistent policy for information management and then to manage and audit access. It’s almost inevitable that there will be some information loss or leakage.

(Digital) rights management is one solution – most people are familiar with DRM on music and video files from the Internet and the same principles may be applied to IT infrastructure. Making use of 128-bit encryption together with policies for access and usage rights, rights management provides persistent protection to control access across the information lifecycle. Policies are embedded within the document (e.g. for the ability to print, view, edit, or forward a document – or even for its expiration) and access is only provided to trusted identities. It seems strange to me that we are all so used to the protection of assets with perceived worth to consumers but that commercial and government documentation is so often left unsecured.
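
As a purely conceptual sketch (this is not the AD RMS object model or API), the idea of a policy that travels with the document and is evaluated at the point of access looks something like this:

    # Conceptual illustration only - not the AD RMS API
    $policy = @{
        AllowView    = $true
        AllowPrint   = $false
        AllowForward = $false
        Expires      = (Get-Date).AddDays(30)       # the policy can expire the content
        TrustedUsers = @("alice@example.com")       # access limited to trusted identities
    }

    function Test-DocumentAccess($policy, $user, $action) {
        # Grant access only to a trusted identity, before expiry,
        # and only for rights the embedded policy explicitly allows
        if ($policy.TrustedUsers -notcontains $user) { return $false }
        if ((Get-Date) -gt $policy.Expires) { return $false }
        return [bool]$policy["Allow$action"]
    }

    Test-DocumentAccess $policy "alice@example.com" "Print"   # returns False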

Of course, security should be all-pervasive, and this post has just scratched the surface, looking at a couple of challenges faced by organisations as the network boundaries are eroded by increased collaboration. In the next post of this series, I’ll take a look at some of the infrastructure architecture considerations for providing high availability solutions.

Microsoft infrastructure architecture considerations: part 4 (virtualisation)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the architectural considerations for using virtualisation technologies.

Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.

If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements then virtualisation may help – but if they want to run applications that rely on legacy operating system releases (like Windows NT 4.0) then the real problem is one of support – the operating system (and probably the application too) is unsupported, regardless of whether or not you can physically touch the underlying platform!

Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against a situation whereby, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as an extra server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.

Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.

Microsoft view of virtualisation

Perhaps the area where Microsoft is weakest at the moment (relying on partners like Citrix and Quest to provide a desktop broker service) is desktop virtualisation. Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping those of desktops, it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a disk-less PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).

Presentation virtualisation is also a niche technology, as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well where a large volume of data needs to be accessed in a central database, as the remote desktop client is local to the data rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities) and, using new features in Windows Server 2008 (or third-party products on older releases of Windows Server), this may be further enhanced with gateways for RPC over HTTPS access (avoiding the need for a full VPN solution) and web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).

The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.

The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).

Finally, there is host (or server) virtualisation – the most common form of virtualisation, but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least a part of their infrastructure, given a green-field scenario.

The main problems which host virtualisation can address are:

  • Optimising server investments by consolidating roles (driving up utilisation).
  • Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
  • Dynamic data centre.
  • Test and development.

Most 64-bit editions of Windows Server 2008 have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however, so if there are legacy NT-based systems to virtualise then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).
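
For what it’s worth, Windows Server 2008 Hyper-V exposes its management interface through WMI rather than a dedicated PowerShell module, so a quick inventory of the virtual machines on a host can be taken with something like the sketch below (the Where-Object filter simply excludes the parent partition):

    # List the virtual machines on a Windows Server 2008 Hyper-V host via WMI
    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
        Where-Object { $_.Caption -eq "Virtual Machine" } |
        Select-Object ElementName, EnabledState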

In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.

Photosynth

A few months back, I heard something about Photosynth – a new method of modelling scenes using photographic images to build up a 3D representation – and yesterday I got the chance to have a look at it myself. At first, I just didn’t get it, but after having seen a few synths, I now think that this is really cool technology with a lot of potential for real world applications.

It’s difficult to describe Photosynth but it’s essentially a collage of two-dimensional images used together to create a larger canvas through which it’s possible to navigate in three dimensions (four actually). It began life as a research project on “photo tourism” at the University of Washington, after which Microsoft Research and Windows Live Labs took it on to produce Photosynth, using technology gained with Microsoft’s acquisition of Seadragon Software. The first live version of Photosynth was launched yesterday.

Clearly this is not a straightforward application – it’s taken two years of development with an average team size of 10 people just to bring the original research project to the stage it’s at today – so I’ll quote the Photosynth website for a description of how it works:

“Photosynth is a potent mixture of two independent breakthroughs: the ability to reconstruct the scene or object from a bunch of flat photographs, and the technology to bring that experience to virtually anyone over the Internet.

Using techniques from the field of computer vision, Photosynth examines images for similarities to each other and uses that information to estimate the shape of the subject and the vantage point the photos were taken from. With this information, we recreate the space and use it as a canvas to display and navigate through the photos.

Providing that experience requires viewing a LOT of data though—much more than you generally get at any one time by surfing someone’s photo album on the web. That’s where our Seadragon™ technology comes in: delivering just the pixels you need, exactly when you need them. It allows you to browse through dozens of 5, 10, or 100(!) mega-pixel photos effortlessly, without fiddling with a bunch of thumbnails and waiting around for everything to load.”

I decided to try Photosynth out for myself and the first thing I found was that I needed to install some software. On my Windows computer it installed a local application to create synths and an ActiveX control to view them. Creating a synth from 18 sample images of my home office desk took just a few minutes (each of the images I supplied was a 6.1 mega-pixel JPG taken on my Nikon D70) and I was also able to provide copyright/Creative Commons licensing information for the images in the synth:

Once it had uploaded to the Photosynth site, I could add a description, view other people’s comments, get the links to e-mail/embed the synth, and provide location information. I have to say that I am truly amazed how well it worked. Navigate around to the webcam above my laptop and see how you can go around it and see the magnet on the board behind!

It’s worth pointing out that I have not read the Photosynth Photography Guide yet – this was just a set of test photos looking at different things on and around the desk. If you view the image in grid view you can see that there are three images it didn’t know what to do with – I suspect that if I had supplied more images around those areas then they could have worked just fine.

You may also notice a lack of the usual office artifacts (family photos) etc. – they were removed before I created the synth, for privacy reasons, at the request of one of my family members.

My desk is probably not the best example of this technology, so here’s another synth that is pretty cool:

In this synth, called Climbing Aegialis (by J.P.Peter) you can see a climber making his way up the rock face – not just in three dimensions – but in four. Using the . and , keys it’s possible to navigate through the images according to the order in which they were taken.

Potting Shed is another good example – taken by Rick Szeliski, a member of the team that put this product together:

Hover over the image to see a doughnut-shaped ring called a halo and click this to navigate around the image in 3D. If you use the normal navigation controls (including in/out with the mouse scrollwheel) it is possible to go through the door and enter the potting shed for a look inside!

There are also some tiny pixel-sized pin-pricks visible as you navigate around the image. These are the points that were identified whilst the 3D matching algorithm was running. They can be toggled on and off with the p key and, in this example, they are so dense in places that the image can actually be made out from just the pixel cloud.

Now that the first release of Photosynth is up and running, the development team will transition from Windows Live Labs into Microsoft’s MSN business unit where they will work on using the technology for real and integrating it with other services – like Virtual Earth, where synths could be displayed to illustrate a particular point on a map. Aside from photo tourism, other potential applications for the technology include real estate, art and science – anywhere where visualising an item in three or four dimensions could be of use.

The current version of Photosynth is available without charge to anyone with a Windows LiveID and the service includes 20GB of space for images. The synths themselves can take up quite a bit of space and, at least in this first version of the software, all synths are uploaded (a broadband Internet connection will be required). It’s also worth noting that all synths are public so photos will be visible to everyone on the Internet.

If you couldn’t see the synths I embedded in this post, then you need to install an ActiveX control (Internet Explorer) or plugin (Firefox). Direct3D support is also required so Photosynth is only available for Windows (XP or later) at the moment but I’m told that a Mac version is on the way – even Microsoft appreciates that many of the people who will be interested in this technology use a Mac. On the hardware side an integrated graphics card is fine but the number of images in a synth will be limited by the amount of available RAM.

Finally, I wanted to write this post yesterday but, following the launch, the Photosynth website went into meltdown – or as Microsoft described it “The Photosynth site is a little overwhelmed just now” – clearly there is a lot of interest in this technology. For more news on the development of Photosynth, check out the Photosynth blog.

Microsoft infrastructure architecture considerations: part 3 (controlling network access)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post, I’ll look at some of the considerations for controlling access to the network.

Although network access control (NAC) has been around for a few years now, Microsoft’s network access protection (NAP) is new in Windows Server 2008 (previous quarantine controls were limited to VPN connections).

It’s important to understand that NAC/NAP are not security solutions but are concerned with network health – assessing an endpoint and comparing its state with a defined policy, then removing access for non-compliant devices until they have been remediated (i.e. until the policy has been enforced).

The real question as to whether to implement NAC/NAP is whether or not non-compliance represents a business problem.

Assuming that NAP is to be implemented, then there may be different policies required for different groups of users – for example internal staff, contractors and visitors – and each of these might require a different level of enforcement; however, if the policy is to be applied, the enforcement options are:

  • DHCP – easy to implement but also easy to avoid by using a static IP address. It’s also necessary to consider the healthcheck frequency as it relates to the DHCP lease renewal time.
  • VPN – more secure but relies on the Windows Server 2008 RRAS VPN so may require a third-party VPN solution to be replaced. In any case, full-VPN access is counter to industry trends as alternative solutions are increasingly used.
  • 802.1x – requires a complex design to support all types of network user and not all switches support dynamic VLANs.
  • IPSec – the recommended solution – built into Windows, works with any switch, router or access point, provides strong authentication and (optionally) encryption. In addition, unhealthy clients are truly isolated (i.e. not just placed in a VLAN with other clients to potentially affect or be affected by other machines). The downside is that NAP enforcement with IPSec requires computers to be domain joined (so will not help with visitors or contractors PCs) and is fairly complex from an operational perspective, requiring implementation of the health registration authority (HRA) role and a PKI solution.
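
Whichever enforcement method is selected, the client-side NAP agent can be inspected from PowerShell (or a command prompt); the commands below use standard netsh nap syntax but, as ever, verify them against your own build:

    # Inspect NAP enforcement client state on a Windows Vista / Server 2008 client
    Start-Service -Name napagent                # the NAP agent service must be running

    netsh nap client show configuration         # lists the enforcement clients and their IDs
    netsh nap client show state                 # shows the current health/compliance state

    # An individual enforcement client (e.g. the IPsec relying party) is then enabled with
    # "netsh nap client set enforcement id = <id> admin = enable", using the ID reported
    # by "show configuration" above rather than a hard-coded value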

In the next post in this series, I’ll take a look at some of the architectural considerations for using virtualisation technologies within the infrastructure.

Microsoft improves support for virtualisation – unless you’re using a VMware product

Software licensing always seems to be one step behind the technology. In the past, I’ve heard Microsoft comment that to virtualise one of their desktop operating systems (e.g. using VMware Virtual Desktop Infrastructure) was in breach of the associated licensing agreements – then they introduced a number of licensing changes – including the Vista Enterprise Centralised Desktop (VECD) – to provide a way forward (at least for those customers with an Enterprise Agreement). Similarly, I’ve heard Microsoft employees state that using Thinstall (now owned by VMware and rebranded as ThinApp) to run multiple copies of Internet Explorer is in breach of the EULA (the cynic in me says that I’m sure they would have fewer concerns if the technology involved was App-V). A few years back, even offline virtual machine images needed to be licensed – then Microsoft updated their Windows Server licensing to include virtualisation rights, but it was never so clear-cut for applications, with complex rules around the reassignment of licenses (e.g. in a disaster recovery failover scenario). Yesterday, Microsoft took another step to bring licensing in line with customer requirements when they waived the previous 90-day reassignment rule for a number of server applications, allowing customers to reassign licenses from one server to another within a server farm as frequently as required (it’s difficult to run a dynamic data centre if the licenses are not portable!).

It’s important to note that Microsoft’s licensing policies are totally agnostic of the virtualisation product in use – but support is an entirely different matter.

Microsoft also updated their support policy for Microsoft software running on a non-Microsoft virtualisation platform (see Microsoft knowledge base article 897615), with an increased number of Microsoft applications supported on Windows Server 2008 Hyper-V, Microsoft Hyper-V Server (not yet a released product) or any third-party validated virtualisation platform – based on the Server Virtualization Validation Program (SVVP). Other vendors taking part in the SVVP include Cisco, Citrix, Novell, Sun Microsystems and Virtual Iron… but there’s a rather large virtualisation vendor who seems to be missing from the party…

[Update: VMware joined the party… they were just a little late]