Microsoft Hyper-V: A reminder of where we’re at


Earlier this week I saw a tweet from the MIX 2011 conference that highlighted how Microsoft’s Office 365 software as a service platform runs entirely on their Hyper-V hypervisor.

There are those (generally those who have a big investment in VMware technologies) who say Microsoft’s hypervisor lacks the features to make it suitable for use in the enterprise. I don’t know how much bigger you have to get than Office 365, but the choice of hypervisor is becoming less and less relevant as we move away from infrastructure and concentrate more on the platform.

Even so, now that Hyper-V has reached the magical version 3 milestone (at which people generally start to accept Microsoft products) I thought it was worth a post to look back at where Hyper-V has come from, and where it’s at now:

Looking at some of the technical features (there's a short scripted example after this list):

  • Dynamic memory requires Windows Server 2003 SP2 or later (and is not yet supported for Linux guests). It’s important to understand the difference between over-subscription and over-commitment.
  • Performance differences between hypervisors are now so small that performance is no longer a meaningful differentiator.
  • Hyper-V uses Windows clustering for high availability – the same technology as is used for live migration.
  • In terms of storage scalability – it’s up to the customer to choose how to slice/dice storage – with partner support for multipathing, hardware snapshotting, etc. Hyper-V users can have one LUN for each VM, or one LUN for 1,000 VMs (of course, no-one would actually do the latter).
  • Networking also uses the partner ecosystem – for example HP creates software to allow NIC teaming on its servers, and Hyper-V can use a virtual switch to point to this.
  • In terms of data protection, the volume shadow copy service on the host is used and there are a number of choices to make around agent placement. A single agent can be deployed to the host, with all guests protected (allowing whole-machine recovery), or guests can have their own agents to allow backups at the application level (for Exchange, SQL Server, etc.).
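As a point of reference for the list above, here's a minimal PowerShell sketch that enumerates the virtual machines on a Hyper-V host and their state, using the root\virtualization WMI namespace that Hyper-V exposes (run it on the host itself; the state-code mapping shown is the commonly documented one and the output formatting is just an example):

    # List Hyper-V virtual machines and their current state via WMI
    # (root\virtualization is the Hyper-V namespace on Windows Server 2008/R2)
    $states = @{ 2 = "Running"; 3 = "Off"; 32768 = "Paused"; 32769 = "Saved" }

    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
        Where-Object { $_.Caption -eq "Virtual Machine" } |    # skip the host's own entry
        ForEach-Object {
            New-Object PSObject -Property @{
                Name  = $_.ElementName
                State = $states[[int]$_.EnabledState]
            }
        } | Format-Table Name, State -AutoSize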

I’m sure that competitor products may have a longer list of features but in terms of capability, Hyper-V is “good enough” for most scenarios I can think of – I’d be interested to hear what barriers to enterprise adoption people see for Hyper-V.

Another one of my “How Do I?” videos makes it onto the Microsoft TechNet website


A couple of days back, I noticed that another one of my videos has made it onto the Microsoft TechNet website – this one looks at backing up a Hyper-V host using System Center Data Protection Manager 2007 SP1.

Those who’ve watched earlier videos may notice that the sound quality on this video is much improved as I finally bought myself a half-decent microphone. I’ve also dedicated a PC to the task of recording these videos (recommissioning my old Compaq DeskPro EN510SFF, which has been upgraded with a 250GB disk and 2GB of RAM, and more recently gained a Matrox Millennium G550 dual-display video card picked up for a few pounds on eBay). This machine is certainly no screamer but, as the videos are only recorded at 5 frames per second it’s perfectly capable of keeping up, although TechSmith Camtasia Studio falls over from time to time and the 2.4GHz Intel Pentium 4 processor does take a while to render the final output.

There are some more videos on the way as I’ve submitted three more that have yet to make it onto the TechNet site but, if you’re looking for step-by-step information on how to perform some common tasks with Microsoft products, then there are a whole bunch of guys working on these TechNet How Do I? videos and they’re definitely worth a look.

More on SCDPM and agent placement in a virtualised environment


Earlier this week, I wrote an introduction to System Center Data Protection Manager (SCDPM). In that post, I mentioned that SCDPM 2007 SP1 supports Virtual Server 2005 R2 and Hyper-V virtualisation platforms but I’d like to elaborate on that and highlight the need to consider where best to deploy the SCDPM agents.

With SCDPM 2007 SP1, we can back up Windows and non-Windows guest operating systems on either Virtual Server 2005 R2, Windows Server 2008 with the Hyper-V role enabled, or Hyper-V Server 2008. Depending upon the guest operating system, it will be either a VSS-capable or a non-VSS-capable guest.

Linux, Windows NT, Windows 2000, Oracle and line-of-business applications will generally be non-VSS-capable and, in this case, SCDPM will:

  1. Place the virtual machine into a saved state, securing the memory and CPU contents (a rough scripted sketch of this save/resume flow follows the list).
  2. Take a snapshot of the virtual machine using the volume shadow copy service (VSS) – this takes just a few seconds (as the backup is taken from the snapshot, not the offline virtual machine).
  3. Resume the virtual machine.
  4. Use block level checksums to send only the changes within the VHDs (since the last backup) to the SCDPM server.
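SCDPM drives this sequence itself but, to make the save/resume portion a little more concrete, the sketch below shows roughly what it looks like against the Hyper-V WMI interface (32769 and 2 are the documented saved/running state codes; the VM name is a placeholder and the comment in the middle stands in for the snapshot that DPM would actually take – this is illustrative, not how the product is implemented):

    # Illustrative only: place a VM into a saved state, then resume it.
    # 32769 = saved state, 2 = running (Hyper-V WMI v1 state codes).
    $vmName = "TESTVM01"    # placeholder VM name

    $vm = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
          Where-Object { $_.ElementName -eq $vmName }

    $null = $vm.RequestStateChange(32769)   # save the VM's memory and CPU state
    # ...a VSS snapshot of the host volume would be taken here...
    $null = $vm.RequestStateChange(2)       # resume the VM

(In reality RequestStateChange returns a job that should be waited on before continuing, but that's omitted here for brevity.)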

On a VSS-capable guest operating system:

  1. SCDPM requests protection via the agent on the virtualisation host, which contacts the host’s VSS writer for a referential VSS copy (a quick check for this writer is sketched after the lists below).
  2. A query is performed via the integration components, which instructs the VSS writer in the guest operating system (e.g. SQL Server VSS writer, Exchange Server VSS writer, Windows Server VSS writer) to create a consistent snapshot.
  3. Only when the guest data is consistent and clean does the virtualisation layer provide SCDPM with a copy to back up from the host.

This referential VSS writer process means that:

  • There is no downtime (backups are performed online).
  • The use of recursive VSS ensures consistency without placing the virtual machine into a saved state.
  • The only guest requirement is the presence of VM additions/integration components.
  • Guests are protected from the host.
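A quick way to confirm that the host-side writer this process depends on is present and healthy is to list the VSS writers on the parent partition. From an elevated prompt on the host (the writer name shown is the one I'd expect on a 2008/2008 R2 host – check the exact string on your own systems):

    # List the VSS writers on the Hyper-V host and pick out the Hyper-V one
    # (expect something like "Microsoft Hyper-V VSS Writer" with state "Stable")
    vssadmin list writers | Select-String -Context 0,4 "Hyper-V"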

Virtual Server exposes the backup options for VSS-capable and non-VSS-capable guests as online or offline backup. Hyper-V is more descriptive, offering Backup Using Child Partition Snapshot (the equivalent of an online backup) or Backup Using Saved State if no integration components are available.

So, with no downtime and no agent deployment for each guest operating system, why wouldn’t we always protect virtual machines from the host? Well, when we protect the guest from the host, the whole virtual machine is treated as a logical unit without any data selectability or granularity. There are some advantages to this approach (it allows bare-metal recovery of virtual machines to any other host; the whole virtual machine set can be protected with a single SCDPM agent and a single DPML license; non-Windows or legacy Windows operating systems can be backed up), but if an agent is deployed within the guest then SCDPM can select the data to protect/recover (e.g. individual SQL databases, Exchange storage groups, file sets, SharePoint farms, etc.) – albeit with the additional cost of deploying and licensing agents.

We can also use a hybrid of the two models – running an agent inside critical virtual machines but only using host-based backups for non-VSS-capable operating systems. Indeed, it may even be desirable to protect the entire virtual machine and its data separately (e.g. if using passthrough disks, the guest operating system can be backed up via the host and the data protected via the guest).
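To see how that split looks in practice on a DPM server, a short DPM Management Shell sketch along these lines will list each protection group and its datasources – host-level groups show whole virtual machines, while guest-level groups show individual databases, storage groups, volumes and so on (cmdlet names are as I recall them from the DPM 2007 shell and the server name is a placeholder, so treat this as indicative):

    # Rough sketch: list each protection group and the datasources it protects
    $dpmServer = "DPM01"    # placeholder DPM server name

    foreach ($pg in Get-ProtectionGroup -DPMServerName $dpmServer) {
        Write-Host ("Protection group: " + $pg.FriendlyName)
        Get-Datasource -ProtectionGroup $pg |
            ForEach-Object { Write-Host ("  " + $_.Name) }
    }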

SCDPM 2007 SP1 will back up shares, volumes, system state and any VSS-aware application workloads (subject to licensing options – an Enterprise Data Protection Management License will be required for native backup and recovery of applications). On the licensing front, it’s also worth considering the Server Management Suite Enterprise (SMSE) as it includes management licenses for System Center Data Protection Manager, Operations Manager, Configuration Manager and Virtual Machine Manager with free usage rights up to the number of guests licensed with each host operating system edition (one for Windows Server 2008 Standard Edition, four for Windows Server 2008 Enterprise Edition and unlimited for Windows Server 2008 Datacenter Edition).

The key points for agent placement are application consistency and the granularity of restoration required. By deploying an agent inside a virtual machine, a VSS-aware application will be signalled that a snapshot is about to be taken and consistency can be guaranteed. Alternatively, if application consistency is not an issue, by installing the agent on the host, each virtual machine can be backed up as a single container – in effect the virtual machine will be consistent but the application may not be.

Introduction to System Center Data Protection Manager


Late last year, I was at a Microsoft Virtualisation User Group meeting where Anthony Tyler, a Storage Technology Architect at Microsoft, spoke about System Center Data Protection Manager (SCDPM).

Anthony explained how customers experience what he referred to as “backup pain”: everyone needs better nightly backups but tape storage is inefficient; there is poor support for integrating backups with application-specific requirements; disk-based backups consume huge amounts of space; backing up across the WAN (e.g. for centralised backups) is not feasible; and remote and branch office data protection is expensive and cumbersome.

Microsoft’s answer is SCDPM, now in its second release, which addresses these issues as follows:

  • One common approach is to take a full backup at the weekend and then use nightly incrementals but this still involves backing up whole files – SCDPM just backs up the changes in the file (a much smaller volume of data).
  • At remote sites, branch staff may change tapes but the backups are not verified – because SCDPM uses less space for backups, remote backups become feasible.
  • Whilst SCDPM 2006 was pretty much only useful for Windows file servers, 2007 includes application support for Windows Server (2003 and 2008 – including clusters), Exchange Server (2003 and 2007), SQL Server (2000, 2005 and 2008), SharePoint (2003 and 2007, WSS and MOSS), Virtual Server 2005 R2 SP1 and Hyper-V (with SCDPM 2007 SP1), and the Windows XP and Windows Vista desktop clients.
  • Meanwhile, vendor “ping-pong” is reduced – in the event of problems there is “only one throat to choke”.

SCDPM is intended to be installed on a single-purpose server, running Windows Server 2003 SP1 or later, and it relies on SQL Server 2005 and Reporting Services. Active Directory is also required (for maintenance of access control lists). In essence, SCDPM is just a big VSS engine and, whilst it may be useful to read the TechNet article about how the volume shadow copy service works, the basic principle is a system of requestors that may wish to take a snapshot (e.g. SCDPM), writers (which ship with an application and know how best to take a consistent backup – with the onus on the vendor to provide this) and providers (which physically take the snapshot, using hardware or software – SCDPM is a software solution). The requestors, writers and providers all communicate via the VSS service.
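To see the requestor/writer/provider model in action without SCDPM at all, DiskShadow (the in-box VSS requestor on Windows Server 2008) can be scripted to take a throwaway snapshot – a minimal sketch, assuming C: as the example volume and the default non-persistent context (so the shadow copy is discarded when DiskShadow exits):

    # DiskShadow acts as the VSS requestor, the writers on the volume prepare
    # their data, and the provider takes the snapshot. With the default
    # (non-persistent) context the snapshot is deleted when DiskShadow exits.
    $script = "ADD VOLUME C:", "CREATE", "LIST SHADOWS ALL"
    $script | Set-Content -Path "$env:TEMP\vss-demo.dsh" -Encoding ASCII

    diskshadow /s "$env:TEMP\vss-demo.dsh"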

Using a file system filter driver, the SCDPM agent sits in the kernel and watches the file system, tracking block-level changes made to the disk (in a volume map) and writing changes back to the server according to a schedule in order to build snapshots (up to 512 of them). Whilst SCDPM can back up to tape, Microsoft’s view is that the real value for customers is at the application level, with Exchange or SQL admins backing up their application to disk and handing off the offline tape backups to the storage team.

Having set the scene, Anthony’s demonstration took us through the product, and the following were some of the key points I picked up:

  • Administratively, SCDPM is arranged around five context-sensitive menus with actions:
    1. Monitoring (of alerts and scheduled jobs) – with a MOM/SCOM management pack available for centralised reporting.
    2. Protection – setting up groups to enforce data protection policies.
    3. Recovery – browsing and searching for the appropriate recovery point.
    4. Reporting – using SQL reporting services for defined and custom queries.
    5. Management – of agents, disks and libraries (e.g. tape).
  • The SCDPM Management Shell (built on Windows PowerShell) may be used to script operations – everything in the GUI and more (see the sketch after this list).
  • SCDPM should be allocated raw disks (i.e. unformatted – or else it sees the disk as full!). LUNs can be extended (Windows only cares about what storage is being provided) but disks need to be visible in Disk Management, so NAS (which uses an SMB redirector) and removable volumes cannot be used with SCDPM – effectively, direct-attached, iSCSI and Fibre Channel-attached disks are the available options.
  • SQL Server is only used to store the SCDPM configuration – the backup data itself is not stored in SQL.
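As an example of the “everything in the GUI and more” point, the sketch below creates an ad-hoc disk-based recovery point for a protected datasource from the Management Shell. The cmdlet and parameter names are as I remember them from the DPM 2007 shell (they may differ slightly between releases) and the server, protection group and datasource names are placeholders:

    # Create an ad-hoc disk-based recovery point for a protected datasource
    # (names are placeholders; syntax as recalled from the DPM 2007 shell)
    $pg = Get-ProtectionGroup -DPMServerName "DPM01" |
          Where-Object { $_.FriendlyName -eq "File servers" }

    $ds = Get-Datasource -ProtectionGroup $pg |
          Where-Object { $_.Name -like "*D:*" }

    New-RecoveryPoint -Datasource $ds -Disk -DiskRecoveryPointOption WithSynchronize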

SCDPM works on the principle of protection groups – groups of objects to be backed up, and the wizard that is used to create a protection group asks how long backups should be retained for and the interval at which backups should be taken, from which it calculates the necessary disk and tape requirements. Optionally data can be compressed, or encrypted (256-bit AES, certificate-based) and, once the initial replica has been taken, backups consist of just the block-level changes to the data. The initial replication can be scheduled (e.g. to run out of hours) or there is the option to replicate on removable media (whereby the replica is restored to the SCDPM server, a consistency check is run, and the block-level differences are pulled across the network), although it’s still advisable to transfer the removable media as soon as possible to avoid another large transfer following the consistency check.

SCDPM maintains an in-memory representation of the file system (a volume map) to monitor disk block usage in a way that allows SCDPM to monitor 127GB of disk space using just 1MB of RAM. Each time SCDPM needs to take a backup, VSS takes a snapshot (literally a picture), then the application moves on whilst the snapshot is streamed to the SCDPM server as a background task. If the server goes offline and the bitmap is lost, then a consistency check will allow SCDPM to work out the differences.

Recovery is as simple as selecting the data to be recovered, the date and time of the recovery point, and where to put it. SCDPM also supports bare-metal recovery so that an image of a server can be restored to identical hardware; or it can use PXE to rebuild a server from a backup image, install the application, and then restore the data.

A hierarchy of SCDPM servers can be created so that a SCDPM server can be backed up to another DPM server (e.g. in a separate datacentre) or to a centralised tape backup library. Because the data is stored natively, restoration is possible from the secondary server (even if the primary SCDPM server is unavailable).

One of the benefits of DPM is its application-awareness – for example, it knows that a database also needs its transaction logs, but it hides that complexity from administrators. Even complex environments such as SharePoint (with many databases, front-end servers and indices) can be kept consistent with SCDPM backups, even supporting single-item recovery. Similarly, for Exchange Server, SCDPM can invoke eseutil.exe to make the database consistent and handle log file truncation. On a Virtual Server or Hyper-V host (where the host and guest are both running Windows Server 2003 SP1 or later), SCDPM can snapshot a VHD and take a backup in seconds. Even where online backups are not supported, SCDPM allows the virtual machine to be paused, snapshotted and restarted in a few minutes, because only the changes are backed up. As long as the Previous Versions client is installed, users can even restore their own data from within Windows Explorer by right-clicking a folder, because the VSS copies on the SCDPM server and the local disk are combined into a single view. Whilst it’s fair to note that the level of recovery support is application-dependent and SCDPM 2007 only recognises key Microsoft applications, if third-party software companies can provide a VSS writer and an XML descriptor then SCDPM should be able to back them up.

Traditionally, Microsoft products only start to gain some traction at their third release. SCDPM isn’t quite there yet (2007 is the second release) but it really is a great solution for backup and restoration of critical infrastructure, allowing application stakeholders (e.g. the SQL DBA, Exchange Administrator, SharePoint administrator or virtualisation administrator) to drive their own backup and restoration process. The third release is in development and SCDPM v3 will include improved support for client and cloud-based scenarios, as well as new data sources and a number of other improvements – indeed, in a webcast yesterday, Jason Buffington (Senior Technical Product Manager for Windows Storage Solutions and Data Protection) described SCDPM v3 as:

“[delivering] unified data protection for Windows servers and clients as a best-of-breed backup and recovery solution from Microsoft for Windows environments […providing] the best protection and most supportable restore scenarios from disk, tape and cloud in a scalable, reliable, managable and cost-effective way.”

Details of SCDPM are available on the Microsoft website and the SCDPM product team has a blog with further information.

Microsoft Virtualization: the R2 wave


The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.

Hyper-V in Windows Server 2008 R2

At last night’s event, Matt asked the UK User Group what they saw as the main limitations in the original Hyper-V release and the four main ones were:

  • USB device support
  • Dynamic memory management (ballooning)
  • Live Migration
  • 1 VM per storage LUN

Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.

So, starting out with clustering – a few points to note:

  • For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
  • FCCP solutions are recommended by Microsoft but are not strictly required for support – as long as all the components (i.e. server and SAN) are certified for Windows Server 2008 – a failover clustering validation report will still be required though – FCCP provides another level of confidence.
  • When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead and has the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks) but then Fibre Channel over Ethernet came onstream, which is potentially even faster (as outlined in a recent RunAs Radio podcast).

With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).

For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.

Windows Server 2008 R2 Hyper-V can now provide Live Migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible because the new Cluster Shared Volumes (CSV) feature, with IO fault tolerance (dynamic IO), overcomes the limitations of the shared-nothing model and allows up to 256TB per LUN, running on NTFS with no need for third-party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are transferred until so few remain that the last few pages can be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
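In the R2 release this can be driven from PowerShell as well as from Failover Cluster Manager – assuming the FailoverClusters module that ships with Windows Server 2008 R2, a live migration of a clustered VM looks roughly like this (VM group and node names are placeholders):

    # Live-migrate the clustered VM group "SQLVM01" to node HVNODE2
    # (Windows Server 2008 R2 FailoverClusters module; names are placeholders)
    Import-Module FailoverClusters
    Move-ClusterVirtualMachineRole -Name "SQLVM01" -Node "HVNODE2"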

Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through that owning node so that VMs on any node can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly, and if the owning node fails then volume ownership moves to another node. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.
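Marking a disk as a CSV can also be scripted with the same FailoverClusters module (assuming CSV has already been enabled on the cluster; the disk resource name below is a placeholder):

    # Promote an existing cluster disk to a Cluster Shared Volume, then list CSVs
    Import-Module FailoverClusters
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    Get-ClusterSharedVolume | Format-Table Name, State -AutoSize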

At a technical level, CSVs:

  • Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
  • Enable all nodes to perform high-performance direct reads/writes to all clustered storage – read/write IO performance to a volume is the same from any node.
  • Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
  • Need:
    • No special hardware requirements.
    • No special application requirements.
    • No file type restrictions.
    • No directory structure or depth limitations.
    • No special agents or additional installations.
    • No proprietary file system (using the well established NTFS).

Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:

  • 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support 6-core CPUs) so that Hyper-V will now support up to 4 8-core CPUs (and I would expect this to be increased as multi-core CPUs continue to develop).
  • Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
  • The ability to hot add/remove storage so that additional VHDs or pass-through disks may be assigned to running VMs, provided the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems, but not Windows XP 32-bit or Windows 2000).
  • Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
  • Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines (there’s a short bcdedit sketch after this list).
  • Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).
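For the boot-from-VHD item above, the physical-machine side is configured with bcdedit – the sequence below is indicative only (the VHD path is an example, and {guid} stands for whatever identifier bcdedit prints when the entry is copied; run from an elevated prompt, and the same commands work from cmd.exe without the quotes around the braces):

    # Add a boot entry that boots the physical machine from a VHD
    bcdedit /copy '{current}' /d "Windows Server 2008 R2 (VHD boot)"
    # note the {guid} returned by the copy command, then:
    bcdedit /set '{guid}' device 'vhd=[C:]\VHDs\w2k8r2.vhd'
    bcdedit /set '{guid}' osdevice 'vhd=[C:]\VHDs\w2k8r2.vhd'
    bcdedit /set '{guid}' detecthal on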

Hyper-V Server

So that’s covered the Hyper-V role in Windows Server 2008 R2 but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition), allowing access to up to 1TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an up-level operating system from a down-level client).

With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licenses for guest operating systems as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.

Management

The final piece of the puzzle is management:

There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server Hyper-V R2 (expected to ship soon).

SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.

Wrap-up

That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there are a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.

I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…

Microsoft Virtualization: part 6 (management)


Today’s release of System Center Virtual Machine Manager 2008 is a perfect opportunity to continue my series of blog posts on Microsoft Virtualization technologies by highlighting the management components.

Microsoft view of virtualisation

System Center is at the heart of the Microsoft Virtualization portfolio and this is where Microsoft’s strength lies as management is absolutely critical to successful implementation of virtualisation technologies. Arguably, no other virtualisation vendor has such a complete management portfolio for all the different forms of virtualisation (although competitors may have additional products in certain niche areas) – and no-one else that I’m aware of is able to manage physical and virtual systems in the same tools and in the same view:

  • First up is System Center Configuration Manager (SCCM) 2007, providing patch management and deployment; operating system and application configuration management; and software upgrades.
  • System Center Virtual Machine Manager (SCVMM) provides virtual machine management, server consolidation and resource utilisation optimisation, as well as providing the ability for physical-to-virtual (P2V) and limited virtual-to-virtual (V2V) conversion (predictably, from VMware to Microsoft, but not back again) – there’s a short scripted example after this list.
  • System Center Operations Manager (SCOM) 2007 (due for a second release in the first quarter of 2009) provides the end-to-end service management; server and application health monitoring and management (regardless of whether the server is physical or virtual); and performance monitoring and analysis.
  • System Center Data Protection Manager (SCDPM) completes the picture, providing live host virtual machine backup with in-guest consistency and rapid recovery (basically, quiescing VMs, before taking a snapshot and restarting the VM whilst backup continues – in a similar manner to VMware Consolidated Backup but also with the ability to act as a traditional backup solution).
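For a flavour of the SCVMM piece mentioned above, the VMM 2008 administrator console sits on top of a PowerShell layer, so the same information can be pulled out with a few lines – a hedged sketch (the snap-in name is the VMM 2008 one as I recall it, the server name is a placeholder, and the property names are indicative rather than definitive):

    # Connect to a VMM 2008 server and list hosts and VMs with basic status
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"
    Get-VMMServer -ComputerName "VMM01" | Out-Null   # placeholder server name

    Get-VMHost | Format-Table Name, OverallState -AutoSize
    Get-VM     | Format-Table Name, Status, VMHost -AutoSize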

But hang on – isn’t that four products to license? Yes, but there are ways to do this in a very cost-effective manner – albeit requiring some knowledge of Microsoft’s licensing policies which can be very confusing at times, so I’ll have a go at explaining things…

From the client management license perspective, SCCM is part of the core CAL suite that is available to volume license customers (i.e. most enterprises who are looking at Microsoft Virtualization). In addition, the Enterprise CAL suite includes SCOM (and many other products).

Looking at server management, and quoting a post I wrote a few months ago about licensing System Center products:

The most cost-effective way to license multiple System Center products is generally through the purchase of a System Center server management suite licence:

Unlike SCVMM 2007 (which was only available as part of the SMSE), SCVMM 2008 is available as a standalone product but it should be noted that, based on Microsoft’s example pricing, SCVMM 2008 (at $1304) is only marginally less expensive than the cost of the SMSE (at $1497) – both quoted prices include two years of software assurance and, for reference, the lowest price for VMware Virtual Center Management Server (VCMS) on the VMware website this morning is $6044. Whilst it should be noted that the VCMS price is not a direct comparison as it includes 1 year of Gold 12×5 support, it is considerably more expensive and has lower functionality.

It should be noted that the SMSE is virtualisation-technology-agnostic and grants unlimited virtualisation rights. By assigning an SMSE to the physical server, it can be:

  • Patched/updated (SCCM).
  • Monitored (SCOM).
  • Backed Up (SCDPM).
  • VMM host (SCVMM).
  • VMM server (SCVMM).

One of the advantages of using SCVMM and SCOM together is the performance and resource optimisation (PRO) functionality. Stefan Stranger has a good example of PRO in a blog post from earlier this year – basically, SCVMM uses the management pack framework in SCOM to detect issues with the underlying infrastructure and suggest appropriate actions for an administrator to take – for example, moving a virtual machine workload to another physical host, as demonstrated by Dell integrating SCVMM with their hardware management tools at the Microsoft Management Summit earlier this year.

I’ll end this post with a table which shows the relative feature sets of VMware Virtual Infrastructure Enterprise and the Windows Server 2008 Hyper-V/Server Management Suite Enterprise combination:

Feature | VMware Virtual Infrastructure Enterprise | Microsoft Windows Server 2008/Server Management Suite Enterprise
Bare-metal hypervisor | ESX/ESXi | Hyper-V
Centralised VM management | Virtual Center | SCVMM
Manage both ESX/ESXi and Hyper-V | – | SCVMM
VM backup | VCB | SCDPM
High availability/failover | Virtual Center | Windows Server Clustering
VM migration | VMotion | Quick Migration
Offline VM patching | Update Manager | VMM (with Offline Virtual Machine Servicing Tool)
Guest operating system patching/configuration management | – | SCCM
End-to-end operating system monitoring | – | SCOM
Intelligent placement | DRS | SCVMM
Integrated physical and virtual management | – | SMSE

This table is based on one from Microsoft and, in fairness, there are a few features that VMware would cite that Microsoft doesn’t yet have (memory management and live migration are the usual ones). It’s true to say that VMware is also making acquisitions and developing products for additional virtualisation scenarios (and has a new version of Virtual Infrastructure on the way – VI4) but the features and functionality in this table are the ones that the majority of organisations will look for today. VMware has some great products (read my post from the recent VMware Virtualization Forum) – but if I was an IT Manager looking to virtualise my infrastructure, then I’d be thinking hard about whether I really should be spending all that money on the VMware solution, when I could use the same hardware with less expensive software from Microsoft – and manage my virtual estate using the same tools (and processes) that I use for the physical infrastructure (reducing the overall management cost). VMware may have maturity on their side but, when push comes to shove, the total cost of ownership is going to be a major consideration in any technology selection.

November 2008 MVUG meeting announced


Those who attended the first Microsoft Virtualization User Group (MVUG) meeting in September will probably appreciate the quality of the event that Patrick Lownds and Matthew Millers put together with guest speakers from Microsoft (Justin Zarb, Matt McSpirit and James O’Neill) presenting on the various elements of the Microsoft Virtualization line-up (which reminds me… I must finish up that series of blog posts…).

The next event has just been announced for the evening of 10 November (at Microsoft in Reading) with presentations on Virtualization Solution Accelerators and System Center Data Protection Manager 2007 (i.e. backing up a virtualised environment) – register for the physical event – or catch the event virtually via LiveMeeting.