Is there such a thing as private cloud?

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I had an interesting discussion with a colleague today, who was arguing that there is no such thing as private cloud – it’s just virtualisation, rebranded.

Whilst I agree with his sentiment (many organisations claiming to have implemented private clouds have really just virtualised their server estate), I do think that private clouds can exist.

Cloud is a new business model, but the difference between traditional hosting and cloud computing is more than just commercial. The NIST definition of cloud computing is becoming increasingly widely accepted and it defines five essential characteristics, three service models and four deployment models.

The essential characteristics are:

  • “On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.”

and NIST’s private cloud definition is:

“Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

If anything, the NIST definition is incomplete (it doesn’t recognise any service models beyond infrastructure-, platform- and software-as-a-service – I’d add business process as a service too) but the rest is pretty spot on.

Looking at each of the characteristics and comparing them to a simple virtualisation of existing IT:

  • On demand self service: virtualisation alone doesn’t cover this – so private clouds need to include another technology layer to enable this functionality.
  • Broad network access: nothing controversial there, I think.
  • Resource pooling: I agree, standard virtualisation functionality.
  • Rapid elasticity: this is where private cloud struggles against public (bursting to public via a hybrid solution might help, if feasible from a governance/security perspective) but, with suitable capacity management in place, private virtualised infrastructure deployments can be elastic.
  • Measured service: again, an additional layer of technology is required in order to provide this functionality – more than just a standard virtualised solution (see the sketch after this list).
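To illustrate what that “additional technology layer” might look like, here is a minimal sketch, assuming a hypothetical hypervisor management API (the ProvisioningBackend class and its create_vm method are placeholders, not any particular product): a thin portal that adds on-demand self-service and per-consumer metering on top of plain virtualisation.

```python
# Hypothetical sketch only: a thin self-service and metering layer sitting on
# top of whatever virtualisation platform is in use. ProvisioningBackend is a
# placeholder for a real hypervisor management API, not any particular product.
import time
from collections import defaultdict


class ProvisioningBackend:
    """Stand-in for the underlying virtualisation platform's API."""

    def create_vm(self, name, vcpus, ram_gb):
        # A real implementation would call the hypervisor's management interface.
        return f"vm-{name}"


class PrivateCloudPortal:
    """Adds on-demand self-service and measured service on top of virtualisation."""

    def __init__(self, backend):
        self.backend = backend
        self.usage = defaultdict(list)  # per-consumer metering records

    def request_vm(self, consumer, name, vcpus, ram_gb):
        # On-demand self-service: the consumer provisions without human interaction.
        vm_id = self.backend.create_vm(name, vcpus, ram_gb)
        # Measured service: record what was provisioned, and when, for reporting/chargeback.
        self.usage[consumer].append(
            {"vm": vm_id, "vcpus": vcpus, "ram_gb": ram_gb, "provisioned_at": time.time()}
        )
        return vm_id

    def report(self, consumer):
        """Usage report for a given business unit (consumer)."""
        return self.usage[consumer]


portal = PrivateCloudPortal(ProvisioningBackend())
portal.request_vm("finance", "payroll-test", vcpus=2, ram_gb=4)
print(portal.report("finance"))
```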

All of this is possible to achieve internally (i.e. privately), and it’s important to note that it’s no good just porting existing applications to a virtualised infrastructure – they need to be re-architected to take advantage of these characteristics. But I’m pretty sure there is more to private cloud than just virtualisation with a new name…

As for whether there is a long-term place for private cloud… that’s an entirely separate question!

Red Hat Enterprise Virtualisation (aka “me too!”)

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier this month, I managed to attend a Red Hat webcast about their forthcoming virtualisation products. Although Red Hat Enterprise Linux has included the Xen hypervisor used by Citrix for a while now (as do other Linux distros), it seems that Red Hat wants to play in the enterprise virtualisation space with a new platform and management tools, directly competing with Citrix XenServer/Essentials, Microsoft Hyper-V/System Center Virtual Machine Manager and parts of the VMware portfolio.

Red Hat Enterprise Virtualisation (RHEV) is scheduled for release in late 2009 and is currently in private beta. It’s a standalone hypervisor, based on a RHEL kernel with KVM, and is expected to be less than 100MB in size. Bootable from PXE, flash, local disk or SAN, it will support up to 96 processing cores and 1TB of RAM, with VMs up to 16 vCPUs and 256GB of RAM. Red Hat is claiming that its high-performance virtual input/output drivers and PCI pass-through direct I/O will allow RHEV to offer 98% of the performance of a physical (bare metal) solution. In addition, RHEV includes the dynamic memory page sharing technology that only Microsoft is unable to offer on its hypervisor right now; SELinux for isolation; live migration; snapshots; and thin provisioning.

As RHEV approaches launch, it is expected that there will be announcements regarding support for Windows operating systems under Microsoft’s Server Virtualization Validation Program (SVVP), ensuring that customers with a heterogeneous environment (so, almost everyone then) are supported on their platform.

Red Hat seem keen to point out that they are not dropping support for Xen, with support continuing through to at least 2014 on an x86 platform; however, the reality is that Xen is being dropped in favour of KVM, which runs inside the kernel and is a full type 1 hypervisor, supporting guests from RHEL 3 to 5, and from Windows 2000 to Vista and Server 2008 (presumably soon to include Windows 7 and Server 2008 R2). RHEV is an x64-only solution and makes extensive use of hardware-assisted virtualisation, with directed I/O (Intel VT-d/AMD IOMMU) used for secure PCI passthrough together with PCI single root I/O virtualisation (SR-IOV) so that multiple virtual operating systems can achieve native I/O performance for network and block devices.

It all sounds great, but we already have at least three capable hypervisors in the x64 space and they are fast becoming commodity technologies. The real story is with management, and Red Hat is also introducing an RHEV Manager product. In many ways it’s no different to other virtualisation management platforms – offering GUI and CLI interfaces for the usual functionality around live migration, high availability, system scheduling, image deployment, power saving and a maintenance mode – but one feature I was impressed with (that I don’t remember seeing in System Center Virtual Machine Manager, although I may be mistaken) is a search-driven user interface. Whilst many virtual machine management products have the ability to tag virtual machines for grouping, etc., RHEV Manager can return results based on queries such as "show me all the virtualisation hosts running above 85% utilisation". What it doesn’t have, that SCVMM does (when integrated with SCOM) and that VirtualCenter does (when integrated with Zenoss), is the ability to manage the virtual and physical machine workloads as one; nor can RHEV Manager manage virtual machines running on another virtualisation platform.
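Purely as an illustration of the concept (this is not RHEV Manager’s actual API – the Host class and search_hosts function below are hypothetical), the kind of query involved is little more than a filter over host inventory data:

```python
# Hypothetical sketch: the sort of search-driven query described above,
# e.g. "show me all the virtualisation hosts running above 85% utilisation".
# This is not RHEV Manager's real interface, just the concept.
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    cpu_utilisation: float  # percentage


def search_hosts(hosts, threshold):
    """Return the virtualisation hosts running above the given utilisation."""
    return [h for h in hosts if h.cpu_utilisation > threshold]


inventory = [Host("rhev01", 92.0), Host("rhev02", 40.5), Host("rhev03", 87.3)]
print([h.name for h in search_hosts(inventory, 85.0)])  # ['rhev01', 'rhev03']
```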

The third part of Red Hat’s virtualisation portfolio is RHEV Manager for desktops – a virtual desktop infrastructure offering using the Simple Protocol for Independent Computing Environments (SPICE) adaptive remote rendering technology to connect to Red Hat’s own connection broker service from within a web browser client using ActiveX or .XPI extensions. In addition to brokering, image management, provisioning, high availability management and event management, RHEV for desktops integrates with LDAP directories (including Microsoft Active Directory) and provides a certificate management console.

Red Hat claim that their VDI experience is indistinguishable from a physical desktop, including 32-bit colour, high quality streaming video, multi-monitor support (up to 4 monitors), bi-directional audio and video (for VoIP and video conferencing), USB device redirection and WAN optimisation and compression. Microsoft’s RDP client can now offer most of these features, but it’s the Citrix ICA client that Red Hat really need to beat.

It does seem that Red Hat has some great new virtualisation products coming out and I’m sure there will be more announcements at next month’s Red Hat Summit in Chicago but now I can see how the VMware guys felt when Microsoft came out with Hyper-V and SCVMM. There is more than a little bit of “me too” here from Red Hat, with, on the face of it, very little true innovation. I’m not writing off RHEV just yet but they really are a little late to market here, with VMware way out in front, Citrix and Microsoft catching up fast, and Red Hat only just getting started.

So how, exactly, should a company license a hosted VDI solution with Windows?

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Late last night, I got myself involved in a Twitter conversation with @stufox, who works for Microsoft in New Zealand. I’ve never met Stu – but I do follow him and generally find his tweets interesting; however, it seems that we don’t agree on Microsoft’s approach to licensing Windows for virtual desktop infrastructure.

It started off with an article by Paul Venezia about the perfect storm of bad news for VDI that Stu thought was unfairly critical of Microsoft (and I agree that it is in many ways). The real point that upset Stu is that the article refers to “Microsoft’s draconian licensing for Windows XP VDI” and I didn’t help things when I piled in and said that, “at least from a managed service perspective. Windows client licensing makes VDI prohibitively expensive“.

Twitter’s 140-character messages don’t help much when you get into an argument, so I said I’d respond on this blog today. Let me make one thing clear – I’m not getting into a flame war with Stu, nor am I going to disclose anything from our conversation that isn’t already on our Twitter streams. I just want to explain, publicly, what one of my colleagues has been struggling with and for which, so far at least, Microsoft has been unable to provide a satisfactory solution. Hopefully Stu, someone else at Microsoft, or someone else in the virtualisation world will have an answer – and we can all be happy:

Stu asked me if I thought Microsoft should give away Windows for free. Of course not, not for free (but then I remembered that, after all, that is what they do with Windows Server if I buy Datacenter Edition). I understand that Microsoft is in business to make money. I also understand that all of those copies of Windows used for VDI need to be licensed but there also needs to be a way to do it at a reasonable price (perhaps the price that OEMs would pay to deploy Windows on physical hardware).

Stu’s final (for now) public comment on the subject was that “Blaming VECD licensing for ruining VDI is like saying ‘I’d buy the Ferrari if the engine wasn’t so expensive’“. Sure, VDI is not a cheap option (so a supercar like a Ferrari is probably the right analogy). It requires a significant infrastructure investment and there are technical challenges to overcome (e.g. for multimedia support). In many cases, VDI may be more elegant and more manageable but it presents a higher risk and greater cost than a well-managed traditional desktop solution (and many desktop deployments fall down on the “well-managed” part). So, the real issue with VDI is not Windows licensing – but Windows licensing is, nevertheless, one of the “engine” components that needs to be fixed before this metaphorical Ferrari becomes affordable. Particularly when organisations are used to running a fleet of mid-priced diesel saloons.

VDI is not a “silver bullet”. I believe that VDI is, and will continue to be, a niche technology (albeit a significant niche – in the way that thin client/server-based computing has been for the last decade). What I mean by this is that there will be a significant number of customers that deploy VDI, but there will be many more for whom it is not appropriate, regardless of the cost. For many, the traditional “thick” client, even on thinner hardware, and maybe even running virtualised on the desktop, will continue to be the norm for some time to come. But if Microsoft were to sort out their licensing model, then VDI might become a little more attractive for some of us. Let’s give Microsoft the benefit of the doubt here – maybe they are not sabotaging desktop virtualisation – but how, exactly, is a company supposed to license a hosted VDI solution with Windows?

Licensing does tend to follow technology and we’ve seen instances in the past where Microsoft’s virtualisation licensing policies have changed as a result of new technology that they have introduced. Perhaps when Windows Server 2008 R2 hits the streets and Remote Desktop Services provides a Microsoft product to act as a VDI broker, we’ll see some more sensible licensing policies for VDI with Windows…

Using cows to measure the environmental benefits associated with server virtualisation…

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Much is made of the environmental benefits of server consolidation using virtualisation technologies, so Microsoft and Alinean have put together a website to create a report of the likely environmental benefits of implementing Microsoft Virtualization technologies. I don’t know how accurate it is (the point of using Alinean is that there should be a sizable amount of independent market research behind this) but, ultimately, the goal here is to sell products (in this case Windows Server 2008 with Hyper-V).

Regardless of the serious environmental and economic qualities of the Hyper-Green site that Microsoft and Alinean have put together, it’s not a patch (humour-wise) on the Virtualisation Cow site that the Australia-based virtualisation consultancy Oriel have created, based on using HP server hardware and VMware Virtual Infrastructure software. The Oriel site may not produce a nice report based on market research from IDC and others but I’d rather express my greenhouse gas savings in terms of cows any day!

(This post is dedicated to Patrick Lownds – joint leader of the Microsoft Virtualization UK User Group – who commented at today’s Microsoft Virtualization Readiness training for partners that he was sure this would appear on my blog… it would be a shame to disappoint him…).

Microsoft Virtualization: part 1 (introduction)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Sitting at Microsoft’s London offices waiting for this evening’s Microsoft Virtualization User Group (MVUG) event to start reminded me that I still haven’t written up my notes on the various technologies that make up Microsoft’s virtualisation portfolio. It’s been three months since I spent a couple of days in Reading learning about this, and tonight was a great recap – along with some news (some of which I can’t write about just yet – wait for PDC is all I can say! – and some of which I can).

A few weeks back, I highlighted in my post on virtualisation as an infrastructure architecture consideration that Microsoft’s virtualisation strategy is much broader than just server virtualisation or virtual desktop infrastructure, and introduced the following diagram, based on one which appears in many Microsoft slide decks:

Microsoft view of virtualisation

At the heart of the strategy is System Center and, whereas VMware will highlight a number of technical weaknesses in the Microsoft products (some of which are of little consequence in reality), this is where Microsoft’s strength lies – especially with System Center Virtual Machine Manager (SCVMM) 2008 just about to be released (more on that soon) – as management is absolutely critical to successful implementation of a virtualisation strategy.

Over the next few days I’ll discuss the various components included in this diagram and highlight some of the key points about the various streams: server; desktop; application; and presentation virtualisation – as well as how they are all brought together in System Center.

Virtualisation as an enabler for cloud computing

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In my summary of the key messages from Microsoft’s virtualisation launch last week, I promised a post about the segment delivered by Tom Bittman, Gartner’s VP and Chief of Research for Infrastructure and Operations, who spoke about how virtualisation is a key enabler for cloud computing.

Normally, if I hear someone talking about cloud computing they are either predicting the death of traditional operating systems (notably Microsoft Windows); a vendor (perhaps even Microsoft) with their own view on the way that things will work out; or trying to provide an artificial definition of what cloud computing is and is not.

Then, there are people like me – infrastructure architects who see emerging technologies blurring the edges of the traditional desktop and web hosting models – technologies like Microsoft Silverlight (taking the Microsoft.NET Framework to the web), Adobe AIR (bringing rich internet applications to the desktop) and Google Gears (allowing offline access to web applications). We’re excited by all the new possibilities, but need to find a way through the minefield… which is where we end up going full circle and returning to conversations with product vendors about their vision for the future.

What I saw in Bittman’s presentation was an analyst, albeit one who was speaking at a Microsoft conference, talking in broad terms about cloud computing and how it is affected by virtualisation. No vendor allegiance, just telling it as it is. And this is what he had to say:

When people talk about virtualisation, they talk about saving money, power and space – and they talk about “green IT” – but virtualisation is more than that. Virtualisation is an enabling technology for the transformation of IT service delivery, a catalyst for changing architectures, processes, cultures, and the IT marketplace itself. And, through these changes, it enables business transformation.

Virtualisation is a hot topic but it’s also part of something much larger – cloud computing. But rather than moving all of our IT services to the Internet, Gartner see virtualisation as a means to unlock cloud computing so that internal IT departments deliver services to business units in a manner that is more “cloud-like”.

Bittman explained that in the past, our component-oriented approach has led to the management of silos for resource management, capacity planning and performance management.

Gartner: Virtualising the data centre – from silos to clouds

Then, as we realise how much these silos are costing, virtualisation is employed to drive down infrastructure costs and increase flexibility – a layer-oriented approach with pools of resource, and what he refers to as “elasticity” – the ability to “do things” much more quickly. Even that is only part of the journey though – by linking the pools of resource to the service level requirements of end users, an automated service-oriented approach can be created – an SOA in the form of cloud computing.

At the moment internal IT is still evolving, but external IT providers are starting to deliver service from the cloud (e.g. Google Apps, salesforce.com, etc.) – and that’s just the start of cloud computing.

Rather than defining cloud computing, Bittman described some of the key attributes:

  1. Service orientation.
  2. Utility pricing (either subsidised, or usage-based).
  3. Elasticity.
  4. Delivered over the Internet.

The first three of these are the same whether the cloud is internal or external.

Gartner: Virtualisation consolidation and deconsolidation

Virtualisation is not really about consolidation. It’s actually the decoupling of components that were previously combined – the application, operating system and hardware – to provide some level of abstraction. A hypervisor is just a service provider for compute resource to a virtual machine. Decoupling is only one part of what’s happening though, as the services may be delivered in different ways – what Gartner describes as alternative IT delivery models.

Technology is only one part of this transformation of IT – one of the biggest changes is the way in which we view IT as we move from buying components (e.g. a new server) to services (including thinking about how to consume those services – internally or from the cloud) and this is a cultural/mindset change.

Pricing and licensing also changes – no longer will serial numbers be tied to servers but new, usage-based, models will emerge.

IT funding will change too – with utility pricing leading to a fluid expansion and contraction of infrastructure as required to meet demands.

Speed of deployment is another change – as virtualisation allows for faster deployment and business IT users see the speed with which they can obtain new services, demand will also increase.

Management will be critical – processes for management of service providers and tools as the delivery model flexes based on the various service layers.

And all of this leads towards cloud computing – not outsourcing everything to external providers, but enabling strategic change by using technologies such as virtualisation to allow internal IT to function in a manner which is more akin to an external service, whilst also changing the business’ ability to consume cloud services.

Microsoft infrastructure architecture considerations: part 4 (virtualisation)

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post I’ll look at some of the architectural considerations for using virtualisation technologies.

Virtualisation is a huge discussion point right now but before rushing into a virtualisation solution it’s important to understand what the business problem is that needs to be solved.

If an organisation is looking to reduce data centre hosting costs through a reduction in the overall heat and power requirements then virtualisation may help – but if they want to run applications that rely on legacy operating system releases (like Windows NT 4.0) then the real problem is one of support – the operating system (and probably the application too) are unsupported, regardless of whether or not you can physically touch the underlying platform!

Even if virtualisation does look like it could be a solution (or part of one), it’s important to consider management – I often come up against a situation whereby, for operational reasons, virtualisation is more expensive because the operations teams see the host machine (even if it is just a hypervisor) as an extra server that needs to be administered. That’s a rather primitive way to look at things, but there is a real issue there – management is the most important consideration in any virtualisation solution.

Microsoft believes that it has a strong set of products when it comes to virtualisation, splitting the technologies out as server, desktop, application and presentation virtualisation, all managed with products under the System Center brand.

Microsoft view of virtualisation

Perhaps the area where Microsoft is weakest at the moment (relying on partners like Citrix and Quest to provide a desktop broker service) is desktop virtualisation. Having said that, it’s worth considering the market for a virtualised desktop infrastructure – with notebook PC sales outstripping demand for desktops, it could be viewed as a niche market. This is further complicated by the various alternatives to a virtual desktop running on a server somewhere: remote boot of a disk-less PC from a SAN; blade PCs (with an RDP connection from a thin client); or a server-based desktop (e.g. using presentation virtualisation).

Presentation virtualisation is also a niche technology as it failed to oust so-called “thick client” technologies from the infrastructure. Even so, it’s not uncommon (think of it as a large niche – if that’s not an oxymoron!) and it works particularly well in situations where a large volume of data needs to be accessed in a central database, as the remotely-hosted application is local to the data – rather than to the (possibly remote) user. This separation of the running application from the point of control allows for centralised data storage and a lower cost of management for applications (including session brokering capabilities) and, using new features in Windows Server 2008 (or third-party products on older releases of Windows Server), this may be further enhanced with the ability to provide gateways for RPC over HTTPS access (including a brokering capability, and avoiding the need for a full VPN solution) and web access/RemoteApp sessions (terminal server sessions which appear as locally-running applications).

The main problem with presentation virtualisation is incompatibility between applications, or between the desktop operating system and an application (which, for many, is the main barrier to Windows Vista deployment) – that’s where application virtualisation may help. Microsoft Application Virtualization (App-V – formerly known as SoftGrid) attempts to solve this issue of application-to-application incompatibility as well as aiding application deployment (with no requirement to test for application conflicts). To do this, App-V virtualises the application configuration (removing it from the operating system) and each application runs in its own runtime environment with complete isolation. This means that applications can run on clients without being “installed” (so it’s easy to remove unused applications) and allows administration from a central location.

The latest version of App-V is available for a full infrastructure (Microsoft System Center Application Virtualization Management Server), a lightweight infrastructure (Microsoft System Center Application Virtualization Streaming Server) or in MSI-delivered mode (Microsoft System Center Application Virtualization Standalone Mode).

Finally, host (or server) virtualisation – the most common form of virtualisation, but still deployed on only a fraction of the world’s servers – although there are few organisations that would not virtualise at least a part of their infrastructure, given a green-field scenario.

The main problems which host virtualisation can address are:

  • Optimising server investments by consolidating roles (driving up utilisation).
  • Business continuity management – a whole server becomes a few files, making it highly portable (albeit introducing security and management issues to resolve).
  • Dynamic data centre.
  • Test and development.

Most 64-bit versions of Windows Server have enterprise-ready virtualisation built in (in the shape of Hyper-V) and competitor solutions are available for a 32-bit environment (although most hardware purchased in recent years is 64-bit capable and has the necessary processor support). Windows NT is not supported on Hyper-V, however – so if there are legacy NT-based systems to virtualise, then Virtual Server 2005 R2 may be a more appropriate technology selection (NT 4.0 is still out of support, but at least it is a scenario that has been tested by Microsoft).

In the next post in this series, I’ll take a look at some of the infrastructure architecture considerations relating to security.

Microsoft Licensing: Part 5 (virtualisation)

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve written previously about Microsoft’s software licensing rules for server virtualisation but in this post, I’ll pick up on a few areas that I haven’t specifically covered before.

Just to summarise the situation with regards to Windows:

  • Windows Server 2008 standard edition and later includes the right to run one virtualised operating system environment (OSE).
  • Windows Server 2003 R2 enterprise edition and later includes the right to run four virtualised OSEs, as does Windows Vista enterprise edition.
  • Windows Server 2003 R2 datacenter edition and later, and Windows Server 2008 for Itanium-based systems include the right to run an unlimited number of virtualised OSEs, provided that all physical processors are licensed and the requisite number of client access licenses (CALs) have been purchased.
  • Each OSE can be the same, or a downlevel version of the Windows product running on the host; however a Windows Server 2003 R2 enterprise edition host is not licensed for Windows Server 2008 guests.
  • Multiple licenses may be assigned to a server (e.g. multiple enterprise edition licenses to run up to 8, 12, 16, etc. OSEs – saving on the cost of licensing the OSEs individually). Standard and enterprise edition licenses can also be re-assigned between servers (but only once every 90 days) and it quickly becomes more cost-effective to use datacenter edition, with its right to unlimited virtual OSEs (a worked comparison follows this list).
  • If the maximum number of OSE instances are running, then the instance of Windows running on the physical server may only be used to manage the virtual instances (i.e. it cannot support its own workload).
  • The same licensing rules apply regardless of the virtualisation product in use (so it is possible to buy Windows Server datacenter edition to licence Windows guest OSEs running on a VMware Virtual Infrastructure platform, for example).
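As a rough sketch of that cost trade-off (the prices below are illustrative placeholders only – use Microsoft’s Windows Server virtualisation calculators, mentioned later, for real figures), assuming each enterprise edition licence covers four virtual OSEs and datacenter edition is licensed per physical processor with unlimited virtual OSEs:

```python
# Illustrative only: break-even between stacking enterprise edition licences
# (4 virtual OSEs each) and datacenter edition (per physical CPU, unlimited OSEs).
# The prices are placeholders, not Microsoft list prices.
import math


def enterprise_licences_needed(virtual_oses):
    """Each enterprise edition licence assigned to the host covers 4 virtual OSEs."""
    return math.ceil(virtual_oses / 4)


def cheapest_option(virtual_oses, physical_cpus, enterprise_price, datacenter_price_per_cpu):
    enterprise_cost = enterprise_licences_needed(virtual_oses) * enterprise_price
    datacenter_cost = physical_cpus * datacenter_price_per_cpu  # unlimited virtual OSEs
    winner = "datacenter" if datacenter_cost < enterprise_cost else "enterprise"
    return winner, enterprise_cost, datacenter_cost


# Example: 12 virtual machines on a 2-CPU host, with placeholder prices.
print(cheapest_option(12, 2, enterprise_price=2500, datacenter_price_per_cpu=3000))
```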

When looking at the applications running in the virtual environment, these are licensed as they would be in a physical environment – and where per-processor licensing applies to virtualised applications, this relates to virtual CPUs.

SQL Server 2005 Enterprise Edition allows unlimited virtual SQL servers (using the per-processor licensing model) to run in a virtualised environment, providing that SQL Server has been purchased for the physical server, according to the number of physical CPUs. Similar rules apply to BizTalk Server 2006 R2 enterprise edition.

When using Windows Vista enterprise edition as the host for virtualisation (e.g. with Virtual PC) and running Office 2007 enterprise edition, the virtual OSEs can also run Office (even mixed versions).

Microsoft offers two Windows Server virtualisation calculators to estimate the number and cost of Windows Server licences for a variety of virtualisation scenarios (based on US open agreement pricing).

Looking at some of the other types of virtualisation that may be considered:

  • Presentation virtualisation (Terminal Services) requires the purchase of Terminal Server client access licenses (TSCALs) in addition to the server license, the normal per-device/user CALs and any application software. There are some other complications too in that:
    • Microsoft Office is licensed on a per-device basis, so non-volume license customers will also need to purchase a copy of Microsoft Office for the terminal server if clients will use Office applications within their terminal server sessions.
    • If users can roam between devices then all devices must be licensed, as roaming users can use any device, anywhere. So, if 1000 terminal server devices are provided but only 50 users need to use Office applications, 1000 copies of Office are required if the users can access any device; however, if the 50 Office users use dedicated devices to access the terminal server and never use the other 950 devices, then only 50 Office licenses are required (see the sketch after this list).
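A trivial sketch of that per-device rule (a hypothetical function, for illustration only):

```python
# Illustrative only: Office on a terminal server is licensed per device, so the
# count depends on how many devices could be used to run Office, not on users.
def office_licences_required(total_devices, dedicated_office_devices, users_roam):
    return total_devices if users_roam else dedicated_office_devices


print(office_licences_required(1000, 50, users_roam=True))   # 1000 licences
print(office_licences_required(1000, 50, users_roam=False))  # 50 licences
```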

Microsoft Application Virtualization (formerly SoftGrid) is only available to volume license customers.

In part 6 of this series, I’ll look at licensing for some of Microsoft’s security products.

Microsoft Offline Virtual Machine Servicing Tool

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In my recent article about the realities of managing a virtualised infrastructure, I mentioned the need to patch offline virtual machine images. Whilst many offline images will be templates, they may still require operating system, security or application updates to ensure that they are not vulnerable when started (or when a cloned VM is created from a template).

Now Microsoft has a beta for a tool that will allow this – imaginatively named the Offline Virtual Machine Servicing Tool. Built on the Windows Workflow Foundation and PowerShell, it works with System Center Virtual Machine Manager and either System Center Configuration Manager or Windows Server Update Services to automate the process of applying operating system updates through the definition of servicing jobs. Each job will:

  1. “Wake” the VM (deploy and start it).
  2. Trigger the appropriate update cycle.
  3. Shut down the VM and return it to the library.

Although I haven’t tried this yet, it does strike me that there is one potential pitfall to be aware of – sysprepped images for VM deployment templates will start into the Windows mini-setup wizard. I guess the workaround in such a scenario is to use tools from the Windows Automated Installation Kit (WAIK) to inject updates into the associated .WIM file and deploy VMs from image, rather than by cloning sysprepped VMs.
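As a minimal sketch of the three-step servicing job described above (the helper functions are hypothetical stand-ins for the SCVMM deployment and WSUS/Configuration Manager update calls, not the tool’s real API):

```python
# Hypothetical orchestration sketch of an offline VM servicing job.
import time


def wake_vm(library_vm):
    """Deploy the offline image from the VMM library to a maintenance host and start it."""
    print(f"Deploying and starting {library_vm}")
    return f"{library_vm}-running"


def trigger_update_cycle(running_vm):
    """Ask WSUS (or Configuration Manager) to apply outstanding updates to the VM."""
    print(f"Applying updates to {running_vm}")
    time.sleep(1)  # stand-in for waiting on the update cycle to complete


def return_to_library(running_vm, library_vm):
    """Shut the VM down and store the updated image back in the library."""
    print(f"Shutting down {running_vm} and returning {library_vm} to the library")


def servicing_job(library_vm):
    running = wake_vm(library_vm)            # 1. "wake" the VM
    trigger_update_cycle(running)            # 2. trigger the appropriate update cycle
    return_to_library(running, library_vm)   # 3. shut down and return to the library


servicing_job("w2k3-template-01")
```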

Further details of the Offline Virtual Machine Servicing Tool beta may be found on the Microsoft Connect site.

The delicate balance between IT security, supportability and usability

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

There is a delicate balance between IT security, supportability and usability. Just like the project management trilogy of fastest time, lowest cost and highest quality, you cannot have all three. Or can you?

Take, for example, a fictitious company with an IT-savvy user who has a business requirement to run non-standard software on his (company-supplied) notebook PC. This guy doesn’t expect support – at least not in the sense that the local IT guys will resolve technical problems with the non-standard build – but he does need them to be able to do things like let his machine access the corporate network and join the domain. Why does he need that? Because without it, he has to authenticate individually for every single application. In return, he is happy to comply with company policies and to agree to run the corporate security applications (anti-virus, etc.). Everyone should be happy. Except it doesn’t work that way, because the local IT guys are upset when they see something different. Something that doesn’t fit their view of the normal world – the way things should be.

I can understand that.

But our fictitious user’s problem goes a little further. In their quest to increase network security, the network administrators have done something in Cisco-land to implement port security. Moving between network segments (something you might expect to do with a laptop) needs some time for the network to catch up and allow the same MAC address to be used in a different part of the network. And then, not surprisingly, the virtual switch in the virtualisation product on this non-standard build doesn’t work when connected to the corporate LAN (it’s fine on other networks). What is left is a situation whereby anything outside the norm is effectively unsupportable.

Which leaves me thinking that the IT guys need to learn that IT is there to support the business (not the other way around).

Of course this fictitious company and IT-savvy user are real. I’ve just preserved their anonymity by not naming them here, but discovering this (very real) situation has convinced me that company-standard notebook builds are not the way to go. What we need is to think outside the box a little.

Three years ago, I blogged about using a virtual machine (VM) for my corporate applications and running this on a non-standard host OS. Technologies exist (e.g. VMware ACE) to ensure that the VM can only be used in the way that it should be. It could be the other way around (i.e. to give developers a virtual machine with full admin rights and let them do their “stuff” on top of a secured base build) but in practice I’ve found it works better with the corporate applications in the VM and full control over the host. For example, I have a 64-bit Windows Server 2008 build in order to use technologies like Hyper-V (which I couldn’t do inside a virtual machine) but our corporate VPN solution requires a 32-bit Windows operating system and some of our applications only work with Internet Explorer 6 – this is easily accommodated using a virtual machine for access to those corporate applications that do not play well with my chosen client OS.

So why not take this a step further? Why do users need a company PC and a home PC? Up until now the justification has been twofold:

  • Security and supportability – clearly separating the work and personal IT elements allows each to be protected from the other for security purposes. But for many knowledge workers, life is not split so cleanly between work and play. I don’t have “work” and “home” any more. I don’t mean that my wife has kicked me out and I sleep under a desk in the office, but that a large chunk of my working week is spent in my home office and that I often work at home in the evenings (less so at weekends). The 9 to 5 (or even 8 to 6) economy is no more.
  • Ownership of an asset – “my” company-supplied notebook PC is not actually “mine”. It’s a company asset, provided for my use as long as I work for the company. When I leave, the asset, together with all associated data, is transferred back to the company.

But if work and home are no longer cleanly separated, why can’t we resolve the issue of ownership so that I can have a single PC for work and personal use?

Take a company car as an analogy – I don’t drive different cars for work and for home but I do have a car leased for me by the company (for which I am the registered keeper and that I am permitted to use privately). In the UK, many company car schemes are closing and employees are being given an allowance instead to buy or lease a personal vehicle that is then available for business use. There may be restrictions on the type of vehicle – for example, it may need to be a 4 or 5 door hatchback, saloon or estate car (hatchback, sedan or station-wagon for those of you who are reading this in other parts of the world) rather than a 2-seater sports car or a motorbike.

If you apply this model to the IT world, I could be given an allowance for buying or leasing a PC. The operating system could be Windows, Mac OS X or Linux – as long as it can run a virtual machine with the corporate applications. The IT guys can have their world where everything is a known quantity – it all lives inside a VM – where there will be no more hardware procurement to worry about and no more new PC builds when our chosen vendor updates their product line. It will need the IT guys to be able to support a particular virtualisation solution on multiple platforms but that’s not insurmountable. As for corporate security, Windows Server 2008 includes Network Access Protection (NAP) – Cisco have an equivalent technology known as Network Admission Control (NAC) – and this can ensure that visiting PCs are quarantined until they are patched to meet the corporate security requirements.

So it seems we can have security, supportability, and usability. What is really required is for IT managers and architects to think differently.