VMware Virtualization Forum 2008

A few days back, I wrote a post about why I don’t write much about VMware. This VMware-blindness is not entirely through choice – and I do believe that I need to keep up to speed on their view of the virtualisation market in order to remain objective when I talk about competitive offerings from Microsoft.

With that in mind, I spent today at the VMware Virtualization Forum 2008 – a free-of-charge event for VMware customers. I went to a similar event two years ago (when I was working on a VI3 deployment) and found it very worthwhile, so I was pleased to be back again this year (I don’t know what happened in 2007!). It’s not a technical event – this is pure sales and marketing, with all the usual sponsors – but it did give me an overview of many of the announcements from last week’s VMworld conference in Las Vegas.

The event started late and, after keeping everyone waiting for 20 minutes, Chris Hammond, VMware’s Regional Director for the UK and Ireland, had the audacity to ask delegates to make sure they arrive in the session rooms on time! There was the expected plug for the European version of the VMworld conference and then the keynote began, presented by Lewis Gee, VMware’s Vice President of field operations for EMEA. After a disclaimer that any future products discussed were shown to illustrate VMware’s vision/roadmap and might be subject to change, he began to look at how VMware has reached its current market position and some of the developments for the future.

Key points from the keynote were:

  • The growth of the virtualisation market:
    • x86 servers have proliferated. Typically a single application is running on each server, working at 5-10% utilisation.
    • Virtual machines allowed the partitioning of servers so that many virtual server instances could exist on a single physical server.
    • Virtualisation was initially used in test and development environments but is now increasingly seen in production. The net result is a consolidation of physical servers but there is still significant growth.
    • As virtualisation technology started to take hold, customers began to look at the datacentre as a whole – VMware’s answer was Virtual Infrastructure.
    • Even now, there is significant investment in server hardware – in 2007, 8 million servers shipped (7.6m of which were x86/x64 servers).
    • The base component for VMware’s datacentre offerings is VMware’s bare-metal hypervisor – in paid (ESX) and free (ESXi) form. Whilst VMware now faces stiff competition from Microsoft and others, they were keen to highlight that their platform is proven, citing 120,000 customers over 7 years, 85% of customers using it in production, and years of uptime at some customer sites.
  • ESXi uses just 32MB on disk – less code means fewer bugs, fewer patches, and VMware argues that their monolithic approach has no dependence on the operating system or arbitrary drivers (in contrast to the approach taken by Microsoft and others).
  • In the face of new competitive pressures (although I doubt they would admit it), VMware are now focusing on improved management capabilities using a model that they call vServices:
    • At the operating system level, VMware Converter provides P2V capabilities to allow physical operating system instances to be migrated to the virtual infrastructure.
    • Application vServices provide availability, security, and scalability using established technologies such as VMotion, Distributed Resource Scheduler (DRS), VMware High Availability (HA) and Storage VMotion.
    • Infrastructure vServices include vCompute, vStorage and vNetwork – for example, at VMworld, Cisco and VMware announced the Nexus 1000V switch – allowing network teams to look at the infrastructure as a whole regardless of whether it is physical or virtual.
    • Security vServices are another example of where VMware is partnering with other companies and the VMSafe functionality allows security vendors to integrate their solutions as another layer inside VI.
    • All of this is supplemented with Management vServices (not just from VMware but also integration with third party products).
    • Finally, of course the whole virtual infrastructure runs on a physical datacentre infrastructure (i.e. hardware).
  • All of this signals a move from a virtual infrastructure to what VMware is calling the Virtual Datacenter Operating System. It’s pure marketing speak but the concept is one of transforming the datacentre into an internal “cloud”. The idea is that the internal cloud is an elastic, self-managing and self-healing utility which should support federation with external computing clouds and free IT from the constraints of static, hardware-mapped applications (all of which sounds remarkably like Gartner’s presentation at the Microsoft Virtualization launch).
  • Looking to the future, VMware plan to release:
    • vCloud – enabling local cloud to off-premise cloud migration (elastic capacity – using local resources where available and supplementing them with additional resources from external sources as demand requires). There are not yet many cloud providers but expect this to become an emerging service offering.
    • vCenter AppSpeed – providing quality of service (QoS) for a given service (made up of a number of applications, virtual machines, users, servers and datacentres); based around a model of discover, monitor and remediate.
    • VMware Fault Tolerance – application protection against hardware failures with no downtime (application and operating system independent) using mirrored virtual machine images for full business continuity.
  • Up to this point, the presentation had focused on datacentre developments but VMware are not taking their eyes off desktop virtualisation. VMware see desktops following users rather than devices (as do Microsoft) and VMware Virtual Desktop Infrastructure (VDI) is being repositioned as VMware View – providing users with access to the same environment wherever the desktop is hosted (thin client, PC, or mobile).
  • In summary, after 10 years in desktop virtualisation, 7 years in server virtualisation and 5 years in building a management portfolio (mostly through acquisition), VMware is now looking to the cloud:
    • Virtual Datacenter OS is about the internal cloud – providing efficient and flexible use of applications and resources.
    • vCloud is the initiative to allow the virtual datacentre to scale outside the firewall, including federation with the cloud.
    • VMware View is about solving the desktop dilemma – making the delivery of IT people- and information-centric.

VMware Virtual Datacenter OS

Next up was a customer presentation – with Lawrence Clark from Carphone Warehouse recounting his experiences of implementing virtualisation before Richard Garsthagen (Senior Evangelist at VMware) and Lee Dilworth (Specialist System Engineer at VMware) gave a short demonstration of some of the technical features (although there was very little that was new for anyone who has seen VMware Virtual Infrastructure before – just VMotion, HA, DRS and the new VMware Fault Tolerance functionality).

Following this were a number of breakout sessions – and the first one I attended was Rory Clements’ presentation on transforming the datacentre through virtualisation. Rory gave an overview of VMware’s datacentre virtualisation products, based around a model of:

  1. Separate, using a hypervisor such as ESXi to run multiple virtual machines on a single physical server, with the benefits of encapsulation, hardware independence, partitioning and security through isolation.
  2. Consolidate, adding management products to the hypervisor layer, resulting in savings on capital expenditure as more and more servers are virtualised, running on a shared storage platform and using dynamic resource scheduling.
  3. Aggregate (capacity on demand), creating a virtual infrastructure with resource pooling, managed dynamically to guarantee application performance (a bold statement if ever I heard one!). Features such as VMotion can be used to remove planned downtime through live migration and HA provides a clustering capability for any application (although I do consider HA to be a misnomer in this case as it does require a restart).
  4. Automate (self-managing datacenter), enabling business agility with products/features such as: Stage Manager to automate the application provisioning cycle; Site Recovery Manager to create a workflow for disaster recovery and automatically fail entire datacentres over between sites; dynamic power management to move workloads and shut down virtualisation hosts that are not required to service current demands; and Update Manager, using DRS to dynamically reallocate workloads, then put a host into maintenance mode, patch, restart and bring the server back online, before repeating with the rest of the nodes in the cluster (sketched in code after this list).
  5. Liberate (computing clouds on and off premise) – create a virtual datacentre operating system with the vServices covered in the keynote session.
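
The automation in step 4 is essentially an orchestration loop. Here is a minimal sketch in Python of that rolling-update cycle – the cluster and host helpers are hypothetical stand-ins for the DRS and maintenance-mode operations a real tool would invoke, not VMware’s API:

```python
# Conceptual sketch of the Update Manager/DRS rolling-update cycle described
# in step 4 above. Every helper method here is hypothetical - a stand-in for
# whatever calls a real orchestration tool would make.

def rolling_update(cluster, apply_patches):
    """Patch every host in the cluster one node at a time, keeping VMs running."""
    for host in cluster.hosts:
        cluster.migrate_vms_off(host)   # DRS live-migrates workloads to other hosts
        host.enter_maintenance_mode()   # no new VMs will be placed on this host
        apply_patches(host)             # install the outstanding updates
        host.reboot()                   # restart to pick up the patches
        host.exit_maintenance_mode()    # host rejoins the pool; DRS may move VMs back
```

The point of the pattern is that, with several hosts in the cluster, capacity only ever drops by one host at a time, so the workloads themselves see no downtime.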

In a thinly veiled swipe at competitive products that was not entirely based on fact, Rory indicated that Microsoft were only at the first stage – entirely missing the point that they too have a strong virtualisation management solution and can cluster virtualisation hosts (even though the failover is not seamless). I don’t expect VMware to promote a competitive solution but the lack of honesty in the pitch did have a touch of the “used-car salesman” approach to it…

In the next session, Stéphane Broquère, a senior product marketing manager at VMware and formerly CEO at Dunes Technologies (acquired by VMware for their virtualisation lifecycle management software) talked about virtual datacentre automation with VMware:

  • Stéphane spoke of the virtual datacentre operating system as an elastic, self-managing, self-healing software substrate between the hardware pool and applications with software provisioned and allocated to hardware upon demand.
  • Looking specifically at the vManagement technologies, he described vApps using the DMTF Open Virtualization Format (OVF) to provide metadata which describes the application or service and what it requires – e.g. name, ports, response times, encryption, recovery point objective and VM lifetime.
  • vCenter is the renamed VirtualCenter with a mixture of existing and new functionality. Some products were skipped over (i.e. ConfigControl, CapacityIQ, Chargeback, Orchestrator, AppSpeed) but Stéphane spent some time looking at three products in particular:
    • Lifecycle Manager automates virtual machine provisioning, and provides intelligent deployment, tracking and decommissioning to ensure that a stable datacentre environment is maintained through proper approvals, standard configuration procedures and better accountability – all of which should lead to increased uptime.
    • Lab Manager provides a self-provisioning portal with an image library of virtual machine configurations, controlled with policies and quotas and running on a shared pool of resources (under centralised control).
    • Stage Manager targets release management by placing virtual machine images into a configuration (a service) and then promoting or demoting configurations between environments (e.g. integration, testing, staging, UAT and production) based on the rights assigned to a user – the sketch after this list illustrates the idea. Images can also be archived or cloned to create a copy for further testing.
  • Over time, the various vCenter products (many of which are the result of acquisitions) can be expected to come together and there will be some consolidation (e.g. of the various workflow engines). In addition, VMware will continue to provide APIs and SDKs and collaborate with partners to extend the platform.
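
Stage Manager’s promotion model is essentially a small state machine. The Python sketch below is my own illustration of the idea described above – the environment chain, the rights check and all the names are invented for the example, not taken from VMware’s data model:

```python
# Conceptual sketch of the Stage Manager promote/demote model (my own
# illustration, not VMware's implementation): a configuration of VM images
# moves along a fixed chain of environments, gated by the user's rights.

ENVIRONMENTS = ["integration", "testing", "staging", "UAT", "production"]

class Configuration:
    """A 'service': a named set of VM images at some stage in the chain."""

    def __init__(self, name, images):
        self.name = name
        self.images = images   # the VM images that make up the service
        self.stage = 0         # index into ENVIRONMENTS

    def promote(self, user_rights):
        if self.stage + 1 >= len(ENVIRONMENTS):
            raise ValueError("already in production")
        target = ENVIRONMENTS[self.stage + 1]
        if target not in user_rights:
            raise PermissionError(f"no rights to promote into {target}")
        self.stage += 1

    def demote(self):
        self.stage = max(0, self.stage - 1)

svc = Configuration("order-service", images=["web-vm", "app-vm", "db-vm"])
svc.promote(user_rights={"testing", "staging"})  # integration -> testing
print(svc.name, "is now in", ENVIRONMENTS[svc.stage])
```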

After lunch, Lee Dilworth spoke about business continuity and disaster recovery, looking again at VMware HA, VMotion, DRS and Update Manager as well as other features that are not always considered like snapshots and network port trunking.

He also spoke of:

  • Experimental VM failure monitoring capabilities, which watch guest operating systems for failure and can interpret hardware health information from a server management card.
  • Storage VMotion – redistributing workloads to optimise the storage configuration, providing online migration of virtual machine disks to a new data store with zero downtime, using a redo log file to capture in-flight transactions whilst the file copy is taking place (e.g. when migrating between storage arrays) – see the sketch after this list.
  • VMware Fault Tolerance – providing continuous availability, although still not an ideal technology for stretch clusters. It should also be noted that VMware Fault Tolerance is limited to a single vCPU and the shadow virtual machine is still live (consuming memory and CPU resources) so is probably not something that should be applied to all virtual machines.
  • vCenter Data Recovery (formerly VMware Consolidated Backup) – providing agentless disk-based backup and recovery, with virtual machine or file level restoration, incremental backups and data de-duplication to save space.
  • Site Recovery Manager (SRM) – allowing seamless failover between datacentres to restart hundreds of virtual machines on another site in the event of an unplanned or planned outage. Importantly, SRM requires a separate VirtualCenter management framework on each site and replicated fibre channel or iSCSI LUNs (NFS support will follow next year). It is not a replacement for existing storage replication products (it is an orchestration tool that integrates with them) nor is it a geo-clustering solution.
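
That redo-log mechanism is the clever part of Storage VMotion, and it is easy to model in miniature. This toy Python sketch is my own simplification of the idea – copy the disk while logging writes that land mid-copy, then replay the log against the copy – and bears no relation to VMware’s actual implementation:

```python
# Toy model of the Storage VMotion redo-log idea (my simplification, not
# VMware's implementation): snapshot-copy the disk, capture in-flight writes
# in a redo log, then replay the log against the copy before switching over.

class ToyDisk:
    def __init__(self, blocks):
        self.blocks = list(blocks)

def migrate(source, writes_during_copy):
    # Phase 1: bulk copy of the disk as it stood when the copy started...
    target = ToyDisk(source.blocks)
    # ...while writes arriving mid-copy hit the source and go into a redo log.
    redo_log = []
    for block_index, data in writes_during_copy:
        source.blocks[block_index] = data
        redo_log.append((block_index, data))

    # Phase 2: replay the redo log so the copy catches up, then cut over.
    for block_index, data in redo_log:
        target.blocks[block_index] = data
    return target

disk = ToyDisk(["a", "b", "c"])
moved = migrate(disk, writes_during_copy=[(1, "B")])
assert moved.blocks == ["a", "B", "c"]  # the mid-copy write was not lost
```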

In the final session of the day, Reg Hall spoke about using virtual desktop infrastructure technologies to provide a desktop service from the datacentre. Key points were:

  • VMware has three desktop solutions:
    • Virtual Desktop Infrastructure (VDI), consisting of the virtual desktop manager (VDM) connection broker and standard Virtual Infrastructure functionality. Users connect via a security server and, once authenticated, VDM manages access to a pool of virtual desktops.
    • Assured Computing Environment (ACE), providing a portable desktop that is managed and secured with a central policy.
    • ThinApp (formerly Thinstall) for application virtualisation, allowing an application to be packaged once and deployed everywhere (on a physical PC, blade server, VDI, thin client, ACE VM, etc.) – although I’m told (by Microsoft) that VMware’s suggestion to use the product to run multiple versions of Internet Explorer side by side would be in breach of the EULA (I am not a lawyer).
  • Highlighting VMware’s memory management as an advantage over competitive solutions (and totally missing the point that by not buying VMware products, the money saved will buy a lot of extra memory), VMware cited double memory overcommitment when running virtual desktops; however, their own performance tuning best-practice guidance says “Avoid high memory overcomittment [sic]. Make sure the host has more memory than the total amount of memory that will be used by ESX plus the sum of the working set sizes that will be used by all the virtual machines.” (I’ve put some rough numbers to this after the list below.)
  • Assuming that hypervisors will become a target for attackers (a fair assumption), VMSafe provides a hardened virtual machine to protect other workloads through inspection of the virtual infrastructure.
  • As VDI becomes VMware View, desktops will follow users with the same operating system, application and data combinations available from a thin client, thick client (using client virtualisation – e.g. a desktop hypervisor) or even on a mobile device, with the ability to check a desktop in/out for synchronisation with the datacentre or offline working.
  • VDM will become View Manager and View Composer will manage the process of providing a single master virtual machine with many linked clones, appearing as individual systems but actually a single, scalable virtual image. At this point, patching becomes trivial, with patches applied to the master effectively being applied throughout the VMware View.
  • Other developments include improvements to connection protocols (moving away from RDP); improved 3D graphics virtualisation and a universal client (a device-independent client virtualisation layer).
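
On the memory overcommitment point, the best-practice rule quoted above turns into very simple arithmetic. Here is a rough Python illustration – every figure is an assumption picked for the example, not a measurement:

```python
# Back-of-the-envelope check of the sizing rule quoted above: host memory
# should exceed ESX's own usage plus the sum of the VMs' working sets.
# All numbers below are assumed for illustration.

host_memory_gb = 32
esx_overhead_gb = 2             # hypervisor plus per-VM overheads (assumed)
desktop_working_set_gb = 0.75   # working set of one virtual desktop (assumed)

within_rule = int((host_memory_gb - esx_overhead_gb) / desktop_working_set_gb)
print(f"Desktops within the rule: {within_rule}")  # 40

# "Double memory overcommitment" would mean running ~80 such desktops on this
# host - each fits individually, but the combined working sets exceed physical
# memory, which is exactly what VMware's own guidance warns against.
```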

Overall, the event provided me with what I needed – an overview of the current VMware products, along with a view of what is coming onstream over the next year or so. It’s a shame that the VMware Virtualization Forum suffered from poor organisation, lousy catering (don’t invite me to arrive somewhere before 9, then not provide breakfast, and start late!) and a lack of a proper closedown (the sessions ended, then there were drinks, but no closing presentation) – but the real point is not the event logistics but where this company is headed.

Behind all the marketing rhetoric, VMware is clearly doing some good things. They do have a lead on the competition at the moment but I’ve not seen any evidence that the lead is as advanced as the statements that we’ve-been-doing-hypervisor-based-virtualisation-for-7-years-and-Microsoft-is-only-just-getting-started would indicate. As one VMware employee told me, at last year’s event there was a lot of “tyre-kicking” as customers started to warm up to the idea of virtualisation whereas this year they want to know how to do specific things. That in itself is a very telling story – just because you have the best technology doesn’t mean that’s what customers are ready to deploy and, by the time customers are hosting datacentres in the cloud and running hypervisor-based client operating systems, VMware won’t be the only company offering the technology that lends itself to that type of service.

In case you were wondering why I don’t write much about VMware…

Wearing as many hats as I do, I enjoy a variety of relationships with a number of IT hardware, software and services companies on various levels. I try to remain objective when I write on this blog but sometimes those other companies make it difficult.

For example: Microsoft talks to me as a partner, as a customer and as press (they take a very broad view of the press and include bloggers in that group – real journalists will almost certainly disagree) and I get a lot of information, some of which I can write about, and some of which is under NDA (sometimes the problem is remembering in which context I heard the information and therefore what I can or can’t say!); Fujitsu talks to me as an employee (and for that reason I can’t/don’t/won’t say very much about them at all); VMware sort of talk to me as a customer and it would be nice if they talked to me as a partner (they do speak to a number of my colleagues) but mostly they don’t talk to me at all…

This summer, I attended two events about desktop virtualisation within a few days of one another – one from Microsoft and the other from VMware. I was going to write a blog post about desktop virtualisation and Microsoft but I decided to hold back, in the interest of balance, to compare the Microsoft desktop virtualisation story with the VMware one. Except that the “VMware VDI Roadshow” event that I was attending turned out to be hosted by a partner (BT Basilica) and VMware were just the warm-up act for the pre-sales pitch. There was no mention of that when I registered – in fact no mention of Basilica until the last pre-event e-mail (when the sending address switched from events@vmware.com to marketing.campaign@basilica.co.uk) but within a few hours of attending (and before I was back in the office) I’d received an e-mail from someone at BT Basilica asking if they could help me at all with my virtualisation deployments.

Meanwhile, VMware had promised that the slide decks from the event would be made available if I asked for them on my feedback form (I did), so I didn’t make full notes at the presentation. Almost three months on, with calls to BT Basilica, an e-mail to the VMware presenter from that day, and having registered my displeasure in a follow-up telesales call on behalf of BT Basilica, I still don’t have the presentation slides.

So that’s one reason why I don’t have much that’s good to say about VMware right now. That and the fact that I have enjoyed almost no benefits from being a VMware Certified Professional. I would hope that VCPs would be the ideal audience to target with information about product developments, new releases, roadmaps, etc. but apparently not. If I want to stay current on VMware products then I have to do my own research (or pay for a training course).

Then there’s my purchase of VMware Fusion. After weeks of asking why their licensing system showed the license key for my copy of the product (which was purchased in an Apple store) as an evaluation copy, I was unable to get a satisfactory answer. Then version 2.0 was released as a free upgrade for existing registered customers and I heard… silence.

Next week, VMware is running its Virtualization Forum in London and I registered for attendance a few weeks back but, with a week to go, I’m still waiting to hear whether my registration has been accepted (despite having received confirmation that they have my details and will be in touch) – and my follow-up e-mails are, as yet, unanswered. Maybe I’m on a waitlist because the event is full but it would be good to know if that’s the case.

I could go on but, by now, you are probably getting the picture…

VMware are leaders in their market but my experience of the company is not a good one – neither as a business customer nor as a consumer. This is a tiny blog and I’m sure VMware don’t care what I have to say (far less than they would for Alessandro Perilli or virtualisation specialists like Scott Lowe) but, as I said at the top of this post, I wear many hats, and one of them involves building up my organisation’s capabilities around a certain vendor’s virtualisation products. So, next time I write about Microsoft’s virtualisation products here, please bear in mind that I did try to balance things up… and VMware didn’t want to know.

Microsoft improves support for virtualisation – unless you’re using a VMware product

Software licensing always seems to be one step behind the technology. In the past, I’ve heard Microsoft comment that to virtualise one of their desktop operating systems (e.g. using VMware Virtual Desktop Infrastructure) was in breach of the associated licensing agreements – then they introduced a number of licensing changes – including the Vista Enterprise Centralised Desktop (VECD) – to provide a way forward (at least for those customers with an Enterprise Agreement). Similarly, I’ve heard Microsoft employees state that using Thinstall (now owned by VMware and rebranded as ThinApp) to run multiple copies of Internet Explorer is in breach of the EULA (the cynic in me says that I’m sure they would have fewer concerns if the technology involved was App-V). A few years back, even offline virtual machine images needed to be licensed – then Microsoft updated their Windows Server licensing to include virtualisation rights but it was never so clear-cut for applications, with complex rules around the reassignment of licenses (e.g. in a disaster recovery failover scenario). Yesterday, Microsoft made another step to bring licensing in line with customer requirements when they waived the previous 90-day reassignment rule for a number of server applications, allowing customers to reassign licenses from one server to another within a server farm as frequently as required (it’s difficult to run a dynamic datacentre if the licenses are not portable!).

It’s important to note that Microsoft’s licensing policies are totally agnostic of the virtualisation product in use – but support is an entirely different matter.

Microsoft also updated their support policy for Microsoft software running on a non-Microsoft virtualisation platform (see Microsoft knowledge base article 897615), with an increased number of Microsoft applications supported on Windows Server 2008 Hyper-V, Microsoft Hyper-V Server (not yet a released product) or any third-party validated virtualisation platform – based on the Server Virtualization Validation Program (SVVP). Other vendors taking part in the SVVP include Cisco, Citrix, Novell, Sun Microsystems and Virtual Iron… but there’s a rather large virtualisation vendor who seems to be missing from the party…

[Update: VMware joined the party… they were just a little late]

Why Hyper-V does not mean the end of VMware – but at last it provides some competition for ESX

Microsoft has been very careful in its statements about comparing Hyper-V with ESX. Jason Perlow’s Hyper-V review is a little more forthright and the graphics are great!

I don’t think that VMware is the new Netscape (although it seems IDC might think so) – they will be back with bigger and better things, and then Microsoft will push forward again in the next release of Hyper-V. Even so, all of a sudden, this is a two horse race, and VMware will start to see their market share decline.

And to all those who are comparing Hyper-V with VMware Virtual Infrastructure – get real – that’s not comparing apples with apples. More realistic comparisons are:

  • Hyper-V and ESX.
  • Hyper-V Server (not yet released) and ESXi.
  • Virtual Infrastructure and Hyper-V plus various System Center components.

As for the argument that it’s all about TCO, I’ll leave the detail to the vendors and analysts but, from a simplistic view, Hyper-V and System Center are much less expensive to purchase than Virtual Infrastructure 3, the technical skills required for support are less specialised (read: less expensive) and I find it hard to see how a broad management suite like Microsoft System Center is more expensive to run than a virtualisation-only management product like VMware VirtualCenter together with the other products that will be required to manage the workload itself.

Critics say that virtualisation is about more than just the hypervisor and that management is important (it certainly is), then they deride Hyper-V (which is really just a hypervisor and basic management tools) by comparing it to Virtual Infrastructure’s management features. Their next argument is typically that Hyper-V won’t support desktop virtualisation and, from what I’ve seen, Microsoft is pretty much there on a credible solution for that too – as well as profile, presentation and application virtualisation, with partners like Citrix, Quest and AppSense filling in the gaps.

It’s not all over for VMware but they do need to find a new business model. Quickly.

Passed the VMware Certified Professional exam

This morning I passed the VMware Certified Professional on VI3 exam.

VMware’s non-disclosure agreement prevents me from saying anything about the exam itself but I can say that it involved a lot of preparation and this was my strategy:

  • Most importantly, get some experience of working with VMware products (I have been working on a project to implement VI3 since July and also use VMware Server every day).
  • Attend the mandatory VMware Infrastructure 3 Install and Configure course (I don’t believe that making a £2200 course mandatory is a good thing – people with suitable experience should be allowed to take the test without having to either shell out that sort of cash themselves or persuade their employer to do it – often locking them into an agreement to stay with the company…).
  • Book the exam (oh yes, the £2200 doesn’t include exam fees – that’s another £100).
  • Use the week before Christmas, when most of my colleagues were on holiday, to lock myself away and cram like crazy, reading the course notes through again as well as the product documentation. I find that writing notes helps me to take information on board and I’ve published my revision notes here (note that these were written prior to taking the exam and, to avoid breaking the terms of the exam NDA, the content has not been edited to reflect what I experienced in the exam – the only changes from the originals relate to formatting and grammar).

Server virtualisation using VMware: two real-life stories

Last week, I attended the VMware Beyond Boundaries event in London – the first of a series of events intended to highlight the benefits that server virtualisation technology has to offer. The day included a number of high-quality presentations; however, there were two that had particular meaning because they were given by VMware customers – no marketing fluff, just “this is what we found when we implemented VMware in our organisations”. Those organisations weren’t small either – BT (a leading provider of communications solutions serving customers throughout the world) and Nationwide Building Society (the UK’s largest building society – a member-owned financial services provider with mutual status).

Michael Crader, BT’s Head of Development and Test Infrastructure/Head of Windows Migration, talked about the BT datacentre server consolidation and virtualisation strategy. This was a particularly interesting presentation because it showed just how dramatic the savings from implementing the technology could be.

BT’s initial virtualisation project was concerned with test and development facilities and used HP ProLiant DL585 servers with 16-48GB RAM, attached to NetApp primary (24TB) and nearline (32TB) storage, with each virtual machine on its own LUN. SnapMirror technology allows a virtual machine to be backed up in 2-3 seconds, which allowed the removal of two roles in which staff had been solely responsible for loading tapes (backups of the test infrastructure previously took 96 hours).

The virtualisation of the test and development facilities was so successful that BT moved on to file and print, and then to production sites, where BT are part way through consolidating 1503 Wintel servers in 5 datacentres into three virtual infrastructures, aiming for:

  • Clearance of rack space (15:1 server consolidation ratio).
  • Reduction in power/heat requirements.
  • Virtualised servers and storage.
  • Rapid deployment.
  • True capacity on demand.

The supporting hardware is still from HP, using AMD Opteron CPUs, but this time BT are using (in each datacentre): 36 HP ProLiant BL45 blade servers for hosting virtual machines, each with 32GB RAM; 3 HP ProLiant DL385 servers for management of the infrastructure (VirtualCenter, Microsoft SQL Server and PlateSpin PowerConvert); 4 fibre channel switches; and an HP XP12000 SAN – that’s just 10 racks of equipment per datacentre.

This consolidation will eventually allow BT to:

  • Reduce 375 racks of equipment to 30.
  • Reduce power consumption from approximately 700W per server to around 47W, saving approximately £750,000 a year (see the rough sanity check after this list).
  • Consolidate 4509 network connections (3 per server) to 504.
  • Remove all direct attached storage.
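
Those power figures stand up to a rough sanity check. The electricity tariff below is my own assumption (the presentation quoted no rate), but the arithmetic lands in the right ballpark for the £750,000 claim:

```python
# Rough sanity check of BT's quoted power saving. The per-server wattage
# comes from the presentation; the tariff is an assumed 2006-era rate.

servers = 1503                          # Wintel servers being consolidated
watts_before, watts_after = 700, 47     # per-server draw quoted in the talk

saving_kw = servers * (watts_before - watts_after) / 1000   # ~981 kW
saving_kwh_per_year = saving_kw * 24 * 365                  # ~8.6m kWh

tariff_gbp_per_kwh = 0.09               # assumed commercial electricity rate
print(f"~GBP {saving_kwh_per_year * tariff_gbp_per_kwh:,.0f} per year")
# => roughly GBP 770,000 - in the same ballpark as the GBP 750,000 claimed
```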

At the time of writing, the project has recovered 419 servers, 792 network ports and 58 racks, used 12TB of SAN storage, saved 250kW of power and 800,000 BTU/hour of heat, and removed 75 tonnes of redundant equipment – that’s already a massive financial saving and a management efficiency, and the reduction in heat and power is good for the environment too!

Michael Crader also outlined what doesn’t work for virtualisation (on ESX Server 2.5.x):

  • Servers which require more than 4 CPUs.
  • Servers with external devices attached.
  • Heavily loaded Citrix servers.

His main points for others considering similar projects were that:

  • Providing the infrastructure is in place, migration is straightforward (BT are currently hitting 50-60 migrations per week) with the main activities involving auditing, workload management, downtime and managing customer expectations.
  • The virtual infrastructure is truly providing capacity on demand with the ability to deploy new virtual machines in 11-15 minutes.

In another presentation, Peter West, one of Nationwide Building Society’s Enterprise Architects, outlined Nationwide’s server virtualisation strategy. Like many organisations, Nationwide is suffering from physical server sprawl and increased heat per unit of rackspace. As a major user of Microsoft software, Nationwide had previously begun to use Microsoft Virtual Server; however they moved to VMware ESX Server in order to benefit from the product’s robustness, scalability and manageability – and reduced total cost of ownership (TCO) by 35% in doing so (Virtual Server was cheaper to buy – it’s now free – but it cost more to implement and manage).

Nationwide’s approach to virtualisation is phased; however by 2010 they plan to have virtualised 85-90% of the Intel server estate (production, quality assurance/test, and disaster recovery). Currently, they have 3 farms of 10 servers, connected to EMC Clariion storage and are achieving 17-18:1 server consolidation ratios on 4-way servers with data replication between sites.

Peter West explained that Nationwide’s server consolidation approach is more than just technology – it involves automation, configuration and asset management, capacity on demand and service level management – and a scheme known internally as Automated Lights-out Virtualisation Environment (ALiVE) is being implemented, structured around a number of layers:

  • Policy-based automation
  • Security services
  • Resource management services
  • Infrastructure management services
  • Virtualisation services
  • Platforms
  • IP networks

With ALiVE, Nationwide plans to take 700 development virtual servers, 70 physical development servers, a number of virtual machines on a VMware GSX Server platform and 500 physical machines to VMware Infrastructure 3 – addressing issues with a lack of standard builds, backup/recovery, limited support and a lack of SLAs, along with growing demand from development projects – to allow self-service provisioning of virtual machines via a portal.

At the risk of sounding like an extension of the VMware marketing department, hopefully, these two examples of real-life virtualisation projects have helped to illustrate some of the advantages of the technology, as well as some of the issues that need to be overcome in server virtualisation projects.

VMware Beyond Boundaries virtualisation roadshow

It’s conference season and I’ll be missing the European Microsoft TechEd IT Forum this year for two reasons: firstly, it clashes with my son’s birthday; secondly, it’s in Barcelona, and last time I attended a TechEd there I found it to be less well organised than conferences in other European cities (e.g. Amsterdam). There’s also a third reason – it’s unlikely I’ll get budget approval to attend – but the reasons I’ve already given mean I won’t even be trying!

Given my current work commitments, one conference for which I should find it reasonably easy to justify is VMware’s VMworld; however, nice though a trip to Los Angeles might be (actually, there are other American cities that I’d rather visit), I’m expecting to have gained a new son or daughter a few weeks previously and leaving my wife home alone with a toddler and a newborn baby for a week might not be considered a very good idea. With that in mind I was glad to attend the VMware Beyond Boundaries virtualisation roadshow today at London’s Excel conference centre – VMware’s first UK symposium with 500 attendees and 27 sponsors – a sort of mini-VMworld.

Whilst I was a little annoyed to have arrived in time for the first session at 09:30 only to find VMware apparently in no hurry to kick off the proceedings, it was a worthwhile day, with presentations on trends in virtualisation and increasing efficiency through virtualisation; live demos of VMware Infrastructure 3; panel discussions on realising business benefits from a virtualisation strategy and recognising when and how to virtualise; real-life virtualisation experiences from BT and Nationwide; and a trade show with opportunities to meet with a number of hardware vendors and ISVs.

I’ll post some more about the most interesting sessions, but what follows is a summary of the key messages from the event.

One feature in the introduction session was a video with a bunch of children describing what they thought virtualisation might mean. Two of the quotes that I found particularly interesting were “virtual kind of means to me that you put on a helmet and then you’re in a different world” and the idea that I might “use it to get money and do [my] homework”. Actually, neither of those quotes are too far from the truth.

Taking the first quote, virtualisation is a different world – it’s a paradigm shift from the distributed server operations model that we’re used to in the “Wintel” space – maybe not so radical for those from a mid-range or mainframe server background, but a new style of operations for many support teams. As for the second quote – it is possible to save money through server consolidation, which leads to savings in hardware expenditure as well as reduced power and heat requirements (one CPU at a higher utilisation uses less power than several lightly-loaded units) – and consolidation (through virtualisation) also allows organisations to unlock the potential in underutilised servers and get their “homework” done.

Indeed, according to IBM’s Tikiri Wanduragala, server consolidation is the driver behind most virtualisation projects as organisations try to get more out of their hardware investment, making the most of computing “horsepower” and looking at metrics such as servers per square inch or servers per operator. Realising cost savings is the justification for the consolidation exercise and virtualisation is the enabling technology but, as IDC’s Thomas Meyer commented, he doubted that a conference room would have been filled had the event been billed as a server consolidation event rather than a server virtualisation one. Wanduragala highlighted other benefits too – virtualisation is introducing standardisation by the back door as organisations fight to minimise differences between servers, automate operations and ultimately reduce cost.

Interestingly, for a spokesman from a company whose current marketing message seems to be all about high performance and which is due to launch a (faster) new chip for 4-way servers tomorrow, Intel’s Richard Curran said that performance per watt is not the only issue here – organisations also want reliability, and additional features and functionality (e.g. the ability to shut down parts of a server that are not in use) – whilst Dell’s Jeffrey Wartgow pointed out that virtualisation is more than just a product – it’s a new architecture that impacts on many areas of business. It also brings new problems – like virtual server proliferation – and so new IT policy requirements.

Somewhat predictably for an organisation that has been around since the early days of computing, IBM’s response is that the reactive style of managing management console alerts for PC servers has to be replaced with predictive systems management, more akin to that used in managing mid-range and mainframe servers.

Of course, not every organisation is ready to embrace virtualisation (although IDC claim that 2006 is the year of virtualisation, with 2.1 million virtual servers being deployed, compared with 7 million physical server shipments; and 46% of Global 2000 companies are deploying virtualisation technologies [Forrester Research]). Intel cited the following issues to be resolved in pushing virtualisation projects through:

  • Internal politics, with departments claiming IT real estate (“my application”, “my server”, “my storage”).
  • Skills – getting up to speed with new technologies and new methods (e.g. operations teams that are wedded to spreadsheets of server configuration information find it difficult to cope with dynamically shifting resources as virtual servers are automatically moved to alternative hosts).
  • Justifying indirect cost savings and expressing a total cost of ownership figure.

IDC’s figures back this up with the most significant hurdles in their research being:

  • Institutional resistance (25%).
  • Cost (17%).
  • Lack of technical experience (16%).

The internal politics/institutional resistance issue is one of the most significant barriers to virtualisation deployment. As Tikiri Wanduragala highlighted, often the line-of-business units hold their own budgets and want to see “their machine” – the answer is to generate new business charging models that reflect the reduced costs of operating a virtual infrastructure. Intel see this as being reflected in the boardroom, where IT Directors are viewed with suspicion as they ask for infrastructure budgets – the answer is the delivery of IT as a service, and virtualisation is one shared-service infrastructure that can support that model: as Thomas Meyer tagged it, a service-oriented infrastructure to work hand in hand with a service-oriented architecture.

For many organisations, virtualisation is fast becoming the preferred approach for server deployment, with physical servers being reserved for applications and hardware that are not suited to a virtual platform. On the desktop, virtualisation is taking off more slowly as users have an emotional attachment to their device. HP’s Iain Stephen noted that there are two main technologies to assist with regaining control of the desktop – the first is client blades (although he did concede that the technology probably hit the market two years too late) and the second is virtual desktops. Fujitsu-Siemens Computers’ Christophe Lindemann added that simply taking the desktop off the desk with client blades is not enough – the same management issues remain – and that although many organisations have implemented thin client (terminal server) technology, that too has its limitations.

Microsoft’s dynamic systems initiative, HP’s adaptive infrastructure, Dell’s scalable enterprise, IBM’s autonomic computing, Fujitsu-Siemens Computers’ dynamic data centre and IDC’s dynamic IT are all effectively about the same thing – as HP put it, “[to] deliver an integrated architecture that helps you move from high-cost IT islands to lower-cost shared IT assets”. No longer confined to test and development environments, virtualisation is a key enabler for the vision of providing a shared-service infrastructure. According to IDC, 50% of virtual machines are running production-level applications, including business-critical workloads; and 45% of all planned deployments are seen as virtualisation candidates. It’s not just Windows servers that are being virtualised – Linux and Unix servers can be virtualised too – and ISV support is improving – VMware’s Raghu Raghuram claims that 70% of the top ISVs support software deployed in a virtual environment.

Looking at server trends, the majority of servers (62.4%) have a rack-mount form factor, with a significant proportion (26.7%) being shipped as blades and pedestal/tower servers very much in the minority [source: IDC]. Most servers procured for virtualisation are 2- or 4-way boxes [source: IDC] (although not specifically mentioned, it should also be noted that the VMware licensing model, which works on the basis of pairs of physical processors, lends itself well to dual-core and the forthcoming multi-core processors – a 2-socket server full of dual-core chips needs only a single 2-CPU licence yet provides four cores of capacity).

Virtualisation changes business models – quoting Meyer “it is a disruptive technology in a positive sense” – requiring a new approach to capacity planning and a rethink around the allocation of infrastructure and other IT costs; however it is also a great vehicle to increase operational efficiencies, passing innovation back to business units, allowing customers to meet emerging compliance rules and to meet business continuity requirements whilst increasing hardware utilisation.

Summarising the day, VMware’s Regional Director for the UK and Ireland, Chris Hammans, highlighted that virtual infrastructure is rapidly being adopted, the IT industry is supporting virtual infrastructure and the dynamic data centre is here today.