Could the virtual appliance replace traditional software distribution?

For some time now, VMware has been pushing the concept of virtual appliances as a new method of software distribution – a pre-built, pre-configured and ready-to-run software application packaged with the operating system inside a virtual machine – and the company has many pre-configured VMware virtual machines ready for download. Now Microsoft has come on board, encouraging users to download pre-configured .VHD virtual hard disk images for Virtual PC or Virtual Server.

Microsoft sees this as an opportunity for customers to quickly evaluate Microsoft and partner solutions for free in their own environment without the need for dedicated servers or complex installations. VMware’s vision is a little broader and their Virtual Appliance Marketplace holds links to hundreds of virtual appliances from software companies as well as the open source community.

It’s certainly an interesting concept. Instead of installing an application on an operating system and then configuring it to suit, I can take an existing image, pre-configured by the software developers according to their best practice, and greatly reduce the time to deploy an application. Of course, there will be issues around standard operating system configurations (many organisations will not accept an application unless it runs on their hardened operating system build) but this use of virtualisation technology has huge potential – and not just for demoware.

Application virtualisation using the Softricity SoftGrid platform

A few weeks back I had the opportunity to attend a presentation on application virtualisation using the Softricity SoftGrid platform. Server virtualisation is something I’m becoming increasingly familiar with, but application virtualisation is something else entirely and I have to say I was very impressed with what I saw. Clearly I’m not alone, as Microsoft has acquired Softricity in order to add application virtualisation and streaming to its IT management portfolio.

The basic principle of application virtualisation is that, instead of installing an application, an application runtime environment is created which can roam with a user. The application is packaged via a sequencer (similar to the process of creating an MSI application installer) and broken into feature blocks which can be delivered on demand. In this way the most-used functionality can be provided quickly, with additional functionality in another feature block only loaded as required. When I first heard of this I was concerned about the potential network bandwidth required for streaming applications; however feature blocks are cached locally and it is also possible to pre-load the cache (either from a CD or using tools such as Microsoft Systems Management Server).
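
To make the feature block idea concrete, here’s a toy sketch of my own (in Python, purely illustrative – the block names and fetch function are invented, and this is not how SoftGrid is actually implemented) of on-demand loading with a local cache:

```python
# Hypothetical sketch of SoftGrid-style feature blocks: the primary block is
# streamed first so the application can launch, secondary blocks are pulled
# (and cached locally) only when the user first touches that functionality.

class VirtualApp:
    def __init__(self, name, blocks, fetch):
        self.name = name
        self.blocks = blocks          # e.g. {"FB1": ["core.exe", ...], "FB2": [...]}
        self.fetch = fetch            # callable that streams a block from the server
        self.cache = {}               # locally cached feature blocks

    def _load(self, block_id):
        if block_id not in self.cache:          # only hit the network on a cache miss
            self.cache[block_id] = self.fetch(self.name, block_id)
        return self.cache[block_id]

    def launch(self):
        return self._load("FB1")                # most-used functionality, loaded up front

    def use_feature(self, block_id):
        return self._load(block_id)             # additional functionality, loaded on demand

    def preload(self, block_ids):
        for block_id in block_ids:              # e.g. primed from CD or an SMS package
            self._load(block_id)
```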

From a client perspective, a user’s group membership is checked at login time and appropriate application icons are delivered, either from cache, or by using the real-time streaming protocol (RTSP) to pull feature blocks from the server component. The virtual application server uses an SQL database and Active Directory to control access to applications based on group membership and this group model can be used to stage the rollout of an application (again reducing the impact on the network by avoiding situations where many users download a new version of an application at the same time).
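
Again purely as an illustration of that publishing step (a plain dictionary stands in for Active Directory here and the RTSP paths are made up), the login-time check boils down to something like this:

```python
# Simplified sketch of the login-time publishing step described above: look up
# the user's groups (a dict standing in for Active Directory) and return the
# applications - and their streaming URLs - that the user is entitled to see.

APP_GROUPS = {
    "Sales Tools": "SoftGrid-Sales",
    "Finance Suite": "SoftGrid-Finance",
}

def published_apps(user_groups, server="sgserver.example.com"):
    apps = []
    for app, required_group in APP_GROUPS.items():
        if required_group in user_groups:
            apps.append({
                "name": app,
                # RTSP URL is illustrative only; real SoftGrid paths differ
                "stream": f"rtsp://{server}/{app.replace(' ', '_')}.sft",
            })
    return apps

print(published_apps({"SoftGrid-Sales", "Domain Users"}))
```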

Not all applications are suitable for virtualisation. For example, very large applications used throughout an organisation (e.g. Microsoft Office) may be better left in the base workstation build; however the majority of applications are ideal for virtualisation. The main reason not to virtualise is where an application provides shell integration that might negatively impact another application if it is not present – for example, the ability to send an e-mail from within an application may depend on Microsoft Outlook being present and configured.

One advantage of the virtualised application approach is that the operating system is not “dirtied” – because each package includes a virtual registry and a virtual file system (which run on top of the traditional “physical” registry and file system), resolving application issues is often a case of resetting the cache. This approach also makes it possible to run multiple versions of an application side-by-side – for example testing a new version of an application alongside the existing production version. Application virtualisation also has major advantages around reducing rollout time.
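
The virtual registry/file system layering can be pictured as a simple overlay – reads fall through to the physical layer, writes stay in the virtual layer, and resetting the cache just discards the virtual layer. This toy sketch (my own, not SoftGrid code) also shows why side-by-side versions work:

```python
# Toy illustration of the layering idea: reads check the package's virtual
# registry/file system first and fall back to the physical one; writes land in
# the virtual layer, so "resetting the cache" is just discarding that layer.

class OverlayStore:
    def __init__(self, physical):
        self.physical = physical   # shared, untouched system state
        self.virtual = {}          # per-package (and per-version) private state

    def read(self, key):
        if key in self.virtual:
            return self.virtual[key]
        return self.physical.get(key)

    def write(self, key, value):
        self.virtual[key] = value  # never dirties the underlying system

    def reset(self):
        self.virtual.clear()       # the usual fix for a misbehaving package

registry = {"HKLM/Software/WidgetApp/Version": "1.0"}
v2 = OverlayStore(registry)
v2.write("HKLM/Software/WidgetApp/Version", "2.0")   # new version side-by-side with v1
print(v2.read("HKLM/Software/WidgetApp/Version"), registry["HKLM/Software/WidgetApp/Version"])
```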

The Microsoft acquisition of Softricity has so far brought a number of benefits, including the addition of the ZeroTouch web interface for self-provisioning of applications to the core product and a reduction in client pricing. There is no server pricing element, making this a very cost-effective solution – especially for Microsoft volume license customers.

Management of the solution is achieved via a web service, allowing the use of the SoftGrid Management Console or third party systems management products. SoftGrid includes features for policy-based management and application deployment as well as usage tracking and compliance.

I’ve really just skimmed the surface here but I find the concept very interesting and can’t help but feel the Microsoft acquisition will either propel this technology into the mainstream or (less likely) kill it off forever. In the meantime, there’s a lot of information available on the Softricity website.

Server virtualisation using VMware: two real-life stories

Last week, I attended the VMware Beyond Boundaries event in London – the first of a series of events intended to highlight the benefits that server virtualisation technology has to offer. The day included a number of high-quality presentations; however there were two that had particular meaning because they were given by VMware customers – no marketing fluff, just “this is what we found when we implemented VMware in our organisations”. Those organisations weren’t small either – BT (a leading provider of communications solutions serving customers throughout the world) and Nationwide Building Society (the UK’s largest building society – a member-owned financial services provider with mutual status).

Michael Crader, BT’s Head of Development and Test Infrastructure/Head of Windows Migration talked about the BT datacentre server consolidation and virtualisation strategy. This was a particularly interesting presentation because it showed just how dramatic the savings could be from implementing the technology.

BT’s initial virtualisation project was concerned with test and development facilities and used HP ProLiant DL585 servers with 16-48GB RAM, attached to NetApp primary (24TB) and nearline (32TB) storage, with each virtual machine on its own LUN. SnapMirror technology allows a virtual machine to be backed up in 2-3 seconds, which enabled the removal of two roles whose sole responsibility was loading tapes (backups of the test infrastructure previously took 96 hours).

The virtualisation of the test and development facilities was so successful that BT moved on to file and print, and then to production sites, where BT are part way through consolidating 1503 WIntel servers in 5 datacentres into three virtual infrastructures, aiming for:

  • Clearance of rack space (15:1 server consolidation ratio).
  • Reduction in power/heat requirements.
  • Virtualised servers and storage.
  • Rapid deployment.
  • True capacity on demand.

The supporting hardware is still from HP, using AMD Opteron CPUs but this time BT are using (in each datacentre) 36 HP ProLiant BL45 blade servers for hosting virtual machines, each with 32GB RAM, 3 HP ProLiant DL385 servers for management of the infrastructure (VirtualCenter, Microsoft SQL Server and PlateSpin PowerConvert), 4 fibre channel switches and an HP XP12000 SAN – that’s just 10 racks of equipment per datacentre.

This consolidation will eventually allow BT to:

  • Reduce 375 racks of equipment to 30.
  • Reduce power consumption from approximately 700W per server to around 47W, saving approximately £750,000 a year.
  • Consolidate 4509 network connections (3 per server) to 504.
  • Remove all direct attached storage.

At the time of writing, the project has recovered 419 servers, 792 network ports and 58 racks, used 12TB of SAN storage, saved 250kW of power and 800,000 BTU/hour of heat, and removed 75 tonnes of redundant equipment – that’s already massive financial savings and management efficiencies, and the reduction in heat and power is good for the environment too!
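
As a back-of-the-envelope check on the full consolidation figures above (the server count and wattages are from the presentation; the electricity price is simply what the stated £750,000 saving implies, and cooling overhead is ignored):

```python
# Sanity check on the figures quoted in the talk. Only the implied electricity
# price is derived here; everything else comes from the numbers above.

servers        = 1503          # WIntel servers being consolidated
watts_before   = 700           # approx. draw per physical server
watts_after    = 47            # approx. share of a virtual host per server once virtualised
hours_per_year = 24 * 365

saved_kw  = servers * (watts_before - watts_after) / 1000
saved_kwh = saved_kw * hours_per_year
stated_saving_gbp = 750_000

blades = 3 * 36                # 36 HP ProLiant BL45 hosts in each of three datacentres

print(f"power saved: {saved_kw:,.0f} kW")                              # ~981 kW
print(f"energy saved: {saved_kwh / 1e6:,.1f} GWh per year")            # ~8.6 GWh
print(f"implied price: £{stated_saving_gbp / saved_kwh:.3f} per kWh")  # ~£0.087
print(f"consolidation ratio: {servers / blades:.0f}:1")                # ~14:1, close to the 15:1 target
```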

Michael Crader also outlined what doesn’t work for virtualisation (on ESX Server 2.5.x):

  • Servers which require more than 4 CPUs.
  • Servers with external devices attached.
  • Heavily loaded Citrix servers.

His main points for others considering similar projects were that:

  • Providing the infrastructure is in place, migration is straightforward (BT are currently hitting 50-60 migrations per week) with the main activities involving auditing, workload management, downtime and managing customer expectations.
  • The virtual infrastructure is truly providing capacity on demand with the ability to deploy new virtual machines in 11-15 minutes.

In another presentation, Peter West, one of Nationwide Building Society’s Enterprise Architects, outlined Nationwide’s server virtualisation strategy. Like many organisations, Nationwide is suffering from physical server sprawl and increased heat per unit of rackspace. As a major user of Microsoft software, Nationwide had previously begun to use Microsoft Virtual Server; however they moved to VMware ESX Server in order to benefit from the product’s robustness, scalability and manageability – and reduced total cost of ownership (TCO) by 35% in doing so (Virtual Server was cheaper to buy – it’s now free – but it cost more to implement and manage).

Nationwide’s approach to virtualisation is phased; however by 2010 they plan to have virtualised 85-90% of the Intel server estate (production, quality assurance/test, and disaster recovery). Currently, they have 3 farms of 10 servers, connected to EMC Clariion storage and are achieving 17-18:1 server consolidation ratios on 4-way servers with data replication between sites.

Peter West explained that Nationwide’s server consolidation approach is more than just technology – it involves automation, configuration and asset management, capacity on demand and service level management – and a scheme known internally as Automated Lights-out Virtualisation Environment (ALiVE) is being implemented, structured around a number of layers:

  • Policy-based automation
  • Security services
  • Resource management services
  • Infrastructure management services
  • Virtualisation services
  • Platforms
  • IP networks

With ALiVE, Nationwide plans to take 700 development virtual servers, 70 physical development servers, a number of virtual machines on a VMware GSX Server platform and 500 physical machines to VMware Infrastructure 3. This addresses issues around a lack of standard builds, backup/recovery, limited support and a lack of SLAs, along with growing demand from development projects, and will allow self-service provisioning of virtual machines via a portal.

At the risk of sounding like an extension of the VMware marketing department, hopefully, these two examples of real-life virtualisation projects have helped to illustrate some of the advantages of the technology, as well as some of the issues that need to be overcome in server virtualisation projects.

VMware Beyond Boundaries virtualisation roadshow

It’s conference season and I’ll be missing the European Microsoft TechEd IT Forum this year for two reasons: firstly, it clashes with my son’s birthday; secondly, it’s in Barcelona, and last time I attended a TechEd there I found it to be less-well organised than conferences in other European cities (e.g. Amsterdam). There’s also a third reason – it’s unlikely I’ll get budget approval to attend – but the reasons I’ve already given mean I won’t even be trying!

Given my current work commitments, one conference for which I should find it reasonably easy to justify is VMware’s VMworld; however, nice though a trip to Los Angeles might be (actually, there are other American cities that I’d rather visit), I’m expecting to have gained a new son or daughter a few weeks previously and leaving my wife home alone with a toddler and a newborn baby for a week might not be considered a very good idea. With that in mind I was glad to attend the VMware Beyond Boundaries virtualisation roadshow today at London’s Excel conference centre – VMware’s first UK symposium with 500 attendees and 27 sponsors – a sort of mini-VMworld.

Whilst I was a little annoyed to have arrived in time for the first session at 09:30 only to find VMware apparently in no hurry to kick off the proceedings, it was a worthwhile day, with presentations on trends in virtualisation and increasing efficiency through virtualisation; live demos of VMware Infrastructure 3; panel discussions on realising business benefits from a virtualisation strategy and recognising when and how to virtualise; real-life virtualisation experiences from BT and Nationwide; and a trade show with opportunities to meet with a number of hardware vendors and ISVs.

I’ll post some more about the most interesting sessions, but what follows is a summary of the key messages from the event.

One feature in the introduction session was a video with a bunch of children describing what they thought virtualisation might mean. Two of the quotes that I found particularly interesting were “virtual kind of means to me that you put on a helmet and then you’re in a different world” and the idea that I might “use it to get money and do [my] homework”. Actually, neither of those quotes are too far from the truth.

Taking the first quote, virtualisation is a different world – it’s a paradigm shift from the distributed server operations model that we’re used to in the “WIntel” space – maybe not so radical for those from a mid-range or mainframe server background, but a new style of operations for many support teams. As for the second quote – it is possible to save money through server consolidation, which leads to savings in hardware expenditure as well as reduced power and heat requirements (one CPU at a higher utilisation uses less power than several lightly-loaded units), and consolidation (through virtualisation) also allows organisations to unlock the potential in underutilised servers and get their “homework” done.
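
That claim about utilisation and power is easy to sanity-check with an entirely hypothetical linear power model (the wattages below are invented for illustration only):

```python
# Entirely hypothetical linear power model to illustrate the point above: a
# server draws a large fixed amount at idle plus a smaller amount that scales
# with utilisation, so consolidating load onto fewer hosts wins.

def server_watts(utilisation, idle_w=200, max_w=300):
    return idle_w + (max_w - idle_w) * utilisation

five_lightly_loaded = 5 * server_watts(0.10)   # five hosts at 10% utilisation each
one_consolidated    = 1 * server_watts(0.50)   # the same work on one busier host

print(five_lightly_loaded, one_consolidated)   # 1050.0 W vs 250.0 W
```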

Indeed, according to IBM’s Tikiri Wanduragala, server consolidation is the driver behind most virtualisation projects as organisations try to get more out of their hardware investment, making the most of computing “horsepower” and looking at metrics such as servers per square inch or servers per operator. Realising cost savings is the justification for the consolidation exercise and virtualisation is the enabling technology, but as IDC’s Thomas Meyer commented, he doubted that a conference room would have been filled had the event been billed as a server consolidation event rather than a server virtualisation one. Wanduragala highlighted other benefits too – virtualisation is introducing standardisation by the back door as organisations fight to minimise differences between servers, automate operations and ultimately reduce cost.

Interestingly, for a spokesman from a company whose current marketing message seems to be all about high performance and which is due to launch a (faster) new chip for 4-way servers tomorrow, Intel’s Richard Curran said that performance per Watt is not the only issue here – organisations also want reliability, and additional features and functionality (e.g. the ability to shut down parts of a server that are not in use) – whilst Dell’s Jeffrey Wartgow pointed out that virtualisation is more than just a product – it’s a new architecture that impacts on many areas of business. It also brings new problems – like virtual server proliferation – and so new IT policy requirements.

Somewhat predictably for an organisation that has been around since the early days of computing, IBM’s response is that the reactive style of managing management console alerts for PC servers has to be replaced with predictive systems management, more akin to that used in managing mid-range and mainframe servers.

Of course, not every organisation is ready to embrace virtualisation (although IDC claim that 2006 is the year of virtualisation, with 2.1 million virtual servers being deployed compared with 7 million physical server shipments, and 46% of Global 2000 companies deploying virtualisation technologies [Forrester Research]). Intel cited the following issues to be resolved in pushing virtualisation projects through:

  • Internal politics, with departments claiming IT real estate (“my application”, “my server”, “my storage”).
  • Skills – getting up to speed with new technologies and new methods (e.g. operations teams that are wedded to spreadsheets of server configuration information find it difficult to cope with dynamically shifting resources as virtual servers are automatically moved to alternative hosts).
  • Justifying indirect cost savings and expressing a total cost of ownership figure.

IDC’s figures back this up with the most significant hurdles in their research being:

  • Institutional resistance (25%).
  • Cost (17%).
  • Lack of technical experience (16%).

The internal politics/institutional resistance issue is one of the most significant barriers to virtualisation deployment. As Tikiri Wanduragala highlighted, the line-of-business units often hold their own budgets and want to see “their machine” – the answer is to generate new business charging models that reflect the reduced costs of operating a virtual infrastructure. Intel see this as being reflected in the boardroom, where IT Directors are viewed with suspicion as they ask for infrastructure budgets – the answer here is the delivery of IT as a service, and virtualisation provides a shared-service infrastructure that can support that model; as Thomas Meyer tagged it, a service-oriented infrastructure to work hand in hand with a service-oriented architecture.

For many organisations, virtualisation is fast becoming the preferred approach for server deployment, with physical servers being reserved for applications and hardware that are not suited to a virtual platform. On the desktop, virtualisation is taking off more slowly as users have an emotional attachment to their device. HP’s Iain Stephen noted that there are two main technologies to assist with regaining control of the desktop – the first is client blades (although he did concede that the technology probably hit the market two years too late) and the second is virtual desktops. Fujitsu-Siemens Computers’ Christophe Lindemann added that simply taking the desktop off the desk with client blades is not enough – the same management issues remain – and that although many organisations have implemented thin client (terminal server) technology, that too has its limitations.

Microsoft’s dynamic systems initiative, HP’s adaptive infrastructure, Dell’s scalable enterprise, IBM’s autonomic computing, Fujitsu-Siemens Computers’ dynamic data centre and IDC’s dynamic IT are all effectively about the same thing – as HP put it, “[to] deliver an integrated architecture that helps you move from high-cost IT islands to lower-cost shared IT assets”. No longer confined to test and development environments, virtualisation is a key enabler for the vision of providing a shared-service infrastructure. According to IDC, 50% of virtual machines are running production-level applications, including business-critical workloads; and 45% of all planned deployments are seen as virtualisation candidates. It’s not just Windows servers that are being virtualised – Linux and Unix servers can be virtualised too – and ISV support is improving – VMware’s Raghu Raghuram claims that 70% of the top ISVs support software deployed in a virtual environment.

Looking at server trends, the majority of servers (62.4%) have a rack-mount form factor, with a significant proportion (26.7%) being shipped as blades and pedestal/tower servers very much in the minority [source: IDC]. Most servers procured for virtualisation are 2- or 4-way boxes [source: IDC] (although not specifically mentioned, it should also be noted that the VMware licensing model, which works on the basis of pairs of physical processors, lends itself well to dual-core and the forthcoming multi-core processors).
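
To illustrate why (with made-up ratios – the VM-per-core density and both example configurations below are purely hypothetical), a licence unit of one pair of physical processors means that more cores per socket translate directly into more virtual machines per licence:

```python
# Illustrative arithmetic only: because the licence unit is a pair of physical
# sockets, doubling the cores per socket roughly doubles the virtual machines
# you can host per licence without buying any more licences.

def licences_needed(sockets):
    return -(-sockets // 2)            # one licence per pair of physical CPUs (ceiling)

def vms_per_licence(sockets, cores_per_socket, vms_per_core=4):
    total_vms = sockets * cores_per_socket * vms_per_core
    return total_vms / licences_needed(sockets)

print(vms_per_licence(4, 1))   # single-core 4-way:  8 VMs per licence
print(vms_per_licence(4, 2))   # dual-core 4-way:   16 VMs per licence
```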

Virtualisation changes business models – quoting Meyer “it is a disruptive technology in a positive sense” – requiring a new approach to capacity planning and a rethink around the allocation of infrastructure and other IT costs; however it is also a great vehicle to increase operational efficiencies, passing innovation back to business units, allowing customers to meet emerging compliance rules and to meet business continuity requirements whilst increasing hardware utilisation.

Summarising the day, VMware’s Regional Director for the UK and Ireland, Chris Hammans, highlighted that virtual infrastructure is rapidly being adopted, the IT industry is supporting virtual infrastructure and the dynamic data centre is here today.

Converting from physical to virtual machines

A few days back, I received an e-mail from someone who was trying to convert a Windows 2000 physical server to a virtual machine (VM) and had read some of the posts on this blog. He commented that today’s virtualisation software seemed to be much more complicated than the virtualisation he remembered from his mainframe days but, whilst my mainframe experience is pretty limited (one week’s work experience at the local hospital and a year compiling support statistics/coding a call stack analysis tool at ICL‘s VME System Support Centre in Manchester), I’d have to say that my understanding of the mainframe approach is probably more comparable to the concept of containers and zones in Sun Solaris rather than the virtualisation products from VMware and Microsoft.

For anyone who is trying to get a physical machine across into a VM, I’ve previously written posts about three ways to do this (an overview of Microsoft’s Virtual Server migration toolkit, my experience of using PlateSpin PowerConvert and an article I found about using disk imaging software to convert a machine); however Michael Pietroforte’s post about six ways to convert physical to virtual is probably worth a read.

P2Ving my notebook PC: part 2

Last week I wrote about how I’d lost most of my bank holiday weekend trying to perform a physical to virtual (P2V) conversion of my corporate notebook PC. Well, I’m pleased to say that I’ve resolved the remaining issues and I’m very happy with the results.

The last remaining problem after I’d used PlateSpin PowerConvert to carry out the conversion to Microsoft Virtual Server 2005 R2 was getting the Cisco Systems VPN Client to work. I spent two days trying various settings, removing and reinstalling the VPN software (and the Zone Labs Integrity Client that my corporate VPN connection also requires) but was getting nowhere.

With or without a VPN solution, my end goal was a VMware virtual machine, as Microsoft Virtual Server is intended as a remote/server virtualisation solution, and Microsoft Virtual PC only runs on Windows/Macintosh platforms (I needed a cross-platform solution as I intend to run my virtual machine as a guest on both Windows and Linux). That’s where VMware Server beta 3 came in useful, as I used its virtual machine importer feature to import the Virtual Server configuration before installing the VMware Tools and copying the whole virtual machine elsewhere to run it using the VMware Player.

If this sounds complicated, then there are some good reasons for taking the route from physical hardware, via Microsoft Virtual Server and VMware Server, to VMware Player.

  • Firstly, PlateSpin PowerConvert didn’t recognise my VMware Server beta 3 server and I don’t have a licensed copy of VMware Workstation/GSX/ESX (except an old VMware Workstation 4 licence) so Microsoft Virtual Server 2005 R2 was my only viable route.
  • Secondly, whilst VMware claim that their Player supports Microsoft virtual machines, my experience is that the import fails.
  • Finally, VMware Player does not include VMware Tools. Although VMware Tools can be installed to a virtual machine within VMware Player, the use of VMware Server to carry out the import provided an ideal opportunity to install the tools.

Incidentally, VMware Server’s virtual machine importer was very impressive, giving me the option to use the existing Virtual Server disks or to copy them to VMware Server format (I chose the latter) as well as options for legacy or new VMware formats. It can also import from certain disk image files and that may well be a method of avoiding the use of the software that I used to carry out the P2V operation.

Once I’d rebuilt my notebook with a different base operating system (I’m using Windows Vista beta 2 at the moment), it was simply a case of installing the VMware Player. Although I don’t recall any errors on installation, I did need to manually configure the VMware Bridge Protocol on my Ethernet connection as VMnet 0 (and reboot), before VMware Player would allow the guest to connect to the network.
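
For what it’s worth, the guest side of that is just a setting in the virtual machine’s .vmx file; the sketch below (the path is illustrative, and the host-side step of binding the VMware Bridge Protocol to the physical NIC as VMnet 0 still has to be done in Windows networking, as described above) simply makes sure the first virtual NIC is set to bridged:

```python
# Rough sketch: ensure the guest's .vmx asks for bridged networking by setting
# ethernet0.connectionType. Back up the .vmx before editing it.

from pathlib import Path

def set_bridged(vmx_path):
    key = "ethernet0.connectionType"
    lines = Path(vmx_path).read_text().splitlines()
    out, seen = [], False
    for line in lines:
        if line.strip().startswith(key):
            out.append(f'{key} = "bridged"')   # replace whatever was there
            seen = True
        else:
            out.append(line)
    if not seen:
        out.append(f'{key} = "bridged"')       # add the key if it was missing
    Path(vmx_path).write_text("\n".join(out) + "\n")

# set_bridged(r"D:\VMs\corporate-xp\corporate-xp.vmx")   # path is illustrative
```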

Plug and play dealt with the virtual hardware changes along the way and the VPN connection worked first time (without any obvious changes) – I can only assume that VMware’s bridged networking works in a different way to the Virtual Server networking that was causing my VPN client to fail in a Virtual Server virtual machine.

After spending most of today working with my Windows XP virtual corporate client running as a guest under Windows Vista, the whole project seems to have been a success, although I’m still planning on dual-booting Windows Vista with Linux (keeping the virtual machine on a partition accessible from both operating systems) so there may well be a part three to this story yet.

Using PlateSpin PowerConvert to P2V my notebook PC

With Windows Vista and Office 2007 now at beta 2, I figured that it’s time to test them out on a decent PC. I’d also like to dual-boot with a Linux distro as the only way to really get to know an operating system is to use it on a daily basis but the problem is that I’m running out of hardware. Most of my PCs are around 3 years old, with 1.5GHz Pentium 4 CPUs and between 256MB and 512MB of memory. I could buy some more memory for the older PCs, but I’m hoping to buy two new machines later this year instead. Meanwhile, the Fujitsu Siemens Lifebook S7010D that my employer has provided for my work is a 1 year-old machine with 1GB RAM – plenty for my testing (although I haven’t checked if the graphics card will support the full Aero interface).

My problem is that I can’t just wipe my hard disk and start again. The Lifebook is joined to a corporate domain and has VPN client software installed so that I can access the network from wherever I happen to be. That’s where virtualisation comes in… I thought that by performing a physical to virtual (P2V) conversion, I could run my Windows XP build inside a virtual environment on a Windows Vista or Linux host.

I’m also co-authoring my employer’s virtualisation strategy, so I called PlateSpin in Canada (because I’d missed the end of the business day in the UK) and they agreed to supply me with three evaluation licenses for their PowerConvert software. The good news is that I completed my P2V conversion. The bad news is that my experience of the product was not entirely smooth and it took a fair chunk of last week and most of my bank holiday weekend too.

The software installation was straightforward enough, detecting that there was no SQL server installation present and installing an MSDE instance. PowerConvert Server doesn’t show up as an application on the Start Menu as it is actually just a set of Microsoft .NET web services and a separate client is required to perform any operations, downloadable from http://servername/powerconvert/client.setup.exe.

Once everything was installed, I got to work on discovering my network infrastructure. PowerConvert automatically located the various domains and workgroups on the network and when I ran discover jobs it found my Microsoft Virtual Server 2005 R2 installation (but didn’t see my VMware Server beta 3 installation). It also struggled for a while with discovering server details for my Windows XP source machine (even after a reboot and with the client firewall disabled) – I never did find the cause of that particular issue (even after following PlateSpin knowledge base article 20350) but after taking the PC to work, hooking up to the corporate LAN and bringing it home again that night, everything jumped into life.

With all PCs discovered, I was ready to carry out a conversion. The basic process is as follows:

  1. Discover the source and target server details.
  2. Create a virtual machine on the target server.
  3. Boot the virtual machine into Windows PE and load the PowerConvert controller.
  4. Take control of the source server, boot this into Windows PE and load the PowerConvert controller.
  5. Copy files.
  6. Restart the target virtual machine, and finalise configuration.
  7. Tidy up.

That sounds simple enough, until considering that PowerConvert also handles the changes in the underlying hardware – something that’s not possible with simple disk duplication software.
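
In case it helps to see the flow end-to-end, here’s a generic skeleton of those seven steps (this is not PlateSpin’s API – the functions are placeholders you would have to supply – but it shows the ordering, and why a controller that can’t see the source NIC stalls the whole job):

```python
# Generic skeleton of the P2V flow listed above. Every callable is a
# placeholder; the point is the order of operations, not a real implementation.

def p2v(source, target_host, discover, create_vm, boot_winpe, copy_volumes,
        reconfigure, cleanup):
    src_info = discover(source)               # 1. audit source hardware and volumes
    vm = create_vm(target_host, src_info)     # 2. build a matching target VM
    boot_winpe(vm)                            # 3. target boots Windows PE + controller
    boot_winpe(source)                        # 4. source taken under control the same way
    copy_volumes(source, vm)                  # 5. copy each volume across
    reconfigure(vm, src_info)                 # 6. swap in virtual disk/NIC/HAL configuration
    cleanup(source, vm)                       # 7. release the source, final reboot of the VM
    return vm
```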

Everything looked good up to the point of loading the controller on my source machine, which just couldn’t connect (and didn’t seem to recognise the network). I tried various conversion job settings and made several failed attempts, including stalled jobs which refused to be aborted (once an attempt is made to abort a job, PowerConvert doesn’t check whether it actually stopped – it just refuses to allow a subsequent abort), followed by removal and reinstallation of PowerConvert as detailed in PlateSpin knowledge base article 20324 (to free up the source machine and allow another attempt at conversion). Eventually I re-read the text file supplied with the installation. It turns out that the out-of-the-box installation didn’t recognise my Broadcom NetXtreme gigabit Ethernet card (not exactly an uncommon network interface), but once the physical target take control ISO packages were updated that particular issue was resolved (as confirmed using the PlateSpin Analyzer tool – see PlateSpin knowledge base article 20478). Rather than having to apply updates manually, I’d prefer to see the installation routine check the PlateSpin website for updates and install them automatically.

It looked as if I finally had everything working and I left a conversion running overnight, but came down the next morning to see the target machine rebooting with a STOP 0x0000007B error (blue screen of death). It turns out that although I’d configured the PowerConvert job to convert my single physical hard disk with two partitions into two dynamic virtual IDE disks, it had still configured a virtual SCSI controller on the target virtual machine and, not surprisingly, that couldn’t read the IDE disks. I tried various resolutions, including rebooting the virtual machine into the Windows XP Recovery Console but, without the administrator password (I had access to an account in the Administrators group but not the Administrator account itself), I couldn’t do much. Unfortunately, the software is licenced on a per-conversion basis (although there are other options) and “PowerConvert will burn a license once the file transfer step of the job has been completed” (see PlateSpin knowledge base article 20357) so that was one of my evaluation licenses burned.

Accepting that my failed attempt was not recoverable, I aborted the job and tried again, this time converting my two physical partitions to two dynamic virtual SCSI disks. This time the job completed successfully.

I now have a working virtual corporate notebook, still joined to the domain, still with the same security identifiers and disk signatures but with a different set of underlying hardware. I still need to get my VPN client working inside the virtual environment but if I can clear that final hurdle then I’ll be ready to ditch the source machine and reach my dual-boot Vista/Linux goal.

In summary, PlateSpin PowerConvert tries to do something complex in a simple and elegant way, using modern technology (web services, the Microsoft .NET framework and Windows PE). Unfortunately, I didn’t find it to be very robust. I’m no developer but I am an experienced Windows systems administrator and infrastructure designer and this was hard work. The product may be better with VMware but I didn’t get a chance to try as it didn’t recognise my VMware Server beta 3 installation. One thing’s for sure – PowerConvert has stacks of potential – if PlateSpin can sort out the reliability issues. If not, then I might as well take a look at the VMware P2V assistant, or Microsoft’s Virtual Server migration toolkit (VSMT).

Duplicating virtual machines using SysPrep

One of the joys of virtualisation is the flexibility afforded by the ability to copy virtual machine files around the network for backup purposes or just to create a new machine (especially with Microsoft’s new Virtual Server licensing arrangements). Unfortunately, just as for “real” computers, simple file copies of Windows-based virtual machines can cause problems and are not supported (see Microsoft knowledge base article 162001).

All is not lost though, as Microsoft does support the duplication of virtual hard disks using the system preparation tool (SysPrep) and Megan Davis has written about sysprepping virtual machines on her blog. I tested it today and it works really well – basically a three-step process:

  1. Install and configure a source virtual machine as required (i.e. operating system installed, virtual machine additions installed, service packs and other updates applied), making sure it is in a workgroup (i.e. not a domain member).
  2. Locate the appropriate version of the Windows deployment tools (I used the ones from the \support\tools\deploy.cab file on a Windows Server 2003 CD) and create an answer file (C:\sysprep\sysprep.inf). Then copy the sysprep.exe and setupcl.exe deployment tools to C:\sysprep.
  3. Run SysPrep to reseal and shut down the guest operating system, then copy the virtualmachinename.vhd file to a secure location (make it read-only to prevent accidental overwrites, but also apply appropriate NTFS permissions). This file can then be duplicated at will to quickly create new virtual machines with a fully-configured operating system.

For anyone who is unfamiliar with SysPrep, check out Killan’s guide to SysPrep (which, despite claiming not to be written for corporate administrators or OEM system builders, seems like a pretty good reference to me).
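
If you find yourself doing this often, steps 2 and 3 inside the source virtual machine can be scripted. The sketch below is my own (the paths are illustrative, and the switches are the commonly documented ones for the Windows Server 2003 deployment tools – check deploy.chm before relying on them):

```python
# Minimal sketch of automating steps 2-3 inside the source VM: stage the
# deployment tools and a previously prepared sysprep.inf answer file into
# C:\sysprep, then reseal and shut the guest down ready for duplication.

import shutil
import subprocess
from pathlib import Path

SYSPREP_DIR = Path(r"C:\sysprep")
TOOLS_SRC   = Path(r"D:\deploy")        # extracted \support\tools\deploy.cab (illustrative path)

def prepare_and_reseal():
    SYSPREP_DIR.mkdir(exist_ok=True)
    for name in ("sysprep.exe", "setupcl.exe", "sysprep.inf"):
        shutil.copy2(TOOLS_SRC / name, SYSPREP_DIR / name)
    subprocess.run([str(SYSPREP_DIR / "sysprep.exe"),
                    "-quiet", "-reseal", "-forceshutdown"], check=True)
```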

Incidentally, there are major performance gains to be had by moving virtual machines onto another disk (spindle – not just partition). Unfortunately my repurposed laptop hard disks were too slow (especially on a USB 1.1 connection), so I had to go out this afternoon and buy a USB 2.0 PCI adapter along with a decent external hard disk (a Toshiba PX1223E-1G32 320GB 7200 RPM external USB 2.0 hard drive with an 8MB data buffer) – that speeded things up nicely.

John Howard’s moving to Redmond – Good luck John

Yesterday evening, I bumped into Microsoft UK’s John Howard (not to be confused with the current Australian Prime Minister). For the last year and a half, I’ve known John as an “IT Professional Evangelist”, covering a variety of Windows infrastructure topics, but anyone who’s seen him present on virtualisation will understand it is one of his main interests.

Back at the end of January, John had told me that he’s moving to Redmond to join “Corp” and take up a position as a Program Manager in the Windows Virtualisation team and whilst it wasn’t the biggest secret in the world, he did ask me to keep quiet until it was all confirmed. Well, it’s confirmed – John’s published the news on his blog, so it’s okay for me to talk about it now!

It’s a big move, selling up in the UK and moving the whole family to North-West America, so I’d like to wish John all the very best for that, to say thanks for all the help and advice – and thanks for being a friendly face around Microsoft UK.

I still can’t say who’s stepping into John’s shoes but I worked it out for myself at a recent event so it’s not that difficult. Purely by coincidence, I was idly wondering a couple of months back if there were any positions vacant in the IT professional technical evangelist team at Microsoft UK (it sounds like a great job to me), but I didn’t see the post advertised externally and I’ve not been back at Fujitsu for long so I plan to be staying put for a while – anyway, I’m an unofficial Microsoft evangelist right here!

As for John’s blog – a lot of what he’s doing will be Microsoft confidential; but he did say that, together with his colleagues Ben Armstrong, Mike Sterling and Mike Kolitz he’s hoping to start a Virtual Server team blog – along the lines of the Exchange Server team blog (you had me at EHLO) or the Internet Explorer team blog (IEBlog). I guess that will be the space to watch and catch the latest details on the development of the Virtual Server hypervisor technology.

Processor area networking

Yesterday, I was at a very interesting presentation from Fujitsu-Siemens Computers. It doesn’t really matter who the OEM was – it was the concept that grabbed me, and I’m sure IBM and HP will also be looking at this and that Dell will jump on board once it hits the mass market. That concept was processor area networking.

We’ve all got used to storage area networks (SANs) in recent years – the concept being to separate storage from servers so that a pool of storage can be provided as and when required.

Consider an e-mail server with 1500 users and 100MB mailbox limits. When designing such a system, it is necessary to separate the operating system, database, database transaction logs and message transfer queues for recoverability and performance. The database might also be split for fast recovery of VIPs’ mailboxes, but my basic need is to provide up to 150GB of storage for the database (1500 users x 100MB). Then another 110% storage capacity is required for database maintenance and all of a sudden the required disk space for the database jumps to 315GB – and that doesn’t include the operating system, database transaction logs or message transfer queues!
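
Just to make the arithmetic explicit (the 110% maintenance overhead is the rule of thumb quoted above, allowing space for an offline defragmentation copy of the database plus headroom):

```python
# The sizing arithmetic from the paragraph above, made explicit.

users         = 1500
mailbox_limit = 0.1             # GB (100 MB per user)
maintenance   = 1.10            # an extra 110% on top of the database size

database_gb    = users * mailbox_limit              # 150 GB
provisioned_gb = database_gb * (1 + maintenance)    # 315 GB

print(database_gb, provisioned_gb)                  # 150.0 GB, 315.0 GB
```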

Single instance storage might reduce this number, as would the fact that most users won’t have a full mailbox, but most designers will provide the maximum theoretical capacity “just in case” because to provision it later would involve: gaining management support for the upgrade; procuring the additional hardware; and scheduling downtime to provide the additional storage (assuming the hardware is able to physically accommodate the extra disks).

Multiply this out across an organisation and that is a lot of storage sitting around “just in case”, increasing hardware purchase and storage management costs in the process. Then consider the fact that storage hardware prices are continually dropping and it becomes apparent that the additional storage could probably have been purchased at a lower price when it was actually needed.

Using a SAN, coupled with an effective management strategy, storage can be dynamically provisioned (or even deprovisioned) on a “just in time” basis, rather than specifying every server with extra storage to cope with anticipated future requirements. No longer is 110% extra storage capacity required on the e-mail server in case the administrator needs to perform offline defragmentation – they simply ask the SAN administrator to provision that storage as required from the pool of free space (which is still required, but is smaller than the sum of all the free space on all of the separate servers across the enterprise).

Other advantages include the co-location of all mission critical data (instead of being spread around a number of diverse server systems) and the ability to manage that data effectively for disaster recovery and business continuity service provision. Experienced SAN administrators are required to manage the storage, but there are associated manpower savings elsewhere (e.g. managing the backup of a diverse set of servers, each with their own mission critical data).

A SAN is only part of what Fujitsu-Siemens Computers are calling the dynamic data centre, moving away from the traditional silos of resource capability.

Processor area networking (PAN) takes the SAN storage concept and applies it to the processing capacity provided for data centre systems.

So, taking the e-mail server example further, it is unlikely that all of an organisation’s e-mail would be placed on a single server and as the company grows (organically or by acquisition), additional capacity will be required. Traditionally, each server would be specified with spare capacity (within the finite constraints of the number of concurrent connections that can be supported) and over time, new servers would be added to handle the growth. In an ideal world, mailboxes would be spread across a farm of inexpensive servers, rapidly bringing new capacity online and moving mailboxes between servers to marry demand with supply.

Many administrators will acknowledge that servers typically only average around 20% utilisation. By removing all input/output (I/O) capabilities from the server, diskless processing units can be provided (effectively blade servers). These servers are connected to control blades which manage the processing area network, diverting I/O to the SAN or the network as appropriate.

Using such an infrastructure in a data centre, along with middleware (to provide virtualisation, automation and integration technologies) it is possible to move away from silos of resource and be completely flexible about how services are allocated to servers, responding to peaks in demand (acknowledging that there will always be requirements for separation by business criticality or security).
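
As a toy illustration of the pool-of-processing-capacity idea (the capacities, zones and placement policy below are all invented), a control blade’s job is essentially to place services onto whichever blade has spare capacity while honouring separation rules:

```python
# Toy sketch of the "pool of processing blades" idea: assign services to any
# blade with spare capacity, honouring simple separation rules (e.g. keep DMZ
# workloads away from internal ones). A real scheduler would also handle the
# case where nothing fits; this sketch simply leaves such services unplaced.

def place(services, blades):
    placement = {blade: [] for blade in blades}
    for svc in sorted(services, key=lambda s: -s["load"]):   # biggest first
        for blade, assigned in placement.items():
            used = sum(s["load"] for s in assigned)
            compatible = all(s["zone"] == svc["zone"] for s in assigned)
            if compatible and used + svc["load"] <= 1.0:
                assigned.append(svc)
                break
    return placement

services = [
    {"name": "mail-1", "load": 0.4, "zone": "internal"},
    {"name": "mail-2", "load": 0.4, "zone": "internal"},
    {"name": "web-dmz", "load": 0.3, "zone": "dmz"},
]
print(place(services, ["blade1", "blade2"]))
```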

Egenera’s BladeFrame technology is one implementation of processor area networking and, last week, Fujitsu-Siemens Computers and Egenera announced an EMEA-wide deal to integrate Egenera BladeFrame technology with Fujitsu-Siemens servers.

I get the feeling that processor area networking will be an interesting technology area to watch. With virtualisation rapidly becoming accepted as an approach for flexible server provision (and not just for test and development environments), the PAN approach is a logical extension to this and it’s only a matter of time before PANs become as common as SANs are in today’s data centres.