Cloning a Windows system disk using nothing but free software

As part of the process of replacing the hard disk in my server at home, I needed to clone the operating system between two drives. As my Windows Server installation consists of two partitions (my C: and a 100MB system reserved partition), I couldn’t use Microsoft’s disk imaging tool (imagex.exe) as it only works on single partitions (i.e. it’s not possible to image multiple partitions in a single operation).

I could have used commercial software like Symantec Ghost, but I figured there must be a legitimate, free way to do this by now – and it turns out there is. I used the open source Clonezilla utility (I considered some alternatives too, but they needed to be installed and I wanted something that would leave no trace on the system).

I had some issues at first – for some reason my machine wouldn’t boot from the CD I created – but I found the best approach was to install Clonezilla on the target disk itself.

To do this, I put the new disk in a USB HDD docking station and created a 200MB FAT partition on it. Next, I downloaded the .ZIP version of Clonezilla and copied the files to the new disk. I then ran /utils/win32/makeboot.bat to make the disk bootable (it’s important to run makeboot.bat from the new disk, not from the .ZIP file on the local system disk). The last step (which I didn’t see in the instructions and spent quite a bit of time troubleshooting) is to mark the new disk’s partition as active (using Disk Management or diskpart.exe).
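
For reference, marking the partition active with diskpart.exe looks something like this (just a sketch – the disk and partition numbers are examples, so check the output of list disk and list partition on your own system first):

    diskpart
    DISKPART> list disk              (identify the new disk in the dock)
    DISKPART> select disk 1          (example - the disk holding Clonezilla)
    DISKPART> list partition
    DISKPART> select partition 1     (the 200MB FAT partition)
    DISKPART> active                 (mark it active so the BIOS will boot from it)
    DISKPART> exit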

With Clonezilla installed on my “new” disk, I connected it to the server and booted from it, electing to load Clonezilla into RAM and overwrite it as part of the cloning process.

I then left it to run for a few minutes before removing the old disk and rebooting back into Windows Server.

(Quite why I’m still running a Windows Server at home, I’m not sure… I don’t really need an Active Directory and for DNS, DHCP and TFTP I really should switch to Linux… I guess Windows is just what I know best… it’s comfortable!)

Three gotchas to be aware of:

  • If you don’t make the Clonezilla partition active, you won’t be able to boot from it (basic, I know, but it’s not in the instructions that I followed).
  • Clonezilla clones the partitions exactly as they are (it’s a clone – there is no resizing to use additional space on the disk). It’s easy to expand the volume later but, if you’re moving to a smaller disk, you may have to shrink the existing partition(s) before cloning (see the sketch after this list).
  • The AMD64 version of Clonezilla hung at the calculating bitmap stage of the Partclone process, with a seemingly random remaining time and 0% progress. I left this for several hours (on two occasions) and it did not complete (it appeared to write the partition structure to the disk, but not to transfer any data). The “fix” seems to be to use the i686 version of Clonezilla.
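
If you do need to shrink a volume before cloning, diskpart.exe can do that too. Another sketch, with example numbers (the volume number will vary, and shrink works in megabytes):

    diskpart
    DISKPART> list volume
    DISKPART> select volume 2        (example - the volume to shrink)
    DISKPART> shrink desired=10240   (release 10240MB, i.e. 10GB)
    DISKPART> exit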

Using Windows to remove a Mac OS X EFI partition from a disk

The old hard drive from my Mac is destined to find a new role in my low-power server (hopefully dropping the power consumption even further by switching from a 3.5″ disk to a 2.5″ disk). Before that could happen, though, I needed to wipe it and clone my Windows Server installation onto it.

After hooking the drive up, I found that it had two partitions: one large non-Windows partition that was easily removed in Server Manager’s Disk Management snap-in; and one EFI partition that Disk Management didn’t want to delete.

The answer, it seems, is to dive into the command line and use diskpart.exe.

After I selected the appropriate disk, the clean command quickly removed the offending partition (along with everything else on the disk – clean wipes the whole partition table). I then initialised the disk in Disk Management, electing to make it an MBR disk (it was previously GPT).
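
For anyone repeating this, the diskpart session looks something like the following (the disk number is an example – be very sure you select the right disk, because clean is destructive):

    diskpart
    DISKPART> list disk              (identify the old Mac disk)
    DISKPART> select disk 2          (example - double-check before proceeding!)
    DISKPART> clean                  (removes all partitions, including the EFI one)
    DISKPART> convert mbr            (or initialise the disk as MBR in Disk Management)
    DISKPART> exit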

Resources from recent Windows Server User Group Live Meeting

Thanks to everyone who attended the rescheduled Live Meeting last month on Connecting on-premise applications with the Windows Azure platform (with Allan Naim and Phil Winstanley).

Unfortunately, the gremlins didn’t subside: after rescheduling the event, I was unable to get a microphone working – a bit of an issue for a facilitator (thanks to Phil for stepping in) – and the Live Meeting recording didn’t work completely either.

Nevertheless, resources from the event (slide deck, audio recording, demonstration video, readme file, and Live Meeting recordings) are now available.

For information on future Windows Server User Group events, check the WSUG blog or follow @windowsserverug on Twitter.

[A version of this post also appears on the Windows Server User Group blog]
[Updated 18 April 2011: Live Meeting recordings are now available]

Microsoft finally releases an iSCSI software target as a free download

For years, if you wanted to get hold of the Microsoft iSCSI software target (e.g. for testing Windows clusters without access to “proper” storage), you had to rely on “finding” a copy that a Microsoft employee had left lying around (it was officially only available for internal use). Then came advice on extracting it from Windows Storage Server. Now it’s finally been made available as a free download for Windows Server. Fantastic news!

Rescheduled: Connecting on-premise applications with the Windows Azure platform (Windows Server User Group)

Last week, I wrote about a Live Meeting I was running for the Windows Server User Group (WSUG), looking at using Windows Azure Connect to connect on-premise server infrastructure with Microsoft’s public cloud offering.

If you tried to attend that meeting, I’m sorry: due to some logistical difficulties outside my control, it was unable to go ahead at the advertised time and, although we e-mailed everyone who had registered, the message may not have reached you until it was too late.

I’m pleased to say that this event has now been rescheduled for the same time (19:00 – although by then we’ll be on BST not GMT) next Monday (28 March 2011).

Please accept my apologies for the short notice we gave last night, and please do register for the rescheduled meeting.

[A version of this post also appears on the Windows Server User Group blog]

Connecting on-premise applications with the Windows Azure platform (Windows Server User Group)

When Microsoft announced Windows Azure, one of my questions was “what does that mean for IT Pros?”. There’s loads of information to help developers write applications for the cloud, but what about those of us who do infrastructure: servers, networks, and other such things?

In truth, everything becomes commoditised in time and, as Quest’s Joe Baguley pointed out on Twitter a few days ago, infrastructure as a service (IaaS) will become commoditised as platform as a service (PaaS) solutions take over; there will come a time when we care about which hypervisor we are running on about as much as we care about network drivers today. That is to say, someone might care but, for most of us, we’ll be consuming commodity services and we won’t need to know about the underlying infrastructure.

So, what will there be for server admins to do? Well, that takes me back to Windows Azure (which is a PaaS solution). For some time now, I’ve been keen to learn about integrating on- and off-premise systems – for example, getting application components that are running on Windows Server working with other parts of the application in Windows Azure. To do this, Microsoft has created Windows Azure Connect – a new Windows Azure service that enables customers to set up secure, IP-level network connectivity between their Windows Azure compute services and existing, on-premise resources. This allows Windows Azure applications to leverage and integrate with existing infrastructure investments in order to ease adoption of Azure in the enterprise – and I’m really pleased that, after nearly a year of trying to set something up, the Windows Server User Group (WSUG) is running a Live Meeting on this topic (thanks to a lot of help from Phil Winstanley, ex-MVP and now native at Microsoft).

Our speaker will be Allan Naim, an Azure Architect Evangelist at Microsoft. Allan has more than 15 years of experience designing and building distributed middleware applications, including both custom and off-the-shelf Enterprise Application Integration architectures and, on the evening of 22 March 2011 (starting at 19:00 GMT), he’ll spend an hour taking us through Windows Azure Connect.

Combined with the event that Mark Parris has organised for 6 April 2011 where one of the topics is Active Directory Federation Services (AD-FS), these two WSUG sessions should give Windows Server administrators a great opportunity to learn about integrating Windows Server and Windows Azure.

Register for the Azure Connect Live Meeting now. Why not register for the AD RMS and AD FS in-person event too?

[A version of this post also appears on the Windows Server User Group blog]

Hyper-V R2 service pack 1, Dynamic Memory, RemoteFX and virtual desktops

I have to admit that I’ve tuned out a bit on the virtualisation front over the last year. It seems that some vendors are ramming VDI down our throats as the answer to everything; meanwhile others are confusing virtualisation with “the cloud”. I’m also doing less hands-on work with technology these days and I struggle to make a business case to fly over to Redmond for the MVP Summit, so I was glad when I was invited to join a call and take a look at some of the improvements Microsoft has made in Hyper-V as part of Windows Server 2008 R2 service pack 1.

Dynamic memory

There was a time when VMware criticised Microsoft for not having any Live Migration capabilities in Hyper-V but we’ve had them for a while now (since Windows Server 2008 R2).  Then there’s the whole device drivers in the hypervisor vs. drivers in the parent partition argument (I prefer hardware flexibility, even if there is the occasional bit of troubleshooting required, over a monolithic hypervisor and locked-down hardware compatibility list).  More recently the criticism has been directed at dynamic memory and I have to admit Microsoft didn’t help themselves with this either: first it was in the product, then it was out; and some evangelists and Product Managers said dynamic memory allocation was A Bad Thing:

“Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.”

In case you’re wondering, that quote is from my notes from when this feature was dropped from Hyper-V in the R2 release candidate (it had previously been demonstrated in the beta). Now that Microsoft has dynamic memory working, it’s apparently A Good Thing (Microsoft’s PR works like that – bad when Microsoft doesn’t have it, right up to the point when they do…).

To be fair, it turns out Microsoft’s dynamic memory is not the same as VMware’s – it’s all about over-subscription vs. overcommitment. Whereas VMware will overcommit memory and then de-duplicate to reclaim what it needs, Microsoft takes the approach of only providing each VM with enough memory to start up, monitoring performance and adding memory as required, and taking it back when applications are closed.

As for those consolidation ratio improvements: Michael Kleef, one of Microsoft’s Technical Program Managers in the Server and Cloud Division, has found that dynamic memory can deliver a 40% improvement in VDI density (Michael also spoke about this at TechEd Europe last year). Microsoft’s tests were conducted using the Login Virtual Session Indexer (LoginVSI) tool, which is designed to script virtual workloads and is used by many vendors to test virtualised infrastructure.

It turns out that, when implementing VDI solutions, disk I/O is the first bottleneck, memory comes next, and only after that is fixed will you hit a processor bottleneck. Instead of allocating 1GB of RAM to each Windows 7 VM, Microsoft used dynamic memory with a 512MB startup allocation (a supported configuration on Hyper-V). There’s no need to wait for an algorithm to compute where memory can be reclaimed – instead the minimum requirement is provided and additional memory is allocated on demand – and Microsoft claims that other solutions rely on weakened operating system security to get to this level of density. There’s no need to tweak the hypervisor either.

Microsoft’s tests were conducted using HP and Dell servers with 96GB of RAM (the sweet spot above which larger DIMMs are required and the infrastructure cost rises significantly). Using Dell’s reference architecture for Hyper-V R2, Microsoft managed to run the same workload on just 8 blades (instead of 12) using service pack 1 and dynamic memory, without ever exhausting server capacity or pushing response times beyond acceptable limits.

Dynamic memory uses Hyper-V/Windows’ ability to hot-add/remove memory: the system constantly monitors its virtual machines, expanding those under memory pressure (using the configured memory buffer) and treating those with excess memory as candidates for memory removal (not immediately, in case the user restarts an application). Whilst it’s particularly useful in a VDI scenario, Microsoft says it also works well with web workloads and server operating systems, delivering a 25-50% density improvement.

More Windows 7 VMs per logical CPU

Dynamic memory is just one of the new virtualisation features in Windows Server 2008 R2 service pack 1. Another is a new support limit of 12 VMs per logical processor for exclusively Windows 7 workloads (it remains at 8 for other workloads). And Windows 7 service pack 1 includes the necessary client-side components to take advantage of the server-side improvements.

RemoteFX

The other major improvement in Windows Server 2008 R2 service pack 1 is RemoteFX. This is a server-side graphics acceleration technology. Due to improvements in the Remote Desktop Protocol (RDP), now at version 7.1, Microsoft is able to provide a more efficient encode/decode pipeline, together with enhanced USB redirection including support for phones, audio, webcams, etc. – all inside an RDP session.

Most of the RemoteFX benefits apply to VDI scenarios but one part also benefits session virtualisation (previously known as Terminal Services) – that’s the RDP encode/decode pipeline which Microsoft says is a game changer.

Microsoft has always claimed that Hyper-V’s architecture makes it scalable, with no device drivers inside the hypervisor (native device drivers exist only in the parent partition) and a VMBus used for communications between virtual machines and the parent partition. Using this approach, virtual machines can now use a virtual GPU driver to provide the Direct3D or DirectX capabilities that are required for some modern applications – e.g. certain Silverlight or Internet Explorer 9 features. Using the GPU installed in the server, RemoteFX allows VMs to request content via the virtual GPU and the VMBus, render it on the physical GPU, and pass the results back to the VM.

The new RemoteFX encode/decode pipeline uses a render, capture and compress (RCC) process to render on the GPU but to encode the protocol using either the GPU, the CPU or an application-specific integrated circuit (ASIC). Using an ASIC is analogous to TCP offloading, in that there is no work required of the CPU. There’s also a decode ASIC – so clients can use RDP 7.1 in an ultra-thin client package (a solid-state ASIC) with RemoteFX decoding.

Summary

Windows 7 and Windows Server 2008 R2 service pack 1 is mostly a rollup of hotfixes, but it also delivers some major virtualisation improvements that should help Microsoft to establish itself as a credible competitor in the VDI space. Of course, the hypervisor is just one part of a complex infrastructure and Microsoft still relies on partners to provide parts of the solution – but by using products like Citrix XenDesktop as a session broker, and tools from AppSense for user state virtualisation, it’s finally possible to deliver a credible VDI solution on the Microsoft stack.

The wait is over – almost: the first service pack for Windows 7 and Windows Server 2008 R2 is ready to ship

You know that old saying with Windows: “wait for the first service pack”? Well, some might say that, in these days of continuous updates, it no longer applies (I would be one of those people). Even so, if you have been holding out for the release of the first service pack for Windows 7 and Windows Server 2008 R2, it’s nearly here – and you’ve been waiting a while (in fact, it’s been so long I could have sworn it had already shipped!)…

Today, Microsoft will announce the release to manufacturing (RTM) of Windows 7 and Windows Server 2008 R2 service pack 1 (but general availability is not until 22 February 2011). I’m told that OEMs and technology adoption program (TAP) partners will get the bits first – MSDN and TechNet subscribers will have to wait until closer to the general availability date. I’ve had no word on availability for volume license customers, so I’d assume 22 February.

As I wrote back in March 2010, there is a single service pack for both client and server (just as with Vista and Server 2008); however, the features that it unlocks are different for the two operating systems. My next post goes into some of the technical details of the improvements that are made to Hyper-V in Windows Server 2008 R2 service pack 1.

Using DHCP reserved client options for certain devices

I’ve been struggling with poor Internet connectivity for a while now – the speed is fine (any speed tests I conduct indicate a perfectly healthy 3-5Mbps on an “up to 8Mbps” ADSL line) but I frequently suffer from timeouts, only to find that a refresh a few moments later brings the page back quickly.

Suspecting a DNS issue (my core infrastructure server only has an Atom processor and is a little light on memory), I decided to bypass my local DNS server for those devices that don’t really need it because all the services they access are on the Internet (e.g. my iPad).

I wasn’t sure how to do this – all of my devices pick up their TCP/IP settings (and more) via DHCP – but then I realised that the Windows Server 2008 R2 DHCP service (and possibly earlier versions too) allows me to configure reserved client options.

I worked out which IP address my iPad was using, then converted the lease to a reservation. Once I had a reservation set for the device, I could configure the reserved client options (i.e. updating the DNS server addresses to only use my ISP servers, OpenDNS, or Google’s DNS servers).
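
The same steps can be scripted with netsh on the DHCP server – a rough sketch with example values (the scope, reservation address, MAC address and client name are all placeholders; option 006 is the DNS Servers option, shown here with OpenDNS resolver addresses):

    rem Convert the lease to a reservation (example scope, IP and MAC)
    netsh dhcp server scope 192.168.1.0 add reservedip 192.168.1.50 0011223344ff "iPad"
    rem Set the reserved client's DNS servers (option 006)
    netsh dhcp server scope 192.168.1.0 set reservedoptionvalue 192.168.1.50 006 IPADDRESS 208.67.222.222 208.67.220.220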

Unfortunately I’m still experiencing the timeouts and it may just be that my elderly Solwise ADSL modem/router needs replacing… oh well, I guess it’s time to go back to the drawing board!

Slidedecks from recent WSUG events

Thanks to everyone who attended the Windows Server User Group events last week with guest speakers Joey Snow and Dan Pearson (we hope you manage to get home soon).

For those who were interested in the slide decks, you can find links to them below:

[A version of this post also appears on the Windows Server User Group blog]