A couple of years back, I was running Windows Server 2008 on my everyday notebook PC so that I could work with Hyper-V. That wasn’t really ideal and, these days, I’m back on a client OS – Windows 7 as it happens…
Even so, I’ve been discussing the concept of a developer workstation with my friend and colleague, Garry Martin, for some time now. I say “developer”, because Garry works in our application services business, but the setup I came up with is equally valid for sysadmins who need a client-side virtualisation solution (and no, type 2 hypervisors do not cut it – I want to run on bare metal!).
I finally got Hyper-V Server running from a USB flash drive a few days before Microsoft announced that it is a supported scenario (although I still haven’t seen the OEM document that describes the supported process). That provided the base for my solution… something that anyone with suitable hardware can use to boot from USB into a development environment, without any impact on their corporate build. Since then, I’ve confirmed that the RTM version of Hyper-V Server 2008 R2 works too (my testing was on the RC).
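Broadly, it involves creating a fixed-size .VHD on the flash drive, applying the Hyper-V Server image to it and adding the boot files so that the drive boots straight into the VHD. The sketch below shows the sort of commands involved – the drive letters, sizes and paths are just examples, the flash drive itself needs an active, formatted primary partition first, and ImageX comes from the Windows AIK – so treat this as a rough outline rather than the official OEM process:

```
rem At the DISKPART> prompt: create and prepare a fixed-size VHD on the USB drive
rem (U: is the USB drive and V: the mounted VHD in this example)
diskpart
create vdisk file=u:\hyperv.vhd maximum=16000 type=fixed
select vdisk file=u:\hyperv.vhd
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=v
exit

rem Apply the Hyper-V Server image from the installation media (D: here) to the VHD
imagex /apply d:\sources\install.wim 1 v:\

rem Write the boot files to the USB drive so that it boots into the VHD
bcdboot v:\windows /s u:
```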
Next, I integrated the network card drivers for my system before starting to customise the Hyper-V Server installation. This is just the same as working with a server core installation of Windows Server, but these days sconfig.vbs makes life easier (e.g. when setting up the computer name, network, remote management, Windows updates, etc.), although it was still necessary to manually invoke the control intl.cpl and control timedate.cpl commands to convince Hyper-V that I’m in the UK, not Redmond…
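For reference, all of these are run straight from the command prompt on the Hyper-V Server console:

```
rem Menu-driven setup of computer name, network, remote management, Windows Update, etc.
sconfig

rem Regional settings (locale/keyboard) and date/time zone still have to be set by hand
control intl.cpl
control timedate.cpl
```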
Other changes that I made included:
- Using the built-in FTP client (ftp.exe) to download a selection of my favourite tools for use when working with server core/Hyper-V Server: a Windows port of the GNU wget utility; 7-Zip; and devcon.exe. In order to do this I needed to create a firewall exception using netsh advfirewall firewall add rule name="FTP client" dir=in action=allow program="c:\windows\system32\ftp.exe" enable=yes (following the guidance in Microsoft knowledge base article 947709).
- Installing PowerShell using the following commands (although I later found that I could do this with sconfig.vbs – note that it didn’t work for me using ocsetup.exe – dism.exe is a new command in Windows Server 2008 R2 and it looks pretty powerful):
  dism /online /enable-feature /featurename:NetFx2-ServerCore
  dism /online /enable-feature /featurename:MicrosoftWindowsPowerShell
- Downloading and installing the PowerShell management library for Hyper-V.
- Copying 4 files from a full Windows Server 2008 installation to allow the Microsoft Remote Desktop Connection client to run as a portable application (there’s a sketch of this below the list).
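For anyone who wants to replicate the portable Remote Desktop Connection client, I believe the files involved are mstsc.exe, mstscax.dll and their corresponding .mui language resources, although you should check the exact names and paths against your own build – the destination folder below is just an example:

```
rem Run on a full Windows Server 2008 installation; c:\tools\rdc is an example destination
md c:\tools\rdc\en-US
copy %windir%\system32\mstsc.exe c:\tools\rdc\
copy %windir%\system32\mstscax.dll c:\tools\rdc\
copy %windir%\system32\en-US\mstsc.exe.mui c:\tools\rdc\en-US\
copy %windir%\system32\en-US\mstscax.dll.mui c:\tools\rdc\en-US\
```

Copy the resulting folder onto the Hyper-V Server (or the flash drive) and run mstsc.exe from there.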
The real beauty of this installation is that, now I’ve got everything working, it’s encapsulated in a single virtual hard disk (.VHD) image that can be given to any of our developers or technical specialists. I can take my bootable USB thumb drive to any machine and boot up my environment but, if I used an external hard disk instead, then I could even take my virtual machine images with me – and Garry has done some research into which drives/flash memory to use, which should appear as a guest post later this week. Creating and managing VMs can be done via PowerShell (remember, this setup is mobile and is unlikely to be accessible from a management workstation) and access to those running VMs is possible from PowerShell or Remote Desktop. I could even install Firefox if I wanted (actually, I’ve not tried that on Hyper-V Server but it works on Windows Server 2008 server core).
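To give an idea of what that looks like in practice, this is the sort of thing I mean with the PowerShell management library for Hyper-V – the path and VM name are just examples, and the way the library is loaded (dot-sourced script vs. imported module) and the exact parameters vary between releases, so check its documentation:

```
# Load the library - depending on the release this is a script to dot-source
# or a module to import (the path shown here is just an example)
. C:\Tools\HyperV\hyperv.ps1

# List the VMs on this host, start one of them and check its state again
Get-VM
Start-VM "DevSharePoint"
Get-VM "DevSharePoint"

# Stop it when finished
Stop-VM "DevSharePoint"
```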
Of course, what I’d really like is for Microsoft to produce a proper client-side hypervisor, like Citrix XenClient but, until that day, this setup seems to work pretty well.
Hi Mark,
Long time lurker, first time commenter. I’m wondering why you wouldn’t opt for a full installation of Windows Server with Hyper-V for a development build? I’ve recently been piloting a full installation with our SharePoint developers and I can’t see that they would be happy without a host OS for their client applications and Hyper-V Manager. Do you load the VMs with all of the Office client applications and make them members of an internal domain or is there some other approach to the virtual environments themselves? Obviously there’s a million ways to skin this cat, I’m just struggling to get my head around the advantages of this approach. Is it just the license savings on the host? Do you run a Windows 7 VM and a Windows Server 2008 R2 VM concurrently and, if so, is this just for the Windows 7 user experience? I’m curious, because our developers don’t miss the few Windows 7 features that aren’t supported.
Cheers,
Tristan
Hi Tristan – lurkers are always welcome :-)
The reasons for this were twofold:
Exactly how it will work for the developers is yet to be worked out, but we can pre-load the VMs and then manage them with a few PowerShell scripts and access them using RDP locally on the workstation. As you say, Office apps won’t run on Hyper-V Server so they will need to be in a VM but there’s no reason why a 4GB workstation couldn’t run Hyper-V Server, a Windows Server VM with SharePoint, and a Windows XP/Vista/7 client.
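For what it’s worth, accessing a running VM locally really is as simple as pointing the (portable) Remote Desktop Connection client at the VM’s name or IP address – the address here is made up:

```
mstsc /v:192.168.137.10 /f
```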
You’re right though that Windows Server on a notebook works pretty well (I ran it for about a year after the release of Windows Server 2008) but it just wasn’t the right solution for this particular scenario. Hopefully one day we’ll get a real client-side hypervisor so that we can hotkey switch between the corporate build and a developer workstation.
Cheers, Mark
On the topic of bare metal boots, I have successfully implemented booting HP workstations to any MS Terminal Server using Ubuntu as the PXE boot.
DHCP downloads the Ubuntu hardware drivers to the workstations using LTSP.
The workstations are configured to boot from the network and have no hard disks. They receive the DHCP settings which direct them to either the Ubuntu LTSP server OR (for disk-based Windows 7 etc.) our MS WDS server, based on their DHCP reservation.
LTSP takes over but, instead of directing the workstations to the standard Linux terminal service, I have used screen scripts to direct them to specific terminal servers on our network based on their MAC address.
For our students, the screen script sends them to a student TS. Staff go to another. They can use hotkeys to switch screens to the terminal servers they are allowed to access. This way they can use Ubuntu, 2003 (legacy apps) or 2008 terminal services.
The nice part is that they only ‘see’ the terminal server itself and are not required to log into the Ubuntu server itself.
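In case it helps anyone, the screen scripts are just set in lts.conf and can be keyed on MAC address – something like the snippet below (server names and MAC addresses changed, and the exact screen script arguments vary between LTSP releases, so check the documentation for your version):

```
# lts.conf (location varies by distribution, e.g. under the tftpboot ltsp directory)
[default]
    SCREEN_07 = rdesktop student-ts.example.local

# A staff machine, identified by MAC address, gets different terminal servers
[00:11:22:33:44:55]
    SCREEN_07 = rdesktop staff-ts.example.local
    SCREEN_08 = rdesktop legacy-2003.example.local
```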
Hi Adrian, slightly off topic – but still really useful information (I heard a similar story from someone else who was remote booting Linux via DHCP and TFTP a few days ago) – you have it working now, but you still might find this post useful if you don’t want to use WDS.
By the way, I couldn’t reach your school’s website this morning, so I don’t know where you are in New Zealand but hope you and yours are unaffected by the latest earthquake on the South Island.