Management of Microsoft Hyper-V on Windows Server 2008 (Server Core)

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I recently bought a new server in order to consolidate various machines onto one host.  The intention here is to license Microsoft Hyper-V Server when it is released but, as that’s not available to me right now, I thought I’d use the latest Windows Server 2008 (Server Core) build with the Hyper-V role enabled.  Everything was looking good until I built the server, installed Hyper-V (using the ocsetup Microsoft-Hyper-V command) and realised that although I had a functioning Hyper-V server, I had no way to manage it.
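For the record, enabling the role itself is straightforward from the Server Core command prompt. A minimal sketch (the package name is the one mentioned above; oclist will confirm the exact, case-sensitive name on any given build):

    rem List the optional components available on this Server Core installation
    oclist

    rem Enable the Hyper-V role, then restart so that the hypervisor is loaded
    start /w ocsetup Microsoft-Hyper-V
    shutdown /r /t 0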

According to the release notes for the Hyper-V beta:

"To manage Hyper-V on a server core installation, you can do the following:

  • Use Hyper-V Manager to connect to the server core installation remotely from a full installation of Windows Server 2008 on which the Hyper-V role is installed.
  • Use the WMI interface."
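To give an idea of what "use the WMI interface" means in practice, the Hyper-V WMI provider lives in the root\virtualization namespace, so even wmic can interrogate it. A minimal sketch (the class and property names reflect my understanding of the beta WMI provider; servername and the credentials are placeholders):

    rem List the virtual machines (and the parent partition) with their current state
    rem EnabledState: 2 = running, 3 = stopped, 32768 = paused, 32769 = saved
    wmic /node:servername /user:domainname\username /namespace:\\root\virtualization path Msvm_ComputerSystem get ElementName,EnabledState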

I wanted to run Hyper-V on Server Core because my experience of running Virtual Server on Windows Server 2003 has been that patching the host is a major issue involving downtime on each guest virtual machine.  Similarly (unless I migrate the workload to another server) applying updates to the parent partition on Hyper-V will also result in downtime in each child partition.  By using Server Core, I reduce the size of the attack surface and therefore the likelihood of a critical patch being applicable to my server.  If I need another Windows Server 2008 machine with Hyper-V installed just to manage the box then that’s not helping me much – even a version of Hyper-V Manager to run on a Windows client machine and administer the server would be a huge step forward!

I’ve raised a feedback request highlighting this as a potential issue which restricts the scenarios in which Hyper-V will be deployed; however I’m expecting it to be closed as "by design" and therefore not holding out much hope of this getting fixed before product release.

Looking forward to Windows Server 2008: Part 2 (Setup and Configuration)

Back in October, I started to look at the next version of Microsoft’s server operating system – Windows Server 2008. In that post I concentrated on two of the new technologies – Server Core and Windows Server Virtualization (since renamed as Hyper-V).

For those who have installed previous versions of Windows Server, Windows Server 2008 setup will be totally new. Windows Vista users will be familiar with some of the concepts, but Windows Server takes things a step further with simplified configuration and role-based administration.

The new setup model boots into a lightweight environment known as Windows PE and installs the operating system from an image file in the .WIM format, which allows multiple builds to be stored in a single image. Because many of these builds share the same files, single-instance storage is used to reduce the disk space required, allowing six operating system versions to fit into one DVD image (with plenty of free space).
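For anyone curious about what is inside that image, the imagex tool from the Windows Automated Installation Kit will list the builds packed into the single file – something like this (assuming the DVD is mounted as drive D:):

    rem Show the metadata for every build stored in the installation image
    imagex /info d:\sources\install.wim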

The first stage of the setup process is about collecting information. Windows Setup now asks fewer questions and, instead of spreading them throughout the process (anybody ever left a server installation running, only to return and find it had stopped halfway through waiting for some networking details?), gathers everything at this first stage. After gathering details for the language, time and currency, keyboard, product key (which can be left and entered later), version of Windows to install, license agreement and selection of a disk on which to install the operating system (including options for disk management), Windows Setup is ready to begin the installation. Incidentally, it’s probably worth noting that SATA disk controllers have been problematic when setting up previous versions of Windows; Windows Server 2008 had no issues with the motherboard SATA controller on the Dell server that I used for my research.

After collecting information, Windows Setup moves on to the actual installation. This consists of copying files, expanding files (which took about 10 minutes on my system), installing features, installing updates, two reboots and completing installation. One final reboot brings the system up to the login screen after which Windows is installed. On my server (with a fast processor, but only 512MB of RAM) the whole process took around 20 minutes.

At this point you may be wondering where the computer name, domain name, etc. are entered. Windows Setup initially installs the server into a workgroup (called WORKGROUP) and uses an automatically generated computer name. The Administrator password must be changed at first logon, after which the desktop is prepared and loaded.

Windows Server 2003 included an HTML application called the Configure Your Server Wizard and service pack 1 added the post-setup security updates (PSSU) functionality to allow the application of updates before enabling non-essential services. In Windows Server 2008 this is enhanced with a feature called Initial Configuration Tasks. This takes an administrator through the final steps in setup (or initial tasks in configuration):

  1. Provide computer information – configure networking, change the computer name and join a domain.
  2. Update this server – enable Automatic Updates and Windows Error Reporting, download the latest updates.
  3. Customise this server – add roles or features, enable Remote Desktop, configure Windows Firewall (now enabled by default).

Roles and Features are an important change in Windows Server 2008. The enhanced role-based administration model provides a simple approach for an administrator to install Windows components and configure the firewall to allow access in a secure manner. At release candidate 1 (RC1), Windows Server 2008 includes 17 roles (e.g. Active Directory Domain Services, DHCP Server, DNS Server, Web Server, etc.) and 35 features (e.g. failover clustering, .NET Framework 3.0, Telnet Server, Windows PowerShell).
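For those who prefer the command line, Server Manager also has a command-line counterpart, servermanagercmd.exe, which works against the same lists of roles and features. A quick sketch (the Telnet-Server identifier is an example – take the exact names from the -query output on your own build):

    rem List all of the available roles and features, showing which are installed
    servermanagercmd -query

    rem Install a feature using an identifier from the -query output
    servermanagercmd -install Telnet-Server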

Finally, all of the initial configuration tasks can be saved as HTML for printing, storage, or e-mailing (e.g. to a configuration management system).

Although Windows Server 2008 includes many familiar Microsoft Management Console snap-ins, it includes a new console which is intended to act as a central point of administration – Server Manager. Broken out into Roles, Features, Diagnostics (Event Viewer, Reliability and Performance, and Device Manager), Configuration (Task Scheduler, Windows Firewall with Advanced Security, Services, WMI Control and Local Users and Groups) and Storage (Windows Server Backup and Disk Management), Server Manager provides most of the information that an administrator needs – all in one place.

It’s worth noting that Initial Configuration Tasks and Server Manager do not apply to Server Core installations. Server Manager can be used to remotely administer a computer running Server Core, or hardcore administrators can configure the server from the command line.

So that’s Windows Server 2008 setup and configuration in a nutshell. Greatly simplified. More secure. Much faster.

Of course, there are options for customising Windows images and pre-defining setup options but these are beyond the scope of this article. Further information can be found elsewhere on the ‘net – I recommend starting with the Microsoft Deployment Getting Started Guide.

Windows Server 2008 will be launched on 27 February 2008. It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then. In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

Windows Server 2008 {is coming soon}

It’s been a couple of weeks since I posted anything on this blog as I decided to spend Christmas with my family (i.e. not with the computer) and didn’t have any posts ready to publish.  I’ve also been suffering recently from a combination of writer’s block and too much work so, as a consequence, I have many things in my head but very little written down… mostly about Windows Server 2008. As it will be part of a very significant product launch in a few weeks’ time I thought it was about time I updated my previous post looking forward to Windows Server 2008 and highlighted the main advantages of Microsoft’s latest Windows Server release, although I have to confess that much of this is based on Microsoft’s marketing message (with a little of my own opinion for good measure).

A couple of months back, I watched Bill Laing, General Manager for Microsoft’s Windows Server Division, give a keynote presentation to the press on Windows Server 2008 during which he looked back over the history of Windows Server:

  • Windows NT was really a file and print server product with some application support, finally starting to gain acceptance with NT 4.0, which launched in 1996.  In those days, enterprise applications ran on large (typically Unix-based) computers or mainframes and the main competition for departmental deployments was Novell NetWare.
  • Windows 2000 marked a significant change with the introduction of Active Directory and scalability improvements.
  • Even though IIS had existed as a standalone product and then in Windows 2000, Windows Server 2003 was a turning point for Windows application hosting, with Internet Information Services (IIS) 6, 64-bit hardware support, and specialised SKUs (e.g. Windows Storage Server) as well as a web edition of Windows Server.
  • Windows Server 2003 R2 was a midpoint release with new tools for administrators.
  • Windows Server 2008 is a major new update that Microsoft is pitching as a customer focused release.

Bill Laing highlighted a number of hardware inflection points around 64-bit hardware support, multiple processor cores, power consumption and virtualisation.  In addition, he cited customer feedback as the main reason for providing role-based server management, the ability to remove the desktop experience and only run essential server services, and of course the old favourites (or "foundational attributes" as Bill Laing referred to them) – reliability, management and performance.

So, how has Microsoft responded to this famed customer feedback?  They are pitching the major improvements in Windows Server 2008 as follows:

  • your platform {reliable} – looking first at Windows Server as a server platform, Microsoft has provided a solid foundation with:
    • A new management experience.  Server Manager provides a simple point of administration for role-based deployment.  Out of the box, Windows Server 2008 has 17 optional roles (e.g. Active Directory, file, print, web, etc.) and 35 optional features (e.g. multi-path input/output, desktop experience, clustering, etc.).  Windows PowerShell is integrated within the operating system (removing what I consider to be one of the main barriers to adoption of this extremely powerful technology).  Microsoft has also made improvements in the area of power management (now enabled by default) and is working with developers to ensure that applications are written to be more efficient in their use of power (polling vs. quiescing, etc.).
    • Reliability. A new server installation option – Server Core – allows organisations to run servers with only essential Windows services and a limited user interface, supporting selected server roles for command-line (or remote) administration.  There is also a new networking stack, with improved TCP/IP performance and scalability.  Finally, failover clustering (renamed to avoid confusion with other clustering technologies) has been improved from both the implementation perspective and in the provision of support for clusters.
  • web experiences {stunning} – another major change in Windows Server 2008 is IIS 7.  IIS7 uses a modular architecture to improve application performance and aid extensibility.  There are also new IIS management and deployment tools.  This is backed up with new Windows Media services for advanced streaming and caching as well as web application services for communications and workflow integration.
  • infrastructure {virtualised} – whilst other vendors (i.e. VMware) may benefit from their experience of the x86/x64 virtualisation technologies, there is little doubt in my mind that Hyper-V represents a huge step forward for Microsoft.  Furthermore, Microsoft is pitching its virtualisation story as a multi-level approach from the point of view of:
    • Licensing – since Windows Server 2003 R2, Microsoft has adjusted its Windows Server licensing model to support virtualisation (despite claims to the contrary from competitors).  The Microsoft virtual hard disk (.VHD) format is also available with a royalty-free license.
    • Infrastructure – new virtualisation technologies (such as Hyper-V) work with hardware support from Intel and AMD to allow agile virtualisation solutions that better utilise server resources.
    • Management – System Center Virtual Machine Manager helps customers to ease the process of virtualising their infrastructure and to better utilise the available resources, providing the same management tools for both virtual and physical machines.
    • Interoperability – working with both Citrix (XenSource) and Novell (SUSE Linux), Microsoft is able to support heterogeneity across the data centre.
    • Applications – in addition to virtualising server resources, Microsoft offers SoftGrid and Windows Server 2008 Terminal Services as technologies for application and presentation virtualisation.  Windows Server 2008 Terminal Services includes both Terminal Services Gateway and Terminal Services RemoteApp support.
  • your data {secure} – finally, security.  The days of insecure Microsoft operating systems are long since gone (in fact, Windows Server has always been pretty good) but new technologies in Windows Server 2008 include the server component of the network access protection (NAP) supported by Windows Vista for health validation and compliance checking, read-only domain controllers for secure delegated branch office deployment of Active Directory, fine grained password policies, and Active Directory rights management services for protecting documents during cross-organisational collaboration.

It’s also worth noting that Windows Server 2008 represents a turning point in the shift to 64-bit computing.  Unlike with desktop operating systems, where there is a vicious circle of vendors that won’t write 64-bit device drivers until there is proven demand and users who won’t adopt 64-bit technology until there is vendor support, in the x86/x64 server world there is broad support for 64-bit technologies and Windows Server 2008 is planned to be the last release of Windows Server to ship in a 32-bit version.

As an IT consultant, I agree with Microsoft that there is increasing pressure for IT departments to become more agile and return some benefit to the business – to reduce the cost of "keeping the lights on" and increase the organisation’s ability to innovate.  Microsoft thinks that Windows Server 2008 is more than just an operating system upgrade – that it is key to optimising the infrastructure – and I have to agree.  I was critical of Windows Vista when it was launched (actually, I was critical of the way that Microsoft left its XP customers waiting for a service pack… and we’re still waiting…) but I really can see advantages in the new technologies that Windows Server 2008 brings.  Will organisations deploy Windows Server 2008 right away?  I certainly hope so – there are many compelling reasons to use the new technology but, perhaps more significantly, the release of Windows Vista over a year previously has allowed many of the issues with the common technologies to be ironed out ahead of the server product release.

Finally, what’s with the curly braces smattered throughout this post?  Heroes happen {here} is the theme for the Microsoft marketing around the Windows Server 2008, Visual Studio 2008 and SQL Server 2008 joint product launch.  For those of us on this side of the pond, a UK launch site has also been released with press and customer events planned for 27 February and IT Professional events from 19 March onwards.   I’m also hoping to work with Scotty McLeod and Austin Osuide to step up the Windows Server Team events in 2008 and of course, watch this space for more detail on some of the technologies that I mentioned in this post.  In the meantime, check out Microsoft’s Windows Server 2008 Technical Overview.

Windows Server 2008 moves a step closer to release

I don’t normally cover new product releases here but there are one or two products on the horizon that are what might be considered "significant releases".

The first of these is Windows Server 2008 and around about now, Microsoft is due to announce release candidate 1 (RC1), marking another step forward towards product release (and launch in February 2008).

Windows Server 2008 RC1 doesn’t include any major changes compared with RC0, but it coincides with Windows Vista service pack 1 (SP1) RC1, effectively bringing Windows Vista onto the same codebase as Windows Server 2008.

Also on track for launch in the same timeframe as Vista SP1 is Windows XP SP3 (whilst I’ve not seen any details yet on the ship date for this, I expect it to be made available at around about the same time as Windows Vista SP1 and Windows Server 2008).

Now is the time to start planning for Windows Server 2008

I recently attended a presentation at which the CA (formerly Computer Associates) Director of Strategic Alliances, Dan Woolley, spoke about how CA is supporting Windows Server 2008.

CA is not a company that I associate with bringing products to market quickly and I understand why companies are often reticent to invest in research and development in order to support new operating system releases.  Hardware and software vendors want to see customer demand before they invest – just look at the debacle with Windows Vista driver support! There are those that blame Microsoft for a lack of device support in Windows Vista but Microsoft worked with vendors for years (literally) before product launch and even now, a full year later, many items of hardware and software have issues because they have not been updated. That’s not Microsoft’s fault but a lack of foresight from others.

It’s true that some minor releases are probably not worth the effort, but supporting a new version of Windows, or a new version of a major server product like Exchange Server or SQL Server should be a no-brainer.  Shouldn’t it?

It’s the same with 64-bit driver support (although Microsoft is partly to blame there, as their own 64-bit products seem to lag behind their 32-bit counterparts – hopefully that will change with the "Longhorn" wave of products).

Dan Woolley’s presentation outlined the way that CA views new product releases and, whilst his view was that they are ready when the customers are ready, from my perspective it felt like a company justifying why they wait to provide new product versions.

He made the point that CIOs expect infrastructure to be extensible, stable, flexible and predictable (they abhor change as the impact of change on thousands of customers, users, and servers is difficult to understand) and how they:

  • Deliver services to facilitate corporate business (so require a stable infrastructure platform).
  • Work to maximise IT investments.

That may be true but Woolley didn’t consider the cost of running on legacy systems.  Last year I was working with a major accounting firm that was desperate to move away from NT because the extended support costs were too high (let alone the security risks of running on an operating system for which no new patches are being developed).  As recently as 2005, I worked with a major retailer whose back office systems in around 2000 outlets are running on Windows NT 3.51 and whose point of sale system depends on FoxPro for MS-DOS!  Their view is that it works and that wholesale replacement of the infrastructure is expensive.  The problem was that they were struggling to obtain spare parts for legacy hardware and modern hardware didn’t support the old software.  They were literally running on borrowed time (and still are, to the best of my knowledge).

CA’s view is that, when it comes to product deployment, there are five types of organisation:

  • Innovators – investigating new products in the period before they are launched – e.g. Microsoft’s Technology Adoption Programme (TAP) customers.
  • Early adopters – who work with new products from the moment they are launched up to around about the 9 month point.
  • General adoption – product deployment between 9 months and 4 years.
  • Late adopters – deploying products towards the end of their mainstream support (these organisations are probably running Windows 2000 and are only now looking at a move to Windows Server 2003).
  • Laggards – the type of customers that I described earlier.

Looking at the majority of that group, there are a number of deployment themes:

  • Inquiring (pre-launch).
  • Interest and testing (post-launch).
  • Budgeting (~4 months after launch).
  • Prototyping and pilots (~1 year after launch).
  • Deployment (~18 months after launch).
  • Replacement/upgrade programmes (~5 years after launch – coincidentally at the end of Microsoft’s mainstream support phase).
  • Migration (7+ years after launch – onto another platform altogether).

What is interesting though is that there are also two distinct curves for product deployment:

  • Sold licenses.
  • Deployed enterprise licenses.

This is probably because of the way that project financing works.  I know from bitter experience that it’s often better to buy what I need up front and deploy later because if I wait until the moment that I need some more licenses, the budget will not be forthcoming.  It seems that I’m not alone in this.

CA view their primary market as the customers on a general/late adoption timescale.  That sounds to me like a company trying to justify why its products are always late to market.  Windows Server 2008 will launch early next year and serious partners need to be there with products to work with the new operating system right from the launch – innovators expect a few problems along the way but when I’m trying to convince customers to be early adopters I don’t want to be held back by non-existent management agents, backup clients, etc.

CA’s view supports the "wait for service pack 1" mentality but then Woolley closed his presentation by stating that CA builds on Microsoft platforms because they consider them to be extensible, stable, flexible and predictable and because they will allow the delivery of service to facilitate corporate business imperatives and maximise IT investments.  He stated that CA has been working with Microsoft on Windows Server 2008 architecture reviews, design previews, TAPs and logo testing but if they are truly supportive of Microsoft’s new server operating system, then why do they consider their primary market as not being ready for another year?

Once upon a time hardware led software but in today’s environment business is supported by software and the hardware is just something on which to run the software.  In today’s environment we have to consider a services model.  Microsoft’s move towards regular monthly patches supports this (they are certainly less focused on service packs with the last service pack for Windows XP – the client operating system with the largest installed base – having shipped over three years ago).

Windows Server 2008 is built on shared code with Windows Vista so the early hiccups with device support should already have been ironed out.  That means that Windows Server 2008 should not be viewed as "too new" or "too disruptive" (it will effectively ship at service pack 1 level) and, all being well, the adoption curve may be quicker than some think.

Hyper-V is the new name for Windows Server Virtualization

Last week I was in Redmond, at a Windows Server 2008 technical conference. Not a word was said about Windows Server 2008 product packaging (except that I think one speaker may have said that the details for the various SKUs were still being worked on). Well, it’s amazing how things can change in a few days, as one of the big announcements at this week’s TechEd IT Forum 2007 in Barcelona is the Windows Server 2008 product pricing, packaging and licensing. I don’t normally cover “news” here – there are others who do a much better job of that than I would – but I am interested in the new Hyper-V announcement.

Hyper-V is the new name for the product codenamed Viridian, also known as Windows Server Virtualization, and expected to ship within 180 days of Windows Server 2008. Interestingly, as well as the SKUs that were expected for web, standard, enterprise, datacenter and Itanium editions of Windows Server 2008, there will be versions of Windows Server 2008 standard, enterprise and datacenter editions without the Hyper-V technology (Hyper-V will only be available for x64 versions of Windows Server 2008) as well as a separate SKU for Hyper-V priced at just $28.

$28 sounds remarkably low – why not just make it free (and greatly simplify the product model)? In any case, this places Hyper-V in a great position to compete on price with Citrix XenServer or VMware ESX Server 3i (it should be noted that I have yet to see pricing announced for ESX Server 3i) – I’ve already written that I think Hyper-V has the potential to compete on technical merit (something that its predecessor, Virtual Server 2005 R2, couldn’t).

At the same time, Microsoft announced a Windows Server Virtualisation validation programme – designed to validate Windows Server with virtualisation software and enable Microsoft to offer co-operative technical support to customers running Windows Server on validated, non-Windows server virtualisation software platforms (such as Xen) – as well as virtualisation solution accelerators and general availability of System Center Virtual Machine Manager 2007.

Whilst VNU are reporting that VMware are “unfazed” by the Microsoft Hyper-V announcement, I have absolutely no doubt that Microsoft is serious about making a name for itself in the x86/x64 server virtualisation market.

Windows Server Virtualization unwrapped

Last week, Microsoft released Windows Server 2008 Release Candidate 0 (RC0) to a limited audience and, hidden away in RC0 is an alpha release of Windows Server Virtualization (the two updates to apply from the %systemroot%\wsv folder are numbered 939853 and 929854).
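Applying them is simply a case of running the two .msu packages with the Windows Update Standalone Installer and restarting – along these lines (the file names below are placeholders; a directory listing of the wsv folder will show the exact names):

    rem Check what the update packages are actually called
    dir %systemroot%\wsv

    rem Install each package in turn, then restart
    start /w wusa.exe %systemroot%\wsv\Windows6.0-KB939853-x64.msu
    start /w wusa.exe %systemroot%\wsv\Windows6.0-KB929854-x64.msu
    shutdown /r /t 0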

I’ve been limited in what I can write about WSV up to now (although I did write a brief WSV post a few months back); however at yesterday’s event about creating and managing a virtual environment on the Microsoft platform (more on that soon) I heard most of what I’ve been keeping under wraps presented by Microsoft UK’s James O’Neill and Steve Lamb (and a few more snippets on Tuesday from XenSource), meaning that it’s now in the public domain and I can post it here (although I have removed a few of the finer points that are still under NDA):

  • Windows Server Virtualization uses a totally new architecture – it is not just an update to Virtual Server 2005. WSV is Microsoft’s first hypervisor-based virtualisation product: the hypervisor is approximately 1MB in size and is 100% Microsoft code (for reliability and security) – no third party extensions. It does no more than partition resources to provide access to hardware, and not opening the hypervisor to third parties provides protection against theoretical hyperjacking attacks such as the "blue pill" (where a rootkit is installed in the hypervisor and is practically impossible to detect).
  • WSV requires a 64-bit CPU and hardware assisted virtualisation (Intel VT or AMD-V) enabled in the BIOS (often disabled by default).
  • There will also be two methods of installation for WSV:
    • Full installation as a role on Windows Server 2008 (once enabled, a reboot “slides” the hypervisor under the operating system and it becomes virtualised).
    • Server core role for the smallest and most secure footprint (with the advantage of fewer patches to apply).
  • Initial builds require a full installation but WSV will run on Server Core.
  • The first installation becomes the parent, with subsequent VMs acting as children. The parent has elevated permissions. The host/guest relationship no longer applies with the hypervisor model; however if the parent fails, the children will also fail. This may be mitigated by clustering parents and using quick migration to fail children over to another node.
  • Emulated drivers are still available with wide support (440BX chipset, Adaptec SCSI, DEC Ethernet, etc.) but they have a costly performance overhead with multiple calls back and forth between parent and child and context switches from user to kernel mode. WSV also includes a synthetic device driver model with virtual service providers (VSPs) for parents and virtual service clients (VSCs) for children. Synthetic drivers require no emulation and interact directly with hardware assisted virtualisation, providing near-native performance. XenSource drivers for Linux will be compatible with WSV.
  • There will be no USB support – Microsoft see most USB demand for client virtualisation and although USB support may be required for some server functions (e.g. smartcard authentication), this will not be provided in the initial WSV release.
  • Microsoft considers memory paging to be of limited use and states that over-committing RAM (memory ballooning) is only of practical use in a test and development environment. Furthermore, it can actually reduce performance where applications/operating systems attempt to make full use of all available memory and therefore cause excessive paging between physical and virtual RAM. Virtual servers require the same volumes of memory and disk as their physical counterparts.
  • In terms of operating system support, Windows Vista and Server 2008 already support synthetic device drivers (with support being added to Windows Server 2003). In response to customer demand, Microsoft has worked with XenSource to provide a platform that will allow both Linux and Windows workloads with near native performance through XenSource’s synthetic device drivers for Linux. Emulation is still available for other operating systems.
  • Virtual Server VMs will run in WSV as the VHD format is unchanged; however virtual machine additions will need to be removed and replaced with ICs (integration components) for synthetic drivers using the integration services setup disk (similar to virtual machine additions, but without emulation) to provide enlightenment for access to the VMbus.
  • Hot addition of resources is not included in the initial WSV release.
  • Live migration will not be included within the first WSV release but quick migration will be. The two technologies are similar but quick migration involves pausing a VM, writing RAM to a shared disk (saving state) and then loading the saved state into RAM on another server and restarting the VM – typically in around 10 seconds – whereas live migration copies the RAM contents between two servers using an iterative process until there are just a few dirty pages left, then briefly pausing the VM, copying the final pages, and restarting on the new host with sub-second downtime (a command-line sketch of a quick migration follows this list).
  • WSV will be released within 180 days of Windows Server 2008.
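As a rough illustration of the quick migration point above: with the parent partitions clustered, a quick migration is just a matter of moving the cluster group that contains the VM to another node, either from the Failover Cluster Management console or from the command line. A sketch with made-up names (the group and node names will obviously vary):

    rem Move (quick migrate) the group containing the virtual machine to another cluster node
    cluster group "VM1" /moveto:node2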

Looking forward to Windows Server 2008: Part 1 (Server Core and Windows Server Virtualization)

Whilst the first two posts that I wrote for this blog were quite generic, discussing such items as web site security for banks and digital rights management, this time I’m going to take a look at the technology itself – including some of the stuff that excites me right now with Microsoft’s Windows Server System.

Many readers will be familiar with Windows XP or Windows Vista on their desktop but may not be aware that Windows Server operating systems also have a sizable chunk of the small and medium size server market.   This market is set to expand as more enterprises implement virtualisation technologies (running many small servers on one larger system, which may run Windows Server, Linux, or something more specialist like VMware ESX Server).

Like XP and Vista, Windows 2000 Server and Advanced Server (both now defunct), Windows Server 2003 (and R2) and soon Windows Server 2008 have their roots in Windows NT (which itself has a lot in common with LAN Manager).  This is both a blessing and a curse as while the technology has been around for a few years now and is (by and large) rock solid, the need to retain backwards compatibility can also mean that new products struggle to balance security and reliability with legacy code.

Microsoft is often criticised for a perceived lack of system stability in Windows but it’s my experience that a well-managed Windows Server is a solid and reliable platform for business applications.  The key is to treat a Windows Server computer as if it were the corporate mainframe rather than adopting a   personal computer mentality for administration.  This means strict policies controlling the application of software updates and application installation as well as consideration as to which services are really required.

It’s this last point that is most crucial.  By not installing all of the available Windows components and by turning off non-essential services, it’s possible to reduce the attack surface for any would-be hacker.  A reduced attack surface not only means less chance of falling foul of an exploit but it also means fewer patches to deploy.  It’s with this in mind that Microsoft produced Windows Server Core – an installation option for the forthcoming Windows Server 2008 product (formerly codenamed Longhorn Server).

As the name suggests, Windows Server Core is a version of Windows with just the core operating system components and a selection of server roles available for installation (e.g. Active Directory domain controller, DHCP server, DNS server, web server, etc.).  Server Core doesn’t have a GUI as such and is entirely managed from a command prompt (or remotely using standard Windows management tools).  Even though some graphical utilities can be launched (like Notepad), there is no Start Menu, no Windows Explorer, no web browser and, crucially, a much smaller system footprint.  The idea is that core infrastructure and application servers can be run on a server core computer, either in branch office locations or within the corporate data centre and managed remotely.  And, because of the reduced footprint, system software updates should be less frequent, resulting in improved server uptime (as well as a lower risk of attack by a would-be hacker).
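To give a flavour of how that works in practice, roles on a Server Core installation are listed and enabled from the command prompt with oclist and ocsetup. A minimal sketch using the DNS server role (package names are case-sensitive, so check the oclist output before installing):

    rem Show which roles and optional components are available or installed
    oclist

    rem Install the DNS server role on Server Core
    start /w ocsetup DNS-Server-Core-Role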

If Server Core is not exciting enough, then Windows Server Virtualization should be.  I mentioned virtualisation earlier and it has certainly become a hot topic this year.  For a while now, the market leader (at least in the enterprise space) has been VMware (and, as Tracey Caldwell noted a few weeks ago, VMware shares have been hot property), with their Player, Workstation, Server and ESX Server products.  Microsoft, Citrix (XenSource) and a number of smaller companies have provided some competition but Microsoft will up the ante with Windows Server Virtualization, which is expected to ship within 180 days of Windows Server 2008.  No longer running as a guest on a host operating system (as the current Microsoft Virtual Server 2005 R2 and VMware Server products do), Windows Server Virtualization will directly compete with VMware ESX Server in the enterprise space, with a totally new architecture including a thin “hypervisor” layer facilitating direct access to virtualisation technology-enabled hardware and allowing near-native performance for many virtual machines on a single physical server.  Whilst Microsoft is targeting the server market with this product (they do not plan to include the features that would be required for a virtual desktop infrastructure, such as USB device support and sound capabilities) it will finally establish Microsoft as a serious player in the virtualisation space (perhaps even as the market leader within a couple of years).  Furthermore, Windows Server Virtualization will be available as a supported role on Windows Server Core, allowing for virtual machines to be run on an extremely reliable and secure platform.  From a management perspective there will be a new System Center product – Virtual Machine Manager – allowing for management of virtual machines across a number of Windows servers, including quick migration, templated VM deployment and conversion from physical and other virtual machine formats.

Windows Server Core and Windows Server Virtualization are just two of the major improvements in Windows Server 2008.  Over the coming weeks, I’ll be writing about some of the other new features that can be expected with this major new release.

Windows Server 2008 will be launched on 27 February 2008.  It seems unlikely that it will be available for purchase in stores at that time; however corporate users with volume license agreements should have access to the final code by then.  In the meantime, it’s worth checking out Microsoft’s Windows Server 2008 website and the Windows Server UK User Group.

[This post originally appeared on the Seriosoft blog, under the pseudonym Mark James.]

Useful source for Microsoft resource kit utilities

A couple of nights back, I needed to get hold of a Windows 2000 Server resource kit utility called cusrmgr.exe in an attempt to add a global security group from a domain to the local Administrators group on a Windows Server 2008 core server (following the advice in Microsoft knowledge base article 297307). Being many miles from home (and without a working remote access solution at present), I needed to download the utility from somewhere but, whilst Microsoft makes many resource kit utilities available for download from the web, this is not one of them. Luckily an Austrian web site services firm called Dynawell has provided various resource kits for download at their website. (If anyone from Microsoft is reading this, please don’t shut them down – they do at least acknowledge that a license is required to use the utilities.)

Unfortunately, cusrmgr.exe -m \\remotecomputername -alg localgroupname -u globalgroupname didn’t work out for me on Windows Server 2008 Server Core.
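For what it’s worth, the built-in net.exe command can add a domain group to a local group on Server Core without any resource kit tools at all (subject to the restrictions described in Microsoft knowledge base article 324639). A minimal sketch with placeholder names:

    rem Add a domain global group to the local Administrators group
    net localgroup Administrators domainname\groupname /add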

A few commands to get started with Windows Server Core

Quoting Scotty McLeod:

Mark mailed me last night to ask about my crib sheet for Core Server but as it was Friday evening I was taking a rest from the digital world. An hour and a half later he mailed me back to say he had found all he needed.

Now this was, from the mail, Mark’s first real go in anger at installing and configuring Core Server but we have to remember he is a great Windows professional and old enough to have used command lines for a significant proportion of his life with computers.

I’m honoured that Scotty refers to me as a professional but somewhat concerned at the same time that my age (I’m only 35!) is linked with command line usage. Actually, I think it’s got more to do with geekiness and although I can’t confess to being a Linux/Unix expert, I do love diving into a command shell. I guess what Scotty is saying is that I’m old enough to have cut my teeth in the computing world before GUIs were the norm – and he’s right.

Anyway, back to Server Core. I love it. I hate it. No, I love it. Well, I love the idea and I’m sure I will love using the product but, because it’s not yet finished, the administration of a Server Core box can be a chore. Consequently, here’s my checklist of tasks from when I needed to get a Server Core box up and running last Friday (based on the June CTP build). A filled-in example with concrete values follows the list.

  1. Enable remote desktop (from a Windows Vista client):
    cscript %windir%\system32\SCRegEdit.wsf /ar 0
  2. Change the machine name:
    netdom renamecomputer %computername% /newname:newcomputername
  3. Set the IP address for the primary NIC:
    netsh interface ipv4 set address "Local Area Connection" static ipaddress subnetmask gatewayipaddress
  4. Set the DNS server addresses:
    netsh interface ipv4 add dns "Local Area Connection" ipaddress [index=indexnumber]
  5. Disable the firewall (at least until everything is working):
    netsh firewall set opmode disable
  6. Join a domain:
    netdom join %computername% /domain:domainname /userd:domainname\username /passwordd:*
  7. Restart the server:
    shutdown -r
  8. Change the drive letter allocation for an existing disk (e.g. the CD-ROM drive):
    diskpart
    select volume volumenumber
    assign letter=driveletter
  9. Format additional disks (in my case, these had been partitioned during setup but additional diskpart.exe commands could be used):
    diskpart
    select disk disknumber
    select partition partitionnumber
    format fs=ntfs label="volumelabel" quiet
  10. Label a disk (e.g. the system disk):
    label driveletter: "volumelabel"
  11. Add a domain user to a local group (note that there are some serious restrictions around this – Microsoft knowledge base article 324639 has more details):
    net localgroup groupname /add domainname\username
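For illustration, here’s what steps 3, 4, 6 and 7 might look like with values filled in (the addresses, domain name and credentials below are made up):

    netsh interface ipv4 set address "Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1
    netsh interface ipv4 add dns "Local Area Connection" 192.168.1.2 index=1
    netsh interface ipv4 add dns "Local Area Connection" 192.168.1.3 index=2
    netdom join %computername% /domain:example.local /userd:example\administrator /passwordd:*
    shutdown -r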

This has only scratched the surface with a few commands that I needed – it would have taken me a lot longer to write this post without these excellent resources:

Other links that may be useful include the Windows command line reference and my own post on using netsh to set multiple DNS server addresses.