Microsoft’s New Efficiency comes to Wembley

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

As I opened the curtains in my hotel room this morning, I was greeted with a very wet and grey view of North London. Wembley Stadium looks far less impressive on a day like today than it did in the night-time shot that graced the front page of Bing here in the UK yesterday, but it’s still hard not to be in awe of this place.

I’ve been to a couple of events at the new Wembley Stadium before – last year’s Google Developer Day (sadly there was no UK event this year) and the recent U2 concert – but this time I’m here courtesy of Microsoft for their UK Technical Launch event, where the main products on show are Windows 7, Windows Server 2008 R2 and Exchange Server 2010 in what Microsoft is calling “The New Efficiency”.

I was twittering throughout the event @markwilsonit but this post highlights some of the key messages from the main sessions today, although I’ve skipped over the details of the standard technical product demonstrations as I hope to cover these in future posts:

  • There are more than 7100 applications tested and working on Windows 7 today and there should be more than 8000 certified by the time that the product hits general availability.
  • Windows 7 was beta tested by more than 8 million people, with 700,000 in the UK.
  • The Windows Optimised Desktop is represented by a layered model of products including:
    • Management infrastructure: System Center and Forefront for deployment, application management, PC monitoring and security management.
    • Server infrastructure: Windows Server 2008 R2 for Active Directory, Group Policy, network services and server-based client infrastructure.
    • Client infrastructure: Windows 7 and the Microsoft Desktop Optimisation Pack for the Asset Inventory Service, AppLocker and BitLocker.
  • Windows is easier than ever to deploy, using freely available tools such as the Microsoft Deployment Toolkit (MDT) 2010 to engineer, service and deploy images – whether they are thin, thick or a hybrid.
  • System Center Configuration Manager (SCCM) 2007 provides a deployment engine for zero-touch installations, hooking into standard tools such as MDT, the User State Migration Tool (USMT), WinPE, etc.
  • PowerShell is becoming central to Windows IT administration.
  • Windows Server 2008 R2’s new brokering capability presents new opportunities for server-based computing.

For me, the highlight of the event was Ward Ralston’s appearance for the closing keynote. Ward used to implement Microsoft infrastructure but these days he is a Product Manager for Windows Server 2008 R2 (I’ve spoken to him previously, although today was my first chance to meet him face to face). Whilst some delegates were critical of the customer interviews, his New Efficiency presentation nicely summarised the day as he explained that:

  • Many organisations are struggling with decreasing IT budgets.
  • Meanwhile IT departments are trying to meet the demands of: IT consumerisation (as a generation that has grown up with computers enters the workforce); security and compliance (the last few years have brought a huge surge in compliance regulations – and the global “economic reset” is sure to bring more); and an ever-more mobile and distributed workforce (where we need to ensure confidentiality and non-repudiation wherever the users are).
  • IT departments have to cut costs – but that’s only part of the solution as productivity and innovation are just as important to increase efficiency.
  • In short (productivity + innovation)/cost = doing more with less
  • Managing more with less is about: reducing IT complexity; improving control and reducing helpdesk costs; increasing automation; and consolidating server resources.
  • Doing more is about: enabling new services, efficiently connecting people to information, optimising business processes, and allowing employees to securely work from anywhere.
  • Microsoft’s New Efficiency is where cost savings, productivity and innovation come together.

It would be easy to criticise today’s event – for instance, to pick out certain presenters who could have benefited from the use of Windows Magnifier – but I know just how much work went into making today’s event run as smoothly as it did and, on balance, I felt it was a good day. Those who have never been to a Microsoft launch may have expected something more but I’ve been to more of these events than I care to remember and so this was exactly what I expected: lots of marketing rhetoric delivered via PowerPoint; some demos, most of which worked; and, I think, something for everyone to take away and consider as their organisation looks at meeting the challenges that we all face in our day jobs – even if that was just the free copy of Windows 7 Ultimate Edition… (full disclosure: I accepted this offer and it in no way influences the contents of this blog post).

I’ll be back at Wembley again tomorrow, this time for the Microsoft Partner Network 2009 – and expect to see more Windows 7 and Server 2008 R2 related posts on this site over the coming weeks and months.

Which service pack level is Windows Server 2008 R2 at?

Those who remember Windows Server 2003 R2 may recall that it shipped on two disks: the first contained Windows Server 2003 with SP1 integrated; and the second contained the R2 features. When Windows Server 2003 SP2 shipped, it was equally applicable to Windows Server 2003 and to Windows Server 2003 R2. Simple.

Windows Server 2008 shipped with service pack 1 included, in line with its client operating system sibling, Windows Vista. When service pack 2 was released, it applied to both Windows Server 2008 and to Windows Vista. Still with me?

Today, one of my colleagues asked me a question: what service pack level does Windows Server 2008 R2 sit at – SP1, SP2, or both (i.e. multiple versions)? The answer is neither. Unlike Windows Server 2003 R2, which was only loosely linked to Windows XP, Windows Server 2008 R2 and Windows 7 are very closely related. Windows Server 2008 R2 doesn’t actually display a service pack level in its system properties and I would expect the first service pack for Windows 7 to be equally applicable to Windows Server 2008 R2 (although I haven’t seen any information from Microsoft to confirm this). What’s not clear is whether the first service pack for Windows Server 2008 R2 and Windows 7 will also be service pack 3 for Windows Server 2008 and Vista. I suspect not, and would expect Windows Server 2008 and Windows Server 2008 R2 to take divergent paths from a service pack perspective. Indeed, it could be argued that service packs are less relevant in these days of continuous updates. At the time of writing, the Windows service pack roadmap simply says that the “Next Update and Estimated Date of Availability” for Windows Server 2008 is “To be determined” and there is no mention of Windows 7 or Windows Server 2008 R2.
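
Incidentally, if you want to check for yourself, the installed service pack level (or the absence of one) can be read from a command prompt using the standard WMI command-line tool – a quick example (on an RTM installation of Windows Server 2008 R2 I’d expect CSDVersion to come back blank and ServicePackMajorVersion as 0):

wmic os get Caption,CSDVersion,ServicePackMajorVersion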

So, three consecutive operating system releases with three different combinations of release naming and service pack application… not surprisingly resulting in a lot of confused people. For more information on the mess that is Microsoft’s approach to major releases, update releases, service packs and feature packs, check out the Windows Server product roadmap.

Injecting network drivers into a Hyper-V Server (or Windows Server) installation

A couple of weeks ago, I blogged about running Windows from a flash drive – specifically running Hyper-V Server 2008 R2. One thing I hadn’t got around to at that time though was injecting the necessary drivers to provide network access to/from the server – which is pretty critical for a virtualisation host! Under network settings, the Hyper-V Server Configuration (sconfig.vbs) showed that there were no active network adapters found but I knew this should be pretty easy to fix.

One of the strengths of the Hyper-V architecture is that it uses the standard Windows device driver model. This is in stark contrast to the monolithic model used for VMware ESX (and ESXi) and is the reason that I can’t do something similar with ESXi. In fact, adding network drivers to Hyper-V Server (or for that matter Windows Server 2008 running in server core mode, or even for command line administration of a full Windows Server 2008 installation) is pretty straightforward.

The network card I needed to support is a Marvell Yukon 88E8055 PCI-E Gigabit Ethernet Controller and, even though Windows 7 recognised the hardware and installed the appropriate drivers at installation time, I couldn’t find the drivers in the install.wim file on the DVD. That was no problem – Marvell’s download site had x64 drivers for Windows 7 available and these are also suitable for Windows Server 2008 R2 and Hyper-V Server 2008 R2. Armed with the appropriate driver (yk62x64.sys v11.10.7.3), I ran pnputil -i -a yk62x64.inf on my Hyper-V Server:

Microsoft PnP Utility

Processing inf :            yk62x64.inf
Successfully installed the driver on a device on the system.
Driver package added successfully.
Published name :            oem0.inf

Total attempted:              1
Number successfully imported: 1

(oem0.inf and an associated oem0.pnf file were created in the %windir%\inf\ folder)

With drivers loaded, I restarted the server (probably not necessary but I wanted to ensure that all services were running) and Hyper-V Server recognised the network card, allowing me to make configuration changes if required.

To validate the configuration, I ran pnputil -e, to which the response was:

Microsoft PnP Utility

Published name :            oem0.inf
Driver package provider :   Marvell
Class :                     Network adapters
Driver date and version :   07/20/2009 11.10.7.3
Signer name :               Microsoft Windows Hardware Compatibility Publisher

So, that’s installing network drivers on Hyper-V Server, but what about removing them? Here, I was less successful. I tried removing the plug and play package with pnputil -f -d oem0.inf and this removed the package from %windir%\inf but, after a reboot, my network settings persisted. I also used devcon.exe, the command line equivalent to the Windows Device Manager (making sure I had the amd64 version, not i386 or ia64), to successfully remove the PnP package (devcon -f dp_delete oem0.inf) as well as the network interface (devcon remove "PCI\VEN_11AB&DEV_4363") but this still left several copies of yk62x64.sys available in various Windows system folders. Again, after a reboot the network card was re-enabled. Uninstalling network drivers is not a very likely scenario in most cases but, with a bootable flash device potentially roaming between hardware platforms, it would be good to work out how to do this. Of course, my work is based on the release candidate – the RTM version of Hyper-V Server 2008 R2 is yet to be released to the web.
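
For reference, the removal sequence I tried looked something like this – just a sketch, assuming the published driver name is still oem0.inf and the same Marvell hardware ID (and, as noted above, it didn’t fully clean up the driver files):

rem remove the staged driver package from the driver store
pnputil -f -d oem0.inf
rem force-delete the driver package, then remove the device itself
devcon -f dp_delete oem0.inf
devcon remove "PCI\VEN_11AB&DEV_4363"
rem restart for the changes to take effect
shutdown /r /t 0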

Running Windows from a USB flash drive

I’ve titled this post as “Running Windows from a USB flash drive” because the same principles should be equally applicable to all Windows 7-based operating systems (and even Vista if the Windows 7 bootloader is used) but my specific scenario was based on Hyper-V Server 2008 R2.

I got this working a few hours after Windows 7, Windows Server 2008 R2 and Hyper-V Server 2008 R2 were released to manufacturing but I was still using release candidate code – fingers crossed it still works with the final release!

Boot from VHD is a fantastic new technology in Windows 7/Server 2008 R2 and derivative operating systems and I’ve often wondered if it’s possible to use it to run Hyper-V from a USB flash drive (just like the “embedded” version of VMware ESXi offered by some OEMs). Well, as it happens it is – and this post describes the steps I had to take to make it work.

First of all, I needed to create a virtual hard disk and install an operating system onto it. As Keith Combs noted, there are various ways to do this but only one is supported; however there is also a handy video on TechNet which takes you through the steps of creating a VHD and booting from it.

Using the TechNet video as a guide, I issued the following commands from the command prompt to create my virtual hard disk and apply an image from the Hyper-V Server 2008 R2 release candidate DVD:

diskpart
create vdisk file=driveletter:\virtualharddisk.vhd maximum=15000 type=expandable
select vdisk file=driveletter:\virtualharddisk.vhd
attach vdisk
list disk

(make a note of the disk number.)

select disk disknumber
create partition primary
select partition 1
active
format fs=ntfs quick
assign
exit

(note the drive letter for the newly mounted VHD.)

imagex /info dvddrive:\sources\install.wim

(identify the correct entry and note its image index.)

imagex /apply dvddrive:\sources\install.wim /check imageindex vhddrive:\
diskpart
select vdisk file=driveletter:\virtualharddisk.vhd
detach vdisk
exit
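
To make the imagex step a little more concrete: with the Hyper-V Server DVD in drive E: and the mounted VHD assigned drive V: (these letters are just examples – use whatever your system assigns), the commands would look something like this. The Hyper-V Server 2008 R2 media should only contain a single image, so the index is likely to be 1, but it’s worth confirming with the /info command first:

imagex /info e:\sources\install.wim
imagex /apply e:\sources\install.wim /check 1 v:\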

At this point, Hyper-V Server had been imaged into my new VHD, which could then be copied to the USB flash drive.

Next, to load the VHD from the Boot Manager, I edited the boot configuration data (which is what would be required in a standard boot from VHD scenario); however, as I found later, a different set of actions is needed for booting from the USB flash drive.

bcdedit /copy {current} /d "Hyper-V Server 2008 R2"
bcdedit

(make a note of the GUID for the newly created entry.)

bcdedit /set {guid} device vhd=[usbdrive:]\virtualharddisk.vhd
bcdedit /set {guid} osdevice vhd=[usbdrive:]\virtualharddisk.vhd
bcdedit /set {guid} detecthal on
bcdedit /set {guid} description "Hyper-V Server 2008 R2"

It’s worth understanding that the use of drive letters (which are transient in nature) does not cause a problem as the BCD Editor (bcdedit.exe) extracts the data about the partition and saves it in the BCD store (i.e. it does not actually save the drive letter).

After rebooting, Hyper-V Server loaded from my USB flash drive and ran through the out of box experience. At this stage I had Hyper-V Server running off the flash drive but only if my original Windows installation (with the boot manager) was available and, as soon as I removed the hard disk (I wanted to be sure that I was booting off the flash drive with no other dependencies), then the whole thing collapsed in a heap. Thanks to Garry Martin, I checked my BIOS configuration and made sure that USB device boots were enabled (they were not) but I then spent about a day playing around with various BCD configurations (as well as various attempts to fix my BCD with bootrec.exe) until I stumbled on a post from Vineet Sarda (not for the first time, based on the comments that include one from yours truly a few weeks back!) that discusses booting from VHD without a native operating system.

Following Vineet’s example, I booted my system into Windows 7 (I could have used the Windows Recovery Environment), reformatted the USB flash drive before copying my VHD image back onto it, and issued the following commands:

diskpart
select vdisk file=usbdrive:\virtualharddisk.vhd
attach vdisk
list volume
exit

(note the drive letter for the newly mounted VHD.)

bcdboot vhddrive:\Windows /s usbdrive: /v

(i.e. copying the boot environment files from the operating system image contained within the VHD to the physical USB drive and creating a BCD store there. Note that, when running on a live system, it is important to specify the target drive for the BCD in order to avoid overwriting the live configuration.)
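
If you want to see what bcdboot actually wrote without rebooting, you can point the BCD editor at the store it created on the flash drive – assuming the default location of \boot\BCD on that drive:

bcdedit /store usbdrive:\boot\BCD /enum all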

I then shut down the system, removed the hard disk and booted from the USB flash drive, after which the Windows Boot Manager loaded an operating system from within the VHD.

Looking at my BCD configuration (shown here for reference), I can see the source of my many hours of confusion – the Boot Manager resides on the physical media (my USB key – which was allocated drive D: in this case) and loads an operating system from the virtual disk that is given another drive letter (in this case C:):

Windows Boot Manager
——————–
identifier              {bootmgr}
device                  partition=D:
description             Windows Boot Manager
locale                  en-us
inherit                 {globalsettings}
default                 {current}
resumeobject            {27f66313-771a-11de-90bb-00037ab36ab6}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30

Windows Boot Loader
——————-
identifier              {current}
device                  partition=C:
path                    \windows\system32\winload.exe
description             Hyper-V Server 2008 R2
locale                  en-us
inherit                 {bootloadersettings}
osdevice                partition=C:
systemroot              \windows
resumeobject            {27f66313-771a-11de-90bb-00037ab36ab6}
nx                      OptOut
detecthal               Yes

It took a while to boot (my flash drive was a freebie and is not the fastest in the world) but, once loaded into memory, Hyper-V Server seemed to run without any noticeable delay. I figure that, as long as the workload is stored on another disk, this should not present any problems and, given suitably fast flash memory, it ought to be possible to improve boot times as well. Running a full Windows operating system (e.g. Windows 7) in this manner is an entirely different matter – very few USB flash drives will be able to stand the constant writes and further testing would be required.

Now that I have Hyper-V Server running from an inexpensive USB flash drive with no reliance on my PC’s internal hard disk, all I need to do is inject the correct network drivers and I will have a virtualisation solution for colleagues who want to run a full hypervisor on their corporate notebooks, without deviating from the company’s standard client build.

Windows 7 and Server 2008 R2 released to manufacturing

After much anticipation, Microsoft has announced that Windows 7 and Windows Server 2008 R2 have been released to manufacturing (RTM). Both share build number 7600 and my post yesterday highlighted the dates when partners and customers will be able to get their hands on the software.

Congratulations to the Windows client and server teams on shipping two great operating system releases. They have their own blog posts on the subject (Windows client and server). I’ll be writing more Windows 7 (and Server 2008 R2) content over the coming days and months so stay tuned!

(System Center Virtual Machine Manager 2008 R2 will be released within 60 days, with support for the new version of Hyper-V contained within Windows Server 2008 R2 and Hyper-V Server 2008 R2.)

[Update: edited SCVMM text to correct previous misinformation (which came from Microsoft PR!)]
[Update: removed erroneous reference to build 7200 (also sourced from Microsoft PR!)]

Early reports of SLAT-enabled processors significantly increasing RDS session concurrency

Let me caveat my next statement by saying that I think Hyper-V is a great virtualisation platform that meets the needs of many customer environments… but… Hyper-V does lack some features that would allow it to stand tall alongside the market leading product (VMware ESX) and I was disappointed when the dynamic memory feature was pulled from the second release of Hyper-V.

As I wrote when discussing new features in the Windows Server 2008 R2 release candidate:

“I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.”

Well, it seems that there may be a silver lining to this cloud (or at least, a shiny metallic grey one) as Clive Watson highlighted the results from some testing with Remote Desktop Services (Microsoft’s VDI broker) running on Hyper-V and reported that:

“We conducted our testing using both non-SLAT and SLAT hardware and found that SLAT enabled processors increased the number of sessions by a factor of 1.6x to 2.5x compared to non-SLAT processors.”

Basically, using a SLAT-enabled processor (Intel Extended Page Tables or AMD Nested Page Tables, also known as Rapid Virtualization Indexing) in a server should make a big difference to the consolidation ratios achieved in a VDI scenario.

Of course, if SLAT allows improved performance, then other platforms will also benefit from it (although not necessarily to the same degree) but, if VDI really is a feasible technology solution (I have my doubts and consider it a “significant niche” solution), I’m sure Microsoft will come up with something for the third incarnation of Hyper-V.

Joint user group meeting (Windows Server UK User Group/Active Directory UK User Group/Vista Squad)

After a successful joint meeting in March, the Windows Server UK User Group (including the associated LinkedIn group) and the Active Directory UK User Group are meeting up again, and this time the Vista Squad are joining the party too as we spend the evening of 28 May 2009 at Microsoft’s London offices looking at Windows 7 (client and server).

This is the first Windows 7 event of its kind in the UK and will include talks on: Application Compatibility for Windows 7 and Windows Server 2008 R2; Top 10 Reasons to Upgrade to Windows Server 2008 R2; and Enterprise Features in Windows 7 and Windows Server 2008 R2.

Check out the event website for full session and registration details.

Windows Server 2008 R2 release candidate: what’s new? (part 2)

A couple of weeks back, I wrote about some of the new features in Windows Server 2008 R2 but I did say that was only part 1 as there were a few surprises in store (held back for discussion at TechEd this week):

  • First up, Hyper-V R2 will support 64 logical CPUs. At release, Hyper-V supported up to 4 CPUs, each with 4 cores; then Intel released its Dunnington 6-core chips and a hotfix was released for 24-core support (see Microsoft knowledge base article 956710). Originally the R2 release was going to support 32 cores but performance testing went well, so today Microsoft will announce support for 64 logical CPUs. What this means is that Hyper-V hosts can achieve better density levels and run more virtual machines with multiple virtual CPUs, improving the platform’s ability to scale in line with hardware developments.
  • Secondly, there is a new feature in Hyper-V R2 called processor compatibility mode. Sharp-eyed users of the Windows Server 2008 R2 release candidate may have noticed a new checkbox labelled “migrate to a physical computer with a different processor version”. Configured on a per-VM basis, this allows virtual machines to be migrated between hosts using CPUs from different processor families (from the same vendor – that is Intel-Intel and AMD-AMD, not AMD-Intel or vice versa), providing greater flexibility when expanding clusters with new hardware, by abstracting the virtual machine’s processor features down to the lowest common denominator in terms of the available instruction set.
  • Finally, there will be a new feature in Windows Server 2008 R2 called file classification infrastructure (FCI). Nir Ben Zvi is a Senior Program Manager working on Windows Server at Microsoft and he explained to me that customers are struggling with increasing risks and costs as they balance data management needs with data management tools. With the new FCI functionality, Microsoft sees customers classifying their data and applying a policy according to the classification, so that it may be treated differently according to the user requesting access. Classification runs on a schedule and can even detect patterns of text in a scanned document. Stale files can be expired (moved to an administrator-controlled directory, with expiry notified in advance). Documents may be watermarked. And, it should be no surprise that FCI supports integration with SharePoint as well as extensibility by partners.

If Windows Server 2008 was good, R2 is looking better. The release candidate is available now, with general availability expected in the second half of 2009 (although not confirmed by Microsoft on any official sites).

Windows Server 2008 R2 release candidate: what’s new? (part 1)

Last year, I wrote a post about some of the things to look forward to in Windows Server 2008 R2 and, a week or so later, I was able to follow it up with the news that Terminal Services gets a big improvement as it becomes Remote Desktop Services (RDS). Six months have gone by, we’ve had the beta, and now the release candidate is here… and that release candidate has some new features – mostly relating to performance and scalability:

  • Looking first at the improvements to Hyper-V (in addition to those in last week’s post on the R2 wave of virtualisation products):
    • There are networking improvements with VM Chimney/TCP Offload capabilities whereby network operations are redirected to the physical NIC (where the NIC supports this), reducing the CPU burden and improving performance. The original version of Hyper-V supported chimney operations in the parent, but virtual machines could not take advantage of the functionality. This helps Hyper-V to scale as 10Gbps Ethernet becomes more common (a Hyper-V host can already saturate a Gigabit Ethernet connection if required) but it’s worth noting that not all applications can benefit from this as it’s more suitable for large file transfers (file servers, etc.) rather than web servers.
    • Another new Hyper-V networking feature is NIC direct memory access (NIC DMA), which shortens the overall path length from a physical NIC queue to a virtual machine, resulting in further performance improvements. Because each NIC queue is assigned to a specific virtual NIC there’s still no sharing of memory (so no impact on security isolation) but direct access to virtual machine memory does avoid copies in the VSP and route lookups in the virtual switch; however, this feature is disabled by default (as the only real benefit is found with 10Gbps Ethernet and only a few NICs currently have the capability to process it).
    • The long-awaited live migration functionality is definitely in (it was also in pre-release versions of Hyper-V but was pulled before release). Windows Server 2008 R2’s cluster shared volumes are instrumental to making this feature work well and, even though I don’t believe it’s entirely necessary, VMware have had the functionality for several years now and Microsoft needs to be able to say “me too”.
    • Sadly, another “me too” feature (dynamic memory) has definitely been dropped from the R2 release. I asked Microsoft’s Jeff Woolsey, Principal Group Program Manager for Hyper-V, what the problem was and he responded that memory overcommitment results in a significant performance hit if the memory is fully utilised and that even VMware (whose ESX hypervisor does have this functionality) advises against its use in production environments. I can see that it’s not a huge factor in server consolidation exercises, but for VDI scenarios (using the new RDS functionality), it could have made a significant difference in consolidation ratios.
  • Away from Hyper-V there are further performance and scalability improvements in the operating system, with support for up to 256 logical CPUs, improved scheduling on NUMA architectures, and support for solid state disks. As well as the power management improvements I mentioned in my original post last October, the operating system uses less memory and networking improvements result in improved file transfer speeds on the LAN, whilst new multi-threaded capabilities in robocopy.exe (using the /mt switch – there’s a quick example after this list) can provide up to an 800% improvement in WAN file transfers. Putting these improvements into practice, Microsoft told me that one OLTP benchmark for SQL Server showed a 70% improvement by moving from 64 to 128 processors and a file server throughput test showed a 32% improvement just by upgrading the operating system from Windows Server 2008 to Windows Server 2008 R2. Indeed, Microsoft is keen to show off these improvements at TechEd next month (together with System Center products being used to manage and cap power usage) and they will also announce a new power logo as an additional qualification for the Windows Server logo programme. Some of the power improvements will be back-ported to Windows Server 2008 SP2, although that operating system still won’t quite match up to R2.
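
For anyone who hasn’t tried the multi-threaded robocopy switch, it’s simple to use – a hypothetical example (the paths are made up, and /mt defaults to 8 threads if no value is given):

robocopy \\branchserver\share d:\data /e /mt:32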

None of these are big features but they have the potential to make some significant differences in the efficiency of an organisation’s Windows Server estate – an important consideration as economic and environmental pressures affect the way in which we run our IT systems. This isn’t the whole story though as Microsoft still has a few more surprises in this release candidate. With the RC code available to TechNet and MSDN subscribers today, I’m not sure how Microsoft is planning on keeping them quiet but, for now, my lips are sealed so stay tuned for part 2…

Windows 7 and Windows Server 2008 R2 release candidate availability

There’s been a lot of chatter on the ‘net about Windows 7 release dates and new features but a lot of it is based on one or two leaks that then get reported (and sometimes misreported) across a variety of news sites and blogs.

After various reports that we could see a Windows 7 release candidate (RC) earlier in April, and various leaked builds, today’s the day when the Windows 7 and Windows Server 2008 R2 RCs will officially be made available to MSDN and TechNet subscribers (the client release candidate was announced last week and the official announcement around the Windows Server 2008 R2 release candidate is due today).

For those who are not TechNet or MSDN subscribers, the RC will be available to the public on/around 5 May.

Whilst the Windows 7 client was already feature complete at the beta, the server version, Windows Server 2008 R2, includes some new functionality – some of which I’ll detail in a separate blog post and some of which will not be announced until TechEd on 11 May 2009.

If you want to know more about the Windows 7 release candidate, then Ed Bott has a Windows 7 release candidate FAQ which is a good place to start. One thing you won’t find in there though is a release date for Windows 7, as Bott quotes one Microsoft executive:

“Those who know, won’t say. Those who say, don’t know.”

As for the future of Windows, Mary Jo Foley reported last week that work is underway on “Windows 8”, suggesting it could be with us as early as 2011 or 2012. If Microsoft continues the 2-year major/minor cycles for the server version and co-develops the Windows client and server releases again, that would fit but, for now, let’s concentrate on Windows 7!

Finally, Microsoft has a new website launching tomorrow (although it has been available for a few days now) aimed at IT professionals in the Windows space. If you find the Engineering Windows 7 blog a little wordy (sometimes I wish they would stick to the Twitter rule of 140 characters!), Talking About Windows is a video blog which provides insight on Windows 7 from the Microsoft engineers who helped build the product, combined with real-world commentary from IT professionals.