PowerShell running on server core (without resorting to application virtualisation)

PowerShell evangelist (and Microsoft deployment guru) David Saxon dropped me a note this morning to let me know that Quest Software’s Dmitry Sotnikov has got PowerShell running on Server Core.

Nice work Dmitry. It’s not a supported configuration (as Jeffrey Snover notes in his post on the PowerShell Team blog) but something that people have been wanting to see for a while now.

(Aaron Parker managed to get this working another way – using application virtualisation)

Comparing internal and USB-attached hard disk performance in a notebook PC

Recently, I was in a meeting with a potential business partner and their software was performing more slowly than they had expected in the virtual environment on my notebook PC. The application was using a SQL Server 2005 Express Edition database and, while SQL Server is not normally a good candidate for virtualisation, I was prepared to accept the performance hit as I did not want any traces of the product to remain on my PC once the evaluation was over.

Basic inspection using Task Manager showed that neither the virtual nor the physical system was stressed from a memory or CPU perspective but the disk access light was on continuously, suggesting that the application was IO-bound (as might be expected with a database-driven application). As I was also running low on physical disk space, I considered whether moving the VM to an external disk would improve performance.

On the face of it, spreading IO across disk spindles should improve performance but, with SATA hard disk interfaces providing a theoretical data transfer rate of 1.5-3.0Gbps and USB 2.0 supporting 480Mbps, my external (USB-attached) drive is, on paper at least, likely to deliver lower IO throughput than the internal disk. That’s not the whole story though – standard notebook hard drives are slow (4200 or 5400RPM), so the theoretical throughput of the disk controller suddenly looks far less attainable (my primary hard drive maxes out at 600Mbps). Factor in that actual hard disk performance under Windows is determined not only by the speed of the drive but also by the motherboard chipset, UDMA/PIO mode, RAID configuration, CPU speed, RAM size and even the quality of the drivers, and the comparison is far from straightforward.
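
As a rough sanity check on those interface figures (approximate, and ignoring the differences in protocol overhead between the two buses):

SATA 1.5Gbps : 1500Mbps / 10 (SATA uses 8b/10b encoding, i.e. 10 bits per byte) = 150MBps
USB 2.0      : 480Mbps / 8 = 60MBps raw, with 30-35MBps more typical in practice
Drive maximum: 600Mbps / 8 = 75MBps

So, even on paper, the USB 2.0 bus sits below the drive’s own maximum transfer rate, whilst SATA has headroom to spare.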

I decided to take a deeper look into this. I should caveat this by saying that performance testing is not my forte, but I armed myself with a couple of utilities that are free for non-commercial use – Disk Thruput Tester (DiskTT.exe) and HD Tune.

Both disks were attached to the same PC, a Fujitsu-Siemens S7210 with a 2.2GHz Intel Mobile Core 2 Duo (Merom) CPU, 4GB RAM and two 2.5″ SATA hard disks but the internal disk was a Western Digital Scorpio WD1200BEVS-22USTO whilst the external was a Fujitsu MHY2120BH in a Freecom ToughDrive enclosure.

My (admittedly basic) testing revealed that although the USB device was a little slower on sequential reads, and quite a bit slower on sequential writes, the random access figure was very similar:

                        Internal (SATA) disk   External (USB) disk
Sequential writes       25.1MBps               22.1MBps
Sequential reads        607.7MBps              570.8MBps
Random access           729.3MBps              721.6MBps

Testing was performed using a 1024MB file, in 1024 chunks and the cache was flushed after writing. No work was performed on the PC during testing (background processes only). Subsequent re-runs produced similar test results.

Disk throughput test results for internal disk
Disk throughput test results for external (USB-attached) disk

Something doesn’t quite stack up here though. My drive is supposed to max out at 600Mbps (not MBps) so I put the strange results down to running a 32-bit application on 64-bit Windows (although I suspect the file system cache serving the 1GB test file back from my 4GB of RAM could equally explain figures that far exceed the bus speed) and ran a different test using HD Tune. This gave some interesting results too:

                        Internal (SATA) disk   External (USB) disk
Minimum transfer rate   19.5MBps               18.1MBps
Maximum transfer rate   52.3MBps               30.6MBps
Average transfer rate   40.3MBps               27.6MBps
Access time             17.0ms                 17.7ms
Burst rate              58.9MBps               24.5MBps
CPU utilisation         13.2%                  14.3%

Based on these figures, the USB-attached disk is slower than the internal disk but what I found interesting was the graph that HD Tune produced – the USB-attached disk produced more-or-less consistent results across the whole drive whereas the internal disk tailed off considerably through the test (consistent with the USB bus capping throughput below the drive’s native rate, whilst the internal disk slows as the test moves from the faster outer tracks to the slower inner ones).

Disk performance test results for internal disk
Disk performance test results for external (USB-attached) disk

There’s a huge difference between benchmark testing and practical use though – I needed to know if the USB disk was still slower than the internal one when it ran with a real workload. I don’t have any sophisticated load testing tools (or experience) so I decided to use the reliability and performance (performance monitor) capabilities in Windows Server 2008 to measure the performance of two identical virtual machines, each running on a different disk.

Brent Ozar has written a good article on using perfmon for SQL performance testing and, whilst my application is running on SQL Server (so the article may help me find bottlenecks if I’m still having issues later), by now I was more interested in the effect of moving the virtual machine between disks. It did suggest some useful counters to use though:

  • Memory – Available MBytes
  • Paging File – % Usage
  • Physical Disk – % Disk Time
  • Physical Disk – Avg. Disk Queue Length
  • Physical Disk – Avg. Disk sec/Read
  • Physical Disk – Avg. Disk sec/Write
  • Physical Disk – Disk Reads/sec
  • Physical Disk – Disk Writes/sec
  • Processor – % Processor Time
  • System – Processor Queue Length

I set this up to monitor both my internal and external disks, and to log to a third external disk so as to minimise the impact of the logging on the test.
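
Incidentally, the same data collector can be created from the command line with logman – a minimal sketch, in which the collector name, sample interval and output path are just illustrative:

logman create counter VMDiskTest -si 00:00:05 -o E:\PerfLogs\VMDiskTest -c "\Memory\Available MBytes" "\PhysicalDisk(*)\% Disk Time" "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\Disk Reads/sec" "\PhysicalDisk(*)\Disk Writes/sec" "\Processor(_Total)\% Processor Time"
logman start VMDiskTest
rem ...run the virtual machine workload...
logman stop VMDiskTest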

Starting from the same snapshot, I ran the VM on the external disk and monitored the performance as I started the VM, waited for the Windows Vista Welcome screen and then shut it down again. I then repeated the test with another copy of the same VM, from the same snapshot, but running on the internal disk.

Sadly, when I opened the performance monitor file that the data collector had created, the disk counters had not been recorded. I did notice, however, that the test had taken 4 minutes and 44 seconds on the internal disk and only 3 minutes and 58 seconds on the external one, suggesting that the external disk was actually faster in practice.

I’ll admit that this testing is hardly scientific – I did say that performance testing is not my forte – and ideally I’d research this further, but I’ve already spent more time on this than I intended to. On the face of it though, using the slower USB-attached hard disk still seems to improve VM performance, because the disk is dedicated to that VM rather than being shared with the operating system.

I’d be interested to hear other people’s comments and experience in this area.

Waiting for Windows 7: is Vista really that bad?

I was at an event last week where Gareth Hall, UK Product Manager for Windows Server 2008, commented on the product’s fantastic press reviews, noting that even Jon Honeyball (who, it seems, is well known for his less-than-complimentary response to Microsoft’s output of late) wrote:

“Server 2008 excels in just about every area [… and] is certainly ready for prime time. There’s no need to wait for Service Pack 1”

[Jon Honeyball, PC Pro, February 2008]

It seems that, wherever you look, Windows Server 2008 is almost universally acclaimed. And rightly so – I believe that it is a fantastic operating system release (let’s face it, Windows Server 2003 and R2 were very good too) and is packed full of features that have the potential to add significant value to solutions.

So, tell me, why are the same journalists who think Windows Server 2008 is great still berating Windows Vista – the client version of the same operating system codebase? Sure, Vista is for a different market, Vista has different features, and it’s only fair to say that Vista took some time to bed down, but after more than a year of continuous updates and a major service pack is it really that bad?

This week, IT Week is running a leader on the “migration muddle” that organisations face. Should IT managers skip Vista and go straight to Windows 7, with Bill Gates allegedly saying that “sometime in the next year we will have a new version [of Windows]”?

The short answer is “No!”. My advice is either to move to Vista now and save the pain of trying to jump two or three releases to Windows 7 later, or accept a more pragmatic approach of managed diversity.

The trouble is that Microsoft has muddied the water by dropping hints about what the future may hold. What was once arguably the world’s biggest and best marketing machine seems to have lost its way recently – Microsoft should either maintain the silence and keep us guessing about what Windows 7 means, or open up and let us decide whether it’s worth the wait. As things stand, IT managers are confused: the press are, by and large, critical of Vista; consumers and early adopters have complained of poor device support (not Microsoft’s fault); and even Microsoft seems ready to forget about pushing its current client operating system and move on to the next big thing.

In all my roles – as a consultant, an infrastructure architect, a Microsoft partner and, of course, as a blogger – I’d love to know more about Windows 7, and Microsoft does need to be more transparent if it expects customers to make a decision. Instead, it seems to be hoping that hints of something new that’s not Vista will help to sell Enterprise Agreements (complete with Software Assurance) to corporates.

Accessing USB devices from within Microsoft virtual machines

In my Hyper-V presentation on Wednesday, I said that USB support was one of the things that is missing from Hyper-V. That is correct – there is no ability to add USB devices as virtual hardware – but, in a conversation yesterday, Clive Watson pointed out that if you connect to a virtual machine using RDP, you can access local resources – including hard drives and smart card readers.

The way to do this is to use the Local Resources tab in the Remote Desktop Connection client options, where local devices and resources may be specified for connection:

Accessing local resources in the RDP client

If you click More, there are options to select smart cards, serial ports, drives and supported plug and play devices (i.e. those that support redirection). In this case, I selected the USB hard drive that was plugged into my computer:

Accessing local resources in the RDP client

And when I connect to the virtual machine using RDP, the drive is listed as driveletter on localmachine:

Accessing local resources via RDP - as seen on the remote machine
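
Incidentally, the same redirection settings can be saved in a .rdp file so that they don’t need to be reselected each time. A minimal sketch (servername is a placeholder, and drivestoredirect:s:* redirects all drives rather than a single one):

full address:s:servername
redirectsmartcards:i:1
drivestoredirect:s:*

Launching the connection with mstsc filename.rdp then picks these settings up.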

This is really a Terminal Services (presentation virtualisation) feature – rather than something in Hyper-V – and so it is true to say that there is no USB device support in Hyper-V for other access methods (e.g. from a virtual machine console) and that the RDP connection method is a workaround for occasional access. Microsoft see USB support as a desktop virtualisation feature and the only way that will change is if they see enough customer feedback to tell them that it’s something we need on servers too.

My slides from the Microsoft UK user groups community day

I’m presenting two sessions at the Microsoft UK user groups community day today on behalf of the Windows Server Team.

The first is an introduction to Hyper-V and the second will look at server core installations of Windows Server 2008. I’ve included full speaker notes in the slide decks, as well as some additional material that I won’t have time to present and screen grabs from my demos. Both decks are available on my Windows Live SkyDrive, along with a couple of videos I recorded of the Hyper-V installation process.

Removing phantom network adapters from virtual machines

Last night, I rebuilt my Windows Server 2008 machine at home to use the RTM build (it was running on an escrow build from a few days before it was finally released) and Hyper-V RC0. It was non-trivial because the virtual machines I had running on the server had to be recreated in order to move from the Hyper-V beta to the release candidate (which meant merging snapshots) and so it’s taken me a few weeks to get around to it.

The recreation of the virtual machine configuration (but using the existing virtual hard disk) meant that Windows detected new network adapters when I started up the VM. Where I previously had a NIC called Local Area Connection using the Microsoft VMBus Network Adapter, I now had a NIC called Local Area Connection 2 using Microsoft VMBus Network Adapter #2; the original adapter was still configured but no longer visible. Ordinarily, that’s not a problem – the friendly name for the NIC can be edited – but when I went to apply the correct TCP/IP settings, a warning was displayed:

The IP address ipaddress you have entered for this network adapter is already assigned to another adapter Microsoft VMBus Network Adapter. Microsoft VMBus Network Adapter is hidden from the network and Dial-up Connections folder because it is not physically in the computer or is a legacy adapter that is not working. If the same address is assigned to both adapters and they become active, only one of them will use this address. This may result in incorrect system configuration. Do you want to enter a different IP address for this adapter in the list of IP addresses in the advanced dialog box?

That wasn’t a problem for my domain controller VM, but the ISA Server VM didn’t want to play ball – hardly surprising as I was messing around with the virtual network hardware in a firewall!

In a physical environment, I could have reinserted the original NIC, uninstalled the drivers, removed the NIC and then installed the new one, but that was less straightforward with my virtual hardware as the process had also involved upgrading the Hyper-V guest integration components. I tried getting Device Manager to show the original adapter using:

set devmgr_show_nonpresent_devices=1
start devmgmt.msc
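
(Note that the devmgr_show_nonpresent_devices variable only affects processes launched from the same command prompt session, which is why devmgmt.msc is started from that window rather than from the Start menu.)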

but it was still not visible (even after enabling the option to show hidden devices). Time to break out the command line utilities.

As described in Microsoft knowledge base article 269155, I ran devcon to identify the phantom device and then remove it. Interestingly, running devcon findall =net produced more results than devcon listclass net and the additional entries were the original VMBus Network Adapters. After identifying the device instance ID for the NIC (e.g. VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}: Microsoft VMBus Network Adapter), I could use devcon to remove the device:

devcon -r remove "@VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}"

Result! devcon reported:

VMBUS\{20AC6313-BD23-41C6-AE17-D1CA99DA4923}\5&37A0B134&0&{20AC6313-BD23-41C6-AE17-D1CA99DA4923}: Removed
1 device(s) removed.

I repeated this for all phantom devices (and uninstalled the extra NICs that had been created but were visible, using Device Manager). I then refreshed Device Manager (scan for hardware changes), plug and play kicked in and I just had the NIC(s) that I wanted, with the original name(s). Finally, I configured TCP/IP as it had been before the Hyper-V upgrade and ISA Server jumped into life.
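
In theory, the clean-up could even be scripted. A hypothetical (and untested) batch file fragment along these lines would parse the devcon output and remove every VMBus network adapter – present or not – relying on a subsequent scan for hardware changes to re-detect the real ones, so use it with care:

for /f "tokens=1 delims=:" %%d in ('devcon findall =net ^| findstr /b /i "VMBUS"') do devcon -r remove "@%%d"

(At an interactive prompt, use %d rather than %%d.)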

Just one extra point of note: the devcon package that Microsoft supplies in Microsoft knowledge base article 311272 includes versions for i386 and IA64 architectures but not x64. It worked for me on my ISA Server virtual machine, which is running 32-bit Windows Server 2003 R2, but was unable to remove the phantom device on my domain controller, which uses 64-bit Windows Server 2003 R2. I later found that devcon is one of the Support Tools on the Windows installation media (suptools.msi). After installing these, I was able to use devcon on x64 platforms too.

The Windows runas command and the /netonly switch

Earlier today I needed to administer a Windows Server remotely, using a Microsoft Management Console (MMC) snap-in. Unfortunately, the computer I was using was in one domain and the remote server was in a workgroup, meaning that many of the MMC operations failed due to security issues. I tried running MMC as the administrator for the remote machine (using runas /user:remotecomputername\username mmc) but kept on getting a message that indicated an authentication failure:

RUNAS ERROR: Unable to run – mmc
1311: There are currently no logon servers available to service the logon request.

Then I found out about an obscure switch for the runas command – /netonly, used to indicate that the supplied credentials are for remote access only. By changing my command to:

runas /netonly /user:remotecomputername\username mmc

I was able to authenticate against the remote computer without needing the credentials to also be valid on the local computer, as described by Craig Andera.
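
As an illustration (with hypothetical names), the following starts the Computer Management console under the remote machine’s credentials and points it at that machine:

runas /netonly /user:remotecomputername\administrator "mmc compmgmt.msc /computer=remotecomputername"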

Customising Windows Server 2008 server core

A few months back, I wrote a post with a few commands to get started with server core on Windows Server 2008. Since then, I’ve had some fun tweaking server core installations (including some cheekiness installing third party web servers and browsers).

Sander Berkouwer wrote a series of blog posts last summer that look at changing the look and feel of a server core installation:

  1. Changing regional and language options (international settings) as well as time and date options.
  2. Changing display settings such as screen resolution and color depth, screen saver, window and background colors, cleartype and windows dragging settings.
  3. Changing keyboard and mouse settings/cursors.
  4. Changing the splash screen, logon screen and tweaking the command prompt window.

Server core may be intended for core infrastructure servers in lights-out data centres but even so, some customisation can be useful. Sander’s notes should help most people get things started.
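
As a simple example, two of the few Control Panel applets actually present on a server core installation can be launched from the command prompt, covering the regional and time/date settings in Sander’s first post:

control intl.cpl
control timedate.cpl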

Surfing with server core

The whole point of the server core installation mode for Windows Server 2008 is a reduced attack surface – no Windows Explorer, no Internet Explorer, no .NET Framework. That’s all well and good but sometimes it’s useful to download a file over HTTP to a server core machine.

No problem – just download a version of GNU wget that has been compiled for Windows and use that to download the file. It needed a couple of configuration items to get past my corporate proxy server but worked flawlessly:

set http_proxy=http://proxyserver:portnumber
wget --proxy-user=domainname\username --proxy-passwd=password http://uri/

That’s probably as far as most people need to go – adding a simple command line utility to a command-line Windows installation – but I wanted to take things a step further (purely out of curiosity) and I installed Mozilla Firefox (v2.0.0.13). It worked, so I decided to try Apple Safari (v3.1) and Opera (v9.26). Safari installed (except the Bonjour component) but has a dependency on the Internet Options control panel applet (which is not present in server core) so I couldn’t define any proxy server settings. Meanwhile, Opera had no noticeable issues installing and loading a few test web pages. Next, I tried Internet Explorer 8 beta 1 and, as I expected, the installation failed. Bizarrely, it didn’t detect that I was trying to install it on server core but attempted the installation anyway, before failing and advising a restart followed by a visit to a web page (presumably using a competitor’s browser!) which redirects to Microsoft knowledge base article 949220.

Finally, I decided to go to the other extreme and try a text-mode browser. I found a version of Lynx that has been compiled for Windows but in order to get past my proxy server it needed the same environment variable as wget:

set http_proxy=http://proxyserver:portnumber

Even with this, it is incapable of performing authenticated proxy operations so I kept getting an HTTP 407 response. The workaround is to use the NTLM Authorization Proxy Server (NTLMAPS), which depends on Python (for which I found a 64-bit MSI package for Windows). Basically, NTLMAPS acts as a local proxy, configured to add the authentication headers and pass the request to the upstream server.

By editing the server.cfg file to include the following entries (all other configuration items were left at their defaults) and running start runserver.bat to launch the NTLMAPS server, I was able to get NTLMAPS to prompt me for my password at startup and listen for HTTP requests (but not HTTPS) on port 5865:

[GENERAL]
PARENT_PROXY:proxyserver
PARENT_PROXY_PORT:portnumber

[NTLM_AUTH]
NT_DOMAIN:domainname
USER:username
PASSWORD:

Then, I ran the following:

set http_proxy=http://localhost:5865/
lynx

and was able to successfully browse the Internet through my corporate proxy server.
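
Incidentally, the same local proxy should also work for wget, avoiding the need to put the proxy password on the command line:

set http_proxy=http://localhost:5865/
wget http://uri/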

In all seriousness, I can’t really think of a good reason to install a full browser on server core but the wget command is probably useful. Even so, it’s still good to know that there are a few options for emergency surfing from a server core installation.

Recording Windows Media screencasts

Next month, I’ll be delivering a couple of presentations on behalf of the Windows Server Team UK at the Microsoft UK user groups community day. It won’t be the same without Scotty (who first invited me to take part) and I’ve never presented to a large group before so, frankly, I’m more than a little nervous (and if I’ve asked too many questions in one of your presentations – I’m thinking here of Eileen, Steve, John, James, Jason, et al. – now is the chance for you to get your own back).

Anyway, I’m working on some insurance policies to help make sure that the demo gods look favourably on me – one of which is pre-recording some of my demos. In truth, it’s not just to make sure that the demos run smoothly, but also to condense 10 minutes of activities down into 2 (watching progress bars during the installation of Windows components is hardly exciting). So, I’ve been recording some screencasts (aka. blogcasts, vodcasts, vidcasts, video podcasts, etc.) to fall back on. It turns out to be quite simple – based largely on a post that John Howard wrote a while back with recorder settings for Windows Media Encoder (WME).

First of all, download a copy of Windows Media Encoder (I used 9.00.00.2980), which seems to run fine on my x64 installation of Windows Server 2008 – although I’ve since noticed that there is an x64 version available, which I’ll install and use next time.

Next, drop the screen resolution and colour depth. John recommended 800×600 pixels at 16-bit colour depth but I used a slightly different method, capturing just one window (a remote desktop connection to another machine, with the RDP connection running at 800×600). I also found that the capture was a little taxing on my graphics hardware, so it was worth dropping back to the Windows Vista basic display settings for a while (I reverted to Aero once I had captured the video).

When WME loads, it starts a wizard to create a session – I chose to ignore that and configure session properties manually. The key items are:

  • Sources tab: Provide a name for your source, check video and select Screen Capture (click configure to select a window or region for capture), check audio and select an appropriate source (I chose to record without any sound and added a soundtrack later).
  • Output tab: Deselect pull from encoder, check encode to file and enter a filename.
  • Compression tab: Select a destination of web server (progressive download), with screen capture (CBR) video encoding, voice quality audio (CBR) audio encoding and a bit rate of 93kbps. Edit the encoding to use Windows Media Audio Voice 9 and Windows Media Video 9 Screen, with a custom video format and no interlacing or non-square pixels. Finally, edit the buffer size to 8 seconds and the video smoothness to 100.
  • Attributes tab: Add some metadata for the recording.

All other settings can be left at their defaults.

After recording (encoding) the required demonstrations, there should be some .WMV files in the output directory. I had planned to edit these on the Mac but decided to stick with Windows Media and downloaded Windows Movie Maker 2.6 instead. It’s a little basic and a bit buggy at times (some caching seemed to be going on: it took several takes to correctly narrate the screencast, sometimes necessitating exiting and restarting the application before it would pick up the correct recording) but, on the whole, it was perfectly good enough for recording screencasts.

The resulting output was then saved as another Windows Media File, ready for import into my PowerPoint deck.

I’m not going to start screencasting on this blog just yet. Firstly, it will kill my bandwidth (although I could use YouTube or another online service). Secondly, writing is time-consuming enough – video will just be too labour-intensive. Thirdly, I don’t think I’ve found any content yet that really needs video. In the meantime, I’m hoping that this method will allow me to show some working demos at Microsoft’s offices in Reading on 9 April.