Duplicating virtual machines using SysPrep

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of the joys of virtualisation is the flexibility afforded by the ability to copy virtual machine files around the network for backup purposes or just to create a new machine (especially with Microsoft’s new Virtual Server licensing arrangements). Unfortunately, just as for “real” computers, simple file copies of Windows-based virtual machines can cause problems and are not supported (see Microsoft knowledge base article 162001).

All is not lost though, as Microsoft does support the duplication of virtual hard disks using the system preparation tool (SysPrep), and Megan Davis has written about sysprepping virtual machines on her blog. I tested it today and it works really well – basically a three-step process:

  1. Install and configure a source virtual machine as required (i.e. operating system installed, virtual machine additions installed, service packs and other updates applied), making sure it is in a workgroup (i.e. not a domain member).
  2. Locate the appropriate version of the Windows deployment tools (I used the ones from the \support\tools\deploy.cab file on a Windows Server 2003 CD) and create an answer file (C:\sysprep\sysprep.inf – a simple example is shown below). Then copy the sysprep.exe and setupcl.exe deployment tools to C:\sysprep.
  3. Run SysPrep to reseal and shut down the guest operating system, then copy the virtualmachinename.vhd file to a secure location (make it read-only to prevent accidental overwrites, but also apply appropriate NTFS permissions). This file can then be duplicated at will to quickly create new virtual machines with a fully-configured operating system.
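
For anyone who hasn’t built an answer file before, a minimal sysprep.inf might look something like this (the values are illustrative only – adjust for your own environment):

; Minimal, illustrative sysprep.inf – not the file from this build
[Unattended]
OemSkipEula=Yes

[GuiUnattended]
AdminPassword=*
OEMSkipRegional=1
OemSkipWelcome=1
TimeZone=85 ; 85 = GMT

[UserData]
FullName="A User"
OrgName="An Organisation"
ComputerName=* ; generate a name automatically

[Identification]
JoinWorkgroup=WORKGROUP

[Networking]
InstallDefaultComponents=Yes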

For anyone who is unfamiliar with SysPrep, check out Killan’s guide to SysPrep (which, despite claiming not to be written for corporate administrators or OEM system builders, seems like a pretty good reference to me).

Incidentally, there are major performance gains to be had by moving virtual machines onto another disk (spindle – not just partition). Unfortunately my repurposed laptop hard disks were too slow (especially on a USB 1.1 connection), so I had to go out this afternoon and buy a USB 2.0 PCI adapter along with a decent external hard disk (a Toshiba 320GB 7200 RPM external USB 2.0 hard drive with 8MB data buffer) – that speeded things up nicely.

Introducing Windows PowerShell

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I missed this news last week, but it seems that Microsoft has finally named its next-generation command shell (formerly codenamed Monad). I liked Microsoft Shell (MSH) or Microsoft command shell – they were accurate descriptions of what it does. Unfortunately we seem to be stuck with the name Windows PowerShell. Urghhh.

Forum evangelism

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This example forum post history was stolen from James O’Neill. Given the comments I get whenever I write about Macs (like the prospect of buying a Mac and installing an operating system that’s not OS X on it), it seemed kind of relevant:

A: I’m thinking of getting a new computer.
B: I’ve got a Mac, you should get one too.
C: Macs are pretty, but Windows is more flexible.
D: Windoze is evil man. Look at all the money M$ makes. You should get Linux [gives list of distributions].
B: Linux is hard. My granny can use a Mac, and she’s been dead for 10 years.
D: If she can’t build a kernel she shouldn’t have a computer, tree hugger.
C: Have you looked at Windows XP-Dead Grandparent Edition? It’s got lots of features [lists them. All of them].
E: Yeah, but that’s the problem XP DGE is so bloated. It’s been downhill since Windows 3.0, and we didn’t get viruses in those days.
D: And those features are just a cover for Micro$oft to steal your brain.
C: [Gives feature by feature justification, explains 15 years of changes in viruses. Denies brain stealing rumour. Misses meal].
A: None of you have given me a reason to choose one OS over another.
F-Z: WE DON’T CARE!
K: Why do you need a computer? In my day we did everything in the darkroom – computers are just cheating.
J: Hey, I’m new here and I’m not sure if this is the right place – does anyone have a recipe for pancakes?
L: Grab yourself a 3174 and run it green screen to an OS/390 host. If you’re short of cash then AS/400s are going for about £129 on eBay. Those fancy Mac things are really based on RS6000 technology anyway. Apple steal everything just like M$.
X: Nah – OS/390 hasn’t cut it since they renamed it Z/OS…

Sound familiar to anyone?

The rise of 64-bit Windows server computing

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few months back, when Microsoft announced that many of its forthcoming server products would be 64-bit only, I confessed that I didn’t know a lot about 64-bit computing. That all changed last week when I attended two events (the first sponsored by Intel and the second by AMD) where the two main 64-bit platforms were discussed – and suddenly Microsoft’s decision to follow the 64-bit path makes a lot of sense.

For some time now, memory has been a constraint on scaling-up server capacity. The 32-bit x86 architecture that we’ve been using for so long now is limited to a maximum of 4GB of addressable memory (2³² = 4,294,967,296 bytes) – once a huge amount of memory but not so any more. Once you consider that on a Windows system, half of this is reserved for the system address space (1GB if the /3GB switch is used in boot.ini) the remaining private address space for each process starts to look a little tight (even when virtual memory is considered). Even though Windows Server 2003 brought improvements over Windows 2000 in its support for the /3GB switch, not all applications recognise the extra private address space that it provides and the corresponding reduction in the system address space can be problematic for the operating system. Some technologies, like physical address extension (PAE) and address windowing extensions (AWE) can increase the addressable space to 64GB, but for those of us who remember trying to squeeze applications into high memory back in the days of MS-DOS, that all sounds a little bit like emm386.exe!
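
For reference, the /3GB and /PAE switches mentioned above are applied to the operating system entry in boot.ini – an illustrative (hypothetical) example follows:

[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /3GB /PAE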

The answer to this memory addressing problem is 64-bit computing, which allows much more memory to be addressed (as shown in the table below) but unfortunately means that new servers and a switch to a 64-bit operating system are required. For Windows users, that means Windows XP Professional x64 Edition on the desktop and either Windows Server 2003 Standard, Enterprise or Datacenter x64 or Itanium Edition at the back end (Windows Server 2003 Standard Edition is not compatible with Intel’s Itanium processor family and Windows Server 2003 Web Edition is 32-bit only).

Memory limits with 32- and 64-bit architectures
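
The original table was an image; based on the figures quoted elsewhere in this post, it showed something along these lines:

Architecture            Addressable memory
32-bit x86              4GB (up to 64GB with PAE/AWE)
x64 (AMD64/EM64T)       1TB (Windows Server 2003 x64 implementation)
64-bit Itanium (IPF)    1024TB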

It’s not all doom and gloom – for those who have recently invested in new servers, it’s worth noting that certain recent models of Intel’s Xeon and Pentium processors have included a technology called Intel Extended Memory 64 Technology (EM64T), so pretty much any server purchased from a tier 1 vendor (basically HP, IBM or Dell) over the last 12 months will have included 64-bit capabilities (with the exception of blade servers, which are generally still 32-bit for power consumption reasons – at least in the Intel space).

EM64T is effectively Intel’s port of the AMD x64 technology (AMD64, as used in the Opteron CPU line) and probably represents the simplest step towards 64-bit computing as Intel’s Itanium processor family (IPF) line uses a totally separate architecture called explicitly parallel instruction computing (EPIC).

In terms of application compatibility, 32-bit applications can run on a 64-bit Windows platform using a subsystem called Windows on Windows 64 (WOW64). Although this is effectively emulating a 32-bit environment, the increased performance allowed by a 64-bit architecture means that performance is maintained; meanwhile, 64-bit applications on the same server run natively. Itanium-based servers are an exception – they require an execution layer to translate 32-bit x86 instructions for the EPIC architecture, which can result in some degradation in 32-bit performance (which needs to be offset against Itanium 2’s high performance capabilities, e.g. when used natively as a database server). There are some caveats – Windows requires 64-bit device drivers (for many, hardware support is the main problem for 64-bit adoption) and there is no support for 16-bit applications (that means no MS-DOS support).

There is much discussion about which 64-bit model is “best”. There are many who think AMD is ahead of Intel (and AMD is happy to cite an Information Week article which suggests it is gaining market share), but then there are situations where Itanium 2’s processor architecture allows for high performance computing (cf. the main x64/EM64T advantage of increased memory addressability). Comparing Intel’s Xeon processors with EM64T against the Itanium 2 reveals that the Itanium supports 1024TB of external memory (cf. 1TB for the EM64T x64 implementation) as well as additional cache and general purpose registers, but the main advantages with Itanium relate to its support for parallel processing. With a traditional architecture, source code is compiled to create sequential machine code and any parallel processing relies upon the best efforts of the operating system. EPIC compilers for the Itanium architecture produce machine code that is already optimised for parallel processing.

Whatever the processor model, both Intel and AMD’s marketing slides agree that CPU speed is no longer the bottleneck in improving server performance. A server is a complex system and performance is constrained by the component with the lowest performance. Identifying the bottleneck is a matter of memory bandwidth (how fast data can be moved to and from memory), memory latency (how fast data can be accessed), memory addressability (how much data can be accessed) and I/O performance (how much data must be moved). That’s why processor and server manufacturers also provide supporting technologies designed to increase performance – areas such as demand-based power management, systems management, network throughput, memory management (the more memory that is added, the less reliable it becomes – hence the availability of RAID and hot-swap memory technologies), expansion (e.g. PCI Express) and virtualisation.

AMD’s x64 processors shine because their architecture is more efficient. Traditional front-side bus (FSB) computers rely on a component known as the northbridge: I/O and memory access share the same front-side bus, limiting memory bandwidth; memory access is delayed because it must pass through the northbridge (memory latency); and I/O performance is restricted by bandwidth bottlenecks when accessing attached devices. Additionally, in a multiprocessor system, each core competes for access to the same FSB, making the memory bandwidth issue worse.

Non-uniform memory access (NUMA) computers place the memory with the processor cores, avoiding the need to access memory via a northbridge. AMD’s Opteron processor allows a total address space of up to 256TB, giving each CPU up to 6.4GB/s of dedicated memory bandwidth with a local memory latency of ~65ns (similar to L3 cache performance), plus three HyperTransport connections, each allowing 8GB/s of I/O bandwidth.

Scaling out to multiple processors is straightforward, with direct connections between processors using the 8GB/s HyperTransport links (some OEMs, e.g. Fujitsu-Siemens, use this to allow two dual-CPU blade servers to be connected, making a four-CPU blade system).

Adding a second processor core results in better throughput (higher density, lower latency and higher overall performance) for equal power consumption. The power consumption issue is an important one, with data centre placement increasingly being dictated by the availability of sufficient power capacity, and high-density rack-based and blade servers leading to cooling issues.

AMD’s figures suggest that switching from a dual-core 4-way Intel Xeon processor system (8 cores in total) to a similar AMD Opteron system results in a drop from 740W to 380W of power [source: AMD]. With 116,439 4-way servers shipped in Western Europe alone during 2005 [source: IDC], that 360W difference represents a combined saving of around 41.9MW (116,439 × 360W ≈ 41.9MW).

In summary, 64-bit hardware has been shipping for a while now. Both Intel and AMD admit that there is more to processor performance than clock speed, and AMD’s direct connect architecture removes some of the bottlenecks associated with the traditional x86 architecture as well as reducing power consumption (and hence heat production). The transition to 64-bit software has also begun, with x64 servers (Opteron and EM64T) providing flexibility in the migration from a 32-bit operating system and associated applications.

Putting PKI into practice

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Recently, I blogged about public/private key cryptography in plain(ish) English. That post was based on a session which I saw Microsoft UK’s Steve Lamb present. A couple of weeks back, I saw the follow-up session, where Steve put some of this into practice, securing websites, e-mail and files.

Before looking at my notes from Steve’s demos, it’s worth picking up on a few items that were either not covered in the previous post, or which could be useful in understanding what follows:

  • In the previous post, I described encryption using symmetric and asymmetric keys; however, many real-world scenarios involve a hybrid approach. In such an approach, a random number generator is used to seed a symmetric session key containing a shared secret. That key is then encrypted asymmetrically for distribution to one or more recipients. Once distributed, the (fast) symmetric session key can be used for the bulk of the encryption work (see the sketch after this list).
  • If using the certificate authority (CA) built into Windows, it’s useful to note that this can be installed in standalone or enterprise mode. The difference is that enterprise mode is integrated with Active Directory, allowing the automatic publishing of certificates. In a real-world environment, multiple tiers of CA would be used, and standalone or enterprise CAs can both act as either root or issuing CAs. It’s also worth noting that whilst the supplied certificate templates cannot be edited, they can be copied and the copies amended.
  • Whilst it is technically possible to have one public/private key pair and an associated certificate for all circumstances, there are often administrative reasons for separating them. For example, in life one would not have a common key for one’s house and car, in case one needed to be changed without giving away access to the other. Another analogy is single sign-on, where it is convenient to have one credential for all systems, but it may be more secure to have one set of access permissions per computer system.
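
Steve’s session didn’t include code, but as a rough sketch of the hybrid approach using .NET classes from an MSH/PowerShell prompt (the names and values here are my own illustration):

# Illustrative sketch only – generate a random symmetric session key,
# encrypt the data with it, then encrypt just the key asymmetrically
$aes = new-object System.Security.Cryptography.RijndaelManaged   # random key/IV are generated
$data = [System.Text.Encoding]::UTF8.GetBytes("Some secret message")
$ciphertext = $aes.CreateEncryptor().TransformFinalBlock($data, 0, $data.Length)

# In real life this would be the recipient's public key, taken from their certificate
$rsa = new-object System.Security.Cryptography.RSACryptoServiceProvider
$encryptedSessionKey = $rsa.Encrypt($aes.Key, $false)   # only the small session key is encrypted asymmetrically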

The rest of this post describes the practical demonstrations which Steve gave for using PKI in the real world.

Securing a web site using HTTPS is relatively simple:

  1. Create the site.
  2. Enrol for a web server certificate (this authenticates the server to the client; it is important that the common name in the certificate matches the fully qualified server name, to prevent warnings on the client – see the sketch after this list).
  3. Configure the web server to use secure sockets layer (SSL) for the site – either forced or allowed.
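
As a quick (illustrative) check of step 2, the certificate a server presents can be inspected from an MSH/PowerShell prompt – www.example.com is a placeholder:

# Illustrative sketch – fetch a page over SSL and inspect the server certificate
$request = [System.Net.HttpWebRequest]::Create("https://www.example.com/")
$response = $request.GetResponse()
$response.Close()
$request.ServicePoint.Certificate.Subject   # the CN here should match the fully qualified server name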

Sometimes web sites will be published via a firewall (such as ISA Server), which to the outside world would appear to be the web server. Using such an approach for an SSL-secured site, the certificate would be installed on the firewall (along with the corresponding private key). This has the advantage of letting intelligent application layer firewalls inspect inbound HTTPS traffic before passing it on to the web server (either as HTTP or HTTPS, possibly over IPSec). Each site is published by creating a listener – i.e. telling the firewall to listen on a particular port for traffic destined for a particular site. Sites cannot share a listener, so if one site is already listening on port 443, other sites will need to be allocated different port numbers.

Another common scenario is load-balancing. In such circumstances, the certificate would need to be installed on each server which appears to be the website. It’s important to note that some CAs may prohibit such actions in the licensing for a server certificate, in which case a certificate would need to be purchased for each server.

Interestingly, it is possible to publish a site using 0-bit SSL encryption – unencrypted, but still appearing to be secure (the URL includes https:// and a padlock is displayed in the browser). Such a scenario is rare (at least among reputable websites), but is something to watch out for.

Configuring secure e-mail is also straightforward:

  1. Enrol for a user certificate (if using a Windows CA, this can either be achieved via a browser connection to http://servername/certsrv or by using the appropriate MMC snap-ins).
  2. Configure the e-mail client to use the certificate (which can be viewed within the web browser configuration, or using the appropriate MMC snap-in).
  3. When sending a message, opt to sign and/or encrypt it (the e-mail client indicates these options with toolbar icons).

Signed e-mail requires that the recipient trusts the issuer, but they do not necessarily need to have access to the issuing CA; however, this may be a reason to use an external CA to issue the certificate. Encrypted e-mail requires access to the recipient’s public key (in a certificate).

To demonstrate what happens when a signed message is tampered with, Steve turned off Outlook’s cached mode and directly edited a message on the server (using the Exchange Server M: drive to edit a .EML file).

Another possible use for a PKI is in securing files. There are a number of levels of file access security, from BIOS passwords (problematic to manage); syskey mode 3 (useful for protecting notebook PCs – a system restore disk will be required if the syskey password is forgotten or the key storage is lost); good passwords/passphrases to mitigate known hash attacks; and finally, the Windows encrypting file system (EFS).

EFS is transparent to both applications and users, except that encrypted files are displayed as green in Windows Explorer (cf. blue for compressed files). EFS is also computationally infeasible to break, so recovery agents should be implemented. There are, however, some EFS “gotchas” to watch out for:

  • EFS is expensive in terms of CPU time, so may be best offloaded to hardware.
  • When using EFS with groups, if the group membership changes after the file is encrypted, new users are still denied access. Consequently using EFS with groups is not recommended.
  • EFS certificates should be backed up – along with the keys! If there is no PKI or data recovery agent (DRA) then access to the files will be lost (UK readers should consider the consequences of the Regulation of Investigatory Powers Act 2000 if encrypted data cannot be recovered). Windows users can use the cipher /x command to store certificates and keys in a file (e.g. on a USB drive). Private keys can also be exported (in Windows) using the certificate export wizard.

Best practice indicates:

  • The DRA should be exported and removed from the computer.
  • Plain text shreds should be eliminated (e.g. using cipher /w to overwrite free disk space with 0x00, 0xFF and random data – see the examples after this list).
  • Use an enterprise CA with automatic enrolment and a DRA configured via group policy.
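
For reference, the cipher switches mentioned above might be used like this (illustrative paths; cipher is an external command, so this works from any Windows shell):

# Illustrative examples only
cipher /e /s:C:\SecretFiles    # encrypt a folder and its contents with EFS
cipher /x C:\efsbackup         # back up the EFS certificate and keys to a file
cipher /w:C:\                  # overwrite free space to remove plain-text shreds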

More information can be found in Steve’s article on improving web security with encryption and firewall technologies in Microsoft’s November 2005 issue of TechNet magazine.

Create customised Windows installations with nLite

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I heard about nLite whilst I was listening to episode 41 of the This Week in Tech podcast. I haven’t used it yet, but it sounds like a great freeware tool for customising a Windows installation, right up to creating a bootable ISO image (including slipstreaming service packs, hotfixes and drivers) – it sure beats Microsoft’s Setup Manager.

nLite has a dependency on the Microsoft .NET Framework 2.0 but also has a selection of popular packages ready for integration into the Windows source as add-ons (Firefox 1.5, Adobe Reader, AVG AntiVirus, etc.). If I hadn’t already put a lot of effort into an unattended XP build and didn’t already use WSUS for Windows updates, I’d be seriously tempted to give it a go.

Microsoft’s next generation command shell

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in June 2004, I got in a panic because I heard that VBScript was about to be phased out. Don Jones commented that VBScript would still be there in Windows, it just wouldn’t be developed any further; later, I heard about the new Microsoft command shell (MSH, codenamed Monad).

At yesterday’s IT Forum ’05 highlights (part 2) event, Thomas Lee gave a preview of Monad. Although I am enrolled on the Microsoft command shell beta program, pressures of work and family life have left very little time to do anything with it up to now, but having seen Thomas’ demo, I’ve installed MSH beta 3 on my day-to-day notebook computer and will try to use it instead of cmd.exe, regedit.exe and some of my other everyday tools.

Those of us who have worked with Windows since… well since MS-DOS… will remember the command prompt, as will those who use other operating systems. Graphical user interfaces (GUIs) are all very well (and a well designed GUI can be remarkably intuitive), but a command line interface (CLI) is my preference. Despite a whole load of new and powerful commands in recent Windows releases (like netsh.exe), Windows still lags behind Unix in many ways when it comes to command line operations and MSH is an attempt to catch up with, and then exceed, the tools provided by other operating systems.

MSH is a next-generation shell that is intended to:

  • Be as interactive and scriptable as BASH or KSH.
  • Be as programmatic as Perl or Ruby.
  • Be as production-oriented as AS/400 CL or VMS DCL.
  • Allow access to data stores as easily as accessing file systems.

It sounds like a tall order but, amazingly, Microsoft seem to have cracked it. MSH is also pretty easy to use and it’s secure by default, avoiding many of the issues associated with VBScript. So secure, in fact, that before running MSH scripts you may wish to make the following registry change:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSH\1\ShellIds\Microsoft.Management.Automation.msh]
"Path"="C:\\Program Files\\Microsoft Command Shell\\v1.0\\msh.exe"
"ExecutionPolicy"="Unrestricted"

(There’s more on MSH security below).

MSH is instantly recognisable. If I was to type cmd, I would be greeted with something similar to the following:

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\>

If I type msh instead, everything is very familiar:

Microsoft Command Shell
Copyright (C) 2005 Microsoft Corporation. All rights reserved.

MSH C:\>

Type dir and guess what happens? For that matter, type ls! If you want to see the contents of a file, then either type filename or cat filename will work too. get-history returns a list of previous commands, which can be re-run, saved or edited.

Whereas Unix (and Unix-like) systems tend to store configuration in text files, Windows configurations are incredibly complex with multiple data stores (the registry, text files, DNS, Active Directory, etc.). These data stores normally fall into one of two categories: hierarchical, highly recursive data structures (file system directories, registry keys, Active Directory organizational units, etc.) or fairly flat structures (e.g. DNS zones and records). MSH makes them all easy to navigate using a system of data providers.

For example, I can type cd hklm: followed by dir to see all the HKEY_LOCAL_MACHINE keys, navigating the registry as though it were a file system. This simplicity is both elegant and incredibly powerful. There are no AD or DNS providers yet, but the next version of Exchange (codenamed Exchange 12) will include MSH support, treating Exchange databases as just another data store (get-mailbox, etc.). Exchange 12 support is not implemented as a data provider, but as a commandlet (cmdlet), because its data structure is not really hierarchical (at least down to the mailbox level) – the full list of Exchange commands can be found using get-excommand.
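
To illustrate the provider model, registry navigation looks something like this (an abridged, illustrative session):

MSH C:\> cd hklm:
MSH HKLM:\> dir
MSH HKLM:\> cd software\microsoft
MSH HKLM:\SOFTWARE\Microsoft> dir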

For example, if I want to see the details of my system, I can use the get-wmiobject command. That’s a bit long to type, so I can actually use get-w and then complete the command with the tab key (as you can in cmd.exe these days). get-wmiobject win32_computersystem returns the details of my system as an object with attributes that I can assign to a variable (e.g. $mysystem=get-wmiobject win32_computersystem). After that, $mysystem.name returns the name of my computer, $mysystem.manufacturer returns Fujitsu Siemens and $mysystem.model returns Lifebook S7010. That would have been so much harder to obtain in VBScript. Take it to the next level and you can see how the data can be queried and actions taken according to the results (e.g. if ($mysystem.model -eq "Lifebook S7010") {"Do something"}).

MSH has built in help (e.g. get-wmiobject -?) and more is no longer an external command so get-wmiobject -? | more works too.

Some commands will return an array of objects, for example get-wmiobject -list. That’s a pretty long and unmanageable list but if I use $wmilist=get-wmiobject -list, I can use $wmilist.length to see how many objects were returned, $wmilist[objectnumber] to view a single object and of course I can also use $wmilist[objectnumber].attributename to refer to a single item.

On a typical Unix system, pipes are used to pass text between commands. Scripts can be used to split strings and extract data (also known as prayer-based parsing, because if the command output is changed, the scripts break). MSH pipes are .NET objects with metadata. That means that a hierarchy of objects can be passed between commands. So, I can also show my WMI array as a table by piping it through format-table (ft) – i.e. $wmilist | ft (fl is format-list).
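
As a further illustration (my example, not from the demo), objects can be filtered on a property and then formatted, all within the pipeline:

# Illustrative: pass WMI class objects down the pipe, filter on a property, then format
get-wmiobject -list | where { $_.Name -like "Win32_*" } | ft Name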

Having looked at how simple to use, yet powerful, MSH is, let’s look at some of the product specifications:

  • MSH is intended to support a number of different administrative levels:
    • Operators – command line only.
    • Simple scripters – simple sequences, untyped variables and functions with unnamed parameters.
    • Advanced scripters – typed variables and functions with named parameters.
    • Sophisticated scripters – scoped variables, functions with initialised parameters, function cmdlets and scriptblocks.
  • The four administrative levels are supported with different script types:
    • Text – .NET interpretations of the traditional Unix scripting model.
    • COM – Windows script host (WSH)/VBScript-style scripting.
    • .NET – for manipulating any native .NET object.
    • Commands – MSH cmdlets emitting objects.
  • These script types are supported with a variety of data types, including .NET, XML, WMI/ADSI and ADO.
  • MSH is intended to be tremendously customisable and it should eventually allow us to dispense with the separation between GUI, CLI and systems programming skills, so that even an infrastructure bod like me will be able to issue a GUI command (or use a Win32 program), see the equivalent MSH command, and then build my own scripts to run a task repeatedly! Unfortunately this GUI-CLI functionality has been dropped from the first release, but hopefully we’ll see it in a service pack later.

From a security perspective, MSH is extremely secure, with four operational modes:

  • Restricted mode allows interactive operations only, with no script execution.
  • AllSigned mode requires that scripts must be signed by a trusted source.
  • RemoteSigned mode requires that scripts from the Internet must be signed by a trusted source.
  • Unrestricted mode will allow any script to run, but will always warn before remote scripts are executed.

Other security measures include:

  • No file association for .msh files.
  • The current folder (.) is not on the path by default.

So when can we get MSH? Beta 3 is available for download now and the final release is expected with Exchange 12. Although MSH will not be included with Windows Vista, I’m told that it does work on a Vista/Longhorn platform.

Finally, for one more example of how easy Monad can be to use, try this:

"Monad Rocks " * 200

Links
Thomas Lee’s Monad information centre
Monad on Channel 9
Monad product team blog
MSH on Wikipedia
microsoft.public.windows.server.scripting newsgroup

Changing drive icons in Windows Explorer

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks back, John Howard blogged about changing the Windows Explorer drive icons for his multimedia cards. I decided to give it a go myself and it is pretty cool, although I still can’t find a memory stick icon that I like, so that’s been left at the default setting.

Drive icons

One point to be aware of (that I missed in John’s post) – the DefaultIcon and DefaultLabel registry items are subkeys (not values) – the actual registry settings that I used are:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\E]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\E\DefaultIcon]
@="%systemroot%\\system32\\shell32.dll,194"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\E\DefaultLabel]
@="SmartMedia Card"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\F]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\F\DefaultIcon]
@="%systemroot%\\system32\\shell32.dll,189"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\F\DefaultLabel]
@="CompactFlash Card/MicroDrive"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\G]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\G\DefaultIcon]
@="%systemroot%\\system32\\shell32.dll,193"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\G\DefaultLabel]
@="Secure Digital Card"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\H]

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\H\DefaultIcon]
@=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\H\DefaultLabel]
@="Memory Stick"

Icons don’t have to be stored within a dynamic link library (DLL) – one MSFN forum post indicates that .ICO files can be used too.
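
So, for example, a DefaultIcon entry like those above could point at a standalone icon file instead (the path here is hypothetical):

; illustrative only – any path to a .ico file will do
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\DriveIcons\E\DefaultIcon]
@="C:\\Icons\\smartmedia.ico"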

Finally, my USB flash drive also has an icon. Because this could have a different drive letter depending on what other devices are connected, I didn’t use the registry approach. Instead, I saved an autorun.inf file in the root folder of the device, with the following contents:

[autorun]
label=USB Flash Drive (128MB)
icon=shell32.dll,12

Using this method, the drive label and icon change on whichever computer I use the device in (provided that shell32.dll is available).

Understanding DHCP console icons

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks back, I was troubleshooting some DHCP issues and came across Microsoft knowledge base article 259786, which gives a link to a handy reference of DHCP console icons. Unfortunately, at the time of writing, the link in the knowledge base article is broken – the DHCP console icons reference is available in the Microsoft Windows Server TechCenter.

Microsoft’s common engineering criteria and Windows Server product roadmap

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’ve often heard people at Microsoft talk about the common engineering criteria – usually when stating that one of the criteria is that a management pack for Microsoft Operations Manager (MOM) 2005 must be produced with each new server product (at release). A few minutes back, I stumbled across Microsoft’s pages which describe the full common engineering criteria and the associated report on common engineering criteria progress.

Also worth a read is the Windows Server product roadmap and the Windows service pack roadmap.

Finally, for an opportunity to provide feedback on Windows Server and to suggest new features, there is the Windows Server feedback page (although there’s no guarantee that a suggestion will be taken forward).