Using Windows Autopilot to deploy PCs in the middle of a pandemic

A year ago, who would have thought that so many people would still be working from home because of COVID-19? That a pandemic response would lead to such a huge impact on the way we live? That we’d be having discussions about the future role of the office?

Lots of things changed in 2020. Some of them may never change back.

Changes to PC operating system deployment methods

There is a saying (attributed to the Greek philosopher, Heraclitus) that the one constant in life is change…

Over nearly 30 years working in IT, I’ve worked on a lot of PC rollouts. And the technology keeps on changing:

  • Back in 1994, I was using Laplink software with parallel cables (so much faster than serial connections) to push Windows for Workgroups 3.11 onto PCs for the UK Ministry of Defence.
  • In 2001, Ghost (which by then had been purchased by Norton) was the way to do it. Working with a Microsoft partner called Conchango, my team at Polo Ralph Lauren rolled out 4000 new and rebuilt PCs. We did this across 8 European countries, supporting languages and PC hardware types with just two images.
  • By 2005, I was working for Conchango and using early versions of the Microsoft Business Desktop Deployment (BDD) solution accelerator to push standard operating environment (SOE) images to PCs for a UK retail and hospitality company.
  • By 2007, BDD had become Microsoft Deployment. Later, that was absorbed into System Center Configuration Manager.

After this, the PC deployment stuff gets a bit fuzzy. My career had moved in a different direction and, these days, I’m less worried about the detail (I have subject matter experts to rely on). My concerns are around the practicalities of meeting business requirements by making appropriate technology selections.

Which brings me back to the current day.

A set of business requirements

Imagine it’s early 2021 and you’re faced with this set of requirements:

  • Must deploy new Windows 10 PCs to a significant proportion of the business’ staff.
  • Must comply with UK restrictions and guidance in relation to the COVID-19 novel coronavirus.
  • Should follow Microsoft’s current recommended practice.
  • Must maintain compliance with all company standards for security and for information management. In particular, must not impact the company’s existing ISO 27001, ISO 9001 or Cyber Essentials Plus certifications.
  • Should not involve significant administrative overhead.

A solution, built around Windows Autopilot

The good news is that this is all possible. And it’s really straightforward to achieve using a combination of Microsoft technologies.

  • Azure Active Directory provides a universal identity platform, including conditional access and multi-factor authentication.
  • Windows Autopilot takes a standard Windows 10 image (no need for customised “gold builds”) and applies appropriate policies to configure and secure it in accordance with organisational requirements. It does this by working with other Microsoft Endpoint Manager (MEM) components, like Intune.
  • OneDrive keeps user profile data backed up to the cloud, with common folders redirected so they remain synced, regardless of the PC being used.
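As an aside, the OneDrive folder redirection piece is just policy. Here's a minimal sketch of the equivalent registry settings, assuming the standard OneDrive Known Folder Move policy values – in practice this would be pushed from Intune as an administrative template, and the tenant GUID below is a placeholder:

  # Silently redirect known folders (Desktop/Documents/Pictures) to OneDrive
  # for Business; the tenant ID is a placeholder - substitute your own
  $key = 'HKLM:\SOFTWARE\Policies\Microsoft\OneDrive'
  if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
  New-ItemProperty -Path $key -Name 'KFMSilentOptIn' -PropertyType String `
      -Value '00000000-0000-0000-0000-000000000000' -Force | Out-Null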

What does it look like?

My colleague, Thom McKiernan (@ThomMcK), created a great unboxing video of his experience opening up and getting started with his Surface Pro 7+.

(I tried to do the same with my Surface Laptop 3 but unboxing videos are clearly not my thing.)

Why does this matter?

The important thing for me is not the tech. It’s the impact that this had on our business. To be clear:

We deployed new PCs to staff, during a national lockdown, without the IT department touching a single PC.

For me, it took around 10 minutes from opening the box to sitting at a usable desktop with Microsoft Teams and Edge. (What else do you need to work in 2021?)

That would have been unthinkable a few years ago.

It seems that, on an almost daily basis, I talk to clients who are struggling with technology to allow staff to work from home. It always seems to come back to legacy VPNs or virtual desktop “solutions” that are holding the IT department back.

So, if you’re looking at how your organisation manages its end user device deployments, I recommend taking a look at Windows Autopilot. Perhaps you’re already licensed for Microsoft 365, in which case you have the tools. And, if you need some help to get it all working, well, you know who to ask…

Featured image created from Microsoft press images.

Microsoft Ignite | The Tour: London Recap

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just like TechEd, and at Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks with sessions lasting around an hour. Because that was an hour of content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service (an example of querying it follows this list).
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).
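As an example of working with the endpoint data, the web service mentioned above is a simple REST API, so the "Optimize" endpoints (the ones to route directly) can be pulled with a few lines of PowerShell – a sketch, where clientrequestid is just a random GUID for tracking:

  # Query the Office 365 IP address and URL web service (worldwide instance)
  $uri = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=$(New-Guid)"
  $endpoints = Invoke-RestMethod -Uri $uri
  # Show the "Optimize" category endpoints
  $endpoints | Where-Object { $_.category -eq 'Optimize' } |
      Select-Object serviceArea, urls, ips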

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers two lines for resilience and two peering types at the gateway – private and public (Microsoft) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
(Image: the Microsoft global network)

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
  Item               Type             Volume  Velocity  Variety
  Product catalogue  Semi-structured  High    Low       Low
  Product photos     Unstructured     High    Low       Low
  Sales data         Semi-structured  Medium  High      High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph (see the sketch after this list); Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC.
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the event of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints.
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Opportunities for further improvement.
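To illustrate the review step, a single Azure Resource Graph query gives a quick baseline of what is deployed. A sketch, assuming the Az.ResourceGraph PowerShell module and an existing Connect-AzAccount session:

  # Count deployed resources by type across all accessible subscriptions
  Search-AzGraph -Query 'Resources | summarize count() by type | order by count_ desc' |
      Format-Table -AutoSize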

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it.
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies: virtual networks, VPN gateways, ExpressRoute and Virtual WAN, as above – a minimal starting point is sketched below.
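A minimal sketch of the virtual network piece, assuming the Az.Network PowerShell module (names, region and address space are illustrative), with a /27 GatewaySubnet left for Azure to manage, per the guidance above:

  # An isolated virtual network with a workload subnet and a dedicated GatewaySubnet
  $gw  = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.0.255.0/27'
  $app = New-AzVirtualNetworkSubnetConfig -Name 'Workloads' -AddressPrefix '10.0.1.0/24'
  New-AzVirtualNetwork -Name 'vnet-hybrid' -ResourceGroupName 'rg-network' `
      -Location 'uksouth' -AddressPrefix '10.0.0.0/16' -Subnet $gw, $app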

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. Expensive and time-consuming – and the device already has a perfectly good OS on it

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually, using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal (see the sketch after this list).
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – the time taken depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If they use the same UPN then it makes things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Back up data in advance (e.g. with OneDrive).
        • Deploy generic Windows 10.
        • Run Autopilot in user-driven mode (hardware hashes can’t be harvested in Windows 7, so use a JSON config file in the image – the offline equivalent of a profile. Intune will ignore the unknown device and Autopilot will use the file instead; after deployment of Windows 10, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).
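For the manual registration route, the harvesting script is published in the PowerShell Gallery. A sketch (the output file name is arbitrary):

  # Harvest the serial number and hardware hash for upload to the Intune portal
  Install-Script -Name Get-WindowsAutoPilotInfo -Force
  Get-WindowsAutoPilotInfo.ps1 -OutputFile .\AutopilotHWID.csv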

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include wanting to use a policy-based configuration with a single management and monitoring solution and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).
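A minimal sketch of a configuration, using standard DSC syntax (compile and import it into the Automation account, then assign it to nodes):

  # Ensure IIS is present on any node that pulls this configuration
  Configuration WebServerBaseline {
      Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
      Node 'localhost' {
          WindowsFeature IIS {
              Name   = 'Web-Server'
              Ensure = 'Present'
          }
      }
  }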

Inventory can be managed with Log Analytics extensions for Windows or Linux. An Azure Monitoring Agent is available for on-premises or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python (so both Windows and Linux are supported). A webhook can be triggered with an HTTP POST request. A hybrid runbook worker can be used to run on-premises or in another cloud.
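As a sketch of the webhook pattern (the URI is the secret one generated when the webhook is created, and the body is whatever the runbook expects – both are placeholders here):

  # Trigger an Azure Automation runbook from outside Azure via its webhook
  $webhookUri = 'https://<region>.webhook.azure-automation.net/webhooks?token=...'  # placeholder
  $body = @{ VMName = 'vm-test-01' } | ConvertTo-Json
  Invoke-RestMethod -Method Post -Uri $webhookUri -Body $body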

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal without logging in!

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual updates are designed for containers. Windows Server 2019 is the latest LTSC release, so it includes the 1709/1803 additions:
    • Enable developers and IT Pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows Subsystem for Linux.
    • Optimised images for Server Core and Nano Server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload

  • 50MB download, no need for a server. Runs in a browser and is included in the Windows/Windows Server licence.
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel:
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020 but customers can move Windows Server 2008/2008 R2 servers to Azure and get 3 years of security updates for free (on-premises support is chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not virtual machines (for Linux).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux).
  • Can be more cost effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerized apps.
  • Container Orchestrator to manage scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage (a minimal example is sketched below).
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.
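To give a feel for the "containers on demand" model, a rough sketch of running a public sample image in ACI with the Az PowerShell module (note that the New-AzContainerGroup parameter set has changed across Az.ContainerInstance versions; this reflects the simpler, earlier form):

  # Run a sample container group with a public IP
  New-AzContainerGroup -ResourceGroupName 'rg-demo' -Name 'hello-aci' `
      -Image 'mcr.microsoft.com/azuredocs/aci-helloworld' -OsType Linux `
      -IpAddressType Public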

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

Cloning my Mac’s hard drive to gain some extra space

My MacBook (bought in 2008, unfortunately just before the unibody MacBook Pros were introduced) has always been running with upgraded memory and storage but it was starting to creak.  Performance is okay (it’s not earth-shattering but all I do on this machine is digital photography-related workflow) and it won’t take any more RAM than the 4GB I have installed but I was constantly battling against a full hard disk.

After a recent holiday when I was unable to archive the day’s shots and had to start filling my “spare” (read old and slow) memory cards to avoid deleting unarchived images, I decided to upgrade the disk. I did briefly consider switching to a solid state solution (until I saw the price – enough to buy a new computer), then I looked at a hybrid device, before I realised that I could swap out the 320GB Western Digital SATA HDD for a 750GB model from Seagate. The disk only cost me around £73 but next day shipping bumped it up a bit further (from Misco – other retailers were offering better pricing but had no stock). Even so, it was a worthwhile upgrade because it means all of my pictures are stored on a single disk again, rather than spread all over various media.

Of course, no image really exists until it’s in at least two places (so I do have multiple backups) but the key point is that, when I’m travelling, Lightroom can see all of my images.

I didn’t want to go through the process of reinstalling Mac OS X, Lightroom, Photoshop CS4, etc. so I decided to clone my installation between the two disks.  After giving up on a rather Heath Robinson USB to IDE/SATA cable solution that I have, I dropped another £24.99 on a docking station for SATA disk drives (an emergency purchase from PC World).

I’m used to cloning disks in Windows, using a variety of approaches with both free OS deployment tools from Microsoft and third party applications. As it happens, cloning disks in OS X is pretty straightforward too; indeed it’s how I do my backups, using a utility called Carbon Copy Cloner (some people prefer Super Duper). Using this approach I: created a new partition on the new disk (in Disk Utility), then cloned the contents of my old hard disk to the new partition (with Carbon Copy Cloner); then test boot with both drives in place (holding down the Alt/Option key to select the boot device); before finally swapping the disks over, once I knew that the copy had been successful.  Because it’s a file level copy, it took some time (just under six hours) but I have no issues with partition layouts – the software simply recreated the original file system on the partition that I specified on the new disk.  There’s more details of the cloning process in a blog post from Low End Mac but it certainly saved me a lot of time compared with a complete system rebuild.

Now all I need to do is sort out those images…

Hardware specific application targeting with MDT 2010

Guest Post
[Editor’s note: this post was originally published on Garry Martin’s blog on 28 October 2009. As Garry’s closing down his own site but the content is still valid, he asked me if I’d like to post it here and I gratefully accepted!]

I’m running a Proof of Concept (PoC) at work at the moment which is making use of Microsoft Deployment Toolkit (MDT) 2010. Whilst most of the drivers we need can be managed by using the Out-of-Box Drivers Import function, some are delivered by the OEM as .EXE or .MSI packages. Whilst we could use multiple Task Sequences to manage these, or even select the applications individually at build time, our preference was to use some sort of hardware specific targeting.

Process

First of all, we needed to uniquely identify the hardware, and for this purpose we used the Plug and Play (PnP) Device ID, or hardware ID as it is sometimes called.

To determine the hardware IDs for a device by using Device Manager:

  1. Install the device on a test computer
  2. Open Device Manager
  3. Find your device in the list
  4. Right-click the entry for your device, and then click Properties
  5. In the Device Properties dialog box, click the Details tab
  6. In the Property list, click Hardware Ids
  7. Under Value, make a note of the characters displayed. They are arranged from the most specific at the top to the most general at the bottom. You can select one or more items in the list, and then press CTRL+C to copy them to the clipboard.

In our case, the Sierra Wireless MC8755 Device gave us USB\VID_1199&PID_6802&REV_0001 as the most specific value and USB\VID_1199&PID_6802 as the least specific, so we made a note of these before continuing.

Next, we downloaded the Sierra Wireless MC87xx 3G Watcher .MSI package from our notebook OEM support site. Sierra Wireless have instructions for performing a silent install of the 3G Watcher package, so we used those to understand the installation command we would need to use.

So, we had a unique ID for targeting, the installation package, and the installation command line we would need to use. Now we needed to configure MDT to deploy it. First, we create a New Application.

  1. In the MDT 2010 Deployment Workbench console tree, right-click Applications, and click New Application
  2. On the Application Type page, click Next to install an application and copy its source files to the deployment share
  3. On the Details page, type the application’s name in the Application Name box, and click Next
  4. On the Source page, type the path or browse to the folder containing the application’s source files, and click Next
  5. On the Destination page, click Next to use the default name for the application in the deployment share
  6. On the Command Details page, type the command you want to use to install the application, and click Next. We used the following
    msiexec.exe /i 3GWatcher.msi /qn
  7. On the Summary page, review the application’s details, and click Next
  8. On the Confirmation page, click Finish to close the New Application Wizard.

Next we modify the Task Sequence and create our query.

  1. In the MDT 2010 Deployment Workbench console tree, click Task Sequences
  2. In the details pane, right-click the name of the Task Sequence you want to add the Application to, and then click Properties
  3. In the Task Sequence Properties dialog box, click the Task Sequence tab
  4. Expand State Restore and click on Install Applications
  5. Click the Add button, and select General, then Install Application
  6. On the Properties tab for Install Application, type the application’s name in the Name box, and click the Options tab
  7. On the Options tab, click the Add button and select If statement
  8. In the If Statement Properties dialog box, ensure All Conditions is selected and click OK
  9. On the Options tab, click the Add button and select Query WMI

This is where we’ll now use a WMI query that will provide our Hardware Specific Application Targeting. You’ll need to modify this for your particular hardware, but we previously discovered that our least specific Device ID value was USB\VID_1199&PID_6802 so we will use this to help form our query.

  1. In the Task Sequence WMI Condition dialog box, ensure the WMI namespace is root\cimv2 and type the following in the WQL Query text box, clicking OK when finished:
    SELECT * FROM Win32_PNPEntity WHERE DeviceID LIKE '%VID_1199&PID_6802%'
  2. Click OK to exit the Task Sequences dialog box

And that’s it. When you deploy a computer using the modified Task Sequence, the WMI query will run and, if matched, install the application. If a match can’t be found, the application won’t be installed. Hardware Specific Application Targeting in a nutshell.
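One last tip: the WQL can be sanity-checked on the target hardware before it goes anywhere near a Task Sequence. A quick sketch (Get-WmiObject was current at the time; Get-CimInstance is its modern replacement):

  # Should return the device if the targeting query will match
  Get-WmiObject -Namespace 'root\cimv2' `
      -Query "SELECT * FROM Win32_PNPEntity WHERE DeviceID LIKE '%VID_1199&PID_6802%'"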

Installing Windows from a network server without Windows Deployment Services

I’d like to start this post with a statement:

Windows Deployment Services (WDS) is a useful role in Windows Server 2008 R2. It’s free (to licensed Windows users), supports multicasting, and is a perfectly good method of pushing Windows images to clients…

Unfortunately that statement has a caveat:

… but it needs to be installed on an Active Directory-member computer.

For some, that’s a non-starter.  And sometimes, you just want a quick and dirty solution.

I have a small dedicated server at home to run Active Directory along with basic network services (DNS, DHCP, etc.) for my home IT.  I also run Philippe Jounin’s excellent TFTP Daemon (service edition) on it in order to support image loads on my Cisco 7940 IP Phone.

In order to rebuild the Hyper-V server that I use for infrastructure test and development, I wanted to boot across the network and install Windows Server 2008 R2 – and a few days ago I found Mark Kubacki’s post about TFTPd32 and DHCP Server – Windows Deployment Services without WDS. Perfect!  No need to install another role on my little Atom-powered server – particularly as, once this server is built, I’ll probably install WDS on it  to deploy images to my various test virtual machines!

So, this is the process – with thanks to Mark Kubacki, and to Ryan T Adams (who wrote about installing Vista without a CD Drive using TFTP – for instance, installing Windows on a netbook) who were gracious enough to blog about their experiences and give me something to build upon:

  1. Download tftpboot.exe from Ryan T Adams’ site and run it to extract the contents to a suitable hard drive location (i.e. the TFTP root folder).  Unfortunately, you probably won’t need most of this 154MB download (more on that in a moment) but it will get you started.
  2. Start tftpd32.exe (or copy the files to your TFTP root, if you are already running a TFTP service, as I was) and add tftpd32.exe (or tftpd32_svc.exe) as a Windows Firewall exception (you could just disable the firewall but I don’t recommend that approach).
  3. Either set TFTPD32 to act as a DHCP server and specify the boot file options (as Ryan describes), or configure DHCP options 066 and 067 (boot server host name and boot file name) on another DHCP server (Mark shows how to do this for the Windows DHCP Server role) using the IP address of the TFTP server and the boot file name of boot\pxeboot.com.
  4. Make sure that the TFTP Server is set to include PXE capability in the advanced TFTP options and that its DHCP server capability is turned off if you are using another DHCP server.
  5. Restart the TFTP Server (or service) to pick up the configuration changes.
  6. Boot a computer (or virtual machine) from its network card, press F12 when prompted and wait for Windows PE to load, then map a drive to another machine on the network which is sharing the Windows media (I use Slysoft Virtual Clone Drive to mount an operating system ISO file and I’ve shared the virtual drive).
  7. Switch to the newly mapped drive and type setup.exe to run Windows Setup.

Unfortunately, the version of the Windows Preinstallation Environment (Windows PE) that Ryan has supplied in tftpboot.exe is a 32-bit version (of Windows PE 2.0, I think).  When I tried to use this to install Windows Server 2008 R2 (which is 64-bit only), I was greeted with the following message:

This version of Z:\setup.exe is not compatible with the version of Windows you’re running.  Check your computer’s system information to see whether you need a x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher.

I needed a 64-bit version of Windows PE.  No problem.  That’s included in the Windows Automated Installation Kit (WAIK), so I overwrote Ryan’s winpe.wim with the one from %programfiles%\Windows AIK\Tools\PETools\amd64, and restarted the computer I wanted to build.  This time Windows Setup ran with no issues and Windows Server was installed successfully.
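For anyone following along, the swap amounts to a single file copy – a sketch, where the TFTP root location is an assumption (use wherever you extracted tftpboot.exe to):

  # Overwrite the 32-bit winpe.wim in the TFTP root with the WAIK's 64-bit build
  $tftpRoot = 'C:\tftproot'  # adjust to suit
  Copy-Item "$env:ProgramFiles\Windows AIK\Tools\PETools\amd64\winpe.wim" `
      -Destination "$tftpRoot\boot\winpe.wim" -Force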

Even though I used TFTPD32, this method could be used to install Windows from just about any TFTP server (it could even be running on a totally different operating system, I guess), or even to load another WIM file (i.e. not Windows PE) from a network boot. I’m sure if I had more time I could come up with all sorts of scenarios (boot Windows directly from the network?) but, for now, I’ll stick to using this method as a WDS replacement.

Windows 7 beta deployment tools

Those who are checking out the Windows 7 beta with a view towards automated deployments may be interested to note that a beta of the Windows Automated Installation Kit has been released for Windows 7 along with an open beta of the Microsoft Deployment Toolkit 2010.

Giving or receiving a PC as a Christmas present? Take an image of the drive first

Some quick advice for those of you about to open up new PCs bought for Christmas (for that matter, the advice is equally applicable whatever the occasion)… avoid the temptation to dive straight in and have a play. Instead, take an image of the hard drive as it arrived from the factory – yes, you can always use the manufacturer’s instructions to return the machine to a restored state but this can take a long time (as I found a few weeks back when I succumbed to temptation and had a play with my Lenovo IdeaPad S10e before imaging it!).

As I write this, I’m setting up a new Dell notebook for someone (an Inspiron 1525) and the first key I pressed (after the power button) was F2 to go into the BIOS. Here I changed a couple of settings (boot order and numlock key state), then booted again with F12 for the boot menu to use my device of choice (floppy disk drive, USB drive, or PXE network boot) to boot to my image capture software of choice (Symantec Ghost, Windows Deployment Services, you name your poison) and take an image of the entire drive (not a partition).

Following this, I can configure the device as intended (remove the crapware, install some AV software, install an Office suite, etc.) and then take another image when I’m done.

Of course, for corporate deployments it’s normal to blow away the manufacturer’s image and install something more appropriate to the organisation’s requirements but, for home and small business users, it’s perfectly acceptable to use the factory-supplied build and this might just save you some time if you ever need to return to square one.

Using packet level drivers for MS-DOS network connectivity

One of the reasons to use Windows PE for operating system deployment is that it’s built on a modern version of Windows so, at least in theory, driver support is less of an issue than it would be using MS-DOS boot disks.

Even so, there are still times when a good old MS-DOS boot disk comes in handy and networking is a particular pain point – NDIS drivers are a pain to configure so packet-level drivers are often useful for operating system deployment tasks (but not optimised for anything more substantial). Available for many of the more common Ethernet cards, they are generally 16-bit utilities for MS-DOS and so will not work in 32-bit or 64-bit operating systems.

As this is not exactly cutting edge technology, many of the useful sites are starting to drop off the ‘net (but a surprising number remain) – here’s a few links I found that might come in handy:

A few things I found whilst drive imaging with Symantec Ghost

I need to rebuild one of my PCs before lending it to someone for a few days but before I do that I want to take an image of it. If I had the Microsoft Deployment Toolkit 2008 set up at home then that would be reasonably straightforward but I don’t, and the old drive imaging technologies will be fine for this – at least that’s what I thought until I spent half the night and a good chunk of this morning fighting with Symantec Ghost… So, here’s a few of the things that I’ve (re-)discovered about Ghost in the last few hours.

  • Using Ghost in peer-to-peer mode does require the slave and the master machines to be running the same version of Ghost – it will report a version mismatch error if you try to run different versions.
  • Ghost 6.x Enterprise has a multicast option but I couldn’t get it to work (it was always greyed out for me). Symantec’s knowledge base suggests that this may be down to TCP/IP issues and I’m pretty sure that packet-level network drivers are required with the MS-DOS client (the Windows server can use the normal Windows network settings) but, even with a suitable packet driver loaded, I gave up after a few hours without success.
  • GhostCast Server uses (UDP) port 6666 for communications.
  • GhostCast Server 8.x will create a Windows Firewall exception for itself but the exception still needs to be enabled manually (a sketch of opening the port by hand follows this list).
  • On a multi-homed server, there seems to be no way to select the NIC on which the GhostCast Server presents a session.
  • Multicasting also seems to need the client and server versions to match one another. 16-bit Ghost 7.x should work with an 8.x server but it wasn’t working out for me with 7.5 and 8.2 (32-bit 8.x clients were connecting to the server fine, so I knew it was working, but I didn’t want to image those machines – and I didn’t have a copy of the 7.x server).
  • Compression adds a lot of time to the imaging process.
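For what it’s worth, on a Windows Server 2008-era host the GhostCast port can be opened by hand with something like this sketch (the rule name is arbitrary):

  # Allow inbound GhostCast traffic on UDP 6666
  netsh advfirewall firewall add rule name="GhostCast" dir=in action=allow protocol=UDP localport=6666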

Eventually, I got everything working with a 16-bit copy of Ghost 8.2 running on MS-DOS (to be completely accurate, it was a Windows ME startup disk created from Windows XP) communicating with a GhostCast Server 8.2 running on Windows Server 2008.

And for anyone who is wondering why I was messing about with 16-bit executables and MS-DOS (in these days of Windows PE), Toffa suggested that I should try a Windows PE disk with the 32-bit ghost client. Although that would have let me access USB-attached external storage, I didn’t have enough space on a USB drive and was storing my image on a server. Windows XP (and so PE) doesn’t natively recognise the network card on the machine I was imaging, so that would have required me to extend the Windows PE image and provide additional driver support. Somehow, using a universal network boot disk seemed like the easy option.

Windows Vista and Office 2007 deployment brain dump

This week I’m working on a Desktop Deployment Planning Services (DDPS) engagement with a customer. It’s been a while since I last looked at deployment (basically I haven’t done anything since I passed the Windows Vista and Office 2007 deployment exam) so I’m revising my notes in preparation for a workshop tomorrow.

As a supplement to my previous post on a BDD 2007 overview and Office 2007 customisation and deployment using BDD 2007, this is a rollup of just about everything I could lay my hands on about Vista and Office deployment. It’s not particularly well structured – let’s just call it a “brain dump”. If anyone has anything extra to add, please leave a comment at the end of this post.

Windows imaging technologies

  • ImageX (imagex.exe) is a command line tool for manipulating Windows Imaging Format (.WIM) files. It is built using the Windows imaging API (WIMGAPI) together with a .WIM file system filter.
  • Windows Vista images are HAL-independent and make use of single instance storage. To minimise the amount of space used by Windows Vista installation images, use imagex.exe to apply images to separate folders on a computer and then append these images to the final image.
  • Windows System Image Manager (SIM) is used to create and maintain answer files.
  • To modify an image, use imagex.exe to mount it and then apply an unattended setup answer file (unattend.xml).
  • Package Manager (pkgmgr.exe) can be used to update both image files and computers that have already had an image applied:
    • When used to update computers that have already had an image applied, pkgmgr.exe can install, configure and update features in Windows Vista (e.g. installed components, device drivers, language packs, updates). It can also be used with an unattended installation answer file for new installations.
    • When adding additional drivers to an existing Windows Vista image, use pkgmgr.exe to add the drivers from a folder.
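Putting the mount-and-service flow together, a typical offline servicing sequence looks something like this sketch (paths are illustrative, and the answer file is assumed to reference the driver folder in its offline-servicing settings):

  # Mount the image read/write, apply an answer file referencing extra drivers,
  # then commit the changes back to the .WIM
  imagex.exe /mountrw C:\images\install.wim 1 C:\mount
  pkgmgr.exe /o:"C:\mount;C:\mount\Windows" /n:"C:\answerfiles\unattend.xml"
  imagex.exe /unmount /commit C:\mount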

Windows Vista deployment

  • Windows Setup (setup.exe) for Vista is now GUI-only – winnt.exe and winnt32.exe are no more.
  • Windows installation is structured around a number of configuration passes:
    • Windows PE.
    • Offline servicing.
    • Generalise.
    • Specialise.
    • Audit system.
    • Audit user.
    • OOBE system.
  • unattend.xml is a single unattended installation answer file, replacing multiple files in previous versions of Windows – including unattend.txt, cmdlines.txt, winbom.ini, oobeinfo.ini and sysprep.inf.
  • To avoid prompting users for input during the installation of Windows Vista, create an unattended setup installation file and copy this to a USB flash drive, then ensure that the flash drive is present during Windows Vista installation.
  • unattend.xml must be renamed to autounattend.xml when used on removable media during installation and replaces winnt.sif.
  • The Out-of-Box Experience (OOBE) is now known as Windows Welcome and is controlled with oobe.xml, which includes options for Windows Welcome, ISP sign-up and the Windows Vista Welcome Center.
  • Disk repartitioning can be configured in the first pass of the Windows PE section of unattend.xml.
  • When using multiple hardware configurations, create a distribution point that includes an Out-of-Box Drivers folder.
  • Windows Deployment Services (WDS) replaces Remote Installation Services (RIS).
  • When using WDS with computers that do not have PXE capabilities, create a WDS discovery image and use this to create a bootable CD for Windows Vista installation.
  • When using WDS on a server that also provides DHCP services, enable DHCP option 60 and configure WDS not to listen on port 67.
  • If the WDS Image Capture Wizard is unable to capture a reference computer image, restart the reference computer and run sysprep /generalize /oobe.
  • The Windows Automated Installation Kit (WAIK) replaces deploy.cab and contains updated versions of tools previously provided to OEMs (e.g. Windows PE) for use in corporate deployments.
  • The OEM Preinstallation Toolkit (OPK) is for system builders, containing the WAIK and additional OEM-specific information (e.g. OEM licensing).
  • bootsect.exe is used to enable deployment alongside earlier versions of Windows with the Windows Vista boot manager (bootmgr.exe) – it replaces fixfat.exe and fixntfs.exe (both included with Windows Vista). Microsoft knowledge base article 919529 has more details.
  • boot.ini has been replaced by the Boot Configuration Data (BCD) store.
  • The System Preparation Tool (sysprep.exe) is installed by default on Windows Vista systems in %windir%\system32\sysprep and there are several changes when compared with previous versions:
    • sysprep /reseal is replaced with sysprep /generalize /oobe.
    • sysprep /factory is replaced by sysprep /audit.
    • sysprep /mini is replaced by sysprep /oobe.
    • sysprep /nosidgen is replaced by sysprep /generalize.
    • sysprep /clean and sysprep /bmsd are deprecated.
    • sysprep /activated is replaced by sysprep /generalize (together with slmgr.vbs for managing the activation status of a computer).
    • OEMs are required to run sysprep /oobe before delivery of new computers.
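Putting the new switches together, the Vista-era equivalent of the old sysprep /reseal, run from %windir%\system32\sysprep, looks like this (the answer file is optional):

  # Generalise the installation, boot to Windows Welcome on next start, then shut down
  sysprep.exe /generalize /oobe /shutdown /unattend:unattend.xml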

Customising Office 2007 installations

  • Windows Installer Patch (.MSP) files can be used to produce customised Office installations (and then called using a script – see the sketch below).
  • Multiple installation shares can be defined within a .MSP file.
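In practice, that’s a two-step flow: build the .MSP with the Office Customization Tool, then point setup at it (a sketch – the share and file names are illustrative):

  # Launch the Office Customization Tool to create/edit the patch...
  & '\\server\office2007\setup.exe' /admin
  # ...then install silently using the saved customisation file
  & '\\server\office2007\setup.exe' /adminfile '\\server\office2007\custom\desktop.msp'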

Office 2007 deployment

  • To create an Office 2007 installation share (e.g. for scripted deployment), create a shared folder on a server and copy the installation files from the source media to the root of the shared folder.
  • To slipstream Microsoft Office 2007 updates into the deployment, create a folder called updates in the Microsoft Office 2007 distribution folder and copy all updates to this folder.

User data migration

  • The User State Migration Toolkit (USMT) v3.0 can be used with both Windows XP and Windows Vista.
  • miguser.xml can be used to ensure that USMT captures files with a particular extension during migration.
  • The USMT scanstate.exe command can be used with the /p switch to ensure that sufficient free space exists in a target folder.
  • USMT can migrate user state using a network server during an upgrade that involves repartitioning of disks.
  • If the partition table is to be left intact during a migration, use a local partition with sufficient free space for temporary storage.
  • scanstate.exe can scan a source computer, collect files and create a store without modifying the source. The default action is to compress files and store them in an image file (usmt3.mig).
  • loadstate.exe will migrate files and settings from an existing store to the destination computer.
  • The scanstate.exe and loadstate.exe commands have matching command-line arguments (a combined example follows this list).
  • Migration XML files include rules to define what should be migrated and are specified with the /i switch:
    • Custom XML files define components to exclude and are created using scanstate /genconfig:config.xml.
    • migsys.xml is used with the /targetxp switch to migrate operating system and browser settings.
    • migapp.xml is used to migrate application settings.
    • miguser.xml is used to migrate user files, folders and filetypes.
    • If the destination computer is running Windows XP, modify miguser.xml, migapp.xml and migsys.xml.
    • If the destination computer is running Windows Vista, modify miguser.xml and migapp.xml but migsys.xml is not supported – use config.xml instead.
    • migxml.xsd can be used to write and validate migration XML files.
  • scanstate /p can be used to create a space estimate file called usmtsize.txt (it will also be necessary to specify /nocompress).
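Put together, a typical capture/restore pair looks something like this sketch (the store location and XML file selection are illustrative):

  # Capture user state from the source computer to a network store...
  scanstate.exe "\\server\migstore\$env:COMPUTERNAME" /i:migapp.xml /i:miguser.xml /o
  # ...and restore it on the destination computer
  loadstate.exe "\\server\migstore\$env:COMPUTERNAME" /i:migapp.xml /i:miguser.xml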

Office 2003-2007 interoperability

Localisation

  • To add multiple language support to Office 2007 applications, install the appropriate language pack on the installation share and update config.xml.
  • To add a language pack to an existing computer, use pkgmgr.exe to apply a new unattended setup installation file that references the appropriate language pack.
  • If the Windows SIM is unable to access language pack settings in a customised Windows Vista image, generate a new catalog based on the custom image.

Further reading