Think about the end-user experience

I recently marked 30 years working full-time in the IT industry. That’s a long time. When I started, we didn’t all have laptops (I shared a PC in the office), we had phones on desks, administrators to help with our… administration, and work was a place where we went as well as a thing that we did.

Over time, I’ve seen a lot of change: new systems, processes, ways of working. But the change happening right now is the biggest of them all. For the last nine-and-a-half years, all of my work has been stored in one Office 365 tenant. Now it’s being migrated to another, as part of some cleanup from a merger/acquisition that took place a while ago.

I’m just a normal end-user

I’m just a user. Albeit one with a technical background. And maybe that’s why I’m concerned. During the Covid-19 pandemic, I was issued a new laptop and everything was rebuilt using self-service. It went very well, but this is different. There is no going back. Once my laptop has been wiped and rebuilt into the new organisation, there is no “old machine” to go back to if I haven’t synced my data correctly.

Sure, I’ve done this before – but only when I’ve left one employer to go somewhere else. Never when I’ve been trying to continue work in the same place.

People change management

To be clear, the migration team has been great. This is not your typical internal IT project – this is being run properly. There are end-user communications, assigned tasks to complete to help make sure everything goes smoothly, FAQs, migration guides. ADKAR is in full flow. It all looks like it should go swimmingly. But 30 years of working in tech tells me to expect the unexpected (plus a tendency to be over-anxious and to catastrophise). And modern security practices mean that, if I were to make a copy of all my data on an external drive, “just in case”, I’d set all sorts of alarm bells ringing in the SOC.

I’ll have to roll with it.

The schedule

So, that’s the technical issues resolved – or at least put to one side. Next is the migration window. It runs for two weeks. But the second of those weeks is the school half-term holiday in a sizeable chunk of the UK. I, for one, will be away from work. I also have an assignment to complete by the end of the month, all the usual pre-holiday preparation to square work away – and all this whilst I have two days taking part in an AI hackathon event and two days when I’m on call for questions relating to our Microsoft Azure Expert MSP audit. “I’m sorry, I can’t find that information right now because I’m rebuilding my PC and migrating between Microsoft 365 tenants” isn’t going to go down well.

In short, there is no good time for my migration. And this is what it’s like for “real” end-users in our clients’ organisations. When they don’t want to clear down their email or delete old data, it’s generally not because they’re being awkward (well, not always). They have a job to do, and we (IT) are doing something with one of the tools that they use to do that job. There’s uncertainty about how things will work after the migration, and they need to allocate time. Time that they may not have.

Walking in someone else’s shoes

All too often, we IT folks just say “it’ll be fine”, without understanding the uncertainty that we impose on our customers – the users of the systems that we manage. Maybe it’s good for me to stand in their shoes, to be a typical business end-user, to understand what it’s like to be on the receiving end of an IT project. Maybe we should all do it more often, and then we can run better projects.

Featured image by Kosta from Pixabay.

An updated approach to Virtual Desktop Infrastructure: Azure Virtual Desktop on Azure Stack HCI

With the general availability of Azure Virtual Desktop (AVD) on Azure Stack HCI, organisations have a powerful new platform for providing virtual desktop infrastructure (VDI) services. No longer are they torn between complex, expensive server farms and desktops running in the cloud – the best of both worlds is here.

The complexities of managing an end user computing service

Over the years I’ve spent consulting with IT departments, one common theme has been the complexity of managing end user computing services.

It used to be that investments in standard PC builds led to a plethora of additional management products. These days, Windows (or, more accurately, Microsoft 365) handles this to such a level that the extra layers of products are no longer needed. With a Modern Workplace solution, we can deploy a new PC from a factory image, log on with a username from a recognised domain (for example user@node4.co.uk), build that PC to meet corporate standards and get the end user up and running quickly with access to their data, all in a secure and compliant manner.

But there have always been edge cases. The legacy application that is critical to the business but doesn’t run on a modern version of Windows. Or the application with a licensing model that doesn’t lend itself to being installed on everyone’s PC for occasional use. For these cases, virtual desktop infrastructure has been a common approach to publish an application or a desktop.

For other organisations, the use of VDI is seen as an opportunity to abstract the desktop from the device: either by saving on device costs with lightweight terminal devices that connect to a farm of virtual desktops, or by using the secure desktop container to allow access from pretty much anywhere, because the device doesn’t directly access the organisation’s data.

I’m not going to advocate for one approach over another. Of course, I have Opinions but, at Node4, we start from the position of understanding the business problem we’re trying to solve, and then working out which technology will best support that outcome.

A shift in the landscape

Whilst Microsoft has had its own remote desktop offerings over the years, it has tended to partner with companies like Citrix rather than develop a full-blown solution. Meanwhile, companies like VMware had their own products – though with the Broadcom acquisition and the subsequent sale of its VDI products (including Workspace ONE and Horizon), their future looks uncertain.

This makes customers uneasy. But there is hope.

Microsoft has not stood still and, for a few years now, it’s been maturing its VDI-in-the-cloud service – Azure Virtual Desktop.

AVD provides a secure remote desktop experience from anywhere, fully optimised for Windows 11 and Windows 10, with multi-session capabilities. With various licensing options, including within key Microsoft 365 subscription plans, AVD is now an established service. So much so that there are even products and services to help with managing AVD environments – for example, from Nerdio. But, until recently, the biggest drawback with AVD was that it only ran in the public cloud – and whilst that’s exactly what some organisations need, it’s not suitable for others.

A true hybrid cloud solution

(At this point, I’m tempted to introduce a metaphor about when cloud computing comes to ground. But fog and mist don’t conjure up the image I’m trying to project here…)

Recently, there has been a significant development with AVD. It’s largely gone unnoticed – but AVD is now generally available on Azure Stack HCI.

Azure Stack extends the robust capabilities of Azure’s cloud services to be run locally – either in an on-premises or an edge computing scenario. Azure Stack hyperconverged infrastructure (HCI) is a hybrid product that connects on-premises systems to Azure for cloud-based services, monitoring, and management. Effectively, Azure Stack HCI provides many of the benefits of public cloud infrastructure whilst meeting the use case and regulatory requirements for specialised workloads that can’t be run in the public cloud. 

Some of the benefits of running Azure Virtual Desktop on Azure Stack HCI

There are many advantages to running software locally. Immediate examples are to address data residency requirements, latency-sensitive workloads, or those with data proximity requirements. Looking specifically at Azure Virtual Desktop on Azure Stack HCI, we can:

  • Improve performance by placing session hosts closer to the end users.
  • Keep application and user data on-premises and so local to the users.
  • Improve access and performance for legacy client-server applications by co-locating the application and its data sources.
  • Provide a full Windows 11 experience regardless of the device used for access.
  • Benefit from unified management alongside other Azure resources.
  • Make use of fully patched operating system images from the Azure Marketplace.
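To make this more concrete, here’s a minimal sketch of creating an AVD host pool programmatically, assuming the azure-mgmt-desktopvirtualization Python SDK. The subscription ID, resource group and host pool names are placeholders – and deploying the session hosts to the Azure Stack HCI cluster itself (via a custom location) is a further step that I’ve left out.

```python
# Sketch: create an Azure Virtual Desktop host pool with the Python SDK.
# Assumes: pip install azure-identity azure-mgmt-desktopvirtualization
# All resource names and the subscription ID are illustrative placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.desktopvirtualization import DesktopVirtualizationMgmtClient
from azure.mgmt.desktopvirtualization.models import HostPool

client = DesktopVirtualizationMgmtClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
)

# A pooled, breadth-first host pool publishing full desktops. The AVD
# control plane objects live in an Azure region, even when the session
# hosts run on Azure Stack HCI.
host_pool = client.host_pools.create_or_update(
    resource_group_name="rg-avd",   # placeholder
    host_pool_name="hp-avd-hci",    # placeholder
    host_pool=HostPool(
        location="uksouth",
        host_pool_type="Pooled",
        load_balancer_type="BreadthFirst",
        preferred_app_group_type="Desktop",
    ),
)
print(f"Created host pool: {host_pool.name}")
```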

What about my existing VDI?

Node4 can help you find the best path from your existing VDI to AVD. Our consultants are experienced in using Microsoft’s Cloud Adoption Framework for Azure to establish an AVD landing zone and to take a structured approach to assessing and migrating existing workloads, user profiles and data to AVD.

Why Node4 is best positioned to help

I’ve already written about how Node4’s expert consultants can help deploy Azure Virtual Desktop to meet your organisation’s specific needs, but that’s only looking at one small part of the picture.

Because, for those organisations who don’t want to invest in hardware solutions, we have hosted services for Azure Stack HCI. We also provide flexible and secure communications solutions. And we’re an Azure Expert Managed Services Provider (MSP).

I may be a little biased, but I think that’s a pretty strong set of services. Put them all together and we are uniquely positioned to help you make the most of AVD on-premises, co-located in one of our datacentres, on a Node4 hosted platform, or in the public cloud.

So, if you are looking at how to modernise your VDI, we’d love to hear from you. Feel free to get in touch using the contact details below – and let’s have a conversation.

This post was originally published on the Node4 blog.

Using Windows Autopilot to deploy PCs in the middle of a pandemic

This content is 4 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A year ago, who would have thought that so many people would still be working from home because of COVID-19? That a pandemic response would lead to such a huge impact on the way we live? That we’d be having discussions about the future role of the office?

Lots of things changed in 2020. Some of them may never change back.

Changes to PC operating system deployment methods

There is a saying (attributed to the Greek philosopher, Heraclitus) that the one constant in life is change…

Over nearly 30 years in IT, I’ve worked on a lot of PC rollouts. And the technology keeps on changing:

  • Back in 1994, I was using Laplink software with parallel cables (so much faster than serial connections) to push Windows for Workgroups 3.11 onto PCs for the UK Ministry of Defence.
  • In 2001, Ghost (which by then had been purchased by Symantec and sold under the Norton brand) was the way to do it. Working with a Microsoft partner called Conchango, my team at Polo Ralph Lauren rolled out 4,000 new and rebuilt PCs. We did this across 8 European countries, supporting multiple languages and PC hardware types with just two images.
  • By 2005, I was working for Conchango and using early versions of the Microsoft Business Desktop Deployment (BDD) solution accelerator to push standard operating environment (SOE) images to PCs for a UK retail and hospitality company.
  • By 2007, BDD had become the Microsoft Deployment Toolkit (MDT). Later, that was integrated with System Center Configuration Manager.

After this, the PC deployment stuff gets a bit fuzzy. My career had moved in a different direction and, these days, I’m less worried about the detail (I have subject matter experts to rely on). My concerns are around the practicalities of meeting business requirements by making appropriate technology selections.

Which brings me back to the current day.

A set of business requirements

Imagine it’s early 2021 and you’re faced with this set of requirements:

  • Must deploy new Windows 10 PCs to a significant proportion of the business’ staff.
  • Must comply with UK restrictions and guidance in relation to the COVID-19 novel coronavirus.
  • Should follow Microsoft’s current recommended practice.
  • Must maintain compliance with all company standards for security and for information management. In particular, must not impact the company’s existing ISO 27001, ISO 9001 or Cyber Essentials Plus certifications.
  • Should not involve significant administrative overhead.

A solution, built around Windows Autopilot

The good news is that this is all possible. And it’s really straightforward to achieve using a combination of Microsoft technologies.

  • Azure Active Directory provides a universal identity platform, including conditional access and multi-factor authentication.
  • Windows Autopilot takes a standard Windows 10 image (no need for customised “gold builds”) and applies appropriate policies to configure and secure it in accordance with organisational requirements. It does this by working with other Microsoft Endpoint Manager (MEM) components, like Intune.
  • OneDrive keeps user profile data backed up to the cloud, with common folders redirected so they remain synced, regardless of the PC being used.
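To illustrate what happens behind the scenes when a device is registered with the Autopilot service, here’s a hedged Python sketch that posts a device’s hardware hash to the Microsoft Graph importedWindowsAutopilotDeviceIdentities endpoint. The token, serial number, hash and group tag values are all placeholders – in practice, many administrators use Microsoft’s Get-WindowsAutopilotInfo PowerShell script, or have the hardware vendor register devices at the factory.

```python
# Sketch: register a device with Windows Autopilot via Microsoft Graph.
# Assumes: pip install requests, plus an access token carrying the
# DeviceManagementServiceConfig.ReadWrite.All permission (placeholder).
import requests

GRAPH_URL = (
    "https://graph.microsoft.com/v1.0/deviceManagement/"
    "importedWindowsAutopilotDeviceIdentities"
)
ACCESS_TOKEN = "<token acquired via MSAL or similar>"  # placeholder

# The hardware hash is harvested from the device (for example, with the
# Get-WindowsAutopilotInfo script) - placeholder values below.
device = {
    "@odata.type": "#microsoft.graph.importedWindowsAutopilotDeviceIdentity",
    "serialNumber": "0123-4567-89AB",               # placeholder
    "hardwareIdentifier": "<base64-encoded hash>",  # placeholder
    "groupTag": "UK-KnowledgeWorker",               # optional grouping tag
}

response = requests.post(
    GRAPH_URL,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=device,
    timeout=30,
)
response.raise_for_status()
print("Import submitted:", response.json().get("id"))
```

Once the device identity is imported and an Autopilot deployment profile assigned, the out-of-box experience is all the end user sees.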

What does it look like?

My colleague, Thom McKiernan (@ThomMcK), created a great unboxing video of his experience, opening up and getting started with his Surface Pro 7+:

(I tried to do the same with my Surface Laptop 3 but unboxing videos are clearly not my thing.)

Why does this matter?

The important thing for me is not the tech. It’s the impact that this had on our business. To be clear:

We deployed new PCs to staff, during a national lockdown, without the IT department touching a single PC.

For me, it took around 10 minutes from opening the box to sitting at a usable desktop with Microsoft Teams and Edge. (What else do you need to work in 2021?)

That would have been unthinkable a few years ago.

It seems that, on an almost daily basis, I talk to clients who are struggling with technology to allow staff to work from home. It always seems to come back to legacy VPNs or virtual desktop “solutions” that are holding the IT department back.

So, if you’re looking at how your organisation manages its end user device deployments, I recommend taking a look at Windows Autopilot. Perhaps you’re already licensed for Microsoft 365, in which case you have the tools. And, if you need some help to get it all working, well, you know who to ask…

Featured image created from Microsoft press images.

Providing fast mailbox access to Exchange Online in virtualised desktop scenarios

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – the subscription-based, evergreen Office clients) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms. You will introduce latency.
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, it also states “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, they will have a poor user experience connecting to mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user experience trade-off between reducing login times and reducing the impact of slow network access to the mailbox.
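As an illustration of that control, here’s a hedged sketch of setting the Outlook sync window by policy. It assumes the SyncWindowSetting registry value (a DWORD holding a number of months) documented for Outlook 2013 and later; the 16.0 version key and the one-month value are assumptions for illustration, and in practice this would normally be deployed via Group Policy rather than a script.

```python
# Sketch: limit Outlook's Cached Exchange Mode sync window to one month.
# Assumes the SyncWindowSetting policy value (DWORD, number of months)
# for Outlook 2013+; "16.0" targets Outlook 2016 builds. Normally this
# would be pushed by Group Policy rather than run as a per-user script.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Office\16.0\Outlook\Cached Mode"
SYNC_WINDOW_MONTHS = 1  # illustrative: cache only the last month of mail

key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
try:
    winreg.SetValueEx(
        key, "SyncWindowSetting", 0, winreg.REG_DWORD, SYNC_WINDOW_MONTHS
    )
finally:
    winreg.CloseKey(key)
print(f"Outlook sync window set to {SYNC_WINDOW_MONTHS} month(s)")
```

A smaller sync window shortens the initial cache build (and login times on non-persistent desktops) at the cost of older mail only being available online.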

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data.

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.

A logical view on end user computing architecture

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last couple of years, I’ve worked with a variety of customers looking to transform the way they deliver end user computing services. Typically they fall into two camps:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

There are others too (like the ones who bought into a Mobile Device Management platform that’s no longer working for them) but the two examples above are far and away the most common issues I see. When helping customers to understand their options for providing end user computing services, I like to step up a level from the technology – to get back to the logical building blocks of an end user computing solution. And, over time, I’ve developed and refined a diagram that seems to resonate pretty well with customers as a framework around which to build end user solutions.

Logical view on the end user computing landscape


Starting at the bottom left of the diagram, I’ll describe each of the main blocks in turn:

  • Identity and access: I start here because identity is absolutely key in any modern enterprise. If you’re still thinking about devices and operating systems – you’re doing it wrong (more on that later). Instead, the model is built around using someone’s identity to determine what applications they can access and how data is protected. Identity platforms work across cloud and on-premises environments, provide additional factors for authentication, self-service functionality (e.g. for password and group management), single sign-on to both corporate and cloud applications, integration with consumer and partner directory services and the ability to federate (i.e. to use a security token service to authenticate on-premises).
  • Data protection: with identity frameworks in place, let’s turn our attention to the data. Arguably there should be many more building blocks here but the main ones are around digital rights management, data loss prevention and endpoint security (firewalls, anti-virus, encryption, etc.).
  • Connectivity: until we all consume all of our services from the cloud, we generally need some form of connectivity to “the mothership”, whether that’s a client-less solution (like Microsoft DirectAccess) or another form of VPN. And of course that needs to run over some kind of network – typically a Wi-Fi or 4G connection but maybe Ethernet.
  • Devices: Arguably, there’s far too much attention paid to different types of devices here but there are considerations around form factor and ownership. Ultimately, with the correct levels of management control, it shouldn’t matter who owns the device but, for now, there’s a distinction between corporately-owned and user-owned devices. And what’s the “other” for? I use it as a placeholder to discuss embedded systems, etc.
  • Desktop operating system: Windows, macOS, Linux… increasingly it doesn’t matter what the OS is, as apps run cross-platform or even in a browser.
  • Mobile operating system: iOS, Android (maybe Windows Mobile). Again, it’s just a platform to run a browser – though there are considerations around native applications, app stores, etc. (we’ll come back to those in a short while).
  • Application delivery: this is where the “fun” starts. Often, this will be influenced by some technical debt – and many organisations will use more than one of the technologies listed. Apps may be locally installed – and they can be managed using a variety of management tools. In my world it’s System Center Configuration Manager, Intune and the major mobile app stores but, for others, there may be a different set of tools. Then there’s virtualised/containerised applications, remote desktops and published applications, trusted apps that run from a file share and, finally, the panacea that is a browser-delivered app. Which makes me think… maybe this diagram needs to consider add-ins and extensions… for now, let’s keep it simple.
  • Device and asset management: until we live in a world of entirely user-owned devices, there are assets to manage. Then, sadly, we have to control devices – whoever they belong to – whether that’s policy-driven device and application management, more traditional configuration management, or just the provision of a catalogue of approved applications. Then there’s alerting, perhaps backups (though maybe not if the data is stored away from devices) and something I’ve referred to as “desktop optimisation” which is really the management tools for some of the delivery methods and tools described elsewhere.
  • Productivity services: name your poison – Office 365 or G Suite – it doesn’t matter; these are the things that people do in their productivity apps. You may disagree with some of the categories (would Slack fit into enterprise social networking, or is it team sites?) but ultimately it’s about an extensible set of productivity services that end users can consume.
  • Input/output services: I print very little but others print a lot. Similarly, there’s scanning to be done. The paperless office is still not here…
  • Environmental management: over time, this will fade away in favour of mobile device and application management solutions but, today, many organisations still need to consider how they control the configuration of desktop operating systems – in the Windows world that might mean Group Policy and for other platforms it could be scripted.
  • Business data and applications: all of the stuff above means nothing if organisations can’t unlock the potential of their data – whether it’s in the CRM or ERP system, end user-driven reporting and BI, workflow or another line of business system.
  • High availability and business continuity: You’ll notice that this block has no subcomponents. For me, it’s nothing more than a consideration. If the end user computing architecture has been designed to be device and platform agnostic, then replacing a device should be straightforward – no need to maintain whole infrastructures for business continuity purposes. Similarly, if all I need is a device with an Internet connection and a browser, then the high availability conversation moves away from the end user computing platform and into how we provide the services that end users need to access.
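To show how the model can be put to work in a discussion, here’s a small, hedged Python sketch that captures the building blocks as an assessment checklist. The block names come from the diagram above; the current-state and target-state entries are illustrative assumptions, not recommendations.

```python
# Sketch: capture an EUC assessment against the logical building blocks.
# Block names mirror the diagram; the technology entries are illustrative.
from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    name: str
    current_state: list[str] = field(default_factory=list)
    target_state: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Target capabilities not yet in place."""
        return [t for t in self.target_state if t not in self.current_state]

assessment = [
    BuildingBlock(
        "Identity and access",
        current_state=["On-premises directory"],
        target_state=["On-premises directory", "Cloud identity", "MFA", "SSO"],
    ),
    BuildingBlock(
        "Application delivery",
        current_state=["Locally installed", "Published applications"],
        target_state=["Locally installed", "Browser-delivered"],
    ),
]

for block in assessment:
    print(f"{block.name}: gaps -> {', '.join(block.gaps()) or 'none'}")
```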

I’m sure the model will continue to develop over time – it’s far from perfect and some items will be de-emphasised over the years (for example the differentiation between mobile and desktop operating systems will become less important) whilst others will need to be added, but it seems a reasonable starting point around which to start a discussion.