In the second of my post-TechEd blog posts, I’ll take a look at one of the sessions I attended, in which Microsoft’s Eduardo Kassner spoke about various architectures for desktop delivery in relation to Microsoft’s vision for the Windows optimised desktop (CLI305). Again, I’ll stick to highlights in note form: a full write-up of the session wouldn’t be much fun to read!
- Kassner started out by looking at who defines the desktop environment, graphing desktop performance against configuration control:
- At the outset, the IT department (or the end user) installs approved applications and both configuration and performance are optimal.
- Then the user installs some “cool shareware”, perhaps some other approved applications or personal software (e.g. iTunes) and it feels like performance has bogged down a little.
- As time goes on, the PC may suffer a virus attack, the organisation lacks an inventory of the installed applications, and the configuration is generally unknown. Performance suffers as a result of the unmanaged change.
- Eventually, without control, update or maintenance, the PC becomes “sluggish”.
- Complaints about desktop environments typically come down to: a slow environment; application failures; complicated management and maintenance; difficulty in updating builds; and so on.
- Looking at how well we manage systems: image management; patch management; hardware/software inventory; roles/profiles/personas; operating system or application deployment; and application lifecycle are all about desktop configuration. And the related processes are equally applicable to a “rich client”, “terminal client” or a “virtual client”.
- Whatever the architecture, the list of required capabilities is the same: audit; compliance; configuration management; inventory management; application lifecycle; role based security and configuration; quality of service.
- Something else to consider is that hardware and software combinations grow over time: new generations of hardware are launched (each with new management capabilities) and new operating system releases support alternative means of increasing performance, managing updates and configuration – in 2008, Gartner wrote:
“Extending a notebook PC life cycle beyond three years can result in a 14% TCO increase”
and a few months earlier, they wrote that:
“Optimum PC replacement decisions are based on the operating system (OS) and on functional compatibility, usually four years”
[source: Gartner, Operational Considerations in Determining PC Replacement Life Cycle]
Looking across a variety of analyst reports, though, three years seems to be the optimal point (there is some variation depending on the considerations made, but the general window is 2-5 years).
- Regardless of the PC replacement cycle, the market is looking at two ways to “solve” the problem of running multiple operating system versions on multiple generations of hardware: “thin client” and “VDI” (also known as hosted virtual desktops) – but Kassner does not agree that these technologies alone can resolve the issues:
- In 1999, thin client shipments were 700,000 against a market size of 133m PCs [source: IDC 1999 Enterprise Thin Client Year in Review] – a little over 0.5% of the worldwide desktop market.
- In 2008, thin clients accounted for 3m units out of an overall market of 248m units [source: Gartner, 2008 PC Market Size Worldwide] – around 1.2% of the market, still a very small proportion.
- So what about the other 98.8% of the market? Kassner used 8 years’ worth of analyst reports to demonstrate that the TCO of a well-managed traditional desktop client and that of a Windows-based terminal were almost identical – and both considerably lower than that of an unmanaged desktop. The interesting point was that in recent years the analysts stopped referring to the different architectures and simply compared degrees of management! He then compared VDI scenarios, showing a 10% variance in TCO between a VDI desktop and a wide-open “regular desktop”, but when that desktop was locked down and well-managed the delta was only 2% – not enough to cover the setup cost of a VDI infrastructure (there’s a quick back-of-envelope sketch after the quote below). Kassner did stress that he wasn’t saying VDI was no good at all – just that it was not for all, and that a similar benefit can be achieved by simply virtualising the applications:
“Virtualized applications can reduce the cost of testing, packaging and supporting an application by 60%, and they reduced overall TCO by 5% to 7% in our model.”
[source: Gartner, TCO of Traditional Software Distribution vs. Application Virtualization]
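As an aside, the back-of-envelope arithmetic behind those market share figures and TCO deltas is easy to reproduce. The sketch below (Python, purely illustrative) uses the shipment and market-size numbers quoted above; the per-seat TCO figure is an invented placeholder, included only to show how small a 2% delta looks in absolute terms:

```python
# Back-of-envelope arithmetic behind the market-share and TCO points above.
# The shipment figures are the IDC/Gartner numbers quoted in the notes; the
# per-seat TCO used to illustrate the 2% delta is a purely hypothetical figure.

thin_clients_1999, pc_market_1999 = 700_000, 133_000_000
thin_clients_2008, pc_market_2008 = 3_000_000, 248_000_000

print(f"1999 thin client share: {thin_clients_1999 / pc_market_1999:.1%}")  # ~0.5%
print(f"2008 thin client share: {thin_clients_2008 / pc_market_2008:.1%}")  # ~1.2%

# Illustrative only: assume an annual per-seat TCO for a locked-down,
# well-managed desktop (the absolute number is invented; only the
# percentages come from the session).
annual_tco_per_seat = 3_000  # hypothetical currency units per seat, per year

saving_vs_unmanaged = 0.10 * annual_tco_per_seat  # ~10% delta vs a wide-open desktop
saving_vs_managed = 0.02 * annual_tco_per_seat    # ~2% delta vs a well-managed desktop

print(f"VDI saving vs unmanaged desktop: {saving_vs_unmanaged:.0f} per seat/year")
print(f"VDI saving vs well-managed desktop: {saving_vs_managed:.0f} per seat/year")
# At ~2% per seat per year, the saving has to be set against the up-front cost
# of building the VDI infrastructure, which is Kassner's point.
```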
- Having argued that thick vs. thin vs. VDI makes very little difference to desktop TCO, Kassner commented that the software-plus-services platform provides more options than ever, with access to applications from traditional PC, smartphone and web interfaces, on a mixture of corporately-owned and non-corporate assets (e.g. employees’ home PCs, or offshore contractors’ PCs). Indeed, application compatibility drives client device options, and this depends upon the supported development stack and presentation capabilities of the device – a smartphone (perhaps the first example of IT consumerisation, and also a “thin client” device in its own right) is an example of a device that provides just a subset of the overall feature set and so is not as “forgiving” as a PC – one size does not fit all!
- Kassner then went on to discuss opportunities for saving money with rich clients; but his summary was that it’s still a configuration management discussion:
- Using a combination of group policy, a corporate base image, data synchronisation and well-defined security policies, we can create a well-managed desktop.
- For this well-managed desktop, whether it is running on a rich client, a remote desktop client, with virtualised applications, using VDI or as a blade PC, we still need the same processes for image management, patch management, hardware/software inventory, operating system or application deployment, and application lifecycle management.
- Once we can apply the well-managed desktop to various user roles (e.g. mobile, office, or task-based workers) on corporate or non-corporate assets, we can say that we have an optimised desktop.
- Analysts indicate that “The PC of 2012 Will Morph Into the Composite Work Space” [source: Gartner], combining client hypervisors, application virtualisation, persistent personalisation and policy controls: effectively separating the various components for hardware, operating system and applications. Looking at Microsoft’s view on this (after all, this was a Microsoft presentation!), there are two products to look at – both of which are Software Assurance benefits from the Microsoft Desktop Optimization Pack (MDOP) (although competitive products are available):
- Application virtualisation (Microsoft App-V or similar) creates a package of an application and streams it to the desktop, eliminating the software installation process and isolating each application. This technology can be used to resolve conflicts between applications as well as to simplify application delivery and testing.
- Desktop virtualisation (MED-V with Virtual PC or similar) creates a container with a full operating system environment to resolve incompatibility between applications and an alternative operating system, running two environments on the same PC (and, although Eduardo Kassner did not mention this in his presentation, it’s managing those multiple environments that becomes a headache without suitable management toolsets – which is why I do not recommend Windows 7 XP Mode for the enterprise).
- Having looked at the various architectures and their (lack of) effect on TCO, Kassner moved on to discuss Microsoft’s strategy.
- In short, dependencies create complexity, so by breaking apart the hardware, operating system, applications and user data/settings the resulting separation creates flexibility.
- Using familiar technologies: we can manage the user data and settings with folder redirection, roaming profiles and group policy; we can separate applications using App-V, RemoteApps or MED-V, and we can run multiple operating systems (although Microsoft has yet to introduce a client-side hypervisor, or a solution capable of 64-bit guest support) on a variety of hardware platforms (thin, thick, or mobile) – creating what Microsoft refers to as the Windows Optimized Desktop.
- Microsoft’s guidance is to take the processes that produce a well-managed client to build a sustainable desktop strategy, then to define a number of roles (real roles – not departments or jobs – e.g. mobile, office, anywhere, task, contract/offshore) and select the appropriate distribution strategy (or strategies) – there’s a simple illustration of that role-to-delivery mapping in the sketch below. To help with this, there is a Windows Optimized Desktop solution accelerator (soon to become the Windows Optimized Desktop Toolkit).
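To make the “same processes, different delivery mechanism” point concrete, here is a minimal sketch of how a role-to-delivery mapping might be modelled. The capability and role names come from Kassner’s lists above; the data structure and the particular role-to-delivery assignments are my own illustration, not anything the session (or the solution accelerator) prescribes:

```python
from dataclasses import dataclass

# The management capabilities Kassner listed are constant, whatever the
# delivery architecture; only the delivery mechanism changes per role.
MANAGEMENT_CAPABILITIES = frozenset({
    "audit", "compliance", "configuration management", "inventory management",
    "application lifecycle", "role-based security and configuration",
    "quality of service",
})

@dataclass
class DesktopRole:
    name: str
    delivery: str  # e.g. rich client, session-based desktop, VDI, blade PC
    capabilities: frozenset = MANAGEMENT_CAPABILITIES

# Illustrative role-to-delivery assignments (mine, not Microsoft's):
roles = [
    DesktopRole("mobile worker", "rich client + App-V"),
    DesktopRole("office worker", "rich client + App-V"),
    DesktopRole("task worker", "session-based desktop (Remote Desktop Services)"),
    DesktopRole("contract/offshore", "VDI (hosted virtual desktop)"),
]

for role in roles:
    # Whatever the delivery mechanism, the management capability set is identical.
    assert role.capabilities == MANAGEMENT_CAPABILITIES
    print(f"{role.name}: delivered via {role.delivery}")
```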
There’s quite a bit more detail in the slides but these notes cover the main points. However you look at it, the architecture for desktop delivery is not that relevant – it’s how it’s managed that counts.
Sounds about right to me. You name the technology, I reckon a well managed PC beats it 95% of the time. VDI, I’m not sold. The costs for the hardware and licensing are too high. Terminal Services has a role and blends nicely with the PC.
Look at PC costs these days. They’re only a few euros/pounds more than a decent terminal. As for replaceability, do things right with your network and the user can self-provision a PC in 15 minutes and all of their data is on the network. Laptops: BitLocker rocks there for security, even for non-SA customers, thanks to the new low price of Ultimate edition.
The PC combined with well designed and run ConfigMgr, AD/Group Policy cannot be beat most of the time. And it’s only getting better. ConfigMgr vNext totally rocks. The briefing at TEE09 was an eye opener and I can’t wait to play with the beta.
I agree with much of this. I can’t imagine that Microsoft aren’t working on a client-side hypervisor but I haven’t heard a whisper about it… There’s certainly a strong message around the Optimized Desktop and that, coupled with the deployment scenarios in v.next of ConfigMgr, sure starts to deliver on the ‘single pane of glass’ configuration management we hear so much about.
I’ve long struggled with the ROI arguments for VDI. Given the ease of management you can achieve within System Center, I do not believe that the normal ROI arguments for VDI stack up, and if you don’t currently have top-of-the-range management tools and processes in place, your VDI project is going to be a disaster anyway. I have seen a couple of really good VDI projects, where the customer had a real use-case for the tech and it worked really well, but for the vast majority I don’t think it’s anywhere near ready yet and the investment would be much better made around desktop optimization NOW, easing the path to VDI if it ever gets to a point where it’s technically, economically and functionally viable/attractive.
I agree with Aiden above, I’m really looking forward to v.next – could be quite a wait though… In the meantime, roll on Service Manager!
If you lock things down so that software maintenance costs are minimal then hardware costs come to the fore. Hard drives, CD drives, RAM and CPU all cost more for a thick client than a thin client, so thin client wins on price. The performance of a good server is not affordable in every PC in an organization, so thin client solutions are the way to go. I would not even look at a thick client solution for more than a tiny percentage of seats on a typical system. Forget 3 or 4 year refresh cycles for thin clients; it’s more like 10 years. After all, most of us still use 1024×768, which has been available for nearly 20 years.
When I look at terminal servers, I see little need for anything from Redmond. If GNU/Linux has no app, I write one in PHP and deliver it through the web browser. UNIX shared memory maximizes performance per dollar on the server too.
The thin client shows the pix and sends the clicks, so it can run GNU/Linux. Saving licence costs on servers and clients removes a huge part of the cost of acquisition.
@Robert – you raise some good points but here are a couple more considerations:
The reports that the presentation I wrote about here was based on were all produced by independent analysts. There will be thin client fans, thick client fans and VDI fans, but the simple answer is that the desktop delivery technology is simply a means to an end – and one size rarely fits all.