Microsoft Ignite 2024 on a page

You probably noticed, but Microsoft held its Ignite conference in Chicago last week. As is normal now, there’s a “Book of News” for all the major announcements and the keynotes are available for online review. But there’s an awful lot to sort through. Luckily, CNET created a 15-minute summary of Satya Nadella’s keynote:

Major announcements from Ignite 2024

Last year, I wrote about how it was clear that Microsoft is all about Artificial Intelligence (AI) and this year is no different. The rest of this post focuses on the main announcements with a little bit of analysis from yours truly on what the implications might be.

Investing in security, particularly around Purview
What it means: Data governance is of central importance in the age of AI. Microsoft has announced updates to prevent oversharing, risky use of AI, and misuse of protected materials. With one of the major concerns being accidental access to badly-secured information, this will be an important development for those that make use of it.
Find out more: https://aka.ms/Ignite2024Security/

Zero Day Quest
What it means: A new hacking event with $4m in rewards. Bound to grab headlines!
Find out more: https://aka.ms/ZeroDayQuest

Copilot as the UI for AI
What it means: If there’s one thing to take away from Ignite, it’s that Microsoft sees Copilot as the UI for AI (it becomes the organising layer for work and how it gets done):

  1. Every employee will have a Copilot that knows them and their work – enhancing productivity and saving time.
  2. There will be agents to automate business processes.
  3. And the IT department has a control system to manage, secure and measure the impact of Copilot.

Copilot Actions
What it means: Copilot Actions are intended to reduce the time spent on repetitive everyday tasks – they were described as “Outlook Rules for the age of AI” (but for the entire Microsoft 365 ecosystem). I’m sceptical on these but willing to be convinced. Let’s see how well they work in practice.
Find out more: https://aka.ms/CopilotActions

Copilot Agents
What it means: If 2023-4 were about generative AI, “agentic” computing is the term for 2025.

There will be agents within the context of a team – teammates scoped to specific roles – e.g. a facilitator to keep meeting focus in Teams and manage follow-up/action items; a Project Management Agent in Planner – to create a plan and oversee task assignments/content creation; self-service agents to provide information – augmenting HR and IT departments to answer questions and complete tasks; and a SharePoint Agent per site – providing instant access to real-time information.

Organisations can create their own agents using Copilot Studio – and the aim is that it should be as easy to create an agent as it is to create a document.
Find out more: https://aka.ms/AgentsInM365

Copilot Analytics
What it means: Answering criticism about the cost of licensing Copilot, Microsoft is providing analytics to correlate usage to a business metric. Organisations will be able to tune their Copilot usage to business KPIs and show how Copilot usage is translating into business outcomes.
Find out more: https://aka.ms/CopilotAnalytics

Mobile Application Management on Windows 365
What it means: Microsoft is clearly keen to push its “cloud PC” concept – Windows 365 – with new applications so that users can access a secure computing environment from iOS and Android devices. Having spent years working to bring clients away from expensive thin client infrastructure and back to properly managed “thick clients”, I’m not convinced about the “Cloud PC”, but maybe I’m just an old man shouting at the clouds…
Find out more: https://aka.ms/WindowsAppAndroid

Windows 365 Link
What it means: Windows 365 Link is a simple, secure, purpose-built access device (aka a thin PC). It’s admin-less and password-less, with security configurations enabled by default that cannot be turned off. The aim is that users can connect directly to their cloud PC with no data left locally (available from April 2025). If you’re going to invest in this approach, then it could be a useful device – but it’s not a Microsoft version of a Mac Mini – it’s all about the cloud.
Find out more: https://aka.ms/Windows365Link

Windows Resiliency Initiative
What it means: Does anyone remember “Trustworthy Computing”? Well, the Windows Resiliency Initiative is the latest attempt to make Windows more secure and reliable. It includes new features like Windows Hotpatch to apply critical updates without a restart across an entire IT estate.
Find out more: https://aka.ms/WinWithSecurity

Azure Local
What it means: A rebranding and expansion of Azure Stack to bring Azure Arc to the edge. Organisations can run mission-critical workloads in distributed locations.
Find out more: https://aka.ms/AzureLocal

Azure Integrated HSM
What it means: Microsoft’s first in-house security chip hardens key management without impacting performance. This will be part of every new server deployed on Azure starting next year.
Find out more: https://aka.ms/AzureIntegratedHSM

Azure Boost
What it means: Microsoft’s first in-house data processing unit (DPU) is designed to accelerate data-centric workloads. It can run cloud storage workloads with 3x less power and 4x the performance.
Find out more: https://aka.ms/AzureBoostDPU

Preview NVIDIA Blackwell AI infrastructure on Azure
What it means: By this point, even I’m yawning, but this is a fantastically fast computing environment for optimised AI training workloads. It’s not really something that most of us will use.
Find out more: https://aka.ms/NDGB200v6

Azure HBv5
What it means: Co-engineered with AMD, this was described as a new standard for high-performance computing and cited as being up to 8 times faster than any other cloud VM.
Find out more: https://aka.ms/AzureHBv5

Fabric
What it means: SQL Server is coming natively to Fabric in the form of Microsoft Fabric Databases. The aim here is to simplify operational databases, as Fabric already did for analytical requirements. It provides an enterprise data platform that serves all use cases, making use of open source formats in the Fabric OneLake data lake. I have to admit, it does sound very interesting, but there will undoubtedly be some nuances that I’ll leave to my data-focused colleagues.
Find out more: https://aka.ms/Fabric

Azure AI Foundry
What it means: Described as a “first class application server for the AI age” – unifying all models, tooling, safety and monitoring into a single experience, integrated with development tools as a standalone SDK and a portal. There are 1800 models in the catalogue for model customisation and experimentation.
Find out more: https://aka.ms/MaaSExperimentation and https://aka.ms/CustomizationCollaborations

Azure AI Agent Service
What it means: Build, deploy and scale AI apps to automate business processes. Where Copilot Studio offers a graphical approach, this provides a code-first approach for developers to create agents, grounded in data, wherever it is.
Find out more: https://ai.azure.com/

Other AI announcements
What it means: There will be AI reports and other management capabilities in Foundry, including evaluation of models. Safety is also important – with tools to build secure AI, including Prompt Shields to detect/block manipulation of outputs and risk/safety evaluations for image content.

Quantum Computing
What it means: This will be the buzzword that replaces AI in the coming years. Quantum is undoubtedly significant but it’s still highly experimental. Nevertheless, Microsoft is making progress in the quantum arms race, with the “world’s most powerful quantum computer” with 24 logical qubits, double the previous record.
Find out more: https://aka.ms/AQIgniteBlog

Featured image: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

An updated approach to Virtual Desktop Infrastructure: Azure Virtual Desktop on Azure Stack HCI

With the general availability of Azure Virtual Desktop (AVD) on Azure Stack HCI, organisations have a powerful new platform for providing virtual desktop infrastructure (VDI) services. No longer torn between complex and expensive on-premises server farms and desktops running in the cloud – the best of both worlds is here.

The complexities of managing an end user computing service

Over the many years I’ve consulted with IT departments, one of the many common themes has been around the complexities of managing end user computing services.

It used to be that investments in standard PC builds led to a plethora of additional management products. These days, Windows (or, more accurately, Microsoft 365) handles so much of this natively that the layers of added products are no longer needed. With a Modern Workplace solution, we can deploy a new PC from a factory image, log on with a username from a recognised domain (for example user@node4.co.uk), build that PC to meet corporate standards and get the end user up and running quickly with access to their data, all in a secure and compliant manner.

But there have always been edge cases. The legacy application that is critical to the business but doesn’t run on a modern version of Windows. Or the application with a licensing model that doesn’t lend itself to being installed on everyone’s PC for occasional use. For these cases, virtual desktop infrastructure has been a common approach to publish an application or a desktop.

For other organisations, the use of VDI is seen as an opportunity to abstract the desktop from the device. Either saving on device costs by buying lightweight terminal devices that connect to a farm of virtual desktops, or by using the secure desktop container as an opportunity to allow access from pretty much anywhere, because the device doesn’t directly access the organisation’s data.

I’m not going to advocate for one approach over another. Of course, I have Opinions but, at Node4, we start from the position of understanding the business problem we’re trying to solve, and then working out which technology will best support that outcome.

A shift in the landscape

Whilst Microsoft has had its own remote desktop offerings over the years, it has tended to partner with companies like Citrix rather than develop a full-blown solution of its own. Meanwhile, companies like VMware had their own products – though with the Broadcom acquisition and the sale of its VDI products (including Workspace One and Horizon), their future looks uncertain.

This makes customers uneasy. But there is hope.

Microsoft has not stood still and, for a few years now, it has been maturing its VDI-in-the-cloud service – Azure Virtual Desktop.

AVD provides a secure, remote desktop experience from anywhere, delivering a virtualised desktop experience that’s fully optimised for Windows 11 and Windows 10 multi-session capabilities. With various licensing options including within key Microsoft 365 subscription plans, AVD is now an established service. So much so that there are even products and services to help with managing AVD environments – for example from Nerdio. But, until recently, the biggest drawback with AVD was that it only ran in the public cloud – and whilst that’s exactly what some organisations need, it’s not suitable for some others.

A true hybrid cloud solution

(At this point, I’m tempted to introduce a metaphor about when cloud computing comes to ground. But fog and mist don’t conjure up the image I’m trying to project here…)

Recently, there has been a significant development with AVD. It’s largely gone unnoticed – but AVD is now generally available on Azure Stack HCI.

Azure Stack extends the robust capabilities of Azure’s cloud services to be run locally – either in an on-premises or an edge computing scenario. Azure Stack hyperconverged infrastructure (HCI) is a hybrid product that connects on-premises systems to Azure for cloud-based services, monitoring, and management. Effectively, Azure Stack HCI provides many of the benefits of public cloud infrastructure whilst meeting the use case and regulatory requirements for specialised workloads that can’t be run in the public cloud. 

Some of the benefits of running Azure Virtual Desktop on Azure Stack HCI

There are many advantages to running software locally. Immediate examples are to address data residency requirements, latency-sensitive workloads, or those with data proximity requirements. Looking specifically at Azure Virtual Desktop on Azure Stack HCI, we can:

  • Improve performance by placing session hosts closer to the end users.
  • Keep application and user data on-premises and so local to the users.
  • Improve access and performance for legacy client-server applications by co-locating the application and its data sources.
  • Provide a full Windows 11 experience regardless of the device used for access.
  • Benefit from unified management alongside other Azure resources.
  • Make use of fully patched operating system images from the Azure Marketplace.

What about my existing VDI?

Node4 will aid you in finding the best path from your existing VDI to AVD. Our consultants are experienced in using Microsoft’s Cloud Adoption Framework for Azure to establish an AVD landing zone and to take a structured approach to assessing and migrating existing workloads, user profiles and data to AVD.

Why Node4 is best positioned to help

I’ve already written about how Node4’s expert Consultants can help deploy Azure Virtual Desktop to meet your organisation’s specific needs but that’s only looking at one small part of the picture.

Because, for those organisations who don’t want to invest in hardware solutions, we have hosted services for Azure Stack HCI. We also provide flexible and secure communications solutions. And we’re an Azure Expert Managed Services Provider (MSP).

I may be a little biased, but I think that’s a pretty strong set of services. Put them all together and we are uniquely positioned to help you make the most of AVD on-premises, co-located in one of our datacentres, on a Node4 hosted platform, or in the public cloud.

So, if you are looking at how to modernise your VDI, we’d love to hear from you. Feel free to get in touch using the contact details below – and let’s have a conversation.

This post was originally published on the Node4 blog.

Weeknote 1/2024: A new beginning

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Wow, that was a bump. New Year celebrations over, a day off for the public holiday, and straight back to work.

After a lot of uncertainty in December, I’ve been keen to get stuck in to something valuable, and I’m not breaking any confidentiality by saying that my focus right now is on refreshing the collateral behind Node4’s Public Cloud offerings. I need to work across the business – my Office of the CTO (OCTO) role is about strategy, innovation and offering development – but the work also needs to include specialist sales colleagues, our marketing teams, and of course the experts that actually deliver the engagements.

So that’s the day job. Alongside that, I’ve been:

  • Avoiding stating any grand new year resolutions. I’ll only break them. It was literally hours before I broke my goal of not posting on Twitter/X this year. Though I did step away from a 453-day streak on Duolingo to focus my spare time on other, hopefully less gamified, pursuits.
  • Doing far too little exercise. A recurring health condition is impacting my ability to walk, run, cycle and to get back to Caveman Conditioning. It’s getting a bit better but it may be another week before I can have my new year fitness kick-start.
  • Eating badly. Logging everything in the Zoe app is helping me to see what I should avoid (spoiler: I need to eat more plants and less sweet stuff) but my willpower is still shockingly bad. I was also alarmed to see Prof. Tim Spector launching what appeared to be an ultra-processed food (UPF) product. More on that after I’ve got to M&S and actually seen the ingredients list for the Zoe Gut Shot, but others are telling me it’s not a UPF.
  • Redesigning the disaster recovery strategy for my photos. I learned the hard way several years ago that RAID is not a backup, and nothing exists unless it’s in three places. For me that’s the original, a copy on my Synology NAS, and a copy in the cloud. My cloud (Azure) backups were in a proprietary format from the Synology Hyper Backup program, so I’ve started to synchronise the native files by following a very useful article from Charbel Nemnom, MVP (see the sketch after this list). Unfortunately, the timestamps get re-written on synchronisation, but the metadata is still inside the files and these are the disaster copies – hopefully I’ll never need to rely on them.
  • Watching the third season of Slow Horses. No spoilers please. I still have 4 episodes to watch… but it’s great TV.
  • Watching Mr Bates vs. The Post Office. The more I learn about the Post Office Scandal, the more I’m genuinely shocked. I worked for Fujitsu (and, previously, ICL) for just over 15 years. I was nothing to do with Horizon, and knew nothing of the scandal, but it’s really made me think about the values of the company where I spent around half my career to date.
  • Spreading some of my late Father-in-law’s ashes by his tree in the Olney Community Orchard.
  • Meeting up with old friends from my “youth”, as one returns to England from his home in California, for a Christmas visit.
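
On that photo synchronisation point, the same idea can be scripted. Here’s a minimal sketch using AzCopy to mirror a local photo library to Azure blob storage – not necessarily the method from Charbel’s article, and the account name, paths and SAS token are all hypothetical:

```powershell
# One-way sync of a local photo library to an Azure blob container.
# Assumes AzCopy is installed and a SAS token has been generated for the
# destination container (all names here are hypothetical).
azcopy sync 'D:\Photos' 'https://myphotobackup.blob.core.windows.net/photos?<SAS-token>' --recursive=true
```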

Other things

Other things I found noteworthy this week:

  • Which came first, the chicken or the egg… I mean, the scissors or the blister-pack?

Press coverage

This week, I was quoted in this article:

Coming up

This weekend will see:

  • A return to Team MK Youth Cycle Coaching. Our local cyclo-cross league is finished for the 2023/4 season so we’re switching back to road cycling as we move into the new year.
  • Some home IT projects (more on them next week).
  • General adulting and administration.

Next week, I’ll be continuing the work I mentioned at the head of this post, but also joining an online Group Coaching session from Professor John Amaechi OBE. I have no idea what to expect but I’m a huge fan of his wise commentary. I’m also listening to The Promises of Giants on Audible. (I was reading on Kindle, but switched to the audiobook.)

This week in photos

Featured image: Author’s own
(this week’s flooding of the River Great Ouse at Olney)

What did we learn at Microsoft Ignite 2023?

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Right now, there’s a whole load of journalists and influencers writing about what Microsoft announced at Ignite. I’m not a journalist, and Microsoft has long since stopped considering me as an influencer. Even so, I’m going to take a look at the key messages. Not the technology announcements – for those there’s the Microsoft Ignite 2023 Book of News – but the real things IT Leaders need to take away from this event.

Microsoft’s investment in OpenAI

It’s all about AI. I know, you’re tired of the hype, suffering from AI fatigue, but for Microsoft, it really is about AI. And if you were unconvinced just how important AI is to Microsoft’s strategy, their action to snap up key members of staff from an imploding OpenAI a week later should be all you need to see.

Tortoise Media‘s Barney Macintyre (@barneymac) summed it up brilliantly when he said that

“Satya Nadella, chief executive of Microsoft, has played a blinder. Altman’s firing raised the risk that he would lose a key ally at a company into which Microsoft has invested $13 billion. After it became clear the board wouldn’t accept his reinstatement, Nadella offered jobs to Altman, Brockman and other loyalist researchers thinking about leaving.

The upshot: a new AI lab, filled with talent and wholly owned by Microsoft – without the bossy board. An $86 billion subsidiary for a $13 billion investment.”

But the soap opera continued and, by the middle of the week, Altman was back at OpenAI, apparently with the blessing of Microsoft!

If nothing else, this whole saga should reinforce just how important OpenAI is to Microsoft.

The age of the Copilot

Copilot is Microsoft’s brand for a set of assistive technologies that will sit alongside applications and provide an agent experience, built on ChatGPT, Dall-E and other models. Copilots are going to be everywhere. So much so that there is a “stack” for Copilot and Satya described Microsoft as “a Copilot company”.

That stack consists of:

  • The AI infrastructure in Azure – all Copilots are built on AzureAI.
  • Foundation models from OpenAI, including the Azure OpenAI Service to provide access in a protected manner but also new OpenAI models, fine-tuning, hosted APIs, and an open source model catalogue – including Models as a Service in Azure.
  • Your data – and Microsoft is pushing Fabric as all the data management tools in one SaaS experience, with onwards flow to Microsoft 365 for improved decision-making, Purview for data governance, and Copilots to assist. One place to unify, prepare and model data (for AI to act upon).
  • Applications, with tools like Microsoft Teams becoming more than just communication and collaboration but a “multi-player canvas for business processes”.
  • A new Copilot Studio to extend and customise Microsoft Copilot, with 1100 prebuilt plugins and connectors for every Azure data service and many common enterprise data platforms.
  • All wrapped with a set of AI safety and security measures – both in the platform (model and safety system) and in application (metaprompts, grounding and user experience).

In addition to this, Bing Chat is now re-branded as Copilot – with an enterprise version at no additional cost to eligible Entra ID users. On LinkedIn this week, one Microsoft exec posted that “Copilot is going to be the new UI for work”.

In short, Copilots will be everywhere.

Azure as the world’s computer

Of course, other cloud platforms exist, but I’m writing about Microsoft here. So what did they announce that makes Azure even more powerful and suited to running these new AI workloads?

  • Re-affirming the commitment to zero carbon power sources and then becoming carbon negative.
  • Manufacturing their own hollow-core fibre to drive up speeds.
  • Azure Boost (offloading server virtualisation processes from the hypervisor to hardware).
  • Taking the innovation from Intel and AMD but also introducing new Microsoft silicon: Azure Cobalt (ARM-based CPU series) and Azure Maia (AI accelerator in the form of an LLM training and inference chip).
  • More AI models and APIs. New tooling (Azure AI Studio).
  • Improvements in the data layer with enhancements to Microsoft Fabric. The “Microsoft Intelligent Data Platform” now has 4 tenets: databases; analytics; AI; and governance.
  • Extending Copilot across every role and function (as I briefly discussed in the previous section).

In summary, and looking forward

Microsoft is powering ahead on the back of its AI investments. And, as tired of the hype as we may all be, it would be foolish to ignore it. Copilots look to be the next generation of assistive technology that will help drive productivity. Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

In time we’ll see AI and mixed reality coming together to make sense of our intent and the world around us. Voice, gestures, and where we look become new inputs – the world becomes our prompt and interface.

Featured images: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Weeknote 20/2020: back to work

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Looking back on another week of tech exploits during the COVID-19 coronavirus chaos…

The end of my furlough

The week started off with exam study, working towards Microsoft exam AZ-300 (as mentioned last week). That was somewhat derailed when I was asked to return to work from Wednesday, ending my Furlough Leave at very short notice. With 2.5 days lost from my study plan, it shouldn’t have been a surprise that I ended my working week with a late-night exam failure (though it was still a disappointment).

Returning to work is positive though – whilst being paid to stay at home may seem ideal to some, it didn’t work so well for me. I wanted to make sure I made good use of my time, catching up on personal development activities that I’d normally struggle to fit in. But I was also acutely aware that there were things I could be doing to support colleagues but which I wasn’t allowed to. And, ultimately, I’m really glad to be employed during this period of economic uncertainty.

Smart cities

It looks like one of my main activities for the next few weeks will be working on a Data Strategy for a combined authority, so I spent Tuesday afternoon trying to think about some of the challenges that an organisation with responsibility for transportation and economic growth across a region might face. That led me to some great resources on smart cities including these:

  • There are some inspirational initiatives featured in this video from The Economist:
  • Finally (and if you only have a few minutes to spare), this short video from Vinci Energies provides an overview of what smart cities are really about:

Remote workshop delivery

I also had my first experience of taking part in a series of workshops delivered using Microsoft Teams. Teams is a tool that I use extensively, but normally for internal meetings and ad-hoc calls with clients, not for delivering consulting engagements.

Whilst they would undoubtedly have been easier performed face-to-face, that’s just not possible in the current climate, so the adaptation was necessary.

The rules are the same, whatever the format – preparation is key. Understand what you’re looking to get out of the session and be ready with content to drive the conversation if it’s not quite headed where you need it to.

Editing/deleting posts in Microsoft Teams private channels

On the subject of Microsoft Teams, I was confused earlier this week when I couldn’t edit one of my own posts in a private channel. Thanks to some advice from Steve Goodman (@SteveGoodman), I found that the ability to delete and/or edit messages is set separately on a private channel (normal channels inherit from the team).

The Microsoft Office app

Thanks to Alun Rogers (@AlunRogers), I discovered the Microsoft Office app this week. It’s a great companion to Office 365 (or Microsoft 365), searching across all apps – similar to Delve, but in an app rather than in the browser. The Microsoft Office app is available for download from the Microsoft Store.

Azure Network Watcher

And, whilst on the subject of nuggets of usefulness in the Microsoft stable…

A little piece of history

I found an old map book on my shelf this week: a Halford’s Pocket Touring Atlas of Great Britain and Ireland, priced at sixpence. I love poring over maps – they provide a fascinating insight into the development of the landscape and the built environment.

That’s all for now

Those are just a few highlights (and a lowlight) from the week – there’s much more on my Twitter feed.

“Disaster Recovery” and related thoughts…

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Backup, Archive, High Availability, Disaster Recovery, Business Continuity. All related. Yet all different.

One of my colleagues was recently faced with needing to run “a DR [disaster recovery] workshop” for a client. My initial impression was:

  • What disasters are they planning for?
  • I’ll bet they are thinking about Coronavirus and working remotely. That’s not really DR.
  • Or are they really thinking about a backup strategy?

So I decided to turn some of my rambling thoughts into a blog post. Each of these topics could be a post in its own right – I’m just scraping the surface here…

Let’s start with backup (and recovery)

Backups (of data) are a fairly simple concept. Anything that would create a problem if it was lost should be backed up. For example, my digital photos are considered to not exist at all unless they are synchronised (or backed up) to at least two other places (some network-attached storage, and the cloud).

In a business context, we run backups in order to be able to recover (restore) our content (configuration or data) within a given window. We may have weekly full backups and daily incremental or differential backups (perhaps with more regular snapshots), then retain parent, grandparent and great-grandparent copies of the full backups (four weeks) and keep each of these as (lunar) monthly backups for a year. That’s just an example – each organisation will have its own backup/retention policies and those backups may be stored on or off-site, on tape or disk.
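
As an aside, the generation-based retention described above is straightforward to express in code. Here’s a minimal PowerShell sketch of a grandfather-father-son check – the windows (7 days/4 weeks/12 months), paths and file layout are illustrative assumptions, not a recommendation:

```powershell
# Decide whether a backup taken on a given date should still be retained,
# using an illustrative grandfather-father-son scheme:
# - daily backups kept for 7 days ("son")
# - weekly (Sunday) fulls kept for 4 weeks ("father")
# - monthly (1st of the month) fulls kept for a year ("grandfather")
function Test-ShouldRetain {
    param (
        [datetime]$BackupDate,
        [datetime]$Now = (Get-Date)
    )
    $age = $Now - $BackupDate
    if ($age.TotalDays -le 7) { return $true }
    if ($age.TotalDays -le 28 -and $BackupDate.DayOfWeek -eq 'Sunday') { return $true }
    if ($age.TotalDays -le 365 -and $BackupDate.Day -eq 1) { return $true }
    return $false
}

# Example: report which backup files in a (hypothetical) repository have expired
Get-ChildItem -Path 'D:\Backups' -Filter '*.bak' |
    Where-Object { -not (Test-ShouldRetain -BackupDate $_.LastWriteTime) } |
    ForEach-Object { Write-Output "Expired: $($_.Name)" }
```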

In summary, backups are about making sure we have an up to date copy of our important configuration information and data, so we can recover it if the primary copy is lost or damaged.

And for bonus content, some services we might consider in a modern infrastructure context include Azure Backup or AWS Backup.

Backups must be verified and periodically tested in order to have any use.

Archiving information

When I wrote about backups above, I mentioned keeping multiple copies covering various points in time. Whilst some may consider this adequate for archival, archival is the storage of data for long-term preservation of read-only access – for example, documents that must be stored for an extended period of time (7, 10, 25, even 99 years). Once that would have been paper documents, in boxes. Now it might be digital files (or database contents) on tape or disk (potentially cloud storage).

Archival might still use backup software and associated retention policies, but we’ll think carefully about the medium we store it on. For very long term physical storage we might need to consider the media formats (paper is bulky and transferred to microfiche, or old magnetic media degrades, so it’s moved to optical storage – but the hardware becomes obsolete, so it’s moved to another format). If storing on disk (on-premises or in the cloud), we can use slower (cheaper) disks and accept that restoration from the archive may take additional time.

In summary, archival is about long-term data storage, generally measured in many years and archives might be stored off-line, or near-line.

Technologies we might use for archival are similar to backups, but we could consider lower-cost storage – e.g. Azure Storage‘s Cool or Archive tiers or Amazon S3 Glacier.
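
To illustrate, with Azure Storage the tier can be set per blob. A minimal sketch using the Az PowerShell module (the account, container and blob names are hypothetical):

```powershell
# Move an existing blob to the Archive tier. Archived blobs are cheap to keep
# but must be rehydrated (which takes time) before they can be read again.
Connect-AzAccount
$ctx  = New-AzStorageContext -StorageAccountName 'mybackupaccount' -UseConnectedAccount
$blob = Get-AzStorageBlob -Container 'archive' -Blob 'photos-2015.zip' -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier('Archive')
```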

Keeping systems highly available

High Availability (HA) is about making sure that our systems are available for as much time as possible – or certainly within a given service level agreement (SLA).

Traditionally, we used technologies like a redundant array of inexpensive disks (RAID), error-checking memory, or redundant power supplies. We might also have created server clusters or farms. All of these methods have the intention of removing single points of failure (SPOFs).

In the cloud, we leave a lot of the infrastructure considerations to the cloud service provider and we design for failure in other ways.

  • We assume that virtual machines will fail and create availability sets.
  • We plan to scale out across multiple hosts for applications that can take advantage of that architecture.
  • We store data in multiple regions.
  • We may even consider multiple clouds.

Again, the level of redundancy built into the app and its supporting infrastructure must be designed according to requirements – as defined by the SLA. There may be no point in providing an expensive four nines uptime for an application that’s used once a month by one person, who works normal office hours. But, then again, what if that application is business critical – like payroll? Again, refer to the SLA – and maybe think about business continuity too… more on that in a moment.

Some of my clients have tried to implement Windows Server clusters in Azure. I’ve yet to be convinced and still consider that it’s old-world thinking applied in a contemporary scenario. There are better ways to design a highly available file service in 2020.

In summary, high availability is about ensuring that an application or service is available within the requirements of the associated service level agreement.

Technologies might include some of the hardware considerations I listed earlier, but these days we’re probably thinking more about:

Remember to also consider other applications/systems upon which an application relies.

Also, quoting from some of Microsoft’s training materials:

“To achieve four 9’s (99.99%), you probably can’t rely on manual intervention to recover from failures. The application must be self-diagnosing and self-healing.

Beyond four 9’s, it is challenging to detect outages quickly enough to meet the SLA.

Think about the time window that your SLA is measured against. The smaller the window, the tighter the tolerances. It probably doesn’t make sense to define your SLA in terms of hourly or daily uptime.”

Microsoft Learn: Design for recoverability and availability in Azure: High Availability
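
To put those nines into perspective, here’s a quick worked example of annual downtime allowances, and of how availability compounds when an application depends on multiple services (the figures are illustrative):

```powershell
# Convert an availability SLA into allowable downtime per year
$slas = @{ '99.9%' = 0.999; '99.95%' = 0.9995; '99.99%' = 0.9999 }
foreach ($sla in $slas.GetEnumerator()) {
    $hours = (1 - $sla.Value) * 365 * 24
    Write-Output ('{0} allows roughly {1:N1} hours of downtime per year' -f $sla.Key, $hours)
}
# 99.9% -> ~8.8 hours; 99.95% -> ~4.4 hours; 99.99% -> ~0.9 hours (~53 minutes)

# Composite SLA: an app with a 99.95% web tier AND a 99.99% database is only
# as available as the product of the two
$composite = 0.9995 * 0.9999
Write-Output ('Composite availability: {0:P3}' -f $composite)   # ~99.940%
```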

Disaster recovery

As the name suggests, Disaster Recovery (DR) is about recovering from a disaster, whatever that might be.

It could be physical damage to a piece of hardware (a switch, a server) that requires replacement or recovery from backup. It could be a whole server room or datacentre that’s been damaged or destroyed. It could be data loss as a result of malicious or accidental actions by an employee.

This is where DR plans come into play – firstly analysing the risks that might lead to disaster (including possible data loss and major downtime scenarios) and then looking at recovery objectives – the application’s recovery point objective (RPO) and recovery time objective (RTO).

Quoting Microsoft’s training materials again:

An illustration showing the duration, in hours, of the recovery point objective and recovery time objective from the time of the disaster.

“Recovery Point Objective (RPO): The maximum duration of acceptable data loss. RPO is measured in units of time, not volume: “30 minutes of data”, “four hours of data”, and so on. RPO is about limiting and recovering from data loss, not data theft.

Recovery Time Objective (RTO): The maximum duration of acceptable downtime, where “downtime” needs to be defined by your specification. For example, if the acceptable downtime duration is eight hours in the event of a disaster, then your RTO is eight hours.”

Microsoft Learn: Design for recoverability and availability in Azure: Disaster Recovery

For example, I may have a database that needs to be able to withstand no more than 15 minutes’ data loss and an associated SLA that dictates no more than 4 hours’ downtime in a given period. For that, my RPO is 15 minutes and the RTO is 4 hours. I need to make sure that I take snapshots (e.g. of transaction logs for replay) at least every 15 minutes and that my restoration process to get from offline to fully recovered takes no more than 4 hours (which will, of course, determine the technologies used).
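
Continuing that example, RPO compliance is something that can be monitored. A minimal sketch, assuming transaction log backups land as .trn files on a (hypothetical) share:

```powershell
# Warn if the newest transaction log backup is older than the 15-minute RPO
$rpo = New-TimeSpan -Minutes 15
$latest = Get-ChildItem -Path '\\backupserver\sql\mydb' -Filter '*.trn' |
    Sort-Object -Property LastWriteTime -Descending |
    Select-Object -First 1
if (((Get-Date) - $latest.LastWriteTime) -gt $rpo) {
    Write-Warning "RPO at risk: last log backup was $($latest.LastWriteTime)"
}
```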

Considerations when creating a DR plan might include:

  • What are the requirements for each application/service?
  • How are systems linked – what are the dependencies between applications/services?
  • How will you recover within the required RPO and RTO constraints?
  • How can replicated data be switched over?
  • Are there multiple environments (e.g. dev, test and production)?
  • How will you recover from logical errors in a database that might impact several generations of backup, or that may have spread through multiple data replicas?
  • What about cloud services – do you need to backup SaaS data (e.g. Office 365)? (Possibly not, if you’re happy with a retention-period-based restoration from a “recycle bin” or similar – but what if an administrator deletes some data?)

As can be seen, there are many factors here – more than I can go into in this blog post, but a disaster recovery strategy needs to consider backup/recovery, archive, availability (high or otherwise), technology and service (it may help to think about some of the ITIL service design processes).

In summary, disaster recovery is about having a plan to be able to recover from an event that results in downtime and data loss.

Technologies that might help include Azure Site Recovery. Applications can also be designed with data replication and recovery in mind, for example, using geo-replication capabilities in Azure Storage/Amazon S3 or Azure SQL Database/Amazon RDS, or using a globally-distributed database such as Azure Cosmos DB. And DR plans must be periodically tested.
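
As a simple illustration of designing replication in from the start, here’s a sketch of creating a geo-replicated storage account with the Az PowerShell module – RA-GRS keeps a readable secondary copy in the paired Azure region (the names and location are hypothetical):

```powershell
# Create a storage account with read-access geo-redundant storage (RA-GRS)
New-AzResourceGroup -Name 'rg-dr-demo' -Location 'uksouth'
New-AzStorageAccount -ResourceGroupName 'rg-dr-demo' `
    -Name 'drdemostorage001' `
    -Location 'uksouth' `
    -SkuName 'Standard_RAGRS' `
    -Kind 'StorageV2'
```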

Business continuity

Finally, Business Continuity (BC). This is something that many organisations will have had to contend with over the last few weeks and months.

BC is often confused with DR but they are different. Business continuity is about continuing to conduct business when something goes wrong. That may be how to carry on working whilst recovering from a disaster. Or it may be how to adapt processes to allow a workforce to continue functioning in compliance with social distancing regulations.

Again, BC needs a plan. But many of those plans will be reconsidered now – if your BC arrangements are that in the event of an office closure, people go to a hosted DR site with some spare equipment that will be made available within an agreed timescale, that might not help in the event of a global pandemic, when everyone else wants to use that facility. Instead, how will your workforce continue to work at home? Which systems are important? How will you provide secure remote access to those systems? (How will you serve customers whilst employees are also looking after children?) The list goes on.

Technology may help with BC, but technology alone will not provide a solution. The use of modern approaches to End User Computing will certainly make secure remote and mobile working a possibility (indeed, organisations that have taken a modern approach will probably already be familiar with those practices) but a lot of the issues will relate to people and process.

In summary, Business Continuity plans may be invoked if there is a disaster but they are about adapting business processes to maintain service in times of disruption.

Wrapping up

As I was writing this post, I thought about many tangents that I could go off and cover. I’m pretty sure the topic could be a book and this post scrapes the surface. Nevertheless, I hope my thoughts are useful and show that disaster recovery cannot be considered in isolation.

Microsoft Online Services: tenants, subscriptions and domain names

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I often come across confusion with clients trying to understand the differences between tenants, subscriptions and domain names when deploying Microsoft services. This post attempts to clear up some misunderstandings and to – hopefully – make things a little clearer.

Each organisation has a Microsoft Online Services tenant which has a unique DNS name in the format organisationname.onmicrosoft.com. This is unique to the tenant and cannot be changed. Of course, a company can establish multiple organisations, each with its own tenant but these will always be independent of one another and need to be managed separately.

It’s important to remember that each tenant has a single Azure Active Directory (Azure AD). There is a 1:1 relationship between the Azure AD and the tenant. The Azure AD directory uses a unique tenant ID, represented in GUID format. Azure AD can be synchronised with an existing on premises Active Directory Domain Services (AD DS) directory using the Azure AD Connect software.

Multiple service offerings (services) can be deployed into the tenant: Office 365; Intune; Dynamics 365; Azure. Some of these services support multiple subscriptions that may be deployed for several reasons, including separation of administrative control. Quoting from the Microsoft documentation:

“An Azure subscription has a trust relationship with Azure Active Directory (Azure AD). A subscription trusts Azure AD to authenticate users, services, and devices.

Multiple subscriptions can trust the same Azure AD directory. Each subscription can only trust a single directory.”

Associate or add an Azure subscription to your Azure Active Directory tenant

Multiple custom (DNS) domain names can be applied to services – so mycompany.com, mycompany.co.uk and myoldcompanyname.com could all be directed to the same services – but there is still a limit of one tenant name per tenant.
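
The tenant/subscription relationship is easy to see from PowerShell. A minimal sketch using the Az module (the output is, of course, specific to whatever your account can see):

```powershell
Connect-AzAccount     # sign in with an Azure AD account
Get-AzTenant          # shows the tenant ID (GUID) and its associated domain names
# Each subscription trusts exactly one Azure AD tenant:
Get-AzSubscription | Select-Object Name, Id, TenantId
```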

Further reading

Subscriptions, licenses, accounts, and tenants for Microsoft’s cloud offerings.

A logical view on a virtual datacentre services architecture

This content is 6 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A couple of years ago, I wrote a post about a logical view of an End-User Computing (EUC) architecture (which provides a platform for Modern Workplace). It’s served me well and the model continues to be developed (although the changes are subtle so it’s not really worth writing a new post for the 2019 version).

Building on the original EUC/Modern Workplace framework, I started to think what it might look like for datacentre services – and this is something I came up with last year that’s starting to take shape.

Just as for the EUC model, I’ve tried to step up a level from the technology – to get back to the logical building blocks of the solution so that I can apply them according to a specific client’s requirements. I know that it’s far from complete – just look at an Azure or AWS feature list and you can come up with many more classifications for cloud services – but I think it provides the basics and a starting point for a conversation:

Logical view of a virtual datacentre environment

Starting at the bottom left of the diagram, I’ll describe each of the main blocks in turn:

  • Whether hosted on-premises, co-located or making use of public cloud capabilities, Connectivity is a key consideration for datacentre services. This element of the solution includes the WAN connectivity between sites, site-to-site VPN connections to secure access to the datacentre, Internet breakout and network security at the endpoints – specifically the firewalls and other network security appliances in the datacentre.
  • Whilst many of the solution building blocks (SBBs) in the virtual datacentre services architecture are equally applicable for co-located or on-premises datacentres, there are some specific Cloud Considerations. Firstly, cloud solutions must be designed for failure – i.e. to design out any elements that may lead to non-availability of services (or at least to fail within agreed service levels). Depending on the organisation(s) consuming the services, there may also be considerations around data location. Finally, and most significantly, the cloud provider(s) must practice trustworthy computing and, ideally, will conform to the UK National Cyber Security Centre (NCSC)’s 14 cloud security principles (or equivalent).
  • Just as for the EUC/Modern Workplace architecture, Identity and Access is key to the provision of virtual datacentre services. A directory service is at the heart of the solution, combined with a model for limiting the scope of access to resources. Together with Role Based Access Control (RBAC), this allows for fine-grained access permissions to be defined. Some form of remote access is required – both to access services running in the datacentre and for management purposes. Meanwhile, identity integration is concerned with integrating the datacentre directory service with existing (on-premises) identity solutions and providing SSO for applications, both in the virtual datacentre and elsewhere in the cloud (i.e. SaaS applications).
  • Data Protection takes place throughout the solution – but key considerations include intrusion detection and endpoint security. Just as for end-user devices, endpoint security covers such aspects as firewalls, anti-virus/malware protection and encryption of data at rest.
  • In the centre of the diagram, the Fabric is based on the US National Institute of Standards and Technology (NIST)’s established definition of essential characteristics for cloud computing.
  • The NIST guidance referred to above also defines three service models for cloud computing: Infrastructure as a Service (IaaS); Platform as a Service (PaaS) and Software as a Service (SaaS).
  • In the case of IaaS, there are considerations around the choice of Operating System. Supported operating systems will depend on the cloud service provider.
  • Many cloud service providers will also provide one or more Marketplaces with both first and third-party (ISV) products ranging from firewalls and security appliances to pre-configured application servers.
  • Application Services are the real reason that the virtual datacentre services exist, and applications may be web, mobile or API-based. There may also be traditional hosted server applications – especially where IaaS is in use.
  • The whole stack is wrapped with a suite of Management Tools. These exist to ensure that the cloud services are effectively managed in line with expected practices and cover all of the operational tasks that would be expected for any datacentre including: licensing; resource management; billing; HA and disaster recovery/business continuity; backup and recovery; configuration management; software updates; automation; management policies and monitoring/alerting.

If you have feedback – for example, a glaring hole or suggestions for changes, please feel free to leave a comment below.

Microsoft Ignite | The Tour: London Recap

This content is 6 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of the most valuable personal development activities in my early career was a trip to the Microsoft TechEd conference in Amsterdam. I learned a lot – not just technically but about making the most of events to gather information, make new industry contacts, and generally top up my knowledge. Indeed, even as a relatively junior consultant, I found that dipping into multiple topics for an hour or so gave me a really good grounding to discover more (or just enough to know something about the topic) – far more so than an instructor-led training course.

Over the years, I attended further “TechEd”s in Amsterdam, Barcelona and Berlin. I fought off the “oh Mark’s on another jolly” comments by sharing information – incidentally, conference attendance is no “jolly” – there may be drinks and even parties but those are after long days of serious mental cramming, often on top of broken sleep in a cheap hotel miles from the conference centre.

Microsoft TechEd is no more. Over the years, as the budgets were cut, the standard of the conference dropped and in the UK we had a local event called Future Decoded. I attended several of these – and it was at Future Decoded that I discovered risual – where I’ve been working for almost four years now.

Now, Future Decoded has also fallen by the wayside and Microsoft has focused on taking its principal technical conference – Microsoft Ignite – on tour, delivering global content locally.

So, a few weeks ago, I found myself at the ExCeL conference centre in London’s Docklands, looking forward to a couple of days at “Microsoft Ignite | The Tour: London”.

Conference format

Just as at TechEd and Future Decoded (in the days before I had to use my time between keynotes on stand duty!), the event was broken up into tracks, with sessions lasting around an hour. Because that was an hour of content (and Microsoft event talks are often scheduled as an hour, plus 15 minutes of Q&A), it was pretty intense, and opportunities to ask questions were generally limited to trying to grab the speaker after their talk, or at the “Ask the Experts” stands in the main hall.

One difference to Microsoft conferences I’ve previously attended was the lack of “level 400” sessions: every session I saw was level 100-300 (mostly 200/300). That’s fine – that’s the level of content I would expect but there may be some who are looking for more detail. If it’s detail you’re after then Ignite doesn’t seem to be the place.

Also, I noticed that Day 2 had fewer delegates and lacked some of the “hype” from Day 1: whereas the Day 1 welcome talk was over-subscribed, the Day 2 equivalent was almost empty and light on content (not even giving airtime to the conference sponsors). Nevertheless, it was easy to get around the venue (apart from a couple of pinch points).

Personal highlights

I managed to cover 11 topics over two days (plus a fair amount of networking). The track format of the event was intended to let a delegate follow a complete learning path but, as someone who’s a generalist (that’s what Architects have to be), I spread myself around to cover:

  • Dealing with a massive onset of data ingestion (Jeramiah Dooley/@jdooley_clt).
  • Enterprise network connectivity in a cloud-first world (Paul Collinge/@pcollingemsft).
  • Building a world without passwords.
  • Discovering Azure Tooling and Utilities (Simona Cotin/@simona_cotin).
  • Selecting the right data storage strategy for your cloud application (Jeramiah Dooley/@jdooley_clt).
  • Governance in Azure (Sam Cogan/@samcogan).
  • Planning and implementing hybrid network connectivity (Thomas Maurer/@ThomasMaurer).
  • Transform device management with Windows Autopilot, Intune and OneDrive (Michael Niehaus/@mniehaus and Mizanur Rahman).
  • Maintaining your hybrid environment (Niel Peterson/@nepeters).
  • Windows Server 2019 Deep Dive (Jeff Woolsey/@wsv_guy).
  • Consolidating infrastructure with the Azure Kubernetes Service (Erik St Martin/@erikstmartin).

In the past, I’d have written a blog post for each topic. I was going to say that I simply don’t have the time to do that these days but, by the time I’d finished writing this post, I thought maybe I could have split it up a bit more! Regardless, here are some snippets of information from my time at Microsoft Ignite | The Tour: London. There’s more information in the slide decks – which are available for download, along with the content for the many sessions I didn’t attend.

Data ingestion

Ingesting data can be broken into:

  • Real-time ingestion.
  • Real-time analysis (see trends as they happen – and make changes to create a competitive differentiator).
  • Producing actions as patterns emerge.
  • Automating reactions in external services.
  • Making data consumable (in whatever form people need to use it).

Azure has many services to assist with this – take a look at IoT Hub, Azure Event Hubs, Azure Databricks and more.

Enterprise network connectivity for the cloud

Cloud traffic is increasing whilst traffic that remains internal to the corporate network is in decline. Traditional management approaches are no longer fit for purpose.

Office applications use multiple persistent connections – this causes challenges for proxy servers which generally degrade the Office 365 user experience. Remediation is possible, with:

  • Differentiated traffic – follow Microsoft advice to manage known endpoints, including the Office 365 IP address and URL web service.
  • Let Microsoft route traffic (data is in a region, not a place). Use DNS resolution to egress connections close to the user (a list of all Microsoft peering locations is available). Optimise the route length and avoid hairpins.
  • Assess network security using application-level security, reducing IP ranges and ports and evaluating the service to see if some activities can be performed in Office 365, rather than at the network edge (e.g. DLP, AV scanning).

For Azure:

  • Azure ExpressRoute is a connection to the edge of the Microsoft global backbone (not to a datacentre). It offers 2 lines for resilience and two peering types at the gateway – private and public (Microsoft) peering.
  • Azure Virtual WAN can be used to build a hub for a region and to connect sites.
  • Replace branch office routers with software-defined (SDWAN) devices and break out where appropriate.
Microsoft global network

Passwordless authentication

Basically, there are three options:

  • Windows Hello.
  • Microsoft Authenticator.
  • FIDO2 Keys.

Azure tooling and utilities

Useful resources include:

Selecting data storage for a cloud application

What to use? It depends! Classify data by:

  • Type of data:
    • Structured (fits into a table)
    • Semi-structured (may fit in a table but may also use outside metadata, external tables, etc.)
    • Unstructured (documents, images, videos, etc.)
  • Properties of the data:
    • Volume (how much)
    • Velocity (change rate)
    • Variety (sources, types, etc.)
Item               Type              Volume   Velocity   Variety
Product catalogue  Semi-structured   High     Low        Low
Product photos     Unstructured      High     Low        Low
Sales data         Semi-structured   Medium   High       High

How to match data to storage:

  • Storage-driven: build apps on what you have.
  • Cloud-driven: deploy to the storage that makes sense.
  • Function-driven: build what you need; storage comes with it.

Governance in Azure

It’s important to understand what’s running in an Azure subscription – consider cost, security and compliance:

  • Review (and set a baseline):
    • Tools include: Resource Graph; Cost Management; Security Center; Secure Score.
  • Organise (housekeeping to create a subscription hierarchy, classify subscriptions and resources, and apply access rights consistently):
    • Tools include: Management Groups; Tags; RBAC.
  • Audit:
    • Make changes to implement governance without impacting people/work. Develop policies, apply budgets and audit the impact of the policies.
    • Tools include: Cost Management; Azure Policy.
  • Enforce:
    • Change policies to enforcement, add resolution actions and enforce budgets.
    • Consider what will happen in the event of non-compliance.
    • Tools include: Azure Policy; Cost Management; Azure Blueprints (see the sketch after this list).
  • (Loop back to review)
    • Have we achieved what we wanted to?
    • Understand what is being spent and why.
    • Know that only approved resources are deployed.
    • Be sure of adhering to security practices.
    • Opportunities for further improvement.
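
To make that loop a little more concrete, here’s a minimal sketch of the organise/audit tooling using the Az PowerShell module – the resource group, tag names and policy choice are hypothetical:

```powershell
# Organise: tag a resource group for cost allocation
$rg = Get-AzResourceGroup -Name 'rg-production'
New-AzTag -ResourceId $rg.ResourceId -Tag @{ CostCentre = 'CC1234'; Owner = 'Mark' }

# Audit: assign the built-in "Require a tag on resources" policy definition
# (object shapes vary slightly between Az.Resources versions)
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Require a tag on resources' }
New-AzPolicyAssignment -Name 'require-costcentre-tag' `
    -PolicyDefinition $definition `
    -Scope $rg.ResourceId `
    -PolicyParameterObject @{ tagName = 'CostCentre' }
```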

Planning and implementing hybrid network connectivity

Moving to the cloud allows for fast deployment but planning is just as important as it ever was. Meanwhile, startups can be cloud-only but most established organisations have some legacy and need to keep some workloads on-premises, with secure and reliable hybrid communication.

Considerations include:

  • Extension of the internal protected network:
    • Should workloads in Azure only be accessible from the Internal network?
    • Are Azure-hosted workloads restricted from accessing the Internet?
    • Should Azure have a single entry and egress point?
    • Can the connection traverse the public Internet (compliance/regulation)?
  • IP addressing:
    • Existing addresses on-premises; public IP addresses.
    • Namespaces and name resolution.
  • Multiple regions:
    • Where are the users (multiple on-premises sites); where are the workloads (multiple Azure regions); how will connectivity work (should each site have its own connectivity)?
  • Azure virtual networks:
    • Form an isolated boundary with secure communications.
    • Azure-assigned IP addresses (no need for a DHCP server).
    • Segmented with subnets.
    • Network Security Groups (NSGs) create boundaries around subnets.
  • Connectivity:
    • Site to site (S2S) VPNs at up to 1Gbps
      • Encrypted traffic over the public Internet to the GatewaySubnet in Azure, which hosts VPN Gateway VMs.
      • 99.9% SLA on the Gateway in Azure (not the connection).
      • Don’t deploy production workloads on the GatewaySubnet; /26, /27 or /28 subnets recommended; don’t apply NSGs to the GatewaySubnet – i.e. let Azure manage it (see the sketch after this list).
    • Dedicated connections (Azure ExpressRoute): private connection at up to 10Gbps to Azure with:
      • Private peering (to access Azure).
      • Microsoft peering (for Office 365, Dynamics 365 and Azure public IPs).
      • 99.9% SLA on the entire connection.
    • Other connectivity services:
      • Azure ExpressRoute Direct: a 100Gbps direct connection to Azure.
      • Azure ExpressRoute Global Reach: using the Microsoft network to connect multiple local on-premises locations.
      • Azure Virtual WAN: branch to branch and branch to Azure connectivity with software-defined networks.
  • Hybrid networking technologies: in practice, a choice between the S2S VPN, ExpressRoute and Virtual WAN options above, based on bandwidth, SLA and compliance requirements – a minimal VPN gateway sketch follows below.
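
As a hedged illustration of the GatewaySubnet guidance above, here's a minimal sketch using the Az PowerShell module – the resource names and address ranges are invented for the example:

```powershell
# A minimal sketch: a VNet with a /27 GatewaySubnet and a route-based
# VPN gateway. Names and address ranges are illustrative.
$rg = 'rg-hybrid'
$location = 'uksouth'

$gwSubnet  = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.0.255.0/27'
$appSubnet = New-AzVirtualNetworkSubnetConfig -Name 'snet-app' -AddressPrefix '10.0.1.0/24'

$vnet = New-AzVirtualNetwork -Name 'vnet-hub' -ResourceGroupName $rg `
    -Location $location -AddressPrefix '10.0.0.0/16' -Subnet $gwSubnet, $appSubnet

# Public IP for the gateway, then the gateway itself (provisioning is slow)
$pip = New-AzPublicIpAddress -Name 'pip-vpngw' -ResourceGroupName $rg `
    -Location $location -AllocationMethod Dynamic

$ipConfig = New-AzVirtualNetworkGatewayIpConfig -Name 'gwipconfig' `
    -SubnetId ($vnet.Subnets | Where-Object Name -eq 'GatewaySubnet').Id `
    -PublicIpAddressId $pip.Id

New-AzVirtualNetworkGateway -Name 'vpngw' -ResourceGroupName $rg -Location $location `
    -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
```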

Modern Device Management (Autopilot, Intune and OneDrive)

The old way of managing PC builds:

  1. Build an image with customisations and drivers
  2. Deploy to a new computer, overwriting what was on it
  3. Expensive and time-consuming – and the device already has a perfectly good OS

Instead, how about:

  1. Unbox PC
  2. Transform with minimal user interaction
  3. Device is ready for productive use

The transformation is:

  • Take OEM-optimised Windows 10:
    • Windows 10 Pro and drivers.
    • Clean OS.
  • Plus software, settings, updates, features, user data (with OneDrive for Business).
  • Ready for productive use.

The goal is to reduce the overall cost of deploying devices. Ship to a user with half a page of instructions…

Windows Autopilot overview

Autopilot deployment is cloud-driven and will eventually be centralised through Intune:

  1. Register device:
    • From OEM or Channel (manufacturer, model and serial number).
    • Automatically (existing Intune-managed devices).
    • Manually, using a PowerShell script to generate a CSV file with serial number and hardware hash, which is then uploaded to the Intune portal (see the sketch after this list).
  2. Assign Autopilot profile:
    • Use Azure AD Groups to assign/target.
    • The profile includes settings such as deployment mode, BitLocker encryption, device naming, out of box experience (OOBE).
    • An Azure AD device object is created for each imported Autopilot device.
  3. Deploy:
    • Needs Azure AD Premium P1/P2
    • Scenarios include:
      • User-driven with Azure AD:
        • Boot to OOBE, choose language, locale, keyboard and provide credentials.
        • The device is joined to Azure AD, enrolled to Intune and policies are applied.
        • User signs on and user-assigned items from Intune policy are applied.
        • Once the desktop loads, everything is present (including file links in OneDrive) – time depends on the software being pushed.
      • Self-deploying (e.g. kiosk, digital signage):
        • No credentials required; device authenticates with Azure AD using TPM 2.0.
      • User-driven with hybrid Azure AD join:
        • Requires Offline Domain Join Connector to create AD DS computer account.
        • Device connected to the corporate network (in order to access AD DS), registered with Autopilot, then as before.
        • Sign on to Azure AD and then to AD DS during deployment. If they use the same UPN then it makes things simple for users!
      • Autopilot for existing devices (Windows 7 to 10 upgrades):
        • Back up data in advance (e.g. with OneDrive)
        • Deploy generic Windows 10.
        • Run Autopilot user-driven mode (hardware hashes can’t be harvested on Windows 7, so a JSON config file is placed in the image – the offline equivalent of a profile. Intune will ignore the unknown device and Autopilot will use the file instead; after Windows 10 is deployed, Intune will notice a PC in the group and apply the profile, so it will work if the PC is reset in future).
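
For the manual registration path in step 1, the script usually used is Get-WindowsAutopilotInfo from the PowerShell Gallery – a minimal sketch (the output file name is illustrative):

```powershell
# Run from an elevated PowerShell prompt on the device being registered.
Install-Script -Name Get-WindowsAutopilotInfo -Force

# Writes serial number and hardware hash to a CSV for upload to Intune
Get-WindowsAutopilotInfo -OutputFile .\AutopilotHWID.csv
```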

Autopilot roadmap (1903) includes:

  • “White glove” pre-provisioning for end users: QR code to track, print welcome letter and shipping label!
  • Enrolment status page (ESP) improvements.
  • Cortana voiceover disabled on OOBE.
  • Self-updating Autopilot (update Autopilot without waiting to update Windows).

Maintaining your hybrid environment

Common requirements in an IaaS environment include policy-based configuration, a single management and monitoring solution, and auto-remediation.

Azure Automation allows configuration and inventory; monitoring and insights; and response and automation. The Azure Portal provides a single pane of glass for hybrid management (Windows or Linux; any cloud or on-premises).

For configuration and state management, use Azure Automation State Configuration (built on PowerShell Desired State Configuration).
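
As a minimal sketch of what such a configuration looks like (the IIS role is an illustrative assumption; in practice the compiled output is imported into Azure Automation State Configuration and assigned to nodes):

```powershell
# A minimal DSC sketch: ensure IIS is installed on a node.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compiling produces a .mof file that State Configuration can distribute
WebServerBaseline -OutputPath .\WebServerBaseline
```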

Inventory can be managed with Log Analytics extensions for Windows or Linux, and an Azure Monitoring Agent is available for on-premises servers or other clouds. Inventory is not instant though – it can take 3-10 minutes for Log Analytics to ingest the data. Changes can be visualised (for state tracking purposes) in the Azure Portal.

Azure Monitor and Log Analytics can be used for data-driven insights, unified monitoring and workflow integration.

Responding to alerts can be achieved with Azure Automation Runbooks, which store scripts in Azure and run them in Azure. Scripts can use PowerShell or Python, so both Windows and Linux are supported. A webhook can be triggered with an HTTP POST request (see the sketch below). A Hybrid Runbook Worker can be used to run on-premises or in another cloud.
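
For example (the webhook URL is a placeholder for the one generated when the webhook is created, and the body parameters are invented for illustration):

```powershell
# Trigger an Azure Automation runbook via its webhook with an HTTP POST.
$webhookUrl = 'https://<region>.webhook.azure-automation.net/webhooks?token=...'

$body = @{ VMName = 'vm01'; Action = 'restart' } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $webhookUrl -Body $body -ContentType 'application/json'
```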

It’s possible to use the Azure VM agent to run a command on a VM from the Azure portal without logging in to the VM!
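
The same capability is exposed in PowerShell as Run Command – a minimal sketch (resource names and the script path are illustrative):

```powershell
# Run a script inside the guest OS via the VM agent - no RDP/SSH needed.
Invoke-AzVMRunCommand -ResourceGroupName 'rg-hybrid' -VMName 'vm01' `
    -CommandId 'RunPowerShellScript' -ScriptPath '.\Get-Uptime.ps1'
```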

Windows Server 2019

Windows Server strategy starts with Azure. Windows Server 2019 is focused on:

  • Hybrid:
    • Backup/connect/replicate VMs.
    • Storage Migration Service to migrate unstructured data into Azure IaaS or another on-premises location (from 2003+ to 2016/19).
      1. Inventory (interrogate storage, network security, SMB shares and data).
      2. Transfer (pairings of source and destination), including ACLs, users and groups. Details are logged in a CSV file.
      3. Cutover (make the new server look like the old one – same name and IP address). Validate before cutover – ensure everything will be OK. Read-only process (except change of name and IP at the end for the old server).
    • Azure File Sync: centralise file storage in Azure and transform existing file servers into hot caches of data.
    • Azure Network Adapter to connect servers directly to Azure networks (see above).
  • Hyper-converged infrastructure (HCI):
    • The server market is still growing and is increasingly SSD-based.
    • Traditional rack looked like SAN, storage fabric, hypervisors, appliances (e.g. load balancer) and top of rack Ethernet switches.
    • Now we use standard x86 servers with local drives and software-defined everything. Manage with Admin Center in Windows Server (see below).
    • Windows Server now has support for persistent memory: DIMM-based; still there after a power-cycle.
    • The Windows Server Software Defined (WSSD) programme is the Microsoft approach to software-defined infrastructure.
  • Security: shielded VMs for Linux (VM as a black box, even for an administrator); integrated Windows Defender ATP; Exploit Guard; System Guard Runtime.
  • Application innovation: semi-annual channel updates are designed for containers. Windows Server 2019 is the latest LTSC release, so it includes the 1709/1803 additions:
    • Enable developers and IT pros to create cloud-native apps and modernise traditional apps using containers and microservices.
    • Linux containers on a Windows host.
    • Service Fabric and Kubernetes for container orchestration.
    • Windows Subsystem for Linux.
    • Optimised images for Server Core and Nano Server.

Windows Admin Center is core to the future of Windows Server management and, because it’s based on remote management, servers can be Server Core or full installations – even containers (logs and console). Download from http://aka.ms/WACDownload.

  • 50MB download, with no need for a server. Runs in a browser and is included in the Windows/Windows Server licence.
  • Runs on a layer of PowerShell. Use the >_ icon to see the raw PowerShell used by Admin Center (copy and paste to use elsewhere).
  • Extensible platform.

What’s next?

  • More cloud integration
  • Update cadence is:
    • Insider builds every 2 weeks.
    • Semi-annual channel every 6 months (specifically for containers):
      • 1709/1803/1809/19xx.
    • Long-term servicing channel
      • Every 2-3 years.
      • 2016, 2019 (in September 2018), etc.

Windows Server 2008 and 2008 R2 reach the end of support in January 2020, but customers can move Windows Server 2008/2008 R2 servers to Azure and get three years of Extended Security Updates for free (on-premises, those updates are chargeable).

Further reading: What’s New in Windows Server 2019.

Containers/Azure Kubernetes Service

Containers:

  • Are fully-packaged applications that use a standard image format for better resource isolation and utilisation.
  • Are ready to deploy via an API call.
  • Are not virtual machines (at least not Linux containers).
  • Do not use hardware virtualisation.
  • Offer no hard security boundary (for Linux; Windows containers can optionally use Hyper-V isolation).
  • Can be more cost-effective/reliable.
  • Have no GUI.

Kubernetes is:

  • An open source system for auto-deployment, scaling and management of containerised apps.
  • A container orchestrator, managing scheduling; affinity/anti-affinity; health monitoring; failover; scaling; networking; and service discovery.
  • Modular and pluggable.
  • Self-healing.
  • Designed by Google based on a system they use to run billions of containers per week.
  • Described in “Phippy goes to the zoo”.

Azure container offers include:

  • Azure Container Instances (ACI): containers on demand (Linux or Windows) with no need to provision VMs or clusters; per-second billing; integration with other Azure services; a public IP; persistent storage.
  • Azure App Service for Linux: a fully-managed PaaS for containers including workflows and advanced features for web applications.
  • Azure Kubernetes Service (AKS): a managed Kubernetes offering.
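
As a hedged sketch of how little is involved in standing up a managed cluster with AKS (the resource group, cluster name and node count are illustrative; requires the Az.Aks module):

```powershell
# Create a resource group and a managed AKS cluster, then fetch credentials.
New-AzResourceGroup -Name 'rg-aks' -Location 'uksouth'

New-AzAksCluster -ResourceGroupName 'rg-aks' -Name 'aks-demo' -NodeCount 2

# Merges credentials into ~/.kube/config so kubectl can manage the cluster
Import-AzAksCredential -ResourceGroupName 'rg-aks' -Name 'aks-demo'
```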

Wrap-up

So, there you have it. An extremely long blog post with some highlights from my attendance at Microsoft Ignite | The Tour: London. It’s taken a while to write up so I hope the notes are useful to someone else!

UK Government Protective Marking and the Microsoft Cloud


I recently heard a consultant from another Microsoft partner talking about storing “IL3” information in Azure. That rang alarm bells with me, because Impact Levels (ILs) haven’t been a “thing” for UK Government data since April 2014. For the record, here’s the official guidance on the UK Government data security classifications, along with a video explaining why the system was changed.


So, what does that mean for storing data in Azure, Dynamics 365 and Office 365? Basically, information classified OFFICIAL can be stored in the Microsoft Cloud – for more information, refer to the Microsoft Trust Center. And, because OFFICIAL-SENSITIVE is not another classification (it’s merely highlighting information where additional care may be needed), that’s fine too.

I’ve worked with many UK Government organisations (local/regional and central) and most are looking to the cloud as a means to reduce costs and improve services. The fact that more than 90% of public-sector data is classified OFFICIAL (indeed, that’s the default for anything in Government) is no reason to avoid using the cloud.