Microsoft infrastructure architecture considerations: part 3 (controlling network access)

Continuing the series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, based on the MCS Talks: Enterprise Infrastructure series, in this post, I’ll look at some of the considerations for controlling access to the network.

Although network access control (NAC) has been around for a few years now, Microsoft’s network access protection (NAP) is new in Windows Server 2008 (previous quarantine controls were limited to VPN connections).

It’s important to understand that NAC/NAP are not security solutions but are concerned with network health – assessing an endpoint, comparing its state with a defined policy, and then restricting access for non-compliant devices until they have been remediated (i.e. brought back into compliance with the policy).
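
To make that concrete, here is a minimal sketch in Python (purely illustrative – these are not the NAP APIs and all of the names are made up) of that assess-and-enforce decision:

```python
# Illustrative only: a toy model of the assess/enforce/remediate decision -
# compare an endpoint's reported state with a defined health policy and
# restrict non-compliant devices until they are remediated. None of these
# names come from the real NAP APIs.
from dataclasses import dataclass


@dataclass
class EndpointHealth:
    firewall_enabled: bool
    antivirus_up_to_date: bool
    patches_up_to_date: bool


# The "defined policy": every item listed here must be true for full access
HEALTH_POLICY = ("firewall_enabled", "antivirus_up_to_date", "patches_up_to_date")


def is_compliant(endpoint: EndpointHealth) -> bool:
    """An endpoint is healthy only if it meets every requirement in the policy."""
    return all(getattr(endpoint, requirement) for requirement in HEALTH_POLICY)


def network_access(endpoint: EndpointHealth) -> str:
    """Healthy clients get full access; unhealthy ones are isolated until remediated."""
    return "full access" if is_compliant(endpoint) else "restricted (remediation network)"


visiting_laptop = EndpointHealth(firewall_enabled=True,
                                 antivirus_up_to_date=False,
                                 patches_up_to_date=True)
print(network_access(visiting_laptop))  # restricted until the antivirus is brought up to date
```

In other words, the decision is about health rather than identity – authentication and authorisation still happen elsewhere.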

The real question when deciding whether to implement NAC/NAP is whether or not non-compliance represents a business problem.

Assuming that NAP is to be implemented, different policies may be required for different groups of users – for example, internal staff, contractors and visitors – and each of these might require a different level of enforcement. Where a policy is to be applied, the enforcement options are:

  • DHCP – easy to implement but also easy to avoid by using a static IP address. It’s also necessary to consider the health check frequency as it relates to the DHCP lease renewal time (see the sketch after this list).
  • VPN – more secure, but relies on the Windows Server 2008 RRAS VPN, so an existing third-party VPN solution may need to be replaced. In any case, full VPN access is counter to industry trends as alternative solutions are increasingly used.
  • 802.1x – requires a complex design to support all types of network user, and not all switches support dynamic VLANs.
  • IPSec – the recommended solution – built into Windows, works with any switch, router or access point, provides strong authentication and (optionally) encryption. In addition, unhealthy clients are truly isolated (i.e. not just placed in a VLAN with other clients, where they could potentially affect, or be affected by, other machines). The downside is that NAP enforcement with IPSec requires computers to be domain-joined (so will not help with visitors’ or contractors’ PCs) and is fairly complex from an operational perspective, requiring implementation of the health registration authority (HRA) role and a PKI solution.
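
Picking up the DHCP point from the list above: the client’s health is typically only re-assessed when it renews its lease, so the lease length effectively sets the health check frequency. A rough sketch, assuming re-evaluation at the usual T1 renewal point (half of the lease time):

```python
# Back-of-an-envelope sketch of the DHCP consideration above: with DHCP
# enforcement, health is only re-evaluated when the client renews its lease,
# so the lease length effectively sets the health check frequency. Assumes
# renewal at the usual T1 point (50% of the lease) - a simplification.
from datetime import timedelta


def worst_case_recheck_interval(lease: timedelta) -> timedelta:
    """Longest a client might go between health checks, assuming renewal at T1."""
    return lease / 2


for lease_days in (8, 1):
    lease = timedelta(days=lease_days)
    print(f"{lease_days}-day lease -> health re-checked roughly every "
          f"{worst_case_recheck_interval(lease)}")
# An 8-day lease (a common default) leaves a window of around 4 days in which a
# client could drift out of compliance unnoticed; shortening the lease tightens
# the window at the cost of more DHCP traffic.
```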

In the next post in this series, I’ll take a look at some of the architectural considerations for using virtualisation technologies within the infrastructure.

Microsoft infrastructure architecture considerations: part 2 (remote offices)

Continuing from my earlier post, which set the scene for a series of posts on the architectural considerations for designing a predominantly-Microsoft IT infrastructure, in this post I’ll look at some of the considerations for remote offices.

Geographically dispersed organisations face a number of challenges in supporting remote offices, including: WAN performance/reliability; provisioning new services/applications/servers; management; remote user support; user experience; data security; space; and cost.

One approach that can help with some (though not all) of these concerns is to place a domain controller (DC) in each remote location, but until recently this has been problematic for two reasons: it increases the overall number of servers (it’s not advisable to co-locate other services on a domain controller, because administration can’t be delegated to a local administrator and the number of Domain Admins should be kept to a minimum); and it presents a security risk (physical access to the domain controller makes a potential hacker’s job so much simpler). For these reasons, Microsoft introduced read-only domain controllers (RODCs) in Windows Server 2008.

There are still some considerations as to whether this is the appropriate solution though. Benefits include:

  • Administrative role separation.
  • Faster logon times (improved access to data).
  • Isolated corruption area.
  • Improved security.

whilst other considerations and potential impacts include:

  • The need for a schema update.
  • Careful RODC placement.
  • Impact on directory-enabled applications.
  • Possibility of site topology design changes.

Regardless of whether a remote office DC is deployed (either using the RODC capabilities or as a full DC), server sprawl (through the introduction of branch office servers for a variety of purposes) can be combatted with the concept of a branch “appliance” – not in the true sense of dedicated hardware running an operating system and application heavily customised to meet the needs of a specific service, but by applying appliance principles to server design and running multiple workloads in a manner that allows for self-management and healing.

The first step is to virtualise the workloads. Hyper-V is built into Windows Server 2008 and the licensing model supports virtualisation at no additional cost. Using the server core installation option reduces the management burden on the appliance (the physical host), with a smaller attack surface and less patching. Multiple workloads may be consolidated onto a single physical host (increasing utilisation and removing end-of-life hardware) but there are some downsides too:

  • There’s an additional server to manage (the parent/host partition) and child/guest partitions will still require management but tools like System Center Virtual Machine Manager (SCVMM) can assist (particularly when combined with other System Center products).
  • A good business continuity plan is required – the branch office “appliance” becomes a single point of failure and it’s important to minimise the impact of this.
  • IT staff skills need to be updated to manage server core and virtualisation technologies.

So, what about the workloads on the branch office “appliance”? First up is the domain controller role (RODC or full DC) and this can be run as a virtual machine or as an additional role on the host. Which is “best” is entirely down to preference – running the DC alongside Hyper-V on the physical hardware means there is one less virtual machine to manage and operate (multiplied by the number of remote sites) but running it in a VM allows the DC to be “sandboxed”. One important consideration is licensing – if Windows Server 2008 standard edition is in use (which includes one virtual operating system environment, rather than enterprise edition’s four, or datacenter edition’s unlimited virtualisation rights) then running the DC on the host saves a license – and there is still some administrative role separation as the DC and virtualisation host will probably be managed centrally, with a local administrator taking some responsibility for the other workloads (such as file services).
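
To put some rough numbers on that licensing point (using the virtual instance allowances quoted above – one for standard edition, four for enterprise, unlimited for datacenter – and glossing over the finer detail of the licence terms), a quick sketch in Python:

```python
# Rough sketch of the licensing arithmetic described above. Allowances are the
# virtual operating system environments quoted in the post for Windows Server
# 2008 (standard: 1, enterprise: 4, datacenter: unlimited); the real licence
# terms have more nuance, so treat this as illustrative only.
import math

VIRTUAL_INSTANCES = {"standard": 1, "enterprise": 4, "datacenter": math.inf}


def licences_needed(edition: str, vm_count: int) -> int:
    """Licences of a given edition needed to cover vm_count VMs on one host."""
    allowance = VIRTUAL_INSTANCES[edition]
    return 1 if allowance == math.inf else max(1, math.ceil(vm_count / allowance))


# Branch "appliance" running a DC and a file server:
print(licences_needed("standard", vm_count=2))  # DC in a VM  -> 2 standard licences
print(licences_needed("standard", vm_count=1))  # DC on host  -> 1 standard licence
```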

That leads on to a common workload – file services. A local file server offers a good user experience but is often difficult to back up and manage. One solution is to implement DFS-R in a hub-and-spoke arrangement and to keep the backup responsibility in the data centre. If the remote file server fails, then replication can be used to restore from a central server. Of course, DFS-R is not always ideal for replicating large volumes of data; however, the DFS arrangement allows users to view local and remote data as though it were physically stored in a single location, and there have been a number of improvements in Windows Server 2008 DFS-R (cf. Windows Server 2003 R2). In addition, SMB 2.0 is less “chatty” than previous implementations, allowing for performance benefits when using a Windows Vista client with a Windows Server 2008 server.
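
As a way of picturing that hub-and-spoke arrangement (a conceptual model only – this is not how DFS-R is configured, and the names below are made up), the hub holds the replica that gets backed up and a failed branch is simply re-seeded from it:

```python
# Conceptual model only (not how DFS-R is actually configured): in a
# hub-and-spoke arrangement each branch replicates its share to the hub in the
# data centre, the hub is the only replica that gets backed up, and a failed
# branch server is simply re-seeded from the hub.
hub: dict[str, bytes] = {}                      # central (data centre) replica
branches: dict[str, dict[str, bytes]] = {"branch-01": {}, "branch-02": {}}


def replicate_to_hub(branch: str) -> None:
    """Push a branch replica's content into the hub replica."""
    hub.update(branches[branch])


def reseed_branch(branch: str) -> None:
    """Rebuild a failed branch file server by replicating back from the hub."""
    branches[branch] = dict(hub)


branches["branch-01"]["reports/q2.xlsx"] = b"..."
replicate_to_hub("branch-01")
branches["branch-01"].clear()                   # simulate a failed branch server
reseed_branch("branch-01")
assert "reports/q2.xlsx" in branches["branch-01"]
```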

Using these methods, it should be possible to avoid remote file server backups and remote DCs should not need to be backed up either (Active Directory is a multi-master replicated database so it has an inherent disaster recovery capability). All that’s required is some method of rebuilding a failed physical server – and the options there will depend on the available bandwidth. My personal preference is to use BITS to ensure that the remote server always holds a copy of the latest build image on a separate disk drive and then to use this to rebuild a failed server with the minimum of administrator intervention or WAN traffic.
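
The attraction of BITS here is that transfers are background, throttled and resumable. As a conceptual sketch of the resumable part only (this is not BITS itself – just a plain HTTP range-request download – and the URL and paths are placeholders):

```python
# Conceptual sketch only: BITS itself is a Windows service, but the idea it
# provides - resumable background transfer of a large image over a slow WAN -
# can be illustrated with an HTTP range-request download. The URL and path
# below are placeholders, and the server is assumed to honour Range headers.
import os
import urllib.request

CHUNK = 1024 * 1024  # copy 1 MB at a time so an interrupted transfer loses little


def resume_download(url: str, destination: str) -> None:
    """Fetch url into destination, resuming from whatever is already on disk."""
    start = os.path.getsize(destination) if os.path.exists(destination) else 0
    request = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    with urllib.request.urlopen(request) as response, open(destination, "ab") as image:
        while chunk := response.read(CHUNK):
            image.write(chunk)


# e.g. keep the latest build image cached on the branch server's second disk:
# resume_download("https://deployment.example.com/images/branch-build.wim",
#                 r"D:\Images\branch-build.wim")
```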

In the next post in this series, I’ll take a look at some of the considerations for using network access protection to manage devices that are not compliant with the organisation’s security policies.

Microsoft infrastructure architecture considerations: part 1 (introduction)

Last week, I highlighted the MCS Talks: Enterprise Architecture series of webcasts that Microsoft is running to share the field experience of Microsoft Consulting Services (MCS) in designing and architecting Microsoft-based infrastructure solutions – and yesterday’s post picked up on a key message about software as a service/software plus services from the infrastructure futures section of session 1: infrastructure architecture.

Over the coming days and weeks, I’ll highlight some of the key messages from the rest of the first session, looking at some of the architectural considerations around:

  • Remote offices.
  • Controlling network access.
  • Virtualisation.
  • Security.
  • High availability.
  • Data centre consolidation.

Whilst much of the information will be from the MCS Talks, I’ll also include some additional information where relevant, but, before diving into the details, it’s worth noting that products rarely solve problems. Sure enough, buying a software tool may fix one problem, but it generally adds to the complexity of the infrastructure and in that way does not get to the root issue. Infrastructure optimisation (even a self-assessment) can help to move IT conversations to a business level, as well as allowing the individual tasks that are required to meet the overall objectives to be prioritised.

Even though the overall strategy needs to be based on business considerations, there are still architectural considerations to take into account when designing the technical solution and, even though this series of blog posts refers to Microsoft products, there is no reason (architecturally) why alternatives should not be considered.

So, you want to be an infrastructure architect?

Over the years I’ve had various jobs which have been basically the same role but with different job titles. Officially, I’ve been a Consultant, Senior Consultant, Project Manager, Senior Technical Consultant, Senior Customer Solution Architect (which would have been a Principal Consultant in the same organisation a few years earlier but management swapped the “architect” word for a drop in implied seniority) but if you ask me what I am, I tend to say I’m an infrastructure architect.

Issue 15 of The [MSDN] Architecture Journal included an article about becoming an architect in a systems integrator. I read this with interest, as that’s basically what I do for a living (believe me, I enjoy writing about technology but it will be a long while before I can give up my day job)!

The Architecture Journal tends to have an application focus (which is only natural – after all, it is produced by a developer-focused group in a software company) and I don’t know much about application development, but I do know how to put together IT solutions using commercial off-the-shelf (COTS) applications. I tend to work mostly with Microsoft products but I’ve made it my business to learn about the alternatives (which is why I’m a VMware Certified Professional and a Red Hat Certified Technician). Even so, I’m stuck at a crossroads. I’m passionate about technology – I really like to use it to solve problems – but I work for a managed services company (an outsourcer in common parlance) where we deliver solutions in the form of services and bespoke technology solutions are not encouraged. It seems that, if I want to progress in my current organisation, I’m under more and more pressure to leave my technical acumen behind and concentrate on some of the other architect competencies.

Architect competencies

I’m passionate about technology – I really like to use it to solve problems

I understand that IT architecture is about far more than just technology. That’s why I gained a project management qualification (since lapsed, but the skills are still there) and, over the years, I’ve developed some of the softer skills too – some of which can be learnt (like listening and communication skills), others of which only come with experience. I think it’s important to be able to dive into the technology when required (which, incidentally, I find helps to earn the respect of your team and then assists with the leadership part of the architect’s role) but just as important to be able to rise up and take a holistic view of the overall solution. I know that I’m not alone in my belief that many of the architects joining our company are too detached from technology to truly understand what it can do to address customers’ business problems.

Architect roles
OK, so I’m a solutions architect who can still geek out when the need arises. I’m still a way off becoming an enterprise architect – but do I really need to leave behind my technical skills (after having already dumped specialist knowledge in favour of breadth)? Surely there is a role for senior technologists? Or have I hit a glass ceiling, at just 36 years of age?

I’m hoping not – and that’s why I’m interested in the series of webcasts that Microsoft Consulting Services are running over the next few months – MCS Talks: Enterprise Architecture. Session 1 looked at infrastructure architecture (a recorded version of the first session is available) and future sessions will examine:

  • Core infrastructure.
  • Messaging.
  • Security and PKI.
  • Identity and access management.
  • Desktop deployment.
  • Configuration management.
  • Operations management.
  • SharePoint.
  • Application virtualisation.

As should be expected from sessions delivered by Microsoft consultants, the content is Microsoft product-heavy (even the session titles give that much away); however, the intention of the series is to connect business challenges with technology solutions, and the Microsoft products mentioned could be replaced with alternatives from other vendors. More details on the series can be found on the MCS Talks blog.

This might not appeal to true enterprise architects but for those of us who work in the solution or technical architecture space, this looks like it may well be worth an hour or so of our time each fortnight for the rest of the year. At the very least it should help to increase breadth of knowledge around Microsoft infrastructure products.

And, of course, I’ll be spouting forth with my own edited highlights on this blog.