Exchange and Outlook resource roundup

I mentioned that I’ve been attending an Exchange Server 2013 training course, when I wrote earlier in the week about creating dynamic distribution groups using custom directory attributes.

Our course instructor, Annette Gill, has curated a number of resources and links on her website for Exchange (2007, 2010 and 2013), Windows Server 2008, and System Center Configuration Manager (SCCM 2007 and 2012).  Of particular interest to me right now are the Exchange Server 2013 Resources and Exchange Server 2013 Miscellaneous Links.

I also found something else of note during one of the labs. I don’t really use Public Folders and I was struggling to get one to display in the Outlook client after I’d created it and given access to a user.  Outlook MVP Diane Poremsky’s reply to a TechNet Forum post gave me the answer – Ctrl+6 refreshed the folder list (I already had it open) and the Public Folder came into view.  Incidentally, a full list of Outlook keyboard shortcuts can be found on the Microsoft Office website (that list is for 2010, but should work for 2013 too).
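For anyone who prefers the shell, the same folder can be created and permissioned from the Exchange Management Shell – a minimal sketch, with example folder and user names:

```powershell
# Minimal sketch (example names): create a public folder in Exchange 2013 and
# grant a user access, using the Exchange Management Shell.
New-PublicFolder -Name "Projects" -Path "\"

# Give a user rights to see and post to the folder
Add-PublicFolderClientPermission -Identity "\Projects" -User markw -AccessRights PublishingEditor
```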

There are more “tips and tricks for Windows, Office and whatever” on Diane’s website.


Finally, one of the Microsoft consultants currently working with my team is one of the co-authors of the Microsoft Exchange Server 2013: Design, Deploy and Deliver an Enterprise Messaging Solution book, which is due to be published next month. Exchange 2013 texts are a bit thin on the ground at the moment but this book has been written by some of the best authorities I know on the topic – especially when it comes to designing, deploying and delivering solutions.

Microsoft’s New Efficiency comes to Wembley

As I opened the curtains in my hotel room this morning, I was greeted with a very wet and grey view of North London. Wembley Stadium looks far less impressive on a day like today than it did in the night-time shot that graced the front page of Bing here in the UK yesterday but still it’s hard not to be in awe of this place.

I’ve been to a couple of events at the new Wembley Stadium before: last year’s Google Developer Day (sadly there was no UK event this year) and the recent U2 concert. This time I’m here courtesy of Microsoft for their UK Technical Launch event, where the main products on show are Windows 7, Windows Server 2008 R2 and Exchange Server 2010 in what Microsoft is calling “The New Efficiency”.

I was twittering throughout the event (@markwilsonit) but this post highlights some of the key messages from the main sessions today. I’ve skipped over the details of the standard technical product demonstrations as I hope to cover those in future posts:

  • There are more than 7100 applications tested and working on Windows 7 today and there should be more than 8000 certified by the time that the product hits general availability.
  • Windows 7 was beta tested by more than 8 million people, with 700,000 in the UK.
  • The Windows Optimised Desktop is represented by a layered model of products including:
    • Management infrastructure: System Center and Forefront for deployment, application management, PC monitoring and security management.
    • Server infrastructure: Windows Server 2008 R2 for Active Directory, Group Policy, network services and server-based client infrastructure.
    • Client infrastructure: Windows 7 and the Microsoft Desktop Optimisation Pack for the Asset Inventory Service, AppLocker and BitLocker.
  • Windows is easier than ever to deploy, using freely available tools such as the Microsoft Deployment Toolkit (MDT) 2010 to engineer, service and deploy images – whether they are thin, thick or a hybrid.
  • System Center Configuration Manager (SCCM) 2007 provides a deployment engine for zero-touch installations, hooking into standard tools such as MDT, the User State Migration Tool (USMT), WinPE, etc.
  • PowerShell is becoming central to Windows IT administration (see the brief example after this list).
  • Windows Server 2008 R2’s new brokering capability presents new opportunities for server-based computing.
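On the PowerShell point, here’s a small illustration (not from the event) of the sort of task that becomes scriptable out of the box in Windows Server 2008 R2 – the ServerManager module exposes role and feature management as cmdlets:

```powershell
# Small illustration: role and feature administration from PowerShell on
# Windows Server 2008 R2, using the built-in ServerManager module.
Import-Module ServerManager

# List the roles and features that are currently installed
Get-WindowsFeature | Where-Object { $_.Installed } | Select-Object Name, DisplayName

# Add the Hyper-V role (the server needs a restart to complete installation)
Add-WindowsFeature Hyper-V -Restart
```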

For me, the highlight of the event was Ward Ralston’s appearance for the closing keynote. Ward used to implement Microsoft infrastructure but these days he is a Product Manager for Windows Server 2008 R2 (I’ve spoken to him previously, although today was my first chance to meet him face to face). Whilst some delegates were critical of the customer interviews, his New Efficiency presentation nicely summarised the day as he explained that:

  • Many organisations are struggling with decreasing IT budgets.
  • Meanwhile IT departments are trying to meet the demands of: IT consumerisation (as a generation that has grown up with computers enters the workforce); security and compliance (the last few years have brought a huge surge in compliance regulations – and the global “economic reset” is sure to bring more); and an ever-more mobile and distributed workforce (where we need to ensure confidentiality and non-repudiation wherever the users are).
  • IT departments have to cut costs – but that’s only part of the solution as productivity and innovation are just as important to increase efficiency.
  • In short: (productivity + innovation)/cost = doing more with less.
  • Managing more with less is about: reducing IT complexity; improving control and reducing helpdesk costs; increasing automation; and consolidating server resources.
  • Doing more is about: enabling new services, efficiently connecting people to information, optimising business processes, and allowing employees to securely work from anywhere.
  • Microsoft’s New Efficiency is where cost savings, productivity and innovation come together.

It would be easy to criticise today’s event – for instance, to pick out certain presenters who could have benefited from the use of Windows Magnifier – but I know just how much work went into making today’s event run as smoothly as it did and, on balance, I felt it was a good day. Those who have never been to a Microsoft launch may have expected something more, but I’ve been to more of these events than I care to remember and this was exactly what I expected: lots of marketing rhetoric delivered via PowerPoint; some demos, most of which worked; and, I think, something for everyone to take away and consider as their organisation looks at meeting the challenges that we all face in our day jobs – even if that was just the free copy of Windows 7 Ultimate Edition… (full disclosure: I accepted this offer and it in no way influences the contents of this blog post).

I’ll be back at Wembley again tomorrow, this time for the Microsoft Partner Network 2009 – and expect to see more Windows 7 and Server 2008 R2 related posts on this site over the coming weeks and months.

Microsoft Virtualization: the R2 wave

The fourth Microsoft Virtualisation User Group (MVUG) meeting took place last night and Microsoft’s Matt McSpirit presented a session on the R2 wave of virtualisation products. I’ve written previously about some of the things to expect in Windows Server 2008 R2 but Matt’s presentation was specifically related to virtualisation and there are some cool things to look forward to.

Hyper-V in Windows Server 2008 R2

At last night’s event, Matt asked the UK user group what they saw as the main limitations in the original Hyper-V release; the top four were:

  • USB device support
  • Dynamic memory management (ballooning)
  • Live Migration
  • 1 VM per storage LUN

Hyper-V R2 does not address all of these (regardless of feedback, the product group is still unconvinced about the need for USB device support… and dynamic memory was pulled from the beta – it’s unclear whether it will make it back in before release) but live migration is in and Windows finally gets a clustered file system in the 2008 R2 release.

So, starting out with clustering – a few points to note:

  • For the easiest support path, look for cluster solutions on the Windows Server Catalog that have been validated by Microsoft’s Failover Cluster Configuration Program (FCCP).
  • FCCP solutions are recommended by Microsoft but are not strictly required for support, as long as all the components (i.e. server and SAN) are certified for Windows Server 2008. A failover clustering validation report will still be required though – FCCP simply provides another level of confidence.
  • When looking at cluster storage, fibre channel (FC) and iSCSI are the dominant SAN technologies. With 10Gbps Ethernet coming onstream, iSCSI looked ready to race ahead, with the advantage of using standard Ethernet hardware (which is why Dell bought EqualLogic and HP bought LeftHand Networks), but then Fibre Channel over Ethernet arrived, which is potentially even faster (as outlined in a recent RunAs Radio podcast).

With a failover cluster, Hyper-V has always been able to offer high availability for unplanned outages – just as VMware do with their HA product (although Windows Server 2008 Enterprise or Datacenter Editions were required – Standard Edition does not include failover clustering).

For planned outages, quick migration offered the ability to pause a virtual machine and move it to another Hyper-V host but there was one significant downside of this. Because Microsoft didn’t have a clustered file system, each storage LUN could only be owned by one cluster node at a time (a “shared nothing” model). If several VMs were on the same LUN, all of them needed to be managed as a group so that they could be paused, the connectivity failed over, and then restarted, which slowed down transfer times and limited flexibility. The recommendation was for 1 LUN per VM and this doesn’t scale well with tens, hundreds, or thousands of virtual machines although it does offer one advantage as there is no contention for disk access. Third party clustered file system solutions are available for Windows (e.g. Sanbolic Melio FS) but, as Rakesh Malhotra explains on his blog, these products have their limitations too.

Windows Server 2008 R2 Hyper-V can now provide live migration for planned failovers – so Microsoft finally has an alternative to VMware VMotion (at no additional cost). This is made possible because the new cluster shared volumes (CSV) feature, together with IO fault tolerance (dynamic IO), overcomes the limitations of the shared nothing model and allows up to 256TB per LUN, running on NTFS with no need for third-party products. The VM is still stored on a shared storage volume and, at the time of failover, memory is scanned for dirty pages whilst the VM is still running on the source cluster node. Using an iterative process of scanning memory for dirty pages and transferring them to the target node (over a dedicated network link), the memory contents are copied across until so few remain that the last pages can be sent and control passed to the target node in a fraction of a second, with no discernible downtime (including ARP table updates to maintain network connectivity).
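Once the cluster is built, a live migration can be triggered from PowerShell as well as from the Failover Cluster Manager console – a minimal sketch using the FailoverClusters module in Windows Server 2008 R2 (the VM group and node names are examples):

```powershell
# Minimal sketch (example names): live migrate a clustered virtual machine to
# another node using the FailoverClusters module in Windows Server 2008 R2.
Import-Module FailoverClusters

# Move the "VM1" virtual machine group to another node in the cluster
Move-ClusterVirtualMachineRole -Name "VM1" -Node "HV-NODE2"
```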

Allowing multiple cluster nodes to access a shared LUN is as simple as marking the LUN as a CSV in the Failover Clustering MMC snap-in. Each node has a consistent namespace for LUNs, so as many VMs as required may be stored on a CSV (although all nodes must use the same drive letter for the system drive – e.g. C:). Each CSV appears as an NTFS mount point, e.g. C:\ClusterStorage\Volume1, and even though the volume is only mounted on one node, distributed file access is co-ordinated through another node so that the VM can perform direct IO. Dynamic IO ensures that, if the SAN (or Ethernet) connection fails, IO is re-routed accordingly and, if the owning node fails, volume ownership is redirected accordingly. CSV is based on two assumptions (that data read/write requests far outnumber metadata access/modification requests; and that concurrent multi-node cached access to files is not needed for files such as VHDs) and is optimised for Hyper-V.
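The same job can be scripted – a minimal sketch with the FailoverClusters PowerShell module, assuming CSV has already been enabled on the cluster (the disk name is an example):

```powershell
# Minimal sketch (example disk name): add a clustered disk to Cluster Shared
# Volumes; assumes CSV has already been enabled for the cluster.
Import-Module FailoverClusters

# Add an existing clustered disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# The volume then appears under C:\ClusterStorage on every node
Get-ClusterSharedVolume
```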

At a technical level, CSVs:

  • Are implemented as a file system mini-filter driver, pinning files to prevent block allocation movement and tracking the logical-to-physical mapping information on a per-file basis, using this to perform direct reads/writes.
  • Enable all nodes to perform high performance direct reads/writes to all clustered storage and read/write IO performance to a volume is the same from any node.
  • Use SMB v2 connections for all namespace and file metadata operations (e.g. to create, open, delete or extend a file).
  • Need:
    • No special hardware requirements.
    • No special application requirements.
    • No file type restrictions.
    • No directory structure or depth limitations.
    • No special agents or additional installations.
    • No proprietary file system (using the well established NTFS).

Live migration and clustered storage are major improvements but other new features for Hyper-V R2 include:

  • 32 logical processor (core) support, up from 16 at RTM and 24 with a hotfix (to support six-core CPUs), so that Hyper-V will now support up to four 8-core CPUs (and I would expect this to be increased as multi-core CPUs continue to develop).
  • Core parking to allow more intelligent use of processor cores – putting them into a low power suspend state if the workload allows (configurable via group policy).
  • The ability to hot add/remove storage, so that additional VHDs or pass-through disks may be assigned to running VMs if the guest OS supports the Hyper-V SCSI controller (which should cover most recent operating systems, but not Windows XP 32-bit or Windows 2000).
  • Second Level Address Translation (SLAT) to make use of new virtualisation technologies from Intel (Intel VT extended page tables) and AMD (AMD-V nested paging) – more details on these technologies can be found in Johan De Gelas’s hardware virtualisation article at AnandTech.
  • Boot from VHD – allowing virtual hard disks to be deployed to virtual or physical machines.
  • Network improvements (jumbo frames to allow larger Ethernet frames and TCP offload for on-NIC TCP/IP processing).

Hyper-V Server

So that covers the Hyper-V role in Windows Server 2008 R2, but what about its baby brother – Hyper-V Server 2008 R2? The good news is that Hyper-V Server 2008 R2 will have the same capabilities as Hyper-V in Windows Server 2008 R2 Enterprise Edition (previously it was based on Standard Edition), allowing access to up to 1TB of memory, 32 logical cores, hot addition/removal of storage, and failover clustering (with cluster shared volumes and live migration). It’s also free, and requires no dedicated management product, although it does need to be managed using the RSAT tools for Windows Server 2008 R2 or Windows 7 (Microsoft’s advice is never to manage an up-level operating system from a down-level client).

With all that for free, why would you buy Windows Server 2008 R2 as a virtualisation host? The answer is that Hyper-V Server does not include licenses for guest operating systems as Windows Server 2008 Standard, Enterprise and Datacenter Editions do; it is intended for running non-Windows workloads in a heterogeneous datacentre standardised on Microsoft virtualisation technologies.

Management

The final piece of the puzzle is management:

There are a couple of caveats to note: the SCVMM 2008 R2 features mentioned are in the beta – more can be expected at final release; and, based on previous experience when Hyper-V RTMed, there may be some incompatibilities between the beta of SCVMM and the release candidate of Windows Server Hyper-V R2 (expected to ship soon).

SCVMM 2008 R2 is not a free upgrade – but most customers will have purchased it as part of the Server Management Suite Enterprise (SMSE) and so will benefit from the two years of software assurance included within the SMSE pricing model.

Wrap-up

That’s about it for the R2 wave of Microsoft Virtualization – for the datacentre at least – but there are a lot of improvements in the upcoming release. Sure, there are things that are missing (memory ballooning may not be a good idea for server consolidation but it will be needed for any kind of scalability with VDI – and using RDP as a workaround for USB device support doesn’t always cut it) and I’m sure there will be a lot of noise about how VMware can do more with vSphere but, as I’ve said previously, VMware costs more too – and I’d rather have most of the functionality at a much lower price point (unless one or more of those extra features will make a significant difference to the business case). Of course there are other factors too – like maturity in the market – but Hyper-V is not far off its first anniversary and, other than a couple of networking issues on guests (which were fixed), I’ve not heard anyone complaining about it.

I’ll write more about Windows 7 and Windows Server 2008 R2 virtualisation options (i.e. client and server) as soon as I can but, based on a page which briefly appeared on the Microsoft website, the release candidate is expected to ship next month and, after reading Paul Thurrott’s post about a forthcoming Windows 7 announcement, I have a theory (and that’s all it is right now) as to what a couple of the Windows 7 surprises may be…

Microsoft Virtualization: part 6 (management)

Today’s release of System Center Virtual Machine Manager 2008 is a perfect opportunity to continue my series of blog posts on Microsoft Virtualization technologies by highlighting the management components.

Microsoft view of virtualisation

System Center is at the heart of the Microsoft Virtualization portfolio and this is where Microsoft’s strength lies as management is absolutely critical to successful implementation of virtualisation technologies. Arguably, no other virtualisation vendor has such a complete management portfolio for all the different forms of virtualisation (although competitors may have additional products in certain niche areas) – and no-one else that I’m aware of is able to manage physical and virtual systems in the same tools and in the same view:

  • First up is System Center Configuration Manager (SCCM) 2007, providing patch management and deployment; operating system and application configuration management; and software upgrades.
  • System Center Virtual Machine Manager (SCVMM) provides virtual machine management, server consolidation and resource utilisation optimisation, as well as physical-to-virtual (P2V) and limited virtual-to-virtual (V2V) conversion (predictably, from VMware to Microsoft, but not back again).
  • System Center Operations Manager (SCOM) 2007 (due for a second release in the first quarter of 2009) provides the end-to-end service management; server and application health monitoring and management (regardless of whether the server is physical or virtual); and performance monitoring and analysis.
  • System Center Data Protection Manager (SCDPM) completes the picture, providing live host virtual machine backup with in-guest consistency and rapid recovery (basically, quiescing VMs, before taking a snapshot and restarting the VM whilst backup continues – in a similar manner to VMware Consolidated Backup but also with the ability to act as a traditional backup solution).

But hang on – isn’t that four products to license? Yes, but there are ways to do this in a very cost-effective manner – albeit requiring some knowledge of Microsoft’s licensing policies which can be very confusing at times, so I’ll have a go at explaining things…

From the client management license perspective, SCCM is part of the core CAL suite that is available to volume license customers (i.e. most enterprises who are looking at Microsoft Virtualization). In addition, the Enterprise CAL suite includes SCOM (and many other products).

Looking at server management, and quoting a post I wrote a few months ago about licensing System Center products:

The most cost-effective way to license multiple System Center products is generally through the purchase of a System Center server management suite licence:

Unlike SCVMM 2007 (which was only available as part of the SMSE), SCVMM 2008 is available as a standalone product, but it should be noted that, based on Microsoft’s example pricing, SCVMM 2008 (at $1304) is only marginally less expensive than the SMSE (at $1497) – both quoted prices include two years of software assurance – and, for reference, the lowest price for VMware Virtual Center Management Server (VCMS) on the VMware website this morning is $6044. Whilst the VCMS price is not a direct comparison, as it includes one year of Gold 12×5 support, it is considerably more expensive and offers less functionality.

It should be noted that the SMSE is virtualisation-technology-agnostic and grants unlimited virtualisation rights. By assigning an SMSE to the physical server, it can be:

  • Patched/updated (SCCM).
  • Monitored (SCOM).
  • Backed Up (SCDPM).
  • VMM host (SCVMM).
  • VMM server (SCVMM).

One of the advantages of using SCVMM and SCOM together is the performance and resource optimisation (PRO) functionality. Stefan Stranger has a good example of PRO in a blog post from earlier this year – basically, SCVMM uses the management pack framework in SCOM to detect issues with the underlying infrastructure and suggest appropriate actions for an administrator to take – for example, moving a virtual machine workload to another physical host (as demonstrated by Dell, integrating SCVMM with their hardware management tools at the Microsoft Management Summit earlier this year).
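The sort of action that PRO recommends can also be carried out manually from the VMM command shell – a rough sketch, assuming the SCVMM 2008 PowerShell snap-in that ships with the administrator console (the snap-in, server, VM and host names here are examples rather than taken from documentation):

```powershell
# Rough sketch (example names; assumes the SCVMM 2008 administrator console's
# PowerShell snap-in is installed): move a VM workload to a different host.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

Get-VMMServer -ComputerName "vmmserver" | Out-Null
$vm = Get-VM -Name "VM1"
$target = Get-VMHost -ComputerName "hv-node2"

# Move the virtual machine to the less heavily loaded host
Move-VM -VM $vm -VMHost $target
```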

I’ll end this post with a table which shows the relative feature sets of VMware Virtual Infrastructure Enterprise and the Windows Server 2008 Hyper-V/Server Management Suite Enterprise combination:

  • Bare-metal hypervisor – VMware: ESX/ESXi; Microsoft: Hyper-V
  • Centralised VM management – VMware: Virtual Center; Microsoft: SCVMM
  • Manage ESX/ESXi and Hyper-V – VMware: no equivalent; Microsoft: SCVMM
  • VM backup – VMware: VCB; Microsoft: SCDPM
  • High availability/failover – VMware: Virtual Center; Microsoft: Windows Server Clustering
  • VM migration – VMware: VMotion; Microsoft: Quick Migration
  • Offline VM patching – VMware: Update Manager; Microsoft: VMM (with the Offline Virtual Machine Servicing Tool)
  • Guest operating system patching/configuration management – VMware: no equivalent; Microsoft: SCCM
  • End-to-end operating system monitoring – VMware: no equivalent; Microsoft: SCOM
  • Intelligent placement – VMware: DRS; Microsoft: SCVMM
  • Integrated physical and virtual management – VMware: no equivalent; Microsoft: SMSE

This table is based on one from Microsoft and, in fairness, there are a few features that VMware would cite that Microsoft doesn’t yet have (memory management and live migration are the usual ones). It’s true to say that VMware is also making acquisitions and developing products for additional virtualisation scenarios (and has a new version of Virtual Infrastructure on the way – VI4) but the features and functionality in this table are the ones that the majority of organisations will look for today. VMware has some great products (read my post from the recent VMware Virtualization Forum) – but if I was an IT Manager looking to virtualise my infrastructure, then I’d be thinking hard about whether I really should be spending all that money on the VMware solution, when I could use the same hardware with less expensive software from Microsoft – and manage my virtual estate using the same tools (and processes) that I use for the physical infrastructure (reducing the overall management cost). VMware may have maturity on their side but, when push comes to shove, the total cost of ownership is going to be a major consideration in any technology selection.

Office 2007 Customisation and Deployment using BDD 2007

Over the years, the various methods available for customising and deploying Microsoft Office have advanced considerably and so here are a few notes on customising and deploying the 2007 Microsoft Office System using the Microsoft Solution Accelerator for Business Desktop Deployment (BDD) 2007:

  • The first step is to create a network distribution point.  This is easily achieved using the BDD 2007 workbench (Distribution Share, Applications, New), with the additional advantage of integrating the Office 2007 files into the BDD distribution folder structure (e.g. D:\Distribution\Applications\Office2007).
  • The BDD workbench will also enable the application (by default) and allow the entry of additional information on the General tab.  The Dependencies tab can be used to control the order of application deployment within the BDD logic.  There is also a tab for Office Products which can be used to configure the deployment of Microsoft Office.
  • To save disk space, additional Office System components may be added to an existing distribution point.  Multiple languages may be integrated in the same manner – i.e. by adding the files to the application within BDD Workbench.
  • Office 2007 is always installed via setup.exe rather than with individual Windows Installer (.MSI) packages.
  • The Office Customization Tool (OCT) is used to create or edit Windows Installer patch (.MSP) files to customise Office installations:
    • It may be launched from the command line using setup.exe /admin or within the BDD Workbench using the Office Customization Tool button in the application properties.  The OCT replaces the Custom Installation Wizard and Custom Maintenance Wizard tools in previous Office versions.
    • The OCT language interface will match the regional setting for the application (rather than the operating system language).
    • OCT allows the specification of multiple network sources (in case the primary is not available). By default, all necessary files are copied locally first and setup is launched from this cache – the local installation source (LIS).  If the installation is modified later then setup will use the LIS before attempting to locate network sources.
    • By default, .MSP files are saved in the Updates folder on the application distribution point.  Setup scans this location when it runs and will retrieve application settings from .MSP files.  If multiple .MSP files exist then the first one (in alphabetical order) will be used.
    • When editing .MSP files with the OCT, those areas that have changed from the defaults are highlighted in bold.
  • Microsoft Office updates and service packs can be copied to the Updates folder on the application distribution point for automatic application during installation.
  • Settings may be specified within a config.xml file (via the application properties in BDD Workbench) or using a .MSP file:
    • Sensitive values such as product keys should be stored within an .MSP file rather than as clear text in config.xml.
    • The command line to use a config.xml file is setup.exe /config applicationsubfolder\config.xml (there’s a short scripted example after this list).
    • Settings in config.xml will take precedence over duplicate settings in a .MSP file.
  • Office setup writes a log file to %temp% on the destination machine.  The filename for this log will be prefixed with setupexe.
  • Microsoft Systems Management Server (SMS) 2003 can also be used to deploy Microsoft Office 2007:
    1. Create a package using the source files from the BDD distribution point.
    2. Create a program for Office 2007:
      • Within the package, create a new program and edit the properties to include the program name (e.g. Office 2007) and the appropriate command line (setup.exe /config applicationsubfolder\config.xml).
      • If the program is hidden (on the General tab) and the installation requires user input then it will never complete.  Similarly, if the “allow users to interact with this program” option (on the Requirements tab) is not selected then installation will fail, unless the package has been created for silent installation.
      • If users have local administrator rights on their workstations then the program may be configured to run with user rights; however, this is generally not desirable and the “run with administrative rights” run mode should normally be selected.
      • The Windows Installer tab can be used to define a .MSI file that is used when clients with the package installed make updates to the Office installation (e.g. install on first use or repair).
    3. Create a distribution point (within SMS – not to be confused with the BDD distribution point) and copy the package to the distribution point.
    4. Check the distribution process using the SMS report for the distribution status of a specific package.
    5. Define a collection to receive the package based on membership rules and specific resource attributes.
    6. Create an advertisement for the package/program and schedule accordingly.
    7. If clients are taking a long time to receive an advertisement, check the schedule and also try initiating both machine and user policy actions within the systems management applet in control panel (installed by the SMS client).
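As promised above, here’s a minimal sketch of what the deployment ends up running against the distribution point – just setup.exe with a customised config.xml (the share path is an example):

```powershell
# Minimal sketch (example share path): run Office 2007 setup from a network
# distribution point, applying the settings in a customised config.xml.
$source = "\\server\Distribution$\Applications\Office2007"

# Launch setup and wait for it to finish; /config points at the customised file
Start-Process -FilePath "$source\setup.exe" `
    -ArgumentList "/config $source\applicationsubfolder\config.xml" -Wait
```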

Readvertising failed packages with Microsoft SMS

A few weeks back, my colleague Barry Feist gave me a useful tip for when deploying software using Microsoft Systems Management Server (SMS). Barry doesn’t have his own blog, so here are the details.

Details of commands executed on the local machine by SMS are held at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History\packageid. It is not uncommon for there to be a failure within a distribution, so to rerun a failed installation, delete the key and re-advertise the package. According to the how to re-advertise a package post on MyITForum, Microsoft knowledge base article 257271 gives an alternative solution but Barry’s solution seems simpler to me.
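If you have a number of clients to fix, the same thing can be scripted – a minimal sketch in PowerShell, run on the client with administrative rights (the package ID is an example):

```powershell
# Minimal sketch (example package ID): remove the execution history for a
# failed SMS package so that a re-advertisement will run it again.
$packageId = "ABC00123"   # example value only - use the real package ID
$history = "HKLM:\SOFTWARE\Microsoft\SMS\Mobile Client\Software Distribution\Execution History"

# Find and delete the execution history key (or keys) for the package
Get-ChildItem -Path $history -Recurse |
    Where-Object { $_.PSChildName -eq $packageId } |
    Remove-Item -Recurse
```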

10,000 feet view of Microsoft Systems Management Server 2003

Until I started to look at the Microsoft Solution Accelerator for Business Desktop Deployment (Enterprise Edition), which makes use of the Microsoft Systems Management Server (SMS) 2003 Operating System Deployment Feature Pack, I had no experience of using SMS. At my BDD training, Thomas Lee gave a brief overview of SMS, which I have reproduced here for the benefit of anyone else who may find it useful.

SMS Overview

SMS relies on the presence of Microsoft SQL Server (not MSDE, or any other SQL server product, e.g. MySQL).

Each client has an agent installed (the SMS Advanced Client). This allows an administrator to view workstation activity and perform remote takeover operations. It also returns inventory information to the management server, which SMS uses to create collections (e.g. All Windows XP SP2 Workstations) that are stored in the SQL Server database.

Software to be distributed via SMS is packaged and placed on a distribution server. In order to distribute a package, an SMS administrator creates an advertisement; this is pushed to the SMS Advanced Client, which in turn pulls the package from the distribution server for installation.
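For what it’s worth, the Advanced Client can also be prodded from a script rather than waiting for its polling interval – a rough sketch using WMI (the schedule ID shown is the machine policy retrieval action):

```powershell
# Rough sketch: ask the SMS Advanced Client to retrieve machine policy now
# rather than waiting for its normal polling interval (run on the client).
Invoke-WmiMethod -Namespace "root\ccm" -Class SMS_Client `
    -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000021}"
```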

That’s SMS, in a nutshell.