IT Forum ’05 highlights: part 1

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Microsoft UK IT Forum Highlights
A few years back, I used to try to persuade my employer to send me to Microsoft TechEd Europe each year, on the basis that lots of 75-minute presentations on a variety of topics provided a better background for me than a few days of in-depth product training (I can build experience later as I actually use the technology). The last time I attended TechEd was back in 2001, by which time it had become more developer-focused and the IT Forum was being positioned as the infrastructure conference (replacing the Microsoft Exchange Conference). For the last couple of years, I haven’t been able to attend the IT Forum due to family commitments (first it clashed with the birth of my son and subsequently it’s been in conflict with his birthday, as it is again this year) but luckily, Microsoft UK has been re-presenting the highlights from IT Forum as free-of-charge TechNet events (spread over two days) and I’ve managed to take some time out to attend them.

Yesterday’s event covered a variety of topics. Unfortunately there was no concept of different tracks from which I could attend the most relevant/interesting sessions, so some of it went completely over my head. One of those topics was upgrading to SQL Server 2005, so apologies to the presenter – I was the guy nodding off on the front row.

In the next few paragraphs, I’ll highlight some of the key points from the day.

Upgrading to SQL Server 2005
Presented by Tony Rogerson, SQL Server MVP and UK SQL Server Community leader, this session gave useful information for those looking at upgrading from SQL Server 2000 (or earlier) to SQL Server 2005. I’ve blogged previously with a SQL Server 2005 overview, why SQL Server 2005 is such a significant new product and on the new management tools but the key points from Tony’s presentation were:

  • Upgrades (in-place upgrades) are supported, preserving user data and maintaining instance names in a largely automated fashion, as are side-by-side migrations (mostly manual, copying data from an old installation to a new and then decommissioning the old servers).
  • SQL Server versions prior to 7.0 cannot be migrated directly and SQL Server 7.0/2000 need to be updated to the latest service pack levels before they can be migrated. For SQL Server 2000 that is SP4, which might break some functionality for SP3A users, so the upgrade needs to be carefully planned.
  • The database engine (including subcomponents like the SQL Agent, tools, etc.), analysis services, reporting services and notification services can all be upgraded, and data transformation services can be migrated to integration services.
  • All product editions can be upgraded/migrated (32/64-bit, desktop, workgroup, personal, standard, developer or enterprise editions), as can all SQL Server 7.0/2000 released languages.
  • A smooth upgrade requires a good plan, breaking tasks into:
    • Pre-upgrade tasks.
    • Upgrade execution tasks.
    • Post-upgrade tasks (day 0, day 30, day 90).
    • Backout plan.
  • Microsoft provides the SQL Server 2005 Upgrade Advisor as a free download to analyse instances of SQL Server 7.0 and SQL Server 2000 in preparation for upgrading to SQL Server 2005. This can be used repeatedly until all likely issues have been resolved and the upgrade can go ahead.
  • Migration provides more granular control over the process than an upgrade would, and the presence of old and new installations side-by-side can aid with testing and verification; however it does require new hardware (although a major investment in a SQL Server upgrade would probably benefit from new hardware anyway) and applications will need to be redirected to the new instance. Because the legacy installation remains online, there is complete flexibility to fail back should things not go to plan.
  • Upgrades will be easier and faster for small systems and require no new hardware or application reconfiguration; however the database instances will remain offline during the upgrade and it’s not best practice to upgrade all components (e.g. analysis services cubes).
  • Upgrade tips and best practices include:
    • Reduce downtime by pre-installing setup pre-requisites (Microsoft .NET Framework 2.0, SQL Native Client and setup support files) – some of these are needed for the Upgrade Advisor anyway.
    • If planning a migration using the copy database wizard, place the database in single-user mode (to stop users from modifying the data during the upgrade) and make sure that no applications or services are trying to access the database (see the sketch after this list). Also, do not use read-only mode (this will result in an error) and note that the database cannot be renamed during the operation.
    • Be aware of the reduced attack surface of SQL Server 2005 – some services and features are disabled for new installations (secure by default) – the surface area configuration tools can be used to enable or disable features and services.
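
As referenced in the list, here is a minimal sketch of taking a database out of normal use before running the copy database wizard – this assumes the osql utility that ships with SQL Server 2000, and the server and database names are placeholders:

osql -E -S MYSQLSERVER -Q "ALTER DATABASE MyAppDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
rem ...run the copy database wizard, then restore normal access:
osql -E -S MYSQLSERVER -Q "ALTER DATABASE MyAppDB SET MULTI_USER"

The ROLLBACK IMMEDIATE clause disconnects any sessions still attached (rolling back their open transactions), which saves hunting down stray connections at the start of the maintenance window.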

Leveraging your Active Directory for perimeter defence
Presented by Richard Warren, an Internet and security training specialist, this session slightly disappointed me, as it failed to live up to the promise of its title. After spending way too much time labouring Microsoft’s usual points about a) how packet filtering alone is not enough and ISA Server adds application layer filtering and b) how ISA Server 2004 is much better and much easier to use than ISA Server 2000, Richard finally got down to some detail about how to use existing investments in AD and ISA Server to improve security (but I would have liked to have seen more real-world examples of exactly how to implement best practice). Having been quite harsh about the content, I should add that there were some interesting points in his presentation:

  • According to CERT, 95% of [computer security] breaches [were] avoidable with an alternative configuration.
  • According to Gartner Group, approximately 70% of all web attacks occur at the application layer.
  • Very few organisations are likely to deploy ISA Server as a first line of defence. Even though ISA Server 2004 is an extremely secure firewall, it is more common to position a normal layer 3 (packet filtering) firewall at the network edge and then use ISA Server behind this to provide application layer filtering on the remaining traffic.
  • Users who are frightened of IT don’t cause many problems. Users who think they understand computers cause most of the problems. Users who do know what they are doing are few and far between. (Users are a necessary evil for administrators).
  • Not all attacks are malicious and internal users must not be assumed to be “safe”.
  • ISA Server can be configured to write its logs to SQL Server for analysis (see the example query after this list).
  • Active Directory was designed for distributed security (domain logon/authentication and granting access to resources/authorisation) but it can also store and protect identities and plays a key role in Windows manageability (facilitating the management of network resources, the delegation of network security and enabling centralised policy control).
  • Using ISA Server to control access to sites (both internal and external), allows monitoring and logging of access by username. If you give users a choice of authenticated access or none at all, they’ll choose authenticated access. If transparent authentication is used with Active Directory credentials, users will never know that they needed a username and password to access a site (this requires the ISA Server to be a member of the domain or a trusted domain, such as a domain which only exists within the DMZ).
  • ISA Server’s firewall engine performs packet filtering and operates in kernel mode. The firewall service performs application layer filtering (extensible via published APIs) and operates in user mode.
  • SSL tunnelling provides a secure tunnel from a client to a server. SSL bridging involves installing the web server’s certificate on the ISA Server, terminating the client connection there and letting ISA server inspect the traffic and handle the ongoing request (e.g. with another SSL connection, or possibly using IPSec). Protocol bridging is similar, but involves ISA server accepting a connection using one protocol (e.g. HTTP) before connecting to the target server with another protocol (e.g. FTP).
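
Following on from the logging point in the list above, here is a hedged example of the kind of analysis that SQL Server logging makes possible – it assumes the WebProxyLog table and ClientUserName column created by ISA Server’s SQL logging setup scripts, plus placeholder server and database names, so check the schema on your own installation:

osql -E -S MYSQLSERVER -d ISALOGS -Q "SELECT ClientUserName, COUNT(*) AS requests FROM WebProxyLog GROUP BY ClientUserName ORDER BY requests DESC"

Because ISA Server logs by username when authenticated access is enforced, a query like this gives a quick view of the heaviest web users.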

Microsoft Windows Server 2003 Release 2 (R2) technical overview
Presented by Quality Training (Scotland)‘s Andy Malone, this session was another disappointment. Admittedly, a few months back, I was lucky enough to be present at an all-day R2 event, again hosted by Microsoft, but presented by John Craddock and Sally Storey of Kimberry Associates, who went into this in far more detail. Whilst Andy only had around an hour (and was at pains to point out that there was lots more to tell than he had time for), the presentation looked like Microsoft’s standard R2 marketing deck, with some simple demonstrations, poorly executed, and it seemed to me that (like many of the Microsoft Certified Trainers I’ve met) the presenter had only a passing knowledge of the subject – enough to present, but lacking real-world experience.

Key points were:

  • Windows Server 2003 R2 is a release update – approximately half way between Windows Server 2003 and the next Windows Server product (codenamed Longhorn).
  • In common with other recent Windows Server System releases, R2 is optimised for 64-bit platforms.
  • R2 is available in standard, enterprise and datacenter editions (no web edition) consisting of two CDs – the first containing Windows Server 2003 slipstreamed with SP1 and the second holding the additional R2 components. These components are focused around improvements in branch office scenarios, identity management and storage.
  • The new DFSR functionality can provide up to 50% WAN traffic reduction through improved DFS replication (using bandwidth throttling and remote differential compression, whereby only file changes are replicated), allowing centralised data copies to be maintained (avoiding the need for local backups, although one has to wonder how restoration might work over low-speed, high-latency WAN links). Management is improved with a new MMC 3.0 DFS Management console.
  • There is a 5MB limit on the size of the DFS namespace file, which equates to approximately 5000 folders for a domain namespace and 50,000 folders for a standalone namespace. Further details can be found in Microsoft’s DFS FAQ.
  • Print management is also improved with a new MMC 3.0 Print Management console, which will auto-discover printers on a subnet and also allows deployment of printer connections using group policy (this requires the use of a utility called pushprinterconnections.exe within a login script, as well as a schema update – a minimal login script example follows this list).
  • Identity and access management is improved with Active Directory federation services (ADFS), Active Directory application mode (ADAM – previously a separate download), WS-Management and Linux/Unix identity management (incorporating Services for Unix, which was previously a separate download).
  • For many organisations, storage management is a major problem with typical storage requirements estimated to be increasing by between 60% and 100% each year. The cost of managing this storage can be 10 times the cost of the disk hardware and Microsoft has improved the storage management functionality within Windows to try and ease the burden.
  • The file server resource manager (FSRM) is a new component to integrate capacity management, policy management and quota management, with quotas now set at folder level (rather than volume) and file screening to avoid storage of certain file types on the server (although the error message if a user tries to do this just warns of a permissions issue and is more likely to confuse users and increase the burden on administrators trying to resolve any resulting issues).
  • Storage manager for SANs allows Windows administrators to manage disk resources on a SAN (although not with the granularity that the SAN administrator would expect to have – I’ve not seen this demonstrated but believe it’s only down to a logical disk level).
  • In conclusion, Windows Server 2003 R2 builds on Windows Server 2003 with new functionality, but with no major changes so as to ensure a non-disruptive upgrade with complete application compatibility, and requiring no new client access licenses (CALs).
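
As promised in the print management point above, here is a minimal login script sketch for the group policy printer deployment mechanism – servername is a placeholder and pushprinterconnections.exe is assumed to have been copied to the NETLOGON share:

@echo off
rem Apply any printer connections deployed via group policy
rem (requires the R2 schema update to be in place).
\\servername\netlogon\pushprinterconnections.exe

The utility checks the GPOs that apply to the user or computer and silently adds the deployed printer connections, so it can safely run at every logon.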

Management pack melee: understanding MOM 2005 management packs
Finally, a fired up, knowledgeable presenter! Gordon McKenna, MOM MVP is clearly passionate about his subject and blasted through a whole load of detail on how Microsoft Operations Manager (MOM) uses management packs to monitor pretty much anything in a Windows environment (and even on other platforms, using third-party management packs). There was way too much information in his presentation to represent here, but Microsoft’s MOM 2005 for beginners website has loads of information including technical walkthroughs. Gordon did provide some additional information though which is unlikely to appear on a Microsoft website (as well as some that does):

  • MOM v3 is due for release towards the end of this year (I’ve blogged previously about some of the new functionality we might see in the next version of MOM). It will include a lightweight agent, making MOM more suitable for monitoring client computers as well as a Microsoft Office management pack. MOM v3 will also move from a server-centric paradigm to a service-centric health model in support of the dynamic systems initiative and will involve a complete re-write (if you’re going to buy MOM this year, make sure you also purchase software assurance).
  • There are a number of third-party management packs available for managing heterogeneous environments. The MOM management pack catalogue includes details.
  • The operations console notifier is a MOM 2005 resource kit utility which provides pop-up notification of new alerts (in a similar manner to Outlook 2003’s new mail notification).

A technical overview of Microsoft Virtual Server 2005
In the last session of the day, Microsoft UK’s James O’Neill presented a technical overview of Microsoft Virtual Server 2005. James is another knowledgeable presenter, but the presentation was an updated version of a session that John Howard ran a few months back. That didn’t stop it from being worthwhile – I’m glad I stayed to watch it as it included some useful new information:

  • Windows Server 2003 R2 Enterprise Edition changes the licensing model for virtual servers in three ways: firstly, by including 4 guest licences with every server host licence (a total of 5 copies of R2); secondly, by only requiring organisations to be licensed for the number of running virtual machines (currently even stored virtual machine images which are not in regular use each require a Windows licence); finally, in a move which is more of a clarification, server products which are normally licensed per-processor (e.g. SQL Server, BizTalk Server, ISA Server) only need to be licensed per virtual processor (as Virtual Server does not yet support SMP within the virtual environment).
  • The Datacenter edition of the next Windows Server version (codenamed Longhorn) will allow unlimited virtual guests to be run as part of its licence – effectively mainframe Windows.
  • Microsoft is licensing (or plans to licence) the virtual hard disk format, potentially allowing third parties to develop tools that allow .VHD files to be mounted as drives within Windows. There is a utility to do this currently, but it’s a Microsoft-internal tool (I’m hoping that it will be released soon in a resource kit).
  • As I reported previously, Microsoft is still planning a service pack for Virtual Server 2005 R2, which will go into beta this quarter and is expected to ship in the autumn of 2006, offering support for Intel virtualization technology (formerly codenamed Vanderpool) and equivalent technology from AMD (codenamed Pacifica), as well as performance improvements for non-Windows guest operating systems.

Overall, I was a little disappointed with yesterday’s event, although part 2 (scheduled for next week) looks to be more relevant to me with sessions on Exchange 12, the Windows Server 2003 security configuration wizard, Monad, Exchange Server 2003 mobility and a Windows Vista overview. Microsoft’s TechNet UK events are normally pretty good – maybe they are just a bit stretched for presenters right now. Let’s just hope that part 2 is better than part 1.

Checking how much power a USB device requires

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I have a number of USB-attached hard disks that I use for portable mass storage, backups, etc. Very occasionally, Windows XP will report that there has been a power surge on the USB port and that it has been shut down. This can happen when the total number of devices attached to a USB hub (internal or external) exceeds the total power available. I’ve always treated that as a minor annoyance (and as these disks have two USB connections and a Y-shaped cable, I can simply use two ports) but then a few days back I noticed something I’ve never seen before – the ability to view power details for (or, more precisely, the current drawn through) a USB root hub:

USB device power draw

As can be seen in the example above, my scanner is using the full 500mA available on its port, but there is still a port available which could potentially provide another 500mA. To view this information, open Device Manager from the Computer Management MMC snap-in and expand the Universal Serial Bus Controllers node. There will normally be a number of controllers listed, along with some devices and USB root hubs. Each USB root hub should include power details within its properties.
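
Incidentally, the same root hubs can be enumerated from the command line via WMI, although without the power figures – a quick sketch using the WMIC utility built into Windows XP (Win32_USBHub is the relevant class):

wmic path Win32_USBHub get Name,Description,DeviceID

That gives a quick count of how many hubs (and therefore 500mA power budgets) a machine actually exposes before another bus-powered disk is plugged in.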

My IEEE 1394 (FireWire/i.Link) controller doesn’t seem to offer the same facilities, presumably because it doesn’t have the same concept of a root hub.

Useful command for controlling Windows services

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in 2004, I blogged about some new commands in recent Windows releases and yesterday, I came across another one – sc.

More details may be found in the Microsoft Windows XP Professional product documentation or by entering sc /? in a Windows XP or Windows Server 2003 cmd shell.
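
To give a flavour of what sc can do, here are a few illustrative commands, using the print spooler service as a guinea pig – note the slightly unusual syntax, which requires a space after each option’s equals sign:

sc query type= service state= all
sc qc spooler
sc config spooler start= demand
sc stop spooler

The first command lists all services (running or stopped), qc dumps a service’s configuration, config changes its startup type and stop sends it a stop request – tasks which would otherwise mean a trip to the Services MMC snap-in.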

Windows on a Mac?

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Ever since Apple announced last year that they would switch to using Intel processors, the industry has been alive with speculation as to which Mac model will switch first. My view is a bit different – Apple make fantastic-looking PCs, and if they have Intel hardware I ought to be able to run a version of Windows on one. Or, perhaps I could run Mac OS X in a Virtual PC on Windows (probably not, as I guess it will look for an Apple trusted platform module)?

On the way to work today, I was listening to episode 36 of the This Week in Tech podcast (incidentally, one of my favourite podcasts – even if it is a little US-centric) which briefly discusses the possibility of a new emulator for Windows applications on a Mac (not like Wine for Linux, which is API-based – you will need a copy of Windows in order to make this work, in a similar manner to running Linux applications on Solaris using BrandZ), so maybe I really can have the best of both worlds.

All I need to know now is, with the industry finally starting the push to 64-bit technology, will the new Intel Macs use cheap 32-bit processors (an early report from ThinkSecret suggested 3.6GHz Pentium 4s), or some new 64-bit dual-core beast? With CES taking place this week (Intel has already made some major announcements about its brand, identity and technology direction) and MacWorld next week (surely there must be some news there about Intel Macs), maybe we’ll get an answer soon.

Beefing up IIS – Apache style

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I just stumbled across an article on beefing up IIS: 10 tips from a former Solaris admin. Instead of simply adding to the “which is best – IIS or Apache?” discussion, the article takes a look at how IIS administrators can learn a few lessons from their Unix colleagues. It’s a few years old now but still worth a read.

Watch out for long path names on an NTFS volume

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I came across an interesting issue earlier today. Somehow, on my NTFS-formatted external hard disk I had managed to create a file system structure which was too deep.

Whenever I tried to delete a particular folder tree, I received strange errors about files which couldn’t be deleted:

Error Deleting File or Folder

Cannot delete foldername: The file name you specified is not valid or too long. Specify a different file name.

I thought that was strange – after all, I’d managed to create the files in the first place. Then I found that if I drilled down to the files that would not delete, there was no right-click option to delete them. Finally, I found that some folders displayed the following error when I tried to access them:

Can’t access this folder.

Path is too long.

It turned out that the problem folders/files had path names in excess of 255 characters. By renaming some of the top-level folders to single-character names (thus reducing the length of the path), I was able to access the problem files and folders, including deleting the files that I wanted to remove.
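
Another approach, which avoids renaming anything, is to use the subst command to map a temporary drive letter part-way down the tree, shortening the effective path – a quick sketch with placeholder paths:

subst X: "D:\backup\some\very\deep\folder"
del "X:\even deeper\problem file.txt"
subst X: /d

Everything below the substituted folder comes back under the path length limit, and subst X: /d removes the temporary drive once the clean-up is complete.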

Windows management technologies

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

At the Best of the Microsoft Management Summit 2005 event a few weeks back, Vlad Joanavic gave an overview of some of the “free” Windows Management technologies that are available (in addition to the “paid” products under the System Center brand).

These basically break down into:

  • Windows Server Update Services (WSUS).
  • Windows Management Instrumentation (WMI) and WS-Management.
  • Group Policy Management Console (GPMC).
  • Microsoft Management Console (MMC) v3.0.
  • Windows Installer v3.1.
  • Microsoft Shell (MSH, codenamed Monad).

The rest of this post discusses each of these in turn.

WSUS

WSUS is Microsoft’s update management offering for corporate customers, effectively allowing customers to host a local copy of Microsoft Update and to manage update approval accordingly. Free to licensed users of Windows 2000 Server and Windows Server 2003 (with appropriate Windows Server/core client access licenses) it is a core component of Microsoft’s patch and update management roadmap.

Unlike its predecessor, Software Update Services (SUS), WSUS supports more than just Windows updates, and allows selective targeting of computers based on group membership and automatic approval of updates (if required). It also uses a database rather than flat file storage for its configuration data (storage of the actual updates is still file-based) and offers a much richer user experience. At the time of writing, WSUS supports 8 types of update for a number of products (with more to be added over time). WSUS is also localised to provide for international support and has multi-language user interface (MUI) support.

WSUS does not require a new client component to be installed as the automatic updates client within Windows XP is self-updating. Most client functionality is implemented via a Win32 service with an extensible architecture for MSI, update.exe and driver handling and automatic updates can also be controlled via group policy.
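
As an illustration of that last point, the group policy settings boil down to a handful of registry values – a sketch using reg.exe with http://wsusserver as a placeholder URL (in production these would normally be set via a group policy object rather than directly):

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d http://wsusserver /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d http://wsusserver /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f

WUServer points the automatic updates client at the WSUS server, WUStatusServer controls where client status is reported (usually the same server) and UseWUServer switches the client over from the public Microsoft Update service.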

WSUS servers use the background intelligent transfer service (BITS) to ensure that the network is utilised effectively during the transfer of updates. Microsoft recognises a number of WSUS deployment options:

  • Single server – for small organisations or simple networks.
  • Multiple servers – for a large organisation or a complex network, allowing a hierarchy of WSUS servers to be created.
  • Disconnected network (e.g. on a ship), whereby updates are downloaded to one WSUS server and then exported for transfer via removable media (e.g. DVD) to a disconnected WSUS server which validates the Microsoft certificates on the content and services clients on the remote network.

WMI and WS-Management

WMI is the Microsoft implementation of web-based enterprise management (WBEM)/the common information model (CIM), allowing access to over 600 WMI classes and 3000 properties. Provided as a standard Windows component since Windows 2000 (and downloadable for Windows NT 4.0), the number of WMI providers has grown from 15 in Windows NT to 29 in Windows 2000 and 80 in Windows Server 2003. WMI supports a variety of clients including the Windows Script Host (WSH), native C++ and managed code using any language supported by the Microsoft .NET Framework. It also supports command line operations (WMIC) and DCOM-based remoting.

The goal of WMI is to provide a single API for access to large volumes of system data. WMI providers expose data from content sources; this information is placed into a repository, and WMI consumers (e.g. applications and scripts) consume this data.
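
To give a feel for the consumer side, here are a couple of one-liners using the WMIC client mentioned above, with its built-in os and service aliases (which map onto the Win32_OperatingSystem and Win32_Service classes):

wmic os get Caption,Version,ServicePackMajorVersion
wmic service where "State='Running'" get Name,StartMode

In each case the alias resolves to a WMI class, the class’s provider fetches the live data and WMIC acts as the consumer – exactly the provider/repository/consumer flow described above.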

I previously blogged about web services (WS-*) and WS-Management is a joint effort to provide a WS-* protocol for interoperable management. Implemented as a web service, WS-Management is XML/SOAP-based and runs over HTTPS to access most existing WMI objects. WS-Management also allows for out of band access (i.e. when there is no operating system installed, or the operating system has crashed) to service processors (e.g. remote management hardware). In-band access provides a richer set of capabilities, specifically for software management.

The first version of WS-Management will ship as part of Windows Server 2003 R2, with access to hardware instrumentation, HTTPS access to Windows instrumentation and command line functionality (WSMAN).

GPMC

I’ve blogged previously about the GPMC but even though it has been available for a couple of years now, it seems that many administrators still do not use it. I’m not sure why (I guess it’s because it is a separate download), but GPMC represents a huge step forward in the management of group policies and I find the ability to create XML/HTML-based group policy object (GPO) reports a significant advantage in documenting group policy (much better than trying to capture it in a set of Excel spreadsheets).

Many of the GPMC tasks are scriptable (see the example after this list), including:

  • Creating/deleting/renaming GPOs.
  • Linking GPOs and WMI filters.
  • Delegation of:
    • Security on WMI filters.
    • GPO-related security on sites, domains and organizational units (OUs).
    • Creation rights for GPOs and WMI filters.
  • Generating reports of GPO settings and resultant set of policy (RSOP) data.
  • GPO backup/restoration/import/export/copy/paste/search.
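
As mentioned above, GPMC ships with a set of sample scripts (installed by default under %ProgramFiles%\GPMC\Scripts) which cover most of these tasks. A sketch of two of them follows – treat the arguments as illustrative and check the scripts’ built-in help for the exact parameters:

cd /d "%ProgramFiles%\GPMC\Scripts"
cscript ListAllGPOs.wsf
cscript BackupAllGPOs.wsf C:\GPO-Backups

The first lists every GPO in the domain; the second backs them all up to the named folder – a useful scheduled task to run alongside the XML/HTML reporting mentioned earlier.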

MMC v3.0

MMC v3.0 (previously known as MMC v2.1) is intended to offer a number of benefits:

  • More reliable (recognising the issues related to loading third party code such as MMC snap-ins into a core process) through improved detection and reporting of snap-in problems and an ability to isolate hung snap-ins from the console (new snap-ins only).
  • Improved usability with an asynchronous UI model, simpler console customisation and discoverability of actions (including sub-panes providing actions for a selected tree node and item, along with a helpful description).
  • Richer snap-ins with simplified customisation, template-based snap-in design, and functionally rich views.

Windows Installer v3.1

Windows Installer (.MSI) v3.0 shipped with Windows XP service pack 2 (and v3.1 is the latest version, as described in Microsoft knowledge base article 893803). Whilst it does not support Windows 95, 98, ME or NT, Windows Installer offers:

  • Improved logging (see the examples after this list).
  • Scripting objects.
  • Sourcelist API enhancements.
  • Enhanced inventory API.
  • Command line switches.
  • Enhanced patching.
  • New software development kit (SDK) tools and documentation updates.
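
As flagged in the logging item, here are a couple of illustrative msiexec command lines (package.msi and patch.msp are placeholders), showing the verbose logging switch alongside the /update switch introduced with v3.0’s enhanced patching:

msiexec /i package.msi /l*v install.log
msiexec /update patch.msp /l*v patch.log

The /l*v switch writes a verbose log of every action the installer takes – invaluable when an installation or patch fails without an obvious error.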

Microsoft Shell (MSH)/Monad

Monad is a new command shell for Windows, designed to address some of the problems associated with the existing Windows shell, i.e. a weak “language” and sporadic command line coverage, combined with a GUI that is difficult to automate. Monad provides command-oriented scripting capabilities for administrators and systems integrators, including an integrated shell, “commandlets”, new utilities and a scripting language. Wikipedia has a good description of the MSH shell, including links to additional resources.

Microsoft acquires FrontBridge

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in March, I wrote about some new e-mail message continuity services from FrontBridge. Well, according to a press release just received from Microsoft, FrontBridge is about to become Microsoft’s latest acquisition as it steps up its systems management and security capabilities. With the purchase of Giant Company (anti-spyware), Sybari (anti-virus) and now FrontBridge (anti-spam and message continuity), Microsoft’s security arsenal is starting to look good. It will be interesting to see how these purchases shape up and whether they are integrated into Windows, retained on an application service provider (ASP) basis, or developed into one or more new products, perhaps as part of the System Center family, or (in the case of FrontBridge) maybe we will see some of the new technology integrated into Exchange 12?

Some clarity around Microsoft’s operating system release cycles

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I normally avoid blogging about Microsoft’s release plans for new technology as they tend to be out of date almost as soon as they are written; however, at last week’s Microsoft Technical Roadshow, John Howard gave one of the clearest examples I’ve ever seen of Microsoft’s plans for new operating system releases.

Microsoft aims to provide a major operating system release every four years with release updates approximately half way between major releases. For example, Windows Server 2003 was released on 28 March 2003, Windows Server 2003 R2 is expected during 2005 (delayed due to the late shipping of service pack 1) and the next version of Windows Server (codenamed Longhorn) can be expected in 2007. Following this pattern, we can expect an update to Longhorn in 2009 and the following version of the Windows Server product (codenamed Blackcomb) to make an appearance in 2011.

On the support side, mainstream service packs and updates will be provided for at least 5 years from the date of a major release (i.e. until 2008 for Windows Server 2003) with extended support available for a further 5 years.

Migrating from a Novell NetWare environment to the Windows Server System

This content is 20 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

In my last job, I managed the migration of a major fashion design, marketing and retail company’s European business from Novell NetWare and GroupWise to a Microsoft platform. With a limited budget (none) for migration tools, only free utilities could be used and it worked, but was restrictive. This post discusses some of the alternatives that are available, based on a presentation from Microsoft’s Steve Plank at the IT Forum highlights event back in January.

NetWare has always been good at file and print, directory services, and management but traditionally it has lacked an application platform (although that is changing with Novell’s adoption of SuSE Linux) so many organisations have implemented Microsoft applications such as Exchange Server and SQL Server. Depending on the application, this may lead to a requirement for Active Directory (AD), and once the Windows servers for AD are in place, then it seems logical to provide file and print, or web services from the same infrastructure (IT Week recently reported that even Novell concedes that it has been losing between 12 and 15% of NetWare users every year for the last 4-5 years). This leads to a number of challenges around migration and interoperability. For many organisations, there is simply too big an investment in the existing environment to dump it all and move to a new platform, and so interoperability is a must; however by moving away from a mixed environment, support (and licensing) costs can be reduced, and the existing NetWare Directory Services (NDS)/eDirectory experience can even be used in planning the AD design.

There are a number of tools available to assist organisations with a migration to the Windows server system, the first of which is Microsoft Services for NetWare (SFN). Formerly chargeable, but now available as a free download, SFN provides:

  • File migration utility, which migrates files, preserving access controls (with some limitations as NetWare file attributes do not map directly onto the NTFS file system).
  • File and Print Services for NetWare, making a Windows server appear as a NetWare box (although only supporting the IPX transport and NetWare 3.x bindery mode – from v5.03 of SFN onwards, file and print services for NetWare have been removed and are now available as a separate download).
  • Microsoft directory synchronisation services for NetWare (MSDSS), which provides 1- or 2-way synchronisation between NDS versions 4.x-6.x and AD; however there are some schema extensions required (which may or may not be desirable) and the Novell NetWare 32-bit client (v4.9 SP2) must also be installed (on a domain controller).

Third-party tools are also available (e.g. from Quest Software, who bought the previous Fastlane product set) and Microsoft is said to be producing solution accelerators to assist organisations in the transition.

It is important to bear in mind that data can co-exist in the two environments and that a migration is really a file copy. Old copies of data should therefore be decommissioned, to prevent two copies from being altered by users on different systems.

On the interoperability front, besides the gateway services for NetWare (GSNW) Windows server component (and client services for NetWare in Windows client operating systems), there is Microsoft Identity Integration Server (MIIS), which provides directory synchronisation, password management and user provisioning, or SFN can be used as a short term fix.

Implementing MSDSS for one-way synchronisation from AD to NDS is good if AD is the focal point for management (e.g. as a short-term strategy until a move to AD can be completed), but is probably not sustainable in the long term. Two-way synchronisation allows both directories to be managed. There are some “gotchas” though:

  • Synchronisation is not real time – it works on a schedule, with an agent on the Windows side performing a push/pull operation.
  • More significantly, whilst AD does allow MSDSS to store passwords using reversible encryption, using a key which is only known by MSDSS, passwords cannot be passed from AD back to NDS as there is no reversible encryption option.

The file migration utility is actually a cut-down version of the Fastlane product, supporting NetWare versions 4.x, 5.x and 6.x as well as eDirectory 8.7.3. It preserves user permissions and provides some limited logging capabilities (although for reporting, the full product is required). Some considerations when using the tool include:

  • Data volumes (i.e. can all the data be physically migrated in the time available?) – consequently, it may be appropriate to perform a trial run, then to migrate users and data in small volumes (scheduled for quiet times). One advantage of the migration may be the opportunity to consolidate several smaller servers into one.
  • Drive letters in document links – many Office applications convert drive letters to UNC paths when saving documents. If the server location changes, then the link will be broken, although tools are available to assist in modifying this.
  • Encryption – encrypted files will need to be decrypted before they can be migrated.

There may also be migration considerations such as directory restructuring, removing the NetWare client from workstations and changes to login scripts, so whilst the free tools will be of use to many organisations, enterprises with more than about 2000 users may wish to make use of third-party tools. Quest Software’s NDS Migrator handles both object and data migration (together or as separate operations), with a central management console which makes use of a mapping database to store metadata (either SQL Server or MSDE – although MSDE is limited to 2GB in size).

NDS Migrator is able to deal with a number of complex scenarios, as well as supporting the saving of configuration options (for a repeatable migration). Security principals are examined first, before attempting file migration, based on a file scan (during which information is written to the mapping database) and then finally a migration which uses the information from the database.

In NDS, a container is a security principal; whereas in AD, it is well known that security permissions cannot be applied to an OU. Instead, NDS Migrator creates global groups in AD called container permission equivalent groups (CPEGs), which correspond to an NDS container and are always named with a $ prefix.

NDS allows common names to be duplicated in different parts of the tree, whereas AD common names must be unique. NDS Migrator handles this with pre-migration mapping and planning (identifying intra-NDS naming conflicts and remapping accordingly), as well as allowing for flexible migration (e.g. moving files to a new location), with a pre-migration file scan, Macintosh file support and a multi-threaded copy engine (the version in the SFN file migration utility is single-threaded).

Attribute mapping is supported, such that if NDS has been extended with additional attributes, these can be created in AD. It also handles the differences between NetWare and Windows file system permissions and file ownership (as NDS allows files to exist without an owner, but Windows does not).

It may be of interest that not all of Quest’s tools are available for purchase – some are only available to Quest’s professional services organisation, including NDS reporter (used to assess the NDS environment), workgroup migration tools, and a tool to remove the Novell NetWare client from Windows 2000 and XP clients.

In summary, there are many tools available to assist with the migration from NetWare to Windows and, as with any migration, the key to success is in the planning. With careful preparation and through becoming familiar with the tools that are available, administrators may be confident in performing a successful migration.

Links

Novell NetWare to Windows Server 2003 migration planning guide (Microsoft).
Quest NDS Migrator.

(Since I wrote the original notes for this post, the Microsoft TechNet Industry Insiders blog has carried another article on NDS Migrations, contributed by Darren Catteral from Quest Software).