It’s time to practise safe computing – whatever the operating system


I recently switched my primary home computer to a Mac but I also use Windows and Linux. I don’t consider myself to be a member of the Mac community, or the Linux community, or the Windows community – because (based on many forum posts and blog comments that I read) all of these “communities” are full of people with bigoted views that generally boil down to “my OS is better than your OS” or “Duh… but why would you want to use that?”.

Based largely on Apple’s advertising though, one of the things that I assumed about Mac OS X was that I’d be secure by default. Nope. It turns out that’s not true, as there is an obscure flaw in Mac OS X (surely not?!) whereby a malformed installer package can elevate its privileges and become root. After running Windows for 16 years I’m used to this sort of flaw, but surely His Jobsness’ wonderful creation is above such things!

Frankly I don’t care that Mac OS X is flawed. So is Linux. So is Windows. So is anything with many millions of lines of code – open or closed source – but I thought better of Apple because I believed that they would keep me safe by default. It’s well known that running Windows XP as anything less than a Power User is difficult and that’s one of the many improvements in Windows Vista. All the Linux installers that I’ve used recently suggested that I create a non-root user as well as root, but the OS X installer is happy for me to breeze along and create a single administrator account without a word of further advice. I appreciate that an OS X administrator is not equal to root, but it’s nevertheless a higher level of access than should be used for daily computing and, because I didn’t know any better (I’m just a dumb switcher), I didn’t create a standard user account (until today).

I read a lot of Mac and Linux zealots singing the praises of their operating systems and saying how “Windoze” is a haven for spyware and viruses. Well, it’s time to wake up and smell the coffee – as Mac OS X gains in popularity (I heard something about the new MacBooks recently taking a 12% share of all new laptop sales), Mac users will have to start thinking about spyware, viruses and the like. Now is the time to practise safe computing – whatever the operating system – because with most users running as administrators, this could quickly become a major issue.

How safe is your personal information?


I recently wrote about why I’m cautious of all the hype surrounding what has become known as Web 2.0. One of my major concerns relates to data security: if my data is held on someone else’s servers, how can I control what it is being used for? Well, last week, back in the Web 1.0 world, I experienced exactly the kind of issue that underlines these concerns, when my ISP accidentally sent my account information to 1,800 customers.

The first I knew was an e-mail from the Marketing Director which read (in part):

“This afternoon, whilst the marketing team was in the process of sending you a Customer Service Update, an email was sent in error to 1,800 customers. The email sent in error contained information relating to your Force9 service. The specific information was: our internal reference number, username, name, product name, subscription amount, Force9 email, alternative email, marketing preference and active status.

No address details, credit card details, payment details or phone numbers have been disclosed.

We have contacted the customers who received your information, asked them to disregard the contents and delete the email.

I would like to apologise. Although this was a result of a regrettable human error, we will be updating our systems and processes immediately to prevent this from ever reoccurring.

Once again, please accept my apologies for any inconvenience this has caused.”

Of course, my ISP should be commended for “‘fessing up” on this one – how many other organisations would have just kept quiet? But the accidental disclosure of information held about me by third parties is not an isolated incident – last year I experienced a similar problem when the Spread Firefox database was compromised. Some protection can be gained when registering with websites by using false details (watch those mandatory fields and think “why do they need my mailing address and telephone number?”); however there are practical reasons why many service providers need to be given real information.

In these days of direct marketing and (even worse) identity fraud, it seems to me that being concerned about the use of your personal details when they are supplied to a service provider is not being paranoid – it’s just common sense.

Microsoft’s digital identity metasystem


After months of hearing about Windows Vista eye candy (and hardly scratching the surface with anything of real substance regarding the operating system platform), there seems to be a lot of talk about digital identity at Microsoft right now. A couple of weeks back I was at the Microsoft UK Security Summit, where I saw Kim Cameron (Microsoft’s Chief Architect for identity and access) give a presentation on CardSpace (formerly codenamed “InfoCard”) – a new identity metasystem contained within the Microsoft .NET Framework v3.0 (expected to ship with Windows Vista but also available for XP). Then, a couple of days ago, my copy of the July 2006 TechNet magazine arrived, themed around managing identity.

This is not the first time Microsoft has attempted to produce a digital identity management system. A few years back, Microsoft Passport was launched as a web service for identity management. But Passport didn’t work out (Kim Cameron refers to it as the world’s largest identity failure). The system works – 300 million people use it for accessing Microsoft services such as Hotmail and MSN Messenger, generating a billion logons each day – but people don’t want to have Microsoft controlling access to other Internet services (eBay used Passport for a while but dropped it in favour of their own access system).

Digital identity is, quite simply, a set of claims made about a subject (e.g. “My name is Mark Wilson”, “I work as a Senior Customer Solution Architect for Fujitsu Services”, “I live in the UK”, “my website is at http://www.markwilson.co.uk/”). Each of these claims may need to be verified before they are acted upon (e.g. a party to whom I am asserting my identity might like to check that I do indeed work where I say I do by contacting Fujitsu Services). We each have many identities for many uses that are required for transactions both in the real world and online. Indeed, all modern access technology is based on the concept of a digital identity (e.g. Kerberos and PKI both claim that the subject has a key showing their identity).
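To make that concrete, a digital identity can be thought of as a simple data structure. The Python sketch below is purely illustrative (it is not CardSpace’s actual token format – real systems exchange claims as signed XML tokens):

# Purely illustrative: a digital identity modelled as a subject
# plus a set of claims made about that subject.
identity = {
    "subject": "Mark Wilson",
    "claims": {
        "name": "Mark Wilson",
        "employer": "Fujitsu Services",
        "country": "UK",
        "website": "http://www.markwilson.co.uk/",
    },
}
# A relying party acts on a claim only once it has been verified,
# e.g. by checking the "employer" claim with Fujitsu Services.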

Microsoft’s latest identity metasystem learns from Passport – and interestingly, feedback gained via Kim Cameron’s identity weblog has been a major inspiration for CardSpace. Through the site, the identity community has established seven laws of identity:

  1. User control and consent.
  2. Minimal disclosure for a constrained use.
  3. Justifiable parties.
  4. Directed identity.
  5. Pluralism of operators and technologies.
  6. Human integration.
  7. Consistent experience across contexts.

Another area where CardSpace fundamentally differs from Passport is that Microsoft is not going it alone this time – CardSpace is based on WS-* web services and other operating system vendors (e.g. Apple and Red Hat) are also working on comparable (and compatible) solutions. Indeed, the open source identity selector (OSIS) consortium has been formed to address this technology and Microsoft provides technical assistance to OSIS.

The idea of an identity metasystem is to unify access and shield applications from the complexities of managing identity, but in a manner which is loosely coupled (i.e. allowing for multiple operators, technologies and implementations). Many others have compared this to the way in which TCP/IP unified network access, paving the way for the connected systems that we have today.

The key players in an identity metasystem are:

  • Identity providers (who issue identities).
  • Subjects (individuals and entities about which claims are made).
  • Relying parties (who require and consume identities).

Each relying party will decide whether or not to act upon a claim, depending on information from an identity provider. In a real-world scenario, that might be analogous to arriving at a client’s office and saying “Hello, I’m Mark Wilson from Fujitsu Services. I’m here to visit your IT Manager”. The security/reception staff may take my word for it (in which case this is self-issued identity and I am both the subject and the provider) or they may ask for further confirmation, such as my driving licence, company identity card, or a letter/fax/e-mail inviting me to visit.

In a digital scenario the system works in a similar manner. When I log on to my PC, I enter my username to claim that I am Mark Wilson but the system will not allow access until I also supply a password that only Mark Wilson should know and my claims have been verified by a trusted identity provider (in this case the Active Directory domain controller, which confirms that the username and password combination matches the one it has stored for Mark Wilson). My workstation (the relying party) then allows me access to applications and data stored on the system.

In many ways a username and password combination is a bad identity mechanism – we have trained users to trust websites that ask them to enter a password. Imagine what would happen if I were to set up a phishing site that asks for a password: even if the correct password is entered, the site claims that it was incorrect. A typical user (and I am probably one of those) will then try their other passwords – the phishing site now has an extensive list of passwords which can be used to access other systems while pretending to be the user whose identity has been stolen. A website may be protected by many thousands of miles of secure communications but, as Kim Cameron put it, the last one metre of the connection is from the computer to the user’s head (hence identity law number 6 – human integration) – identity systems need to be designed in a way that is easy for users to make sense of, whilst remaining secure.

CardSpace does this by presenting the user with a selection of digital identity cards (similar to the plastic cards in our wallets) and highlighting only those that are suitable for the site. Only publicly available information is stored with the card (so that should hold phishers at bay – the information to be gained is useless to them) and because each card is tagged with an image (and only appropriate cards are highlighted for use), I know that I have selected the correct identity (why would I send my Government Gateway identity to a site that claims to be my online bank?). Digital identities can also be combined with other access controls such as smartcards. The card itself is just a user-friendly selection mechanism – the actual data transmitted is XML-based.

CardSpace runs in a protected subsystem (similar to the Windows login screen) – so when active there is no possibility of another application (e.g. malware) gaining access to the system or of screenscraping taking place. In addition, user interaction is required before releasing the identity information.

Once an identity has been selected, services that require identities can convert the supplied token between formats, using WS-Trust as the encapsulating protocol for claims transformation. For negotiations, WS-MetadataExchange and WS-SecurityPolicy are used. This makes the Microsoft implementation fully interoperable with other identity selector, relying party and identity provider implementations.

Microsoft is presently building a number of components for its identity metasystem:

  • CardSpace identity selector (usable by any application, included within .NET Framework v3.0 and hardened against tampering and spoofing).
  • CardSpace simple self-issued identity provider (makes use of strong PKI so that the user does not disclose passwords to relying parties).
  • Active Directory managed identity provider (to plug corporate users in to the metasystem via a full set of policy controls to manage the use of simple identities and Active Directory identities).
  • Windows Communication Foundation (for building distributed applications and implementing relying party services).

Post-Windows Vista, we can expect the Windows Login to be replaced with a CardSpace-based system. In the meantime, to find out more about Microsoft’s new identity metasystem, check out Kim Cameron’s identity blog, the Windows CardSpace pages, David Chappell’s Introducing InfoCard article on MSDN, and the July 2006 issue of TechNet magazine.

Windows Mobile device security


Over the years, I’ve attended various presentations featuring mobile access to data but most of them have been along the lines of “look at all this cool stuff I can do”. Last week I was at the Microsoft IT Security Summit and saw a slightly different angle on things as Jason Langridge presented a session on securing Windows Mobile devices – something which is becoming ever more important as we increasingly use mobile devices to access data on the move.

It’s surprising just how few people make any effort to secure their device and, according to Microsoft, only 25% of mobile users set even a password/PIN. Even so, that’s just the tip of the iceberg – mobile data exists in a variety of locations (including paper!) and whilst many IT Managers are concerned about data on smartphones, PDAs and USB devices, paradoxically, many notebook PCs have an unencrypted hard disk containing many gigabytes of data. A mobile security policy is different to a laptop security policy – and it’s more than just a set of technology recommendations – it should involve assessing the risk and deciding what data can safely be lost and what can’t. Ultimately there is a fundamental trade-off between security, usability and cost.

Potential mobile device security threats can come from a number of sources, including malware from applications of unknown origin, viruses, loss/theft, unauthorised access via a personal area network, wireless LAN, wireless WAN, LAN or through synchronisation with a desktop/notebook PC. Each of these represents a subsequent risk to a corporate network.

The Windows Mobile platform supports secure device configuration through 43 configuration service providers (CSPs). Each CSP processes XML configuration documents, which can be used to lock down a device – for example, to disable Bluetooth.
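In general terms, the provisioning XML looks something like this (the characteristic and parameter names below are an illustrative sketch, not copied from the CSP reference documentation):

<wap-provisioningdoc>
  <!-- Illustrative only: the exact characteristic/parm names vary by CSP -->
  <characteristic type="Bluetooth">
    <parm name="Enabled" value="0"/>
  </characteristic>
</wap-provisioningdoc>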


The diagram below illustrates the various methods of provisioning and control for mobile devices, from direct application installation or desktop ActiveSync, through in-ROM configuration, to over-the-air provisioning from Exchange Server, WAP or the Open Mobile Alliance (OMA) industry standard for mobile device management.

Mobile device provisioning and control methods

The most secure method of configuring a mobile device is via a custom in-ROM configuration – i.e. hard-coded XML in ROM, run during every cold boot. This method needs to be configured by the OEM or system integrator who creates the device image.

Secure system updates provide for after-market updates to device configuration, even when mobile. Image updates (a new feature in Windows Mobile 5.0) can update anything from the full image down to a single file, handling dependency and conflict resolution along the way. Controlled by the OEM or the mobile operator, image update packages are secured using cryptographic signatures.

Probably the simplest way to provide some form of perimeter security is a PIN code or strong password (depending on the device), incorporating an exponential delay after each incorrect attempt. Such arrangements can now be enforced using the tools provided in Exchange Server 2003 SP2 and/or the Systems Management Server device management feature pack. Exchange Server 2003 SP2 not only delivers improved access to Outlook data when mobile (with reduced bandwidth usage and latency, direct push e-mail, additional Outlook properties and global address list lookup); it also provides security policy provisioning for devices, with password restrictions, certificate authentication, S/MIME and the ability to locally or remotely reset a mobile device.
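The exponential delay works along these lines (an illustrative Python sketch of the idea, not the actual device implementation):

import time

def check_pin(get_attempt, correct_pin):
    # Each wrong attempt doubles the enforced wait (1s, 2s, 4s, 8s...),
    # quickly making brute-force PIN guessing impractical
    delay = 1
    while get_attempt() != correct_pin:
        time.sleep(delay)
        delay *= 2
    return True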

Windows Mobile does not encrypt data on the device, due to the impact on performance; however, it does include a cryptographic API, and SQL CE/SQL Mobile databases support 128-bit encryption. If data encryption on the device is required (bearing in mind that the volume of data involved is small, and that many unsecured notebook PCs represent a far larger security risk) then third-party solutions are available.

Mobile applications can be secured for both installation and execution. For installation, the .CAB file containing the application can be signed and is validated against certificates in the device certificate store. Similarly, .EXE/.DLL files (and .CPL files, which are a special type of .DLL) need to be signed and validated for execution. Users are asked to consent before installing or executing signed code and, if consent is given, a hash of each file is added to a prompt exclusion list to avoid repeated prompts. Note that copying executable files to the device is not the same as installing them and will result in an execution prompt.
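The prompt exclusion list amounts to remembering the hash of anything the user has already approved – roughly like this hedged Python sketch (illustrative only, not the actual Windows Mobile implementation):

import hashlib

approved_hashes = set()

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def needs_prompt(path):
    # Skip the consent prompt for files the user has already approved
    return file_hash(path) not in approved_hashes

def record_consent(path):
    # Called once the user consents, so they are not prompted again
    approved_hashes.add(file_hash(path))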

Windows Mobile supports one- and two-tier application execution control. In 1-tier mode, execution is all-or-nothing: applications are either blocked from executing completely or run as privileged/trusted. In 2-tier mode, an application can be signed for one of two trust levels – either privileged, with access to registries, APIs and hardware interfaces; or unprivileged, with applications restricted from certain operations. Smartphones support 1- or 2-tier operation, whereas Pocket PC devices are limited to a single tier.

Whilst application installation security can provide good protection against viruses and other malware, there are also anti-virus APIs built in to Windows Mobile with solutions available from a variety of vendors.

As new wireless network technologies come on stream, it is important to consider wide area network security too. Windows Mobile supports NTLM v2 as well as SSL, WPA and 802.1x user authentication using passwords or certificates. VPN support is also provided. From a personal area network (Bluetooth/infrared) perspective, peer-to-peer connections require interaction in order to accept data, and CSPs are available to block both Bluetooth and IrDA object exchange (OBEX). By default, Bluetooth is turned off on Windows Mobile 5.0 devices, giving out-of-the-box protection against bluesnarfing (gaining access to personal information) and bluejacking (unauthorised sending of messages to a device).

Jason summarised his presentation by pointing out that security is often used as a convenient excuse not to deploy mobile technology when what is really required is to establish a mobile security policy and to educate users.

A risk assessment must be made of each security scenario and risk management should be based on that assessment. Solutions should be automatically enforced but must also be acceptable to users (e.g. complex passwords will not work well on a smartphone!). Security is a combination of both a policy and technology but the policy must come before the technology choice (only when it is known what is to be protected from whom in which situations can it be decided how to secure it).

Suggested further reading
Microsoft mobile security white paper
Windows Mobile network security white paper

Making sense of public key infrastructure


I’ve written a bit on this blog previously in an attempt to demystify public key infrastructure (PKI) but a fellow contributor to the Microsoft Industry Insiders blog, Adrian Beasley, has written an extensive article entitled Making sense of public key infrastructure, which could be very useful for anyone trying to get their head around the subject.

As Steve Lamb pointed out when introducing Adrian’s article, he has tried to “keep very distinct the technology – asymmetric cryptography – which makes the whole thing possible, and the administrative process – PKI and all the rest – which controls it.”

Using unprivileged accounts in Windows


A few weeks back, Microsoft UK’s Steve Lamb presented a session on using the principle of least privileged access to reduce exposure to security threats under Windows (basically, running as much as possible as a standard, non-administrative user). Unfortunately I missed the event but I was chatting with Steve last week and he filled me in on the basic principles (which I’ve padded out with a few notes from his slidedeck).

The runas command can be used to start a program as a different user (as programs inherit their permissions from the parent process, starting a cmd shell as an Administrator and then launching an application will launch that application as an Administrator). Within the Windows GUI, there is often a right-click option for runas, although for control panel applets Shift+right-click is needed to expose the runas option. Shortcuts can also be modified to run with different credentials for applications that always require a higher level of access.
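For example, to start a command shell under an administrative account (the domain and account names here are just placeholders):

runas /user:mydomain\administrator cmd.exe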

There are occasions when runas just doesn’t work – for example applications that reuse existing instances (Windows Explorer, Microsoft Word) or those that are started through the shell using the ShellExecute() API call or dynamic data exchange (DDE). Unfortunately Microsoft Update is one of those applications for which runas won’t work. Aaron Margosis has some advice on his blog to help work around issues with runas and Windows Explorer.

Privileged command shell windows can be set apart using a different colour scheme, for example:

cmd.exe /t:cf /k title Administration Shell
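Here, the /t:cf switch sets bright white text on a light red background and /k title names the window “Administration Shell” – making it immediately obvious which shells are privileged.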

For the GUI, the TweakUI power toy can be used to set an alternative bitmap for Internet Explorer and Windows Explorer, or Aaron Margosis’ PrivBar displays the current privilege level.

Whilst it’s true that using a local account will prevent domain-wide issues, there are side effects: there is no access to domain resources, different profile settings (and per-user policy settings) are in effect, and some applications assume that the installer is the end user. One possible resolution is Aaron Margosis’ MakeMeAdmin tool, which allows for temporary elevation of the current account’s privileges (and those of any applications which inherit the user context). MakeMeAdmin can be downloaded from Aaron’s blog and he has a later follow-up post with more information.

Some applications are written to run as Administrator and there’s not a lot that an end user can do about poor coding (other than replacing the application with something else). Adding the user to the local Administrators group to resolve such issues is not good practice. It may instead be possible to loosen the ACLs on application-specific resources (i.e. %ProgramFiles%\applicationname\ and HKEY_LOCAL_MACHINE\SOFTWARE\applicationname\Settings), but this should not be carried out on operating system resources (e.g. %windir%, %windir%\System32 and HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows). The important thing to remember is to do this in a granular fashion, applying additional permissions only to those resources to which access is required.
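For example, the XP-era cacls tool can grant a user change access to an application’s own folder (the domain and account names below are placeholders; applicationname is as above):

cacls "%ProgramFiles%\applicationname" /T /E /G mydomain\markw:C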

If an application writes to HKEY_CLASSES_ROOT, then it’s usually a bug. HKEY_CLASSES_ROOT is a merged view of HKEY_LOCAL_MACHINE\SOFTWARE\Classes and HKEY_CURRENT_USER\SOFTWARE\Classes so writing to HKEY_CLASSES_ROOT effectively goes to HKEY_CURRENT_USER if the key already exists. Consequently, problems with HKEY_CLASSES_ROOT can often be overcome by pre-creating keys under HKEY_CURRENT_USER.
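A key can be pre-created with the built-in reg tool, for example (the key and value names here are purely illustrative):

reg add "HKCU\Software\Classes\applicationname.Document" /ve /d "Application Document"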

If all else fails, utilities such as MakeMeAdmin can be used to allow an application to run with elevated privileges, but they require the user to know the Administrator password – alternatives include Valery Pryamikov’s RunAsAdmin and DesktopStandard’s PolicyMaker Application Security.

In Windows Vista, everything changes again with new functionality known as User Account Control (also known by other names during its development, including user account protection and flexible account control technologies):

  • All users run as an unprivileged user by default, even when logged on as an Administrator.
  • Once running, the privilege of an application cannot be changed.
  • Administrators only use full privilege for administrative tasks or applications.
  • Users are prompted to provide explicit consent before using elevated privilege, which then lasts for the life of the process.
  • A high level of application compatibility is achieved using redirection (which allows legacy applications to run as a normal user with HKEY_LOCAL_MACHINE\Software access being emulated by a virtual location under HKEY_CURRENT_USER and attempted writes to the %SystemRoot% and %ProgramFiles% folders being redirected to a per-user store); however this is a temporary mitigation for 32-bit product versions only (i.e. not implemented in 64-bit versions of Windows Vista).

Although Windows has come a long way to making least privileged access usable, it’s important to remember that there are some things that least privileged access can’t guard against:

  • Anything you can do to yourself.
  • Weak passwords.
  • Attacks on services.
  • Phishing.
  • Stupidity.

Unfortunately I’m writing this post on the notebook PC supplied by my employer with a standard corporate build and my domain account is also a local administrator. I think that probably falls into the last category listed above… doh!

Putting PKI into practice


Recently, I blogged about public/private key cryptography in plain(ish) English. That post was based on a session which I saw Microsoft UK’s Steve Lamb present. A couple of weeks back, I saw the follow-up session, where Steve put some of this into practice, securing websites, e-mail and files.

Before looking at my notes from Steve’s demos, it’s worth picking up on a few items that were either not covered in the previous post, or which could be useful in understanding what follows:

  • In the previous post, I described encryption using symmetric and asymmetric keys; however, many real-world scenarios involve a hybrid approach, in which a random number generator is used to seed a symmetric session key containing a shared secret. That session key is then encrypted asymmetrically for distribution to one or more recipients; once distributed, the (fast) symmetric session key can be used for the bulk of the encryption (see the sketch after this list).
  • If using the certificate authority (CA) built into Windows, it’s useful to note that this can be installed in standalone or enterprise mode. The difference is that enterprise mode is integrated with Active Directory, allowing the automatic publishing of certificates. In a real-world environment, multiple tiers of CA would be used, and standalone or enterprise CAs can both be used as either root or issuing CAs. It’s also worth noting that whilst the supplied certificate templates cannot be edited, they can be copied and the copies amended.
  • Whilst it is technically possible to have one public/private key pair and an associated certificate for multiple circumstances, there are often administrative reasons for separating them. For example, in life one would not have a common key for one’s house and car, in case one needed to change one without giving away access to the other. Another analogy is single sign-on, where it is convenient to have access to all systems, but may be more secure to have one set of access permissions per computer system.
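A minimal sketch of the hybrid approach in Python, using the third-party cryptography package (illustrative only – not any particular product’s implementation):

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient generates an asymmetric key pair and publishes the public key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: a random symmetric session key does the (fast) bulk encryption...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual message")

# ...and only the small session key is encrypted asymmetrically (slow)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b"the actual message"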

The rest of this post describes the practical demonstrations which Steve gave for using PKI in the real world.

Securing a web site using HTTPS is relatively simple:

  1. Create the site.
  2. Enrol for a web server certificate (this authenticates the server to the client and it is important that the common name in the certificate matches the fully qualified server name, in order to prevent warnings on the client).
  3. Configure the web server to use secure sockets layer (SSL) for the site – either forced or allowed.

Sometimes web sites will be published via a firewall (such as ISA Server), which to the outside world would appear to be the web server. Using such an approach for an SSL-secured site, the certificate would be installed on the firewall (along with the corresponding private key). This has the advantage of letting intelligent application layer firewalls inspect inbound HTTPS traffic before passing it on to the web server (either as HTTP or HTTPS, possibly over IPSec). Each site is published by creating a listener – i.e. telling the firewall to listen on a particular port for traffic destined for a particular site. Sites cannot share a listener, so if one site is already listening on port 443, other sites will need to be allocated different port numbers.

Another common scenario is load-balancing. In such circumstances, the certificate would need to be installed on each server which appears to be the website. It’s important to note that some CAs may prohibit such actions in the licensing for a server certificate, in which case a certificate would need to be purchased for each server.

Interestingly, it is possible to publish a site using 0-bit SSL encryption (a null cipher) – i.e. unencrypted, but still appearing to be secure (the URL includes https:// and a padlock is displayed in the browser). Such a scenario is rare (at least among reputable websites), but it is something to watch out for.

Configuring secure e-mail is also straightforward:

  1. Enrol for a user certificate (if using a Windows CA, this can either be achieved via a browser connection to http://servername/certsrv or by using the appropriate MMC snap-ins).
  2. Configure the e-mail client to use the certificate (which can be viewed within the web browser configuration, or using the appropriate MMC snap-in).
  3. When sending a message, opt to sign or encrypt the e-mail.

Signed e-mail requires that the recipient trusts the issuer, but they do not necessarily need to have access to the issuing CA; however, this may be a reason to use an external CA to issue the certificate. Encrypted e-mail requires access to the user’s public key (in a certificate).

To demonstrate access to a tampered e-mail, Steve turned off Outlook’s cached mode and directly edited a message on the server (using the Exchange Server M: drive to edit a .EML file).

Another possible use for a PKI is in securing files. There are a number of levels of file access security, from BIOS passwords (problematic to manage); syskey mode 3 (useful for protecting access to notebook PCs – a system restore disk will be required for forgotten syskey passwords/lost key storage); good passwords/passphrases to mitigate known hash attacks; and finally the Windows encrypting file system (EFS).

EFS is transparent to both applications and users, except that encrypted files are displayed as green in Windows Explorer (cf. blue for compressed files). EFS is also effectively infeasible to break, so recovery agents should be implemented (without them, encrypted data is unrecoverable if keys are lost). There are, however, some EFS “gotchas” to watch out for:

  • EFS is expensive in terms of CPU time, so may be best offloaded to hardware.
  • When using EFS with groups, if the group membership changes after the file is encrypted, new users are still denied access. Consequently using EFS with groups is not recommended.
  • EFS certificates should be backed up – with the keys! If there is no PKI or data recovery agent (DRA) then access to the files will be lost (UK readers should consider the consequences of the Regulation of Investigatory Powers Act 2000 if encrypted data cannot be recovered). Windows users can use the cipher /x command to store certificates and keys in a file (e.g. on a USB drive). Private keys can also be exported (in Windows) using the certificate export wizard.

Best practice indicates:

  • The DRA should be exported and removed from the computer.
  • Plain text shreds should be eliminated (e.g. using cipher /w, which overwrites free disk space with passes of 00, FF and random data).
  • Use an enterprise CA with automatic enrolment and a DRA configured via group policy.

More information can be found in Steve’s article on improving web security with encryption and firewall technologies in Microsoft’s November 2005 issue of TechNet magazine.

Public/private key cryptography in plain(ish) English


Public key infrastructure (PKI) is one of those things that sounds like a good idea, but which I can never get my head around. It seems to involve so many terms to get to grips with and so, when Steve Lamb presented a “plain English” PKI session at Microsoft UK a few weeks back, I made sure that I was there.

Steve explained that a PKI can be used to secure e-mail (signed/encrypted messages), browsing (SSL authentication and encryption), code (authenticode), wireless network connectivity (PEAP and EAP-TLS), documents (rights management), networks (segmented with IPSec) and files (encrypted file system).

Before looking at PKI, it’s necessary to understand two forms of cryptography – symmetric and asymmetric. I described these last year in my introduction to IPSec post.

The important things to note about public key cryptography are that:

  • Knowledge of the encryption key doesn’t give knowledge of the decryption key.
  • The receiver of the information generates a pair of keys (either using a hardware security module or software) and publishes the public key in a directory, keeping the private key secret.
  • What one key does, the other undoes – contrary to many texts, the information is not always encrypted with the recipient’s public key.

To some, this may sound like stating the obvious, but it is perfectly safe to publish a public key. In fact, that’s what a public key certificate does.

Having understood how a PKI is an asymmetric key distribution mechanism, we need a trust model to ensure that the public key really does belong to who it says it does. What if I were to generate a set of keys and publish the public key as my manager’s public key? Other people could send him information but he wouldn’t be able to read it because he wouldn’t have the private key; however I would have it – effectively I could read messages that were intended for my manager.

There are two potential methods to ensure that my manager’s public key really is his:

  • One could call him or meet with him and verify the fingerprint (hash) of the key, but that would be time consuming and is potentially error-prone.
  • Alternatively, one could employ a trusted third party to certify that the key really does belong to my manager by checking for a trusted digital signature on the key. The issue with this method is that the digital signature used to sign the key needs to be trusted too. Again, there are two methods of dealing with this:
    • A “web of trust” model, such as Phil Zimmermann‘s pretty good privacy (PGP) – upon which the GNU privacy guard (GPG) on Linux systems was built – where individuals digitally sign one another’s keys (and implicitly trust keys signed by friends/colleagues).
    • A trusted authority and “path of trust” model, using certificate authorities (CAs), where everyone trusts the root CA (e.g. VeriSign, Thawte, etc.) and the CA digitally signs the keys of anyone whose credentials have been checked using its published methods (producing a certificate). One CA may nominate another CA and they would automatically be trusted too, building a hierarchy of trust.

Most CAs will have multiple classes of trust, depending on the checks which have been performed. The class of the trust would normally be included within the certificate and the different levels of checking should be published in a document known as a certificate practice statement.

The analogy that I find useful here is one of writing and signing a cheque when paying for goods or services. I could write a cheque on any piece of paper, but the cheques that I write are trusted because they are written on my bank‘s paper – that bank is effectively a trusted CA. When I opened my account the bank would have performed various background checks on me and they also hold a reference of my signature, which can be checked against my cheques if required.

The padlock that indicates a secure website in most browsers also looks a bit like a handbag (UK English) or purse (US English)! Steve Lamb has an analogy for users that I particularly like – “it’s safe to shop where you see the handbag”; however, it’s also important to note that the padlock (not really a handbag!) just means that SSL security is in use – it doesn’t mean that the site can automatically be trusted (it may be a phishing site), so it’s important to examine the certificate details by double-clicking on the padlock.

Each verification method has its own advantages and disadvantages – web of trust can be considered more “trustworthy”, but it’s time-consuming and not well understood by the general public – CAs, whilst easy to deploy and manage, can be considered to be the tools of “Big Brother” and they have to be trusted implicitly.

Digital signatures work by calculating a short message digest (a hash) and encrypting this using the signatory’s private key, to produce a digital signature. The hash function should result in a unique output (although it’s theoretically possible that two messages could produce the same hash, since a large volume of data is being represented by a smaller string) – the important point to note is that even the tiniest of changes will break the hash.

Creating a digital signature

Upon receipt, the recipient uses the signatory’s public key to decrypt the hash. Because the hash is generated using a one-way function, this cannot be expanded to access the data – instead, the data is transmitted with the signature and a new hash calculated by the recipient. If the two hashes match then the integrity of the message is proven. If not, then the message has almost certainly been tampered with (or at least damaged in transit).

Verifying a digital signature
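The whole sign/verify flow can be expressed in a few lines of Python, using the third-party cryptography package (an illustrative sketch, not any particular product’s code):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"the exact bytes being signed"
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Signing: hash the message, then encrypt the digest with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verification: the recipient recomputes the hash and checks it against
# the signature using the signatory's public key; changing even one bit
# of the message makes this call raise InvalidSignature
private_key.public_key().verify(signature, message,
                                padding.PKCS1v15(), hashes.SHA256())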

Certificates are really just a method of publishing public keys (and guaranteeing their authenticity). The simplest certificate just contains information about the entity that is being certified to own a public key and the public key itself. The certificate is digitally signed by someone who is trusted – like a friend (for PGP) or a CA. Certificates are generally valid for a defined period (e.g. one year) and can be revoked using a certificate revocation list (CRL) or using the real-time equivalent, online certificate status protocol (OCSP). If the CRL or OCSP cannot be accessed, then a certificate is considered invalid. Certificates are analogous to a traditional passport in that a passport is issued by a trusted authority (e.g. the UK passport agency), is valid for a number of years and contains basic information about the holder as well as some form of identification (picture, signature, biometric data, etc.).

X.509 is the standard used for certificates, with version 3 supporting application-specific extensions (e.g. authentication with certificates – the process that a browser will follow before displaying the padlock symbol to indicate that SSL is in use – authenticating the server to the client). Whether this certificate is issued by an external CA or an organisational (internal) CA is really a matter of choice between the level of trust placed in the certificate and how much the website owner is prepared to pay for a certificate (it’s unlikely that an external certificate will be required for a secure intranet site, whilst one may be expected for a major e-commerce site).

The SSL process works as follows (a minimal client-side sketch in Python follows the list):

  1. The browser (client) obtains the site (server) certificate.
  2. The digital signature is verified (so the client is sure that the public key really belongs to the site).
  3. To be sure that this is the actual site, not another site masquerading as the real site, the client challenges the server to encrypt a phrase. Because the server has the corresponding private key, it can encrypt the phrase and return it to the client.
  4. The client decrypts the phrase using the public key from the certificate – if the phrase matches the challenge, then the site is verified as authentic.
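Using Python’s standard library, the client side looks something like this (the hostname is a placeholder); the default context performs the trusted-root and hostname checks of steps 1 and 2, and wrap_socket drives the rest of the handshake:

import socket
import ssl

# The default context trusts the platform's root CAs and verifies that
# the certificate matches the hostname we ask for
context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        # At this point the certificate chain has been validated and the
        # server has proved possession of its private key
        print(tls.getpeercert()["subject"])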

Most certificates can be considered safe – i.e. there is no need to protect them heavily as they only contain publicly available information. The certificate can be stored anywhere – in a file, on a USB token, on a memory-only smartcard, even printed; however private keys (and certificates that include them) are extremely vulnerable, requiring protected storage within the operating system or on a smartcard with cryptographic functionality (see below). Windows 2000 Server and Windows Server 2003 include a CA which can be used to issue and store certificates, especially within a company that is just looking to secure its own data. The Windows Server 2003 CA even supports auto-enrollment (i.e. where a certificate request is processed automatically), but what if the administrators within an organisation are not considered trustworthy? In that case, an external CA may be the only choice.

Most organisations use more than one key for signing certificates, because a single root key does not scale well, can be difficult to manage responsibility for in a large organisation, and is dangerous if the key is compromised. Instead, certificate hierarchies can be established, with a CA root certificate at the top and multiple levels of CA within the organisation. Typically the root CA is installed, then taken offline once the subordinate CAs have been installed. Because the root is offline, it cannot be compromised, which is important because complete trust is placed in the root CA. With this model, validating a certificate possibly involves validating a path of trust – essentially this is just checking the digital signature, but it may be necessary to walk the path of all subordinate CAs until the root is reached (or a subordinate that is explicitly trusted). Cross-certification is also possible by exporting and importing certificate paths between CA hierarchies.

The list of trusted root CAs increases with each Windows service pack. Some certificates can be obtained without payment, even those included in the list of Windows’ trusted root CAs. Whilst these are as valid as any other certificate, they are unlikely to have undergone such stringent checks and so the level of trust that can be placed in them may not be deemed sufficient by some organisations. If this is a concern, then it can be cleared down from within the browser, using group policy or via a script – the only client impact will be a (possibly confusing) message asking if the certificate issuer should be added to the list of trusted authorities when a site is accessed.

Smartcards are often perceived as a useful second factor for authentication purposes, but it’s useful to note that not all smartcards are equal. In fact, not all smartcards are smart! Some cards are really just a memory chip and are not recommended for storing a private key used to verify identity. More expensive smartcards are cryptographically enabled, meaning that the key never has to leave the smartcard, with all processing done on the smartcard chip. Additional protection can also be included (e.g. biometric measures) as well as self-destruction where the card is known to have been compromised.

It’s worth noting that in the UK, organisations that encrypt data and do not have the means to decrypt it can fall foul of the Regulation of Investigatory Powers (RIP) Act 2000. There is an alternative – leaving the keys in escrow – but that is tantamount to leaving the keys with the government. Instead, the recommended practice for managed environments with encryption is to store keys in a location that is encrypted with the key recovery operator’s key – that way the keys can be recovered by an authorised user, if required.

After attending Steve’s session, I came away feeling that maybe PKI is not so complex after all. Steve’s recommendations were to set up a test environment and investigate further; to minimise the scope of an initial implementation; and to read up on certificate practice and certificate practice statements (which should be viewed as being more important than the technology itself if defending the trustworthiness of a certificate in court).

For anyone implementing PKI in a Microsoft infrastructure, there’s more information on PKI at the Microsoft website.

Securing my wireless network


Last week I wrote about upgrading my wireless network. It’s been running well since then, so this afternoon I decided to go ahead with stage 3 – configuring Wi-Fi Protected Access (WPA). As I haven’t set up a RADIUS server here (and, to be honest, it would be overkill for a small network like mine), I decided to implement WPA-PSK (pre-shared key), as detailed in Steve Lamb’s post (and blogcast) on the subject.
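Incidentally, WPA-PSK turns the passphrase into a 256-bit pairwise master key by running PBKDF2-HMAC-SHA1 over the passphrase and the network SSID – easily illustrated in Python (the passphrase and SSID below are placeholders):

import hashlib

# WPA-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
pmk = hashlib.pbkdf2_hmac("sha1", b"my secret passphrase", b"homenetwork", 4096, 32)
print(pmk.hex())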

Initially, it all went well – I simply set the access point to use WPA-PSK and defined a passphrase. Within a few minutes, I had entered the passphrase on two of my notebook PCs (one using a Compaq WLAN MultiPort W200 and one using an Intel PRO/Wireless 2200BG network connection) and all was working well, but then I hit some real problems. My wife’s PC (the whole reason for us having a wireless network) and my server were refusing to play with the access point, displaying the following message when I selected the wireless network and entered the network key:

Wireless configuration

The network password needs to be 40 bits or 104 bits depending on your network configuration.

This can be entered as 5 or 13 ASCII characters or 10 or 26 hexadecimal characters.

This seemed strange to me – there was no mention of any such restrictions when I set up the WPA-PSK passphrase (the network key). With one machine running Windows XP SP2 and the other running Windows Server 2003 SP1, WPA support shouldn’t have been a problem (I double-checked the server with the D-Link AirPlus DWL-520+ wireless PCI adapter and, once I’d manually switched the properties to WPA-PSK using TKIP, I was able to enter the network key and connect as normal).

It seems that for some reason, the D-Link card had defaulted to using WEP, and sure enough, once I set it to use WPA-PSK, the network description changed from security-enabled wireless network to security-enabled wireless network (WPA).

So, three machines working, one to go.

I read in Kathryn Tewson and Steve Riley’s security watch: a guide to wireless security article that WPA is “both more secure and easier to configure than WEP, but most network cards made before mid-2003 won’t support it unless the manufacturer has produced a firmware update”. The problem machine was using a Compaq WL110 Wireless PC Card, which I was given around 2002/3 (when we first put in the 802.11b network) so it sounded plausible that I might need a firmware update. A little more googling turned up the does/can the WL110 support WPA? thread on the HP IT Resource Center which gave me the answer. No, there is no firmware upgrade (card support was dropped before the WPA specification was finalised), but if you download the Agere version of the drivers, and tell Windows XP that the WL110 is a 2Wire Wireless PC Card, WPA is available and it works (even inside the WL210 PCI adapter)!

So, that’s all done – a working, (hopefully) secure, wireless network, all for the price of a new access point.

Securing your Windows computer with syskey


At an event a few weeks back, Steve Lamb mentioned using the syskey utility to secure a Windows system. Even though it’s a standard Windows utility, I’d never heard of it before and Steve has now written about syskey on his blog, along with a follow up post on storing the keys on a USB token (think of it as a kind of ignition key for a Windows computer).