There is much talk in the IT press about how we can no longer rely on single-factor identification (e.g. a user name and password) and about how biometric security could be at least part of the answer. For an alternative take on just how dangerous an over-reliance on biometric security may be, Alistair Dabbs' recent IT Week article, will biometric security harm users?, provides an interesting, if slightly alarmist, view of how this could all end up as an identity fraud victim's worst nightmare.
Phishing and the wider issue of identity theft
Phishing worries me. In fact, identity theft in general is one of my major concerns (and is the reason I refuse to do any more business with Halifax Bank of Scotland, one of the UK’s largest banks, who will not respond to letters or e-mails requesting that they remove my online access even though I have closed all of my accounts with them).
According to IT Week:
“The anti-phishing working group (APWG), which comprises security vendors, ISPs and financial institutions, has been serving as a clearing-house for information on attacks and trends for more than a year [and has] reported a 24% increase in phishing each month from August to December [2004]”.
Now a group of leading IT companies, including Microsoft and eBay (two companies which have themselves been affected by high-profile phishing attacks), along with electronic payment specialist Visa and security solution provider WholeSecurity have joined forces to create an early warning network for new attacks called the Phish Report Network.
Another Internet security and payment specialist, Verisign, has warned, in its fifth Internet security intelligence briefing, that phishing attacks are the biggest threat to online business, with just over 40% of phishing sites hosted in the US but further sites identified in a total of 37 countries. According to IT Week, Verisign added that effective action against phishing would require international co-operation between Internet service providers (ISPs) and law enforcement agencies.
The problem of identity theft is broader than phishing. Since my mother's credit card details were used fraudulently a couple of years back (spotted, to their credit, by the same bank that I criticised at the head of this post), all of my family have been very careful about how we dispose of sensitive information; but that doesn't stop my card from being copied in a restaurant (in the UK, cards are rarely swiped at the table using a mobile card payment terminal, as they would be in many countries – instead they are taken away and returned a few minutes later with a slip for a signature, although this is changing with the introduction of chip and PIN technology). In his recent IT Week article, hook, line and stinkers, David Neal notes that:
“Identity theft, enabled by a lackadaisical approach to filing and a loose relationship with paper-shredding machines, is big business these days. In fact incidents of stolen identities have rocketed from shoulder-shrugging insignificance in 1999 to a 10 on the ‘Holy Moly’ scale this year”.
UK consumer watchdog Which? recently reported that a quarter of UK adults have either had their identity stolen or knew someone who has been a victim of ID fraud.
One of the most common forms of identity theft is credit card fraud, which cost UK banks £160 million last year – and someone has to pay for this (you guessed it – ultimately it is us, the consumers). The UK is also ranked second for the number of fraudulent transactions (whilst online trade grew by 88% in Q4 2004, compared with the same quarter in 2003).
The Association for Payment Clearing Services (APACS) has launched Card Watch, a website providing advice to consumers, retailers, police and media about card fraud. Meanwhile, credit card issuer Capital One has started offering fraud protection services (somewhat embarrassingly, and unfortunately for him, the star of Capital One's TV adverts, impersonator Alistair McGowan, had his own rubbish searched by a tabloid journalist, who obtained a significant number of items which could be used to steal his identity).
Whilst secure and accountable systems are a must, some gullible users will always fall foul of the type of fraud which most of us delete from our inbox without reading. The IT industry is taking action, with anti-phishing capabilities promised for a new Netscape browser and Microsoft promising anti-phishing tools in Internet Explorer 7. Meanwhile, legislation is also being considered, with the US Senate debating its proposed Anti-Phishing Act and the UK considering its own legislation, with early draft regulations a possibility as early as the end of this year.
The financial services companies with which I transact online (First Direct and Egg) will not correspond with me by e-mail about anything which requires personal information (i.e. only marketing information comes by e-mail) – instead they have a private messaging system embedded within their secure websites. It's a pain in the backside, as I like to keep copies of my correspondence within my e-mail client long after my relationship with a company (and, hopefully, its secure website – take note HBoS) ends. Now other companies such as eBay are following the same path but, as Ken Young pointed out recently in IT Week:
“The power of email, after all, is that it arrives in your lap. How many of us would trundle down to the Post Office on the off-chance [that there may be some mail waiting there for us]? And therein lies the big problem with private e-mail services – it is a far more restricted form of the real thing. It’s safer, but much less useful.”
Young also notes that such systems represent a challenge to fraudsters, who are likely to send out e-mails enticing users to fake inbox sites (with the intention of harvesting personal information), or to use keystroke-logging software to gain access to users' inboxes.
Whatever happens, it's clear that this issue will not disappear overnight. What is needed is consumer education, legal protection and increased use of multi-factor identification – for example, extending chip and PIN to the home PC.
Links
Gone phishing (IT Week)
Card Watch
Phish Report Network
New security guidance for consumers and business
Thomas Lee recently blogged about the UK government's security awareness website, which is intended to "provide both home users and small businesses with proven, plain English advice to help protect computers, mobile phones and other devices from malicious attack".
The government hopes the service will help boost confidence in e-commerce and, at the same time, protect national security. The trouble is that I have only heard about it on Thomas' blog and in a recent IT Week article by David Neal, home users will bodge DIY security. As Neal points out, there has been no high-profile coverage and consumers are unlikely to be aware of the new initiative. He goes on to say that even "plain English… will go over the heads of most users" and that "giving someone advice on tinkering with their firewall, updating their virus definitions, rebooting in safe mode and checking their proxy settings is as dangerous as arming everyone in the country with a shotgun, just because there has been a spate of burglaries" – an interesting view, no doubt intended to be provocative, but nevertheless an opportunity for the many small IT businesses consulting to the SOHO and low-end SME marketplace.
Meanwhile, for larger businesses, the Information Security Forum (ISF) has issued updated guidelines in the form of the standard of good practice for information security v4.1, incorporating updated sections in areas that have been the subject of additional research and investigation, including:
- Information risk management in corporate governance.
- Virus protection in practice.
- Securing instant messaging.
- Managing privacy.
- Information risk analysis methodologies.
- Patch management.
- Managing the information risks from outsourcing.
- Web server security.
- Disappearance of the network boundary.
- Feedback from the results of the ISF's information security status survey.
Securing the network using Microsoft ISA Server 2004
Several months ago, I attended a Microsoft TechNet UK event covering ISA Server 2004 network design and troubleshooting, and a look inside application-layer firewalling and filtering. It's taken me a while to get around to writing up the notes but, finally, here they are, with some additional information that I found in some other similar presentations added for good measure.
The need for network security
The Internet is built on Internet Protocol (IP) version 4, which was not designed with security in mind. In the early days of the Internet, security clearance was required for access – i.e. physical access was restricted – so there was no requirement for protocol security. At that time (at the height of the cold war), resilience against nuclear attack was more important than protecting traffic, and everyone on the network was trusted. The same networking technologies used to create the Internet (the TCP/IP protocol suite) are now used for internal networks and, for TCP/IP, security was an afterthought.
Security should never be seen as a separate element of a solution – it should be all-pervasive. At the heart of the process should be a strategy of defence in depth – not just securing the perimeter or deploying some access controls internally, but placing security throughout the network so there are several layers to thwart malware or a hacker. Ideally, an administrator's security strategy toolbox should include:
- Perimeter defences (packet filtering, stateful packet inspection, intrusion detection).
- Network defences (VLAN access control lists, internal firewall, auditing, intrusion detection).
- Host defences (server hardening, host intrusion detection, IPSec filtering, auditing, active directory).
- Application defences (anti-virus, content scanning, URLScan, secure IIS, secure Exchange).
- Data and resource defences (ACLs, EFS, anti-virus, active directory).
Each layer of defence should be designed on the assumption that all prior layers have failed.
With users becoming ever more mobile, defining the edge of the network is becoming ever more difficult. Firewalls are no panacea, but properly configured firewalls and border routers are the cornerstone of perimeter security. The Internet and mobility have increased security risks, with virtual private networks (VPNs) softening the perimeter and wireless networks further eroding the traditional concept of the network perimeter.
A firewall alone is not enough
Some administrators take the view that "we've got a firewall, so everything is fine", but standard (layer 3/4) firewalls check only basic packet information and treat the data segment of the packet as a black box. This is analogous to looking at the number and destination displayed on the front of a bus, but not being concerned with the passengers on board. Performance is often cited as the reason for not implementing application layer (layer 7) firewalls, which inspect the data segment (e.g. for mail attachment checking, HTTP syntax, DNS syntax, correct SSL termination, URL blocking and redirection, RPC awareness, LDAP, SQL, etc.); however, Microsoft claim to have tested Internet Security and Acceleration (ISA) Server 2004 at up to 1.9Gbps throughput on a single server with application filters in place (at a time when most corporates are dealing with 2-10Mbps).
Consider the standard security pitch, which has two elements:
- The sky is falling (i.e. we’re all doomed).
- Our product will fix it (i.e. buy our product).
In truth, no system is 100% effective and the firewall needs to be supplemented with countermeasures at various depths (intrusion detection systems, etc.). If there were a 100% secure system, it would be incredibly expensive – and, in any case, threats and vulnerabilities are constantly evolving, which leaves systems exposed until a new attack is known and a new signature created and distributed. Heuristic systems must be supplemented with behavioural systems, and some intelligence.
Just because 100% security is not achievable, it doesn’t mean that it is any less worthwhile as a goal. We still lock our car doors and install immobilisers, even though a good car thief can defeat them eventually. The point is that we stop the casual attacker, buying time. Taking another analogy, bank safes are sold on how long it will take a safe cracker to break them.
Whatever solution is implemented, a firewall cannot protect against:
- Malicious traffic passed on open ports and not inspected at the application layer by the firewall.
- Any traffic that passes through an encrypted tunnel or session.
- Attacks on a network that has already been penetrated from within.
- Traffic that appears legitimate.
- Users and administrators who intentionally or accidentally install viruses or other malware.
- Administrators who use weak passwords.
HTTP is the universal firewall bypass and avoidance protocol
In the late 1990s, as business use of the Internet exploded, we came to rely ever more on HTTP, which has earned itself a nickname – UFBAP – the universal firewall bypass and avoidance protocol.
Firewall administrators are obsessed with port blocking and so all non-essential firewall ports are closed; but we generally assume that HTTP is good and so TCP port 80 (the default port for HTTP) is left open. Because it’s so difficult to get an administrator to open a port, developers avoid such restrictions by writing applications that tunnel over port 80. We even have a name for it (web services) and some of our corporate applications make use of it (e.g. RPC over HTTP for Outlook connecting to Exchange Server 2003).
This tunnelling approach is risky. When someone encapsulates one form of data inside another packet, we tend to allow it without worrying about what the real purpose of the traffic is. There are even websites which exploit this (e.g. HTTP-Tunnel), allowing blocked traffic, such as terminal server traffic using the remote desktop protocol (RDP), to be sent to the required server via TCP port 80, for a few dollars a month.
In short, organisations tend to be more concerned with blocking undesirable sites (by destination) than with checking that the content is valid (by deep inspection).
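To make the idea of deep inspection a little more concrete, here is a minimal sketch (my own illustration in Python, not anything from ISA Server) which checks whether the first bytes arriving on TCP port 80 actually look like a valid HTTP request using an allowed method – anything else (a tunnelled RDP session, for example) would be dropped. The method list and regular expression are simplifications.

```python
import re

# A minimal sketch of "deep inspection" at its crudest: does a byte stream
# arriving on TCP port 80 actually start with a well-formed HTTP request line?
ALLOWED_METHODS = {"GET", "HEAD", "POST"}
REQUEST_LINE = re.compile(rb"^([A-Z]+) (\S+) HTTP/1\.[01]\r\n")

def looks_like_http(first_bytes: bytes) -> bool:
    """Return True if the flow opens with a plausible HTTP request line."""
    match = REQUEST_LINE.match(first_bytes)
    if not match:
        return False                                       # not HTTP at all (e.g. tunnelled RDP)
    return match.group(1).decode() in ALLOWED_METHODS      # reject unexpected verbs

if __name__ == "__main__":
    print(looks_like_http(b"GET /index.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n"))  # True
    print(looks_like_http(b"\x03\x00\x00\x13\x0e\xe0\x00\x00\x00\x00\x00"))               # False (RDP-style bytes)
```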
Using web services such as RPC over HTTP to access Exchange Server 2003 is not necessarily bad – 90% of VPN users just want to get to their e-mail, so offering an HTTP-based solution can eliminate many of the VPNs that are vulnerable network entry points. What is required is to examine the data inside the HTTP tunnel and only allow it to be used in certain scenarios. Taking the Exchange Server 2003 example further, without RPC over HTTP, the following ports may need to be opened for access:
- TCP 25: SMTP.
- TCP/UDP 53: DNS.
- TCP 80: HTTP.
- TCP/UDP 88: Kerberos.
- TCP 110: POP3.
- TCP 135: RPC endpoint mapper.
- TCP 143: IMAP4.
- TCP/UDP 389: LDAP (to directory service).
- TCP 691: Link state algorithm routing protocol.
- TCP 1024+: RPC service ports (unless DC and Exchange restricted).
- TCP 3268: LDAP (to global catalog).
Using RPC over HTTP, this is reduced to a single port – TCP 80.
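For anyone wanting to see which of those ports is actually reachable from a given network segment, a quick connectivity check along the following lines can help (a sketch only – the host name is a placeholder and a TCP connect test says nothing about what is listening behind the port).

```python
import socket

# Ports that may need to be opened for full MAPI/RPC access to Exchange Server 2003
# (from the list above); a quick TCP connect test shows which are reachable.
EXCHANGE_PORTS = {
    25: "SMTP", 53: "DNS", 80: "HTTP", 88: "Kerberos", 110: "POP3",
    135: "RPC endpoint mapper", 143: "IMAP4", 389: "LDAP",
    691: "Link state routing", 3268: "LDAP (global catalog)",
}

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "mail.example.com"  # hypothetical server name
    for port, service in sorted(EXCHANGE_PORTS.items()):
        state = "open" if check_port(host, port) else "closed/filtered"
        print(f"{port:>5} {service:<25} {state}")
```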
Application layer filtering
Inspection at the application layer still has some limitations; the real issue is understanding the purpose of the traffic to be filtered and blocking anything inconsistent with that purpose.
Microsoft ISA Server 2004 is typically deployed in one of three scenarios:
- Inbound access control and VPN server.
- Outbound access control and filtration (together with URL-based real time lists from third parties).
- Distributed caching (proxy server), leading to reduced bandwidth usage.
As part of its access control capabilities, ISA Server has a number of application filters included:
- HTTP (syntax analysis and signature blocking).
- OWA (forms based authentication).
- SMTP (command and message filtering).
- RPC (interface blocking).
- FTP (read only support).
- DNS (intrusion detection).
- POP3 (intrusion detection).
- H.323 (allows H.323 traffic).
- MMS (enables Microsoft media streaming).
All of these filters validate protocols for RFC compliance and enable network address translation (NAT) traversal. In addition, ISA Server can work with third party filters to avoid the need for a proliferation of dedicated appliance servers (and even for appliance consolidation). Examples of third-party filter add-ons include:
- Instant messaging (Akonix).
- SOCKS5 (CornerPost Software).
- SOAP/raw XML (Forum Systems).
- Antivirus (McAfee, GFI, Panda).
- URL Filtering (SurfControl, Futuresoft, FilterLogix, Cerberian, Wavecrest).
- Intrusion detection (ISS, GFI).
But appliance firewalls are more secure – aren’t they?
Contrary to popular belief, appliance firewalls are not necessarily more secure – just more convenient. For those who prefer to use appliances, ISA Server is available in an appliance server format, and such an appliance may well be cheaper than an equivalent server plus Windows Server 2003 and ISA Server 2004 licences.
Looking at the security of the solution itself, ISA Server has been evaluated against the Common Criteria at EAL4+ (for 9 streams). Microsoft claim that ISA Server 2004, totally rewritten since ISA Server 2000, uses a code base which is 400% more efficient. It may run on a Windows platform, but Windows Server 2003 can (and should) also be hardened, and a well-configured ISA Server can be extremely secure.
Some firewall challenges: remote procedure calls (RPCs)
RPCs present their own challenge to a standard (layer 3/4) firewall in terms of the sheer number of potentially available ports:
- On service startup, the RPC server grabs random high port numbers and maintains a table, mapping UUIDs to port numbers.
- Clients know the UUID of the required service and connect to the server’s port mapper using TCP port 135, requesting the number of the port associated with the UUID.
- The server looks up the port number of the given UUID.
- The server responds with the port number, closing the TCP connection on port 135.
- From this point on the client accesses the application using the allocated port number.
Due to the number of potential ports, this is not feasible to secure with a traditional firewall (it would require 64512 high ports, plus port 135, to be open); however, a layer 7 firewall can use an RPC filter to understand the protocol and use its features to improve security, so that the firewall only allows access to specific UUIDs (e.g. domain controller replication, or Exchange/Outlook RPC communications), denying all other RPC requests. Instead of tunnelling within HTTP (which would be prevented by an HTTP syntax check), native RPC access can be provided across the firewall.
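To illustrate the UUID-based filtering idea, here is a toy model (not ISA Server code – the UUIDs and port numbers are examples only) of an endpoint-mapper table and a filter that only answers lookups for explicitly permitted interfaces, such as Exchange/Outlook RPC or domain controller replication.

```python
# Toy model of RPC endpoint-mapper filtering: the server maps interface UUIDs to
# dynamically assigned high ports; a layer 7 filter only answers port-135 lookups
# for interfaces the administrator has explicitly allowed.
# (Illustrative only - the UUIDs and port numbers are examples, not a real configuration.)

ENDPOINT_MAP = {
    "a4f1db00-ca47-1067-b31f-00dd010662da": 1433,  # e.g. Exchange information store
    "e3514235-4b06-11d1-ab04-00c04fc2dcd2": 1588,  # e.g. AD replication interface
    "12345778-1234-abcd-ef00-0123456789ab": 1699,  # some other RPC service
}

ALLOWED_UUIDS = {
    "a4f1db00-ca47-1067-b31f-00dd010662da",   # Outlook/Exchange RPC
    "e3514235-4b06-11d1-ab04-00c04fc2dcd2",   # domain controller replication
}

def endpoint_lookup(uuid: str) -> int | None:
    """Answer a port-135 lookup only if the requested interface is permitted."""
    if uuid not in ALLOWED_UUIDS:
        return None                      # filter denies all other RPC requests
    return ENDPOINT_MAP.get(uuid)        # client then connects to this high port

if __name__ == "__main__":
    print(endpoint_lookup("a4f1db00-ca47-1067-b31f-00dd010662da"))  # allowed -> port number
    print(endpoint_lookup("12345778-1234-abcd-ef00-0123456789ab"))  # denied  -> None
```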
Some firewall challenges: secure sockets layer (SSL)
Hackers will always attack the low-hanging fruit (i.e. easy targets), so for now SSL attacks are generally too complex to be worthwhile; but as our systems become more secure (i.e. we remove the low-hanging fruit), SSL attacks will become more likely.
HTTPS (which uses SSL) prompts a user for authentication and any user on the Internet can access the authentication prompt. SSL tunnels through traditional firewalls because it is encrypted, in turn, allowing viruses and worms to pass through undetected and infect internal servers.
Using ISA Server 2004 with an HTTP filter, authentication can be delegated. In this way, ISA Server pre-authenticates users, eliminating multiple authentication dialogs and only allowing valid traffic through. This means that the SSL connection is from the client to the firewall only, and that ISA Server can decrypt and inspect SSL traffic. Onward traffic to the internal server can be re-encrypted using SSL, or sent as clear HTTP. In this way, URLScan for ISA Server can stop web attacks at the network edge, even over an encrypted inbound SSL connection.
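As a rough sketch of the SSL bridging idea (my own illustration, certainly not how ISA Server is implemented – the certificate paths, addresses and blocked signatures are placeholders): terminate the client's SSL connection at the edge, inspect the decrypted request, and only then pass acceptable traffic on – here in clear HTTP – to the internal web server.

```python
import socket
import ssl

# Sketch of SSL bridging: terminate SSL at the edge, inspect the decrypted
# request, then forward acceptable traffic to the internal server in clear HTTP.
# (Certificate paths, addresses and the blocked signatures are placeholders;
# a single recv() per connection keeps the sketch short.)

INTERNAL_SERVER = ("10.0.0.10", 80)        # hypothetical internal web server
BLOCKED_SIGNATURES = (b"cmd.exe", b"../")  # crude examples of unwanted content

def inspect(request: bytes) -> bool:
    """Return True if the decrypted request passes the (very basic) checks."""
    return not any(sig in request for sig in BLOCKED_SIGNATURES)

def serve(listen_port: int = 443) -> None:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")   # placeholder files

    with socket.create_server(("", listen_port)) as listener:
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            while True:
                client, _ = tls_listener.accept()            # SSL ends here, at the proxy
                with client:
                    request = client.recv(65536)              # decrypted request
                    if not inspect(request):
                        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
                        continue
                    with socket.create_connection(INTERNAL_SERVER) as backend:
                        backend.sendall(request)              # onward in clear HTTP
                        client.sendall(backend.recv(65536))

if __name__ == "__main__":
    serve()
```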
Pre-authentication means that without a valid layer 7 password, there is no access to any internal systems (potential attackers drop from the entire Internet to just the number of people with credentials for the network). ISA Server 2000 can also perform this using RSA SecurID for HTTP (although not for RPC over HTTP with SecurID), and cookie pre-authentication for Outlook Web Access 2003 is also available.
Some firewall challenges: protecting HTTP(S)
Protecting HTTP (and HTTPS) requires an understanding of the protocol – how it works, what its rules are and what to expect. Inbound HTTPS termination is easy (as the certificate is controlled by the organisation whose network is being protected). For outbound HTTPS and HTTP, administrators need to learn how to filter ports 80 and 443. It may be worth considering whether global access is really required, or whether there is a set of specific sites that the business actually needs.
ISA Server allows web publishing of HTTP (as well as other protocols such as SMTP). Web publishing protects servers through two main defences:
- Worms rarely attack by FQDN, tending to favour an IP address or network range. Publishing by FQDN prevents any traffic from getting in unless it asks for the exact URL and not just http://81.171.168.73:80 (a trivial sketch of this check follows the list).
- Using HTTP filter verbs (signature strings and method blocking) to eliminate whole classes of attack at the protocol level.
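A trivial sketch of the first point (the published FQDN is a placeholder): reject any request whose Host header is not the published name, so a worm scanning by raw IP address never reaches the server.

```python
# Publish-by-FQDN sketch: only requests that ask for the published host name
# get through; requests addressed by raw IP (typical of scanning worms) do not.

PUBLISHED_FQDN = "www.example.co.uk"   # hypothetical published name

def host_header_allowed(headers: dict[str, str]) -> bool:
    host = headers.get("Host", "").split(":")[0].lower()
    return host == PUBLISHED_FQDN

if __name__ == "__main__":
    print(host_header_allowed({"Host": "www.example.co.uk"}))   # True
    print(host_header_allowed({"Host": "81.171.168.73:80"}))    # False - worm-style request
```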
Some examples of protecting a web server using web publishing and HTTP filtration are:
- Limit header length, query and URL length.
- Verify normalisation – http://81.171.168.73/../../etc is not allowed.
- Allow only specified methods (GET, HEAD, POST, etc.).
- Block specified extensions (.EXE, .BAT, .CMD, .COM, .HTW, .IDA, .IDQ, .HTR, .IDC, .SHTM, .SHTML, .STM, .PRINTER, .INI, .LOG, .POL, .DAT, etc.)
- Block content containing URL requests with certain signatures (.., ./, \, :, % and &)
- Change/remove headers to provide disinformation – putting ISA Server in front of an Apache server is a great way to prevent UNIX attacks by making hackers think they are attacking a Windows server.
- Block applications based on the header.
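To give a rough idea of what a few of these rules look like in code, the sketch below (a simplification of my own – the limits and lists are illustrative, not ISA Server defaults) checks URL length, normalisation, method, extension and URL signatures; blocking applications based on their headers, the last point above, uses the signatures listed next.

```python
from urllib.parse import urlparse
import posixpath

# Simplified sketch of HTTP request filtering rules of the kind listed above:
# length limits, normalisation, allowed methods, blocked extensions and blocked
# URL signatures. (Limits and lists are illustrative, not ISA Server defaults.)

MAX_URL_LENGTH = 260
ALLOWED_METHODS = {"GET", "HEAD", "POST"}
BLOCKED_EXTENSIONS = {".exe", ".bat", ".cmd", ".com", ".ida", ".idq", ".htr",
                      ".printer", ".ini", ".log", ".dat"}
BLOCKED_SIGNATURES = ("..", "./", "\\", ":", "%", "&")

def request_allowed(method: str, url: str) -> bool:
    if len(url) > MAX_URL_LENGTH:
        return False                                           # limit URL length
    if method.upper() not in ALLOWED_METHODS:
        return False                                           # allow only specified methods
    path = urlparse(url).path
    if posixpath.normpath(path) != path:
        return False                                           # verify normalisation (no /../ tricks)
    if any(path.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return False                                           # block specified extensions
    if any(sig in path or sig in urlparse(url).query for sig in BLOCKED_SIGNATURES):
        return False                                           # block suspicious URL signatures
    return True

if __name__ == "__main__":
    print(request_allowed("GET", "/products/list.htm"))        # True
    print(request_allowed("GET", "/../../etc/passwd"))         # False - fails normalisation
    print(request_allowed("POST", "/scripts/root.exe"))        # False - blocked extension
```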
Some headers to look for include:
- Request headers:
- MSN Messenger: HTTP header=User-Agent; Signature=MSN Messenger
- Windows Messenger: HTTP header=User-Agent; Signature=MSMSGS
- AOL Messenger (and Gecko browsers): HTTP header=User-Agent; Signature=Gecko/
- Yahoo Messenger: HTTP header=Host; Signature=msg.yahoo.com
- Kazaa: HTTP header=P2P-Agent; Signature=Kazaa, Kazaaclient
- Kazaa: HTTP header=User-Agent; Signature=Kazaa Client
- Kazaa: HTTP header=X-Kazaa-Network; Signature=KaZaA
- Gnutella: HTTP header=User-Agent; Signature=Gnutella
- Gnutella: HTTP header=User-Agent; Signature=Gnucleus
- Edonkey: HTTP header=User-Agent; Signature=e2dk
- Response header:
- Morpheus: HTTP header=Server; Signature=Morpheus
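And a sketch (again my own illustration, not the ISA Server filter itself) of how those request-header signatures might be matched in order to block the applications concerned:

```python
# Sketch of application blocking by HTTP header signature, using the
# request-header signatures listed above (instant messaging and P2P clients).

REQUEST_SIGNATURES = [
    ("User-Agent", "MSN Messenger"),
    ("User-Agent", "MSMSGS"),
    ("User-Agent", "Gecko/"),
    ("Host", "msg.yahoo.com"),
    ("P2P-Agent", "Kazaa"),
    ("User-Agent", "Kazaa Client"),
    ("X-Kazaa-Network", "KaZaA"),
    ("User-Agent", "Gnutella"),
    ("User-Agent", "Gnucleus"),
    ("User-Agent", "e2dk"),
]

def blocked_application(headers: dict[str, str]) -> str | None:
    """Return the matching signature if the request comes from a blocked application."""
    for header, signature in REQUEST_SIGNATURES:
        if signature.lower() in headers.get(header, "").lower():
            return f"{header}: {signature}"
    return None

if __name__ == "__main__":
    im_request = {"Host": "gateway.messenger.hotmail.com",
                  "User-Agent": "MSN Messenger 6.2"}
    print(blocked_application(im_request))                                            # "User-Agent: MSN Messenger"
    print(blocked_application({"User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0)"}))  # None
```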
Some firewall challenges: protecting DNS
Whilst some DNS protection is available by filtering TCP/UDP port 53, ISA Server filters can examine traffic for DNS host name overflows, length overflows, zone transfers from privileged ports (1-1023) and zone transfers from high ports (1024 and above).
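As an idea of what a host name or length overflow check involves, the sketch below applies the RFC 1035 limits (63 octets per label, 255 octets per name) to a queried name in its text form – a simplification, since a real DNS filter parses the full wire format.

```python
# Sketch of the DNS length checks mentioned above: RFC 1035 limits each label
# to 63 octets and the whole name to 255 octets; oversized names are a classic
# sign of a host name / length overflow attempt.

MAX_LABEL = 63
MAX_NAME = 255

def dns_name_ok(name: str) -> bool:
    encoded = name.encode("ascii", errors="replace")
    if len(encoded) > MAX_NAME:
        return False                                   # length overflow
    return all(len(label) <= MAX_LABEL for label in encoded.split(b"."))

if __name__ == "__main__":
    print(dns_name_ok("www.example.com"))              # True
    print(dns_name_ok("a" * 300 + ".example.com"))     # False - label and name too long
```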
Some firewall challenges: protecting SMTP
When it comes to mail protection, anti-spam and anti-virus vendors cover SMTP relays, but ISA Server filters can examine protocol usage, i.e.:
- Checking that TCP port 25 traffic really is SMTP.
- Checking for a buffer overflow to the RCPT: command.
- Blocking someone using the VRFY command.
- Stripping an attachment or blocking a user.
Using such a solution adds to the defence in depth strategy, using the firewall to add another layer of defence to the mail system.
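A rough sketch of the protocol checks listed above (illustrative only – the command lists and the 512-byte line limit are simplifications based on RFC 2821): verify that the conversation really is SMTP, refuse VRFY, and reject suspiciously long command lines.

```python
# Sketch of SMTP protocol filtering: check that port 25 traffic really is SMTP,
# block the VRFY command, and refuse command lines long enough to suggest a
# buffer overflow attempt. (The 512-byte line limit comes from RFC 2821.)

SMTP_COMMANDS = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT"}
BLOCKED_COMMANDS = {"VRFY", "EXPN"}
MAX_LINE = 512

def smtp_line_allowed(line: bytes) -> bool:
    if len(line) > MAX_LINE:
        return False                              # possible buffer overflow attempt
    command = line.split(b" ", 1)[0].upper().decode(errors="replace")
    if command in BLOCKED_COMMANDS:
        return False                              # e.g. VRFY used to harvest addresses
    return command in SMTP_COMMANDS               # anything else is not SMTP

if __name__ == "__main__":
    print(smtp_line_allowed(b"MAIL FROM:<alice@example.com>"))       # True
    print(smtp_line_allowed(b"VRFY postmaster"))                     # False - blocked command
    print(smtp_line_allowed(b"RCPT TO:<" + b"A" * 2000 + b">"))      # False - oversized line
```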
Some firewall challenges: encapsulated traffic
Encapsulated traffic can cause some concerns for a network administrator, as IPSec (AH and ESP), PPTP, etc. cannot be scanned at the ISA Server if they are published or otherwise allowed through. Tunnelled traffic will be logged, but not scanned, as ISA Server cannot look inside the tunnel unless it is terminating the VPN. The administrator is faced with a choice – open more ports and use application filters, or tunnel traffic without inspection. NAT also has some implications.
ISA Server can, however, perform intra-tunnel VPN inspection, so VPN traffic can be inspected at the application layer. VPN client traffic is treated as a dedicated network so destinations can be controlled, along with the use of application filter rules.
VPN clients must be hardened. If not, hackers can attack the clients and ride the VPN into the corporate network. Client-based intrusion detection systems and firewalls can help, but the ideal solution is VPN quarantine (e.g. Windows Server 2003 network access quarantine control), as the most common route into the network for malware is a mobile device, either VPNing into the network or returning to it after being infected whilst away (perhaps connected to other networks, including the Internet).
Alternatives to a VPN that should be considered are:
- E-mail: RPC over HTTP, or Outlook Web Access (OWA). POP3 and IMAP4 should be avoided as they are not fully featured.
- Web-enabled extranet applications: SSL.
- Other applications: RPC filtration with ISA Server.
Don’t forget the internal network
Internal network defences are another factor to be considered. Networks are generally one large TCP/IP space, segmented from the Internet by firewalls. Trust is implicit throughout the organisation, but this cannot be relied upon and network segmentation is critical (cf. a bank, where entering a branch does not gain access to the vault). Internal users are dangerous too.
- The Windows Firewall in Windows XP SP2 (the Internet connection firewall in Windows Server 2003 and earlier versions of Windows XP) is a vital tool in preventing network-based attacks, blocking unsolicited inbound traffic. Ports can be opened for services running on the computer, and enterprise administration is facilitated through group policy. Microsoft recommend combining the Windows Firewall with network access quarantine control; however, it does not have any egress filters (i.e. controls over outbound traffic).
- Virtual LANs (VLANs) can be used to isolate like services from one another. Switch ACLs are used to control traffic flow between VLANs at layer 3, and layer 2 VLANs may be used where no routing is desired. By using internal firewalls, port-level access to internal VLANs can be controlled.
- IPSec is a method of securing internal IP traffic by mutually authenticating end points. It is used to ensure encrypted and authenticated communications at the IP layer, providing security that is independent of applications or application-layer protocols. It protects against spoofing, tampering on the wire and information disclosure. Mutual device authentication can be provided using certificates or Kerberos (or a pre-shared key – but this is only recommended for testing scenarios). Authentication headers (AH) should be used to provide packet integrity; AH does not encrypt, which still allows network intrusion detection. Encapsulating security payload (ESP) provides packet integrity and confidentiality, but its encryption prevents packet inspection. Consequently, careful planning is required to determine which traffic should be secured.
One use of IPSec is to allow domain replication to pass through firewalls, creating an IPSec policy on each domain controller to secure traffic to its replication partners. ESP 3DES should be used for encryption and the firewall should be configured to allow UDP port 500 for internet key exchange (IKE) and IP protocol 50 for ESP.
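Expressed as a simple rule table (a sketch with placeholder addresses, not a real firewall configuration), that requirement boils down to permitting IKE and ESP between the replication partners and denying everything else between them.

```python
# Sketch of the firewall rules needed for IPSec-protected domain replication:
# allow IKE (UDP 500) and ESP (IP protocol 50) between the domain controllers,
# drop anything else between them. (Addresses are placeholders.)

DC_INTERNAL = "10.1.0.10"
DC_PERIMETER = "192.168.10.10"

RULES = [
    # (src,          dst,           protocol, port, action)
    (DC_INTERNAL,    DC_PERIMETER,  "udp",    500,  "allow"),   # IKE key exchange
    (DC_PERIMETER,   DC_INTERNAL,   "udp",    500,  "allow"),
    (DC_INTERNAL,    DC_PERIMETER,  "esp",    None, "allow"),   # encrypted replication traffic
    (DC_PERIMETER,   DC_INTERNAL,   "esp",    None, "allow"),
    ("any",          "any",         "any",    None, "deny"),    # default deny between the hosts
]

def decide(src: str, dst: str, protocol: str, port: int | None) -> str:
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if r_src in (src, "any") and r_dst in (dst, "any") \
                and r_proto in (protocol, "any") and r_port in (port, None):
            return action
    return "deny"

if __name__ == "__main__":
    print(decide(DC_INTERNAL, DC_PERIMETER, "udp", 500))   # allow - IKE
    print(decide(DC_INTERNAL, DC_PERIMETER, "tcp", 445))   # deny  - must travel inside ESP
```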
Potential issues around wireless network security are well publicised. The two most common security configurations each have their own limitations:
- Wired equivalent privacy (WEP) relies on static WEP keys which are not dynamically changed and are therefore vulnerable to attack. There is no standard method for provisioning static WEP keys to clients, and the principle of static keys does not scale well, with a compromised key exposing everyone.
- MAC address filtering is also limited by the potential for an attacker to spoof an allowed MAC address.
Possible solutions include password-based layer 2 authentication (IEEE 802.1x with PEAP/MS CHAP v2) and certificate-based layer 2 authentication (IEEE 802.1x EAP-TLS). Other options include:
- VPN connectivity using L2TP/IPSec (preferred) or PPTP. This does not allow for roaming, but is useful when accessing public wireless hotspots; however, there is no computer authentication or processing of computer group policy settings.
- IPSec, but this has some interoperability issues.
Security type | Security level | Ease of deployment | Ease of integration
---|---|---|---
Static WEP | Low | High | High
IEEE 802.1x (PEAP) | High | Medium | High
IEEE 802.1x (TLS) | High | Low | High
VPN (L2TP/IPSec) | High | Medium | Low
IPSec | High | Low | Low
Summary
In summary, firewalls are placed in different locations for different reasons; those reasons must be understood and traffic filtered accordingly. Core functionality can be extended with protocol filters to cover a specific scenario, but no one device is a silver bullet. Solutions are more important than devices, and firewall configuration is more than a networking decision – it also requires application awareness.
Links
Microsoft ISA Server
ISA Server Community
ISAserver.org
Zebedee (a simple, secure TCP and UDP tunnel program)
Child safety online
In previous posts I’ve mentioned both the Microsoft at work and Microsoft at home microsites. Today I was directed to a new microsite – security at home, specifically the child safety online section.
I'm not sure if the world is really any more dangerous for children than it was when I was a child, but I do know that the media is all-pervasive – we hear a lot more about the unfortunate events that do occur – and that, as a parent, I'll do anything I can to ensure that my son is safe. At three months old, he's a bit young to be using the Internet, but this site looks like a useful resource for anyone who has children aged between 2 and 17 who use a computer with a connection to the Internet.
On a related note, a couple of weeks back I wrote about technology's role in the demise of the English language. Well, for anyone (like me) who's not as "with it" as we once were (omigod, and I'm only 32 – hellllllp!), whilst reading child safety online I stumbled across a parent's primer to computer slang (should that be $14NG?) and the netiquette 101 for new netizens.
!337$p34k 1z m4d
Why IE 7.0 must rely on XP SP2
I’ve seen a lot of press coverage over the last week or so about Microsoft’s plans for Internet Explorer (IE) 7.0. One of the major gripes seems to be that it will require Windows XP service pack 2 (SP2).
So what’s wrong with that?
One of the main reasons that people are moving to other browsers (e.g. Firefox) is that IE is perceived as insecure. SP2 is a major security update for the Windows XP desktop operating system. Why provide a new (more secure) browser product to people who do not use the latest security patches on their operating system?
SP2 has been publicly available since August 2004 (6 months ago). The temporary blocking mechanism to hold back automatic SP2 deployment from Windows Update is scheduled to expire on April 12 2005. There is no point in IT Managers burying their heads in the sand and ignoring SP2 any longer. I will concede that Microsoft should have shipped v4.0 of the Application Compatibility Toolkit alongside SP2 (after all, application compatibility is probably the largest barrier to SP2 deployment) but it amazes me that so few organisations have made the move to SP2 after all this time.
For those who are not even using Windows XP: whilst the extra functionality in IE 7.0 may be useful, Microsoft is a product and technology business and needs to maintain its licensing revenues by getting people to adopt the latest technologies (especially whilst strategic products are being delayed by major security rewrites).
If an older platform is seen as "good enough" then fine; but "good enough" shouldn't just be about functionality – it needs to consider the whole picture, including security. It may be that a risk assessment concludes that remaining on a legacy (possibly unsupported) platform is more favourable than the risk (and cost) of upgrading. That's fine too – as long as that risk is acceptable to the business.
My recommendation? Organisations who are using Windows XP should fully test their applications and carry out a controlled upgrade to SP2 as soon as possible. Those who continue to use older operating systems (especially Windows 9x, ME, and NT) should urgently consider upgrading. Then keep patch levels up-to-date, for example, by using Microsoft Software Update Services (SUS) and the Microsoft Baseline Security Analyzer (MBSA). IT users can’t continue to complain about the security of the Microsoft platform if they won’t deploy the latest (or even recent) patches.
Multiple factor security identification
Another interesting piece in this week’s IT Week was Neil Barrett’s comment article entitled age-old tactics create PC security. Prior to reading this, I had not quite grasped the principles that constitute multiple factor security identification – Barrett’s article gives an excellent explanation.
Five ways to help protect your identity
A few months back I wrote about the Microsoft At Work microsite and its advice for maintaining your computer at work. It may be a little high level – but it is aimed at end users and that in itself is good because us techies are generally not too good at communicating with non-technical people.
Microsoft At Work has a sister microsite – Microsoft At Home. Again, it’s full of practical advice, but is more consumer-focused and one of the articles that caught my eye recently discusses avoiding phishing scams. Phishing is a rapidly increasing form of online crime concerned with identity theft. In a phishing scam, a malicious person attempts to obtain personal information such as credit card numbers, passwords, account information, or other personal information by convincing the end user to give it to them under false pretences. Phishing schemes usually come via spam e-mail or pop-up windows.
Biometric USB flash drive – how cool is that?!
I know that it is just a logical evolution of the humble USB flash drive and the decreasing cost of biometric security (even my local gym uses a fingerprint reader now for members to sign in and out) but last week Thomas Lee showed me the Trek ThumbDrive Swipe, which combines fingerprint swipe sensor technology with flash memory based USB storage. Fingerprint security on a USB stick is cool. Now all I need is for someone to invent something to stop me losing mine all the time…
Passed Microsoft Certified Professional exam 70-299
This morning I passed the Microsoft Certified Professional exam 70-299: Implementing and administering security in a Microsoft Windows Server 2003 network. Not my best score, but it was the first exam I've taken for over three years and not a particularly easy one at that.
Microsoft’s non-disclosure agreement prevents me from saying too much about the exam but I can say it involved cramming like crazy (on top of an already busy week at work) to use a voucher that lets me take the exam for free and expires tomorrow.
I’m going to enjoy that extra hour of sleep as British Summer Time ends tonight and the clocks go back an hour!