An introduction to MPLS

Mansoor Majeed gave a presentation last week about multi-protocol label switching (MPLS) to Conchango’s Infrastructure Architecture community of practice. Mansoor doesn’t have a blog of his own, so I’m taking this opportunity to write a little bit about what I learnt.

I first came across MPLS when I was working for a magazine distribution company in Australia, which had expensive frame relay links (running for thousands of kilometres) and was looking at using VPNs across the Internet. The main reason we didn’t go ahead was that it was not possible to guarantee quality of service (QoS) for such connections. This is exactly the kind of situation where MPLS would provide similar routing flexibility at a lower cost than traditional point-to-point links.

MPLS is a scheme typically used to enhance an IP network, based on Cisco tag switching technology. With tag switching, a switch maintains a map of logical interfaces with a tag for each virtual LAN (VLAN), switching the tag to forward traffic to the appropriate interface. Cisco define tag switching as a:

“High-performance, packet-forwarding technology that integrates network layer (layer 3) routing and data link layer (layer 2) switching and provides scalable, high-speed switching in the network core. Tag switching is based on the concept of label swapping, in which packets or cells are assigned short, fixed-length labels that tell switching nodes how data should be forwarded.”

and MPLS as a:

“Switching method that forwards IP traffic using a label. This label instructs the routers and the switches in the network where to forward the packets based on pre-established IP routing information.”

With MPLS, organisations use their existing network infrastructure to connect to the service provider’s MPLS network, over which services requiring QoS can be provided to connect remote sites.

Switches work at layer 2; routers at layer 3; and, as can be seen from Cisco’s tag-switching definition above, MPLS crosses the boundary between the two layers. MPLS allows traffic to be routed, combined with the ability to compute a path at source and to distribute information about network topology and attributes. The main constraint is that it uses the shortest path first (SPF) algorithm to calculate the path across the network.

MPLS works by label edge routers (LERs) on the incoming edge of the MPLS network adding an MPLS label to the top of each packet. This label is chosen according to some criteria (e.g. destination IP address) and is then used to forward the packet through the subsequent label switching routers (LSRs). The LERs on the outgoing edge strip off the label before final delivery of the original packet.
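
The label-swapping idea is simple enough to sketch in a few lines of Python. This is purely illustrative: the labels, prefixes, interface names and forwarding tables below are invented, and a real LER/LSR works on packet headers in hardware rather than on dictionaries.

    # Illustrative sketch of MPLS label push/swap/pop (all values are invented).

    # Ingress LER: choose a label based on the destination prefix.
    INGRESS_FEC_TABLE = {"10.2.0.0/16": 100}            # destination prefix -> initial label

    # Each LSR: map an incoming label to an outgoing label and interface.
    LSR_TABLES = [
        {100: (200, "if1")},                            # first LSR swaps 100 -> 200
        {200: (300, "if2")},                            # second LSR swaps 200 -> 300
    ]

    def ingress_push(packet, prefix):
        """LER on the incoming edge adds a label to the packet."""
        return {"label": INGRESS_FEC_TABLE[prefix], "payload": packet}

    def lsr_swap(labelled, table):
        """An LSR forwards purely on the label, swapping it as it goes."""
        out_label, out_interface = table[labelled["label"]]
        print(f"LSR: label {labelled['label']} -> {out_label} via {out_interface}")
        return {"label": out_label, "payload": labelled["payload"]}

    def egress_pop(labelled):
        """LER on the outgoing edge strips the label before final delivery."""
        return labelled["payload"]

    packet = {"dst": "10.2.3.4", "data": "hello"}
    labelled = ingress_push(packet, "10.2.0.0/16")
    for table in LSR_TABLES:
        labelled = lsr_swap(labelled, table)
    print("Delivered:", egress_pop(labelled))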

[Diagram: multi-protocol label switching (MPLS)]

So why invest in MPLS? The main reason is the lower cost for higher performance (the figures I have seen suggest that bandwidth can be increased by 250-500% for a comparable cost), but other advantages include scalability, guaranteed bandwidth, QoS and the fact that MPLS will integrate with any transport method (IP, ATM, frame relay, etc.). Another potential advantage is that the MPLS provider may also provide hosting services, allowing a company’s public Internet connection to be hosted at the MPLS provider’s datacentre for a minimal cost (cf. the flexibility of managing services locally).

There are some potential disadvantages though. The first is security (running confidential traffic across a service provider’s network – although this could be encrypted if required); more significantly, at the time of writing there are no partnerships between service providers in different countries, so, for example, QoS would not be available to UK customers once the traffic left the UK service provider’s network. Over time this may be overcome with a system of MPLS points of presence (PoPs).

Another possible growth area for MPLS is the expansion of voice over Ethernet (VoE) technology. This is not the same as voice over IP (VoIP), but provides a similar service, effectively linking the MPLS network to various telcos’ PSTNs. At the moment, a company would typically route all voice traffic via the local telco’s PSTN exchange, with line rental charges per channel connection, per month/quarter. Using VoE, an IP gateway can be used to run voice traffic across the MPLS network up to the point where it needs to transfer to another carrier’s network, resulting in significant savings on the cost of line rental.

That’s just a flavour of what MPLS is about. For further reading, there is a Cisco white paper about MPLS traffic engineering, onestopclick has an MPLS buyers guide and for a view on what to watch out for, there is the Techworld don’t get caught out by MPLS article.

No NAP until Longhorn

Last year I commented that network access protection (NAP) had slipped from a planned feature pack for ISA Server 2004 to Windows Server 2003 Release 2 (R2). Well, it seems that has changed. Confirming what I wrote last March, when I blogged about the need for network segmentation and remediation, Steve Lamb commented at last week’s Microsoft Technical Roadshow that NAP will be a feature of the next version of Windows Server (codenamed Longhorn) and not in the R2 release scheduled for later this year.

Apparently the reasons for this are that NAP will require kernel mode changes (and there will be no kernel mode changes in R2) and the extra time will allow Microsoft and Cisco to ensure that NAP (Microsoft) and NAC (Cisco) play nicely together.

Until then we will have to make do with the network access quarantine controls (originally part of the Windows Server 2003 resource kit and productionised as part of the release of Windows Server 2003 service pack 1). The main difference is that network access quarantine control only quarantines inbound connections via the Windows routing and remote access service, whereas NAP will support quarantine for wired and wireless LAN connections too.

Tracking down the vendor portion of a MAC address

I was trying to track down the source of an IP address conflict earlier today and I came across two sites offering a search service for the initial 24-bit (6 hexadecimal digit) vendor portion of an Ethernet media access control (MAC) address. The IEEE service is the official one, from where you can also download the complete listing, but MAC finder is also useful as you can append the ?string=00%3a00%3a00 query string to the URL (replacing the zeros with the appropriate hexadecimal digits).
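
If you do this sort of lookup often, the OUI extraction is trivial to script. The sketch below assumes you have already downloaded the IEEE listing mentioned above and saved it locally as oui.txt; the file name, and the exact layout of the “(base 16)” lines it parses, are assumptions on my part and may need adjusting.

    import re

    def oui_of(mac: str) -> str:
        """Return the first 24 bits (vendor portion) of a MAC address as 6 hex digits."""
        digits = re.sub(r"[^0-9A-Fa-f]", "", mac)      # strip :, - and . separators
        return digits[:6].upper()

    def load_ieee_listing(path: str) -> dict:
        """Rough parse of the IEEE download: lines like '00000C  (base 16)  CISCO SYSTEMS, INC.'"""
        vendors = {}
        with open(path, encoding="utf-8", errors="ignore") as listing:
            for line in listing:
                if "(base 16)" in line:
                    prefix, _, vendor = line.partition("(base 16)")
                    vendors[prefix.strip().upper()] = vendor.strip()
        return vendors

    vendors = load_ieee_listing("oui.txt")             # assumed local copy of the IEEE file
    oui = oui_of("00:00:0c:12:34:56")
    print(oui, "->", vendors.get(oui, "unknown vendor"))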

Spyware re-enforces the need for network segmentation and remediation

There is no doubt that malicious software (malware) is on the increase. We have learnt how to deal with the ever-increasing number of viruses, worms and Trojan horses, but spyware is now a major problem too.

Earlier this month, it was widely reported how a joint investigation by law enforcement agencies in Israel and the UK foiled an attempt to use keystroke logging software to gain access codes in order to steal £220 million from the Sumitomo Mitsui bank. This is believed to be the first recorded incident of spyware being used for large scale online theft.

For some time now, IT-savvy users have been checking for spyware with products such as Spybot Search and Destroy or Lavasoft Ad-Aware. Then Microsoft bought the Giant Company and soon afterwards released its Windows AntiSpyware beta product. According to IT Week, the final release will be free for registered Windows users, but corporates will need to pay for the enterprise version of the product. Now Symantec has joined the spyware market with Symantec Client Security v3.0 and Symantec AntiVirus Corporate Edition 10, both incorporating spyware detection and removal capabilities, whilst McAfee Anti-Spyware Enterprise aims to block malware before it reaches the corporate network. Other vendors, such as Websense, have added malware detection to their products, but there is still a gaping hole in many organisations’ IT strategies – mobile users returning to the network.

Whilst many corporates will specifically ban consultants and other suppliers from connecting non-managed PCs to their network, some don’t – and in any case that is still only half the issue. What about the user who takes their laptop on the train or to the airport and connects to a wireless hotspot, or even to a less-regulated business partner’s network, then returns to the “safe” corporate LAN with who-knows-what malware on their PC? It may sound paranoid, but when I started to use anti-spyware products a couple of years back I was amazed how much rubbish had infected my work PC – and I am just one user on a large network.

According to IT Week, in a survey of 500 European IT Managers commissioned by Websense, 60% said that their company does not have systems in place to guard against internal threats with 35% unable to deal with spyware (and 62% unable to block phishing attacks).

Protecting the network edge is all very well, but the guiding security principle of defence in depth needs to be applied. Networks need to be segregated, with firewalls (or at the very least separate VLANs) restricting traffic between segments but the real answer to the mobile user issue is remediation.

The principle behind remediation is that, on returning to the corporate network, users will not be granted full access until their device has been scanned for operating system patches, anti-virus and anti-spyware signatures and any application patches required. Only once all of these have been installed will the user be granted full access to the network. Of course, as Dave Bailey commented in his recent IT Week article, will you pass the access test?, there will be occasions when patches fail to apply, or when returning users simply have too many updates to be applied and it impacts on their legitimate business operations (but not half as much as a full-blown network attack could impact on their business).
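
The principle is easy to sketch in code: a connecting client reports its state and is only granted full access once every check passes, otherwise it is parked somewhere it can only reach the update servers. The checks, attribute names and version numbers below are entirely invented – real implementations (NAP, NAC) are far more involved.

    # Minimal sketch of a remediation gate (all names and values are invented).

    REQUIRED = {
        "os_patch_level": "SP1",            # operating system patches
        "av_signature": 20050401,           # anti-virus signature date (YYYYMMDD)
        "antispyware_signature": 20050401,  # anti-spyware signature date
    }

    def health_check(client_state: dict) -> list:
        """Return the list of checks the client fails; an empty list means healthy."""
        failures = []
        if client_state.get("os_patch_level") != REQUIRED["os_patch_level"]:
            failures.append("operating system patches out of date")
        if client_state.get("av_signature", 0) < REQUIRED["av_signature"]:
            failures.append("anti-virus signatures out of date")
        if client_state.get("antispyware_signature", 0) < REQUIRED["antispyware_signature"]:
            failures.append("anti-spyware signatures out of date")
        return failures

    def admit(client_state: dict) -> str:
        failures = health_check(client_state)
        if failures:
            # Quarantine: just enough access to reach the remediation servers.
            return "quarantine (remediate: " + "; ".join(failures) + ")"
        return "full network access"

    print(admit({"os_patch_level": "SP1", "av_signature": 20050315}))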

Both Microsoft and Cisco are preparing their remediation technology offerings. Cisco has its network admission control (NAC) technology, whilst Microsoft’s approach is network access protection (NAP) (when will they learn to read their acronyms phonetically – first WUS and now NAP). Unfortunately, NAP has been dropped from forthcoming ISA Server 2004 service/feature packs and instead will be held over for Longhorn (although Windows Server 2003 does offer network access quarantine control for users connecting via a VPN).

Securing the network using Microsoft ISA Server 2004

Several months ago, I attended a Microsoft TechNet UK event where the topic was ISA Server 2004 network design/troubleshooting and inside application layer firewalling and filtering. It’s taken me a while to get around to writing up the notes, but finally, here they are, with some additional information that I found in some other similar presentations added for good measure.

The need for network security

The Internet is built on Internet Protocol (IP) version 4, which was not designed with security in mind. In the early days of the Internet, security clearance was required for access (i.e. physical access was restricted), so there was no requirement for protocol security. At that time (at the height of the cold war), resistance to nuclear attack was more important than protecting traffic, and everyone on the network was trusted. The same networking technologies used to create the Internet (the TCP/IP protocol suite) are now used for internal networks, and for TCP/IP, security was an afterthought.

Security should never be seen as a separate element of a solution – it should be all-pervasive. At the heart of the process should be a strategy of defence in depth – not just securing the perimeter or deploying some access controls internally, but placing security throughout the network so that there are several layers to thwart malware or a hacker. Ideally, an administrator’s security strategy toolbox should include:

  • Perimeter defences (packet filtering, stateful packet inspection, intrusion detection).
  • Network defences (VLAN access control lists, internal firewall, auditing, intrusion detection).
  • Host defences (server hardening, host intrusion detection, IPSec filtering, auditing, active directory).
  • Application defences (anti-virus, content scanning, URL switching source, secure IIS, secure Exchange).
  • Data and resource defences (ACLs, EFS, anti-virus, active directory).

Each layer of defence should be designed on the assumption that all prior layers have failed.

With users becoming ever more mobile, defining the edge of the network is becoming ever more difficult. Firewalls are no panacea, but properly configured firewalls and border routers are the cornerstone of perimeter security. The Internet and mobility have increased security risks, with virtual private networks (VPNs) softening the perimeter and wireless networks further eroding the traditional concept of the network perimeter.

A firewall alone is not enough

Some administrators take the view that “we’ve got a firewall, so everything is fine”, but standard (layer 3/4) firewalls check only basic packet information and treat the data segment of the packet as a black box. This is analogous to looking at the number and destination displayed on the front of a bus, but not being concerned with the passengers on board. Performance is often cited as the reason for not implementing application layer (layer 7) firewalls, which inspect the data segment (e.g. for mail attachment checking, HTTP syntax, DNS syntax, correct SSL termination, URL blocking and redirection, RPC awareness, LDAP, SQL, etc.). However, Microsoft claim to have tested Internet Security and Acceleration (ISA) Server 2004 at up to 1.9Gbps throughput on a single server with application filters in place (at a time when most corporates are dealing with 2-10Mbps).

Consider the standard security pitch, which has two elements:

  1. The sky is falling (i.e. we’re all doomed).
  2. Our product will fix it (i.e. buy our product).

In truth, no system is 100% effective and the firewall needs to be supplemented with countermeasures at various depths (intrusion detection systems, etc.). A 100% secure system, if it existed, would be incredibly expensive; in addition, threats and vulnerabilities are constantly evolving, which leaves systems vulnerable until a new attack is known and a new signature created and distributed. Heuristic systems must be supplemented with behavioural systems, and some intelligence.

Just because 100% security is not achievable, it doesn’t mean that it is any less worthwhile as a goal. We still lock our car doors and install immobilisers, even though a good car thief can defeat them eventually. The point is that we stop the casual attacker, buying time. Taking another analogy, bank safes are sold on how long it will take a safe cracker to break them.

Whatever solution is implemented, a firewall cannot protect against:

  • Malicious traffic passed on open ports and not inspected at the application layer by the firewall.
  • Any traffic that passes through an encrypted tunnel or session.
  • Attacks on a network that has already been penetrated from within.
  • Traffic that appears legitimate.
  • Users and administrators who intentionally or accidentally install viruses or other malware.
  • Administrators who use weak passwords.

HTTP is the universal firewall bypass and avoidance protocol

In the late 1990s, as business use of the Internet exploded, we came to rely ever more on HTTP, which has earned itself a nickname – UFBAP – the universal firewall bypass and avoidance protocol.

Firewall administrators are obsessed with port blocking and so all non-essential firewall ports are closed; but we generally assume that HTTP is good and so TCP port 80 (the default port for HTTP) is left open. Because it’s so difficult to get an administrator to open a port, developers avoid such restrictions by writing applications that tunnel over port 80. We even have a name for it (web services) and some of our corporate applications make use of it (e.g. RPC over HTTP for Outlook connecting to Exchange Server 2003).

This tunnelling approach is risky. When someone encapsulates one form of data inside another packet, we tend to allow it, without worrying about what the real purpose of the traffic is. There are even websites which exploit this (e.g. HTTP-Tunnel), allowing blocked traffic such as terminal server traffic using the remote desktop protocol (RDP) to be sent to the required server via TCP port 80, for a few dollars a month.

In short, organisations tend to be more concerned with blocking undesirable sites (by destination) than checking that the content is valid (by deep inspection).
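
Deep inspection here essentially means checking that what arrives on port 80 actually looks like HTTP before letting it through. A minimal, illustrative check might look like the sketch below; the allowed methods and the length limit are my own assumptions, not ISA Server’s actual rules.

    # Crude sketch: does data arriving on TCP port 80 look like an HTTP request?
    # The allowed methods and length limit are illustrative assumptions.

    ALLOWED_METHODS = {"GET", "HEAD", "POST"}
    MAX_REQUEST_LINE = 2048

    def looks_like_http(payload: bytes) -> bool:
        try:
            request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
        except UnicodeDecodeError:
            return False                              # non-text traffic tunnelled over port 80
        if len(request_line) > MAX_REQUEST_LINE:
            return False
        parts = request_line.split(" ")
        return (len(parts) == 3
                and parts[0] in ALLOWED_METHODS
                and parts[2].startswith("HTTP/"))

    print(looks_like_http(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # True
    print(looks_like_http(b"\xfe\xffsome tunnelled binary protocol"))                 # False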

Using web services such as RPC over HTTP to access Exchange Server 2003 is not always bad – 90% of VPN users just want to get to their e-mail, so offering an HTTP-based solution can eliminate many of the VPNs that are vulnerable network entry points. What is required is to examine the data inside the HTTP tunnel and only allow it to be used under certain scenarios. Taking the Exchange Server 2003 example further, without using RPC over HTTP, the following ports may need to be opened for access:

  • TCP 25: SMTP.
  • TCP/UDP 53: DNS.
  • TCP 80: HTTP.
  • TCP/UDP 88: Kerberos.
  • TCP 110: POP3.
  • TCP 135: RPC endpoint mapper.
  • TCP 143: IMAP4.
  • TCP/UDP 389: LDAP (to directory service).
  • TCP 691: Link state algorithm routing protocol.
  • TCP 1024+: RPC service ports (unless DC and Exchange restricted).
  • TCP 3268: LDAP (to global catalog).

Using RPC over HTTP, this is reduced to one port – TCP 80.

Application layer filtering

Inspection at the application layer still has some limitations and the real issue is understanding the purpose of the traffic to be filtered and blocking any traffic that is not consistent with that purpose.

Microsoft ISA Server 2004 is typically deployed in one of three scenarios:

  • Inbound access control and VPN server.
  • Outbound access control and filtration (together with URL-based real time lists from third parties).
  • Distributed caching (proxy server), leading to reduced bandwidth usage.

As part of its access control capabilities, ISA Server has a number of application filters included:

  • HTTP (syntax analysis and signature blocking).
  • OWA (forms based authentication).
  • SMTP (command and message filtering).
  • RPC (interface blocking).
  • FTP (read only support).
  • DNS (intrusion detection).
  • POP3 (intrusion detection).
  • H.323 (allows H.323 traffic).
  • MMS (enables Microsoft media streaming).

All of these filters validate protocols for RFC compliance and enable network address translation (NAT) traversal. In addition, ISA Server can work with third party filters to avoid the need for a proliferation of dedicated appliance servers (and even for appliance consolidation). Examples of third-party filter add-ons include:

  • Instant messaging (Akonix).
  • SOCKS5 (CornerPost Software).
  • SOAP/raw XML (Forum Systems).
  • Antivirus (McAfee, GFI, Panda).
  • URL Filtering (SurfControl, Futuresoft, FilterLogix, Cerberian, Wavecrest).
  • Intrusion detection (ISS, GFI).

But appliance firewalls are more secure – aren’t they?

Contrary to popular belief, appliance firewalls are not necessarily more secure – just more convenient. For those who prefer to use appliances, ISA Server is available in an appliance server format, and such an appliance may well be cheaper than an equivalent server plus Windows Server 2003 and ISA Server 2004 licenses.

Looking at the security of the solution itself, ISA Server has been evaluated against the Common Criteria at EAL4+ (for 9 streams). Microsoft claim that ISA Server 2004, totally rewritten since ISA Server 2000, uses a code base which is 400% more efficient. It may run on a Windows platform, but Windows Server 2003 can (and should) also be hardened, and a well-configured ISA Server can be extremely secure.

Some firewall challenges: remote procedure calls (RPCs)

[Diagram: RPC communications]

RPCs present their own challenge to a standard (layer 3/4) firewall in terms of the sheer number of potentially available ports:

  1. On service startup, the RPC server grabs random high port numbers and maintains a table, mapping UUIDs to port numbers.
  2. Clients know the UUID of the required service and connect to the server’s port mapper using TCP port 135, requesting the number of the port associated with the UUID.
  3. The server looks up the port number of the given UUID.
  4. The server responds with the port number, closing the TCP connection on port 135.
  5. From this point on the client accesses the application using the allocated port number.

Due to the number of potential ports, this is not feasible using a traditional firewall (it would require 64512 high ports plus 135 to be open); however, a layer 7 firewall can use an RPC filter that understands the protocol and uses its features to improve security, such that the firewall only allows access to specific UUIDs (e.g. domain controller replication, or Exchange/Outlook RPC communications), denying all other RPC requests. Instead of tunnelling within HTTP (prevented by an HTTP syntax check), native RPC access can be provided across the firewall.
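
In effect, the RPC filter keeps a whitelist of interface UUIDs and refuses the port-mapper lookup for anything else. A toy version of that decision might look like the sketch below; the UUIDs and port numbers are placeholders, not the real interface identifiers.

    import uuid

    # Hypothetical whitelist: only these RPC interface UUIDs may cross the firewall.
    ALLOWED_UUIDS = {
        uuid.UUID("12345678-1234-5678-1234-567812345678"),   # e.g. directory replication
        uuid.UUID("87654321-4321-8765-4321-876543218765"),   # e.g. Exchange/Outlook RPC
    }

    # The table built at service start-up: interface UUID -> dynamically allocated high port.
    ENDPOINT_MAP = {
        uuid.UUID("12345678-1234-5678-1234-567812345678"): 49231,
        uuid.UUID("87654321-4321-8765-4321-876543218765"): 52007,
    }

    def portmapper_lookup(requested: uuid.UUID):
        """Answer a TCP 135 lookup only if the interface UUID is on the whitelist."""
        if requested not in ALLOWED_UUIDS:
            return None                                       # deny all other RPC requests
        return ENDPOINT_MAP.get(requested)

    print(portmapper_lookup(uuid.UUID("12345678-1234-5678-1234-567812345678")))  # 49231
    print(portmapper_lookup(uuid.UUID("00000000-0000-0000-0000-000000000000")))  # None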

Some firewall challenges: secure sockets layer (SSL)

[Diagram: protecting HTTPS with a traditional firewall]

Hackers will always attack the low-hanging fruit (i.e. easy targets) and, as such, SSL attacks are generally too complex to be worthwhile; but as our systems become more secure (i.e. we remove the low-hanging fruit), SSL attacks will become more likely.

HTTPS (which uses SSL) prompts a user for authentication and any user on the Internet can access the authentication prompt. SSL tunnels through traditional firewalls because it is encrypted, in turn, allowing viruses and worms to pass through undetected and infect internal servers.

Using ISA Server 2004 with an HTTP filter, authentication can be delegated. In this way, ISA Server pre-authenticates users, eliminating multiple authentication dialogs and only allowing valid traffic through. This means that the SSL connection is from the client to the firewall only, and that ISA Server can decrypt and inspect SSL traffic. Onward traffic to the internal server can be re-encrypted using SSL, or sent as clear HTTP. In this way, URLScan for ISA Server can stop web attacks at the network edge, even over an encrypted inbound SSL connection.
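
The “terminate, inspect, then forward” idea can be sketched with Python’s ssl module. This is a bare-bones illustration only: the certificate file names and port are assumptions, error handling is omitted, and the onward hop to the internal server (re-encrypted with SSL, or as clear HTTP) is reduced to a comment.

    import socket
    import ssl

    # Certificate and key for the published name – assumed to exist as local PEM files.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 8443))      # illustrative port; a real deployment would use 443
    listener.listen(5)

    while True:
        raw, client = listener.accept()
        with ctx.wrap_socket(raw, server_side=True) as tls:    # the SSL connection ends here
            request = tls.recv(8192)                           # the decrypted HTTP request
            first_line = request.split(b"\r\n", 1)[0]
            if not first_line.startswith((b"GET ", b"POST ")):
                tls.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n") # inspected and rejected
                continue
            # A real SSL bridge would now forward the request to the internal web server
            # (re-encrypted, or as clear HTTP) and relay the response back to the client.
            tls.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")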

[Diagram: protecting HTTPS with web publishing]

Pre-authentication means that without a valid layer 7 password, there is no access to any internal systems (the pool of potential attackers drops from the entire Internet to just the number of people with credentials for the network). ISA Server 2000 can also perform this using RSA SecurID for HTTP (although not for RPC over HTTP with SecurID) and cookie pre-authentication for Outlook Web Access 2003 is also available.

Some firewall challenges: protecting HTTP(S)

Protecting HTTP (and HTTPS) requires an understanding of the protocol – how it works, what its rules are and what to expect. Inbound HTTPS termination is easy (as the certificate is controlled by the organisation whose network is being protected). For outbound HTTPS and HTTP, administrators need to learn how to filter ports 80/443. It may be worth considering whether global access is really required, or whether there is a set of specific sites required for use by the business.

ISA Server allows web publishing of HTTP (as well as other protocols such as SMTP). Web publishing protects servers through two main defences:

  • Worms rarely work by FQDN – tending to favour IP or network range. Publishing by FQDN prevents any traffic from getting in unless it asks for the exact URL and not just http://81.171.168.73:80.
  • Using HTTP filter verbs (signature strings and method blocking) to eliminate whole classes of attack at the protocol level.

Some examples of protecting a web server using web publishing and HTTP filtration (illustrated in the sketch after this list) are:

  • Limit header length, query and URL length.
  • Verify normalisation – http://81.171.168.73/../../etc is not allowed.
  • Allow only specified methods (GET, HEAD, POST, etc.).
  • Block specified extensions (.EXE, .BAT, .CMD, .COM, .HTW, .IDA, .IDQ, .HTR, .IDC, .SHTM, .SHTML, .STM, .PRINTER, .INI, .LOG, .POL, .DAT, etc.)
  • Block content containing URL requests with certain signatures (.., ./, \, :, % and &)
  • Change/remove headers to provide disinformation – putting ISA Server in front of an Apache server is a great way to prevent UNIX attacks by making hackers think they are attacking a Windows server.
  • Block applications based on the header.
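
By way of illustration, several of those rules can be expressed as a simple request checker. The limits, method list and blocked extensions below are lifted from the bullets above rather than from any real ISA Server policy, and the parsing is deliberately naive.

    from urllib.parse import unquote, urlsplit

    # Illustrative values taken from the bullet list above – not a real ISA Server policy.
    MAX_URL_LEN = 2048
    ALLOWED_METHODS = {"GET", "HEAD", "POST"}
    BLOCKED_EXTENSIONS = (".exe", ".bat", ".cmd", ".com", ".ida", ".idq", ".htr", ".printer")
    BLOCKED_SIGNATURES = ("..", "./", "\\")

    def allow_request(method: str, url: str) -> bool:
        """Naive sketch of method blocking, length limits, normalisation and extension checks."""
        if method.upper() not in ALLOWED_METHODS:
            return False                            # only specified methods allowed
        if len(url) > MAX_URL_LEN:
            return False                            # URL length limit
        path = unquote(urlsplit(url).path)          # decode %xx escapes before checking
        if any(signature in path for signature in BLOCKED_SIGNATURES):
            return False                            # blocked signature (e.g. directory traversal)
        if path.lower().endswith(BLOCKED_EXTENSIONS):
            return False                            # blocked extension
        return True

    print(allow_request("GET", "/default.htm"))                 # True
    print(allow_request("GET", "/scripts/..%2f..%2fcmd.exe"))   # False (traversal and .exe)
    print(allow_request("TRACE", "/default.htm"))               # False (method not allowed)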

Some headers to look for include the following (a simple matching sketch follows the list):

  • Request headers:
    • MSN Messenger: HTTP header=User-Agent; Signature=MSN Messenger
    • Windows Messenger: HTTP header=User-Agent; Signature=MSMSGS
    • AOL Messenger (and Gecko browsers): HTTP header=User-Agent; Signature=Gecko/
    • Yahoo Messenger: HTTP header=Host; Signature=msg.yahoo.com
    • Kazaa: HTTP header=P2P-Agent; Signature=Kazaa, Kazaaclient
    • Kazaa: HTTP header=User-Agent; Signature=Kazaa Client
    • Kazaa: HTTP header=X-Kazaa-Network; Signature=KaZaA
    • Gnutella: HTTP header=User-Agent; Signature=Gnutella
    • Gnutella: HTTP header=User-Agent; Signature=Gnucleus
    • Edonkey: HTTP header=User-Agent; Signature=e2dk
  • Response header:
    • Morpheus: HTTP header=Server; Signature=Morpheus
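
Applied in code, that table is just a set of (header, signature) pairs checked against each request. The sketch below uses a handful of the signatures above; reliably identifying real clients is, of course, messier than this.

    from typing import Optional

    # (header name, signature substring) pairs taken from the table above.
    REQUEST_SIGNATURES = [
        ("User-Agent", "MSN Messenger"),
        ("User-Agent", "MSMSGS"),
        ("User-Agent", "Gnutella"),
        ("Host", "msg.yahoo.com"),
        ("X-Kazaa-Network", "KaZaA"),
    ]

    def blocked_application(headers: dict) -> Optional[str]:
        """Return a description of the match if the headers identify a blocked application."""
        for name, signature in REQUEST_SIGNATURES:
            value = headers.get(name, "")
            if signature.lower() in value.lower():
                return f"{name} contains '{signature}'"
        return None

    print(blocked_application({"User-Agent": "MSMSGS 7.0"}))                       # matched
    print(blocked_application({"User-Agent": "Mozilla/4.0", "Host": "a.example"})) # None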

Some firewall challenges: protecting DNS

Whilst some DNS protection is available by filtering TCP/UDP port 53, ISA Server filters can examine traffic for DNS host name overflows, length overflows, zone transfers from privileged ports (1-1023) and zone transfers from high ports (1024 and above).

Some firewall challenges: protecting SMTP

When it comes to mail protection, anti-spam and anti-virus vendors cover SMTP relays, but ISA Server filters can examine protocol usage, i.e.:

  • Checking that TCP port 25 traffic really is SMTP.
  • Checking for a buffer overflow in the RCPT TO: command.
  • Blocking someone using the VRFY command.
  • Stripping an attachment or blocking a user.

Using such a solution adds to the defence in depth strategy, using the firewall to add another layer of defence to the mail system.
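
As a rough illustration of that kind of protocol check, the sketch below classifies individual SMTP command lines; the length limit and the blocked command are taken from the bullets above, and none of the values reflect ISA Server’s actual thresholds.

    MAX_ARGUMENT_LENGTH = 512        # illustrative limit for spotting overflow attempts
    BLOCKED_COMMANDS = {"VRFY"}      # from the bullet list above

    KNOWN_COMMANDS = {"HELO", "EHLO", "MAIL", "RCPT", "DATA", "RSET", "NOOP", "QUIT", "VRFY"}

    def check_smtp_line(line: str) -> str:
        """Classify one SMTP command line as 'allow', 'block' or 'not SMTP'."""
        verb, _, argument = line.strip().partition(" ")
        verb = verb.upper()
        if verb not in KNOWN_COMMANDS:
            return "not SMTP"                        # TCP port 25 traffic that isn't really SMTP
        if verb in BLOCKED_COMMANDS:
            return "block"                           # e.g. VRFY used for address harvesting
        if verb == "RCPT" and len(argument) > MAX_ARGUMENT_LENGTH:
            return "block"                           # suspiciously long RCPT TO: argument
        return "allow"

    print(check_smtp_line("RCPT TO:<fred@example.com>"))   # allow
    print(check_smtp_line("VRFY fred"))                    # block
    print(check_smtp_line("GET / HTTP/1.1"))               # not SMTP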

Some firewall challenges: encapsulated traffic

Encapsulated traffic can cause some concerns for a network administrator as IPSec (AH and ESP), PPTP, etc. cannot be scanned at the ISA Server if they are published or otherwise allowed through. Tunnelled traffic will be logged, but not scanned, as ISA cannot look inside the tunnel unless it is terminating the VPN. The administrator is faced with a choice – open more ports and use application filters, or tunnel traffic without inspection. NAT also has some implications.

ISA Server can, however, perform intra-tunnel VPN inspection, so VPN traffic can be inspected at the application layer. VPN client traffic is treated as a dedicated network so destinations can be controlled, along with the use of application filter rules.

VPN clients must be hardened. If not, then hackers can attack clients and ride the VPN into the corporate network. Client-based intrusion detection systems and firewalls can help, but the ideal solution is VPN quarantine (e.g. Windows Server 2003 network access quarantine control), as the most common entry point for malware is a mobile device either VPNing into the network, or returning to it after being infected whilst away (perhaps connected to other networks, including the Internet).

Alternatives to a VPN that should be considered are:

  • E-mail: RPC over HTTP, or Outlook Web Access (OWA). POP3 and IMAP4 should be avoided as they are not fully featured.
  • Web-enabled extranet applications: SSL.
  • Other applications: RPC filtration with ISA Server.

Don’t forget the internal network

Internal network defences are another factor to be considered. Networks are generally one large TCP/IP space, segmented by firewalls to the Internet. Trust is implicit throughout the organisation but this cannot be relied upon and network segmentation is critical (cf. a bank, where entering a branch does not gain access to the vault). Internal users are dangerous too.

  • The Windows Firewall in Windows XP SP2 (Internet Connection Firewall in Windows Server 2003 and earlier versions of Windows XP) is a vital tool in preventing network-based attacks, by blocking unsolicited inbound traffic. Ports can be opened for services running on the computer, and enterprise administration facilitated through group policy. Microsoft recommend that use of the Windows Firewall is combined with network access quarantine control; however, it does not have any egress filters (i.e. controls over outbound traffic).
  • Virtual LANs (VLANs) can be used to isolate like services from one another. Switch ACLs are used to control traffic flow between VLANs at layer 3. Layer 2 VLANs may be used where no routing is desired. By using internal firewalls, port-level access can be controlled to internal VLANs.
  • IPSec is a method of securing internal IP traffic, mutually authenticating end points. It is used to ensure encrypted and authenticated communications at the IP layer, providing a transport layer security that is independent of applications or application layer protocols. It protects against spoofing, tampering on the wire and information disclosure. Mutual device authentication can be provided using certificates, Kerberos (or a pre-shared key – but this is only recommended for testing scenarios). Authentication headers (AH) should be used to provide packet integrity, but this does not encrypt, allowing for network intrusion detection. Encapsulating security payload (ESP) provides packet integrity and confidentiality, but its encryption prevents packet inspection. Consequently, careful planning is required to determine which traffic should be secured.

One use of IPSec is to allow domain replication to pass through firewalls, creating an IPSec policy on each domain controller to secure traffic to its replication partners. ESP 3DES should be used for encryption and the firewall should be configured to allow UDP port 500 for internet key exchange (IKE) and IP protocol 50 for ESP.
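
Expressed as data, the firewall configuration in that last sentence boils down to two allowances. The sketch below simply makes that explicit; the rule structure is my own shorthand, not any particular firewall’s syntax.

    # The two openings described above: IKE on UDP 500 and ESP (IP protocol 50).
    REPLICATION_RULES = [
        {"ip_protocol": "udp", "port": 500},   # internet key exchange (IKE)
        {"ip_protocol": 50,    "port": None},  # encapsulating security payload (ESP)
    ]

    def allowed(ip_protocol, port=None) -> bool:
        """Would this traffic match the IPSec-for-replication openings?"""
        for rule in REPLICATION_RULES:
            if rule["ip_protocol"] == ip_protocol and rule["port"] in (None, port):
                return True
        return False

    print(allowed("udp", 500))   # True  - IKE negotiation
    print(allowed(50))           # True  - ESP-protected replication traffic
    print(allowed("tcp", 135))   # False - raw RPC would not be allowed through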

Potential issues around wireless network security are well publicised. The two most common security configurations each have their own limitations:

  • Wired equivalent privacy (WEP) relies on static WEP keys which are not dynamically changed and are therefore vulnerable to attack. There is no standard method for provisioning static WEP keys to clients and the principle of static keys does not scale well, with a compromised key exposing everyone.
  • MAC address filtering is also limited by the potential for an attacker to spoof an allowed MAC address.

Possible solutions include password-based layer 2 authentication (IEEE 802.1x with PEAP/MS CHAP v2) and certificate-based layer 2 authentication (IEEE 802.1x EAP-TLS). Other options include:

  • VPN connectivity using L2TP/IPSec (preferred) or PPTP. This does not allow for roaming but is useful when accessing public wireless hot spots; however there is no computer authentication, or processing of computer group policy settings.
  • IPSec, but this has some interoperability issues.

Security type        Security level   Ease of deployment   Ease of integration
Static WEP           Low              High                 High
IEEE 802.1x (PEAP)   High             Medium               High
IEEE 802.1x (TLS)    High             Low                  High
VPN (L2TP/IPSec)     High             Medium               Low
IPSec                High             Low                  Low

Summary

In summary, firewalls are placed in different locations for different reasons. These must be understood and filtered accordingly. Core functionality can be extended with protocol filters to cover a specific scenario but no one device is a silver bullet. Solutions are more important than devices and firewall configuration is more than a networking decision – it also requires application awareness.

Links

Microsoft ISA Server
ISA Server Community
ISAserver.org
Zebedee (a simple, secure TCP and UDP tunnel program)

Slow network copies – duplex mismatch?

Whilst copying some files across the network today, it seemed to me that the operation was taking much longer than it should. It looks like there may have been a duplex mismatch, as setting the network interface cards to 100/full instead of auto seemed to fix the problem.

I’m not sure if this is entirely accurate, but I remember an ex-colleague of mine telling me that the network speed can be auto-detected but auto-detecting the duplex is less reliable.
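
For what it’s worth, on a Linux machine you can check what speed and duplex an interface actually negotiated by reading sysfs; the sketch below assumes a Linux box and an interface called eth0 (the machines in question here were Windows boxes, where the setting lives in the NIC driver properties instead).

    from pathlib import Path

    def link_settings(interface: str = "eth0") -> dict:
        """Read negotiated speed (Mbit/s) and duplex from /sys/class/net (Linux only)."""
        base = Path("/sys/class/net") / interface
        settings = {}
        for attribute in ("speed", "duplex"):
            try:
                settings[attribute] = (base / attribute).read_text().strip()
            except OSError:
                settings[attribute] = "unknown"      # e.g. link down or attribute unsupported
        return settings

    print(link_settings("eth0"))    # e.g. {'speed': '100', 'duplex': 'full'}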