IPv6 – so what’s it all about?

A few weeks back, I was at a Microsoft TechNet UK event, where Steve Lamb discussed Microsoft’s implementation of the Internet Protocol v6 (IPv6), available in Windows 2000 service pack 3 or later, Windows XP service pack 1 or later, or Windows Server 2003. This is a new version of IP (also known as IP next generation – IPng), intended to overcome some of the limitations of the present version (v4), namely:

  • Exhaustion of available addresses – not such a major issue now that network address translation (NAT) is so common, but potentially a future issue as more and more devices are IP-enabled.
  • Large routing tables in backbone routers (the average ISP has 90,000 entries under IPv4).
  • A need for simpler, stateless configuration.
  • A need for better support for real-time data delivery (quality of service, QoS).

IPv6 provides a 128-bit address space (compared with IPv4’s 32-bit implementation). Instead of being represented as four octets in dotted decimal notation, IPv6 addresses are written as eight groups of four hexadecimal digits separated by colons and, where stateless autoconfiguration is used, the interface identifier portion may be derived from the client’s media access control (MAC) address – for example, 21DA:00D3:0000:2F3B:02AA:00FF:FE28:9C5A.
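
This notation can be explored with Python’s standard ipaddress module (a modern convenience, added here purely as an illustration) – a minimal sketch using the example address above, showing the fully expanded form, the zero-compressed shorthand and the sheer size of the address space:

```python
# Minimal sketch: parse and reformat the example IPv6 address from the text.
import ipaddress

addr = ipaddress.IPv6Address("21DA:00D3:0000:2F3B:02AA:00FF:FE28:9C5A")

print(addr.exploded)    # fully expanded form, with all leading zeros
print(addr.compressed)  # shorthand form, with leading zeros dropped
print(2 ** 128)         # size of the address space (roughly 3.4 x 10^38)
```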

I’m told that the version number 5 had already been assigned to an experimental protocol (the Internet Stream Protocol, ST-II), which is why the next version of IP became IPv6. The IPv6 addressing scheme gives a vast number of possible combinations (about 340 undecillion – that’s more than 340000000000000000000000000000000000000!) and allows for faster routing due to its simplified header.

Like most protocols in the TCP/IP suite, IP is made up of a number of sub-protocols and IPv6 is actually formed of five core protocols:

  • Internet protocol (IP).
  • Internet control message protocol (ICMP).
  • Multicast listener discovery (MLD).
  • Neighbor discovery (ND).
  • Top level aggregator (TLA).

(Yes, there really is a three letter acronym called TLA!)

In terms of application support, Microsoft’s IPv6 implementation is as per the IETF RFCs (i.e. not extended in any way). The tools look similar to the IPv4 versions, apart from the different addresses. DNS and RPC are both supported by the IPv6 stack, as are sockets interface extensions; however IPSec on IPv6 is only partly functional. There is also support for an IPv6 IP Helper API.

So what are the barriers to IPv6 adoption? For a start, businesses will need to see some benefit, and although IPv6 addresses are available now, the initial worries about a lack of IPv4 address space have been alleviated (for the time being) by the use of network address translation (NAT) and private IP address ranges. Organisations implementing IPv6 do not need to drop IPv4 and convert overnight – it is possible to mix and match, and there is a world-wide IPv6 test network backbone; however, many organisations use NAT as a line of defence in their security model, so firewall configurations will need to be re-examined if an IPv6 migration is performed. Add to that the fact that IPv4 is well understood by administration staff (IPv6 is not) and a critical mass must build up before most organisations will be ready to make the move, although the US government is mandating that all federal agencies must use IPv6 by 2008 – maybe that will start the ball rolling.

In summary, IPv6 is here today, but many organisations will not be in a rush to migrate. The next generation of Windows (codenamed Longhorn) is expected to include a new networking stack that supports both the IPv6 and IPv4 networking standards and I would expect IPv6 to gain some momentum around the time of its expected release (2006-7). Until then, IPv6 will remain something to look at in our labs. Wikipedia has more information about IPv6 for those who wish to learn more.

Best practices for managing automatic IP addressing with DHCP

Dynamic host configuration protocol (DHCP) is often taken for granted – we expect it to just work; however, there are a few items that need to be considered and this post is intended as a general discussion of DHCP best practice.

Most administrators will be familiar with the overall DHCP concept – basically a database of IP addresses allocated to clients dynamically, allowing centralised IP address management; however, most of the organisations I see still need to use static addresses for some devices (e.g. servers). There is nothing wrong with this – I would still suggest using fixed IP addresses for networking equipment and the DHCP server itself – but DHCP reservations can be useful for reserving particular addresses for certain clients, based on their media access control (MAC) address. The main drawback of this approach is that if the NIC in the computer changes, so does the MAC address, although reprogramming the MAC address is possible (as is setting up a new reservation).

If there are static addresses in use which fall within an IP address range intended for DHCP, exclusions can be configured (much easier than configuring several scopes to cover the fragmented IP range). Exclusions can be configured for a single address, or for a range of IP addresses.

Lease duration is another area to consider (i.e. the amount of time before a client needs to renew its DHCP address). If this is set too long, and there are a large number of mobile clients, there is a risk of running out of available IP addresses as these mobile clients join the network, lease an address and then leave again without releasing it; conversely, if it is too short, there is a large amount of renewal traffic as each DHCP client attempts to renew its lease once half the lease time has elapsed. For most environments, I find that an 80:20 rule can be applied – i.e. provide around 20% more addresses than are expected to be in use at any one time (to cater for mobile clients) and set the lease time to one day – but for a subnet with largely static PCs, longer leases may be appropriate.
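
As a back-of-an-envelope illustration of that sizing rule (my own sketch, not a Microsoft formula), the arithmetic is simply the expected number of concurrent clients plus 20% headroom:

```python
# Sketch of the sizing rule above: provision ~20% more addresses than the
# expected number of concurrent clients, to absorb mobile devices.
from math import ceil

def scope_size(expected_clients: int, headroom: float = 0.20) -> int:
    """Number of addresses to provision for a DHCP scope."""
    return ceil(expected_clients * (1 + headroom))

print(scope_size(100))  # 120 addresses for an expected 100 concurrent clients
```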

DHCP includes a number of pre-defined options that can be set on a client:

  • Server options apply to all scopes on a server (e.g. 006 DNS servers, 015 DNS Domain Name).
  • Scope options apply to a single scope (e.g. 003 Router).
  • Class options can be applied to a specific type of device.
  • Reservation options apply to specific reservations.

Occasionally it may be necessary to configure custom options – e.g. 060 for a pre-boot execution environment (PXE) client or 252 for web proxy auto-discovery (WPAD).

If there are multiple DHCP servers on a subnet, the client will be allocated an address by the first one to answer – when a client issues a DHCP request, all listening servers respond with an offer and the client accepts the first answer received. This is why Windows 2000 and later DHCP servers support DHCP authorisation in Active Directory (preventing the use of rogue DHCP servers); however, this will not affect non-AD DHCP servers (such as the one in Virtual Server, or on an ADSL router). Because DHCP requests are broadcast-based, they typically cannot traverse routers, so DHCP relaying must be configured where clients are remote from the DHCP server.

To configure DHCP for redundancy, it is generally advised to configure two DHCP servers and to split the scope using a 50:50 or 80:20 ratio (50:50 works well where both DHCP servers are on the same site; 80:20 may be appropriate where a remote site is providing redundancy for a local server). For example, if I want to allocate addresses on the network 192.168.1.0/24, I might reserve the top 10 or so addresses for static devices and create two scopes on two DHCP servers – one for 192.168.1.1-120 and the other for 192.168.1.121-240. This provides 240 potentially available addresses and, if one server is unavailable, the other can still answer. Of course, this scenario only provides for 120 clients if a server fails (96 taking into account my earlier recommendations for dealing with mobile devices). It is also possible to cluster DHCP servers for redundancy.
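
For the avoidance of doubt about where those ranges fall, here is a small sketch (using Python’s ipaddress module purely for illustration) of the split described above:

```python
# Sketch of the split-scope example: two non-overlapping ranges carved out of
# 192.168.1.0/24, with the top of the subnet kept back for static devices.
import ipaddress

network = ipaddress.IPv4Network("192.168.1.0/24")
hosts = list(network.hosts())   # 192.168.1.1 .. 192.168.1.254

scope_a = hosts[0:120]    # 192.168.1.1   - 192.168.1.120 (served by server 1)
scope_b = hosts[120:240]  # 192.168.1.121 - 192.168.1.240 (served by server 2)
static  = hosts[240:]     # 192.168.1.241 - 192.168.1.254 (kept for static use)

print(scope_a[0], "-", scope_a[-1], f"({len(scope_a)} addresses)")
print(scope_b[0], "-", scope_b[-1], f"({len(scope_b)} addresses)")
print(static[0], "-", static[-1], "(reserved for static assignment)")
```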

Superscopes can be used to group several scopes into one for management purposes but, when I tried to implement them in a live environment, they did not work well and we had to revert to individual scopes for each subnet.

Since Windows 2000, the Microsoft DHCP server implementation has included DNS integration. Set on the scope properties, this allows three options for updating A and PTR records in DNS as IP addresses are leased to DHCP clients:

  • Enable DNS dynamic updates, either always, or if requested (by Windows 2000 or later clients).
  • Discard DNS records when the lease is deleted (i.e. clean up afterwards).
  • Dynamically update DNS for legacy clients that do not request updates (e.g. Windows NT 4.0).

In terms of new features, Windows Server 2003 improves on Windows 2000 Server by allowing backup and restoration of the DHCP database from the DHCP console. It also provides for both user- and vendor-specified option classes. Potentially the greatest area of improvement is integration of DHCP commands within the netsh command shell.

Finally, DHCP servers use a JET database which sees a lot of activity. At a recent Microsoft TechNet UK event, John Howard recommended that, every now and again, the service is stopped and jetpack.exe is used to compact the database, improving performance (as described in Microsoft knowledge base article 145881).

Securing the network using Microsoft ISA Server 2004

Several months ago, I attended a Microsoft TechNet UK event where the topic was ISA Server 2004 network design/troubleshooting and application layer firewalling and filtering. It’s taken me a while to get around to writing up the notes but, finally, here they are, with some additional information from other similar presentations added for good measure.

The need for network security

The Internet is built on the Internet protocol (IP) version 4 – which was not designed with security in mind. In the early days of the Internet, security clearance was required for access – i.e. physical access was restricted – so there was no requirement for protocol security. At that time (at the height of the cold war), resistance to nuclear attack was more important than protecting traffic and everyone on the network was trusted. The same networking technologies used to create the Internet (the TCP/IP protocol suite) are now used for internal networks too and, for TCP/IP, security was an afterthought.

Security should never be seen as a separate element of a solution – it should be all-pervasive. At the heart of the process should be a strategy of defence in depth – not just securing the perimeter or deploying some access controls internally, but placing security throughout the network so there are several layers to thwart malware or a hacker. Ideally, an administrator’s security strategy toolbox should include:

  • Perimeter defences (packet filtering, stateful packet inspection, intrusion detection).
  • Network defences (VLAN access control lists, internal firewall, auditing, intrusion detection).
  • Host defences (server hardening, host intrusion detection, IPSec filtering, auditing, Active Directory).
  • Application defences (anti-virus, content scanning, URL scanning, secure IIS, secure Exchange).
  • Data and resource defences (ACLs, EFS, anti-virus, Active Directory).

Each layer of defence should be designed on the assumption that all prior layers have failed.

With users becoming ever more mobile, defining the edge of the network is becoming ever more difficult. Firewalls are no panacea, but properly configured firewalls and border routers are the cornerstone of perimeter security. The Internet and mobility have increased security risks, with virtual private networks (VPNs) softening the perimeter and wireless networks further eroding the traditional concept of the network perimeter.

A firewall alone is not enough

Some administrators take the view that “we’ve got a firewall, so everything is fine”, but standard (layer 3/4) firewalls check only basic packet information and treat the data segment of the packet as a black box. This is analogous to looking at the number and destination displayed on the front of a bus, but not being concerned with the passengers on board. Performance is often cited as the reason for not implementing application layer (layer 7) firewalls, which inspect the data segment (e.g. for mail attachment checking, HTTP syntax, DNS syntax, correct SSL termination, URL blocking and redirection, RPC awareness, LDAP, SQL, etc.). However, Microsoft claim to have tested Internet Security and Acceleration (ISA) Server 2004 at up to 1.9Gbps throughput on a single server with application filters in place (at a time when most corporates are dealing with 2-10Mbps).

Consider the standard security pitch, which has two elements:

  1. The sky is falling (i.e. we’re all doomed).
  2. Our product will fix it (i.e. buy our product).

In truth, no system is 100% effective and the firewall needs to be supplemented with countermeasures at various depths (intrusion detection systems, etc.). If there were a 100% secure system it would be incredibly expensive – in addition, threats and vulnerabilities are constantly evolving, which leaves systems vulnerable until a new attack is known and a new signature created and distributed. Heuristic systems must be supplemented with behavioural systems, and some intelligence.

Just because 100% security is not achievable, it doesn’t mean that it is any less worthwhile as a goal. We still lock our car doors and install immobilisers, even though a good car thief can defeat them eventually. The point is that we stop the casual attacker, buying time. Taking another analogy, bank safes are sold on how long it will take a safe cracker to break them.

Whatever solution is implemented, a firewall cannot protect against:

  • Malicious traffic passed on open ports and not inspected at the application layer by the firewall.
  • Any traffic that passes through an encrypted tunnel or session.
  • Attacks on a network that has already been penetrated from within.
  • Traffic that appears legitimate.
  • Users and administrators who intentionally or accidentally install viruses or other malware.
  • Administrators who use weak passwords.

HTTP is the universal firewall bypass and avoidance protocol

In the late 1990s, as business use of the Internet exploded, we came to rely ever more on HTTP, which has earned itself a nickname – UFBAP – the universal firewall bypass and avoidance protocol.

Firewall administrators are obsessed with port blocking and so all non-essential firewall ports are closed; but we generally assume that HTTP is good and so TCP port 80 (the default port for HTTP) is left open. Because it’s so difficult to get an administrator to open a port, developers avoid such restrictions by writing applications that tunnel over port 80. We even have a name for it (web services) and some of our corporate applications make use of it (e.g. RPC over HTTP for Outlook connecting to Exchange Server 2003).

This tunnelling approach is risky. When someone encapsulates one form of data inside another packet, we tend to allow it, without worrying about what the real purpose of the traffic is. There are even websites which exploit this (e.g. HTTP-Tunnel), allowing blocked traffic such as terminal server traffic using the remote desktop protocol (RDP) to be sent to the required server via TCP port 80, for a few dollars a month.

In short, organisations tend to be more concerned with blocking undesirable sites (by destination) than with checking that the content is valid (by deep inspection).

Using web services such as RPC over HTTP to access Exchange Server 2003 is not always bad – 90% of VPN users just want to get to their e-mail, so offering an HTTP-based solution can eliminate many of the VPNs that are vulnerable network entry points. What is required is to examine the data inside the HTTP tunnel and only allow it to be used in certain scenarios. Taking the Exchange Server 2003 example further, without using RPC over HTTP, the following ports may need to be opened for access:

  • TCP 25: SMTP.
  • TCP/UDP 53: DNS.
  • TCP 80: HTTP.
  • TCP/UDP 88: Kerberos.
  • TCP 110: POP3.
  • TCP 135: RPC endpoint mapper.
  • TCP 143: IMAP4.
  • TCP/UDP 389: LDAP (to directory service).
  • TCP 691: Link state algorithm routing protocol.
  • TCP 1024+: RPC service ports (unless DC and Exchange restricted).
  • TCP 3268: LDAP (to global catalog).

Using RPC over HTTP, this is reduced to one port – TCP 80.
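
As a quick way to see which of those ports are actually reachable from a given client, here is a minimal sketch (my own, not from the presentation) using plain TCP connection attempts – "mail.example.com" is a placeholder hostname:

```python
# Check which of the ports listed above answer on a given host.
import socket

PORTS = {25: "SMTP", 53: "DNS", 80: "HTTP", 88: "Kerberos", 110: "POP3",
         135: "RPC endpoint mapper", 143: "IMAP4", 389: "LDAP",
         691: "Link state routing", 3268: "LDAP (global catalog)"}

def check(host: str) -> None:
    for port, name in sorted(PORTS.items()):
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{port:5} {name}: open")
        except OSError:
            print(f"{port:5} {name}: blocked or closed")

check("mail.example.com")  # placeholder - substitute a real server name
```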

Application layer filtering

Inspection at the application layer still has some limitations and the real issue is understanding the purpose of the traffic to be filtered, then blocking anything that does not conform.

Microsoft ISA Server 2004 is typically deployed in one of three scenarios:

  • Inbound access control and VPN server.
  • Outbound access control and filtration (together with URL-based real time lists from third parties).
  • Distributed caching (proxy server), leading to reduced bandwidth usage.

As part of its access control capabilities, ISA Server has a number of application filters included:

  • HTTP (syntax analysis and signature blocking).
  • OWA (forms based authentication).
  • SMTP (command and message filtering).
  • RPC (interface blocking).
  • FTP (read only support).
  • DNS (intrusion detection).
  • POP3 (intrusion detection).
  • H.323 (allows H.323 traffic).
  • MMS (enables Microsoft media streaming).

All of these filters validate protocols for RFC compliance and enable network address translation (NAT) traversal. In addition, ISA Server can work with third party filters to avoid the need for a proliferation of dedicated appliance servers (and even for appliance consolidation). Examples of third-party filter add-ons include:

  • Instant messaging (Akonix).
  • SOCKS5 (CornerPost Software).
  • SOAP/raw XML (Forum Systems).
  • Antivirus (McAfee, GFI, Panda).
  • URL Filtering (SurfControl, Futuresoft, FilterLogix, Cerberian, Wavecrest).
  • Intrusion detection (ISS, GFI).

But appliance firewalls are more secure – aren’t they?

Contrary to popular belief, appliance firewalls are not necessarily more secure – just more convenient. For those who prefer to use appliances, ISA Server is available in an appliance server format, and such an appliance may well be cheaper than an equivalent server plus Windows Server 2003 and ISA Server 2004 licences.

Looking at the security of the solution itself, ISA Server has been tested against the Common Criteria at level EAL4+ (for 9 streams). Microsoft claim that ISA Server 2004 has been totally rewritten since ISA Server 2000, with a code base that is 400% more efficient. It may run on a Windows platform, but Windows Server 2003 can (and should) also be hardened, and a well-configured ISA Server can be extremely secure.

Some firewall challenges: remote procedure calls (RPCs)

RPCs present their own challenge to a standard (layer 3/4) firewall in terms of the sheer number of potentially available ports:

  1. On service startup, the RPC server grabs random high port numbers and maintains a table, mapping UUIDs to port numbers.
  2. Clients know the UUID of the required service and connect to the server’s port mapper using TCP port 135, requesting the number of the port associated with the UUID.
  3. The server looks up the port number of the given UUID.
  4. The server responds with the port number, closing the TCP connection on port 135.
  5. From this point on the client accesses the application using the allocated port number.

Due to the number of potential ports, this is not feasible using a traditional firewall (it would require 64512 high ports, plus 135, to be open); however, a layer 7 firewall can use an RPC filter to understand the protocol and improve security, such that the firewall only allows access to specific UUIDs (e.g. domain controller replication, or Exchange/Outlook RPC communications), denying all other RPC requests. Instead of tunnelling within HTTP (prevented by an HTTP syntax check), native RPC access can be provided across the firewall.
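
To make the port mapper flow above concrete, here is a toy sketch (illustrative Python, not real MS-RPC code – the UUIDs and port numbers are made-up placeholders): the server keeps a table mapping service UUIDs to dynamically allocated high ports, and clients ask the mapper on TCP 135 which port to connect to.

```python
# Toy model of the RPC endpoint mapper lookup described in steps 1-5 above.
from uuid import UUID

# Table built by the "server" at service start-up; in reality the ports are
# picked at random from the high (dynamic) range. Placeholder values only.
endpoint_map: dict[UUID, int] = {
    UUID("11111111-2222-3333-4444-555555555555"): 49213,  # hypothetical service A
    UUID("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"): 52007,  # hypothetical service B
}

def port_mapper_lookup(service_uuid: UUID) -> int | None:
    """What the listener on TCP 135 does: return the port for a given UUID."""
    return endpoint_map.get(service_uuid)

print(port_mapper_lookup(UUID("11111111-2222-3333-4444-555555555555")))  # 49213
```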

Some firewall challenges: secure sockets layer (SSL)

Hackers will always attack the low-hanging fruit (i.e. easy targets) and, as such, SSL attacks are generally too complex to be worthwhile today; but as our systems become more secure (i.e. we remove the low-hanging fruit), SSL attacks will become more likely.

HTTPS (which uses SSL) prompts a user for authentication and any user on the Internet can access the authentication prompt. SSL tunnels through traditional firewalls because it is encrypted, in turn, allowing viruses and worms to pass through undetected and infect internal servers.

Using ISA Server 2004 with an HTTP filter, authentication can be delegated. In this way, ISA Server pre-authenticates users, eliminating multiple authentication dialogs and only allowing valid traffic through. This means that the SSL connection is from the client to the firewall only, and that ISA Server can decrypt and inspect SSL traffic. Onward traffic to the internal server can be re-encrypted using SSL, or sent as clear HTTP. In this way, URLScan for ISA Server can stop web attacks at the network edge, even over an encrypted inbound SSL connection.

Pre-authentication means that without a valid layer 7 password, there is no access to any internal systems (potential attackers drop from the entire Internet to just the number of people with credentials for the network). ISA Server 2000 can also perform this using RSA SecurID for HTTP (although not for RPC over HTTP with SecurID) and cookie pre-authentication for Outlook Web Access 2003 is also available.

Some firewall challenges: protecting HTTP(S)

Protecting HTTP (and HTTPS) requires an understanding of the protocol – how it works, what its rules are and what to expect. Inbound HTTPS termination is easy (as the certificate is controlled by the organisation whose network is being protected). For outbound HTTPS and HTTP, administrators need to learn how to filter ports 80 and 443. It may be worth considering whether global access is really required, or whether there is a set of specific sites that the business needs to use.

ISA Server allows web publishing of HTTP (as well as other protocols such as SMTP). Web publishing protects servers through two main defences:

  • Worms rarely work by FQDN – tending to favour IP or network range. Publishing by FQDN prevents any traffic from getting in unless it asks for the exact URL and not just http://81.171.168.73:80.
  • Using the HTTP filter (signature strings and verb/method blocking) to eliminate whole classes of attack at the protocol level.

Some examples of protecting a web server using web publishing and HTTP filtration are listed below (with a rough sketch of a few of these checks after the list):

  • Limit header length, query and URL length.
  • Verify normalisation – http://81.171.168.73/../../etc is not allowed.
  • Allow only specified methods (GET, HEAD, POST, etc.).
  • Block specified extensions (.EXE, .BAT, .CMD, .COM, .HTW, .IDA, .IDQ, .HTR, .IDC, .SHTM, .SHTML, .STM, .PRINTER, .INI, .LOG, .POL, .DAT, etc.)
  • Block content containing URL requests with certain signatures (.., ./, \, :, % and &)
  • Change/remove headers to provide disinformation – putting ISA Server in front of an Apache server is a great way to prevent UNIX attacks by making hackers think they are attacking a Windows server.
  • Block applications based on the header.
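
Here is the rough sketch mentioned above (my own illustration in Python, not ISA Server code) of this kind of check – a method allow-list, extension and signature block-lists, a length limit and a basic normalisation check:

```python
# Simplified HTTP request filter implementing a few of the checks above.
from urllib.parse import unquote

ALLOWED_METHODS = {"GET", "HEAD", "POST"}
BLOCKED_EXTENSIONS = (".exe", ".bat", ".cmd", ".com", ".ida", ".idq", ".htr",
                      ".printer", ".ini", ".log", ".pol", ".dat")
BLOCKED_SIGNATURES = ("..", "./", "\\", ":", "%", "&")
MAX_URL_LENGTH = 2048   # arbitrary limit chosen for the sketch

def allow_request(method: str, url: str) -> bool:
    """Return True if the request passes the (simplified) filter rules."""
    if method.upper() not in ALLOWED_METHODS:
        return False
    if len(url) > MAX_URL_LENGTH:
        return False
    if any(sig in url for sig in BLOCKED_SIGNATURES):    # raw URL signatures
        return False
    path = unquote(url).lower()                          # normalise, then check
    return not any(path.endswith(ext) for ext in BLOCKED_EXTENSIONS)

print(allow_request("GET", "/default.htm"))        # True
print(allow_request("GET", "/scripts/../../etc"))  # False (normalisation)
print(allow_request("POST", "/cmd.exe"))           # False (blocked extension)
```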

Some headers to look for include the following (a small matching sketch follows the list):

  • Request headers:
    • MSN Messenger: HTTP header=User-Agent; Signature=MSN Messenger
    • Windows Messenger: HTTP header=User-Agent; Signature=MSMSGS
    • AOL Messenger (and Gecko browsers): HTTP header=User-Agent; Signature=Gecko/
    • Yahoo Messenger: HTTP header=Host; Signature=msg.yahoo.com
    • Kazaa: HTTP header=P2P-Agent; Signature=Kazaa, Kazaaclient
    • Kazaa: HTTP header=User-Agent; Signature=Kazaa Client
    • Kazaa: HTTP header=X-Kazaa-Network; Signature=KaZaA
    • Gnutella: HTTP header=User-Agent; Signature=Gnutella
    • Gnutella: HTTP header=User-Agent; Signature=Gnucleus
    • Edonkey: HTTP header=User-Agent; Signature=e2dk
  • Response header:
    • Morpheus: HTTP header=Server; Signature=Morpheus
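
A minimal sketch of header-based blocking using the signatures above (an illustration, rather than ISA Server’s actual filter):

```python
# Block requests whose headers match any of the application signatures above.
BLOCKED_HEADER_SIGNATURES = {
    "User-Agent": ("MSN Messenger", "MSMSGS", "Gecko/", "Kazaa Client",
                   "Gnutella", "Gnucleus", "e2dk"),
    "Host": ("msg.yahoo.com",),
    "P2P-Agent": ("Kazaa", "Kazaaclient"),
    "X-Kazaa-Network": ("KaZaA",),
    "Server": ("Morpheus",),   # response header
}

def blocked_by_headers(headers: dict[str, str]) -> bool:
    for name, signatures in BLOCKED_HEADER_SIGNATURES.items():
        value = headers.get(name, "")
        if any(sig.lower() in value.lower() for sig in signatures):
            return True
    return False

print(blocked_by_headers({"User-Agent": "MSMSGS"}))                           # True
print(blocked_by_headers({"User-Agent": "Mozilla/4.0", "Host": "intranet"}))  # False
```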

Some firewall challenges: protecting DNS

Whilst some DNS protection is available by filtering TCP/UDP port 53, ISA Server filters can examine traffic for DNS host name overflows, length overflows, zone transfers from privileged ports (1-1023) and zone transfers from high ports (1024 and above).
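
As an aside, the host name overflow check mentioned above essentially enforces the DNS length limits (63 bytes per label, 255 bytes per name) – a toy sketch, purely for illustration:

```python
# Toy validation of DNS name lengths (63 bytes per label, 255 bytes overall).
def valid_dns_name(name: str) -> bool:
    if len(name) > 255:
        return False
    return all(0 < len(label) <= 63 for label in name.rstrip(".").split("."))

print(valid_dns_name("www.example.com"))   # True
print(valid_dns_name("a" * 300 + ".com"))  # False (far too long)
```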

Some firewall challenges: protecting SMTP

When it comes to mail protection, anti-spam and anti-virus vendors cover SMTP relays, but ISA Server filters can also examine protocol usage, for example:

  • Checking that TCP port 25 traffic really is SMTP.
  • Checking for a buffer overflow in the RCPT command.
  • Blocking someone using the VRFY command.
  • Stripping an attachment or blocking a user.

Such a solution supports the defence in depth strategy, using the firewall to add another layer of protection to the mail system.
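
A toy sketch of the command-level checks described above (illustrative only – not how ISA Server’s SMTP filter is implemented): reject the VRFY command and flag suspiciously long RCPT lines.

```python
# Simplified SMTP command filter: block VRFY and over-long RCPT commands.
MAX_COMMAND_LENGTH = 512   # SMTP command-line limit, used here as the threshold

def allow_smtp_command(line: str) -> bool:
    command = line.strip().split(" ", 1)[0].upper()
    if command == "VRFY":
        return False                    # mailbox-verification probe
    if command == "RCPT" and len(line) > MAX_COMMAND_LENGTH:
        return False                    # possible buffer-overflow attempt
    return True

print(allow_smtp_command("RCPT TO:<user@example.com>"))  # True
print(allow_smtp_command("VRFY admin"))                  # False
```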

Some firewall challenges: encapsulated traffic

Encapsulated traffic can cause some concerns for a network administrator, as IPSec (AH and ESP), PPTP, etc. cannot be scanned at the ISA Server if they are published or otherwise allowed through. Tunnelled traffic will be logged, but not scanned, as ISA Server cannot look inside the tunnel unless it is terminating the VPN. The administrator is faced with a choice – open more ports and use application filters, or tunnel traffic without inspection. NAT also has some implications.

ISA Server can, however, perform intra-tunnel VPN inspection, so VPN traffic can be inspected at the application layer. VPN client traffic is treated as a dedicated network so destinations can be controlled, along with the use of application filter rules.

VPN clients must be hardened. If not, hackers can attack clients and ride the VPN into the corporate network. Client-based intrusion detection systems and firewalls can help, but the ideal solution is VPN quarantine (e.g. Windows Server 2003 network access quarantine control), as the most common entry point for malware is a mobile device either connecting to the network over a VPN, or returning to the network after being infected whilst away (perhaps connected to other networks, including the Internet).

Alternatives to a VPN that should be considered are:

  • E-mail: RPC over HTTP, or Outlook Web Access (OWA). POP3 and IMAP4 should be avoided as they are not fully featured.
  • Web-enabled extranet applications: SSL.
  • Other applications: RPC filtration with ISA Server.

Don’t forget the internal network

Internal network defences are another factor to be considered. Networks are generally one large TCP/IP space, separated from the Internet by firewalls. Trust is implicit throughout the organisation, but this cannot be relied upon and network segmentation is critical (cf. a bank, where entering a branch does not gain access to the vault). Internal users are dangerous too.

  • The Windows Firewall in Windows XP SP2 (Internet Connection Firewall in Windows Server 2003 and earlier versions of Windows XP) is a vital tool in preventing network-based attacks, by blocking unsolicited inbound traffic. Ports can be opened for services running on the computer, and enterprise administration is facilitated through group policy. Microsoft recommend that use of the Windows Firewall is combined with network access quarantine control; however, it does not have any egress filters (i.e. controls over outbound traffic).
  • Virtual LANs (VLANs) can be used to isolate like services from one another. Switch ACLs are used to control traffic flow between VLANs at layer 3. Layer 2 VLANs may be used where no routing is desired. By using internal firewalls, port-level access to internal VLANs can be controlled.
  • IPSec is a method of securing internal IP traffic by mutually authenticating end points. It is used to ensure encrypted and authenticated communications at the IP layer, providing transport security that is independent of applications or application layer protocols. It protects against spoofing, tampering on the wire and information disclosure. Mutual device authentication can be provided using certificates or Kerberos (or a pre-shared key – but this is only recommended for testing scenarios). Authentication headers (AH) should be used to provide packet integrity, but AH does not encrypt, which allows for network intrusion detection. Encapsulating security payload (ESP) provides packet integrity and confidentiality, but its encryption prevents packet inspection. Consequently, careful planning is required to determine which traffic should be secured.

One use of IPSec is to allow domain replication to pass through firewalls, creating an IPSec policy on each domain controller to secure traffic to its replication partners. ESP 3DES should be used for encryption and the firewall should be configured to allow UDP port 500 for internet key exchange (IKE) and IP protocol 50 for ESP.

Potential issues around wireless network security are well publicised. The two most common security configurations each have their own limitations:

  • Wired equivalent privacy (WEP) relies on static WEP keys, which are not dynamically changed and are therefore vulnerable to attack. There is no standard method for provisioning static WEP keys to clients and the principle of static keys does not scale well, with a compromised key exposing everyone.
  • MAC address filtering is limited by the potential for an attacker to spoof an allowed MAC address.

Possible solutions include password-based layer 2 authentication (IEEE 802.1x with PEAP/MS CHAP v2) and certificate-based layer 2 authentication (IEEE 802.1x EAP-TLS). Other options include:

  • VPN connectivity using L2TP/IPSec (preferred) or PPTP. This does not allow for roaming but is useful when accessing public wireless hot spots; however there is no computer authentication, or processing of computer group policy settings.
  • IPSec, but this has some interoperability issues.
Security type       Security level   Ease of deployment   Ease of integration
Static WEP          Low              High                 High
IEEE 802.1x (PEAP)  High             Medium               High
IEEE 802.1x (TLS)   High             Low                  High
VPN (L2TP/IPSec)    High             Medium               Low
IPSec               High             Low                  Low

Summary

In summary, firewalls are placed in different locations for different reasons, and traffic must be understood and filtered accordingly. Core functionality can be extended with protocol filters to cover specific scenarios, but no one device is a silver bullet. Solutions are more important than devices, and firewall configuration is more than a networking decision – it also requires application awareness.

Links

Microsoft ISA Server
ISA Server Community
ISAserver.org
Zebedee (a simple, secure TCP and UDP tunnel program)

Windows server system service overview and network port requirements

As security becomes ever more important and network administrators implement extra layers of security, including personal firewall products on client PCs, systems administrators and support staff need to know which ports and protocols Microsoft operating systems and programs require for network connectivity in a segmented network.

Microsoft have addressed this with Microsoft knowledge base article 832017, which details the essential network ports, protocols and services that are used by Microsoft client and server operating systems, server-based programs and their subcomponents in the Microsoft Windows server system.

Useful TCP and UDP port numbers

Having spent the afternoon configuring Windows Firewall exceptions, I thought I’d post some links to useful port number information.

Of course, %systemroot%\system32\drivers\etc\services contains port numbers for well-known services defined by IANA, but this is an incomplete list and the up-to-date version is on the IANA website.
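
As a small aside, the same well-known service-to-port mappings can also be looked up programmatically – for example, Python’s socket module consults the local services database (a modern illustration, not something mentioned in the original links):

```python
# Look up well-known ports by service name from the local services database.
import socket

for name in ("http", "https", "smtp", "domain", "pop3"):
    try:
        print(name, socket.getservbyname(name, "tcp"))
    except OSError:
        print(name, "not listed locally")
```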

Although now out of date (superseded by the online database described in RFC 3232), the missing table of contents for RFC 1700 (assigned numbers) provides links to a pile of useful information that doesn’t seem to be covered there. This information is not just from the RFC and includes links to items such as country codes from ISO 3166, although a more up-to-date list of country codes is available on the ISO website (note that the ISO country codes do not necessarily equate to top-level domain codes, e.g. the United Kingdom is GB in ISO 3166, but both GB and UK appear on the IANA website).

Finally, the ISS website has details of commonly used ports (along with some descriptive information) for Microsoft services as well as other vendors.