Preventing dnsmasq from running as a daemon (service) on a Raspberry Pi

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Some time ago, I wrote a post about running a Raspberry Pi as a home infrastructure server (DNS, DHCP, TFTP, etc.). Now my Synology NAS is doing that for me (well, the DNS and DHCP at least – TFTP is less critical as my Cisco 7940 IP Phone just sits there taking up desk space most of the time) so I don’t need the Pi to provide those services.

Unfortunately, when I migrated DNS and DHCP a few months ago, I just stopped the service with sudo service dnsmasq stop so, after a power outage last week, when the Pi came back up, so did dnsmasq – and having two DNS/DHCP servers on the network produced some strange results (as might be expected…).

So, to do the job properly, I ran sudo nano /etc/default/dnsmasq and changed the ENABLED=1 line to ENABLED=0. That should prevent dnsmasq from running as a service but leaves the configuration intact if I ever need to bring it back online.
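For the record, the change can also be scripted with sed rather than nano. A quick sketch below, demonstrated on a scratch copy of the file (on the Pi itself the target is /etc/default/dnsmasq, edited with sudo):

```shell
# demo on a scratch copy; on the Pi the real file is /etc/default/dnsmasq
conf=$(mktemp)
printf 'ENABLED=1\nCONFIG_DIR=/etc/dnsmasq.d\n' > "$conf"
# flip ENABLED=1 to ENABLED=0 so the init script refuses to start dnsmasq
sed -i 's/^ENABLED=1$/ENABLED=0/' "$conf"
grep '^ENABLED=' "$conf"
```

The rest of the file (and /etc/dnsmasq.conf) is untouched, so bringing the service back later is just a matter of setting ENABLED=1 again.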

A quick sudo reboot and sudo service dnsmasq status is all that’s needed to check that dnsmasq stays disabled.

dnsmasq, not running

Office 365 and proxy servers: like oil and water?

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Office 365 and proxy servers don’t mix very well. Well, to be more accurate, thousands of Outlook, Skype for Business and OneDrive for Business clients, each with multiple connections to online services, can quickly build up to a lot of (persistent) connections. If you haven’t already, it’s well worth reading Paul Collinge’s blog post on ensuring your proxy server can scale to handle Office 365 traffic.

Microsoft recommends that the network is configured to allow unauthenticated direct outbound access to a published list of URLs and IP ranges (there’s also an RSS feed) – although I’ve had customers who take issue with this and don’t think it’s a reasonable expectation in the enterprise. My view? You’re adopting cloud services; your network boundary has moved (disappeared?) and the approach you take to managing the connectivity between services needs to change.

Perhaps as more people take advantage of services like ExpressRoute for Office 365, things will change but, for now, every Office 365 implementation I work on seems to involve a degree of proxy bypassing…

Some of the issues I experienced in a recent implementation included:

  • OneDrive for Business unable to perform an initial synchronisation, but fine on subsequent syncs. It seems that the OneDrive client downloads http://clientconfig.microsoftonline-p.net/fplist.xml when it first syncs. We could get it to work when going through a different proxy server, or direct to the Internet; but the main proxy server had to have a list of trusted sites added. The managed services provider had previously allowed access to some known IP addresses (a risky strategy as they change so frequently and the use of content delivery networks means they are not always under Microsoft’s control), but the proxy server had the capability to trust a list of target URLs too.
  • Outlook unable to reliably redirect after Exchange mailboxes were migrated to Exchange Online. In this case, we found that, even with the trusted URLs in place on the proxy, as part of the Outlook Autodiscover process, Outlook was trying to contact autodiscover-s.outlook.com. The proxy wasn’t allowing unauthenticated access and Outlook didn’t know how to cope with the authentication request. Once autodiscover-s.outlook.com had been added to the proxy server’s unauthenticated access list, Outlook Autodiscover began to work as intended.
  • Lync/Skype for Business Online calls working internally, but not with external parties. Users dropping off the call after a few seconds. We still haven’t got to the bottom of this, but strongly suspect the network configuration…
  • Exchange Hybrid free/busy information not available cross-premises. Again, this seems to be related to the Exchange servers’ ability to see the Internet (free/busy lookups are performed by the server, not the client).


Short takes: checking your IP in Google; writing to a text file in PowerShell; and confirming which IE security zone a website uses in Internet Explorer

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Another eclectic mix of snippets merged into a single blog post…

What’s my IP address?

Ever want to check the IP address of the connection you’re using? There are lots of websites out there that will tell you, or you can just type what is my IP into Google (other search engines are available… but they won’t directly return this information).

Writing output to a text file in PowerShell

Sometimes, when working in PowerShell, it’s useful to pipe the output to a file, for example to send to someone else for analysis. For this, the Out-File cmdlet comes in useful (| Out-File filename.txt), as described on StackOverflow.

Internet Explorer status bar no longer shows security zone for a site

Last week, I was trying to work out which security zone a site was in (because I wanted to see if it was in the Intranet zone, whilst tracking down some spurious authentication prompts) but recent versions of Internet Explorer don’t show this information in the status bar. The workaround is to right-click any blank space in the website and select Properties. Alternatively, use Alt + F + R.

Check the security zone in Internet Explorer

Raspberry Pi infrastructure server (DNS, DHCP, TFTP)

This content is 10 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A long time ago, I used to run real servers at home – I had a Compaq Prosignia 300 for a while and then a Compaq (or maybe it was an HP) ProLiant DL380 running in my garage. Then, a few years back, I stopped running my own mail server and put all of the infrastructure services onto a low-powered PC running Windows Server (working alongside a NetGear ReadyNAS Duo). Recently, I found I didn’t even need Active Directory (I have unmanaged devices and cloud services these days) so I started to switch over to a Raspberry Pi.

Each move made a huge difference to my electricity bill but I’ve had some mishaps too. I accidentally turned off the Pi and corrupted the flash memory (oops), then recommissioned the previous server. Then, I accidentally killed the power on that too and it’s not come back up (it could be the PSU, or the motherboard – but whichever it is, it’s unlikely to get fixed) so, last Saturday night, I found myself bringing the Pi back into service as a DNS, DHCP and TFTP server – partly to improve my Internet access speeds and partly to back up my Windows Phone (that will be the subject of another blog post).

Luckily, I had the notes from last time I did it – but they hadn’t made it into a blog post yet, so I’d better record them in case I need to do this again…
Assuming that the Raspberry Pi is running Raspbian, the following commands should be entered from the command line (e.g. LX Terminal):

  • sudo nano /etc/network/interfaces (to set up static IP – in this case 192.168.1.10 on a class C network):
    #iface eth0 inet dhcp
    iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
  • sudo nano /etc/resolv.conf (to set the DNS server address – 8.8.8.8 will do if you don’t have one):
    nameserver 8.8.8.8
  • sudo ifdown eth0 (take down the Ethernet connection).
  • sudo ifup eth0 (bring it back up again).
  • ifconfig (check the new IP settings).
  • sudo apt-get install dnsmasq (install the Dnsmasq network infrastructure package for small networks)
  • Optionally, sudo apt-get install dnsutils (to get utilities like nslookup and dig). Unfortunately, this is resulting in bash: dig: command not found (I’m pretty sure it worked when I did this a year or so ago) but, for now, I’m managing without those tools.
  • sudo service dnsmasq stop (stop the Dnsmasq service)
  • sudo nano /etc/dnsmasq.conf (edit the Dnsmasq config) – these are the settings I changed (all others were left alone; the original version of the file includes full details of what each of these means):
    domain-needed
    bogus-priv
    no-resolv
    server=212.159.13.49
    server=212.159.13.50
    server=212.159.6.9
    server=208.67.222.222
    server=208.67.220.220
    server=8.8.8.8
    local=/home.markwilson.co.uk/
    expand-hosts
    domain=home.markwilson.co.uk
    dhcp-range=192.168.1.100,192.168.1.199
    dhcp-host=00:1d:a2:2f:20:f9,192.168.1.199
    dhcp-option=3,192.168.1.1
    dhcp-option=6,192.168.1.10
    dhcp-option=42,192.168.1.1
    dhcp-option=66,192.168.1.10
    dhcp-option=66,boot\pxeboot.com
    dhcp-option=vendor:MSFT,2,1i
    enable-tftp
    tftp-root=/home/pi/ftp/files
  • Optionally, add some static entries for fixed IP items on the network with sudo nano /etc/hosts:
    192.168.1.1 router
    192.168.1.10 raspberrypi
  • sudo nano /etc/resolv.conf (set the DNS server address again – to use the local server):
    nameserver 192.168.1.10
  • sudo service dnsmasq start (start the Dnsmasq service)
  • View client leases with cat /var/lib/misc/dnsmasq.leases.
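The leases file is plain text (one lease per line: expiry as a Unix timestamp, then MAC address, IP address, hostname and client-id), so it’s easy to pretty-print. A rough sketch, demonstrated here on a sample line (GNU date assumed, as on Raspbian):

```shell
# dnsmasq writes one lease per line: expiry-epoch MAC IP hostname client-id
# (on the Pi the live file is /var/lib/misc/dnsmasq.leases)
leases=$(mktemp)
echo '1400000000 00:1d:a2:2f:20:f9 192.168.1.199 ciscophone *' > "$leases"
# print IP, MAC and hostname with a human-readable expiry time
while read -r expiry mac ip host _; do
  printf '%-15s %-17s %-12s expires %s\n' \
    "$ip" "$mac" "$host" "$(date -u -d "@$expiry" +%FT%TZ)"
done < "$leases"
```

Point the loop at /var/lib/misc/dnsmasq.leases instead of the sample file to see the live leases.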

One more note that might be useful: pinging short names may need a trailing dot (e.g. ping raspberrypi.).

(I haven’t actually tested the TFTP functionality – I need it for my Cisco 7940 phone, but need to recover the files from the old server first).

Now, all I need is a UPS for my Pi – and it looks like one is available (but I’m waiting for the new version that can keep the device running a while on battery power too…)

Short takes: Windows Phone screenshots and force closing apps; Android static IP

This content is 10 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m clearing down my browser tabs and dumping some of the things I found recently that I might need to remember again one day!

Taking a screenshot on Windows Phone

Windows Phone 7 didn’t have a screenshot capability (I had an app that worked on an unlocked phone) but Windows Phone 8 let me take screenshots with Windows+Power. For some reason this changed in Windows Phone 8.1 to Power+Volume Up. It does tell you the new combination when you try to use the old one but it’s worth noting anyway…

Some search engines are more helpful than others

Incidentally, searching for this information is a lot more helpful in some search engines than in others…

One might think Microsoft could surface its own information a little more clearly in Bing but there are other examples too (Google’s built-in calculator, cinema listings, etc.)

Force-quitting a Windows Phone app

Sometimes, apps just fail. In theory that’s not a problem but, in reality, they need to be force-closed. Again, Windows Phone didn’t previously allow this but recent updates have enabled a force-close: hold the Back button down, then tap the circled X that appears to close the problem app.

Enabling a static IP on an Android device

Talking of long key presses… I recently blew up my home infrastructure server (user error with the power…) and, until I sort things out again, all of our devices are configured with static IP configurations. One device where I struggled to do this was my Hudl tablet, running Android. It seems the answer is to select the Wi-Fi connection I want to use, but to long-press it, at which point there are advanced options to modify the connection and configure static IP details.

Wake on LAN braindump

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I lost quite a bit of sleep over the last few nights, burning the midnight oil trying to get my Dell PowerEdge 840 (server repurposed as a workstation) to work with various Dell management utilities and enable Wake On LAN (WoL) functionality.

It seems that the various OpenManage tools were no help – indeed, many of the information sources I found for configuring the Baseboard Management Controller and kicking SOLProxy and IPMI into life seemed to be out of date, or just not applicable on Windows 7 (although ipmish.exe might be a useful tool if I get it working in future – it can be used to send WoL packets). I did find that, annoyingly, WinRM 2.0 needs an HTTPS connection and that a self-signed certificate will not be acceptable (according to Microsoft knowledge base article 2019527).  If I ever return to the topic of WinRM and IPMI, there’s a useful MSDN article on installation and configuration for Windows Remote Management.

In the end, even though my system is running Windows 7, the answer was contained in a blog post about a PowerEdge 1750, WoL and Debian:

“Pressing ‘CTRL-S’ brings us to a configuration panel which allows for enabling the Wake-On-LAN (WOL) mode of the card.”

I’d been ignoring this because the Ctrl-S boot option advertises itself as the “Broadcom NetXtreme Ethernet Boot Agent” (and I didn’t want to set the machine up to PXE boot) but, sure enough, after changing the Pre-boot Wake On LAN setting to Enable, my PowerEdge 840 started responding to magic packets.
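Incidentally, for anyone wondering what a “magic packet” actually contains: it’s just 102 bytes – six bytes of 0xFF followed by the target MAC address repeated sixteen times – normally sent as a UDP broadcast (ports 0, 7 and 9 are all commonly used). A rough sketch that builds one in bash (the MAC address here is just an example):

```shell
mac="00:1d:a2:2f:20:f9"   # example MAC; substitute the target NIC's address
# six bytes of 0xff, then the MAC (separators stripped) sixteen times, as hex
hex=$(printf 'ff%.0s' 1 2 3 4 5 6
      for i in $(seq 16); do printf '%s' "${mac//:/}"; done)
# convert the hex string to raw bytes (102 in total)
printf '%b' "$(sed 's/../\\x&/g' <<< "$hex")" > wol.pkt
wc -c wol.pkt
# send with any tool that can emit a UDP broadcast, for example (the exact
# syntax varies between netcat flavours): nc -u -b 192.168.1.255 9 < wol.pkt
```

The Depicus tools mentioned below do all of this for you, of course – this is just to show there’s no magic in the magic packet.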

On my WoL adventure, I’d picked up a few more hints/tips too, so I thought it worth blogging them for anyone else looking to follow a similar path…

“Windows 2000 and Windows 2003 do not require that WOL be turned on in the NIC’s or LOM’s firmware, therefore the steps using DOS outlined in the Out-of-Box and Windows NT 4.0 procedures are not necessary and should be skipped.  Enabling WOL with IBAUTIL.EXE, UXDIAG.EXE or B57UDIAG.EXE may be detrimental to WOL under Windows 2000 and Windows 2003.”

  • Presumably this advice also applies to Windows XP, Vista, Server 2008, 7 and Server 2008 R2 as they are also based on the NT kernel, so there is no need to mess around with DOS images and floppy drives to try and configure the NIC…
  • I downloaded Broadcom’s own version (15.0.0.21 19/10/2011) of the Windows drivers for my NIC (even though Windows said that the Microsoft-supplied drivers were current) and I’m pretty sure (although I can’t be certain) that the Broadcom driver exposed advanced NIC properties that were not previously visible to control Wake Up Capabilities and WoL Speed. (Incidentally, I left all three power management checkboxes selected, including “Only allow a magic packet to wake the computer”). There’s more information on these options in the Broadcom Ethernet NIC FAQs.
  • There is a useful-sounding CLI utility called the Broadcom Advanced Control Suite that I didn’t need to download; however its existence might be useful to others.
  • Depicus (Brian Slack) has some fantastic free utilities – and a host of information about WoL.
  • Other WoL tools are available too (although I think Depicus has the landscape pretty much covered).
  • There’s also some more information about WoL on Lifehacker.

Network access control does its job – but is a dirty network such a bad thing?

This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier this week, I was dumped from my email and intranet access (mid database update) as my employer’s VPN and endpoint protection conspired against me. It was several hours before I was finally back on the corporate network, meanwhile I could happily access services on the Internet (my personal cloud) and even corporate email using my mobile phone.

Of course, even IT service companies struggle with their infrastructure from time to time (and I should stress that this is a personal blog – my comments are my own and not endorsed by my employer) but it raises a real issue: for years, companies have defended their perimeters and built up defence-in-depth strategies with rings of security. Perhaps that approach is less valid as end users (consumers) are increasingly mobile and what we really need to do is look at the controls on our data and applications – perhaps a “dirty” network is not such a bad thing if the core services (datacentres, etc.) are adequately secured?

I’m not writing this to “out” my employer’s IT – generally it meets my needs and it’s important to note that I could still go into an office, or pick up email on my phone – but I’d be interested to hear the views of those who work in other organisations – especially as I intend to write a white paper on the subject…

In effect, with a “dirty” corporate network, the perimeter moves from the edge of the organisation to its core and office networks are no more secure than the Wi-Fi access provided to guests today – at the same time as many services move to the cloud. Indeed, why not go the whole way and switch from dedicated WAN links to using the public Internet (with adequate controls to encrypt payloads and to ensure continuity of service, of course)? And surely there’s no need for a VPN when the applications are all provided as web services?

I’m not suggesting it’s a quick fix – but maybe something for many IT departments to consider in adapting to meet the demands of the “four forces of IT industry transformation”: cloud; mobility; big data/analytics and social business?

[Update: Neil Cockerham (@ncockerham) reminded me of the term “de-perimeterisation” – and Ross Dawson (@rossdawson)’s post on tearing down the walls: the future of enterprise tech is exactly what I’m talking about…]

Free Wireshark training – and the 10 truths of network analysis

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week, I was working my way through my RSS backlog when I spotted Thomas Lee’s post highlighting some free Wireshark (formerly Ethereal) webcasts by Network Protocol Specialists.

Wireshark is an open source packet capture and analysis tool (a bit like Microsoft Network Monitor – but available for a variety of platforms as well as in portable application and U3 form). I’ve struggled with deep packet-level networking since my days at Uni’ but a little knowledge in this area can really help when troubleshooting connectivity, so I registered for the first session and found it both worthwhile and interesting as Mike Pennacchi explained:

  • Analyzer placement.
  • Starting up Wireshark.
  • Selecting an interface.
  • Basic capture filters.
  • Capturing packets.
  • Displaying and decoding packets.
  • Saving the trace.
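For reference, capture filters use the standard BPF/tcpdump primitives, so a few examples of the sort of expression covered might look like this (addresses and ports here are just illustrations):

```
host 192.168.1.10          # only traffic to/from one machine
port 53                    # only DNS traffic
not port 22                # everything except SSH (handy on a remote session)
tcp and net 192.168.1.0/24 # primitives combine with and/or/not
```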

The next two sessions will look at:

  • Using display filters effectively.
  • Long term captures.

and:

  • Separating the good traffic from the bad traffic.

If you want to know more, check out the video from session 1 – or register for the next two sessions on the Network Protocol Specialists website.

In the meantime, I’ll round up this post with Mike’s 10 truths of network analysis:

  1. The wire does not lie. It is not out to prove a point, nor is it politically motivated. Interpreting traffic on the wire can help to solve problems.
  2. Packets cannot hang around at a device for more than a few milliseconds. Routers and switches do not have large enough buffers for packets to “hang around” – they may get dropped and retransmitted – or an application may be holding on to them. Network analysis can help to identify where the delay is.
  3. The total response time is the sum of the various deltas. Long response times may be the result of many packets with small gaps or fewer packets with long gaps.
  4. Every application program can be diagnosed. Solving them is a different issue.
  5. Focus on eliminating components that are not part of the problem. Figure out which layer of the OSI model is causing the problem, then implicate or exonerate.
  6. Don’t guess. Only state the facts after thorough analysis.
  7. Don’t believe anything that anyone tells you. Carry out your own troubleshooting and analysis. Be thorough.
  8. Explain the problem and diagnosis in a way that can be understood by all. Avoid misinterpretation and misunderstanding.
  9. Understand how to use the analysis tools before problems occur. And practice!
  10. Look for differences between working and non-working examples. If the normal situation is captured then it’s like a digital photo for comparison.

And finally, if this sort of thing is what interests you, Network Protocol Specialists have created a LinkedIn group for protocol analysis and troubleshooting to provide tips, tricks and valuable information to network professionals, application developers and anyone tasked with solving computer network problems.

Can I fit a PCI expansion card into a different type of slot?

This content is 17 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

The new server that I bought recently has a huge case with loads of room for expansion and that got me thinking about which components I already had that I could reuse.  DVD±RW dual-layer recorder from the old PC, 500GB SATA hard disk from my external drive (swapped for the 250GB disk supplied with the server), a couple of extra NICs… "oh, hang on.  Those NICs look like PCI cards and I can only see a single PCI slot.  Ah!".

I decided to RTFM and, according to the technical specifications, my server has five I/O slots (all full-height, full-length) as follows:

  • 2 x 64-bit 133MHz PCI-X
  • 1 x PCIe (x1)
  • 1 x PCIe (x8)
  • 1 x 32-bit 33MHz legacy slot

I knew that the spare NICs I had were PCI cards but would they fit in a PCI-X slot?  Yes, as it happens.

I found a really useful article about avoiding PCI problems that explained the differences between the various peripheral component interconnect (PCI) card formats and it turned out not to be as big an issue as I first thought.  It seems that the PCI specification allows for two signalling voltages (3.3V and 5V) as well as different bus widths (32- or 64-bit).  32-bit cards have 124 pins whilst 64-bit cards have 184 pins; however, to indicate which signalling voltage is supported, notches are used at different points on the card – 5V cards have the notch at pin positions 50 and 51 whilst 3.3V cards have the notch closer to the backplate, at pin positions 12 and 13.  Furthermore, some cards (like the NICs I wanted to use) have notches in both positions, indicating that they will work at either signalling voltage – these are known as universal cards.  Meanwhile, PCI-X (PCI eXtended) is a development of PCI and, whilst offering higher speeds and a longer, 64-bit, connection slot, is also backwards-compatible with PCI cards, allowing me to use my universal PCI cards in a PCI-X slot (albeit slowing the whole bus down to 32-bit 33MHz).  PCIe (PCI Express) is a different standard, with a radically different connector and a serial (switched) architecture (HowStuffWorks has a great explanation of this).  My system has a single-lane (1x) and an 8-lane (8x) connector, but 1x and 4x PCIe cards will work in the 8x slot.

This illustration shows the various slot types on my motherboard: an 8x PCIe at the top, then a 1x PCIe, two 64-bit PCI-X slots and, finally, one legacy 32-bit 5V PCI slot.

After adding the extra NICs (one in the 32-bit legacy 33MHz slot and the other in one of the PCI-X slots) everything seemed to fit without resorting to the use of heavy tools and when I switched on the computer it seemed to boot up normally, without any pops, bangs or puffs of smoke.

All that was needed was to get some drivers for Windows Server 2008 (these are old 100Mbps cards that have been sitting in my "box of PC bits" for a long time now).  Windows Device Manager reported the vendor and device IDs as 8086 and 1229 respectively (I already knew from the first half of the MAC address that these were Intel NICs), from which I could track down the vendor and device details and find that device 1229 is an 82550/1/7/8/9 EtherExpress PRO/100(B) Ethernet Adapter.  Despite this being a discontinued product, searching the Intel Download Center turned up a suitable Windows Vista (64-bit) driver that was backwards-compatible with the Intel 82550 Fast Ethernet Controller and I soon had the NICs up and running in Windows Server 2008, reporting themselves as Intel PRO/100+ Management Adapters (including various custom property pages provided by Intel for teaming, VLAN support, link speed, power management and boot options).

So, it seems that, despite the variety of formats, not having exactly the right PCI slot is not necessarily an issue.  PCI Express is an entirely different issue but, for now, my 32-bit universal PCI card is working fine in a 64-bit PCI-X slot.

BT’s view of networks in the 21st century

This content is 19 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last year, I wrote about the emergence of MPLS as an alternative network technology to traditional leased lines. It seems that everyone I work with is in the process of, or thinking about, dumping their kilostream/megastream/frame relay links and moving to something more cost-efficient.

THUS have a full-page advert in this week’s IT Week for their National Ethernet (a UK-based network with points of presence from the Shetland Islands to the south coast of England, but strangely none in Wales, east or south-west England). COLT have their EuroLAN (only three UK POPs, but coverage in many major European cities) and BT are touting their global presence, whilst investing heavily in their 21st Century Network (dubbed 21CN).

Last week, I attended a presentation given by BT about what they call the “digital networked economy” (and their 21CN). Although they wouldn’t share their slide deck with me, I’m not aware of any non-disclosure agreement and much of the following information is available from a Google search anyway!

BT explained that the “inter-human web” has arrived. After the Internet had existed for many years as an e-mail and file transfer mechanism, mainly used by government and educational establishments, Sir Tim Berners-Lee invented the world wide web, making the Internet user-friendly. Now we have a collaborative infrastructure built on e-mail, file transfer and the human aspects that the web provides and furthermore, according to BT, “a hurricane is ripping our industry apart”:

  • Traditional voice services face increased market pressure – 4 major US telcos lost 2% of their retail market base in one quarter.
  • Next-generation networks (NGNs) offer greater bandwidth and have taken the first steps towards replacing terrestrial television – viewers of last summer’s Live 8 concerts on AOL outnumbered MTV viewers 2:1 and BSkyB has linked up with EasyNet to offer broadband and telephony services combined with satellite television.
  • Revenue per megabit for broadband connectivity has collapsed – meanwhile UK connections are now pushing past the 10Mbps mark, the US has 25Mbps, and the Far East is looking at 50-100Mbps.
  • Network/service separation is expanding the base for traditional telcos’ competitors – using IT, homeworkers can be called wherever they are, and no-one need know that they are not in the office.

BT claims that its 21CN is about giving its customers control to enable communications; offering new services (faster than previously); and reducing costs to grow cash. They are betting the company on 21CN (to put this into context, BT is investing more in 21CN than the UK Government is investing in our road infrastructure).

Having said that, they are coming from a pretty poor starting point. The current infrastructure is a mess, with a mixture of networking technologies. 21CN is intended to offer Ethernet right back to the Exchange, with copper (DSL), wireless, and fibre links through to aggregators at 5500 sites, and onwards to BT’s core IP/MPLS/wavelength division multiplexing (WDM) network.

Ethernet has not traditionally been a successful wide area networking technology, so BT is investing in the use of carrier grade Ethernet, looking at fast restoration, auto discovery, scalability, class-based queuing, protection switching, fault and performance monitoring.

BT’s vision sees 21CN at the heart of the UK economy, innovating to provide “more than dumb fat pipes” connecting data centres, branches, headquarters and campus buildings, home and mobile workers to offer enterprise virtualisation – a global virtual network with virtual applications and ubiquitous access to any application from any broadband location. They cite example technologies (many of which are here today) including:

  • VOIP (BT Communicator).
  • Fixed-mobile convergence (BT Fusion).
  • Broadband home (network-enabled wireless hubs).
  • Multimedia infotainment (BT Livetime mobile TV).
  • Application assured interface (single, robust and flexible platform which is able to prioritise at the application layer, e.g. to offer priority to applications which require low latency).

In the words of BT’s Tim Hubbard “21CN is big, bold, and it’s going to change the world forever”. I’m not sure if 21CN will change the world for ever, but NGNs in general will and BT is well placed to capitalise on this, as it builds a seamless global MPLS network, rolling out a new POP every week.