Microsoft Azure URLs

I’ve been doing a lot of Azure reading recently and it struck me that there are many different URLs in use that would be useful to record somewhere.

John Savill (@NTFAQGuy) has noted the main ones in his Windows IT Pro post on Azure URLs to whitelist but I’ll expand on them here to highlight the purpose of each one (and to add some extras):

*.windowsazure.com

https://manage.windowsazure.com/ is still used for the legacy (classic) Azure Service Manager (ASM) portal. http://windowsazure.com/ redirects to http://azure.microsoft.com/

*.azure.com

https://portal.azure.com/ is used for the Azure Resource Manager (ARM) portal. http://azure.com/ redirects to http://azure.microsoft.com/

*.*.core.windows.net

This URL pattern is used for access to Azure Storage (a short sketch after the list shows how these URLs are composed):

  • File access: https://storageaccountname.file.core.windows.net/sharename/foldername/foldername/filename
  • Containers in blob storage: https://storageaccountname.blob.core.windows.net/containername/blobname
  • Table storage: http://storageaccountname.table.core.windows.net/tablename
  • Queue storage: http://storageaccountname.queue.core.windows.net/queuename
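
For illustration, here’s a minimal Python sketch that composes these endpoint URLs; the account and resource names are hypothetical placeholders:

```python
# Sketch: composing Azure Storage endpoint URLs for each service type.
# The account name and resource paths below are hypothetical examples.

STORAGE_SUFFIX = "core.windows.net"

def storage_url(account: str, service: str, path: str) -> str:
    """Build an Azure Storage URL for a given service (blob, file, table or queue)."""
    return f"https://{account}.{service}.{STORAGE_SUFFIX}/{path}"

print(storage_url("mystorageacct", "blob", "containername/blobname"))
print(storage_url("mystorageacct", "file", "sharename/foldername/filename"))
print(storage_url("mystorageacct", "table", "tablename"))
print(storage_url("mystorageacct", "queue", "queuename"))
```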

*.cloudapp.net

Domain name used for cloud services.

*.azurewebsites.net

Domain name used for Azure App Service websites. Each site also has Kudu at https://sitename.scm.azurewebsites.net.

*.database.windows.net

Domain name used for Azure SQL database.

*.trafficmanager.net

Domain name used for Azure Traffic Manager.

*.azureedge.net

Content Delivery Network (CDN) endpoints are available via http://cdnname.azureedge.net/.

*.streaming.mediaservices.windows.net

Domain name used for streaming media services (https://mediaservicename.streaming.mediaservices.windows.net/identifier/filename).

*.onmicrosoft.com

Domain name used for the Microsoft Online Services tenant (tenantname.onmicrosoft.com) – shared between multiple online services, including Azure but also Office 365, etc.

[Edited 12/9/16 – updated to include streaming media services and tenant URL]

Removing an auto-signature on the MSDN and TechNet forums

Back in 2008, I was awarded Microsoft Most Valuable Professional (MVP) status for Virtual Machine technology. Unfortunately I wasn’t doing enough Hyper-V work (and my employer at the time didn’t understand the value of employing an MVP) so, in 2010, the Product Group had to let me go. Disappointing as that was, I understood and I moved on to do other things.

Every time I post on the MSDN or TechNet forums though, there’s a signature appended to my posts that says “Mark Wilson (MVP Virtual Machine) – http://www.markwilson.co.uk/blog/”. I’ve been editing manually to remove the MVP text (I don’t want to make false claims about my status) but I couldn’t see how to remove the option completely – it didn’t seem to appear anywhere in my forum profile.

Only after posting on the forums to ask how to prevent this behaviour did I find the answer in a “related thread” that was highlighted:

“Yes, you can change that setting by clicking on Quick Access, and then My Settings. You’ll then see a section where you can add your signature.

“For a more detailed guide, including screenshots and instructions on how to insert HTML content, see this post: http://blogs.technet.com/b/rmilne/archive/2013/01/31/how-to-tweak-your-technet-forum-signature.aspx”

Thanks to Keith Langmaid for that gem!

My forum preferences have now been duly edited to remove the offending text!

My first few weeks with a Synology Diskstation NAS

Earlier this summer, I bought myself a new NAS. I’d lost faith in my old Netgear ReadyNAS devices a while ago, after a failure took out both halves of a RAID 1 mirror and I lost all the data on one of them. That actually taught me two important lessons:

  1. Data doesn’t exist unless it’s backed up in at least two places.
  2. RAID 1 is not suitable for fault-tolerant backups.

As I wrote a few weeks ago, my new model is to get all of the data into one place, then sync/archive as appropriate to the cloud. Anything on any other PCs, external disks, etc. should be considered transient.

For the device itself, it seems that there are only really two vendors to consider – QNAP or Synology (maybe a Drobo). I chose Synology – and elected to go with a 4-bay model, picking up a Synology Diskstation DS916+ (8GB) and kitting it out with 4 Hitachi (HGST) Deskstar NAS drives.

Unfortunately, I had a little hiccup in that I’d ordered the device pre-configured. The weight of the disks was clearly too much for the plastic drive carriers to cope with but, once again, span.com sorted things out for me and I soon had a replacement in my possession.

Over the last few weeks, I’ve been building up what I’m doing with the Diskstation: providing home drives for the family; syncing all of my cloud storage; acting as a VPN endpoint; providing DHCP and DNS services; running anti-virus checks; and backing up key files to Microsoft Azure.

This last workload is worthy of discussion, as it took me a couple of weeks to push my data to the cloud. Setup was fairly straightforward, following Paris Polyzos (@ppolyzos)’s advice on backing up Synology NAS data in Microsoft Azure Cool Storage, but the volume of data and the network it had to traverse were more problematic.
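
The post I followed drives the backup from the Synology side, but for a flavour of what’s happening underneath, here’s a minimal sketch using today’s azure-storage-blob Python SDK (which postdates this post) to push a file straight into the cool tier; the connection string, container and file names are placeholders:

```python
# A minimal sketch, assuming the azure-storage-blob (v12) Python SDK.
# The connection string, container and blob names are placeholders.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="nas-backups", blob="photos-2016.tar.gz")

with open("photos-2016.tar.gz", "rb") as data:
    # Write the blob directly into the cool access tier.
    blob.upload_blob(data, overwrite=True, standard_blob_tier=StandardBlobTier.Cool)
```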

Initially I had issues with timeouts due to a TP-Link HomeplugAV (powerline Ethernet) device between the ISP router and the DNS server that kept failing. I worked around that by moving DNS onto the NAS, and physically locating the NAS next to the router (bypassing the problematic section of network). Then it was just a case of waiting for my abysmal home Internet connection to cope with multi-GB upstream transfers…

I have no doubts that this NAS, albeit over-specified for a family (because I wanted an Intel-based model), is a great device but I did need to work around some issues with vibration noise. It’s also slightly frustrating that there is no integration between the DHCP and DNS services (I’ve been spoiled working with Windows Server…), the Security Advisor reports are a bit dramatic, and some of the Linux commands are missing – but I really haven’t found anything yet that’s a show-stopper.

Now I need to get back to consolidating data onto the device, and moving more of it into the cloud…

Preventing vibration noise on a Synology NAS

My Synology Diskstation NAS (DS916+) has been a great purchase but I have had some issues with noise from vibration. Over the course of a few weeks, complaints from family members meant that I had to move the NAS from my desk, onto the floor, then into the garage (before I brought it into the kitchen to be next to the Internet connection – but that’s another story). You should be able to hear the noise in the video below (though it seems much louder in real life!):

As can be heard, the vibration noise reduces when I put pressure on the chassis. It seems that it’s actually caused by the screw-less drive carriers that Synology use on their NASs.

Thanks to advice from Chipware on Reddit, I was able to add some sticky-backed Velcro (just the fluffy side) between the disk carrier and the disk, and on the outside of the disk carriers. The carriers now fit the NAS more snugly and, crucially, the Velcro serves as a shock absorber, preventing any more vibrations…

And, at just £2 for a metre of sticky-backed Velcro (which I only used a few centimetres of), it was a pretty inexpensive fix.

Chipware says in his post that:

“I definitely think the 4 Velcro pieces connecting the sled to the cage solved the problem. The pieces between drive and sled connection provides negligible dampening.”

I initially only put 4 pieces on the outside of the carrier (2 of them can be seen in the picture) but my experience was that adding 2 more pieces on the disk itself (underneath the carrier) also helped. Of course, your mileage may vary (and any changes you make are at your own risk – I’m not responsible for any problems they may cause).

After making these modifications there’s no more noise, just a relatively quiet fan noise (as to be expected) and the NAS is back on my desk!

A “Snooper’s Charter” for the postal system?

I spotted this on my Facebook feed today, from an old University friend, who now works as a Senior Cyber Security Consultant:

“I will shortly be writing to my MP urging him to push the Cabinet to extend it’s Investigatory Powers Bill to mandate that all mail carriers must open all letters they collect, scan their contents, and store those images in an archive for a given period in case law enforcement agencies needed to review their contents. Furthermore, I think it would be reasonable outlaw glue on envelopes altogether…with a recommendation to allow postcards only.

I urge the rest of the UK to do the same as a matter of priority due to concerns around National Security.”

He always had a wicked sense of humour but for those who think this is just banter, it really is the postal mail equivalent of what the UK Government is proposing for email in the Investigatory Powers Bill (nicknamed “The Snooper’s Charter”). The staggering thing is that the UK public is largely unaware – generally engagement with politics here is low and I’d wager that the combination of politics and technology has a particularly high “snooze factor”.

[Perhaps Parliament needs to be transformed to involve some kind of “bake-off” type element with MPs getting voted out each week based on their performance. The Westminster Factor. Britain’s Got Legal Talent. Would that get the public involved?]

Putting aside low social engagement in politics (or anything that’s not a big competition on TV) this quote highlights how out of touch our legislators are with the realities of digital life – and how ridiculous the new law would be if applied to analogue communications…

Android for under-13s: no Google accounts; no family sharing

We’re entering a new phase in the Wilson family as my eldest son starts secondary school next week and my youngest becomes more and more tech-aware.

The nearly-10 year-old just wants a reasonably-priced, reasonably-specced tablet as my original iPad is no longer suiting his needs (stuck back on iOS 5 and with a pretty low spec by today’s standards) – I’m sure we’ll work something out.

A bigger challenge is a phone for the nearly-12 year-old. We’ve said he can have his own smartphone when his birthday comes and effectively there are 3 (well, 2) platforms to consider:

  • Windows Mobile: limited app availability; inexpensive handsets; uncertain future.
  • Apple iOS: expensive hardware; good app support.
  • Google Android: wide availability of apps and hardware; fragmented OS.

Really, Windows isn’t an option (for consumers – different story in the enterprise); Apple is only viable if he has a hand-me-down device (which is a possibility); but he’s been doing his research, and is looking at price/specs for various Android devices.  The current favourite is an Elephone P9000 – which looks like a decent phone for a reasonable price – as long as I can find a reliable UK supplier (i.e. not grey market).

In the meantime, and to see how he gets on before we commit to a device purchase, I’ve given him an old Samsung Galaxy S3 Mini that I had in a drawer and I put a giffgaff SIM in. Because it’s a Google device, he gets the best experience if he uses a Google account… and that’s where the trouble started.

We went to sign up, added some details, and promptly found that you have to be 13 to open a Google account. And unlike with Apple iCloud Family Sharing, where I have family sharing set up for the old iPhones that the boys use around the house, the Google equivalent (Google Play Family Library) also needs all of the family members to be at least 13. There simply appears to be no option for younger children to use Google services.

Maybe that’s because Google’s primary business is about selling advertising and advertising to children is questionable from a moral standpoint (though YouTube have come up with a child-friendly product).

I tried signing in as me – which let me download some apps but also meant he had access to my information – like all of my contacts (easily switched off but still undesirable).

Luckily, it seems I created him a Gmail account when he was 5 weeks old (prescient, some might say) and I was able to find my way into that and get him going. Sadly, it seems I was not as mentally sharp when his little brother was born…

(As an aside, I originally gave my son a Nokia “feature phone” to use and he looked bemused – he later confessed that was because he didn’t know how to use it!)

Postscript: I’ve since given my youngest son my Tesco Hudl and was able to sign up for a Google account without being asked to provide date of birth details…

Cyclist abuse

Today, the phrase “Jeremy Vine” is trending on Twitter after the BBC presenter published a video of the abuse he allegedly suffered at the hands of a motorist who didn’t like the way he cycled through West London:

To be fair, Mr Vine does appear to have stopped his bike and blocked the road when he could simply have pulled over as the road widened but the tirade of verbal (and it seems physical) abuse poured on him was totally unreasonable. Sadly, this kind of behaviour is not unusual, though most of us are not prominent journalists with a good network of media contacts to help highlight the issue:

[In addition to driving an average of around 25,000 miles a year for the last 27 years] I regularly cycle – road, mountain and commuting – and, whilst it should be noted that I see a fair amount of cyclist-induced stupidity too, Jeremy Vine’s incident is not an isolated one. Just this weekend:

  • I was cycling downhill in the town where we live, following my son at around 28mph (in a 30mph limit) when an impatient Audi driver decided to squeeze into the gap between father and son, and then tailgate my 11yo as he rode along. My son pulled over when it was safe to do so but he was scared – and there was no justification for the driver’s actions.
  • Then, whilst out with a small group yesterday morning, the driver of a Nissan Qashqai tore past sounding a long blast on his horn (presumably in protest that two of the three of us were riding side by side – which is perfectly acceptable, especially as this was not a narrow road). That kind of behaviour is pretty normal, as pretty much any road cyclist will attest…
  • Finally, whilst turning left, a motorist overtook me, on the junction itself, leaving around 18 inches to ride in between his car and the kerb, rather than following the Highway Code rule to “give motorcyclists, cyclists and horse riders at least as much room as you would when overtaking a car”. I called out and was actually forced to use his car to steady myself. As he drove off, the usual hand signals were observed, along with some unintelligible expletives (from the driver, not me – I was in shock).

All of this in around 24 hours – and against a landscape where there are far more cyclists on UK roads (so motorists should be more aware of us)…

Maybe it was all just a bit of Bank Holiday summer madness…

Not all software consumed remotely is a cloud service

Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner, albeit as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure, but then we hit a roadblock in routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way). Then along came another provider – with human resources applications accessed over unsecured HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel, but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licensed per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-lo services but neither of these are “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner that hosted services are offered in a multi-tenant model that facilitates consumption on demand and broad network access the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Improving application performance from Azure with some network routing changes

Over the last few months, I’ve been working with a UK Government customer to move them from a legacy managed services contract with a systems integrator to a disaggregated solution built around SaaS services and a virtual datacentre in Azure.  I’d like to write a blog post on that but will have to be careful about confidentiality and it’s probably better that I wait until (hopefully) a risual case study is created.

One of the challenges we came across in recent weeks was application performance to a third-party-hosted solution that is accessed via a site-to-site VPN from the virtual datacentre in Azure.

My understanding is that outside access to Microsoft services hits a local point of presence (using geographically-localised DNS entries) and then is routed across the Microsoft global network to the appropriate datacentre.

The third-party application is hosted in Bedford (UK) and the virtual datacentre is in West Europe (Netherlands), so the data flows should have stayed within Europe. Even so, a traceroute from the third-party provider’s routers to our VPN endpoint suggested several long (~140ms) hops once traffic hit the Microsoft network. These long hops were adding significant latency and reducing application performance.

I logged a call under the customer’s Azure support contract and after several days of looking into the issue, then identifying a resolution, Microsoft came back and said words to the effect of “it should be fixed now – can you try again?”.  Sure enough, ping times (not the most accurate performance test it should be said) were significantly reduced and a traceroute showed that the last few hops on the route were now down to a few milliseconds (and some changes in the route). And overnight reports that had been taking significantly longer than previously came down to a fraction of the time – a massive improvement in application performance.
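
For anyone wanting to quantify a change like this, a rough before-and-after comparison can be scripted; this sketch samples round-trip times with the system ping tool (the hostname is a placeholder, and ping is only an approximation of application latency, as noted above):

```python
# Sketch: crude RTT sampling with the system ping tool (Linux/macOS output format).
import re
import statistics
import subprocess

def mean_rtt_ms(host: str, count: int = 10) -> float:
    """Return the mean round-trip time, in milliseconds, to a host."""
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", out)]
    return statistics.mean(times)

print(f"mean RTT: {mean_rtt_ms('vpn-endpoint.example.com'):.1f} ms")  # hypothetical host
```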

I asked Microsoft what had been done and they told me that the upstream provider was an Asian telco (Singtel) and that Microsoft didn’t have direct peering with them in Europe – only in Los Angeles and San Francisco, as well as in Asia.

The Microsoft global network defaults to sending peer routes learned in one location to the rest of the network.  Since the preference of the Singtel routes on the West Coast of the USA was higher than the preference of the Singtel routes learned in Europe, the Microsoft network preferred to carry the traffic to the West Coast of the US.  Because most of Singtel’s customers are based in Asia, it generally makes sense to carry traffic in that direction.

The resolution was to reconfigure the network to stop sending the Singtel routes learned in North America to Europe and to use one of Singtel’s local transit providers in Europe to reach them.

So, if you’re experiencing poor application performance when integrating with services in Azure, the route taken by the network traffic might just be something to consider. Getting changes made in the Microsoft network may not be so easy – but it’s worth a try if something genuinely is awry.

Short takes: calculating file transfer times; Internet breakout from cloud datacentres; and creating a VPN with a Synology NAS

Another collection of “not-quite-whole-blog-posts”…

File transfer time calculations

There are many bandwidth/file transfer time calculators out there on the ‘net but I found this one particularly easy to work with when trying to assess the likely time to sync some data recently…
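
The arithmetic behind these calculators is straightforward; here’s a sketch of the basic calculation (the 80% efficiency factor is my assumption, to allow for protocol overhead):

```python
# Sketch: estimating file transfer time from data size and line rate.
# Assumes a steady, uncontended link; real-world transfers are usually slower.

def transfer_time_hours(size_gb: float, mbps: float, efficiency: float = 0.8) -> float:
    """Estimate the hours needed to move size_gb gigabytes over an mbps link."""
    bits = size_gb * 8 * 1000**3               # decimal gigabytes to bits
    seconds = bits / (mbps * 1000**2 * efficiency)
    return seconds / 3600

# e.g. 100 GB over a 1 Mbps upstream connection: roughly 278 hours (over 11 days!)
print(f"{transfer_time_hours(100, 1):.0f} hours")
```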

Internet breakout from IaaS

Anyone thinking of using an Azure IaaS environment for Internet breakout (actually not such a bad idea if you have no on-site presence, though be ready to pay for egress data) should be aware that, because the IP address is in Holland (or Ireland, or wherever), location-aware websites will present themselves accordingly.

One of my customers was recently caught out when Google defaulted to Dutch after they moved their client Internet traffic over to Azure in the West Europe region… just one to remember to flag up in design discussions.
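
A quick way to check what location websites will infer is to ask a geolocation service from a VM inside the virtual datacentre; this sketch assumes the third-party ipinfo.io service (any similar service would do):

```python
# Sketch: checking the public egress IP address and its apparent country.
# Assumes the third-party ipinfo.io service is reachable.
import json
import urllib.request

with urllib.request.urlopen("https://ipinfo.io/json", timeout=10) as resp:
    info = json.load(resp)

print(f"egress IP: {info.get('ip')}, country: {info.get('country')}")
```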

Creating a VPN with a Synology NAS

I’ve been getting increasingly worried about the data I have on a plethora of USB hard disks of varying capacities and wanted to put it in one place, then sync/archive as appropriate to the cloud. To try and overcome this, I bought a NAS (and there are only really two vendors to consider – QNAP or Synology).  The nice thing is that my Synology DS916+ NAS can also operate many of the network services I currently run on my Raspberry Pi and a few I’ve never got around to setting up – like a VPN endpoint for access to my home network.

So, last night, I finally set up a VPN, following Scott Hanselman’s (@shanselman) article on Setting up a VPN and Remote Desktop back into your home. Scott’s article includes client advice for iPhone and Windows 8.1 (which also worked for me on Windows 10) and the whole process only took a few minutes.

The only point where I needed to differ from Scott’s article was the router configuration (the article is based on a Linksys router and I have a PlusNet Hub One, which I believe is a rebadged BT Home Hub). L2TP is not a pre-defined application to allow access, so I needed to create a new application (I called it L2TP) with UDP ports 500, 1701 and 4500 before I could allow access to my NAS on these ports.

Creating an L2TP application in the PlusNet Hub One router firewall

Port forwarding to L2TP in the PlusNet Hub One router firewall