Android for under-13s: no Google accounts; no family sharing

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

We’re entering a new phase in the Wilson family as my eldest son starts secondary school next week and my youngest becomes more and more tech-aware.

The nearly-10-year-old just wants a reasonably-priced, reasonably-specced tablet as my original iPad no longer suits his needs (stuck back on iOS 5 and with a pretty low spec by today’s standards) – I’m sure we’ll work something out.

A bigger challenge is a phone for the nearly-12-year-old. We’ve said he can have his own smartphone when his birthday comes and effectively there are three (well, two) platforms to consider:

  • Windows Mobile: limited app availability; inexpensive handsets; uncertain future.
  • Apple iOS: expensive hardware; good app support.
  • Google Android: wide availability of apps and hardware; fragmented OS.

Really, Windows isn’t an option (for consumers, at least – it’s a different story in the enterprise); Apple is only viable if he has a hand-me-down device (which is a possibility); but he’s been doing his research and is looking at prices/specs for various Android devices. The current favourite is an Elephone P9000 – which looks like a decent phone for a reasonable price – as long as I can find a reliable UK supplier (i.e. not grey market).

In the meantime, and to see how he gets on before we commit to a device purchase, I’ve given him an old Samsung Galaxy S3 Mini that I had in a drawer, with a giffgaff SIM in it. Because it’s an Android device, he gets the best experience if he uses a Google account… and that’s where the trouble started.

We went to sign up, added some details, and promptly found that you have to be 13 to open a Google account. And unlike Apple iCloud Family Sharing, which I have set up for the old iPhones that the boys use around the house, the Google equivalent (Google Play Family Library) also needs all of the family members to be at least 13. There simply appears to be no option for younger children to use Google services.

Maybe that’s because Google’s primary business is selling advertising, and advertising to children is questionable from a moral standpoint (though YouTube has come up with a child-friendly product in YouTube Kids).

I tried signing in as me – which let me download some apps but also meant he had access to my information – like all of my contacts (easily switched off but still undesirable).

Luckily, it seems I created him a Gmail account when he was 5 weeks old (prescient, some might say) and I was able to find my way into that and get him going. Sadly, it seems I was not as mentally sharp when his little brother was born…

(As an aside, I originally gave my son a Nokia “feature phone” to use and he looked bemused – he later confessed that was because he didn’t know how to use it!)

Postscript: I’ve since given my youngest son my Tesco Hudl and was able to sign up for a Google account without being asked to provide date of birth details…

Cyclist abuse

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Today, the phrase “Jeremy Vine” is trending on Twitter after the BBC presenter published a video of the abuse he allegedly suffered at the hands of a motorist who didn’t like the way he cycled through West London.

To be fair, Mr Vine does appear to have stopped his bike and blocked the road when he could simply have pulled over as the road widened, but the tirade of verbal (and, it seems, physical) abuse poured on him was totally unreasonable. Sadly, this kind of behaviour is not unusual, though most of us are not prominent journalists with a good network of media contacts to help highlight the issue.

(In addition to driving an average of around 25,000 miles a year for the last 27 years) I regularly cycle – road, mountain and commuting – and, whilst it should be noted that I see a fair amount of cyclist-induced stupidity too, Jeremy Vine’s incident is not an isolated one. Just this weekend:

  • I was cycling downhill in the town where we live, following my son at around 28mph (in a 30mph limit), when an impatient Audi driver decided to squeeze into the gap between father and son, and then tailgate my 11-year-old as he rode along. My son pulled over when it was safe to do so but he was scared – and there was no justification for the driver’s actions.
  • Then, whilst out with a small group yesterday morning, the driver of a Nissan Qashqai tore past sounding a long blast on his horn (presumably in protest that two of the three of us were riding side by side – which is perfectly acceptable, especially as this was not a narrow road). That kind of behaviour is pretty normal, as pretty much any road cyclist will attest…
  • Finally, whilst I was turning left, a motorist overtook me on the junction itself, leaving around 18 inches to ride in between his car and the kerb, rather than following the Highway Code rule to “give motorcyclists, cyclists and horse riders at least as much room as you would when overtaking a car”. I called out and was actually forced to use his car to steady myself. As he drove off, the usual hand signals were observed, along with some unintelligible expletives (from the driver, not me – I was in shock).

All of this in around 24 hours – and against a landscape where there are far more cyclists on UK roads (so motorists are more aware of us)…

Maybe it was all just a bit of Bank Holiday summer madness…

Not all software consumed remotely is a cloud service

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Helping a customer to move away from physical datacentres and into the cloud has been an exciting project to work on but my scope was purely the Microsoft workstream: migrating to Office 365 and a virtual datacentre in Azure. There’s much more to be done to move towards the consumption of software as a service (SaaS) in a disaggregated model – and many more providers to consider.

What’s become evident to me in recent weeks is that lots of software is still consumed in a traditional manner, just as a hosted service. Take, for example, a financial services organisation that was ready to allow my customer access to their “private cloud” over a VPN from the virtual datacentre in Azure – until we hit a roadblock over routing the traffic. The Azure virtual datacentre is an extension of the customer’s network – using private IP addresses – but the service provider wanted to work with public IPs, which led to some extra routers being deployed (and some NATting of addresses somewhere along the way).

Then along came another provider – with human resources applications accessed over insecure HTTP (!). Not surprisingly, access across the Internet was not allowed and again we were relying on site-to-site VPNs to create a tunnel, but the private IPs on our side were something the provider couldn’t cope with. More network wizardry was required.

I’m sure there’s a more elegant way to deal with this but my point is this: not all software consumed remotely is a cloud service. It may be licensed per user on a subscription model but if I can’t easily connect to the service from a client application (which will often be a browser) then it’s not really SaaS. And don’t get me started on the abuse of the term “private cloud”.

There’s a diagram I often use when talking to customers about different types of cloud deployments. It’s been around for years (and it’s not mine) but it’s based on the old NIST definitions.

Cloud computing delivery models
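
For reference, the breakdown behind that diagram is roughly as follows (my summary of the widely-used stack, not the original graphic):

  • On-premises: you manage the whole stack – applications, data, runtime, middleware, operating system, virtualisation, servers, storage and networking.
  • IaaS: the provider manages the virtualisation, servers, storage and networking; you manage everything above that.
  • PaaS: the provider manages everything except the applications and data.
  • SaaS: the provider manages the entire stack and you simply consume the service.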

One customer highlighted to me recently that there are probably some extra columns between on-premises and IaaS for hosted and co-location services, but neither of these is “cloud”. They are old IT – and not really much more than a different sort of “on-premises”.

Critically, the NIST description of SaaS reads:

“The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.”

The sooner hosted services are offered in a multi-tenant model that facilitates on-demand consumption and broad network access, the better. Until then, we’ll be stuck in a world of site-to-site VPNs and NATted IP addresses…

Improving application performance from Azure with some network routing changes

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Over the last few months, I’ve been working with a UK Government customer to move them from a legacy managed services contract with a systems integrator to a disaggregated solution built around SaaS services and a virtual datacentre in Azure.  I’d like to write a blog post on that but will have to be careful about confidentiality and it’s probably better that I wait until (hopefully) a risual case study is created.

One of the challenges we came across in recent weeks was application performance to a third-party-hosted solution that is accessed via a site-to-site VPN from the virtual datacentre in Azure.

My understanding is that outside access to Microsoft services hits a local point of presence (using geographically-localised DNS entries) and is then routed across the Microsoft global network to the appropriate datacentre.

The third-party application is hosted in Bedford (UK) and the virtual datacentre is in West Europe (Netherlands), so the data flows should have stayed within Europe. Even so, a traceroute from the third-party provider’s routers to our VPN endpoint suggested several long (~140ms) hops once traffic hit the Microsoft network. These long hops were adding significant latency and reducing application performance.

I logged a call under the customer’s Azure support contract and after several days of looking into the issue, then identifying a resolution, Microsoft came back and said words to the effect of “it should be fixed now – can you try again?”.  Sure enough, ping times (not the most accurate performance test it should be said) were significantly reduced and a traceroute showed that the last few hops on the route were now down to a few milliseconds (and some changes in the route). And overnight reports that had been taking significantly longer than previously came down to a fraction of the time – a massive improvement in application performance.
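
If you want to check the path for yourself, a traceroute from a VM in the virtual datacentre will show the hops and their latency. Test-NetConnection is a standard Windows PowerShell cmdlet; the endpoint name below is a hypothetical example:

# Trace the route and measure latency from an Azure VM to the remote VPN endpoint
Test-NetConnection -ComputerName vpn.thirdparty.example.co.uk -TraceRoute

# The classic tools work too (ICMP may be blocked on some hops)
tracert vpn.thirdparty.example.co.uk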

I asked Microsoft what had been done and they told me that the upstream provider was an Asian telco (Singtel) and that Microsoft didn’t have direct peering with them in Europe – only in Los Angeles and San Francisco, as well as in Asia.

The Microsoft global network defaults to sending peer routes learned in one location to the rest of the network.  Since the preference of the Singtel routes on the West Coast of the USA was higher than the preference of the Singtel routes learned in Europe, the Microsoft network preferred to carry the traffic to the West Coast of the US.  Because most of Singtel’s customers are based in Asia, it generally makes sense to carry traffic in that direction.

The resolution was to reconfigure the network to stop sending the Singtel routes learned in North America to Europe and to use one of Singtel’s local transit providers in Europe to reach them.

So, if you’re experiencing poor application performance when integrating with services in Azure, the route taken by the network traffic might just be something to consider. Getting changes made in the Microsoft network may not be so easy – but it’s worth a try if something genuinely is awry.

Short takes: calculating file transfer times; Internet breakout from cloud datacentres; and creating a VPN with a Synology NAS

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Another collection of “not-quite-whole-blog-posts”…

File transfer time calculations

There are many bandwidth/file transfer time calculators out there on the ‘net but I found this one particularly easy to work with when trying to assess the likely time to sync some data recently…
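
The sum behind those calculators is straightforward if you’d rather script it – a rough sketch in PowerShell, ignoring protocol overhead (the size and line rate are example figures):

# Estimate transfer time for 100GB over a 100Mbps link
$sizeGB = 100                                # data to transfer, in gigabytes
$rateMbps = 100                              # line rate, in megabits per second
$seconds = ($sizeGB * 8 * 1000) / $rateMbps  # convert GB to megabits, divide by rate
[TimeSpan]::FromSeconds($seconds)            # ~2 hours 13 minutes for this example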

Internet breakout from IaaS

Anyone thinking of using an Azure IaaS environment for Internet breakout (actually not such a bad idea if you have no on-site presence, though be ready to pay for egress data) should just be aware that, because the IP address is in Holland (or Ireland, or wherever), location-aware websites will present themselves accordingly.

One of my customers was recently caught out when Google defaulted to Dutch after they moved their client Internet traffic over to Azure in the West Europe region… just one to remember to flag up in design discussions.

Creating a VPN with a Synology NAS

I’ve been getting increasingly worried about the data I have on a plethora of USB hard disks of varying capacities and wanted to put it in one place, then sync/archive as appropriate to the cloud. To overcome this, I bought a NAS (and there are only really two vendors to consider – QNAP or Synology). The nice thing is that my Synology DS916+ NAS can also take over many of the network services I currently run on my Raspberry Pi – and a few I’ve never got around to setting up, like a VPN endpoint for access to my home network.

So, last night, I finally set up a VPN, following Scott Hanselman’s (@shanselman) article on Setting up a VPN and Remote Desktop back into your home. Scott’s article includes client advice for iPhone and Windows 8.1 (which also worked for me on Windows 10) and the whole process only took a few minutes.

The only point where I needed to differ from Scott’s article was the router configuration (the article is based on a Linksys router and I have a PlusNet Hub One, which I believe is a rebadged BT Home Hub). L2TP is not a pre-defined application to allow access, so I needed to create a new application (I called it L2TP) with UDP ports 500, 1701 and 4500 before I could allow access to my NAS on these ports.
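
On the Windows client side, the VPN connection can also be created from PowerShell rather than through the GUI – a minimal sketch using the standard Add-VpnConnection cmdlet (the connection name, server address and pre-shared key below are placeholders):

# Create an L2TP/IPsec VPN connection with a pre-shared key (Windows 8.1/10)
Add-VpnConnection -Name "Home NAS" -ServerAddress "home.example.com" `
    -TunnelType L2tp -L2tpPsk "pre-shared-key-here" `
    -AuthenticationMethod MSChapv2 -Force    # -Force acknowledges the PSK warning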

Creating an L2TP application in the PlusNet Hub One router firewall

Port forwarding to L2TP in the PlusNet Hub One router firewall

End user computing – the device doesn’t matter

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Following a recent Windows update that “went bad”, I needed to have my work PC rebuilt. That left me with a period when I had work to do, but only a smartphone or my personal devices to work on. To me, this was also a perfect opportunity to put cloud services to work.

So, armed only with a web browser on another PC, I was perfectly able to access email and send/receive IMs (it’s all in Office 365), pester people on Yammer, catch up on some technical videos, etc. There was absolutely nothing (technically) preventing me from doing my job on another device. That’s how End User Computing should work – providing a flexible computing workstyle that’s accessible regardless of the device and the location.

The real issues are not around technology, but process: questions were asked about why I wasn’t following policy and using my company-supplied device; and I was able to answer with clear reasons and details of what I was doing to ensure no customer information was being processed on a non-corporate device. There are technical approaches to ensuring that only approved devices can be used too – but what’s really needed is a change of mindset…

Short takes: pairing my headphones, firewalls and Exchange SMTP communications, tethered photos with a Mac

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Some more snippets that don’t quite make a blog post…

Because I always forget how to do this: how to pair a Plantronics BackBeat PRO headset with a mobile device.

And a little tip whilst troubleshooting connectivity to an Exchange server for hybrid connectivity with Office 365… if telnet ipaddress 25 gives a banner response from the SMTP server then that’s a good thing – if the firewall is interrupting transmission then I’ll get nothing back, or asterisks ********. Joe Palarchio (@JoePalarchio) writes about this (see issue 7) in his post on Common Exchange Online Hybrid Mail Flow Issues. Note that firewalls doing any form of blocking between Exchange servers are unsupported, but that doesn’t stop customers from putting them between their email servers and anything running in the cloud (e.g. a hybrid server in Azure). If you need to do this, then you should have an ANY/ANY rule (i.e. allow free flow of traffic) between the Exchange servers.
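
Incidentally, telnet isn’t installed by default on recent versions of Windows, so PowerShell’s Test-NetConnection can perform a similar check – though it only confirms that the TCP port is reachable, rather than showing the SMTP banner (the server name here is a hypothetical example):

# Check whether port 25 is reachable through the firewall
Test-NetConnection -ComputerName mail.example.com -Port 25
# TcpTestSucceeded : True means the connection got through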

Take photos with OS X Image Capture

Finally, back in 2009, I wrote about tethering a DSLR to a computer and taking pictures using Windows PowerShell (I think I’ve also written about using software to do this). Well, it turns out that the OS X Image Capture utility can also take a photo on a supported camera – either on a timed basis or by pressing a key. Could be useful to know if setting up a time-lapse, or for studio work…

Copy NTFS permissions from one folder/file to another

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m working with a customer who is migrating from on-premises datacentres to the cloud – using a virtual datacentre in Microsoft Azure. One of the challenges we have is around the size of the volumes on a file server: Azure has a maximum disk size of 1023GB and the existing server has a LUN attached that exceeds this size.

We can use other technologies in Windows to expand volumes over multiple disks (working around the 1023GB limit) but the software we intend to use for the migration (Double-Take Move) needs the source and target to match. That means the large volume needs to be reduced in size, which means moving some of the data to a new volume (at least temporarily).
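
As a minimal sketch of the “expand volumes over multiple disks” option, Windows Storage Spaces can pool several smaller disks into one large volume – this assumes Windows Server 2012 or later, and the pool/disk names are examples:

# Pool all available (poolable) data disks into a single storage pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create one large virtual disk across the pool (no resiliency – Azure disks are already replicated)
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" -ResiliencySettingName Simple -UseMaximumSize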

One of my colleagues moved the data (using a method that retained permissions) but the top-level folders that he created were new and only had permissions inherited from their parent. After watching him get more and more frustrated, manually configuring access control lists and comparing them in the Windows Explorer GUI, I thought there had to be a better way.

A spot of googling turned up some useful information from forums and this is what I did to copy NTFS permissions from the source to the target (thanks to Kalatzis Stefanos for his answer on Server Fault).

First of all, export the permissions from the source folder with the icacls.exe command:

icacls D:\data /save perms.txt [/t /c]

(/c continues on error; /t works through subfolders too)

Then, apply these permissions to the target volume. They can be applied at volume level because the export includes the file names and an associated ACL (i.e. it only applies to matching files).

icacls D:\ /restore perms.txt

But what if the source and destination folders/files have different names? That’s answered by Scott Chamberlain in another post, which tells me I can just edit my perms.txt file and change the file/folder name before each ACL.
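
For a handful of folders, PowerShell offers another option – a minimal sketch using the standard Get-Acl/Set-Acl cmdlets to copy one folder’s ACL to another (the paths are examples; unlike icacls /restore, this copies the ACL wholesale rather than matching file names):

# Read the ACL from the source folder and apply it to the target
$acl = Get-Acl -Path 'D:\data\projects'
Set-Acl -Path 'E:\data\projects' -AclObject $acl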

By following this, I was able to export and re-apply permissions on several folders in a few minutes. Definitely a time saver!

Reflecting on riding the #RideStaffs 68-mile sportive

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in 2013, when I bought my first road bike since the “racer” of my teens, the first sportive I took part in was the Tour [of Britain] Ride in Staffordshire – setting out from Stoke-on-Trent. Now I work for a Stafford-based IT services company and, when I heard we were sponsoring the Staffordshire Cycling Festival (@RideStaffs), it gave me the chance to make a return visit, although a little further south this time!

(Ironically, the Tour Ride has moved to my home county of Northamptonshire this year… but I can’t make it.)

So, last Sunday, blessed with some summer sunshine (at last!), I rocked up at Shugborough Hall wearing my risual orange jersey, the only one of the team joining the 68-mile sportive (though quite a few of the guys took part in the 22-miler).

With rolling hills from the off, at Milford we took a sharp left and then Bang! we hit the climb up onto Cannock Chase. The first 30 minutes were slow, grinding my way up onto the Chase until we turned left on Brindley Heath and headed down towards Rugeley. I’d just got going at full speed (hitting just over 60kph) when I realised I needed to take a left turn halfway down a hill and grabbed the brakes hard – no discs on my road bike! I managed to scrub off speed and make the turn, then hooked onto the back of a small peloton with two other riders down towards Rugeley. After taking turns for a while, we hit the A51 and missed the route sign – but it seemed wrong to be heading west so quickly and, as we were heading back towards Shugborough, I turned around and retraced my route, picking up the correct route again a mile back down the road and passing my hotel from the previous night!

The next section took in mostly flat roads near Lichfield and Alrewas, nipping over the border into Derbyshire before crossing the River Trent and heading up to the first stop at Barton-under-Needwood. After taking on water and flapjack, I started chatting with the owners of two beautifully restored 1970s Colnagos with glorious etching and chromework, one of whom even had a traditional wool jersey, cap (no helmets in the ’70s, I guess) and leather saddle bag!

Despite my slow start, I’d averaged over 27kph but realised why as we set off again towards Uttoxeter – turning into the wind that had previously been helping me along (though Hanbury Bank offered a welcome break). To make matters worse, my bike seemed to be grinding from the bottom bracket… time to see Kev at Olney Bikes again for repairs…

After another stop in Uttoxeter (where one rider was conducting the town band – he later told me they split over “musical differences”!) we set off again over some undulating terrain towards the last major climb at Sandon (and what a killer that was).

I skipped the final stop (it was only for water and I was carrying plenty of fluids) and pushed on with a large group riding into Stafford – past the Technology Park where our offices are – but was dropped again as we turned left up past the University. From there it was a steady ride on into Shugborough… ending a slightly-extended 68-mile ride!

As I crossed the line, I was handed my goody bag musette-style, including a variety of items but most importantly a beer token! My official time was a respectable 5 hours 8 minutes, but Strava told me I’d only been moving for 4 hours and 39 minutes, climbing 1,235 metres in the process.

Even though I’d missed the rest of the risual riders (the 22 mile sportive set off later and obviously got back sooner!) I stuck around for a while to watch some of the Tour de France coverage and got some lunch from the wood fired pizza stand (a long wait but nice pizza), before heading home… wishing I hadn’t picked a sportive quite so far away!

All in all, it was a fantastic day – and I was very lucky with the weather. Paul at Leadout Cycling organised a great event and I hope to make it back another year. It was also a timely reminder that, even without heading up onto the North Staffordshire Moorlands, there are still plenty of hills around Staffordshire and that my normal routes around South Northants, North Bucks and Beds are relatively flat by comparison…

…as well as that it’s just 4 more weeks until my next sportive – 100 miles from London to Surrey and back again (hopefully not cut short this year)!

Thoughts on the use of Sway as a presentation tool

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A couple of weeks ago, I gave a short talk on adopting cloud services at Milton Keynes Geek Night (MKGN). I’ll admit to being a little nervous – the talk was supposed to be 5 minutes (and I had more to say than would ever have fitted – I later learned it’s pretty rare for anyone to stick to their allotted time) and I’m not used to speaking to an audience larger than a meeting room-full (a typical MKGN audience at the current venue is around 100). Just to make things a little harder for myself, I decided to use Microsoft Sway for my visual aids.

For those who are unfamiliar with Sway, I got excited about it when it first previewed in 2014. Since then it’s shipped and is available as part of Office 365 or as a standalone product. It’s a tool for presenting content from a variety of sources in a visually-appealing style that works cross-platform and cross-form factor.

Even though Sway has an app for Windows 10, some of the content (e.g. embedded tweets) relies on having an Internet connection at the time of presenting. Wi-Fi at conferences is notoriously bad and 3G/4G at the MKGN venue is not much better (although it did hold up for me on the night). So, with that and the 7Ps in mind, I had PowerPoint and PDF fallback plans, but I persisted with Sway.

I’m still not sure Sway is a presentation tool though…

You see, as I swiped and clicked my way through, the audience saw everything I saw. I prefer the simplicity of a picture, with my notes on my screen – I talk, the audience listens, the image reinforces the view. Sway didn’t work for me like that. Indeed, Sway falls into what Matt Ballantine recently described as “the latest whizz-bang tool”, in a post about a request he was given to knock up a few slides of PowerPoint:

“PowerPoint [… is …] rarely used to perform the task it was designed to do […] The latest whizz-bang tool is the answer! Prezi, Sway or whatever it is that the cool kids are using. Actually, though, the answer probably lies as much in new skills that people need to develop to communicate in a Digital era. Questions like:

  • Who is your audience?
  • What is the message that you are trying to deliver?
  • Where will they be?
  • How will they consume your content?
  • How can you extend the conversation?”

We use Sway at work for weekly updates on what’s been happening in the company – internal communications that used to make use of lengthy HTML emails (I almost never used to read to the end) became more immersive and easier to engage with. And that’s where I think Sway fits – as a tool for communications that are read asynchronously. Not as a tool for presenting a message to an audience in real time.

You can judge for yourself whether Sway works as a presentation tool by taking a look at the Sway I used for my MKGN talk.