Removing ads from the Amazon Kindle Fire lock screen (without root)

Yesterday, I wrote about installing the Google Play Store on my Amazon Kindle Fire HD 8 (5th generation) but one point I made was that the script I used didn’t remove the Amazon lock-screen ads as it suggested it would.

It’s possible to pay £10 extra when you buy your Kindle Fire to have the ads removed from the lock screen… and some people have had success in getting theirs removed by asking Amazon Customer Services nicely. Alternatively, if you have the tech skills, I’ve found a fix, thanks to Vlasp over on the XDADevelopers forums and now my Fire is ad-free (although I have to admit the ads have previously inspired me to make the odd purchase)!

Just as when I installed the Google Play Store, I first had to unhide Developer Options (by tapping 7 times on the device serial number in Settings) and enable ADB (the Android Debug Bridge). After connecting to a PC with a USB cable and accepting the connection, I was able to use ADB to control the settings on the Kindle Fire.

Enable ADB in Developer Options (Debugging)

Allow connections from the PC to the Kindle Fire

HowToGeek has an article about installing ADB but I didn’t do that… I used the copy that came with the script I had previously used to install the Google Play Store (from @RootJunky) – simply by opening up the command prompt and changing directory to the folder that had adb.exe in it…

Then, I ran the commands that Vlasp outlines in his XDADevelopers forum post:

adb shell
pm clear com.amazon.kindle.kso
pm hide com.amazon.kindle.kso
exit
adb reboot

Commands to remove ads from Amazon Fire (via ADB)

And, once the Kindle restarted, there were no more ads*!  Just remember to turn ADB off again on the Kindle.

Amazon Kindle Fire HD 8 lock screen with ads removed

*Sometimes the ads may return – just repeat the process and they will be banished again for a while…

Installing Google Play Store on an Amazon Kindle Fire HD

In preparation for my summer holidays this year, I bought a new tablet to replace my aging (and slow) Tesco Hudl. Again, I didn’t want to spend much money – I’ve suffered at the hands of Apple’s built-in obsolescence previously, and the Amazon Kindle Fire HD 8 seemed to fit the bill quite nicely.

The only trouble with a Kindle is it runs FireOS – a fork of Android – rather than a “stock” Android. That means no Google Play store, which means you’re limited to the apps that are in Amazon’s store.  By and large that’s OK – I installed iPlayer, OneDrive, OneNote, Spotify, etc. but there is no YouTube and the browser is Silk, not Chrome.

I tried sideloading packages from unofficial sources, following advice from Arash Soheili (Android Cowboy).  That got me Chrome and YouTube but without the ability to log in to a Google account (so no syncing of shortcuts, no visibility of subscriptions, etc.). It seemed that installing Google Play Store and installing the apps properly needed me to root the Kindle.

Then, last night, I found a HowToGeek article that was a) easy to follow and b) linked to @RootJunky (Tom)’s script that did all the heavy lifting.  Within a few minutes and one reboot I had installed the Google Play store on my Kindle Fire HD; logged in to my Google account; and downloaded Chrome and YouTube – both working perfectly.

RootJunky's Amazon Fire Tablet Tool at work

Just one point to note – Amazon must have updated their method of unlocking the device to remove ads from the lock screen as that part of the script didn’t seem to take effect on my device.

Google Play Store, Chrome and YouTube installed on Kindle Fire HD 8

With the last hurdle out of the way, I can now recommend that my 10 year-old son, who wants to buy a tablet (and is too young for a smartphone), buys something cheap like a Kindle rather than spending far too much money on a more fully-featured tablet in a dying market.

Possible fix for a touch screen that stops recognising input on a Lenovo Flex 15

Last weekend, I had an issue with the touch screen on the family laptop. This not-quite-three-year-old device (running Windows 10) is on its second screen (the first one gave up after 13 months) and the laptop was working fine, just that the touch screen acted like, well, a screen (i.e. no touch).

Helpfully, both Adi Kingsley-Hughes (@the_pc_doc) and Jack Schofield (@jackschofield) chipped in with suggestions but it remained a mystery.

The issue persisted through a reboot (which does cast some doubt on the eventual “fix”) and Lenovo’s published drivers were woefully out-of-date but I found a Dell forum post with something that might have helped in some way:

“Think about it, if you are not using the touchscreen and keeping it active, in this energy efficient world and age, a system would turn off unnecessary devices!!

THE SOLUTION: Device Manager – Universal Serial Bus Controllers – Generic USB Hub Properties -( Under POWER tab: the one that has “HID-compliant Device 100mA” attached) Power Management – UNCHECK-“Allow computer to turn off this device to save power”

If you have problems or not sure if it the correct HID-compliant Device, just look under the Driver Details and hit the drop down box to scroll through all those different labels until it clearly says “Touchscreen” under “Bus Reported Device Description”

Fixed my problem pretty easily.” [Nate97]

I say “might”, because the results were not immediate – and if this worked, then why didn’t a reboot?

I also tried following advice from a Lenovo post and in Lenovo support article HT104469:

“1. Press Windows + X. Select Device Manager.

2. Find the Touch screen driver under Mice and Other Pointing Devices > USB Touchscreen Controller(A111).  You’re going to uninstall this and check the box that says “Delete the driver software for this device”. Restart your computer.

3. If the feature is still not back, open Device Manager -> Human Interface Devices. Right-click HID compliant touch screen, then uninstall. When you restart the PC, it will reinstall.

4. Or if you cannot locate any USB Touchscreen Controller(A111), please try to look for an option called “USB Root Hub (xHCI)” under USB Controllers or Universal Serial Bus. If it was labeled as disabled (a little faded or lighter shade of gray that means it is disabled). Right-click on it then select enable. That may bring the touchscreen back.”

Again, it didn’t seem to make much difference and I went to bed with a non-functional touch screen; however, the next day, just as I was ready to write this off as a hardware issue, the touch screen was working again. I’m not sure which (if either) of these “fixes” worked… but I’m posting this in case it helps someone else…
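
If it happens again, I may try scripting the same disable/re-enable cycle from an elevated PowerShell prompt rather than clicking through Device Manager. This is only a sketch of the approach (using the PnpDevice cmdlets in Windows 8.1/10 – the device name may vary on other machines and I haven’t needed to test it since):

# Find the touch screen device and cycle it (run from an elevated PowerShell prompt)
$touch = Get-PnpDevice -FriendlyName "HID-compliant touch screen"
$touch | Disable-PnpDevice -Confirm:$false
Start-Sleep -Seconds 5
$touch | Enable-PnpDevice -Confirm:$false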

Tracking spin classes with a Garmin; and some thoughts on cycle sportives

So, as I hit the half-way point in an 8-week block of 90 minute Endurance Spin classes with Jason Martindale (@martindale72) – and with the nights drawing in and winter weather making road cycling less attractive – it’s time to start planning my winter training schedule.

I’ve been toying with the idea of getting a turbo-trainer for a while, and I’ve just ordered a Jet Black Z1 Fluid Pro, though UK stock seems to be hard to track down. I’ll also be giving Zwift a try (though I may have to wait for the iOS version to ship as I don’t have any spare PCs with a suitable spec that I can move to the garage, where the bike and turbo will be).

In the meantime, I thought I’d write a bit about my experience of using my Garmin Edge 810 in spin sessions…

Spinning with Garmin

Riding on a spin bike means there’s no speed/cadence recording – and being stationary in a spin studio means there’s no distance – but I still log my workouts on Strava (if only to keep a training record). I can still record my heart rate though (which remains stubbornly low – even if I think I’m working hard). I’ve set up new bike and activity profiles in the Garmin and then all it needs is for me to remember to turn off the GPS in the System Settings before starting the workout.

The end result looks something like this:

So, what’s the point of all this training? Apart from general fitness, I don’t want to have to go back to zero again when I get my bike out of the garage next spring and I like to fit a couple of sportives in each year, which leads me onto some more ramblings…

Some thoughts on the big closed road sportives

This year I rode the Prudential Ride London-Surrey 100 again (this time it wasn’t cut short for me – though many riders had their ride massively reduced due to delays). The verdict: too many people; too much variety in capabilities; too dangerous; won’t be riding this one again (I had more fun in the Ride Staffs 68 earlier in the summer).

The trouble with Ride London (apart from the ballot system and the having to make a separate trip to London to pick up the registration pack) is that it’s just too popular. “How can a sportive be too popular?”, you may ask.

Well 27,000 riders is a lot of people and, although the organisers try to set people off according to ability, some overestimate their skills (and crash – even though 33 injuries from that many riders is a pretty good ratio, 33 is still too many); others clearly didn’t read the rider pack and ride in the middle or on the right side of the road, making it difficult to pass safely; and others chain-gang through in mini-pelotons as if they were a professional team. That mix makes things dangerous. Coming off some of the hills I was having to shout “coming through on the right” to get slower riders to move left (the bicycling equivalent of a motorway with everyone driving in the middle lane, the left lane empty, and lane 3 backing up…). In one place I came to a dead halt and got marooned on the wrong side of the road at the bottom of an incline, waiting for a gap in a steady stream of riders coming off the hill I’d just descended at 30-40mph before I could cross back to the left. Yes, the public whose neighbourhoods we rode through were great, and the atmosphere riding on closed roads through central London is epic, but on balance it was a pain to get to and a long day that could have been more enjoyable than it was.

I hit my goal of riding the full route in under 6 hours – according to Strava’s moving time, rather than the official time (which includes the stops for accidents, etc.) – so I feel I’ve done London now. Someone else can have the place next year…

I could do the Tour of Cambridgeshire again, but last time I did that (in 2015) it took nearly an hour to get over the start line and I missed the cutoff for the full route (despite riding at a decent pace) – which kind of put me off that event…

So, 2017 will see me riding Vélo Birmingham, a new closed road sportive, with just 15,000 riders. Many people seem to be put off by the price but the way I see it is:

  • Staging an event (particularly on closed roads) has an associated cost.
  • Ride London-Surrey is an example of what happens if you have too many people.
  • Reducing the number of riders by 40% is bound to mean each entrant has to pay more…

So what’s next on my bucket list? Well, John O’Groats to Land’s End for sure – but that’s probably a few years away. The near future’s more likely to include London Revolution (though I can’t make the 2017 dates), England Coast to Coast (possibly in a day, though more likely over a couple) and then maybe Wales in a Day (I’ll need to build up to that).

Inside the Microsoft datacentres

A datacentre is just a datacentre isn’t it? After all, isn’t it just a bigger version of the server room in the basement? But what about the huge datacentres that run cloud services? What’s it like inside the Microsoft datacentres that host Azure, Office 365, etc.?

Last week, Microsoft’s Modern Workplace webcast titled “An Inside Look at Your Secure Cloud” gave a sneak peek inside some of the Microsoft datacentres – comparing various generations and showing the improvements along the way.  And, as you might expect, these are the very definition of operating at scale…

As Doug Hauger (General Manager for National Cloud Programs at Microsoft) explained, organisations look to use a cloud datacentre for scale and professionalism.  Anyone can run a datacentre but the Microsoft Cloud is about robustness and security – whether that’s how staff are monitored or the physical and logical security models.

Each time Microsoft moves into a new region (like the two regions that opened in the UK earlier this month) there’s not just one super-scale datacentre but multiple facilities per region, providing redundancy and disaster recovery capability. Each facility has multiple power sources and multiple network ingress and egress points. Then there’s the investment Microsoft is making in physical infrastructure around the world – for example the joint project with Facebook for a new Europe-North America undersea cable (MAREA).

Each time Microsoft considers expanding into a new market they perform a business case analysis on the potential opportunity, considering the scale that they will go in at (tens of thousands of servers). Microsoft now has more than 100 datacentres in 30 regions around the world (with four more under construction). Because of the huge range of locations covered, Microsoft is now the industry leader for compliance and certification – whether that is meeting global or local requirements. Then there is the question of meeting customer needs around data residency, compliance, etc. (for example with the German datacentres that operate under a unique data trustee model in partnership with Deutsche Telekom).

With its cloud datacentres, Microsoft is aiming to meet customer needs around digital transformation, where the question is no longer “why should I go to the cloud” but “how do I innovate more quickly in the cloud”. That’s what drives the agenda for where to geographically expand, where to enhance scalability, etc.

Despite the question I posed in the opening paragraph of this post, a true datacentre is worlds apart from the typical server room in the basement (or wherever). The last time I got to visit a datacentre was when I was working at Fujitsu and I visited the London North facility, an Uptime Institute Tier III datacentre that won awards when it was built in 2008. Seeing the scale at which a modern datacentre operates is impressive. Then ramp it up some more for the big cloud service providers.

In the webcast, Christian Belady (General Manager Cloud Infrastructure Strategy and Architectures at Microsoft) explained that datacentres are the foundation of the Internet – they are where all the cloud services are served from (whether that is Microsoft services, or those provided by other major players).

There are several layers of physical security from the outside fence in: screening people, controlling access to parts of the buildings, right down to the cabinets themselves, with critical customer data held in locked cabinets covered by video surveillance. Used disks are destroyed – wiped and then crushed on site! The physical security surpasses anything provided for on-premises servers and the logical security continues that defence in depth.

Each custom-built server is actually two computers, with tens of thousands of computers per room and hundreds of thousands per datacentre – each datacentre the size of 20-30 football fields. Look at the racks and you can see the attention to detail – keeping things orderly not only adds to operational efficiency but it looks good too! The enterprise servers that most of us run on-premises have plastic bezels to make them look pleasant; instead, Microsoft’s servers are designed to eliminate anything that has no useful function…

Each iteration of datacentres becomes more industrialised – with improvements to factors such as cooling (which is one of the biggest power usage factors).

A generation 2 datacentre from around 2007 has a Power Usage Effectiveness (PUE) score of 1.4-1.6 (for comparison, the Fujitsu facility I mentioned earlier has a PUE of 1.4, but a typical enterprise datacentre from the 2000s with a normal raised floor would have a PUE of 2-3). Cold and hot aisles are used, with hot air returned to coolers and recirculated. Microsoft also worked with manufacturers to raise the operating temperature of their servers to an acceptable level, rather than the lower levels they used to run at, reducing the cooling demand.

Moving on to generation 4, efficiency is improved further (a PUE of 1.1-1.2), driving down costs by eliminating chillers, removing roofs and using outside air to cool. Containers use that outside air together with a system of adiabatic cooling, spraying a mist into the air that evaporates before it hits the servers. Such datacentres use a lot less water too (compared with older styles of datacentre).

With the latest (generation 5) datacentres, further improvements are made, combining the lessons learned from earlier generations. The PUE is now down to 1.1 (and below at certain times of year), with running costs also improved. There are still hot and cold aisles but no raised floor and, instead of outside air, the datacentres use a closed liquid loop system (no chiller – the water is cooled outside) – and that water doesn’t need to be potable.

The actual datacentre design changes for each facility, based on the geography and the environmental impact. Backup power generation is a key component in the design, with several days of fuel onsite and contracts to keep bringing more fuel in. Power is often sustainably sourced, be that cheap and carbon-free hydro-electric power, wind or solar. Microsoft Research is even working on a tidal-powered under-sea datacentre (Project Natick).

Inside, the Microsoft datacentres are very industrial. Whole racks are brought in (pre-tested), rather than single servers and, as previously mentioned, Microsoft designs and builds the servers for use at scale, stripping out enterprise features and retaining only what’s needed for the Microsoft environment.

Whilst I’ve worked with customers who have visited Microsoft datacentres in Dublin, it seems unlikely that I’ll ever get the chance. Watching the Modern Workplace webcast gave me a fascinating look at how Microsoft operates datacentres at scale though – and it truly is awe-inspiring. To find out more, visit the Microsoft website.

What is DevOps? And is your organisation ready?

Like cloud a few years ago and then big data, DevOps is one of the buzzwords of the moment. So what does it actually mean? And is there more to it than hype?

There are many definitions but most people will agree that, at its core, DevOps is about closer working between development and operations teams (and for infrastructure projects read engineering and operations teams). Part of DevOps is avoiding the “chuck it over the fence” mentality that can exist in some places (often accompanied with a “not invented here” response). But there is another side too – by increasing co-operation between those who develop or implement technology and those who need to operate it, there are opportunities for improved agility within the organisation.

DevOps is as much (probably more) about people and process than technology, but the diagram below illustrates the interaction between teams at different stages in the lifecycle:

  • Businesses need change in order to drive improvements and respond to customer demands.
  • Development teams implement change.
  • Meanwhile, operations teams strive to maintain a stable environment.

Just as agile methodologies sit between the business and developers, driving out requirements that are followed by frequent releases of updated code with new functionality, DevOps is the bridge between the development and operations teams.

DevOps in context

This leads to concepts such as infrastructure as code (implementing virtual infrastructure in a repeatable manner using declarative templates), configuration automation (perhaps with desired state configuration) and automated testing. Indeed, DevOps is highly dependent on automation: automating code testing, automating workflows, automating infrastructure – automating everything!
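
To make the desired state configuration idea a little more concrete, here’s a minimal PowerShell DSC sketch (the configuration name and node are arbitrary examples, not tied to any particular project) that declares the state a server should be in, rather than scripting the steps to get there:

# Declare the desired state: IIS must be present on the target node
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
# Compile the configuration to a MOF file and apply it
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose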

Configuring, managing and deploying resources (for example into the cloud) is improved with DevOps processes such as continuous integration (CI). No doubt some will argue that CI has existed for a lot longer than the DevOps term and that is true – just as virtualisation pre-dates infrastructure-as-a-service!

The CI process is a cycle of integrating code check-ins with testing and feedback mechanisms to improve the quality of the code:

Continuous integration example

In the example above, each new check-in to the version control system results in an automated trigger to carry out build and unit tests. These will either pass or fail, with corresponding feedback. When the tests are successful, another trigger fires to start automated acceptance tests, again with feedback on the success or failure of those tests. Eventually, the code passes the automated tests and is approved for user acceptance testing, before ultimately being released.
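
As a trivial illustration of the “build and unit test with feedback” step, a build server might run something along these lines after each check-in (this assumes MSBuild and the Pester test framework are available, and the solution and test paths are made up – it’s a sketch of the idea rather than any particular product’s pipeline):

# Build the solution; a non-zero exit code fails the build immediately
& msbuild .\MySolution.sln /p:Configuration=Release
if ($LASTEXITCODE -ne 0) { throw "Build failed" }

# Run the unit tests and feed the result back to the pipeline
$results = Invoke-Pester -Path .\tests -PassThru
if ($results.FailedCount -gt 0) {
    throw "$($results.FailedCount) unit test(s) failed - rejecting this check-in"
}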

Continuous integration works hand in hand with continuous delivery and continuous deployment to ensure that development teams are continuously dropping new code but in line with the Release Management processes that the operations teams require in order to maintain their service.

Continuous delivery allows new versions of software to be deployed to any environment (e.g. test, staging, production) on demand. Continuous delivery is similar to continuous integration but can also feed business logic tests. Continuous deployment takes this further with every check-in that passes all tests ultimately ending up with a production release – the fastest route from code to release.

No one tool is used to implement DevOps – DevOps is more about a cultural shift than it is about technology – but there are many tools that can assist with implementing DevOps processes. Examples include Chef, Puppet (configuration management) and Jenkins (continuous integration). Integrated development environments (such as Visual Studio and Eclipse) also play a part, as do source control systems like Visual Studio Team Services and Git/GitHub.

DevOps is fuzzy at the edges too. Once we start to talk about software-defined infrastructure we start to look at orchestration tools (e.g. Mesosphere, Docker Swarm) and containerisation (e.g. Docker, Azure Container Service, Amazon ECS). And then there’s monitoring – either with tools built into the platform (e.g. Visual Studio Application Insights) or third-party tools (like those from New Relic and AppDynamics).

So DevOps is more than a buzzword. It’s a movement that brings with it a whole stack of processes and tools to help drive towards a more agile environment – IT that can support business change. But DevOps needs a change of mindset and, for me, the big question is “is your organisation ready for DevOps?”.

Some thoughts on naming Azure resources

During a recent project, I was caught out by a lack of consistency in naming for Azure resources (and an inability to rename some of them afterwards). Some resources had underscores in their names (_), some had hyphens (-) – and then there were the inconsistencies in case. For someone who generally pays attention to details like this, I found it all very frustrating.

So I started to look into what a standard for naming Azure resources might look like (I also asked Microsoft for advice). The general advice I received was, “stick to numbers and letters – no special characters, because some resources won’t accept them”. Then, whilst trying to give a subnet a name starting with a number, I found that subnet names can’t begin with a number or a space.

So, let’s make that “use letters and numbers only, in lower case, and always starting with a letter”.

Now, consider uniqueness – some resources have an associated DNS name (e.g. *.cloudapp.net) that needs to be available for use.

I generally advise against including organisation names in resources like server names (because resources often outlive organisation names) but, in this case, the organisation name is likely to provide some uniqueness. So, let’s try “use letters and numbers only, in lower case, prefixed with an abbreviation for the organisation name, starting with a letter”.

Then, let’s think about the naming for the resources themselves – a two-letter code for the resource type (rr) and a suitable number of digits to count the instances (nn) – something like:

orgrrnn

This has two characters for the digits on the end, though three, or even four, may be better depending on the size of the organisation (remember to plan for growth!).  You’ll also need to consider the total length of the name – between 3 and 15 characters appears to be the sweet spot (some may be longer, few may be shorter).

Resource types might be:

Two-letter code – Meaning
ad – Active Directory
cs – Cloud Service
db – Database
gw – Gateway
ln – Local Network
ms – Media Service
rg – Resource Group
sg – Storage Account
sp – App Service Plan
sn – Subnet
tm – Traffic Manager
vm – Virtual Machine
vn – Virtual Network
wa – Web App (App Service)
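
As an illustration, a name following this convention could be generated with a small helper function like the one below (purely hypothetical – not part of any Azure module – with the prefix and codes taken from this post):

# Build a name as org prefix + two-letter resource code + zero-padded instance number
function New-ResourceName {
    param (
        [string]$Org,      # abbreviated organisation name, e.g. 'exam70534'
        [string]$Code,     # two-letter resource type code from the table above
        [int]$Instance     # instance number, padded to two digits
    )
    ('{0}{1}{2:d2}' -f $Org, $Code, $Instance).ToLower()
}

New-ResourceName -Org 'exam70534' -Code 'db' -Instance 2   # returns exam70534db02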

For my recent batch of resources when I was studying for an exam, that led to names like:

  • exam70534ms01 (Media Service 01)
  • exam70534db02 (Database 02)

(illustrated here for both ASM and ARM)

Azure resource naming in ASM (Class)

Azure resource naming in ARM

That looks to me to be unique, consistent and meaningful, but I’m sure there are other considerations too! Indeed the Azure documentation has some quite complex recommended naming conventions for Azure resources. My concerns with these are that they are not consistent (remember that not all resources can include certain characters), whereas the naming approach I’ve outlined in this post is.

Preparation notes for Microsoft exam 70-534: Architecting Microsoft Azure Solutions

I’ve been preparing for Microsoft exam 70-534: Architecting Microsoft Azure Solutions. At the time of writing, I haven’t yet sat the exam (so this post doesn’t breach any NDA) but the notes that follow were taken as I studied.

Resources I used included:

  • Microsoft Association of Practicing Architects (MAPA) bootcamp (unfortunately the delivery suffered from issues with the streaming media platform and the practical labs are difficult to follow, partly due to changes in the platform).
  • Hands-on time with Azure – though the exam is still mostly based on the old Classic/Azure Service Manager (ASM) model, so I found myself going back to learn things in ASM that I do differently under Azure Resource Manager (ARM).
  • The Microsoft Press exam preparation book, which contains a lot more detail and is pretty readable (or it would be if I wasn’t trying to read it in PDF form – sometimes paperback books are better for flicking back and forward!).
  • A free Azure subscription (either sign up for a one-off £125 credit for a month, or you can get £20 each month for 12 months through Visual Studio Dev Essentials).

The rest of this post contains my study notes – which may be useful to others but will almost certainly not be enough to pass the exam (i.e. you’ll need to read around the topics too – the Azure documentation is generally very good).

Note that Microsoft Azure is a fast-moving landscape – these notes are based on studying the exam curriculum and may not be current – refer to the Azure documentation for the latest position.

Azure Networking

  • Virtual networks (VNets) are used to manage networking in Azure. Can only exist in one Azure region.
  • CIDR notation is used to describe networks.
  • Use different subnets to partition network – e.g. Internet-facing web servers from internal traffic; different environments.
  • Subnet has to be part of VNet range with no overlap.
  • All virtual machines (VMs) in a VNet can communicate (by default) but anything outside cannot talk (by default) – so VNet is default network boundary.
  • In ASM, every VM has an associated cloud service (with its own name @cloudapp.net). Without subnets the VMs can only communicate via a public IP. If multiple cloud services are on same VNet then VMs can communicate using private IP.
  • Endpoints are used to manage connections: internal (private) endpoint listening on a given port (e.g. for RDP on 3389); external (public) endpoint on defined port number – therefore go to a particular server, rather than just to the cloud service.
    • Public from anywhere on the Internet; private only within the cloud service/VNet.
  • Dynamic IP (DIP) is the private IP associated with a VM; only resolvable inside the VNet – external access needs a public IP. Can choose an IP address to use – and it will be reserved.
  • Virtual IP (VIP) – assigned to a cloud service – static public IP for as long as at least one VM running inside the cloud service.
  • Instance Level Public IP (ILPIP) – for direct connection to Azure VM from Internet (not via the cloud service); public IP attached to a VM. In this configuration, whatever ports open on the VM are open to the Internet – effectively bypassing the security of the VNet.
  • Use a VNet-to-VNet VPN to create a tunnel between VNets in different regions. This extends VNets to appear as if they were one.
  • Site-to-site VPN to create tunnel between on-premises network and Azure VNet. Uses persistent hardware on-premises.
  • Point-to-site VPN to create tunnel from individual computers to an Azure VNet. Software-based.
  • Multi-site VPN is a combination of the other methods, combined.
  • Azure ExpressRoute avoids routing via an ISP – effectively a dedicated link from the customer datacentre to an Azure region, bypassing the ISP (high throughput, low latency and no effect on the Internet link).
    • ExpressRoute providers offer point-to-point Ethernet or connection via a cloud exchange. BGP sessions with edge routers on customer site. 200Mbps/500Mbps/1Gbps/10Gbps.
    • Can use for Azure computing (IaaS); Azure public services (web apps, etc. – PaaS) or Office 365 (SaaS).
  • Secure network with Network Access Control Lists (ACLs), attached to a VIP – define what traffic will be allowed/denied to/from the VIP (i.e. the cloud service). Lower number rule has higher priority. First match is executed and rest are ignored.
    • If there is no ACL – all traffic is allowed (whatever endpoints are open will allow access); if there is one or more permit, deny all others; if there is one or more deny, allow all others; combination of permit and deny to define a specific IP range.
    • Network ACL affects Incoming traffic only.
  • Network Security Groups (NSGs) are attached to a VM or a subnet and act on both inbound and outbound traffic (see the sketch after this list).
    • By default, inbound access is blocked by the default inbound rules (allow inbound within the VNet and from the Azure load balancer; deny all other inbound – rules 65000/65001/65500).
    • Default outbound rules allow outbound within the VNet and to the Internet (0.0.0.0/0), and deny all others – rules 65000/65001/65500.
    • Default rules can’t be edited but can be overridden with higher priority rules.
  • Can only use Network ACLs or NSGs – not both together.
  • VMs can have multiple NICs in different subnets – i.e. dual-homed machine.
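
For reference, here’s a rough sketch of creating an NSG with a single inbound rule and attaching it to a subnet, using the ARM-mode AzureRM PowerShell cmdlets (all of the names, location and address range below are made-up examples):

# Define an inbound rule allowing HTTPS from the Internet
$rule = New-AzureRmNetworkSecurityRuleConfig -Name 'allow-https' -Description 'Allow HTTPS in' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443

# Create the NSG containing that rule
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName 'examplerg01' -Location 'westeurope' `
    -Name 'examplesg01' -SecurityRules $rule

# Associate the NSG with an existing subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName 'examplerg01' -Name 'examplevn01'
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'examplesn01' `
    -AddressPrefix '10.0.1.0/24' -NetworkSecurityGroup $nsg | Set-AzureRmVirtualNetwork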

Azure Virtual Machines

  • Azure Hypervisor is similar to Hyper-V (but not the same).
  • Different sizes of VMs are available.
  • VMs are isolated at network and execution level – Azure customers never get access to the hypervisor – only to the VM layer.
  • Use Windows Server 2008 onwards or Linux: OpenSUSE; SUSE Enterprise Linux; CentOS; Ubuntu 12.04; Oracle Enterprise Linux; CoreOS; OpenLogic; RHEL
  • Basic and Standard service tiers – different machine types available:
    • General Purpose: A0-A4 Basic; A5-A7 Standard; A8-A9 Network Optimised (10Gbps networking); A10-A11 Compute Intensive (high end CPUs)
    • D1-D4, D11-D14 with SSD temp storage.
    • DS1-DS4, DS11-14 with premium (SSD) storage.
    • G1-G5 (and GS) with local SSD and lots of RAM.
    • F and N too

  • Every Azure VM has temporary storage drive (D:) – lost when VM is moved/restarted.
  • VMs may be attached to data disks that persist across VM restarts/redeployments and are locally replicated in-region (and beyond if specified).
  • Can use gallery images or create custom images (to meet custom requirements, e.g. with certain software pre-installed).
  • OS disk always has caching, default Read/Write (data disk caching is optional, default none) – changes need a reboot.
  • Can create a bootable image from an OS disk (not data disk).
  • Can change caching on data disk without reboot.
  • OS disk max 127GB, data disk max 1TB.
  • Only charged for storage used (regardless of what is provisioned).
  • Can take VHDs from on-premises: (Windows Server 2008 R2 SP1 or later), sysprep then upload with Add-AzureVhd -Destination storageaccount/container/name.vhd -LocalFilePath localfile.vhd; for Linux install WALinuxAgent (different preparation for different distributions).
  • Tell cloud service to load balance an endpoint to split load between VMs. With ARM there is the option to define a separate Load Balancer.
  • Encryption at rest for data disks requires third party applications (encryption is in preview though…).
  • Availability set: 2 or more VMs distributed across fault domains and upgrade domains for an SLA of 99.95% (no SLA for single VMs) – see the sketch after this list.
  • Auto-scaling based on thresholds (min/max number of instances, CPU utilisation, queue length – between web and worker roles) or time schedule (also time to wait before adding/removing more instances – AKA the cooldown period). Needs at least 2 VMs in an availability set.
  • Basic VMs have no load balancing or auto-scaling.
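
To put a few of these notes together, here’s a rough classic (ASM) sketch that creates two VMs in the same availability set behind a load-balanced endpoint – the cloud service, credentials and other names are invented for illustration only:

# Pick the latest Windows Server 2012 R2 gallery image
$image = (Get-AzureVMImage |
    Where-Object { $_.ImageFamily -eq 'Windows Server 2012 R2 Datacenter' } |
    Sort-Object PublishedDate -Descending | Select-Object -First 1).ImageName

# Build two VM configurations in one availability set, each with a load-balanced HTTP endpoint
$vms = 1..2 | ForEach-Object {
    New-AzureVMConfig -Name "examplevm0$_" -InstanceSize Small -ImageName $image `
        -AvailabilitySetName 'exampleas01' |
    Add-AzureProvisioningConfig -Windows -AdminUsername 'exampleadmin' -Password 'P@ssw0rd!2016' |
    Add-AzureEndpoint -Name 'web' -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName 'weblb' -DefaultProbe
}

# Create the cloud service and both VMs in one call
New-AzureVM -ServiceName 'examplecs01' -Location 'West Europe' -VMs $vms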

Azure Storage Service

  • Blob, table, or queue storage (plus file storage for legacy apps) encapsulated inside a storage account.
    • Two types: Standard/Premium – essentially HDD/SSD.
    • Up to 500TB per storage account – can create multiple accounts.
  • Data stored in multiple locations (minimum 3 copies).
    • LRS (Locally Redundant Storage) synchronously replicates 3 copies of data in separate fault and update domains. Use for: low cost; high throughput (less replication); data sovereignty concerns re: transfer out of region. If the region goes down, so do all copies.
    • ZRS (Zone Redundant Storage) also 3 copies but in at least 2 facilities (1 or 2 regions). Data durable in case of facility failure.
    • GRS (Geo-Redundant Storage) – 6 copies (3 copies in the primary region asynchronously replicated to 3 more copies in a secondary region). Data is still safe in a secondary region but cannot be read (unless Azure flips primary and secondary in the event of a catastrophic failure).
    • RA-GRS (Read Access Geo-Redundant Storage) – read from the secondary copy (the endpoint has -secondary appended to the storage account name, e.g. storageaccountname-secondary.blob.core.windows.net).
  • More copies and more bandwidth is more cost! Also:
    • GRS ingress max 10Gbps (20 egress) but does not impact latency of transactions made to the primary location.
    • LRS ingress max 20Gbps (30 egress).
  • File storage – mounted by servers and accessed via API. Provides shared storage for applications using SMB 2.1. Use cases:
    • On-premises apps that rely on file shares migrated to Azure VMs or cloud services without app re-write.
    • Storing shared application settings (e.g. config files) or diagnostic data like logs, metrics and crash dumps.
    • Tools and utils for developing or administering Azure VMs or cloud services.
    • Create shares inside storage accounts – up to 5TB per share, 1TB per file. Unlimited total number of files and folders.
    • https://storageaccountname.file.core.windows.net/sharename/foldername/foldername/filename
  • Blob storage: Not a file system – an object store.
    • Create containers inside storage accounts with up to 500TB data per container
    • https://storageaccountname.blob.core.windows.net/containername/blobname
    • Block blobs, with block IDs; uploaded and then committed – until committed, a block doesn’t become part of the blob: max 64MB per upload (blocks <=4MB), max 200GB per blob. Can upload in parallel; generally better for large blobs and for sequential streaming of data.
    • Page blobs – collection of 512byte pages. Max size set during creation and initialisation (up to 1TB). Write by offset and range – instantly committed. Overwrite single page or up to 4MB at once; Generally used for random read/write operations (e.g. disks in VMs). Page blobs can be created on premium storage for higher IOPs.
    • Access control is via 512-bit keys (secret key – used in API calls to sign requests) – two keys so connectivity can be maintained whilst regenerating the other (i.e. during key rotation).
    • Can have full public read access for anonymous access to blobs in a container; public read access for blobs only (but not to list the blobs in the container); no public read access (default – only signed requests allowed); or a shared access signature – a signed URL for access including permissions, start time and expiry time (see the sketch after this list).
    • Lease blob for atomic operations – lease for 15-60 seconds (or infinite). Acquire/renew/change/release (immediately)/break (at lease end).
    • Snapshots – used to create a read-only copy of a blob (multiple snapshots possible but cannot outlive the original blob – i.e. deleting blob deletes the snapshots); charges based on difference.
    • Copy blob to any container within the same storage account (e.g. between environments).
  • Table storage:
    • Store data for simple query – NoSQL key-value store – no locks, joins, validation.
    • http://storageaccountname.table.core.windows.net/tablename
    • Generally, use row key to retrieve data.
    • Can partition tables and generate a partition key.
    • Use shared access signatures for querying/adding/updating/deleting/upserting (insert if does not already exist, else update) table entries
  • Queue storage:
    • Store and access messages through HTTP/HTTPS calls.
    • Each queue entry up to 64KB in size.
    • Store messages up to 100TB.
    • Use for an asynchronous list for processing; messaging layer between applications (avoid handshaking – just add to or consume from the queue); or messaging between web and worker roles.
    • http://storageaccountname.queue.core.windows.net/queuename
    • Operations to put (add), get (which makes message invisible), peek (get first entry without making invisible), delete, clear (all), update (visibility timeout or contents) for messages.
  • Pricing based on storage (per GB/month); replication type (LRS/ZRS/GRS/RA-GRS); bandwidth (ingress is free; egress charged per GB); requests/transactions.
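
As a short example of working with blob storage from PowerShell, the sketch below creates a private container, uploads a file and then generates a time-limited shared access signature for it (the account name, key and file are placeholders):

# Build a storage context from the account name and one of its two access keys
$ctx = New-AzureStorageContext -StorageAccountName 'examplesg01' -StorageAccountKey '<key>'

# Create a private container (no anonymous access) and upload a blob
New-AzureStorageContainer -Name 'backups' -Permission Off -Context $ctx
Set-AzureStorageBlobContent -File 'C:\temp\data.zip' -Container 'backups' -Blob 'data.zip' -Context $ctx

# Generate a read-only shared access signature valid for one hour
New-AzureStorageBlobSASToken -Container 'backups' -Blob 'data.zip' -Permission r `
    -ExpiryTime (Get-Date).AddHours(1) -FullUri -Context $ctx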

Web Apps

  • Web Apps are available in 5 tiers: free/shared/basic/standard/premium.
  • These tiers affect: the maximum number of web/mobile/API apps (10/100/unlimited/unlimited/unlimited); logic apps (10/10/10/20 per core/20 per core); integration options (dev/test up to Basic; Standard connectors for Standard; Premium connectors and BizTalk Services for Premium); disk space (1GB/1GB/10GB/50GB/500GB); maximum instances (-/-/3/10/50); App Service Environments (Premium only); and SLA (Free/Shared none; Basic 99.9%; Standard and Premium 99.95%).
  • Resource Group and Web Hosting Plan are used to group websites and other resources in a single view; can also add databases and other resources; deleting a resource group will delete all of the resources in it.
  • Instance types:
    • Free F1.
    • Shared D1.
    • Basic B1-B3: 1 core, 1.75GB RAM and 10GB storage, doubling cores and RAM at each step (2/3.5; 4/7) – dedicated VMs running web apps.
    • Standard S1-S3 same cores and RAM but more storage (50GB).
    • Premium P1-P4 same again but 500GB storage (P4 is 8 cores, 14GB RAM).
  • Other things to configure:
    • .NET Framework version.
    • PHP version (or off).
    • Java version (or off) – use web container version to chose between Tomcat and Jetty; enabling Java disables .NET, PHP and Python.
    • Python version (or off).
  • Scale web apps by moving up plans: Free-Shared-Basic-Standard – changes apply in seconds and affect all websites in web hosting plan. No real scaling for Free or Shared plans. Basic can change instance size and count. Standard can autoscale based on schedule or CPU – min/max instances (checked every 5 mins).
  • Scale database separately.
  • Deployment pipeline can be automated and can flip environments when move from staging to production (flips virtual IP). Can flip back if there are issues.
  • SSL certificates – can add own custom certs (2 options – server name indication with multiple SSL certs on a single VM; or IP SSL for older browsers but only one SSL cert for IP address).
  • Site extensions – no RDP access to the VM, so tools for website: Visual Studio Online for viewing code or phpMyAdmin.
  • WebJobs allow programs or scripts to be run on the website (like cron in Linux or a scheduled task in Windows) – one-time, scheduled or recurring.
  • Can use .cmd, .bat or .exe; .ps1, .sh, .php, .py, .js.
  • Monitoring web app via metrics in the portal.

Cloud Services

  • For more complex, multi-tier apps.
    • Web role with IIS
    • Worker role for back-end (synchronous, perpetual tasks – independent of user interaction; uses polling, listening or third party process patterns).
  • Upload code and Azure manages infrastructure (provisioning, load balancing, availability, monitoring, patch management, updates, hardware failures…)
  • 99.95% SLA (min 2 role machines)
  • Auto-scale based on CPU or queue.
  • Communicate via internal endpoints, Azure storage queues, Azure Service Bus (pub/sub model – service bus creates a topic, published by web role and worker role subscriber is notified).
  • Availability: fault domain (physical – power, network, etc.) – cannot control but can programmatically query to find out which domain a service is running in. In ASM, normally 0 or 1. ASM automatically distributes VMs across fault domains.
  • Upgrade domain (logical – services stopped one domain at a time) – default is 5, can be changed.
  • If there are web and worker roles, they are automatically placed in an availability set.
  • Azure Service Definition Schema (.csdef file) has definitions for cloud service (number of web/worker roles, communications, etc.), service endpoints, config for the service – changes required restart of services.
  • Azure Service Configuration Schema (.cscfg file) runtime components, number of VMs per web/worker role and size etc. – changes do not require service restart.
  • Deployment pipeline as for Web Apps.

Azure Active Directory

  • Identity and Access Management in the cloud – provided as a service.
  • Optionally integrate with on-premises AD.
  • Integrate with SaaS (e.g. Office 365).
  • Use cases: system to take care of authentication for application in the cloud; “same sign-on” for applications on-premises and cloud; federation to avoid concerns re: syncing passwords and avoid multiple logins to different apps (even with same sign-on) – provide single sign-on; SSO for 1000s of third-party applications. Effectively, if sync password then same sign-on, if no password sync then single sign-on.
  • Can also enable Multi-Factor Authentication (MFA) for Azure AD and therefore add MFA to third party apps.
  • Directory integration with the Azure Active Directory Synchronization Tool (DirSync) or Azure AD Sync – both superseded; use Azure Active Directory Connect instead.
  • Can also use Forefront Identity Manager 2010 R2 (or Microsoft Identity Manager?) – originally was needed if sync multiple ADs.
  • Each directory gets a DNS name at .onmicrosoft.com. Also possible to use custom domains (verify domains in DNS).
  • Supports WS-Federation (SAML token format); OAuth 2.0; OpenID Connect; SAML 2.0.

Role-based access control

  • Role = collection of actions that can be performed on Azure resources.
  • Users for RBAC are from the associated Azure AD.
  • Roles can be assigned to external account users by invite.
  • Roles can be assigned to Azure AD security groups (recommended practice, rather than direct role assignment).
  • Roles can also be assigned for Resource Groups (resources inherit access from subscription-Resource Group-Resource).
  • Built-in roles: Owner (create and manage all types of resource); Reader (read all types of resource); Contributor (manage everything except access). Lots of other roles built on this construct – e.g. Virtual Network Contributor.

Azure SQL Database

  • Relational database as a service (PaaS) – up to 500 GB per database.
  • Easy provisioning, automatic HA, load balancing, built-in management portal, scalability, use existing skills to deploy database, patching, etc. taken care of so less time to manage, easy sync with offline data.
  • It is not same as SQL Server on a VM though!
    • Unsupported features may have corresponding features in Azure; some are just not available.
  • Performance model with different tiers: Basic, then Standard S0-S3, Premium P1-P2, P4, P6 (formerly P3).
    • Measured in Database Throughput Units (DTUs) – a standardised model to help with sizing (a relative measure [like ACU for VMs]).
    • Only committing to transactions per hour in Basic, per minute in Standard, per second in Premium.
  • Scaling Azure SQL: Federation is deprecated; Custom Sharding (create multiple databases and use application logic to separate data, e.g. based on customer ID); Elastic Scale (the application doesn’t need to be so smart – the endpoint stays the same, with data sharded across multiple databases).
  • Backups:
    • SQL database creates automatic backup for active database; at least 3 replicas at any one time – one primary replica and two or more secondaries (more if using GRS).
    • Can restore to a point in time (self-service capability to restore from the automated system – creates a new database on the same server – zero-cost/zero-admin – number of days depends on service tier: 7, 14 or 35 days for Basic/Standard/Premium), or geo-restore (restore from a geo-redundant backup to any server in any region).
    • Automatically enabled for all tiers at no extra cost – helps when there is a region outage (estimated recovery time <12h, RPO <1h).
  • Also standard geo-replication (protect app from regional outage – one secondary database in Microsoft-defined paired region; secondary is visible but can’t connect to it until failover occurs – discount for secondary DB as offline until failover – standard/premium only with ERT <30s RPO <5s) and active geo-replication (database redundancy within different regions – up to 4 readable secondary servers – asynchronous replication of committed transactions from one DB to another; for write-intensive applications – e.g. load balancing for read-only workloads – premium only with ERT <30s RPO <5s).
    • Regional disaster – Geo Restore, Standard or Active Geo-Replication.
    • Online application upgrade – Active Geo replication.
    • Online application relocation – Active Geo replication.
    • Read load balancing – Active Geo replication.
  • Security: only available via TCP 1433 – blocked by default – define firewall rules at server and database level to open up access (e.g. to your own IP address). Can define firewall rules programmatically with T-SQL, the REST API and Azure PowerShell (see the sketch after this list).
  • Data encrypted on wire – SSL required all the time
  • Data encrypted at rest – encryption with transparent data encryption – real-time I/O encryption/decryption for data and log files.
  • Only supports SQL Server authentication or Azure AD authentication – i.e. no Windows authentication.
  • First user created (master database principal) cannot be altered or dropped; can configure user-level permissions by logging on to the database and issuing SQL commands.
  • Pricing: DB size plus outbound data transfers (per database, per month) – per hour pricing, so drop DTUs at quiet time.
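
For example, a server-level firewall rule can be added from PowerShell with the classic (ASM) cmdlets below – the server name and address range are placeholders:

# Allow an office IP range through the server-level firewall
New-AzureSqlDatabaseServerFirewallRule -ServerName 'examplesqlsrv01' -RuleName 'office' `
    -StartIpAddress '203.0.113.1' -EndIpAddress '203.0.113.254'

# List the rules currently in place
Get-AzureSqlDatabaseServerFirewallRule -ServerName 'examplesqlsrv01'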

Azure Mobile Service

  • Cross-platform app development service (PaaS).
  • Mobile apps need to be cross-platform, with cloud storage, ID management, database integration and push notifications.
  • Azure Mobile Services provides mobile back-end as a service (MBaaS).
  • Easily connect to SaaS APIs – e.g. Facebook, Salesforce, etc.
  • Auto-scaling based on incoming customer load.
  • User authentication taken care of by the service.
  • Push notifications to millions in seconds.
  • Offline-ready apps with sync capability.

Azure Content Delivery Network (CDN)

  • Caching public objects from a storage account at point of presence (POP) for faster access close to users (and to scale when a lot of traffic hits).
  • Content served from local edge location. If content not there (first serve), it fetches information from the origin and caches locally.
  • Drastic reduction in traffic on original content (so faster access and more scalable!)
  • Use a CDN for lower latency, higher throughput, improved performance!
  • POP locations separate to Azure regions – not full-fledged DCs.
  • CDN origin can be Azure Storage, Apps, Cloud Services or Media Services (including live streaming) – or a custom origin on any web server.
  • CDN Edge is a cache – not a permanent store.
  • Anycast protocol is used to route user to closest endpoint.
  • Create a CDN endpoint: http://cdnname.azureedge.net/
  • Change website code to point to the CDN. Route dynamic content to origin, static to CDN.
  • Can set a custom domain too (e.g. cdn.domain.com) – avoid browser warnings about content from other domains.
  • Can also enable HTTPS – need to upload the SSL certificate.
  • Default cache is 72 hours – cache control header can be used to control (any value >300s). Use to ensure not serving stale content.
  • Use CDN to cache images, scripts, CSS from Azure Cloud Service but have to provide using HTTP on port 80.
  • Pricing based on bandwidth (between edge and origin) and requests.

Azure Traffic Manager

  • DNS-based routing for infrastructure. Route to different regions, monitoring health of endpoints (HTTP checks) to assist with DR. Many routing policies.
  • Create a Traffic Manager endpoint and route to this via DNS.
  • Options include failover load balancing (re-route based on availability, with priority list – 100% of traffic to one endpoint – used for DR/BC rather than scaling); round robin load balancing (shared across various endpoints in rotation – but only to healthy endpoints cf. DNS RR); Weighted round robin load balancing (use weight to distribute traffic between endpoints); performance load balancing (based on latency times).
  • Different to traditional load balancer in that it is DNS-based – user request is direct to endpoint, not through load balancer. Also, note that traffic is direct to web servers – not to Edge locations as in CDN.
  • Pay per DNS request resolved (TTL will keep this down) and per health-check configured.

Azure Monitoring

  • Diagnostic tasks may include performance measurement, troubleshooting and debugging, capacity planning, traffic analysis, billing and auditing.
  • Monitor via portal; Visual Studio (plugins to parse logs, etc.) or third party tools.
  • Azure management services to manage alerts or view operational logs. Create alerts based on metrics and thresholds (and average to smooth out spikes) and send email to service admins and co-admins or to a specific address.
  • Operational logs are service requests – operation, timestamped, by whom.
  • Visual Studio 2013 has Azure SDK for managing Azure services. Some limitations: with remote debugging cannot have more than 25 role instances in a cloud service.
  • Azure Redis cache monitoring allows diagnostic data stored in storage account – enable desired chart from Redis cache blade to display the metric blade for that chart.
  • System Center 2012 R2 can also monitor, provision, configure, automate, protect and self-service Azure and on-premises.
  • Third party tools like New Relic and AppDynamics.
  • For websites there are application diagnostic logs and site diagnostic logs (3 types: web server logging; detailed error messages; failed request tracing) – access via Visual Studio, PowerShell or portal. Kudu dashboard at https://sitename.scm.azurewebsites.net.
  • View streaming log files (i.e. just see the end): Get-AzureWebsiteLog -Name "sitename" -Tail -Path http
  • View only the error logs: Get-AzureWebsiteLog -Name "sitename" -Tail -Message Error
  • Options include -ListPath (to list log paths) -Message <string> -Name <string> -Path (defaults to root) -Slot <string> -Tail (to stream instead of downloading entire log)
  • Can also turn on diagnostics on storage accounts.

Azure HD Insight

  • Microsoft Implementation of Hadoop – create clusters in minutes (Windows or Linux); pay per use (no need to leave running); use blob storage as storage layer and Excel to visualise the data.
  • Hadoop uses divide and conquer approach to solving big data problems (chunking): processes the data, then combines it again – using HDFS and MapReduce components.
  • Provision cluster, take large data set (e.g. search engine queries) on master node, distributed to processing nodes (Map). Reduce collects results and collates.
  • Hybrid Hadoop – e.g. for organisations that offer analytics services – burst to cloud…
  • Either site-to-site VPN on-premises to Azure, or ExpressRoute.
  • Supports Storm and HBase clusters natively – can install other software via custom script.
  • Connectors in WebApp (Standard and Premium) – connect to other services (e.g. Azure HDInsight).

High Performance Computing (HPC)

  • HPC not the same as big data:
    • Big data analytics is usually bounded by data volumes and so network IO.
    • HPC usually CPU-bounded.
  • HPC is good for financial modelling, media encoding, video and image rendering, smaller computer-aided engineering models, etc.
  • HPC instances are A8/9 (network optimised – high-bandwidth RDMA network 32Gbps within cloud service as well as 10Gbps Ethernet to other services) and A10/11 (compute intensive).
  • Both 8/16 cores, 56/112GB RAM, 382GiB disk.
  • Microsoft HPC Pack 2012 R2 SP1 on Windows Server (on-premises, in Azure or hybrid) – Message Passing Interface (MPI) used (over RDMA network).

Azure Machine Learning

  • Predictive analysis in cloud – as a service, no VMs etc. to manage.
  • Take existing data, analyse by running predictive models and predict future outcomes/trends.
  • Deploy in minutes; drag and drop machine learning algorithms (built-in); use data in Azure; add custom scripts; Marketplace of vendors providing custom solutions.
  • Terminology:
    • Classification (group data).
    • Regression (predict a value).
    • Ranking (order items by criteria).
    • Clustering (take a set of data, e.g. by date range).
  • Get raw data (unstructured or loosely structured) -> data cleaning -> build machine learning model -> predict results.

Azure Automation

  • Script and automate the application lifecycle; simplify cloud management; automate manual, long-running and frequently-repeated tasks (save time and increase reliability).
  • Works with Web Apps, Virtual Machines, Storage, SQL Server and other Azure services.
  • Automation account is a container for Azure Automation resources.
  • Create runbooks – a set of tasks that perform an automated process, implemented as a PowerShell workflow (see the sketch after this list).
  • Scheduler to start runbooks daily/hourly/at a defined point in time.
  • Pricing based on minutes/triggers:
    • Free = 500 minutes
    • Basic tier
    • Standard tier
  • Automation is an enabler for DevOps:
    • Dev team loves changes.
    • Ops Team loves stability.
    • Agile bridges the gap between the business and development.
    • DevOps fills the gap between development and operations.
    • Infrastructure as code; configuration automation; automation testing.
  • Continuous integration – pipeline to delivery and deployment – cycle of integrating solution with various phases:
    • The delivery team checks in to version control, which triggers the build and unit tests (with feedback). When the build and unit tests are clean, automated acceptance tests are triggered (with feedback). When approval is gained, move to user acceptance tests and then, on final approval, move to release.
  • Continuous Delivery – push-button deployment of any version of software to any environment, on demand – similar to CI but can feed business logic tests.
    • Need automated testing to achieve CD.
  • Continuous Deployment – natural extension to CD; every check-in ends up in a production release.
  • Chef for Configuration Automation: Configuration Management between environments: Build, Test, Release, Deploy (and automate CI/CD). Manage Windows and Linux VMs, integration via Azure Portal. Chef and DSC can be used together to manage infrastructure.
  • Puppet – integrated with Azure and VS 2013 for easy deployment of infrastructure across physical and virtual machines. Can deploy pre-configured Puppet image to create a VM.
  • Deploy Custom Script with VM configuration – run when VM is launched (one of the available config extensions).
  • VM agent is used to install and manage extensions that help interact with the VM (Chef, Puppet, Custom Script).
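
For a flavour of the runbook format mentioned above, here’s a minimal PowerShell workflow sketch that stops the VMs in a cloud service – the credential asset, subscription and service names are assumptions for illustration only:

workflow Stop-ExampleVMs
{
    # Credential asset stored in the Automation account (assumed name)
    $cred = Get-AutomationPSCredential -Name "AzureCredential"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName "Example Subscription"

    # Stop every VM in a (hypothetical) cloud service
    Get-AzureVM -ServiceName "examplecloudservice" | Stop-AzureVM -Force
}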

Azure Media Services

  • Developing video on demand is challenging: cost/managing content/encoding/distribution across multiple devices/streaming experience/DRM content protection/providing high quality video for any device any time anywhere.
  • Ingest data, encode, format conversion, content protection (DRM policies), on-demand streaming, live streaming, analytics, advertising.
  • Need media service account and associated storage account.
  • Media Player is a web video player service backed by Azure Media Services: one player for all popular devices – no need to develop a device-specific player; plays the right format for each device; easy integration with web and apps; standard player controls.
  • Data caching via Azure CDN.
  • Steps:
    • In management portal, create new Media Service with name, storage account and region.
    • Start the Media Service.
    • Scale up streaming units (1 unit=200Mbps).
    • Upload a video file (from local or from Azure storage) – it will be stored in the storage account without encryption (see the sketch after this list).
    • Publish the file.
    • Configure the encoding options, then video is uploaded into portal (can encode multiple times for different formats with different names).
    • View the media content (copy link into browser).
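
The upload step can also be scripted with the standard storage cmdlets rather than using the portal – a sketch, assuming the storage account associated with the media service (the account name, key, container and file path are placeholders):

# Build a context for the storage account behind the media service
$ctx = New-AzureStorageContext -StorageAccountName "examplemediastore" -StorageAccountKey "<storage key>"
# Upload the source video into a container, ready for encoding
Set-AzureStorageBlobContent -File "C:\Videos\holiday.mp4" -Container "uploads" -Context $ctx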

Azure Resource Manager

  • With ASM, even a single VM sits inside a cloud service.
  • With ARM, resources are deployed directly – a VM does not need a cloud service container.
  • Deploy, manage and monitor services as a group; deploy repeatedly throughout the application life cycle; use declarative templates to define deployment; can have dependencies between resources; apply RBAC; organise logically by tagging.
  • ASM tightly couples to cloud service – VM in subnet, in VNet, in cloud service, in region, with VIP for DNS and public IP.
  • ARM is more loosely coupled – can have multiple VIPs, NICs, etc. All in a RG (which can span regions). Attached via reference.
  • Comparing ASM (XML-based) and ARM (JSON-based):
    • VM deployment: ASM uses a cloud service as the container; ARM does not require a cloud service.
    • Availability set: in ASM, VMs are defined under the same availability set; in ARM, the availability set is a resource exposed by the Microsoft.Compute provider – VMs that need HA must be included in the availability set.
    • Fault domains: maximum of 2 in ASM; maximum of 3 in ARM.
    • Load balancing: in ASM, the cloud service provides an implicit load balancer for the VMs; in ARM, the load balancer is a resource exposed by the Microsoft.Network provider.
    • Virtual IP address: ASM provides a default static VIP as long as one VM is running in the cloud service; in ARM, the public IP is a resource exposed by Microsoft.Network and can be static (reserved) or dynamic.
    • Reserved IP address: in ASM, reserve an IP address in Azure and associate it with a cloud service; in ARM, a public IP can be created as static and assigned to a load balancer.
  • Choose the deployment model when provisioning resources – there is limited interoperability between the two, so choose the right model.
  • Deploy using
    • Portal
    • PowerShell: Switch-AzureMode -Name AzureResourceManager
    • ARM REST API
    • Azure CLI: azure config mode arm
  • Resource Manager template – JSON document – deploys and provisions all of the related resources in a single, co-ordinated operation (see the sketch after this list).
  • Tags are key-value pairs of metadata: applied to individual ARM resources or ARM RGs – up to 15 tags per resource or RG.
  • RBAC – Owner, Reader or Contributor.
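
Putting the PowerShell pieces together, a minimal template deployment sketch (the resource group name, region and template path are placeholders):

# Switch the classic module into Resource Manager mode
Switch-AzureMode -Name AzureResourceManager
# Create a resource group, then deploy a JSON template into it
New-AzureResourceGroup -Name "ExampleRG" -Location "West Europe"
New-AzureResourceGroupDeployment -ResourceGroupName "ExampleRG" -TemplateFile "C:\Templates\azuredeploy.json"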

Azure Messaging Solutions

  • Service Bus: multi-tenant cloud service – each user creates a namespace to work within (see the sketch after this list).
    • Queues – one-way communication, asynchronous queuing with guarantee of message delivery order (worker has to keep polling).
    • Topics – let each receiving application create a subscription by defining a filter (avoid polling – get a notification instead) – pub-sub model. Read with ReceiveAndDelete or PeekLock; can have multiple subscribers.
    • Relays – synchronous 2 way communications between applications – won’t help with buffering.
  • Event hubs – highly scalable ingestion system that can process millions of events per second (e.g. for IoT).
  • Can also queue via storage – more options with service bus but more scalable with storage.
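
Creating the namespace can be scripted too – a sketch using the classic Azure PowerShell module (the namespace name and region are placeholders; queues, topics and subscriptions are then typically defined from the portal or the .NET SDK):

# Create a Service Bus messaging namespace to work within
New-AzureSBNamespace -Name "examplenamespace" -Location "North Europe" -NamespaceType Messaging
# Check the namespaces in the subscription
Get-AzureSBNamespace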

Azure Backup

  • Backup service targeted at replacing tape backup.
  • Can work with on-premises workloads or Azure workloads.
  • On-premises backup – pick a region and create a vault; download the vault credential files; download and install the Azure Backup agent; data can be seeded through the Azure Import/Export Service; select a backup policy (backup start time plus retention policies: weekly/monthly/yearly) – backups are incremental (see the sketch after this list).
  • Azure VM Backup – install agent if not already installed, register VMs with Azure Backup Service (installs backup agent in extensions); select backup policy.
  • Azure Backup is for backing up data on a VM. Priced per protected instance and storage consumed (the price per protected instance steps up at 50GB, then 500GB, then for each additional 500GB).
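
For the on-premises agent, the backup policy can also be built with the MARS agent’s PowerShell cmdlets – a rough sketch, with the folder path, schedule and retention period as assumptions:

# Build a new policy for the Azure Backup (MARS) agent
$policy = New-OBPolicy
# Back up a (hypothetical) data folder
$fileSpec = New-OBFileSpec -FileSpec "D:\Data"
Add-OBFileSpec -Policy $policy -FileSpec $fileSpec
# Run at 21:00 on Saturdays and keep recovery points for 30 days
$schedule = New-OBSchedule -DaysofWeek Saturday -TimesofDay 21:00
Set-OBSchedule -Policy $policy -Schedule $schedule
$retention = New-OBRetentionPolicy -RetentionDays 30
Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention
# Make the policy active
Set-OBPolicy -Policy $policy -Confirm:$false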

Azure Site Recovery

  • Orchestrates failover and recovery of a VM.
  • On-premises machine replicated to vault in Azure, or to another datacentre – not Azure to Azure.
  • Protect AD and DNS, SQL Server, SharePoint, Dynamics AX, RDS, Exchange, SAP.
  • Can also perform a test failover, starting resources in Azure but not routing the traffic.
  • Can be used to protect VMware ESX or Hyper-V VMs or physical servers, and can also be used to migrate to Azure.

Business continuity (BC) and disaster recovery (DR)

  • Scenarios: recover from local failures; loss of a region; on-premises to Azure
  • For Azure failures:
    • HA in PaaS (per region) – just make sure web and worker roles have two or more instances each; they will then automatically be spread across fault domains.
    • For region failure need to plan across regions – more elaborate (make sure code and config is available in a second region).
  • HA in IaaS needs management of VMs in availability sets (which need to be defined manually).
  • At region level, also think about load balancing (VIP), storage (LRS, ZRS, GRS or RA-GRS) and Azure SQL replication.
  • Recover from loss of region:
    • Redeploy on disaster (cold DR) – replicate data ready to run (does not offer a tight RTO/RPO).
    • Warm spare (active/passive) – infrastructure in DR region but not fully available (e.g. SQL replication with secondary copy not accessed, not routing traffic to passive).
    • Hot spare (active/active) – two regions at the same time (e.g. SQL on IaaS and replicating itself).
  • Cross regional strategies for DR:
    • VNet – export settings, import in secondary region.
    • Cloud Services – create a separate cloud service in the target region; publish to the secondary region if the primary fails; use Traffic Manager to route traffic.
    • VM – use blob copy API to duplicate VM disks; geo-replicated VM images.
    • Storage – use GRS or RA-GRS (replicated in minutes, so tight RPOs cannot rely on this – need to write your own algorithm); see the sketch after this list.
    • Azure SQL:
      • Geo-restore (1 hour RPO/<12 hours RTO).
      • Standard geo-replication (5 secs RPO/30 mins RTO) – no access to secondary.
      • Active geo-replication (5 secs RPO/30 mins RTO) – read access to secondary.
      • Manually export to Azure Storage (blob) with Azure SQL database import/export service.
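
For the storage element, the redundancy level is just a property of the storage account – a sketch with the classic cmdlets (the account name and region are placeholders):

# Create a storage account with read-access geo-redundant storage
New-AzureStorageAccount -StorageAccountName "exampledrstorage" -Location "West Europe" -Type "Standard_RAGRS"
# An existing account can be switched to another redundancy level
Set-AzureStorageAccount -StorageAccountName "exampledrstorage" -Type "Standard_GRS"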

Securing Azure Resources

  • Cloud security model is shared security model:
    • Users are responsible for securing applications.
    • Cloud Service Provider (CSP) is responsible for providing controls; users for using them!
    • CSP is responsible for infrastructure security.
  • VNet/VM security: use endpoints (ACL for endpoints, NSGs at VM or VNet level).
  • Storage: use shared access signatures (see the sketch after this list).
  • Role-based access control.
  • Encryption.
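
As an example of the storage point above, a shared access signature can be generated with the storage cmdlets – a sketch with placeholder names, granting read-only access to a container for one hour:

# Build a context for the storage account, then create a time-limited, read-only SAS token for a container
$ctx = New-AzureStorageContext -StorageAccountName "examplestorage" -StorageAccountKey "<storage key>"
New-AzureStorageContainerSASToken -Name "examplecontainer" -Permission r -ExpiryTime (Get-Date).AddHours(1) -Context $ctx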

Microsoft’s UK datacentres: what you need to know

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This morning, the UK woke up to an announcement from Microsoft that the UK datacentres for Azure and Office 365 are generally available, making Microsoft the first global provider to deliver a complete cloud (IaaS, PaaS and SaaS) from UK datacentres.

That means:

  • Two new Azure regions in the UK:
    • UK West (Cardiff)
    • UK South (London)
  • Office 365 services from UK datacentres in Durham and London.

Dynamics CRM online will be offered from the UK in the first half of 2017.

That Azure location information was taken from the Azure regions page on the Microsoft website (although my sources tell me that “Cardiff” is really “Newport” – close enough as to make no difference anyway, and London is probably “near London” too).  The Office location information was taken from the Office 365 Interactive Data Maps.

Now, UK customers already using Azure or Office 365 will be asking “will my data be moved to a UK datacentre?”. There’s no official announcement from Microsoft (not that I’ve seen) but my (unofficial) answer is “no”. At least not automatically.

For Azure, it’s good practice to design across multiple regions. There are also implications around geo-replication (which regions are paired with which for business continuity and disaster recovery purposes). Moving resources from one region to another is possible but is also a project that would need to be undertaken by a customer (possibly working with a partner) as a programme of planned resource moves.

For Office 365, it’s worth reading the TechNet advice on Moving core data to new Office 365 datacenter regions. At the time of writing it hasn’t been updated to reflect UK datacentres (it was last updated 28 July 2016) but it currently says:

“Existing customers that have their core customer data stored in an already existing datacenter region are not impacted by the launch of a new datacenter region”

[…]

“The data residency option, and the availability to move customer data into the new region, is not a default for every new region we launch. As we expand into new regions in the future, we’ll evaluate the availability and the conditions of data moves on a region by region basis.”

“New customers or Office 365 tenants created after the availability of the new datacenter region will have their core customer data stored at rest in the new datacenter region automatically.”

The page goes on to state that, assuming the data residency option is made available for the UK (remember, nothing has been announced yet)

“Customers will need to request to have their data moved within a set enrollment window.”

and that:

“Data moves can take up to 24 months after the request period to complete”

There’s also a footnote on the UK interactive data map to say:

“Customers who signed up and selected the United Kingdom for their Office 365 services before September 2, 2016 will have their customer data located in the EMEA datacenter locations.”

So, in short, Office 365 (SaaS) data stays exactly where it is, unless you sign up for a new tenant, or wait for further announcements from Microsoft. Azure (IaaS and PaaS) workloads can be moved to the new regions whenever you are ready.

 

Using rsync to keep folders in sync on a Synology Diskstation NAS

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Now I have backups working between my Synology Diskstation NAS and a storage account in Microsoft Azure (with over half a TB of photos so far backed up in the cloud), the next stage is to consolidate some more images into the folder that the backup works from.

I don’t want to remove them from their source (which in this case is the copy of my OneDrive data on my home drive) but I do want to archive all of the iPhone images I have there to the master photos folder so they are included in the backup.

Reading around the Synology forums suggests that this is not as straightforward as one might think. It appears there’s no easy way to synchronise two folders on the same NAS within the DSM software; but then I stumbled across Zarino Zappia (@zarino)’s post about a Synology-flavoured rsync backup script.

By following Zarino’s advice and using ssh to connect to the box as admin, I was able to achieve what I wanted with the following command:

rsync --itemize-changes --archive --progress --verbose --inplace --exclude '*@SynoResource' --exclude '@eaDir' --exclude '*.vsmeta' --exclude '.DS_Store' --exclude 'Thumbs.db' /volume1/homes/mark/OneDrive/iPhone\ Photos/ /volume1/photos/Digital\ Photos\ \(Master\)/Mark\'s\ iPhone/

(BTW, right click is the way to paste text to the command line in PuTTY!)

Some people on the Synology forums had suggested that synchronising via another computer on the network would be faster! That sounds strange to me – logically, a copy will always be faster on a single device with no network in between. For reference, it took about 20 minutes to rsync 32GB of images/videos on my DS916+.

Incidentally, the Error 23 in the screen shots was actually a typo in my command (a missing space before one of the --exclude options). I re-ran with --dry-run to see which files were not transferred…

The next step will be to script this and get it running as a scheduled task but that can wait for another day…
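
When I do, it will probably look something like this – a sketch only, with no error handling, saved on the Diskstation and added to DSM’s Task Scheduler (or cron); the log file location is just an assumption:

#!/bin/sh
# One-way sync of iPhone photos from the OneDrive copy to the master photos folder
rsync --itemize-changes --archive --inplace --exclude '*@SynoResource' --exclude '@eaDir' --exclude '*.vsmeta' --exclude '.DS_Store' --exclude 'Thumbs.db' "/volume1/homes/mark/OneDrive/iPhone Photos/" "/volume1/photos/Digital Photos (Master)/Mark's iPhone/" >> /var/log/photo-sync.log 2>&1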