This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.
Some more browser tabs turned into mini-snippets of blog post…
I know that HTML tables fell out of fashion when we started to use CSS but they do still have a place – for displaying tabular data on a web page – just not for controlling page layouts!
I needed to create a table for a blog post recently and I found this HTML Table Generator that did a fair chunk of the legwork for me…
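If you'd rather script the markup than use an online generator, PowerShell can do a similar job – a minimal sketch using the built-in ConvertTo-Html cmdlet (the sample data here is mine, purely illustrative):

# Turn tabular data into an HTML <table>; -Fragment emits just the table
# element, ready to paste into a blog post
$data = @(
    [pscustomobject]@{ Rider = 'Alpha'; Miles = 100 }
    [pscustomobject]@{ Rider = 'Beta';  Miles = 97 }
)
$data | ConvertTo-Html -Fragment | Set-Content table.html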
Two years ago, I attended Future Decoded – Microsoft’s largest UK event, which has taken place each November for the last few years at the ExCeL centre in London. It’s a great opportunity to keep up to date with the developments in the Microsoft stack, with separate Business-focused and Technical-focused days and some really good keynote speakers as well as quality breakout sessions.
Future Decoded has particular significance for me because it’s where I “met” risual, who have been headline sponsors for the last 3 events. After the 2014 event, I decided to find out more about risual and, in May 2015, I finally joined the “risual family”. This year I was lucky enough to be on one of our five stands (one headline stand in the form of a Shoreditch pub, complete with risuAle, and one each for our solutions businesses in retail, justice, education and productivity). I had a fantastic (if very tiring) day connecting with former colleagues, customers, industry contacts and potential new customers – as we chatted about how risual could help them on their digital transformation journey.
Whilst I wasn’t able to attend many of the sessions (I was consulting with a customer in the north-east of England on the first day), I did manage to catch the day 2 keynote and was blown away by some of the developments around machine learning and artificial intelligence (maybe more on that in another post). I also noticed the teams behind the Microsoft Business (@MSFTBusinessUK) and Microsoft Developer (@MSDevUK) Twitter handles were tweeting sketch notes, which I thought might be a useful summary of the event:
Once again, my PC is running out of memory because of the number of open browser tabs, so I’ll convert some into a mini-blog post…
Outlook forgets how to open HTTP(S) links
I recently found that Outlook 2016 had “forgotten” what to do with HTTP(S) links – complaining that:
Something unexpected went wrong with this URL: […] Class not registered.
The fix was to reset my default browser in Windows. Even though I hadn’t changed it away from Edge, a Windows Update (I expect) had changed something and Edge needed to be reset as the default browser, after which Outlook was happy to open links to websites again.
Globally disable Outlook Clutter
I had a customer who moved to Exchange Online and then wanted to turn off the Clutter feature, because “people were complaining some of their email was being moved”.
Unfortunately, Clutter is controlled with a per-mailbox setting, so to disable it globally you’ll need something like this (note -ResultSize Unlimited – without it, Get-Mailbox stops at 1,000 mailboxes):
Get-Mailbox -ResultSize Unlimited | Set-Clutter -Enable $false
That will work for existing mailboxes, but what about new ones? Well, if you want to make sure that Clutter remains “off”, you’ll need a script that runs on a regular basis and disables Clutter for any newly-created mailboxes – maybe using Azure Automation with Office 365?
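A sketch of what that scheduled job might look like – the session-based connection shown here was the standard way to script Exchange Online at the time, $credential is assumed to be a stored Automation credential, and the one-day window is an assumption to match a daily schedule:

# Connect to Exchange Online via a remote PowerShell session
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $credential -Authentication Basic -AllowRedirection
Import-PSSession $session -CommandName Get-Mailbox, Set-Clutter

# Disable Clutter on any mailbox created since the last daily run
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.WhenMailboxCreated -gt (Get-Date).AddDays(-1) } |
    Set-Clutter -Enable $false

Remove-PSSession $session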
Personally, I think this is the wrong choice – the answer isn’t to make software work the way we used to – it’s to lead the cultural change to start using new features and functionality to help us become more productive. Regardless, Clutter will soon be replaced by the Focused Inbox (as in the Outlook mobile app).
Don’t run externally-facing mail servers in Azure
I recently came across a problem when running an Exchange Hybrid server on a VM in Azure. When we sent mail directly outbound (i.e. not via Office 365 and hence Exchange Online Protection), consumer ISPs like TalkTalk were refusing our email. I tried adding PTR records in DNS for the mail server but then I found the real issue – Azure adds its IP addresses to public block lists in order to protect against abuse:
“[…] the Azure compute IP address blocks are added to public block lists (such as the Spamhaus PBL). There are no exceptions to this policy”
and the recommended approach is to use a mail relay – such as Exchange Online Protection or a third-party service like SendGrid. Full details can be found in the Microsoft link above.
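For illustration, relaying via a third-party smart host looks something like this, run from the Exchange Management Shell on the hybrid server (the connector name and the SendGrid endpoint are examples, not taken from the environment in question):

# Route all outbound Internet mail via an authenticated smart host
$cred = Get-Credential   # the smart host's SMTP credentials
New-SendConnector -Name 'Outbound via smart host' -Usage Internet `
    -AddressSpaces 'SMTP:*;1' -SmartHosts 'smtp.sendgrid.net' `
    -SmartHostAuthMechanism BasicAuthRequireTLS `
    -AuthenticationCredential $cred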
A few months back, I rode the Prudential RideLondon-Surrey 100. That’s a 100-mile sportive, except that my Garmin recorded the route as 155.9km, or 96.871769 miles. Somewhere it seems I missed about 5km/3 miles… or maybe I just cut all the corners!
Well, even though I was using GPS, tonight I found something that might account for a little bit of that variation – it seems my Garmin cycle computer was set to the wrong wheel circumference. Not wildly out – about 0.8% – but it won’t have helped.
The Edge 810 software has options within a bike profile for both manual and automatic wheel size adjustment. For my Bianchi C2C profile, it was set to automatic and had decided that my wheel circumference was 2088mm, probably when I originally paired my speed/cadence sensor as, according to the Garmin website:
“Wheel size is automatically calculated when a Garmin Speed/Cadence Bike Sensor (GSC 10) is paired to a GPS-enabled device”
With 700x23C tyres fitted, it should actually be closer to 2096mm (which seemed to be the default when I switched to a manual setting – or maybe that’s what I had originally entered before it was overridden by the software?). However, I switched to 700x25C tyres about a year ago, and those have a circumference of around 2105mm according to this table (and this one).
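To put some numbers on that (my arithmetic – and it assumes the sensor-derived distance was used in the total, rather than pure GPS):

\[ \frac{2105 - 2088}{2105} \approx 0.008 \quad\Rightarrow\quad 155.9\,\text{km} \times 0.008 \approx 1.3\,\text{km} \]

so the wrong circumference could explain a little over a kilometre of the missing distance.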
For completeness, I checked the profiles for my mountain bikes too – they both showed an automatic wheel size of 0mm (so presumably get their distance from GPS) but have now been changed to 2075mm for 26×2.20″ tyres and 2037mm for 26×1.85″ (the default seemed to be 2050mm).
As for the rest of the difference – well, tyre pressures are a factor – as is the weight of the rider. One school of thought says you should put some paint or water on your tyre, ride along and then measure the gaps between the dots. That assumes you ride in a straight line and that the other factors (weight, tyre pressure, etc.) remain constant between rides.
If the organisers said it’s a hundred miles, then I’ll go with that. Hopefully now I’ve amended the wheel circumference that will help a bit in future though.
Just as when I installed the Google Play Store, I first had to unhide Developer Options (by tapping 7 times on the device serial number in Settings) and enable ADB (the Android Debug Bridge). After connecting to a PC with a USB cable and accepting the connection, I was able to use ADB to control the settings on the Kindle Fire.
HowToGeek has an article about installing ADB but I didn’t do that… I used the copy that came with the script I had previously used to install the Google Play Store (from @RootJunky) – simply by opening up the command prompt and changing directory to the folder that had adb.exe in it…
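For anyone trying the same, the session looks something like this (the path is a placeholder for wherever your copy of adb.exe lives):

cd C:\path\to\adb                  # folder containing adb.exe (hypothetical path)
.\adb devices                      # confirm the Fire is connected and authorised
.\adb shell settings list global   # browse the device's global settings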
In preparation for my summer holidays this year, I bought a new tablet to replace my ageing (and slow) Tesco Hudl. Again, I didn’t want to spend much money – I’ve suffered at the hands of Apple’s built-in obsolescence previously – and the Amazon Kindle Fire HD 8 seemed to fit the bill quite nicely.
The only trouble with a Kindle is it runs FireOS – a fork of Android – rather than a “stock” Android. That means no Google Play store, which means you’re limited to the apps that are in Amazon’s store. By and large that’s OK – I installed iPlayer, OneDrive, OneNote, Spotify, etc. but there is no YouTube and the browser is Silk, not Chrome.
Just one point to note – Amazon must have updated their method of unlocking the device to remove ads from the lock screen as that part of the script didn’t seem to take effect on my device.
With the last hurdle out of the way, I can recommend that my 10-year-old son, who wants to buy a tablet (and is too young for a smartphone), buys something cheap like a Kindle rather than spending far too much money on a more fully-featured tablet in a dying market.
Last weekend, I had an issue with the touch screen on the family laptop. This not-quite-three-year-old device (running Windows 10) is on its second screen (the first one gave up after 13 months) and the laptop was working fine, just that the touch screen acted like, well, a screen (i.e. no touch).
Helpfully, both Adi Kingsley-Hughes (@the_pc_doc) and Jack Schofield (@jackschofield) chipped in with suggestions but it remained a mystery.
The issue persisted through a reboot (which casts some doubt on the eventual “fix”) and Lenovo’s published drivers were woefully out of date, but I found a Dell forum post with something that might have helped in some way:
“Think about it, if you are not using the touchscreen and keeping it active, in this energy efficient world and age, a system would turn off unnecessary devices!!
THE SOLUTION: Device Manager – Universal Serial Bus Controllers – Generic USB Hub Properties -( Under POWER tab: the one that has “HID-compliant Device 100mA” attached) Power Management – UNCHECK-“Allow computer to turn off this device to save power”
If you have problems or are not sure if it is the correct HID-compliant Device, just look under the Driver Details and hit the drop down box to scroll through all those different labels until it clearly says “Touchscreen” under “Bus Reported Device Description”
Fixed my problem pretty easily.” [Nate97]
I say “might”, because the results were not immediate – and if this worked, then why didn’t a reboot? There were some further suggested steps too:
2. Find the Touch screen driver under Mice and Other Pointing Devices > USB Touchscreen Controller(A111). You’re going to uninstall this and check the box that says “Delete the driver software for this device”. Restart your computer.
3. If the feature is still not back, open Device Manager -> Human Interface Devices. Right-click HID compliant touch screen, then uninstall. When you restart the PC, it will reinstall.
4. Or if you cannot locate any USB Touchscreen Controller(A111), please try to look for an option called “USB Root Hub (xHCI)” under USB Controllers or Universal Serial Bus. If it is labeled as disabled (a little faded or a lighter shade of gray), right-click on it then select enable. That may bring the touchscreen back.”
Again, it didn’t seem to make much difference and I went to bed with a non-functional touch screen, ready to write it off as a hardware issue; however, the next day the touch screen was working again. I’m not sure which (if either) of these “fixes” worked… but I’m posting this in case it helps someone else…
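Incidentally, the power-management setting quoted above can also be flipped from PowerShell rather than clicking through Device Manager – a sketch only, which assumes you can identify the right device from its InstanceName (and needs an elevated prompt):

# Find the USB devices whose power management Windows controls, then stop
# Windows turning them off to save power (narrow the filter to the right hub)
Get-WmiObject -Namespace root\wmi -Class MSPower_DeviceEnable |
    Where-Object { $_.InstanceName -like 'USB\*' } |
    ForEach-Object { $_.Enable = $false; $_.Put() }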
A datacentre is just a datacentre, isn’t it? After all, isn’t it just a bigger version of the server room in the basement? But what about the huge datacentres that run cloud services? What’s it like inside the Microsoft datacentres that host Azure, Office 365, etc.?
As Doug Hauger (General Manager for National Cloud Programs at Microsoft) explained, organisations look to use a cloud datacentre for scale and professionalism. Anyone can run a datacentre but the Microsoft Cloud is about robustness and security – whether that’s how staff are monitored or the physical and logical security models.
With its cloud datacentres, Microsoft is aiming to meet customer needs around digital transformation, where the question is no longer “why should I go to the cloud?” but “how do I innovate more quickly in the cloud?”. That’s what drives the agenda for where to expand geographically, where to enhance scalability, etc.
Despite the question I posed in the opening paragraph of this post, a true datacentre is worlds apart from the typical server room in the basement (or wherever). The last time I got to visit a datacentre was when I was working at Fujitsu and I visited the London North facility, an Uptime Institute Tier III datacentre that won awards when it was built in 2008. Seeing the scale at which a modern datacentre operates is impressive. Then ramp it up some more for the big cloud service providers.
In the webcast, Christian Belady (General Manager Cloud Infrastructure Strategy and Architectures at Microsoft) explained that datacentres are the foundation of the Internet – they are where all the cloud services are served from (whether that is Microsoft services, or those provided by other major players).
There are several layers of physical security from the outside fence in: screening people, controlling access to parts of the buildings, even down to the cabinets themselves, with critical customer data held in locked cabinets covered by video surveillance. Used disks are destroyed – wiped and then crushed on site! The physical security surpasses anything provided for on-premises servers, and the logical security continues that defence in depth.
Each custom-built server is actually two computers, with tens of thousands of computers per room and hundreds of thousands per datacentre – each datacentre the size of 20-30 football fields. Look at the racks and you can see the attention to detail – keeping things orderly not only adds to operational efficiency but it looks good too! The enterprise servers that most of us run on-premises have plastic bezels to make them look pleasant; instead, Microsoft’s servers eliminate anything that has no useful function…
Each iteration of datacentres becomes more industrialised – with improvements to factors such as cooling (which is one of the biggest power usage factors).
A generation 2 datacentre from around 2007 has a Power Usage Effectiveness (PUE) score of 1.4-1.6 (for comparison, the Fujitsu facility I mentioned earlier has a PUE of 1.4, whilst a typical enterprise datacentre from the 2000s with a normal raised floor would have a PUE of 2-3). Cold and hot aisles are used, with hot air returned to coolers and recirculated. Microsoft also worked with manufacturers to raise server operating temperatures to an acceptable level, rather than the lower levels used previously, reducing the cooling demands.
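For context, PUE is simply the ratio of everything the facility draws to what actually reaches the IT equipment (my summary of the metric, not from the webcast):

\[ \mathrm{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}} \]

so a PUE of 1.4 means around 0.4W of cooling and power-distribution overhead for every 1W of IT load, with a theoretically perfect score of 1.0.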
Moving on to generation 4, efficiency improved further (a PUE of 1.1-1.2), eliminating chillers by removing roofs, driving down costs and using outside air for cooling. Containers use the outside air together with a system of adiabatic cooling, spraying a mist which evaporates before it hits the servers. Such datacentres use a lot less water too (compared with older styles of datacentre).
With the latest (generation 5) datacentres, further improvements are made, combining the best features of the other generations – learning and adapting. The PUE is now down to 1.1 (and below at certain times of year), with running costs also improved. There are still hot and cold aisles but no raised floor and, instead of outside air, the datacentres use a closed liquid loop system (no chiller – the water is cooled outside) – and that water doesn’t need to be potable.
Inside, the Microsoft datacentres are very industrial. Whole racks are brought in (pre-tested), rather than single servers, and, as previously mentioned, Microsoft designs and builds the servers for use at scale, stripping out enterprise features and retaining only what’s needed for the Microsoft environment.
Whilst I’ve worked with customers who have visited Microsoft datacentres in Dublin, it seems unlikely that I’ll ever get the chance. Watching the Modern Workplace webcast gave me a fascinating look at how Microsoft operates datacentres at scale though – and it truly is awe-inspiring. To find out more, visit the Microsoft website.
Like cloud a few years ago and then big data, DevOps is one of the buzzwords of the moment. So what does it actually mean? And is there more to it than hype?
There are many definitions but most people will agree that, at its core, DevOps is about closer working between development and operations teams (and for infrastructure projects read engineering and operations teams). Part of DevOps is avoiding the “chuck it over the fence” mentality that can exist in some places (often accompanied with a “not invented here” response). But there is another side too – by increasing co-operation between those who develop or implement technology and those who need to operate it, there are opportunities for improved agility within the organisation.
DevOps is as much (probably more) about people and process than technology, but the diagram below illustrates the interaction between teams at different stages in the lifecycle:
Businesses need change in order to drive improvements and respond to customer demands.
Development teams implement change.
Meanwhile, operations teams strive to maintain a stable environment.
Just as agile methodologies sit between the business and developers, driving out requirements that are followed by frequent releases of updated code with new functionality, DevOps is the bridge between the development and operations teams.
Configuring, managing and deploying resources (for example into the cloud) is improved with DevOps processes such as continuous integration (CI). No doubt some will argue that CI has existed for a lot longer than the DevOps term and that is true – just as virtualisation pre-dates infrastructure-as-a-service!
The CI process is a cycle of integrating code check-ins with testing and feedback mechanisms to improve the quality of the code:
In the example above, each new check-in to the version control system results in an automated trigger to carry out build and unit tests. These will either pass or fail, with corresponding feedback. When the tests are successful, another trigger fires to start automated acceptance tests, again with feedback on the success or failure of those tests. Eventually, the code passes the automated tests and is approved for user acceptance testing, before ultimately being released.
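As a sketch of that gating logic (the Invoke-* helper functions here are hypothetical stand-ins for whatever build and test tooling you use):

function Invoke-CiPipeline {
    param([string]$CommitId)

    # Stage 1: build and unit tests, triggered by the check-in
    if (-not (Invoke-BuildAndUnitTests -Commit $CommitId)) {
        Write-Warning "Build/unit tests failed for $CommitId - feedback to the developer"
        return
    }

    # Stage 2: automated acceptance tests, triggered by a passing build
    if (-not (Invoke-AcceptanceTests -Commit $CommitId)) {
        Write-Warning "Acceptance tests failed for $CommitId - feedback to the developer"
        return
    }

    # All automated tests passed - ready for user acceptance testing
    Write-Output "$CommitId approved for UAT"
}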
Continuous integration works hand in hand with continuous delivery and continuous deployment to ensure that development teams are continuously shipping new code, in line with the release management processes that the operations teams require in order to maintain their service.
Continuous delivery allows new versions of software to be deployed to any environment (e.g. test, staging, production) on demand. Continuous delivery is similar to continuous integration but can also feed business logic tests. Continuous deployment takes this further with every check-in that passes all tests ultimately ending up with a production release – the fastest route from code to release.
No one tool is used to implement DevOps – DevOps is more about a cultural shift than it is about technology – but there are many tools that can assist with implementing DevOps processes. Examples include Chef, Puppet (configuration management) and Jenkins (continuous integration). Integrated development environments (such as Visual Studio and Eclipse) also play a part, as do source control systems like Visual Studio Team Services and Git/GitHub.
So DevOps is more than a buzzword. It’s a movement that’s bringing with it a whole stack of processes and tools to help drive towards a more agile environment – IT that can support business change. But DevOps needs a change of mindset, and for me the big question is “is your organisation ready for DevOps?”.
During a recent project, I was caught out by a lack of consistency in naming for Azure resources (and an inability to rename some of them afterwards). Some resources had underscores in their names (_), some had hyphens (-) – and then there were the inconsistencies in case. For someone who generally pays attention to details like this, I found it all very frustrating.
So I started to look into what a standard for naming Azure resources might look like (I also asked Microsoft for advice). The general advice I received was to “stick to numbers and letters – no special characters because some resources won’t accept them”. Then, whilst trying to give a subnet a name beginning with a number, I found that subnet names can’t start with a number or a space.
So, let’s make that “use letters and numbers only, in lower case, and always starting with a letter”.
I generally advise against including organisation names in resources like server names (because resources often outlive organisation names) but, in this case, the organisation name is likely to provide some uniqueness. So, let’s try “use letters and numbers only, in lower case, prefixed with an abbreviation for the organisation name, starting with a letter”.
Then, let’s think about the naming for the resources themselves – a two-letter code for the resource type (rr) and a suitable number of digits to count the instances (nn) – something like:
orgrrnn
This has two digits on the end, though three, or even four, may be better depending on the size of the organisation (remember to plan for growth!). You’ll also need to consider the total length of the name – between 3 and 15 characters appears to be the sweet spot (some resources may allow longer names; a few require shorter).
Resource types might be:
ad – Active Directory
cs – Cloud Service
db – Database
gw – Gateway
ln – Local Network
ms – Media Service
rg – Resource Group
sg – Storage Account
sp – App Service Plan
sn – Subnet
tm – Traffic Manager
vm – Virtual Machine
vn – Virtual Network
wa – Web App (App Service)
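A hypothetical helper to apply the convention (the function name and implementation are mine, purely illustrative):

function New-ResourceName {
    param(
        [string]$Org,    # organisation abbreviation (lower case letters/numbers)
        [string]$Type,   # two-letter resource code from the table above
        [int]$Instance   # instance number, zero-padded to two digits
    )
    # Letters and numbers only, lower case, starting with a letter
    ('{0}{1}{2:D2}' -f $Org, $Type, $Instance).ToLower()
}

New-ResourceName -Org 'exam70534' -Type 'db' -Instance 2   # exam70534db02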
For my recent batch of resources when I was studying for an exam, that led to names like:
exam70534ms01 (Media Service 01)
exam70534db02 (Database 02)
(illustrated here for both ASM and ARM)
That looks to me to be unique, consistent and meaningful, but I’m sure there are other considerations too! Indeed, the Azure documentation has some quite complex recommended naming conventions for Azure resources. My concern with these is that they are not consistent (remember that not all resources can accept certain characters), whereas the naming approach I’ve outlined in this post is.