Microsoft SQL Server overview


I wrote this post a few months ago… and it crashed my blog. Gone. Needed to be restored from backup…

…hopefully this time I’ll have more luck!

One of the advantages of being in the MVP Reconnect programme is that I occasionally get invited to webcasts that open my eyes to technology I’ve not had a lot to do with previously. For many years, one of the big holes in my knowledge was around Microsoft SQL Server. That was until I saw Brian Kelley (@kbriankelley)’s “Brief overview of SQL Server”. The content’s not restricted, so I thought I’d republish some of it here for others who are getting their head around the major on-premises components of the Microsoft Data Platform.

SQL Server Editions

There are several editions of SQL Server available and these are the key differences (updated for 2017):

  • Express Edition (the replacement for MSDE) is a free version, with some limitations around database size, memory, etc.
  • Standard Edition lacks some enterprise features but has high availability options and suits many application workloads.
  • Enterprise Edition is the full-functionality product (but can be expensive).
  • Developer Edition (not licensed for use in production) offers the full feature set but can also run on a client operating system, whereas Enterprise Edition will only run on server operating systems.
  • Web Edition has reduced functionality and is intended for public websites (only available to service providers).
  • Compact Edition is another free version, intended for embedded databases in ASP.NET websites and Windows desktop applications.

Although SQL Server is often thought of as an RDBMS product, it’s really a suite of systems, under the SQL Server name. Usually that means the database engine but there are many parts, each of which has a distinct setup (i.e. you don’t need the database service for SQL Server Analysis Services and vice versa).

SQL Server Analysis Services (SSAS)

SSAS (which traces its roots back to OLAP Services in SQL Server 7.0) is an online analytical processing (OLAP) tool intended for data warehousing and data mining.

One advantage of OLAP is that calculations can be pre-generated in overnight jobs (used for roll-ups – e.g. totals, averages, etc.). That provides fast results to business users who would otherwise need to run complex calculations against a transactional system (e.g. sales data by region, month, quarter, etc. can be aggregated ahead of time).

SSAS is comparable to IBM Cognos or Oracle Essbase (normally packaged with Hyperion for accounting, etc.).

Some SSAS jargon includes:

  • Star schema/snowflake schema – warehouse database designs that differ from transactional designs (these can be built in the RDBMS, with SSAS layered on top – see the sketch after this list)
  • Cubes
  • Dimensions
  • Tabular model
  • Data Analysis Expressions (DAX) – the formula and query language used with the SSAS tabular model
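
To illustrate the star schema point (a minimal sketch – the database, table and column names here are all hypothetical), a fact table and its dimension tables might be created with T-SQL like this, run via the sqlcmd utility that ships with SQL Server:

-- star-schema.sql: dimension tables hold the descriptive attributes used for slicing and rolling up
CREATE TABLE dbo.DimDate (
    DateKey   int      NOT NULL PRIMARY KEY, -- e.g. 20170904
    [Month]   tinyint  NOT NULL,
    [Quarter] tinyint  NOT NULL,
    [Year]    smallint NOT NULL
);
CREATE TABLE dbo.DimRegion (
    RegionKey  int          NOT NULL PRIMARY KEY,
    RegionName nvarchar(50) NOT NULL
);
-- the central fact table holds the measures, keyed by the dimensions around it
CREATE TABLE dbo.FactSales (
    DateKey     int   NOT NULL REFERENCES dbo.DimDate (DateKey),
    RegionKey   int   NOT NULL REFERENCES dbo.DimRegion (RegionKey),
    SalesAmount money NOT NULL
);

sqlcmd -S localhost -E -d DemoDW -i star-schema.sql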

SQL Server Integration Services (SSIS)

SSIS (since 2005) is heavily used for extract, transform and load (ETL) workloads – i.e. to get data from a source, manipulate it and pass it to a destination. It can be used to build a data warehouse and then data marts, or to move data between systems. Basically, it’s a back-end batch processing system that prepares the data on which analysis and data mining take place.

SSIS is a replacement for Data Transformation Services (DTS). It’s not limited to SQL Server for source/destination so can talk to Oracle, Excel spreadsheets, other ODBC connections, etc.

The drag-and-drop interface is very powerful, with the full functionality and flexibility of Microsoft .NET behind it.

SSIS is comparable with Informatica (or Clover, etc.).

Some SSIS jargon includes:

  • Packages (the unit of deployment and execution, containing all the logic – see the example after this list)
  • Tasks (what’s being carried out)
  • Dataflow tasks (how you go from source to destination – there could be multiples)
  • Transformations (manipulating the data)
  • Business Intelligence Markup Language (BIML) – an XML dialect for defining and generating packages programmatically
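
Packages are typically built in a graphical designer but, once deployed, they can also be run from a command prompt with the dtexec utility (a sketch – the package path here is hypothetical):

dtexec /File "C:\SSIS\LoadWarehouse.dtsx"

An exit code of 0 indicates that the package completed successfully.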

SQL Server Reporting Services (SSRS)

SSRS was introduced with SQL Server 2005 and became so popular that it was ported back to SQL Server 2000!
It is a reporting engine, used to publish reports for viewing in a browser. Early versions were built on IIS but, since 2008, SSRS has run directly on http.sys.

SSRS can be integrated with SharePoint (for report security based on SharePoint security), while the native, standalone mode is browser-based, allowing users to look at folders, find reports, and run a report with parameters. Printing used to be via an ActiveX control but now (since 2016) reports print to PDF (or open in a PDF reader).

There are two ways to build reports: Report Builder (a lighter-weight tool, launched from the web portal, aimed at business analyst-type power users) or Report Designer (a full Visual Studio-based environment for complex designs). There is also a subscription capability, so users can subscribe to reports.

SSRS can be compared with SAP BusinessObjects and Tableau.

SSRS jargon includes:

  • Reports
  • Data sources
  • Datasets
  • ReportServer (API to integrate with other products)
  • Native mode vs. integrated mode (SharePoint)

SQL Server Database Engine

The SQL Server database engine is what most people think of when SQL Server is mentioned.
It is traditionally a relational database management system (RDBMS) although it now contains many other database capabilities. It was originally derived from a Sybase product (until SQL Server 6.5).

SQL Server supports both multiple databases per instance (applications can connect to each and join across them) and, since SQL Server 2000, multiple instances per server – the first is the default instance, then named instances can be created.

SQL Server uses a SQL language variant called Transact-SQL (T-SQL) for interaction. A GUI is provided in SQL Server Management Studio but it’s also possible to interact from the command line or via PowerShell.
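
For example (a minimal sketch, assuming a default instance on the local machine and Windows authentication), the same quick query from the command line and from PowerShell (using the SqlServer module) looks like this:

sqlcmd -S localhost -E -Q "SELECT @@VERSION;"
Invoke-Sqlcmd -ServerInstance localhost -Query "SELECT @@VERSION;"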

SQL Server also has a scheduler (SQL Server Agent), which can alert on success/failure and allows the creation of elaborate scheduling routines with notifications and the ability to run code. The database engine itself is comparable with IBM DB2, Oracle, PostgreSQL, Sybase, MySQL and MariaDB.

SQL Server 2016 features include:

  • High availability options, including Always On failover cluster instances; Always On availability groups (which are more flexible because they don’t have to replicate and fail over everything); database mirroring (one database mirrored across multiple systems; now deprecated in favour of availability groups); and log shipping.
  • Several encryption options, including built-in (certificates, asymmetric keys, symmetric keys); Enterprise Edition also has Transparent Data Encryption (TDE) to encrypt the database at rest and stop copies of the database from being loaded elsewhere; connection encryption (SSL/TLS since 2005); Always Encrypted is new for 2016 (transparent to the application, with encryption and decryption handled in the client driver, so SQL Server itself never sees the plain text) – data is stored in encrypted form within the database.
  • SQL Server and Windows authentication (server-based logins or Active Directory). An instance can accept Windows logins only, or Windows and SQL Server logins together (mixed mode), but not SQL Server-based logins alone.
  • Replication options to move data between servers.

Other security features include:

  • Audit objects (who did what?)
  • Granular security permissions
  • Login auditing (failed logins are written to the SQL Server error log text file and to the application event log)
  • Dynamic data masking (depending on who needs to see the data – e.g. store social security numbers but only show part of the data to most users; this is only obfuscation, as the data is still stored in clear text – see the sketch after this list)
  • Row-level security (to filter the rows a user can see)
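
As an example of dynamic data masking (a minimal sketch – the table, column and role names here are hypothetical), a column can be masked in place and the mask lifted only for selected principals:

-- masking.sql: show most users only the last four digits of the SSN
ALTER TABLE dbo.Customers
    ALTER COLUMN SSN ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)');
-- principals granted UNMASK continue to see the real data
GRANT UNMASK TO PayrollRole;

sqlcmd -S localhost -E -d DemoDB -i masking.sql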

Each new version brings performance enhancements, e.g. columnstore indexes, in-memory OLTP tables, query optimisation.

New Technologies in 2016 include:

  • JSON support. Query and return data in JSON format (see the example after this list). SOAP and XML have been supported since 2005 but SOAP is now deprecated in favour of JSON (which is popular for RESTful systems).
  • Master data services.
  • Polybase (not to be confused with a clustering solution – it’s about talking to other data sources, e.g. Hadoop, Cloudera and Azure storage, to be expanded to include Oracle, Teradata, Mongo, Spark and more).
  • R Services/R Server (R within the database and also R Server for data science/big data queries).
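
To illustrate the JSON support (a sketch, assuming the AdventureWorks sample database is installed), the FOR JSON clause returns relational results as JSON text, ready to hand to a RESTful client, while OPENJSON works in the other direction, shredding JSON into rows and columns:

sqlcmd -S localhost -E -d AdventureWorks -Q "SELECT TOP (2) FirstName, LastName FROM Person.Person FOR JSON PATH;"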

2017 builds on 2016 to include:

  • Linux and Docker support. Starting with SQL Server 2017, SQL Server is available for either Windows or Linux systems and it’s available as an installable application or for Docker containers.
  • SQL Server R Services has been renamed SQL Server Machine Learning Services, to reflect support for Python in addition to R.

There are many more features in the Microsoft documentation but these are the most significant updates.

But what about the cloud?

This post provided a quick run-down of some of the major on-premises SQL Server components but, just as with Microsoft’s other products, there are cloud alternatives too. I’m planning a follow-up post to cover these so watch this space!

Some tips from my first few weeks with a GoPro Hero action camera


I’ve been interested in having a play with an action camera for a while now. I figure I can get some fun footage on the bikes, as well as ski-ing next winter, and I missed not having a waterproof camera when I was lake-swimming in Switzerland a few weeks ago!

So, when I saw that a contact who had upgraded to the Hero 5 was selling his GoPro Hero 3 Silver Edition, I jumped at the opportunity.

My camera came to me with quite a few accessories and I picked up some more for not too much money at HobbyKing (shipped from China in 3 weeks – don’t pay GoPro prices for things like a tripod mount or a lens cover!).

Whilst getting used to the camera’s controls (oh yes, and opening the waterproof case for the first time), I came across some useful tips on the ‘net… including loads of videos from a guy called Bryn, whose new user’s guide was useful to make sure I had everything set up as I needed:

Once I had everything set up and a fast 64GB card installed, my first outing on a bike with the GoPro was helmet-mounted. That was OK, but it’s a bit weird having all that weight on your head, and it’s also not too handy for working out whether the camera is running. Since then, I’ve got a bike mount, so my GoPro now sits below the stem – which means technically it’s upside-down:

No worries – the Internet delivered another video telling me how to set the camera up for upside down recording:

One thing to watch out for is the battery life – don’t expect to be filling your memory card on a single battery – but it should last a while. It’s just that a GoPro isn’t going to work as a DashCam or similar (there are actually some good articles on the ‘net as to why you would probably want to use a specialist dashcam anyway – I have a NextBase 402G for that). Anyway, I don’t want to have to edit hours of footage so knowing I can only record a few minutes at a time is good for me (I have hours of recordings on MiniDV digital tape that have been waiting to be transferred to disk for years!).

I did recently use the GoPro to record some presentations at work: great for a wide-angle view – but it got pretty warm being plugged into a power source the whole time (so again, a proper video camera would be the right thing to use – and don’t think about using a DSLR or a compact camera – I tried that too and they generally switch off after 20-30 minutes to prevent overheating). One thing I found is that each video recorded on the GoPro is chopped into chunks of around 3.55GB (I was recording 1080p). The file naming convention is worth getting used to.

Each recording uses the same index number (0001, 0002, etc.) but you’ll find that the first chunk is named GOPR0001.MP4, the next is GP010001.MP4, then GP020001.MP4, etc. So, when selecting a group of files that relate to the same recording, look carefully at the index numbers (the date and time stamps should help too).
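
As an aside, if you ever want to stitch the chunks for one recording back together, ffmpeg (my suggestion – it’s not a GoPro tool) can join them losslessly with its concat demuxer:

# build an ordered list of the chunks for recording 0001
for f in GOPR0001.MP4 GP??0001.MP4; do printf "file '%s'\n" "$f"; done > chapters.txt
# join them without re-encoding
ffmpeg -f concat -safe 0 -i chapters.txt -c copy recording-0001.mp4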

Also, depending on how you import the videos (i.e. copying directly rather than using an application like MacOS Image Capture), you may see some .THM and .LRV files. The GoPro support site explains that these are thumbnail and low-resolution video files respectively.

So, that’s a few things I’ve discovered over the last few weeks and just a little bit of GoPro tinkering. Please leave a comment if you’ve anything more to add!

Seven technology trends to watch 2017-2020


Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but really didn’t have the time.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud service because of regulatory or other reasons but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale that Microsoft (Amazon and Google) operate at, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses without data and those with. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service – you are the service). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the first wave of the Internet was about connecting computers, the second was about connecting people, and the third is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there. Then, we take that data and use it to make intelligent decisions and apply them in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other. We use data to train algorithms, then we use the algorithms to process more data.

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3. Blockchain

Bitcoin is just one implementation of something known as the blockchain – in this case, as a digital currency.

But Blockchain is not just for monetary transactions – it’s more than that. It can be used for anything transactional. Blockchain is about a distributed ledger. Effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.

The magic of blockchain is that, as the chain gets longer, the work needed to rewrite its history grows – effectively, the more the chain is used, the harder it becomes to tamper with. That is what gives the ledger its integrity.
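
To make that tamper-evidence concrete, here’s a toy sketch using the sha256sum utility (GNU coreutils – on a Mac, shasum -a 256 does the same job). Each block records the hash of the block before it, so altering any historic block changes every hash that follows:

hash0=$(printf 'genesis block' | sha256sum | cut -d' ' -f1)
hash1=$(printf 'block 1 data %s' "$hash0" | sha256sum | cut -d' ' -f1)
hash2=$(printf 'block 2 data %s' "$hash1" | sha256sum | cut -d' ' -f1)
echo "$hash2"    # change anything above and this final hash no longer matches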

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect it to appear wherever people care about integrity and data resilience – databases, and anything transactional, can be anchored to a blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from – tracing products from ‘farm to fork’, with the aim of providing consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the CoCo Framework to improve performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a service (in Azure) – and you can find more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4. Into the third dimension

So, we’ve had the ability to “print” in three dimensions for a while, but now 3D is going further: we’re taking physical worlds into the virtual world and augmenting them with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for silly faces on photos) and they have coined the term mixed reality to describe taking untethered computing devices and creating a seamless overlap between physical and virtual worlds.

To make use of this we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D gallery, and View 3D for our phones – so we can scan any object and import it into a virtual world. With the adoption rates of new Windows 10 releases, that puts 3D on a market of millions of PCs.

This Christmas will see lots of consumer headsets in the market, and mixed reality will really take off after that. Microsoft is way ahead on the plumbing – built all whilst we weren’t looking. They held their HoloLens product back so it could be big in business (so that it wasn’t a solution looking for a problem). Now it can be applied to field-worker scenarios and to visualising things before they are built.

To give an example: recently, I had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5. The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! Humans vs. machines? IBM’s Deep Blue beat people at chess, Watson won at Jeopardy!, then Google took on Go. Now AI is heading into the economy and displacing jobs, automating business processes and economic activity. Mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t do that when we’re recognising images, speech, etc. Instead, our inspiration is biology: neural networks, etc. – and using maths to train complex layers of neural networks led to deep learning.

Image recognition was “magic” a few years ago but now it’s part of everyday life. Nvidia’s shares are growing massively due to GPU requirements for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today – and because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker, and a skills kit is used to teach Cortana about an app – users then ask the skill, by name, to do something. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…).

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And it’s already here. It’s mainstream. We use recommendations for playlists, music, etc. We’re recognising people, emotions, etc. in images. We already use AI every day…

6. From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and it lasts for thousands of years so can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.

7. Quantum computing

With massive data storage… the next step is faster computing – that’s where Quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

https://youtu.be/doNNClTTYwE

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations, not translating into physical things in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off it’s about controlling particles with nanotechnology.
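
For the mathematically inclined, superposition can be written down quite compactly: a single qubit’s state is |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers with |α|² + |β|² = 1 – measuring the qubit yields 0 with probability |α|² and 1 with probability |β|².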

Where a classical bit is simply on or off, one quantum bit (a qubit) has multiple states at the same time. Qubits can be used to solve difficult problems (the RSA-2048 challenge problem would take a classical supercomputer around a billion years, but potentially just 100 seconds on a 250-qubit quantum computer). This can be applied to encryption and security, health and pharma, energy, biotech, the environment, materials and engineering, AI and ML.

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn p.a. on R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared seven “technology bets” (and there are others that I haven’t covered, like the use of graphene) – my list is very much influenced by my work with Microsoft technologies and services. We can’t always predict the future, but all of these are real… the only bet is how big they will be. Some are mainstream, some are up-and-coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

Short takes: iPhone broadcasting wrong number; fractions in HTML; Word comment authors


Another collection of things I found on the Internet that might or might not be useful for other people.

SMS and phone calls using the wrong number on an iPhone

In common with most people who “work in IT”, I get called upon for family IT support. In truth, I get called upon a lot less since my trainee geek (aged 12¾) deals with most of that for me! Last weekend though, he was stumped by the problems my Mother-in-law was having with her iPhone.

She’d bought a new phone and changed providers, then ported her number to the new provider. Although calls were reaching her with the correct number on her SIM, SMS and outbound calls were using the temporary number allocated prior to porting her “real” number.

I found the solution via the Giffgaff forums – where essie112mm describes a combination of steps including turning iMessage and Facetime on/off. The crucial part for me was Settings, Phone, My Number – where I needed to edit the number to the one that we wanted to use.

Writing fractions in HTML

In the previous section, I wanted to write ¾ using the correct HTML. As it happens, WordPress has taken my HTML entity and replaced it with a raw ¾ symbol, but I found this article by Charles Iliya Krempeaux (@Riever) useful reading for representing less common fractions in HTML.
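
For reference, HTML has named entities for the common fractions, and less common ones can be built up from superscript, subscript and the fraction-slash entity – a quick sketch:

&frac14; &frac12; &frac34;          (renders as ¼ ½ ¾)
<sup>2</sup>&frasl;<sub>3</sub>     (renders as a built-up two-thirds)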

Microsoft Word removes the author name from comments

I write a lot of documents in my professional life. I review even more for other people – and I use the reviewing tools in Microsoft Word extensively. One “feature” that was frustrating me though was that, every time I saved a file, my comments changed from “Mark Wilson” to “Author”.

My colleague Simon Bilton (@sabrisual) pointed out the fix to me – buried in Word’s options under Trust Center, Trust Center Settings, Privacy Options, Remove personal information from file properties on save (thanks to Stefan Blom in this TechNet forum post).

Remove personal information from file properties on save

It seems that our admins have set this by Group Policy now so I won’t have the problem any more but it’s a useful one to be aware of…

Running the Pixlr Editor (or other Adobe Flash-based apps) in a modern browser


Many people will be familiar with the Pixlr browser-based image editing tool, Pixlr Editor. Unfortunately, it’s developed in Adobe Flash, a technology that’s rapidly falling out of favour with developers (about time too!) and losing browser support.

A few weeks ago, I tried to run Pixlr Editor in Chrome and found it wouldn’t work. Same for Safari. Edge gave a similar experience – in fact only Internet Explorer would play nicely!

Then I found Paulo Amaroso’s Google+ post about the issue (yes, Google+!). It seems that what I needed to do was click on the “omnibar” (the secure padlock or info button to the left of the URL in the browser) to open up Chrome settings and select Flash then Always allow on this site.

Interestingly, I’m now seeing browsers prompting me to enable Flash for the website… I suspect Pixlr have updated their website to improve the user experience.

Allow Flash for pixlr editor website in Chrome

Adopting cloud services means being ready for constant change


There’s a news story today about how Microsoft may be repositioning some (or all) of Skype for Business as Microsoft Teams (the collaborative group-based chat service built on various Office 365 services but Skype for Business in particular).

The details of that story are kind of irrelevant to this post; it’s the reaction I got on Twitter that I felt the need to comment on (when I hit 5 tweeted replies I thought a blog post might be more appropriate).

Change is part of consuming cloud services. There’s a service agreement and a subscription/licensing agreement – customers consume the service as the provider defines it. The service provider will generally give notice of change but you normally have to accept it (or leave). There is no option to stay on legacy versions of software for months or years at a time because you’re not ready to update your ways of working or other connected systems.

That is a big shift and many IT departments have not adjusted their thinking to adopt this new way of working.

I’ve seen many organisations move to cloud services (mostly Office 365 and Azure) while sticking with their current approach. They do things like try to map drive letters to OneDrive because that’s what users are used to, instead of showing them new (and often better) ways of working. They try to use old versions of Office with the latest services and wonder why the user experience is degraded. They think about the on-premises workloads (Exchange, Lync/Skype for Business, SharePoint) instead of the potential provided by the whole productivity platform that they have bought licences to use. They try to turn parts of the service off or hide them from users.

My former colleague Steve Harwood (@SteeveeH) did some work with one of risual’s customers to define a governance structure for Office 365. It’s great work – and maybe I’ll blog about it separately – but the point is that organisations need to think differently for the cloud.

Buying services from Microsoft, Amazon, Google, Salesforce, et al is not like buying them from the managed services provider that does its best to maintain a steady state and avoid change at all costs (or often at great cost!). Moving to the cloud means constant change. You may not have servers to keep up to date once your apps are sold on an “evergreen” subscription basis but you will need to keep client software up to date – not just traditional installed apps but mobile apps and browsers too. And when the service gains a new feature, it’s there for adoption. You may have the ability to hide it but that’s just a sticking plaster solution.

Often the cry is “but we need to train the users”. Do you really? Many of today’s business end users have grown up with technology. They are familiar with using services at home far more advanced than those provided by many workplaces. Intuitive user interfaces can go a long way and there’s no need to provide formal training for many IT changes. Instead, keep abreast of the advertised changes from your service provider (for example the Message Center in Office 365) and decide what the impact is of each new feature. Very few will need a full training package! Some well-written communications, combined with self-help forums and updated FAQs at the Service Desk will often be enough but there’s also the opportunity to offer access to Massive Open Online Courses (MOOCs) where training needs are more extensive.

There are, of course, examples of where service providers have rolled out new features with inadequate testing, or with too little notice but these are edge cases and generally there’s time to react. The problem comes when organisations stick their proverbial heads in the sand and try to ignore the inevitable change.

Using a VPN to watch ITV content outside the UK


Those who follow me on Twitter (@markwilsonit) will probably be aware that I recently spent some time in mainland Europe – travelling through France, Germany and Switzerland with my family. You’ll probably also be aware that one of my hobbies is road cycling – and that I like to watch the highlights from the three Grand Tours (Giro d’Italia, Tour de France and Vuelta a España) and from the Tour of Britain. With the Vuelta in full swing as my holiday started, I wanted to make sure I could still catch the highlights on ITV4!

Even with the new EU mobile roaming arrangements that mean I can use my mobile data allowance in other EU countries, I didn’t expect to be able to stream content reliably, so I took out a subscription to ITV Hub+, allowing me to download ITV programmes with the ITV Hub app (on Wi-Fi) and play back later, without ads. This worked brilliantly on the ferry to France but not so well once I was in my Paris hotel room, where the app detected I was outside the UK and denied access to content with a variety of error messages:

ITV Hub download error outside the UK

I was pretty annoyed – after all, there was no mention of UK-only coverage when I subscribed to the ITV Hub+ and the ITV website says:

“Where can I use a Hub+ subscription?

As long as you’re signed into your account, you’ll be able to use your Hub+ subscription almost anywhere. Watch ad-free telly on our website, download and catch up on the go on your mobile or tablet, or binge on your favourite shows with no interruptions on your Smart TV!”

but I did find the limitation in their troubleshooting guide later:

“I am abroad and can’t watch videos
The ITV Hub is only available within the UK as we don’t hold international rights for all of our shows. If you’re lucky enough to be on holiday or you live abroad, you won’t be able to watch ITV Hub until you return to the UK”

After a bit of a rant on Twitter (no response from ITV, of course), I thought about using a VPN (and @JFDuncan suggested Plex).

Unfortunately, my own VPN back to my NAS didn’t work (on reflection, L2TP/IPSec was not the best choice of transport – as @GarryMartin pointed out when I originally set it up) and I was nervous about using a third party service until Justin Barker (@JustinBarker77) suggested TunnelBear:

Recommendations are always good. And TunnelBear seemed more legitimate than some of the sites I found…

At first, I didn’t have much luck – even after following TunnelBear’s troubleshooting advice for accessing content. 24 hours later though, something had cleared (maybe I had a different IP address, maybe it was something on my iPhone) and ITV Hub+ worked flawlessly over hotel Wi-Fi and a VPN back to the UK. I could download my cycling highlights for later playback and the VPN tunnel even seemed to improve the Holiday Inn Wi-Fi reliability – possibly due to QoS restrictions prioritising potential business traffic (VPN) over leisure (downloading videos)!

I did have some challenges with playback – so I put the iPhone into Airplane Mode before watching content, just in case the ITV Hub app detected I was outside the UK again, but each time I wanted to download over the next few days I enabled the VPN and all was good. I also subscribed to TunnelBear for a month’s worth of unlimited data allowance (I soon chewed through the 1GB I got for tweeting about the service!).

Hopefully, this information will help someone else who’s frustrated by paying for a download service and then finding it doesn’t work outside the UK…

Downloading multiple YouTube videos for offline playback


A few weeks ago, I wrote about a nifty utility called youtube-dl that can be used to download content from YouTube for offline playback (indeed, I’m writing this on the last day of my holidays, having played precisely none of the content I downloaded to watch whilst I was away!).

In the original post, I suggested giving youtube-dl the URL for a playlist to download all videos in the playlist. It’s also smart in that if it detects any videos that are already present in the folder, it will skip them – e.g.:

[download] Cloud Tech 10 – 3rd July 2017 – Azure Machine Learning, Jenkins, Petya detection and more-ymKSGTR55LQ.mp4 has already been downloaded

But what if you want to download lots of videos that are unrelated – or just certain videos from a large list? In my case, I wanted to download a bunch of recent videos from the Global Cycling Network (GCN) – a YouTube channel that I often watch but which has thousands of videos – I certainly didn’t want to download the entire playlist!

Instead, create a file with the download commands for the individual videos, e.g.:

youtube-dl -f 22 https://www.youtube.com/watch?v=5RsFWlvJjOg
youtube-dl -f 22 https://www.youtube.com/watch?v=O7FxZ1kFIW0
youtube-dl -f 22 https://www.youtube.com/watch?v=iOaeo3_E8R4

Rather than sitting at the terminal, running each one and waiting, save the commands in a file with a .sh extension (assuming a Unix-based OS, like macOS) and then kick them all off at once.

My file was called dl-gcn.sh, but it’s no good just typing that name in the Terminal – the script isn’t on the shell’s path, so bash will complain:

bash: dl-gcn.sh: command not found

Instead, prefix with sh as follows:

sh dl-gcn.sh

and the downloads should run through in serial fashion, whilst you get on with something more interesting…
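
Alternatively, add #!/bin/sh as the first line of the script and mark it as executable – after that one-off step, it runs without the sh prefix:

chmod +x dl-gcn.sh
./dl-gcn.sh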

Combining GPX files for Strava


This morning was spent on my bike… as was a fair chunk of this afternoon… as is a fair chunk of many summer weekends, much to Mrs W’s disappointment.

My friend Andy and I put in 60 miles in the sunshine, on a big loop around Milton Keynes. It seems my route planning is pretty spot on, as it was almost the exact opposite of a charity ride going the other way around (we passed the same riders twice!). Unfortunately, my ability to “press the start button on my Garmin cycle computer” is clearly less good – I was about a mile from home and heading out of town when I realised I’d forgotten to start tracking my ride!

My OCD can’t cope with this. It would be able to cope with turning around, going back up the hill, starting the computer and starting the ride again – but not with some missing kilometres in my ride data! Luckily, Andy was also riding with a Garmin bike computer. Even though he’d also forgotten to start his, he was wearing a Garmin watch too – so I could combine his data and mine (we’d ridden side by side for the first part of the ride…).

I’ve blogged before about GPS Track Editor, which is a fantastic piece of free software. Using this, I could trim Andy’s data to just the part I had missing, then combine it with mine and merge the two tracks (the short gap doesn’t matter – Strava will straight-line the route between the two points). I also tried merging the files with a tool from gotoes.org – unfortunately, that ended up with a ride that was effectively double the length of what we rode (two loops). It would probably have worked with my edited files, but I could also merge them in GPS Track Editor…

Combining tracks in GPS Track Editor
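
Incidentally, for those who prefer the command line, GPSBabel (not what I used here, but another tool that can do the job) has a track filter that merges tracks from multiple GPX files, sorting the combined points by timestamp:

gpsbabel -i gpx -f andy.gpx -f mark.gpx -x track,merge -o gpx -F combined.gpx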

I then deleted the original (short) ride from Strava and re-uploaded. Sorted.

Just one thing to sort out – all of the PRs I got on today’s ride (and there were a few) were recorded as second places by the second upload. No worries – Strava has a “refresh my achievements” tool, which sorted out that particular issue. Now my ride has the complete distance… and my achievements are correct too…

Providing fast mailbox access to Exchange Online in virtualised desktop scenarios


In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say, is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – for subscription-based, evergreen Office clients) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms. You will introduce latency.
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, they also state “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, they will have a poor user experience connecting to mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user-experience trade-off between reducing login times and reducing the impact of slow network access to the mailbox.
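
For reference (a sketch rather than a recommendation – 15.0 is the Office 2013 version number, so adjust for your build), the sync window can also be pre-set via the policies hive in the registry, e.g. to cache just one month of mail:

reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode" /v SyncWindowSetting /t REG_DWORD /d 1

The same setting is exposed through Group Policy with the Office Administrative Templates.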

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data.

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.