Seven technology trends to watch 2017-2020


Just over a week ago, risual held its bi-annual summit at the risual HQ in Stafford – the whole company back in the office for a day of learning with a new format: a mini-conference called risual:NXT.

I was given the task of running the technical track – with 6 speakers presenting on a variety of topics covering all of our technical practices: Cloud Infrastructure; Dynamics; Data Platform; Unified Intelligent Communications and Messaging; Business Productivity; and DevOps – but I was also privileged to be asked to present a keynote session on technology trends. Unfortunately, my 35-40 minutes of content had to be squeezed into 22 minutes… so this blog post summarises some of the points I wanted to get across but really didn’t have time for.

1. The cloud was the future once

For all but a very small number of organisations, not using the cloud means falling behind. Customers may argue that they can’t use cloud services for regulatory or other reasons, but that’s rarely the case – even the UK Police have recently been given the green light (the blue light?) to store information in Microsoft’s UK data centres.

Don’t get me wrong – hybrid cloud is more than tactical. It will remain part of the landscape for a while to come… that’s why Microsoft now has Azure Stack to provide a means for customers to run a true private cloud that looks and works like Azure in their own datacentres.

Thankfully, there are fewer and fewer CIOs who don’t see the cloud forming part of their landscape – even if it’s just commodity services like email in Office 365. But we need to think beyond lifting and shifting virtual machines to IaaS and running email in Office 365.

Organisations need to transform their cloud operations because that’s where the benefits are – embrace the productivity tools in Office 365 (no longer just cloud versions of Exchange/Lync/SharePoint but a full collaboration stack) and look to build new solutions around advanced workloads in Azure. Microsoft is way ahead in the PaaS space – machine learning (ML), advanced analytics, the Internet of Things (IoT) – there are so many scenarios for exploiting cloud services that simply wouldn’t be possible on-premises without massive investment.

And for those who still think they can compete with the scale at which Microsoft (as well as Amazon and Google) operates, this video might provide some food for thought…

(and for a similar video from a security perspective…)

2. Data: the fuel of the future

I hate referring to data as “the new oil”. Oil is a finite resource. Data is anything but finite! It is a fuel though…

Data is what provides an economic advantage – there are businesses with data and businesses without. Data is the business currency of the future. Think about it: Facebook and Google are entirely based on data that’s freely given up by users (remember, if you’re not paying for a service, you are the product). Amazon wouldn’t be where it is without data.

So, thinking about what we do with that data: the 1st wave of the Internet was about connecting computers, the 2nd was about connecting people, and the 3rd is about connecting devices.

Despite what you might read, IoT is not about connected kettles/fridges. It’s not even really about home automation with smart lightbulbs, thermostats and door locks. It’s about gathering information from billions of sensors out there, then taking that data, using it to make intelligent decisions, and applying those decisions in the real world. Artificial intelligence and machine learning feed on data – they are yin and yang to each other: we use data to train algorithms, then we use the algorithms to process more data.
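
To make that cycle concrete, here’s a minimal sketch in Python, using scikit-learn and entirely invented sensor readings – purely illustrative of the train-then-apply loop, not any particular product:

```python
# Minimal sketch of the data -> model -> decision loop described above.
# The sensor readings and failure labels are entirely made up.
from sklearn.linear_model import LogisticRegression

# Historical sensor data: [temperature, vibration] and whether the
# machine later failed (1) or kept running (0).
X_train = [[21.0, 0.2], [22.5, 0.3], [35.1, 1.8], [36.4, 2.1]]
y_train = [0, 0, 1, 1]

# Use data to train an algorithm...
model = LogisticRegression()
model.fit(X_train, y_train)

# ...then use the algorithm to act on new data as it streams in.
new_reading = [[34.2, 1.9]]
if model.predict(new_reading)[0] == 1:
    print("Predicted failure - schedule maintenance")
```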

The Microsoft Data Platform is about analytics and data driving a new wave of insights and opening up possibilities for new ways of working.

James Watt’s 18th Century steam engine led to an industrial revolution. The intelligent cloud is today’s version – moving us to the intelligence revolution.

3. Blockchain

Bitcoin is just one implementation of something known as the blockchain – in this case, as a digital currency.

But the blockchain is not just for monetary transactions – it can be used for anything transactional. Blockchain is about a distributed ledger: effectively, it allows parties to trust one another without knowing each other. The ledger is a record of every transaction, signed and tamper-proof.

The magic of the blockchain is that each block includes a cryptographic hash of the block before it, so as the chain gets longer, tampering with any earlier record becomes ever harder to hide – effectively, the more the chain is used, the stronger its integrity guarantees become.
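
To see why tampering gets harder as the chain grows, here’s a toy hash-chained ledger in Python. It’s a sketch of the core idea only – a real blockchain adds digital signatures, distribution across many parties and a consensus mechanism:

```python
# Toy hash-chained ledger: each block records the hash of the previous
# block, so altering any past transaction invalidates every later block.
import hashlib
import json

def block_hash(transaction, previous_hash):
    payload = json.dumps({"transaction": transaction,
                          "previous_hash": previous_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(transaction, previous_hash):
    return {"transaction": transaction,
            "previous_hash": previous_hash,
            "hash": block_hash(transaction, previous_hash)}

def verify(chain):
    """Recompute every hash; tampering anywhere shows up as a mismatch."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["transaction"],
                                       block["previous_hash"]):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(verify(chain))                             # True

chain[1]["transaction"] = "Alice pays Bob 500"   # try to rewrite history...
print(verify(chain))                             # False - tampering detected
```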

(Read more in Jamie Skella’s “A blockchain explanation your parents could understand”.)

Blockchain is seen as strategic by Microsoft and by the UK government. It’s early days, but expect it to appear wherever people care about integrity and data resilience – databases, or anything transactional, can be signed with a blockchain.

A group of livestock farmers in Arkansas is using blockchain technology so customers can tell where their dinner comes from – tracing products from ‘farm to fork’, aiming to provide consumers with information about the origin and quality of the meat they buy.

Blockchain is finding new applications in the enterprise and Microsoft has announced the Coco Framework to improve the performance, confidentiality and governance characteristics of enterprise blockchain networks (read more in Simon Bisson’s article for InfoWorld). There’s also Blockchain as a Service (in Azure) – and you can find out more about Microsoft’s plans by reading up on “Project Bletchley”.

(BTW, Bletchley is a town in Buckinghamshire that’s now absorbed into Milton Keynes. Bletchley Park was the primary location of the UK Government’s wartime code-cracking efforts that are said to have shortened WW2 by around 2 years. Not a bad name for a cryptographic technology, hey?)

4. Into the third dimension

We’ve had the ability to “print” in three dimensions for a while, but now 3D is going further: we’re taking the physical world into the virtual world and augmenting it with information.

Microsoft doesn’t like the term augmented reality (because it’s being used for putting silly faces on photos) and has coined the term mixed reality to describe using untethered computing devices to create a seamless overlap between the physical and virtual worlds.

To make use of this, we need to be able to scan and render 3D images, then move them into a virtual world. 3D is built into the next Windows 10 release (the Fall Creators Update, due on 17 October 2017). This will bring Paint 3D, a 3D Gallery, and View 3D for our phones – so we can scan any object and import it into a virtual world. Given the adoption rates of new Windows 10 releases, that puts 3D in the hands of millions of PC users.

This Christmas will see lots of consumer headsets on the market, and mixed reality will really take off after that. Microsoft is way ahead on the plumbing – built while we weren’t looking. They held HoloLens back so it could be big in business (so that it wasn’t a solution without a problem); now it can be applied to field-worker scenarios and to visualising things before they are built.

To give an example: recently, I had a builder quote for a loft extension at home. He described how the stairs would work and sketched a room layout – but what if I could have visualised it in a headset? Then imagine picking the paint, sofas, furniture, wallpaper, etc.

The video below shows how Ford and Microsoft have worked together to use mixed reality to shorten and improve product development:

5. The new dawn of artificial intelligence

All of the legends of AI are set by sci-fi (Metropolis, 2001: A Space Odyssey, Terminator). But AI is not about killing us all! Humans vs. machines? IBM’s Deep Blue beating humans at chess, Watson winning at Jeopardy!, then Google’s AlphaGo taking on Go. AI heading into the economy, displacing jobs, automating business processes and economic activity. Mass unemployment?

Let’s take a more optimistic view! It’s not about sentient/thinking machines or giving human rights to machines. That stuff is interesting but we don’t know where consciousness comes from!

AI is a toolbox of high-value tools and techniques. We can apply these to problems and appreciate the fundamental shift from programming machines to machines that learn.

AI is not about programming logical steps – we can’t do that when we’re recognising images, speech, etc. Instead, our inspiration is biology: neural networks. Using maths to train complex layers of neural networks is what led to deep learning.
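
As a minimal illustration of “maths training layers of neurons”, here’s a tiny two-layer network learning XOR in plain NumPy – a toy example, nothing like deep learning at real scale:

```python
# A tiny neural network trained on XOR with gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: two layers of weighted sums and non-linearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge the weights to reduce the error.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))   # approaches [[0], [1], [1], [0]]
```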

Image recognition was “magic” a few years ago, but now it’s part of everyday life. Nvidia’s share price is growing massively due to the demand for GPUs for deep learning and autonomous vehicles. And Microsoft is democratising AI (in its own applications – with an intelligent cloud, intelligent agents and bots).

NVIDIA Corporation stock price growth fuelled by demand for GPUs

So, about those bots…

A bot is a web app with a conversational user interface. We use them because natural language processing (NLP) and AI are here today – and because messaging apps rule the world. With bots, we can use human language as a new user interface; bots are the new apps – our digital assistants.
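
Stripped of the plumbing, the pattern looks something like the sketch below – with crude keyword matching standing in for real NLP, and nothing to do with the actual Microsoft Bot Framework APIs:

```python
# Minimal conversational bot loop. The "intent" matching here is a crude
# keyword lookup standing in for real natural language processing.
INTENTS = {
    "book a meeting": lambda: "What time would you like the meeting?",
    "weather": lambda: "It looks cloudy in Stafford today.",
    "help": lambda: "Try asking me to book a meeting, or about the weather.",
}

def reply(utterance: str) -> str:
    text = utterance.lower()
    for phrase, handler in INTENTS.items():
        if phrase in text:
            return handler()
    return "Sorry, I didn't understand that."

while True:
    user = input("You: ")
    if user.lower() in ("quit", "exit"):
        break
    print("Bot:", reply(user))
```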

We can employ bots in several scenarios today – including customer service and productivity – and this video is just one example, with Microsoft Cortana built into a consumer product:

The device is similar to Amazon’s popular Echo smart speaker, and a skills kit is used to teach Cortana about an app: you ask the skill, by name, to do something. The beauty of Cortana is that it’s cross-platform, so the skill can show up wherever Cortana does. More recently, Amazon and Microsoft have announced Cortana-Alexa integration (meanwhile, Siri continues to frustrate…).

AI is about augmentation, not replacement. It’s true that bots may replace humans for many jobs – but new jobs will emerge. And AI is already here; it’s mainstream. We use recommendations for music, playlists and more; we recognise people, emotions, etc. in images. We already use AI every day…

6. From silicon to cells

Every cell has a “programme” – DNA. And researchers have found that they can write code in DNA and control proteins/chemical processes. They can compile code to DNA and execute, creating molecular circuits. Literally programming biology.

This is absolutely amazing. Back when I was an MVP, I got the chance to see Microsoft Research talk about this in Cambridge. It blew my mind. That was in 2010. Now it’s getting closer to reality and Microsoft and the University of Washington have successfully used DNA for storage:

The benefits of DNA are that it’s very dense and that it lasts for thousands of years, so it can always be read. And we’re just storing 0s and 1s – that’s much simpler than what DNA stores in nature.
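
The encoding idea is surprisingly simple: with four bases, each nucleotide can carry two bits. Here’s a naive sketch in Python – the real Microsoft/University of Washington work adds error correction and avoids sequences that are hard to synthesise and read:

```python
# Naive DNA storage encoding: two bits per nucleotide (00->A, 01->C,
# 10->G, 11->T). Real schemes add redundancy and error correction.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)            # CGGACGCCCGTACGTACGTT
print(decode(strand))    # b'hello'
```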

7. Quantum computing

With massive data storage… the next step is faster computing – that’s where Quantum computing comes in.

I’m a geek and this one is tough to understand… so here’s another video:

https://youtu.be/doNNClTTYwE

Quantum computing is starting to gain momentum. Dominated by maths (quantum mechanics), it requires thinking in equations rather than translating things into physical objects in your head. It has concepts like superposition (multiple states at the same time) and entanglement. Instead of gates being turned on/off, it’s about controlling particles with nanotechnology.

Where a classical computer steps through bit states one clock cycle at a time, one quantum bit (a qubit) holds multiple states simultaneously. Qubits can be used to solve difficult problems (the RSA 2048 challenge problem would take a billion years on a classical supercomputer but just 100 seconds on a 250-qubit quantum computer). This can be applied to encryption and security, health and pharma, energy, biotech, environment, materials and engineering, AI and ML.
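
Superposition is easier to see in the maths than in prose. Here’s a one-qubit state vector simulated with NumPy – a classical toy that illustrates the linear algebra, not how quantum hardware actually works:

```python
# Simulating a single qubit's state vector to illustrate superposition.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)    # the |0> state

# The Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probabilities = np.abs(state) ** 2        # Born rule: |amplitude|^2
print(probabilities)                      # [0.5 0.5] - both states at once

# Measurement collapses the superposition to one classical outcome.
outcome = np.random.choice([0, 1], p=probabilities)
print(f"measured |{outcome}>")
```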

There’s a race for quantum computing hardware taking place and China sees this as a massively strategic direction. Meanwhile, the UK is already an academic centre of excellence – now looking to bring quantum computing to market. We’ll have usable devices in 2-3 years (where “usable” means that they won’t be cracking encryption, but will have initial applications in chemistry and biology).

Microsoft Research is leading a consortium called Station Q and, later this year, Microsoft will release a new quantum computing programming language, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.

Predicting the future?

Amazon, Google and Microsoft each invest over $12bn p.a. in R&D. As demonstrated in the video above, their datacentres are not something that many organisations can afford to build, but they will drive down the cost of computing. That drives down the cost for the rest of us to rent cloud services, which means more data, more AI – and the cycle continues.

I’ve shared 7 “technology bets” – and there are others I haven’t covered, like the use of graphene. My list is very much influenced by my work with Microsoft technologies and services. We can’t always predict the future, but all of these are real… the only bet is how big they will be. Some are mainstream, some are up-and-coming – and some will literally change the world.

Credit: Thanks to Rob Fraser at Microsoft for the initial inspiration – and to Alun Rogers (@AlunRogers) for helping place some of these themes into context.

Virtual Worlds (@stroker at #DigitalSurrey)


Last night saw another Digital Surrey event which, I’ve come to find, means another great speaker on a topic of interest to the digerati in and around Farnham/Guildford (although I also noticed that I’m not the only “foreigner” at Digital Surrey with two of the attendees travelling from Brighton and Cirencester).

This time the speaker was Lewis Richards, Technical Portfolio Director in the Office of Innovation at CSC, and the venue was CSC’s European Innovation Centre.  Lewis spoke with passion and humour about the development of virtual worlds, from as far back as the 18th century. As a non-gamer (I do have an Xbox, but I’m not heavily into games), I have to admit quite a lot of it was all new to me, but fascinating nevertheless, and I’ve drawn out some of the key points in this post (my edits in []) – or you can navigate the Prezi yourself – which Lewis has very kindly made public!

  • The concept of immersion (in a virtual world) has existed for more than 200 years:
    • In 1909, E.M. Forster wrote The Machine Stops, a short story in which everybody is connected, with a universal book for reference purposes, and communications concepts not unlike today’s video conferencing – this was over a hundred years ago!
    • In [1957], Morton Heilig invented the Sensorama machine, which allowed viewers to enter a world of virtual reality with a combination of film and mechanics for a 3D, stereo experience with seat vibration, wind in the hair and smell to complete the illusion.
    • The first heads-up displays and virtual reality headsets were patented in the 1960s (and are not really much more usable today).
    • In 1969, ARPANET was created – the foundation of today’s Internet and the world wide web [in 1990].
    • In [1974], the roleplay game Dungeons and Dragons was created (in book form), teaching people to empathise with virtual characters (roleplay is central to the concept of virtual worlds); the holodeck (Rec Room) was first referenced in a Star Trek cartoon in 1974; and, back in 1973, Myron Krueger had coined the term artificial reality [Krueger created a number of virtual worlds in his work (Glowflow, Metaplay, Psychic Space, Videoplace)].
  • Lewis also showed a video of a “B-Spline Control” which is not unlike the multitouch [and Kinect tracking] functionality we take for granted today – indeed, pretty much all of the developments from the last 40-50 years have been iterative improvements – we’ve seen no real paradigm shifts.
  • 1980s developments included:
    • William Gibson coined the term cyberspace in his short stories (featured in Omni magazine).
    • Disney’s Tron, a film which still presents a level of immersion to which we aspire today.
    • The Quantum Link network service, featuring the first multiplayer network game (i.e. not just one player against a computer).
  • In the 1990s, we saw:
    • Sir Tim Berners-Lee‘s World Wide Web [possibly the biggest step forward in online information sharing, bringing to life E.M. Forster’s universal book].
    • The first use of the term avatar for a digital manifestation of oneself (in Neal Stephenson’s Snow Crash).
    • Virtual reality suits
    • Sandboxed virtual worlds (AlphaWorld)
    • Strange Days, with the SQUID (Super-conducting Quantum Interference Device) receptor – still looking for immersion – getting inside the device – and The Matrix was still about “jacking in” to the network.
    • Virtual cocoons (miniaturised, electronic, versions of the Sensorama – but still too intrusive for mass market adoption)
  • The new millennium brought Second Life (where, for a while, almost every large corporation had an island) and World of Warcraft (WoW) – a behemoth in terms of revenue generation – but virtual worlds have not really moved forward. Social networking blinded us and took the mass market along a different path for collaboration; meanwhile, kids still play games and virtual reality is occurring – it’s just not in the mainstream.
  • Lewis highlighted how CSC uses virtual worlds for collaboration; how they can also be used as training aids; how WoW encouraged team working and leadership; and how content with physical value may be created inside virtual worlds (leading to virtual crime).
  • Whilst virtual reality is not really any closer as a consumer concept than it was in 1957, there are some real-world uses (such as virtual reality immersion being used to take away feelings of pain whilst burns victims receive treatment).
  • Arguably, virtual reality has become, just, “reality” – everywhere we go we can communicate and have access to our “stuff”. We don’t have to go to a virtual world, but Lewis asks if we will ever give up the dream of immersion – of “jacking in” [to the matrix].
  • What is happening is augmented reality – using our phones, tablets, etc. to interact between physical and virtual worlds. Lewis also showed some amazing concepts from Microsoft Research, like OmniTouch, using a short-range depth camera and a pico projector to turn everyday objects into a surface to interact with; and Holodesk for direct 3D interactions.
  • Lewis explained that virtual worlds are really a tool – the innovation is in the technology and the practical uses are around virtual prototyping, remote collaboration, etc. [like all innovations, it’s up to us to find a problem, to which we can apply a solution and derive value – perhaps virtual worlds have tended to be a technology looking for a problem?]
  • Lewis showed us CSC’s Teleplace, a virtual world where colleagues can collaborate (e.g. for virtual bid rooms and presentations), saving a small fortune in travel and conference call costs but, just to finish up with a powerful demo, he asked one of the audience for a postcode, took the Google Streetview URL and pasted it into a tool called Blue Mars Lite – at which point his avatar could be seen running around inside Streetview. Wow indeed! That’s one virtual world in which I have to play!