The future Internet and the Intelligent Society


Last week, I spent an evening with the British Computer Society’s Internet Specialist Group, where I’d been asked to present on where I see the Internet developing in future – an always-on, connected vision of joined-up services to deliver greater benefit across society.

I started out with a brief retrospective of the last 42 years of Internet development and a look at the way we use the Internet today, before I introduced the concept of human-centric computing and, in particular, citizen-centric computing as featured in Rebecca MacKinnon’s TED talk about the need to take back the Internet. The talk shows why any future Internet needs to evolve in a citizen-centric manner, building a world where government and technology serve people, and it leads nicely into some of the concepts introduced in the Technology Strategy Board’s Future Internet Report.

After highlighting the explosion in the volume of data and the number of connected devices, I outlined the major enabling components for the future Internet – far more than “bigger pipes”, although we do need a capable access mechanism – along with infrastructure for the personalisation of cloud services and for machine-to-machine (M2M) transactions and, finally, for convergence that delivers a transformational change in both public and private service delivery.

Our vision is The Intelligent Society: bringing physical and virtual worlds into harmony to deliver greater benefit across society. As consumerisation takes hold, technology is becoming more accessible, even commoditised in places, for the delivery of on-demand, stateless services. Right now we have a “perfect storm” where a number of technologies are maturing and falling into alignment to deliver our vision.

These technologies break down into: the devices (typically mobile) and sensors (for M2M communications); the networks that join devices to services; and the digital utilities that provide on-demand computing and software resources for next-generation digital services. And digital utilities are more than just “more cloud” too – we need to consider interconnectivity between clouds, security provision and the compute power required to process big data to provide analytics and smart responses.
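To make the M2M piece a little more concrete, here’s a minimal sketch (in Python, using only the standard library) of a sensor device pushing a reading to a cloud service; the endpoint URL and payload fields are illustrative assumptions, not a real service.

```python
import json
import urllib.request

# Hypothetical M2M example: a sensor pushes a reading to a cloud-hosted
# "digital utility". The endpoint and field names are placeholders.
reading = {
    "device_id": "sensor-0042",
    "metric": "temperature_c",
    "value": 21.7,
}

request = urllib.request.Request(
    "https://example.com/api/telemetry",          # placeholder endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("Cloud service responded with HTTP", response.status)
```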

There’s more detail in the speaker notes on the deck (and I should probably write some more blog posts on the subject) but I finished up with a look at Technology Perspectives – a resource we’ve created to give a background context for strategic planning.

As we develop “the Internet of the future” we have an opportunity to deliver benefit, not just in terms of specific business problems, but on a wide scale that benefits entire populations. Furthermore, we’ve seen that changing principles and mindsets are creating the right conditions for these solutions to be incubated and developed alongside maturing technologies that are enabling this vision and making it a reality.

This isn’t sci-fi, this is within our reach. And it’s very exciting.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Can we process “Big Data” in the cloud?


I wrote last week about one of the presentations I saw at the recent Unvirtual conference and this post highlights another one of the lightning talks – this time on a subject that was truly new to me: Big Data.

Tim Moreton (@timmoreton), from Acunu, spoke about using big data in the cloud: making it “elastic and sticky” – and I’m going to try to get the key points across in this post. Let’s hope I get it right!

Essentially, “big data” is about collecting, analysing and serving massive volumes of data. As the Internet of Things becomes a reality, we’ll hear more and more about big data (being generated by all those sensors) but Tim made the point that it often arrives suddenly: all of a sudden you have a lot of users, generating a lot of data.

Tim explained that the key ingredients for managing big data are storage and compute resources, but it’s about more than that: it’s not just any storage or compute resource, because we need high scalability, high performance and low unit costs.

Compute needs to be elastic so that we can fire up (virtual) cloud instances at will to provide additional resources for the underlying platform (e.g. Hadoop). Spot pricing, such as that provided by Amazon, allows a maximum price to be set, to process the data at times when there is surplus capacity.
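As a rough illustration of the spot-pricing idea, here’s a sketch using the boto3 library to request EC2 spot capacity with a ceiling price; the region, AMI ID, instance type and price are placeholder assumptions, not recommendations.

```python
import boto3

# Sketch only: bid for EC2 spot instances with a maximum price, so a batch
# job (e.g. a Hadoop run) is processed when surplus capacity is cheap.
# The region, AMI, instance type and price are placeholder assumptions.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",        # maximum price (USD per hour) we are prepared to pay
    InstanceCount=4,         # size of the temporary processing cluster
    LaunchSpecification={
        "ImageId": "ami-00000000",      # placeholder machine image
        "InstanceType": "m1.large",     # placeholder instance type
    },
)

for spot_request in response["SpotInstanceRequests"]:
    print("Submitted spot request:", spot_request["SpotInstanceRequestId"])
```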

The trouble with big data and the cloud is virtualisation. Virtualisation is about splitting units of hardware to increase utilisation, with some overhead incurred (generally CPU or I/O) – essentially, multiple workloads are consolidated onto shared compute resources. Processing big data pulls in the opposite direction: it needs machines combined for massive parallelisation – and that doesn’t sit too well with cloud computing; at least, I’m not aware of too many non-virtualised elastic clouds!
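To show what “combining machines for massive parallelisation” looks like in miniature, here’s a toy map/reduce word count in Python; platforms such as Hadoop apply the same pattern but spread the map tasks across many machines rather than local processes.

```python
from collections import Counter
from multiprocessing import Pool

# Toy map/reduce illustration: the "cluster" here is just local processes,
# whereas Hadoop distributes the same pattern across many machines.

def map_count(lines):
    """Map step: count words within one chunk of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-chunk counts into a single result."""
    total = Counter()
    for counts in partials:
        total.update(counts)
    return total

if __name__ == "__main__":
    lines = [
        "big data in the cloud",
        "big data is sticky",
        "elastic compute in the cloud",
    ]
    chunks = [lines[0::2], lines[1::2]]           # naive split of the input
    with Pool(processes=2) as pool:
        partials = pool.map(map_count, chunks)    # map tasks run in parallel
    print(reduce_counts(partials))
```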

Then, there’s the fact that data is decidedly sticky.  It’s fairly simple to change compute providers but how do you pull large data sets out of one cloud and into another? Amazon’s import/export involves shipping disks in the post!

Tim concluded by saying that there is a balance to be struck. Cloud computing and big data are not mutually exclusive but it is necessary to account for the costs of storing, processing and moving the data. His advice was to consider the value (and the lock-in) associated with historical data, to process data close to its source, and to look for solutions that are built to span multiple datacentres.
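As a back-of-the-envelope way to account for those costs, here’s a small sketch comparing the cost of pulling a historical data set out of one cloud to process elsewhere against processing it close to where it lives; every figure is a made-up assumption, purely for illustration.

```python
# Back-of-the-envelope comparison; all figures are made-up assumptions,
# not real provider pricing.
dataset_tb = 50                    # assumed size of the historical data set
egress_cost_per_gb = 0.10          # assumed cost to move each GB out of the cloud
compute_hours = 200                # assumed hours of processing required
compute_cost_per_hour = 0.40       # assumed hourly cost of a suitable instance

processing_cost = compute_hours * compute_cost_per_hour
move_then_process = dataset_tb * 1024 * egress_cost_per_gb + processing_cost
process_in_place = processing_cost

print(f"Move the data to another cloud, then process: ${move_then_process:,.2f}")
print(f"Process the data close to its source:         ${process_in_place:,.2f}")
```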

[Update: for more information on “Big Data”, see Acunu’s Big Data Insights microsite]