Some thoughts on modern communications and avoiding the “time-Hoover”


Last week I was reading an article by Shelley Portet looking at how poor productivity and rudeness are influenced by technology interruptions at work. As someone who's just managed to escape from email jail yet again (actually, I'm on parole – my inbox may finally be at zero but I still have hundreds of items requiring action), I have to say that, despite the best intentions, experience shows that I'm a repeat offender, a habitual email mismanager – and email is just the tip of the proverbial iceberg.

Nowadays email is just one of many forms of communication: there's instant messaging; "web 2.0" features on intranet sites (blogs, wikis, discussion forums); our internal social networking platform; business and personal use of external social networks (Twitter, LinkedIn, SlideShare, YouTube, Facebook… the list goes on) – so how can we prepare our knowledge workers to deal with this barrage of interruptions?

There are various schools of thought on email management and I find that Merlin Mann's Inbox Zero principles work well (see the video of his Google Tech Talk, using these slides on action-based email – or flick through the Michael Reynolds version for the highlights), as long as one always manages to process to zero (that's the tricky part that lands me back in email jail).

The trouble is that Inbox Zero only deals with the manifestation of the problem, not the root cause: people. Why do we send these messages? And how do we act on them?

A couple of colleagues have suggested recently that the trouble with email is that people confuse sending an email with completing an action – as if, somehow, the departure of the message from someone's outbox on its way to someone else's inbox implies a transfer of responsibility. Except that it doesn't – there are many demands on my colleagues' time and it's up to me to ensure that we're all working towards a common goal. I can help by making my expectations clear; I can think carefully before carbon copying or replying to all; I can make sure I'm brief and to the point (but not ambiguous) – but those are all items of email etiquette. They don't actually help to reduce the volume of messages sent and received. Incidentally, I'm using email as an example here – many of the issues are common to any communications medium (stretching back to handwritten letters and typed memos, and forward to social networking) but, ultimately, I'm trying to do one of three things:

  • Inform someone that something has happened, will soon happen, or otherwise communicate something on a need-to-know basis.
  • Request that someone take action on something.
  • Confirm something that has been agreed via another medium (perhaps a telephone call), often for audit purposes.

I propose two courses of action, both of which involve setting the expectations of others:

  1. The first is to stop thinking of every message as requiring a response. Within my team at work, we have some unwritten rules: gratitude is implied within the team (so we don't fill each other's inboxes with messages that say "thank you"); carbon copy means "for information"; and single-line emails can be dealt with entirely in the subject heading.
  2. The second can be applied far more widely and that's the concept of "service level agreements" for corporate communications. I don't mean literally, of course, but regaining productivity has to be about controlling the interruptions. I suggest closing Outlook. Think of it as an email/calendar client – not the place in which to spend one's day – and the "toast" that pops up each time a message arrives is a distraction. Even having the application open is a distraction. Dip in three times a day, five times a day, every hour, or however often is appropriate, but emails should neither require nor expect an immediate response. Then there's instant messaging: the name "instant" suggests the response time, but presence is a valuable indicator – if my presence is "busy", then I probably am. Try to contact me if you like but don't be surprised if I ignore it until a better time. Finally, there's social networking, which is a great aid both to influencing others and to keeping abreast of developments, but which can also be what my wife would call a "time-Hoover" – so don't even think that you can read every message; just dip in from time to time, join the conversation, then leave again.

Ultimately, neither of these proposals will be successful without cultural change. This issue is not unique to any one company, but the only way I can think of to change the actions and thoughts of others is to lead by example… so, starting today, I think I might give them a try.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

An alternative enterprise desktop


Earlier this week, I participated in an online event that looked at the use of Linux (specifically Ubuntu) as an alternative to Microsoft Windows on the enterprise desktop.

It seems that every year is touted as the year of Linux on the desktop – so why hasn’t it happened yet? Or maybe 2011 really is the year of Linux on the desktop and we’ll all be using Google Chrome OS soon. Somehow I don’t think so.

You see, the trouble with any of the "operating system wars" arguments is that they miss the point entirely. There's a triad of people, process and technology at stake – and the operating system is just one small part of one of those elements. It's the same when people start to compare desktop delivery methods – thick, thin, virtualised, whatever – it's how you manage the desktop that counts.

From an end-user perspective, most people don't really care whether their PC runs Windows, Linux, or whatever-the-next-great-thing-is. What they require (and what the business requires – because salaries are expensive) is a system that is "usable". Usability is in itself a subjective term, but it generally includes a large degree of familiarity – familiarity with the systems that they use outside work. Just look at the resistance to major user interface changes like the Microsoft Office "ribbon" – now think what happens when you change everything that users know about using a PC. End users also want something that works with everything else they use (i.e. an integrated experience, rather than jumping between disparate systems). And those who are motivated by the technology don't want to feel that there is a two-tier system whereby some people get a fully-featured desktop experience and others get an old, cascaded PC with a "light" operating system on it.

From an IT management standpoint, we want to reduce costs. Not just hardware and software costs but the costs of support (people, process and technology). A “free” desktop operating system is just a very small part of the mix; supporting old hardware gets expensive; and the people costs associated with major infrastructure deployments (whether that’s a virtual desktop or a change of operating system) can be huge. Then there’s application compatibility – probably the most significant headache in any transformation. Yes, there is room for a solution that is “fit for purpose” and that may not be the same solution for everyone – but it does still need to be manageable – and it needs to meet all of the organisation’s requirements from a governance, risk and compliance perspective.

Even so, the days of allocating a Windows PC to everyone in an effort to standardise every single desktop device are starting to draw to a close. IT consumerisation is bringing new pressures to the enterprise – not just new device classes but also a proliferation of operating system environments. Cloud services (for example, consuming software as a service) are a potential enabler – helping to get over the hurdles of application compatibility by boiling everything down to the lowest common denominator (a browser). The cloud is undoubtedly here to stay and will certainly evolve but even SaaS is not as simple as it sounds, with multiple browser choices, extensions, plug-ins, etc. It seems that, time and time again, it's the same old legacy applications (generally specified by business IT functions, not corporate IT) that make life difficult and prevent the CIO from achieving the utopia that they seek.

2011 won’t be the year of Linux on the desktop – but it might just be the year when we stopped worrying about standardisation so much; the year when we accepted that one size might not fit all; and the year when we finally started to think about applications and data, rather than devices and operating systems.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

Does Microsoft Kinect herald the start of a sensor revolution?


Last week, Microsoft officially announced a software development kit for the Kinect sensor. Whilst there's no commercial licensing model yet, it sounds like the start of a journey towards official support for gesture-based interaction with Windows PCs.

There's little doubt that Kinect, Microsoft's natural user interface for the Xbox games console, has been phenomenally successful. It's even been recognised as the fastest-selling consumer device on record by Guinness World Records. I bought one for my family (and I'm not really a gamer) – but before we go predicting the potential business uses for this technology, it's probably worth stopping and taking stock. Isn't this really just another technology fad?

Kinect is not the first new user interface for a computer – I’ve written so much about touch-screen interaction recently that even I’m bored of hearing about tablets! We can also interact with our computers using speech if we choose to – and the keyboard and mouse are still hanging on in there too (in a variety of forms). All of these technologies sound great, but they have to be applied at the right time: my iPad’s touch screen is great for flicking through photos, but an external keyboard is better for composing text; Kinect is a fantastic way to interact with games but, frankly, it’s pretty poor as a navigational tool.

What we're really seeing here is a proliferation of sensors. Keyboard, mouse, trackpad, microphone, camera(s), GPS, compass, heart-rate monitor – the list goes on. Kinect is really just an advanced, and very consumable, sensor.

Interestingly, sensors typically start out as separate peripherals and, over time, they become embedded into devices. The mouse and keyboard morphed into a trackpad and a (smaller) keyboard. Microphones and speakers were once external but are now built into our personal computers. Our smartphones contain a wealth of sensors, including GPS, cameras and more. Will we see Kinect built into PCs? Quite possibly – after all, it's really just a couple of depth sensors and a webcam!

What's really exciting is not Kinect per se but what it represents: a sensor revolution. Much has been written about the Internet of Things but imagine a dynamic network of sensors where the nodes can automatically re-route messages based on an awareness of the other nodes. Such networks could be quickly and easily deployed (perhaps even dropped from a plane) and would be highly resilient to accidental or deliberate damage because of their "self-healing" properties. Another example of sensor use could be an agricultural scenario, with sensors automatically monitoring the state of the soil, moisture, etc. and applying nutrients or water. We're used to hearing about RFID tags in retail and logistics but those really are just the tip of the iceberg.
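To make the "self-healing" idea concrete, here's a minimal sketch in Python (the topology and node names are invented for illustration, not taken from any real deployment): each node knows only its immediate neighbours, routes are discovered over whatever links are currently alive, and losing a node simply means the next route is computed around it.

```python
from collections import deque

# A toy sensor mesh: each node knows only its immediate neighbours.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(source, dest, topology):
    """Breadth-first search over whatever links are currently alive."""
    frontier = deque([[source]])
    seen = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dest:
            return path
        for neighbour in topology.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # destination unreachable

print(route("A", "E", links))    # a shortest path, e.g. ['A', 'B', 'D', 'E']

# Simulate damage: node B is destroyed, so the mesh routes around it.
damaged = {n: nbrs - {"B"} for n, nbrs in links.items() if n != "B"}
print(route("A", "E", damaged))  # ['A', 'C', 'D', 'E']
```

A real sensor network would discover its neighbours by radio and re-run something like this continuously, but the principle – no central map, just local knowledge and recomputation – is the same.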

Exciting times…

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was jointly authored with Ian Mitchell.]

Adapt, evolve, innovate – or face extinction


I’ve written before (some might say too often) about the impact of tablet computers (and smartphones) on enterprise IT. This morning, Andy Mulholland, Global CTO at Capgemini, wrote a blog post that grabbed my attention, when he posited that tablets and smartphones are the disruptive change lever that is required to drive a new business technology wave.

In the post, he highlighted the incredible increase in smartphone and tablet sales (also the subject of an article in The Economist which looks at how Dell and HP are reinventing themselves in an age of mobile devices, cloud computing and “verticalisation”), that Forrester sees 2011 as the year of the tablet (further driving IT consumerisation), and that this current phase of disruption is not dissimilar to the disruption brought about by the PC in the 1980s.

Andy then goes on to cite a resistance to user-driven adoption of [devices such as] tablets and XaaS [something-as-a-service], but it seems to me that it's not CIOs who are blocking either tablets/smartphones or XaaS.

CIOs may have legitimate concerns about security, business case, or unproven technology – i.e. where is the benefit? And for which end-user roles? – but many of them have the imagination to transform the business; they just have other programmes that are taking priority.

With regards to tablets, I don't believe it's the threat to traditional client-server IT that's the issue; it's more that the current tranche of tablet devices is not yet suitable to replace PCs. As for XaaS (effectively cloud computing), somewhat ironically, it's some of the IT service providers who have the most to lose from the shift to the cloud: firstly, there's the issue of "robbing Peter to pay Paul" – eroding existing markets to participate in this brave new world of cloud computing; secondly, it forces a move from a model that provides a guaranteed revenue stream to an on-demand model – one that involves prediction, and uncertainty.

Ultimately it's about evolution – as an industry we all have to evolve (and innovate) to avoid becoming irrelevant, especially as other revenue streams trend towards commoditisation.

Meanwhile, both customers and IT service providers need to work together on innovative approaches that allow us to adapt and use technologies (of which tablets and XaaS are just examples) to disrupt the status quo and drive through business change.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

IPv6 switchover – what should CIOs do (should they even care)?


It's not often that something as mundane as a communications protocol hits the news, but last week's exhaustion of Internet Protocol version 4 (IPv4) addresses has been widely covered by the UK and Irish media. Some are likening the "IPocalypse" to the Year 2000 bug. Others say it's a non-issue. So what do CIOs need to consider in order to avoid being presented with an unexpected bill for urgent network upgrades?

Focus have produced an infographic which explains the need for an IPv6 migration but, to summarise the main points:

  • The existing Internet address scheme is based on around 4 billion Internet Protocol version 4 (IPv4) addresses, allocated in blocks to Regional Internet Registries (RIRs) and eventually to individual Internet Service Providers (ISPs).
  • A new, and largely incompatible, version of the Internet Protocol (IPv6) allows for massive growth in the number of connected devices, with 340 undecillion (2^128) addresses – see the quick calculation after this list.
  • All of the IPv4 addresses have now been allocated to the RIRs and, at some point in the coming months, the availability of new IPv4 addresses will dry up.
  • Even though there are huge numbers of unused addresses, they have already been allocated to companies and academic institutions. Some have returned excess addresses voluntarily; others have not.
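To put those numbers in perspective, here's a quick back-of-the-envelope calculation – a sketch using Python's standard ipaddress module, with the well-known IPv6 documentation prefix as an example:

```python
import ipaddress

# Sizes of the two address spaces
ipv4_total = 2 ** 32    # 4,294,967,296 addresses
ipv6_total = 2 ** 128   # roughly 3.4 x 10^38 addresses

print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:,}")
# Every single IPv4 address could be replaced by 2^96 IPv6 addresses
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")

# Even a single /32 (the size of allocation typically made to an ISP) is vast:
net = ipaddress.ip_network("2001:db8::/32")  # the IPv6 documentation prefix
print(f"{net.num_addresses:,}")              # 2^96 addresses in one /32
```

In other words, the smallest routine IPv6 allocation dwarfs the entire IPv4 Internet.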

The important thing to remember is that the non-availability of IPv4 addresses doesn’t mean that the Internet will suddenly stop working. Essentially, new infrastructure will be built on IPv6 and we’re just entering an extended period of transition. Indeed, in Asia (especially Japan and China), IPv6 adoption is much more mature than in Europe and America.

It's also worth noting that there is a range of technologies that mitigate the need for a full migration to IPv6, including Network Address Translation (NAT) and tunnels that allow hybrid networks to be created over the same physical infrastructure. Indeed, modern operating systems enable IPv6 by default, so many organisations are already running IPv6 on their networks – but, whilst there are a number of security, performance and scalability improvements in IPv6, there can be negative impacts on security too if it's implemented badly.
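One simple way to see that dual-stack behaviour for yourself is to ask the operating system's resolver for every address family it knows about for a given name. A minimal sketch in Python (the hostname is purely illustrative):

```python
import socket

# AF_UNSPEC asks for both A (IPv4) and AAAA (IPv6) records where they exist,
# which is exactly the hybrid, dual-stack situation described above.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, socket.AF_UNSPEC, socket.SOCK_STREAM):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```

On a dual-stack machine this typically prints both an IPv6 and an IPv4 address for the same host – and applications that hard-code one family or the other are precisely where migration problems tend to surface.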

Network providers are actively deploying IPv6 (as are some large organisations) but it's likely to be another couple of years before many UK and Irish enterprises consider widespread deployment. Ironically, the network side is relatively straightforward; the challenge is with the hardware appliances and applications. The implications of a 100% replacement are massive; however, a hybrid approach is workable and will be the way IPv6 is deployed in the enterprise for many years to come.

So, should CIOs worry about IPv6? Well, once the last IPv4 addresses are allocated, any newly formed organisation, or any organisation that requires additional address space, will only be accessible over the new protocol. Even so, it will be a gradual transition and the key to success is planning, even if implementation is deferred for a while:

“The move to IPv6 will take a long time – ten years plus, with hybrid networks being the reality in the interim. We are already seeing large-scale adoption across the globe, particularly across Asia. Telecommunication providers have deployed backbones and this adoption is growing; enterprise customers will follow. Enterprises need to carefully consider migrations: not all devices in the network can support IPv6 today; it is not uncommon for developers to have ‘hard-coded’ IPv4 addresses and fields in applications; and there are also security implications with how hybrid networks are deployed, with the potential to bypass security and firewall policies if not deployed correctly.” [John Keegan, Chief Technology Officer, Fujitsu UK and Ireland Network Solutions Division]

As for whether IPv6 is the new Y2K: I guess it is, in the sense that it's something that's generating a lot of noise and is likely to result in a lot of work for IT departments but, ultimately, it's unlikely to result in a total infrastructure collapse.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was written with assistance from John Keegan.]

Tablets: How will they impact your enterprise IT?


It seems that last week’s Consumer Electronics Show (CES) can be summed up with one word:

“Tablet”.

Even though Steve Ballmer, CEO at Microsoft, demonstrated an HP “slate” running Windows in last year’s CES keynote, Apple managed to steal Microsoft’s thunder with the iPad and this year’s show saw just about every PC manufacturer (and Fujitsu is no exception) preparing to launch their own model(s).

Tablet computers aren't new but Apple's iPad has revitalised the market – I recently wrote about this when I examined the potential impact on desktop managed services – and one report I read suggested that there were over 80 tablets launched at CES!

For many years, CIOs have been standardising end-user computing environments on Intel x86 hardware and Windows operating systems, with appropriate levels of lockdown and control – which makes it all the more interesting to see the variation in hardware, form factor and operating system in these new devices.

Our IT departments will struggle to support this plethora of devices, yet IT consumerisation will force us to. But this isn't a new phenomenon – ten years ago I was working in an organisation that was trying to standardise on Windows CE devices, as they provided the best application support platform for the business, whilst the execs were asking for BlackBerrys so they could access email on the move.

Guess what happened? We ended up with both.

And that's what will happen with next-generation tablets, just as it did for smartphones. To some extent, it's true for PCs too – the hardware and the operating system have become commoditised – and our task is to ensure that we can present the right data and the right applications to the right people, at the right time, on the right device.

Which brings me back around to my opening point: tablets featured heavily at CES but they are just one part of the IT mix. Will your organisation be supporting their use in the enterprise? And do you see them as serious business devices, or are they really just executive toys?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]

The generation game


Creating innovative ways to align IT with business needs is one of the main functions of our office. Ever since the early days of computing, IT departments have been trying to close the gap between end-user requirements and service provision. Now, changing attitudes to work are a frequent topic of conversation and, whereas we're often talking about technological change, a number of recent events have highlighted the wider social impact.

In the last 20 years, we’ve seen some pretty significant advancements – not only in terms of technology but in attitudes too:

  • Do you remember the typing pool? In 2011, the idea of a room full of secretarial staff employed to type memos dictated by management seems absurd!
  • There was a time when Microsoft’s vision of a computer on every desk and in every home seemed like science fiction but today we use a plethora of personal computing devices.
  • In the early 1990s, radio pagers were used for communications and only the duty manager had a mobile. Now our smartphones have more computing power than the PCs of that era.
  • Bulletin boards, accessed via modems, and Ceefax, accessed through the television, have been replaced by the world wide web – itself transformed beyond recognition into today's massive content distribution system that is becoming embedded in many elements of our lives.

Even this blog post is an example of changing attitudes: it’s informal but written for a customer audience; unedited by the marketing department but with clear expectations as to what is acceptable to discuss in a public forum. It’s unthinkable that we would have been able to communicate like this more than a few years ago.

Generation Y, or the millennial generation, as we term those born between 1980 and 2000, is witnessing ever-increasing technological change and expects major social changes too. Horizontal silos are forming within organisations, loosely based around generations of employees who think and communicate in entirely different ways. Take these examples:

  • Formality. We work in a very informal society and it’s rare to refer to colleagues, senior management, or even customers, using their title and surname.
  • Speed. We expect results faster, and accuracy is often less important than speed of access to information (we can refine the details later – but need to make decisions now!).
  • Quality. Whereas previous generations expected a device or product to last for years, younger generations are happy to replace it with a newer model much more quickly. This affects buying patterns, but also the standards to which goods and services are produced.
  • Communications. Whilst older generations will send a birthday card, generation Y is happy with a message on Facebook. Baby boomers and generation X may communicate by phone or e-mail but generation Y uses text messages, instant messaging and social networks.
  • Familiarity. Whereas baby boomers may like to see pages with detailed information about a given topic, generation Y is happy with snippets of information.
  • Deference. Previous generations were taught to defer to their elders but today’s young people are much more prepared to challenge and question.
  • Education. A degree is no longer an indicator of excellence; instead, it’s expected from almost everyone.
  • Recession. The generation entering the workplace now is the first that will, in all likelihood, earn less than their parents (and yet have higher expectations).

Every generation brings a new approach and some people find the resulting changes easier to adapt to than others. In a few years’ time, today’s graduate entrants will be running our businesses and it seems that, as we experience an accelerated pace of technological change, there’s also an accelerating gap in attitudes between generations.

For many years, we’ve been trying to align IT to business needs and it’s still a challenge at times. Perhaps now is the time to start thinking about the social needs of business end users, before that gap widens too?

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog and was jointly authored with Ian Mitchell.]

Technology Perspectives


At Fujitsu, we pride ourselves on being a forward-looking company that not only seeks to predict the future, but also to form it. We do this through close cooperation with our customers in order to meet their needs for today and for tomorrow.

Our vision is to develop and build networks of intelligent systems that work together in a way that touches and improves everyday life for people all around the globe. We call it the Intelligent Society. To make that a reality, we invest significant resources in identifying the patterns of change that are paving the way for the future.

Today, Fujitsu is launching a new Technology Perspectives microsite, presenting an across-the-board look at trends in technology, business and society, and featuring thought leadership from our Chief Technology Officers (CTOs) around the world, including here in the UK and Ireland.

The microsite is designed to be easy to use, so that busy executives can find the information they need quickly but download content when they need detail and depth.

Using a quadrant framework that balances personal freedom with technology to present four scenarios that express contrasting business and technology futures, we examine nine key trends that represent high-impact mid-term developments, as well as some others that are just over the horizon but may be even more significant. We also offer twelve predictions for change that we think are fairly safe bets, before highlighting those technologies that will soon fade into oblivion. You can also download the full report, if you prefer.

Technology Perspectives is intended to provide some background context for strategic planning, making it easier to obtain the insight and tools needed to prepare for a competitive future. Above all, we hope that the thought-provoking ideas on the Technology Perspectives microsite will spark a debate about planning for the future. We welcome you to join in the debate.

[This post originally appeared on the Fujitsu UK and Ireland CTO Blog.]