Weeknote 2024/03: missing cyclocross; digitally transforming my family; installing Ethernet and much, much more

Another week has flown by – this time I kept notes to keep track of it all in the hope it would speed up the weeknote writing. It didn’t, so I need to work on the format. Anyway, this is how it looks this week.

My week at work

Understandably, I can’t write much about what I do in the day job. Suffice to say, it’s been busy, busy, busy. I’m preparing for a presentation to the Node4 Go To Market (GTM) team next week. This will be me, along with my colleague Bjoern, presenting to the entire sales force and making the case for why they should be selling more of the services we’re responsible for. And, in parallel, I’m refreshing the collateral to support the sales of those same services.

I also spent some time on a call with one of our business partners this week, learning more about how they are developing their offers and how we can potentially do more work together.

My week in cycling

I know, this blog is supposed to be about tech, but I also have two very sporty teenagers that I’m very proud of. Their sports activities are a big part of my week (and my life in general).

Last weekend, I should have been in Falkirk, supporting my eldest son, Matt, at the 2024 British National Cyclocross Championships. As things transpired, that was not to be…

At 2023’s National Champs (Matt’s first senior year), Cameron Mason was so dominant in the Elite/U23 Men’s race that only 7 riders were permitted to finish the race (under the 80% rule). It’s a big investment of time and money to travel the length of the country for a short race but we would have been there, if Matt felt he was ready for it. Unfortunately, after a challenging few weeks with a return to racing after spending the autumn leading cycle tours in Greece, he decided to end his cyclocross season early. Apart from podiums in the local Central Cyclocross League (CCXL), third place in the Central Regional CX Champs was to be his only significant result this season. He’s preparing to build two new bikes for the 2024/25 cyclocross season – and he has plans for the 2024 road season too. I’m sure those plans will end up in these weeknotes in due course.

Just as a side note, after the demise of GCN Plus, it’s great to see BBC Scotland providing mainstream TV coverage of the national champs!

My week in technology

Adding AirPlay to an old Hi-Fi amplifier

With a bonus weekend at home, I got to finish up a tech project that’s been on the list for a while – adding AirPlay to my old 1990s Technics amplifier. When I moved in with my wife, my mid-range Hi-Fi stack was labelled “black loud crap”. As a result, it was banished from the house, but still lives on in the Man Cave. An old Raspberry Pi 2B, set up as an audio gateway, now provides the tech needed to cast audio without investing in more Sonos (or IKEA Symfonisk) kit, as I have in the rest of the house. This is the guide I followed, at PiMyLifeUp.

There’s the odd stutter, which I think may be due to a weak 2.4GHz Wi-Fi signal. It could also be down to running on a relatively old Pi 2. It certainly beats connecting Spotify via Alexa, which used my account and so only worked for me and not the whole family. It also works with other apps, like Pocket Casts and Audible.

Wilson family digital transformation

Late last year, I convinced Mrs W that we could use the family calendar on our iPhones to manage our busy family life. Previously, the paper calendar on the kitchen wall was the single source of truth. That’s not too helpful when we’re not at home. This digital transformation of the Wilson family has been a huge success, but it’s also shown me that people use calendars in different ways!

For example, our eldest son is currently away skiing. Is that one long appointment for 2 weeks? Or do we just need to know the dates he leaves and returns? And how do we record our youngest son’s hockey training sessions? Is it the actual session times, or the times we leave the house and return? I’m trying not to be too “Mark” about this, but it’s an interesting insight into how other people think!

On a related note, I also learned this week that not everyone sees pictures in their mind, like I do. I don’t know what they do “see”, but it explains why not everyone can visualise what something will look like when it’s finished!

AirTag all the things

After a trial with an Apple AirTag in my luggage (very useful when it wasn’t put on the plane at Stansted one holiday), I’ve been expanding our use of these devices. One use case that’s been particularly helpful is my youngest son’s keys… as he’s already had to replace at least one set that he lost before I tagged them. Now I regularly hear the “FindMy sound” as he searches for them before leaving the house.

On a similar note, for Christmas, my eldest son bought me an Apple Find My-compatible tracker for my glasses. It doesn’t have the Precision Finding feature of the AirTag, but it does tell me where they were last seen, and lets me play a sound. Now, when I leave them somewhere, I can listen for the chirp of the Orbit sensor. It’s a bit strange charging my glasses, though that’s relatively infrequent.

Other bits and pieces of tech

  • After seeing a thread about date formats, ISO standards and RFCs, I thought about my frustrations with people who write dates “the wrong way”. By the wrong way, I mean not putting the most significant portion first. The US convention of mm/dd/yyyy is nonsensical. UK dd/mm/yyyy is better, but I generally name files using yyyymmdd etc. because they then appear in date order when sorted. On that basis, I realised that my naming for these weeknotes should be year/week number (inspired by Sharon O’Dea). Previously I had erroneously named them week number/year. From this week forwards, that is corrected (there’s a quick illustration in the sketch after this list).
  • After watching a YouTube video, I successfully resuscitated an apparently-dead Li-ion battery pack (the cells had drained below the cut-off voltage of the on-board protection circuitry and needed a brief top-up before the pack would accept a charge). This is potentially dangerous – I’m not responsible for anything that happens if you try it, but it worked for me, and saved me quite a few quid. Some say to use a resistor for safety. Others stress to only “jump start” momentarily (as I did).
  • I was looking at some communications from Vodafone about the 3G switch-off… and wondered whether 3G occupies the same part of the spectrum as 4G – i.e. will the switch-off free up more channels for 4G/5G, or will 4G/5G gain access to extra spectrum? Twitter helped me out with that…
  • Hopefully that section between the hall (Openreach ONT) and the garage “datacentre” (ISP router) is all the Ethernet I need to run, but I have plenty of spare cable if I need to pull any more for a potential CCTV project… (I’ve been watching lots of videos about Reolink cameras).
  • Oh yes, one more thing. I finally changed my LinkedIn profile picture… my previous professional headshot was taken when I was in my late 30s, I think. I’m nearly 52 and afraid it’s time to look my age. This may not sound like news but it took me ages to find something suitably professional that I liked!
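
To illustrate the date-format point above, here’s a minimal Python sketch (the file names and dates are made up purely for the example):

```python
from datetime import date

# Year-first (ISO 8601-style) names sort chronologically with a plain text sort,
# which is why yyyymmdd beats dd/mm/yyyy for file names.
posts = [date(2023, 12, 29), date(2024, 1, 5), date(2024, 1, 19)]  # made-up dates

for name in sorted(f"weeknote-{d:%Y%m%d}.md" for d in posts):
    print(name)
# weeknote-20231229.md
# weeknote-20240105.md
# weeknote-20240119.md

# And the year/week-number naming now used for these posts:
year, week, _ = date(2024, 1, 19).isocalendar()
print(f"{year}/{week:02d}")  # 2024/03
```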

My week in TV

I’ll spare you all my YouTube highlights this week but, over in streaming TV land, Mrs W and I wrapped up three excellent series:

  • Mr Bates vs the Post Office (ITV);
  • Slow Horses (Apple TV); and
  • The Crown (Netflix).

This last season of The Crown has been criticised for being too dramatic, but I thought it was well done. There will be no season 7 and it feels like it was left at just the right point, at the marriage of Charles and Camilla (then Prince and Princess of Wales) and the early days of William and Catherine’s relationship (the current Prince and Princess of Wales). It even contained a nod to Queen Elizabeth II’s funeral, with her involvement in the plans but also some scenes that linked to the actual events last year.

And in case we hadn’t had enough Toby Jones, we’ve started watching season 2 of Detectorists, for some light-hearted relief from the more serious stuff.

Other things that should probably be a blog post on their own

I was going to write some more, but I’m getting bored of writing this now – goodness knows how you feel, dear reader. So there may need to be an overflow post or two about these topics, or maybe the tweets will say enough:

  • Well-paid IT folks moaning about the inconvenience that strikes cause in their lives… playing to the “them and us” narrative.
  • Rebooting the car to get Apple CarPlay to work again!
  • CTOs with 30 years of industry experience being approached about a job that claims to need a technical degree.
  • Storytelling. And how pictures can convey messages that words alone cannot. Or bring meaning to words when they are in another language that you only have a passing knowledge of.
  • Rail fare “simplification” and the very different approaches taken by LNER (UK Government-owned) and ScotRail (Scottish Government-owned).
  • Public sector IT contracts, and the need to be a good client – it’s not all about the supplier.
  • The increasingly anti-social nature of social media.

My week in pictures

Featured image: author’s own

Weeknote 2/2024: time flies

Writing this weeknote may explain why this week has felt so busy. I clearly try to squeeze far too many personal projects in around my work and family life…

This week:

  • I spent quite a bit of time looking at ways to reduce the cost of moving to Azure for our clients. I’m not going to post the details here, but I think my colleagues and I made some good progress. I’ll have more to share soon, but in the meantime, you can always get in touch. Here’s my Microsoft Bookings link if you want to have a chat about how Node4 could help your business.
  • Talking of which, I set up Calendly to try and pull my various calendars together. It’s kind of like Microsoft Bookings on steroids. Sadly, it didn’t do the thing I’m really struggling with: showing my work, home and family calendars in one place. Outlook does that for me, but the family calendar in iCloud doesn’t seem to update…
  • I have been staying up far too late watching too much YouTube. Last night the algorithm decided that I needed to know how to install a French Drain. Actually, it was right… it might help with some of the drainage issues in our back garden. Other gems have included:
  • I did try to create a static archive of my tweets though. It’s not quite as I would like, so let’s just say that’s “work in progress”. Maybe more next week, when I have a working solution.
  • And I questioned how “normal users” must feel when presented with nonsensical or jargon-filled computer messages:
  • The photos are still uploading from my NAS to Azure. Several people have recommended other solutions (e.g. Backblaze or Synology C2) but the granular charging on Azure means that I think my current solution may well cost a little less if I tick over the 1TB mark… (there’s a rough worked example of that logic just after this list).
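
For anyone wondering about the sums behind that last bullet, here’s a rough sketch of the logic. All of the prices in it are assumptions for illustration only (not real Azure, Backblaze or Synology C2 rates) – check current pricing before drawing any conclusions:

```python
# Back-of-an-envelope comparison: per-GB ("granular") billing vs tiered flat plans.
# All prices below are assumptions purely for illustration - not real quotes.
PER_GB_MONTH = 0.005               # assumed $/GB/month for granular blob storage
TIERS = {1024: 5.00, 2048: 10.00}  # assumed flat plans: up to 1 TB, up to 2 TB

def granular_cost(gb: float) -> float:
    return gb * PER_GB_MONTH

def tiered_cost(gb: float) -> float:
    # Pay for the smallest flat plan that fits the data
    return min(price for cap, price in TIERS.items() if gb <= cap)

for gb in (900, 1025, 1100, 2000):
    print(f"{gb:>5} GB: granular = ${granular_cost(gb):5.2f}/month, "
          f"tiered = ${tiered_cost(gb):5.2f}/month")

# Just over a tier boundary (e.g. 1025 GB) the flat plan jumps to the next tier,
# while per-GB billing only charges for the extra gigabyte - which is the thinking
# behind sticking with granular charging around the 1TB mark.
```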

I’ve been watching…

As well as the YouTube content I mentioned above, and the dramas I wrote about last week… 3/4 of my family watched the Barbie movie together last weekend. I really quite enjoyed it. Actually, Mark Kermode’s Guardian review nails it: “It’s a riotously entertaining candy-coloured feminist fable that manages simultaneously to celebrate, satirise and deconstruct its happy-plastic subject. Audiences will be delighted. Mattel should be ecstatic.”

And, away from the screen, I got to watch my youngest son play hockey at Bedford last night.

I’ve been reading…

Not enough. But I am slowly reading the materials for my amateur radio foundation licence exam… and I’ve made the Man Cave a better place to kick back and relax (including a place to read, or listen to an audiobook):

Some thoughts that won’t make it to a full blog post…

The ITV Drama about the Post Office Scandal is a brilliant illustration of the power of storytelling. Graham Chastney wrote about this before I got around to it and his post about how we are wired for stories is pretty much what I wanted to say. Dan Barker’s tweet looks a bit deeper at why years of quality journalism wasn’t enough and it took an ITV Drama to bring the story to the masses.

Rachel Coldicutt examines why we seem inclined to believe the machine, not the person:

I’ve certainly experienced “management by dashboard”, where a report that was believed by so many was flawed and presented misleading insights from the data.

And, whilst I’m still embarrassed that my former employer is so deeply embroiled in a scandal that led to so many miscarriages of justice, I’m more and more inclined to think that software is imperfect, and that the failure of leadership (and consequential mismanagement of the issues) was the main problem here, as outlined by Professor Damien Page:

What else?

Not enough writing. No press coverage this week. Still working out what my new content strategy is as I try to use less “social” media and make blogging less of a time hoover. Next week’s weeknote might not be such a rush…

Featured image: Author’s own
(screenshot from Microsoft Visio)

Weeknote 1/2024: A new beginning

Wow, that was a bump. New Year celebrations over, a day off for the public holiday, and straight back to work.

After a lot of uncertainty in December, I’ve been keen to get stuck in to something valuable, and I’m not breaking any confidentiality by saying that my focus right now is on refreshing the collateral behind Node4’s Public Cloud offerings. I need to work across the business – my Office of the CTO (OCTO) role is about strategy, innovation and offering development – but the work also needs to include specialist sales colleagues, our marketing teams, and of course the experts that actually deliver the engagements.

So that’s the day job. Alongside that, I’ve been:

  • Avoiding stating any grand new year resolutions. I’ll only break them. It was literally hours before I broke my goal of not posting on Twitter/X this year. Though I did step away from a 453-day streak on Duolingo to focus my spare time on other, hopefully less gamified, pursuits:
  • Doing far too little exercise. A recurring health condition is impacting my ability to walk, run, cycle and to get back to Caveman Conditioning. It’s getting a bit better but it may be another week before I can have my new year fitness kick-start.
  • Eating badly. Logging everything in the Zoe app is helping me to see what I should avoid (spoiler: I need to eat more plants and less sweet stuff) but my willpower is still shockingly bad. I was also alarmed to see Prof. Tim Spector launching what appeared to be an ultra-processed food (UPF) product. More on that after I’ve got to M&S and actually seen the ingredients list for the Zoe Gut Shot, but others are telling me it’s not a UPF.
  • Redesigning the disaster recovery strategy for my photos. I learned the hard way several years ago that RAID is not a backup, and nothing exists unless it’s in three places. For me that’s the original, a copy on my Synology NAS, and a copy in the cloud. My cloud (Azure) backups were in a proprietary format from the Synology Hyper Backup program, so I’ve started to synchronise the native files by following a very useful article from Charbel Nemnom, MVP. Unfortunately the timestamps get re-written on synchronisation, but the metadata is still inside the files and these are the disaster copies – hopefully I’ll never need to rely on them. (There’s a sketch after this list of how those timestamps could be put back from the EXIF data, if I ever do.)
  • Watching the third season of Slow Horses. No spoilers please. I still have 4 episodes to watch… but it’s great TV.
  • Watching Mr Bates vs. The Post Office. The more I learn about the Post Office Scandal, the more I’m genuinely shocked. I worked for Fujitsu (and, previously, ICL) for just over 15 years. I had nothing to do with Horizon, and knew nothing of the scandal, but it’s really made me think about the values of the company where I spent around half my career to date.
  • Spreading some of my late Father-in-law’s ashes by his tree in the Olney Community Orchard.
  • Meeting up with old friends from my “youth”, as one returns to England from his home in California, for a Christmas visit.
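
Going back to the photo backup point above: because the timestamps only get mangled on the synchronised copies, one option (if the disaster copies were ever needed) would be to restore each file’s modified time from the EXIF data still embedded inside it. Here’s a minimal sketch of that idea in Python, using a recent version of Pillow – it assumes JPEG files with a standard EXIF date tag, and the directory path is hypothetical:

```python
import os
import time
from pathlib import Path

from PIL import ExifTags, Image  # Pillow

def restore_mtime_from_exif(photo_dir: str) -> None:
    """Reset each JPEG's modified time from the EXIF date stored inside it."""
    for path in Path(photo_dir).rglob("*.jpg"):
        with Image.open(path) as img:
            exif = img.getexif()
        # 306 = DateTime ("YYYY:MM:DD HH:MM:SS"); 36867 = DateTimeOriginal
        stamp = exif.get(306) or exif.get_ifd(ExifTags.IFD.Exif).get(36867)
        if not stamp:
            continue  # no usable EXIF date - leave the file alone
        mtime = time.mktime(time.strptime(stamp, "%Y:%m:%d %H:%M:%S"))
        os.utime(path, (mtime, mtime))

# restore_mtime_from_exif("/path/to/restored/photos")  # hypothetical path
```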

Other things

Other things I found noteworthy this week:

  • Which came first: the chicken or the egg… sorry, the scissors or the blister-pack?

Press coverage

This week, I was quoted in this article:

Coming up

This weekend will see:

  • A return to Team MK Youth Cycle Coaching. Our local cyclo-cross league is finished for the 2023/4 season so we’re switching back to road cycling as we move into the new year.
  • Some home IT projects (more on them next week).
  • General adulting and administration.

Next week, I’ll be continuing the work I mentioned at the head of this post, but also joining an online Group Coaching session from Professor John Amaechi OBE. I have no idea what to expect but I’m a huge fan of his wise commentary. I’m also listening to The Promises of Giants on Audible. (I was reading on Kindle, but switched to the audiobook.)

This week in photos

Featured image: Author’s own
(this week’s flooding of the River Great Ouse at Olney)

Celebrating ChatGPT’s first birthday…

I was asked to comment on a few GPT-related questions by Node4’s PR agency. I haven’t seen where those comments are featured yet, but I decided to string them together into my own post…

How has ChatGPT prompted innovation over the past year, transforming industries for the better?

ChatGPT’s main impact in 2023 has been on driving a conversation around generative AI, and demonstrating the potential that this technology has to make truly impactful change. It’s certainly stoked a lot of interest in a lot of sectors and is clearly right at the top of the hype curve right now.

It’s sparked conversations around driving innovation and transforming a variety of industries in remarkable ways, including:

  • Revolutionising Customer Experiences: AI-powered chatbots, bolstered by sophisticated language models like ChatGPT, can engage with customers in natural language, offering real-time assistance, answering queries, and even providing personalized recommendations. This level of interaction not only improves customer satisfaction but also opens new avenues for businesses to understand their customers on a deeper level.
  • Enhancing Decision-Making and Strategic Planning: By harnessing the power of AI, leaders can make informed decisions that are driven by data rather than intuition alone. This has impacted decision-making processes and strategic planning across industries.
  • Transforming the Economy: ChatGPT, with its human-like writing abilities, holds the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarizing and analysing data. This has left economists unsure how jobs and overall productivity might be affected [as we will discuss in a moment].
  • Developer Tools: OpenAI and Microsoft have unveiled a series of artificial intelligence tool updates, including the ability for developers to create custom versions of ChatGPT. These innovations are designed to make it easier for developers to incorporate AI into their projects, whether they’re building chatbots, integrating AI into existing systems, or creating entirely new applications. (There’s a small illustrative code example just after this list.)
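
To make that last point a little more concrete, this is roughly what “incorporating AI into a project” looks like with the OpenAI Python library. It’s a minimal sketch rather than production code, and the model name is just an example:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name - use whatever you have access to
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a small business."},
        {"role": "user", "content": "Draft a two-sentence reply to a customer asking about opening hours."},
    ],
)

print(response.choices[0].message.content)
```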

These advancements signal a new direction in tackling complex challenges. The impact of ChatGPT on workers and the economy as a whole is far-reaching and continues to evolve. 2023 was just the start – and the announcements from companies like Microsoft on how it will use ChatGPT and other AI technologies at the heart of its strategy show that we are still only getting started on an exciting journey.

Elon Musk has recently claimed that AI will end all jobs. Is this actually a reality, or is it scaremongering?

Mr Musk was one of the original backers of OpenAI, the company that created ChatGPT; however, he resigned from their board in 2018. At the time, that was over conflicts of interest with Tesla’s AI work, and Musk now has another AI startup called xAI. He’s well-known for his controversial opinions, and his comment about AI ending all jobs is intended to fuel controversy.

Computing has been a disruptor throughout the last few generations. Generative AI is the latest iteration of that cycle.

Will we see jobs replaced by AI? Almost certainly! But new jobs will be created in their place.

To give an example, when I entered the workplace in the 1990s, we didn’t have social media and all the job roles that are built around it today; PR involved phoning journalists and taking clippings from newspapers to show where coverage had been gained for clients; and advertising was limited to print, TV, radio, and billboards. That world has changed immensely in the last 30 years and that’s just one sector.

For another example, look at retail, where a huge number of transactions are now online and even those in-store may be self-service. Or the rise in logistics opportunities (warehousing, transportation) as a result of all our online commerce, which in turn fuels ever more variety in our purchases.

The jobs that my sons are taking on as they enter the workplace are vastly different to the ones I had. And the jobs their children take a generation later will be different again.

Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

So will AI end work? Maybe one day, but not for a while yet!

When and if it does, way out in the future, we will need new social constructs – perhaps a Universal Basic Income – to replace wages and salaries from employment, but right now that’s a distant dream.

How far is there still to go to overcome the ethical nightmares surrounding the technology e.g. data privacy, algorithm bias?

There’s a lot of work that’s been done to overcome ethical issues with artificial intelligence but there’s still a lot to do.

The major source of bias is the training data used by models. We’re looking at AI in the context of ChatGPT and my understanding is that ChatGPT was trained, in part, on information from the world-wide web. That data varies tremendously in quality and scope.

As it becomes easier for organisations to generate their own generative AI models, tuned with their own data, we can expect to see improved quality in the outcomes. Whether that improved quality translates into ethical responses depends on the decisions made by the humans that decide how to use the AI results. OpenAI and its partners will be keen to demonstrate how they are improving the models that they create and reducing bias, but this is a broader social issue, and we can’t simply rely on technology.

ChatGPT has developed so much in just a year, what does the next year look like for its capabilities? How will the workplace look different this time next year, thanks to ChatGPT?

We’ve been using forms of AI in our work for years – for example predictive text, design suggestions, grammar correction – but the generative AI that ChatGPT has made us all familiar with over the last year is a huge step forwards.

Microsoft is a significant investor in OpenAI (the creators of ChatGPT) and the licensing of the OpenAI models is fundamental to Microsoft’s strategy. So much so that, when Sam Altman and Greg Brockman (OpenAI’s CEO and President respectively) abruptly left the company, Microsoft moved quickly to offer them the opportunity to set up an advanced AI research function inside Microsoft. Even though Altman and Brockman were soon reinstated at OpenAI, it shows how important this investment is to Microsoft.

In mid-November 2023, the main theme of the Microsoft Ignite conference was AI, including new computing capabilities, the inclusion of a set of Copilots in Microsoft products – and the ability for Microsoft’s customers to create their own Copilots. Indeed, in his keynote, Microsoft CEO Satya Nadella repeatedly referred to the “age of the Copilot”.

These Copilots are assistive technologies – agents to help us all be more productive in our work. The Copilots use ChatGPT and other models to generate text and visual content.

Even today, I regularly use ChatGPT to help me with a first draft of a document, or to help me write an Excel formula. The results are far from perfect, but the output is “good enough” for a first draft that I can then edit. That’s the inspiration to get me going, or the push to see something on the page, which I can then take to the next level.

This is what’s going to influence our workplaces over the next year or so. Whereas now we’re talking about the potential of Copilots, next year we’ll be using them for real.

What about 5 or 10 years time?

5 or 10 years is a long time in technology. And the pace of change seems to be increasing (or maybe that’s just a sign of my age).

I asked Microsoft Copilot (which uses ChatGPT) what the big innovations of 2013 were and it said: Google Glass (now confined to the history books); Oculus Rift (now part of Meta’s plans for augmented reality) and Bitcoin (another controversial technology whose fans say it will be world-changing and critics say has no place in society). For 2018 it was duelling neural networks (AI); babel-fish earbuds (AI) and zero-carbon natural gas.

The fact that none of these are commonplace today tells me that predicting the future is hard!

If pushed, and talking specifically about innovations where AI plays a part, I’d say that we’ll be looking at a world where:

  • Our interactions with technology will have changed – we will have more (and better) spoken interaction with our devices (think Apple Siri or Amazon Alexa, but much improved). Intelligent assistants will be everywhere. And the results they produce will be integrated with Augmented Reality (AR) to create new computing experiences that improve productivity in the workplace – be that an industrial setting or an office.
  • The computing services that are provided by companies like Node4 but also by the hyperscalers (Microsoft, Amazon, et al.) will have evolved. AI will change the demands of our clients and that will be reflected in the ways that we provide compute, storage and connectivity services. We’ll need new processors (moving beyond CPUs, GPUs, TPUs, to the next AI-driven approach), new approaches to power and cooling, and new approaches to data connectivity (hollow-core fibre, satellite communications, etc.).
  • People will still struggle to come to grips with new computing paradigms. We’ll need to invest more and more effort into helping people make good use of the technologies that are available to them. Crucially, we’ll also need to help current and future generations develop critical thinking skills to consider whether the content they see is real or computer-generated.

Anything else to add?

Well, a lot has been made of ChatGPT and its use in an educational context. One question is “should it be banned?”

In the past, calculators (and even slide rules – though they were before my time!) were banned before they became accepted tools, even in the exam hall. No-one would say today “you can’t use Google to search for information”. That’s the viewpoint we need to get to with generative AI. ChatGPT is just a tool and, at some point, instead of banning these tools, educational establishments (and broader society) will issue guidelines for their acceptable use.

Postscript

Bonus points for working out which part of this was original content by yours truly, and which had some AI assistance…

Featured image: generated via Microsoft Copilot, powered by DALL-E 3.

What did we learn at Microsoft Ignite 2023?

Right now, there’s a whole load of journalists and influencers writing about what Microsoft announced at Ignite. I’m not a journalist, and Microsoft has long since stopped considering me as an influencer. Even so, I’m going to take a look at the key messages. Not the technology announcements – for those there’s the Microsoft Ignite 2023 Book of News – but the real things IT Leaders need to take away from this event.

Microsoft’s investment in OpenAI

It’s all about AI. I know, you’re tired of the hype, suffering from AI fatigue, but for Microsoft, it really is about AI. And if you were unconvinced just how important AI is to Microsoft’s strategy, their action to snap up key members of staff from an imploding OpenAI a week later should be all you need to see:

Tortoise Media‘s Barney Macintyre (@barneymac) summed it up brilliantly when he said that

“Satya Nadella, chief executive of Microsoft, has played a blinder. Altman’s firing raised the risk that he would lose a key ally at a company into which Microsoft has invested $13 billion. After it became clear the board wouldn’t accept his reinstatement, Nadella offered jobs to Altman, Brockman and other loyalist researchers thinking about leaving.

The upshot: a new AI lab, filled with talent and wholly owned by Microsoft – without the bossy board. An $86 billion subsidiary for a $13 billion investment.”

But the soap opera continued and, by the middle of the week, Altman was back at OpenAI, apparently with the blessing of Microsoft!

If nothing else, this whole saga should reinforce just how important OpenAI is to Microsoft.

The age of the Copilot

Copilot is Microsoft’s brand for a set of assistive technologies that will sit alongside applications and provide an agent experience, built on ChatGPT, Dall-E and other models. Copilots are going to be everywhere. So much so that there is a “stack” for Copilot and Satya described Microsoft as “a Copilot company”.

That stack consists of:

  • The AI infrastructure in Azure – all Copilots are built on AzureAI.
  • Foundation models from OpenAI, including the Azure OpenAI Service to provide access in a protected manner but also new OpenAI models, fine-tuning, hosted APIs, and an open source model catalogue – including Models as a Service in Azure.
  • Your data – and Microsoft is pushing Fabric as all the data management tools in one SaaS experience, with onwards flow to Microsoft 365 for improved decision-making, Purview for data governance, and Copilots to assist. One place to unify, prepare and model data (for AI to act upon).
  • Applications, with tools like Microsoft Teams becoming more than just communication and collaboration but a “multi-player canvas for business processes”.
  • A new Copilot Studio to extend and customise Microsoft Copilot, with 1100 prebuilt plugins and connectors for every Azure data service and many common enterprise data platforms.
  • All wrapped with a set of AI safety and security measures – both in the platform (model and safety system) and in application (metaprompts, grounding and user experience).

In addition to this, Bing Chat is now re-branded as Copilot – with an enterprise version at no additional cost to eligible Entra ID users. On LinkedIn this week, one Microsoft exec posted that “Copilot is going to be the new UI for work”.

In short, Copilots will be everywhere.

Azure as the world’s computer

Of course, other cloud platforms exist, but I’m writing about Microsoft here. So what did they announce that makes Azure even more powerful and suited to running these new AI workloads?

  • Re-affirming the commitment to zero carbon power sources and then becoming carbon negative.
  • Manufacturing their own hollow-core fibre to drive up speeds.
  • Azure Boost (offloading server virtualisation processes from the hypervisor to hardware).
  • Taking the innovation from Intel and AMD but also introducing new Microsoft silicon: Azure Cobalt (ARM-based CPU series) and Azure Maia (AI accelerator in the form of an LLM training and inference chip).
  • More AI models and APIs. New tooling (Azure AI Studio).
  • Improvements in the data layer with enhancements to Microsoft Fabric. The “Microsoft Intelligent Data Platform” now has 4 tenets: databases; analytics; AI; and governance.
  • Extending Copilot across every role and function (as I briefly discussed in the previous section).

In summary, and looking forward

Microsoft is powering ahead on the back of its AI investments. And, as tired of the hype as we may all be, it would be foolish to ignore it. Copilots look to be the next generation of assistive technology that will help drive productivity. Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

In time we’ll see AI and mixed reality coming together to make sense of our intent and the world around us. Voice, gestures, and where we look become new inputs – the world becomes our prompt and interface.

Featured images: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Learning to be intelligent about artificial intelligence

This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that: almost every week includes a barrage of news about AI. And, depending on which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. And this summit is already controversial with some questioning the diversity of the attendees, including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic that I covered in a recent technology advice note that I wrote.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive sector of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump, written by a consumer journalist, who seems to be asserting that generative AI is better at predicting health outcomes than expert system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear, generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on previous words – basically, on probability. And sometimes they get it wrong.
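
To illustrate that “next word by probability” idea, here’s a deliberately tiny toy sketch: it just counts which word follows which in a scrap of sample text, then generates new text by sampling a likely next word. Real large language models are vastly more sophisticated, but the underlying principle – predict the next token from what came before, with no notion of “facts” – is the same:

```python
import random
from collections import Counter, defaultdict

# A toy "language model": count which word follows which, then generate new text
# by sampling the next word in proportion to how often it was seen. No facts involved.
corpus = ("the meeting took place between the team and the client "
          "the team discussed the cloud strategy and the client agreed")

pairs = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    pairs[current][nxt] += 1

def next_word(word: str) -> str:
    candidates = pairs[word]
    if not candidates:  # dead end - no observed successor, pick anything
        return random.choice(words)
    return random.choices(list(candidates), weights=candidates.values())[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # plausible-looking, but it can easily "make stuff up"
```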

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness on gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, while they are transformative, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently, asking the right questions and understanding its strengths and limitations. We need to be smart about using AI, recognizing both its potential and its constraints. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skillfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.

Some thoughts on Microsoft Windows Extended Security Updates…

Technology moves quickly. And we’re all used to keeping operating systems on current (or n-1) releases, with known support lifecycles and planned upgrades. We are, aren’t we? And every business application, whether COTS or bespoke, has an owner, who maintains a road map and makes sure that it’s not going to become the next item of technical debt. Surely?

Unfortunately, these things are not always as common as they should be. A lot comes down to the perception of IT – is it a cost centre or does it add value to the business?

Software Assurance and Azure Hybrid Benefit

Microsoft has a scheme for volume licensing customers called Software Assurance. One of the benefits of this scheme is the ability to keep running on the latest versions of software. Other vendors have similar offers. But they all come at a cost.

When planning a move to the cloud, Software Assurance is the key to unlocking other benefits too. Azure Hybrid Benefit is a licensing offer for Windows Server and SQL Server that provides a degree of portability between cloud and on-premises environments. Effectively, the cloud costs are reduced because the on-prem licenses are released and allocated to new cloud resources.

But what if you don’t have Software Assurance? As a Windows operating system comes to the end of its support lifecycle, how are you going to remain compliant when there are no longer any updates available?

End of support for Windows Server 2012/2012 R2

In case you missed it, Windows Server 2012 and Windows Server 2012 R2 reached the end of extended support on October 10, 2023. (Mainstream support ended five years previously.) That means that these products will no longer receive security updates, non-security updates, bug fixes, technical support, or online technical content updates.

Microsoft’s advice is:

“If you cannot upgrade to the next version, you will need to use Extended Security Updates (ESUs) for up to three years. ESUs are available for free in Azure or need to be purchased for on-premises deployments.”

Extended Security Updates

Extended Security Updates are a safety net – even Microsoft describes the ESU programme as:

“a last resort option for customers who need to run certain legacy Microsoft products past the end of support”.

The ESU scheme:

“includes Critical and/or Important security updates for a maximum of three years after the product’s End of Extended Support date. Extended Security Updates will be distributed if and when available.

ESUs do not include new features, customer-requested non-security updates, or design change requests.”

They’re just a way to maintain support whilst you make plans to get off that legacy operating system – which by now will be at least 10 years old.

If your organisation is considering ESUs, the real question to answer is: what are the sticking points that are keeping you from moving away from the legacy operating system? For example:

  • Is it because there are applications that won’t run on a later operating system? Maybe moving to Azure (or to a hybrid arrangement with Azure Arc) will provide some flexibility to benefit from ESUs at no extra cost whilst the app is modernised? (Windows Server and SQL Server ESUs are automatically delivered to Azure VMs if they’re configured to receive updates).
  • Is it a budget concern? In this case, ESUs are unlikely to be a cost-efficient approach. Maybe there’s an alternative – again through cloud transformation, software financing, or perhaps a cloud-to-edge platform.
  • Is it a cash-flow concern? Leasing may be an answer.

There may be other reasons, but doing nothing and automatically accepting the risk is an option that a lot of companies choose… the art (of consulting) is to help them to see that there are risks in doing nothing too.

Featured image by 51581 from Pixabay

Password complexity in the 1940s

Over the last couple of weeks I’ve been fortunate enough to have two demonstrations of Enigma machines. For those who are not familiar with these marvellous electro-mechanical cipher machines, they were used to encrypt communications, most notably by German forces during World War 2.

The first of the demonstrations was at Milton Keynes Geek Night, where PJ Evans (@MrPJEvans) gave an entertaining talk on the original Milton Keynes Geeks.

Then, earlier this week, I was at Bletchley Park for Node4’s Policing First event, which wrapped up with an Enigma demonstration from Phil Simons.

The two sessions were very different in their delivery. PJ’s used Raspberry Pi and web-based emulators, along with slides and a demonstration with a ball of wool. Phil was able to show us an actual Enigma machine. What struck me, though, was the weakness that ultimately led to Bletchley Park cracking wartime German encryption: it wasn’t the encryption itself, but the way human operators used it.

Downfall

The Enigma machine was originally invented for encrypted communications in the financial services sector. By the time the German military was using it in World War 2, the encryption was very strong.

Despite the alphabet having just 26 characters, the encryption was formidable: each character was encoded as an electrical signal that passed through three rotors (chosen from a set of five, changed daily), each with a different start position and stepping on every key press, plus a plugboard of ten cable pairs that further increased the complexity.

There’s a good description of how the Enigma machine works on Brilliant. To cut a long story short, an Enigma machine can be set up in 158,962,555,217,826,360,000 ways. Brute force attacks are just not credible. Especially when the setup changes every day and each military network has a different encryption setup.
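
The arithmetic behind that enormous number is surprisingly easy to reproduce. Here’s a quick sketch of the standard calculation (3 rotors chosen in order from a set of 5, 26 start positions per rotor, and 10 plugboard cables pairing up 20 of the 26 letters):

```python
from math import factorial, perm

rotor_orders = perm(5, 3)   # 3 rotors chosen, in order, from a set of 5 = 60
rotor_positions = 26 ** 3   # each rotor can start at any of 26 positions = 17,576

# 10 plugboard cables pair up 20 of the 26 letters: 26! / (6! * 10! * 2^10)
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_positions * plugboard
print(f"{total:,}")         # 158,962,555,217,826,360,000 - the figure quoted above
```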

But there were humans involved:

  • Code books were needed so that the sending and receiving stations could set their machines up identically each day.
  • Young soldiers on the front line took short-cuts, like re-using rotor start positions. They would spell out things like BER or PAR (their home city, where they were stationed, a girlfriend’s name, etc.).
  • Some networks issued guidance that all 26 letters had to be used as rotor start positions over each 26-day period. This had the unintended consequence that the choice became predictable: in trying to ensure variety, it actually reduced the combinations, because the letter couldn’t be one of those already used earlier in the period.
  • Then there was the flaw that an Enigma machine’s algorithm was designed to take one letter and output another. Input of A would never result in output of A, for example.
  • And there were common phrases to look for in the messages to test possible encryption combinations – like WETTERBERICHT (weather report).

All of these clues helped the code-breakers at Bletchley Park narrow down the combinations. That gave them the head start they needed to brute-force the encryption on a message.

Why is this relevant today?

By now, you’re probably thinking “that’s a great history lesson Mark, but why is it relevant today?”

Well, we have the same issues in modern IT security. We rely on people following policies and processes. And people look for shortcuts.

Take password complexity as an example. The UK National Cyber Security Centre (NCSC) specifically advises against enforcing password complexity requirements. Users will work around the requirements with predictable outcomes, and that actually reduces security. Just like with the “use all 26 letters in 26 days” guidance I cited in my Enigma history lesson above.

And yet, only last month, I was advising a client whose CIO peers maintain that password complexity should be part of the approach.

One more thing… the Germans tried to crack Allied encryption too. They gave up after a while because it was difficult – they assumed if they couldn’t crack ours then we couldn’t crack theirs. But, whilst German command was distributed, the Allies set up what we would now call a “centre of excellence” in Bletchley Park. And that helped to bring together some of our greatest minds, along with several thousand support staff!

Postscript

After I started to write this post, I was multitasking on a Teams call. I should have concentrated on just one thing. Instead, I went to open a DocuSign link from the company HR department and fell foul of a phishing simulation exercise. I’m normally pretty good at spotting these things but this time I was distracted. As a result, I clicked the (potentially credible) link without checking it. If you want an illustration of how fallible humans are, that’s one right there!

Featured image: author’s own.

This is what happened when I asked ChatGPT to write a blog post…

In recent weeks, I’ve been watching the buzz of activity about large language models and generative AI. So I decided to see what would happen if I asked Open AI’s ChatGPT to write a blog post for me. I gave it the following prompt:

Write a blog post of up to 800 words that discusses the use of generative AI tools like ChatGPT to automate human activities Use writing a blog post as an example. Include some commentary about the potential social impacts of using AI to generate content. Also discuss whether there are any concerns that need to be considered - such as whether AI-generated content is accurate.

“Generative AI tools like ChatGPT have been making waves in the field of automation and artificial intelligence. These tools have the ability to generate human-like text, helping to automate certain human activities such as writing, customer service, and more. While the potential benefits of these tools are clear, there are also concerns about their impact on society and the accuracy of the content they produce.

One of the most significant benefits of generative AI tools like ChatGPT is their ability to automate repetitive or time-consuming tasks. For example, a blogger who regularly posts new content may use ChatGPT to generate article outlines, saving time and effort. Customer service teams can use generative AI tools to provide quick and accurate responses to frequently asked questions, improving the overall customer experience.

Another benefit of generative AI tools is their ability to create content in multiple languages, allowing businesses to expand their reach into new markets. These tools can also be used to generate personalized content for individuals, such as email newsletters or social media posts, which can improve engagement and lead to higher conversions.

However, the use of generative AI tools also raises concerns about the potential social impacts of using AI to generate content. One concern is that these tools could lead to a loss of jobs, particularly in industries where writing and content creation are essential. While automation has always been a part of technological progress, there is a risk that AI could replace more jobs than it creates.

Another concern is that AI-generated content could be used to spread misinformation and propaganda. With the ability to create convincing, human-like text, there is a risk that malicious actors could use these tools to create fake news stories or manipulate public opinion. This could have serious consequences for democracy and social stability.

Accuracy is another concern when it comes to generative AI tools. While these tools are designed to create human-like text, they may not always produce accurate or reliable information. This could be particularly problematic in fields like journalism or science, where accuracy is essential. As AI tools become more sophisticated, it will be important to ensure that the content they produce is factually correct and trustworthy.

Despite these concerns, generative AI tools like ChatGPT have the potential to revolutionize the way we create and consume content. As with any new technology, it is important to approach these tools with caution and to carefully consider their potential impact on society. However, if used responsibly, these tools could improve efficiency, accuracy, and creativity in a variety of industries.

In the case of writing blog posts, for example, using ChatGPT to generate outlines or ideas could be a useful starting point, but it is important to remember that these tools are not a substitute for human creativity and expertise. While AI-generated content may be faster and more efficient, it may lack the nuance and personality that comes from a human writer.

To ensure that AI-generated content is accurate and reliable, it is important to use reliable sources of information and to fact-check any content that is produced. This is particularly important when it comes to sensitive or controversial topics, where misinformation can have serious consequences.

Overall, the use of generative AI tools like ChatGPT to automate human activities is a promising development that has the potential to improve efficiency and creativity in a variety of fields. However, it is important to approach these tools with caution and to carefully consider their potential impact on society. With responsible use and careful oversight, generative AI tools could become an important part of our technological landscape, helping us to achieve more than we ever thought possible.”

Is it good enough?

As it happens, the resulting text is quite believable, but it’s not actually a very good post. A different AI, the one from Yoast SEO that helps with the readability of my content, highlighted that there aren’t enough subheadings. Also, more than half the sentences are a bit long (more than 20 words). And it’s using too much passive voice (I do that too, so I guess it could be considered authentic!).
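
For what it’s worth, the “sentences over 20 words” check is easy enough to approximate yourself. Here’s a crude sketch (a naive sentence splitter, so nothing like Yoast’s real analysis, and the draft file name is hypothetical):

```python
import re

def long_sentence_ratio(text: str, limit: int = 20) -> float:
    """Fraction of sentences longer than `limit` words (very naive splitting)."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    long_ones = [s for s in sentences if len(s.split()) > limit]
    return len(long_ones) / len(sentences) if sentences else 0.0

with open("chatgpt-draft.txt") as f:  # hypothetical file containing the AI's text
    draft = f.read()
print(f"{long_sentence_ratio(draft):.0%} of sentences are over 20 words")
```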

Garbage In, Garbage Out (GIGO)

I tried asking ChatGPT to improve its writing, based on the points above, but I didn’t get far:

Re-write the above with shorter sentences to make it easier to read, and use sub-headings.

And then:

Try again, and this time use try to use active voice instead of passive voice... and make the subheadings a bit snappier!

It made some changes, but really the way I see tools like this working is as a very quick first draft for review and update by a human writer. We just have to be careful not to let those who don’t understand or who don’t value writing say “just get an AI to write it in a few seconds”.

Featured image by Alexandra_Koch from Pixabay.

Timeless technology

In recent days, I’ve been thinking about tech that has become ubiquitous. Like the IBM Personal Computer – which is now well over 40 years old and I still use a derivative of it every day. But then I started to think about tech that’s no longer in daily use but yet which still seems modern and futuristic…

…like Concorde

Concorde may not have become as world-dominant as originally intended but, for a while, the concept of supersonic flying was the height (pun absolutely intended) of luxury air travel. Sadly, changing market demands, soaring costs, environmental impacts, and the Paris crash of AF4590 in 2000 sealed its fate. The plane’s operators (British Airways and Air France) agreed to end commercial flights of the jet from 2003.

The elegant lines and delta wings still look as great today as they would have in 1969. And supersonic commercial flights may even be returning to the skies by the end of the decade.

…the British Rail Advanced Passenger Train

British Rail’s Advanced Passenger Train (Experimental) – or APT-E – of 1972 is like a silver dart. Just as you don’t have to be a plane-spotter to appreciate Concorde, you don’t have to be a train-spotter to see that the APT-E’s sleek lines scream “fast” – and in 1975 it set a new British speed record of just over 150 miles an hour.

BR APT-E in 1972

The APT project was troublesome but the technology it developed lived on in other forms. The idea of a High Speed Train (HST) developed into the famous Inter-City 125. That was introduced in 1976 and is only now being withdrawn from service. Meanwhile, tilting train technology is used for high speed trains on traditional lines – most notably the Pendolinos on the UK’s West Coast Main Line.

…and Oxygène

Last night, I was relaxing by idly flicking through YouTube recommendations and it showed me this:

It’s an amazing view of the early-mid 1970s electronic instruments that Jean-Michel Jarre used to create his breakthrough album: Oxygène. And, as I found earlier this evening, it’s still a great soundtrack to go for a run. Listening on my earphones made me feel like I was in a science fiction film!

Modern electronic artists may use different synthesizers and keyboards but the technology Jean-Michel Jarre used smashed down doors. Oxygène was initially rejected by record companies and, in this Guardian Article, Jarre says:

“Oxygène was initially rejected by record company after record company. They all said: ‘You have no singles, no drummer, no singer, the tracks last 10 minutes and it’s French!’ Even my mother said: ‘Why did you name your album after a gas and put a skull on the cover?'”

Jean-Michel Jarre

Nowadays, electronic music – often instrumental – is huge. After playing the whole Oxygène album on my run, Spotify followed up with yet more great tracks. Visage (Fade to Grey), Moby (Go), OMD (Joan of Arc)… let’s see where it goes next!

What other timeless tech is out there?

I’ve written about three technologies that are around 50 years old now. Each one has lived on in a new form whilst remaining a timeless classic. What else have I missed? And what technology from today will we look back on so favourably in the future?

Featured images: British Airways Concorde G-BOAC by Eduard Marmet CC BY-SA 3.0 and The British Rail APT-E in the RTC sidings between tests in 1972 by Dave Coxon Public Domain.