Celebrating ChatGPT’s first birthday…


I was asked to comment on a few GPT-related questions by Node4’s PR agency. I haven’t seen where those comments are featured yet, but I decided to string them together into my own post…

How has ChatGPT prompted innovation over the past year, transforming industries for the better?

ChatGPT’s main impact in 2023 has been in driving a conversation around generative AI and demonstrating the potential this technology has to make truly impactful change. It has certainly stoked interest across many sectors and is clearly right at the top of the hype curve right now.

It’s sparked conversations around driving innovation and transforming a variety of industries in remarkable ways, including:

  • Revolutionising Customer Experiences: AI-powered chatbots, bolstered by sophisticated language models like ChatGPT, can engage with customers in natural language, offering real-time assistance, answering queries, and even providing personalised recommendations. This level of interaction not only improves customer satisfaction but also opens new avenues for businesses to understand their customers on a deeper level.
  • Enhancing Decision-Making and Strategic Planning: By harnessing the power of AI, leaders can make informed decisions that are driven by data rather than intuition alone. This has impacted decision-making processes and strategic planning across industries.
  • Transforming the Economy: ChatGPT, with its human-like writing abilities, holds the promise of automating all sorts of tasks that were previously thought to be solely in the realm of human creativity and reasoning, from writing to creating graphics to summarising and analysing data. This has left economists unsure how jobs and overall productivity might be affected [as we will discuss in a moment].
  • Developer Tools: OpenAI and Microsoft have unveiled a series of artificial intelligence tool updates, including the ability for developers to create custom versions of ChatGPT. These innovations are designed to make it easier for developers to incorporate AI into their projects, whether they’re building chatbots, integrating AI into existing systems, or creating entirely new applications.

These advancements signal a new direction in tackling complex challenges. The impact of ChatGPT on workers and the economy as a whole is far-reaching and continues to evolve. 2023 was just the start – and the announcements from companies like Microsoft on how it will use ChatGPT and other AI technologies at the heart of its strategy show that we are still only getting started on an exciting journey.

Elon Musk has recently claimed that AI will end all jobs. Is this actually a reality, or is it scaremongering?

Mr Musk was one of the original backers of OpenAI, the company that created ChatGPT; however, he resigned from its board in 2018. At the time, that was put down to conflicts of interest with Tesla’s AI work, and Musk now has another AI startup called xAI. He’s well-known for his controversial opinions, and his comment about AI ending all jobs seems intended to fuel controversy.

Computing has been a disruptor throughout the last few generations. Generative AI is the latest iteration of that cycle.

Will we see jobs replaced by AI? Almost certainly! But new jobs will be created in their place.

To give an example, when I entered the workplace in the 1990s, we didn’t have social media and all the job roles that are built around it today; PR involved phoning journalists and taking clippings from newspapers to show where coverage had been gained for clients; and advertising was limited to print, TV, radio, and billboards. That world has changed immensely in the last 30 years and that’s just one sector.

For another example, look at retail, where a huge number of transactions are now online and even those in-store may be self-service. Or the rise in logistics opportunities (warehousing, transportation) driven by all that online commerce and the demand for ever more variety in our purchases.

The jobs that my sons are taking on as they enter the workplace are vastly different to the ones I had. And the jobs their children take a generation later will be different again.

Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

So will AI end work? Maybe one day, but not for a while yet!

When and if it does, way out in the future, we will need new social constructs – perhaps a Universal Basic Income – to replace wages and salaries from employment, but right now that’s a distant dream.

How far is there still to go to overcome the ethical nightmares surrounding the technology e.g. data privacy, algorithm bias?

There’s a lot of work that’s been done to overcome ethical issues with artificial intelligence but there’s still a lot to do.

The major source of bias is the training data used by models. We’re looking at AI in the context of ChatGPT and my understanding is that ChatGPT was trained, in part, on information from the world-wide web. That data varies tremendously in quality and scope.

As it becomes easier for organisations to generate their own generative AI models, tuned with their own data, we can expect to see improved quality in the outcomes. Whether that improved quality translates into ethical responses depends on the decisions made by the humans that decide how to use the AI results. OpenAI and its partners will be keen to demonstrate how they are improving the models that they create and reducing bias, but this is a broader social issue, and we can’t simply rely on technology.

ChatGPT has developed so much in just a year, what does the next year look like for its capabilities? How will the workplace look different this time next year, thanks to ChatGPT?

We’ve been using forms of AI in our work for years – for example predictive text, design suggestions, grammar correction – but the generative AI that ChatGPT has made us all familiar with over the last year is a huge step forwards.

Microsoft is a significant investor in OpenAI (the creators of ChatGPT) and the licensing of the OpenAI models is fundamental to Microsoft’s strategy. So much so that, when Sam Altman and Greg Brockman (OpenAI’s CEO and President respectively) abruptly left the company, Microsoft moved quickly to offer them the opportunity to set up an advanced AI research function inside Microsoft. Even though Altman and Brockman were soon reinstated at OpenAI, it shows how important this investment is to Microsoft.

In mid-November 2023, the main theme of the Microsoft Ignite conference was AI, including new computing capabilities, the inclusion of a set of Copilots in Microsoft products – and the ability for Microsoft’s customers to create their own Copilots. Indeed, in his keynote, Microsoft CEO Satya Nadella repeatedly referred to the “age of the Copilot”.

These Copilots are assistive technologies – agents to help us all be more productive in our work. The Copilots use ChatGPT and other models to generate text and visual content.

Even today, I regularly use ChatGPT to help me with a first draft of a document, or to help me write an Excel formula. The results are far from perfect, but the output is “good enough” for a first draft that I can then edit. That’s the inspiration to get me going, or the push to see something on the page, which I can then take to the next level.

This is what’s going to influence our workplaces over the next year or so. Whereas now we’re talking about the potential of Copilots, next year we’ll be using them for real.

What about 5 or 10 years’ time?

5 or 10 years is a long time in technology. And the pace of change seems to be increasing (or maybe that’s just a sign of my age).

I asked Microsoft Copilot (which uses ChatGPT) what the big innovations of 2013 were and it said: Google Glass (now confined to the history books); Oculus Rift (now part of Meta’s plans for virtual reality); and Bitcoin (another controversial technology whose fans say it will be world-changing and critics say has no place in society). For 2018 it was duelling neural networks (AI), babel-fish earbuds (AI) and zero-carbon natural gas.

The fact that none of these are commonplace today tells me that predicting the future is hard!

If pushed, and talking specifically about innovations where AI plays a part, I’d say that we’ll be looking at a world where:

  • Our interactions with technology will have changed – we will have more (and better) spoken interaction with our devices (think Apple Siri or Amazon Alexa, but much improved). Intelligent assistants will be everywhere. And the results they produce will be integrated with Augmented Reality (AR) to create new computing experiences that improve productivity in the workplace – be that an industrial setting or an office.
  • The computing services that are provided by companies like Node4 but also by the hyperscalers (Microsoft, Amazon, et al.) will have evolved. AI will change the demands of our clients and that will be reflected in the ways that we provide compute, storage and connectivity services. We’ll need new processors (moving beyond CPUs, GPUs, TPUs, to the next AI-driven approach), new approaches to power and cooling, and new approaches to data connectivity (hollow-core fibre, satellite communications, etc.).
  • People will still struggle to come to grips with new computing paradigms. We’ll need to invest more and more effort into helping people make good use of the technologies that are available to them. Crucially, we’ll also need to help current and future generations develop critical thinking skills to consider whether the content they see is real or computer-generated.

Anything else to add?

Well, a lot has been made of ChatGPT and its use in an educational context. One question is “should it be banned?”

In the past, calculators (and even slide rules – though they were before my time!) were banned before they became accepted tools, even in the exam hall. No-one would say today “you can’t use Google to search for information”. That’s the viewpoint we need to get to with generative AI. ChatGPT is just a tool and, at some point, instead of banning these tools, educational establishments (and broader society) will issue guidelines for their acceptable use.

Postscript

Bonus points for working out which part of this was original content by yours truly, and which had some AI assistance…

Featured image: generated via Microsoft Copilot, powered by DALL-E 3.

What did we learn at Microsoft Ignite 2023?


Right now, there’s a whole load of journalists and influencers writing about what Microsoft announced at Ignite. I’m not a journalist, and Microsoft has long since stopped considering me as an influencer. Even so, I’m going to take a look at the key messages. Not the technology announcements – for those there’s the Microsoft Ignite 2023 Book of News – but the real things IT Leaders need to take away from this event.

Microsoft’s investment in OpenAI

It’s all about AI. I know, you’re tired of the hype, suffering from AI fatigue, but for Microsoft, it really is about AI. And if you were unconvinced just how important AI is to Microsoft’s strategy, their action to snap up key members of staff from an imploding OpenAI a week later should be all you need to see:

Tortoise Media’s Barney Macintyre (@barneymac) summed it up brilliantly when he said that

“Satya Nadella, chief executive of Microsoft, has played a blinder. Altman’s firing raised the risk that he would lose a key ally at a company into which Microsoft has invested $13 billion. After it became clear the board wouldn’t accept his reinstatement, Nadella offered jobs to Altman, Brockman and other loyalist researchers thinking about leaving.

The upshot: a new AI lab, filled with talent and wholly owned by Microsoft – without the bossy board. An $86 billion subsidiary for a $13 billion investment.”

But the soap opera continued and, by the middle of the week, Altman was back at OpenAI, apparently with the blessing of Microsoft!

If nothing else, this whole saga should reinforce just how important OpenAI is to Microsoft.

The age of the Copilot

Copilot is Microsoft’s brand for a set of assistive technologies that will sit alongside applications and provide an agent experience, built on ChatGPT, DALL-E and other models. Copilots are going to be everywhere. So much so that there is a “stack” for Copilot, and Satya Nadella described Microsoft as “a Copilot company”.

That stack consists of:

  • The AI infrastructure in Azure – all Copilots are built on Azure AI.
  • Foundation models from OpenAI, including the Azure OpenAI Service to provide access in a protected manner but also new OpenAI models, fine-tuning, hosted APIs, and an open source model catalogue – including Models as a Service in Azure.
  • Your data – and Microsoft is pushing Fabric as all the data management tools in one SaaS experience, with onwards flow to Microsoft 365 for improved decision-making, Purview for data governance, and Copilots to assist. One place to unify, prepare and model data (for AI to act upon).
  • Applications, with tools like Microsoft Teams becoming more than just communication and collaboration but a “multi-player canvas for business processes”.
  • A new Copilot Studio to extend and customise Microsoft Copilot, with 1100 prebuilt plugins and connectors for every Azure data service and many common enterprise data platforms.
  • All wrapped with a set of AI safety and security measures – both in the platform (model and safety system) and in application (metaprompts, grounding and user experience).

In addition to this, Bing Chat is now re-branded as Copilot – with an enterprise version at no additional cost to eligible Entra ID users. On LinkedIn this week, one Microsoft exec posted that “Copilot is going to be the new UI for work”.

In short, Copilots will be everywhere.

Azure as the world’s computer

Of course, other cloud platforms exist, but I’m writing about Microsoft here. So what did they announce that makes Azure even more powerful and suited to running these new AI workloads?

  • Re-affirming the commitment to zero carbon power sources and then becoming carbon negative.
  • Manufacturing their own hollow-core fibre to drive up speeds.
  • Azure Boost (offloading server virtualisation processes from the hypervisor to hardware).
  • Taking the innovation from Intel and AMD but also introducing new Microsoft silicon: Azure Cobalt (ARM-based CPU series) and Azure Maia (AI accelerator in the form of an LLM training and inference chip).
  • More AI models and APIs. New tooling (Azure AI Studio).
  • Improvements in the data layer with enhancements to Microsoft Fabric. The “Microsoft Intelligent Data Platform” now has 4 tenets: databases; analytics; AI; and governance.
  • Extending Copilot across every role and function (as I briefly discussed in the previous section).

In summary, and looking forward

Microsoft is powering ahead on the back of its AI investments. And, as tired of the hype as we may all be, it would be foolish to ignore it. Copilots look to be the next generation of assistive technology that will help drive productivity. Just as robots have become commonplace on production lines and impacted “blue collar” roles, AI is the productivity enhancement that will impact “white collar” jobs.

In time we’ll see AI and mixed reality coming together to make sense of our intent and the world around us. Voice, gestures, and where we look become new inputs – the world becomes our prompt and interface.

Featured images: screenshots from the Microsoft Ignite keynote stream, under fair use for copyright purposes.

Learning to be intelligent about artificial intelligence


This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that: almost every week brings a barrage of news about AI. And, depending which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. And this summit is already controversial, with some – including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment – questioning the diversity of the attendees.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic that I covered in a recent technology advice note that I wrote.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive segment of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump, written by a consumer journalist, who seems to be asserting that generative AI is better at predicting health outcomes than expert system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear: generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on the previous words – basically, on probability. And sometimes they get it wrong.
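To illustrate what that means in practice, here’s a toy sketch in Python. The tiny vocabulary and the probabilities are entirely invented for illustration – this is not how OpenAI implements its models, just the next-word idea in miniature – but it shows that output is sampled by likelihood, not checked for truth:

```python
import random

# A toy "next word" model: invented probabilities for illustration only.
# Note that a plausible-looking nonsense word can still carry probability mass.
next_word_probs = {
    ("a", "meeting"): {"took": 0.7, "was": 0.2, "tanke": 0.1},
    ("meeting", "took"): {"place": 1.0},
}

def generate(context, max_words=5):
    words = list(context)
    for _ in range(max_words):
        dist = next_word_probs.get(tuple(words[-2:]))
        if not dist:
            break
        choices, weights = zip(*dist.items())
        # Sample the next word by probability - no check for truth or accuracy
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["a", "meeting"]))
```

A real large language model does this over tens of thousands of tokens with a learned, context-dependent distribution, but the principle is the same: the most statistically plausible continuation wins, whether or not it is correct.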

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness on gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, while they are transformative, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently, asking the right questions and understanding its strengths and limitations. We need to be smart about using AI, recognizing both its potential and its constraints. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skillfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.

Some thoughts on Microsoft Windows Extended Security Updates…


Technology moves quickly. And we’re all used to keeping operating systems on current (or n-1) releases, with known support lifecycles and planned upgrades. We are, aren’t we? And every business application, whether COTS (commercial off-the-shelf) or bespoke, has an owner, who maintains a road map and makes sure that it’s not going to become the next item of technical debt. Surely?

Unfortunately, these things are not always as common as they should be. A lot comes down to the perception of IT – is it a cost centre or does it add value to the business?

Software Assurance and Azure Hybrid Benefit

Microsoft has a scheme for volume licensing customers called Software Assurance. One of the benefits of this scheme is the ability to keep running on the latest versions of software. Other vendors have similar offers. But they all come at a cost.

When planning a move to the cloud, Software Assurance is the key to unlocking other benefits too. Azure Hybrid Benefit is a licensing offer for Windows Server and SQL Server that provides a degree of portability between cloud and on-premises environments. Effectively, the cloud costs are reduced because the on-prem licenses are released and allocated to new cloud resources.

But what if you don’t have Software Assurance? As a Windows operating system comes to the end of its support lifecycle, how are you going to remain compliant when there are no longer any updates available?

End of support for Windows Server 2012/2012 R2

In case you missed it, Windows Server 2012 and Windows Server 2012 R2 reached the end of extended support on October 10, 2023. (Mainstream support ended five years previously.) That means that these products will no longer receive security updates, non-security updates, bug fixes, technical support, or online technical content updates.

Microsoft’s advice is:

“If you cannot upgrade to the next version, you will need to use Extended Security Updates (ESUs) for up to three years. ESUs are available for free in Azure or need to be purchased for on-premises deployments.”

Extended Security Updates

Extended Security Updates are a safety net – even Microsoft describes the ESU programme as:

“a last resort option for customers who need to run certain legacy Microsoft products past the end of support”.

The ESU scheme:

“includes Critical and/or Important security updates for a maximum of three years after the product’s End of Extended Support date. Extended Security Updates will be distributed if and when available.

ESUs do not include new features, customer-requested non-security updates, or design change requests.”

They’re just a way to maintain support whilst you make plans to get off that legacy operating system – which by now will be at least 10 years old.

If your organisation is considering ESUs, the real question to answer is: what are the sticking points that are keeping you from moving away from the legacy operating system? For example:

  • Is it because there are applications that won’t run on a later operating system? Maybe moving to Azure (or to a hybrid arrangement with Azure Arc) will provide some flexibility to benefit from ESUs at no extra cost whilst the app is modernised? (Windows Server and SQL Server ESUs are automatically delivered to Azure VMs if they’re configured to receive updates).
  • Is it a budget concern? In this case, ESUs are unlikely to be a cost-efficient approach. Maybe there’s an alternative – again through cloud transformation, software financing, or perhaps a cloud-to-edge platform.
  • Is it a cash-flow concern? Leasing may be an answer.

There may be other reasons, but doing nothing and automatically accepting the risk is an option that a lot of companies choose… the art (of consulting) is to help them to see that there are risks in doing nothing too.

Featured image by 51581 from Pixabay

The magic of the Tour de France


It’s July. That means one thing to me. The Tour de France! The greatest cycle race in the world – and three weeks of watching the highlights each evening!

It’s no secret that I enjoy cycling – and that I have passed that on to at least one of my children. It’s also fair to say that he shows considerably more talent and physical ability than me.

I started watching the Tour de France (and the Vuelta a España) in around 2011 or 2012. I’m not sure which, but 2012 was the year when Team GB and Team Sky’s success started to switch Britons on to cycling, and I think it was before then. I remember the discovery that it was more than just a race to see who is fastest around a course. There are actually several races happening at once. Then there are the team dynamics – who is working with whom to achieve what outcome. It’s a team sport and an individual sport, all rolled up in one. And the three “Grand Tours” (Giro d’Italia, Tour de France and Vuelta a España) are huge spectacles, each with 21 stages over three weeks…

In the Tour de France there are several competitions:

  • the overall leader in the general classification (shortest cumulative time since the start of the event) is awarded the maillot jaune (yellow jersey), which he wears the next day.
  • the leading young rider (under 26) is awarded the maillot blanc (white jersey).
  • the rider with the most points gained for mountain-top positions (based on the difficulty of the climbs) wears the red and white polka dot jersey.
  • the rider with the most points in the points competition (intermediate sprints, finish positions, etc.) wears the maillot vert (green jersey).

The other grand tours have similar systems but the jersey colours vary.

There are also prizes for the most combative rider, and a team classification. Put those things together and the dynamics of the race are many and varied.

I watch the Tour de France on ITV – mostly because I like the production style of their coverage. In previous years, the highlights programme has featured quiz questions at the start and end of each advertising segment but this year it’s little facts about the race and the sport – which is steeped in history. I’ve collected some and posted them here, along with a few extras I added myself.

  • Autobus: A group of riders (typically non-climbers) who ride together on mountain stages, aiming to finish within the time limit.
  • Baroudeur: A rider who attacks the race from the start in order to show off their sponsor and try their luck in winning the stage.
  • Barrage: Race officials impede the progress of team cars when they could affect the outcome of the race.
  • Bonk: A sudden loss of energy, caused by depletion of glycogen stores in the liver and muscles. Usually caused by a lack of proper fuelling.
  • Blocking: When riders of leading teams ride the width of the road to control the peloton’s speed, to ensure that no more riders join the breakaway.
  • Breakaway: A group of riders who have managed to ride off the front of the race, leaving a clear gap.
  • Broom wagon: A support vehicle following the race, which may pick up riders unable to continue. First introduced in 1910.
  • Bunny hop: To cause one’s bicycle to become airborne momentarily. Usually performed to avoid pavements.
  • Cadence: The rate at which a cyclist pedals (in revolutions per minute). High cadence is typical in climbers.
  • Chasse patate: French for “hunting potatoes”. A rider caught between breakaway and peloton, pedalling furiously but making little headway.
  • Circle of Death: A Pyrenean stage including the Peyresourde, Aspin, Tourmalet and Aubisque. Dubbed the “Circle of Death” in 1910.
  • Coup de chacal: The “Jackal Trick”. A surprise attack in the last few kilometres to detach from the peloton and win the race.
  • Danseuse: (French: danser – to dance.) Riding out of the saddle, standing up, and rocking side-to-side for leverage.
  • Derailleur: The gear-shifting device which is controlled with a lever on the handlebars or frame. First permitted at the Tour de France in 1937.
  • Domestique: A rider whose job is to support other riders in their team, typically carrying water (literally “servant” in French).
  • Dossard: Race number attached to the back of a competitor’s jersey. If not visible then fines will ensue.
  • Drafting: To ride close behind another rider or vehicle, using their slipstream to reduce wind resistance and the effort required.
  • Echelon: A diagonal, staggered line of riders in single file. An echelon is formed to save energy when riding in a strong crosswind. The Belgian teams are considered the masters of riding in an echelon.
  • Feed zone: A designated area for soigneurs and other helpers to hand out food and water to riders.
  • Flamme rouge: The red flag suspended over the road to confirm that the finish line is one kilometre away.
  • Full gas: Riding as hard as possible, which can leave a rider needing recovery, and vulnerable to attack.
  • Hors catégorie: A term applied to the hardest climbs on the Tour. A climb that is literally beyond category.
  • Hors délai: Literally “out of time” – a rider finishing outside the time limit is eliminated from the race. Typically occurs on a mountain stage.
  • King of the mountains: The leader of the mountain classification. First sponsored in 1975 by Chocolat Poulain, whose chocolate bars were covered in a polka dot wrapper.
  • Lanterne Rouge: French for “red lantern”, as found at the end of a railway train, and the name given to the rider placed last in a race.
  • Magic spanner: The scenario where a mechanic appears to be adjusting a rider’s bike from the support car. The reality is that the rider is usually using the team car to rest or get back to the peloton.
  • Maillot jaune: Yellow jersey. First introduced as the colour of the leader’s jersey in 1919. Yellow was the colour of L’Auto newspaper.
  • Musette: French for a farm horse’s nosebag. A small cotton shoulder bag containing food and drink, given to riders in a feed zone.
  • Muur: Dutch for wall. A short, steep climb. The Mur de Huy is one of the more famous examples, last used in the Tour in 2015.
  • Palmares: The list of races a rider has won. (French, meaning list of achievements.)
  • Panache: Style or courage. Displayed by breaking away, remounting after a crash or riding whilst suffering injuries.
  • Parcours: French for the profile of the race or stage route.
  • Pavé: Road made of cobblestones. Significantly cobbled stages have featured 6 times in the Tour de France since 2020.
  • Pedalling squares: Riding with such fatigue that the rider is unable to maintain an efficient pedalling form that is strong and smooth.
  • Peloton: A group of cyclists coupled together through the mutual energy benefits of drafting, whereby cyclists follow others in zones of reduced air resistance.
  • Pull: To take a “pull” is to ride at the front of the peloton or group. Usually done in short bursts, it requires immense power and endurance.
  • Road rash: The cuts, scratches and bruises that riders pick up after a fall or crash.
  • Rouleur: A cyclist who is comfortable riding on both flat and rolling terrain. A powerful rider who can drive the pace along for hours.
  • Soigneur: The French term for “healer”, usually specialising in giving the riders post-race massages. A soigneur will also look after the riders’ non-racing needs.
  • Souplesse: The art of perfect pedalling, giving the rider a smooth and efficient style on the bike.

I’m not suggesting that readers of this blog will suddenly become cycling fans but maybe you’ll understand a little more about how it works when, later this weekend, the Tour de France culminates in a sprint on Paris’ Avenue des Champs-Élysées and the overall prizes are awarded. And, if nothing else, enjoy the scenery along the rest of the route to Paris!

Featured image: author’s own – a still from the video taken when I was a Tour de France marshal in 2014!

Password complexity in the 1940s


Over the last couple of weeks I’ve been fortunate enough to have two demonstrations of Enigma machines. For those who are not familiar with these marvellous mechanical computers, they were used to encrypt communications – most notably by German forces during World War 2.

The first of the demonstrations was at Milton Keynes Geek Night, where PJ Evans (@MrPJEvans) gave an entertaining talk on the original Milton Keynes Geeks.

Then, earlier this week, I was at Bletchley Park for Node4’s Policing First event, which wrapped up with an Enigma demonstration from Phil Simons.

The two sessions were very different in their delivery. PJ’s talk used Raspberry Pi and web-based emulators, along with slides and a demonstration with a ball of wool. Phil was able to show us an actual Enigma machine. What struck me, though, was the weakness that ultimately led to Bletchley Park cracking wartime German encryption codes. It wasn’t the encryption itself, but the way human operators used it.

Downfall

The Enigma machine was originally invented for encrypted communications in the financial services sector. By the time the German military was using it in World War 2, the encryption was very strong.

Despite having just 26 characters, each one was encoded as an electrical signal which passed through three rotors (chosen from a set of five and changed daily), with different start positions and incrementing on each use, plus a plugboard of ten electrical circuits that further increased the complexity.

There’s a good description of how the Enigma machine works on Brilliant. To cut a long story short, an Enigma machine can be set up in 158,962,555,217,826,360,000 ways. Brute force attacks are just not credible. Especially when the setup changes every day and each military network has a different encryption setup.
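As a quick sanity check on that headline figure, here’s a short Python sketch of the arithmetic behind it, using the commonly quoted setup rules (three rotors chosen and ordered from a set of five, 26 starting positions per rotor, and ten plugboard cables pairing up 20 of the 26 letters):

```python
from math import factorial

rotor_orders = 5 * 4 * 3      # choose and order 3 rotors from a set of 5
rotor_positions = 26 ** 3     # each rotor can start at any of 26 letters

# 10 plugboard cables pair up 20 of the 26 letters:
# 26! / (6! * 10! * 2^10) possible pairings
plugboard_settings = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_positions * plugboard_settings
print(f"{total:,}")           # 158,962,555,217,826,360,000
```

The key space really is that large – which is exactly why the way in came from operator behaviour rather than raw computing power.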

But there were humans involved:

  • Code books were needed so that the sending and receiving stations set their machines up identically each day.
  • Young soldiers on the front line took short-cuts, like re-using rotor start positions. They would spell out things like BER or PAR (for their home city, where they were stationed, a girlfriend’s name, etc.).
  • Some networks issued guidance that all 26 letters needed to be used for a rotor start position over each 26-day period. This had the unintended consequence that the drive for perceived variety made the letter being used predictable: it actually reduced the combinations, as it couldn’t be one of the letters already used earlier in the cycle.
  • Then there was the flaw that an Enigma machine’s algorithm was designed to take one letter and output another. Input of A would never result in output of A, for example.
  • And there were common phrases to look for in the messages to test possible encryption combinations – like WETTERBERICHT (weather report).

All of these clues helped the code-breakers at Bletchley Park narrow down the combinations. That gave them the head start they needed to try and brute force the encryption on a message.

Why is this relevant today?

By now, you’re probably thinking “that’s a great history lesson Mark, but why is it relevant today?”

Well, we have the same issues in modern IT security. We rely on people following policies and processes. And people look for shortcuts.

Take password complexity as an example. The UK National Cyber Security Centre (NCSC) specifically advises against enforcing password complexity requirements. Users will work around the requirements with predictable outcomes, and that actually reduces security. Just like with the “use all 26 letters in 26 days” guidance I cited in my Enigma history lesson above.

And yet, only last month, I was advising a client whose CIO peers maintain that password complexity should be part of the approach.

One more thing… the Germans tried to crack Allied encryption too. They gave up after a while because it was difficult – they assumed if they couldn’t crack ours then we couldn’t crack theirs. But, whilst German command was distributed, the Allies set up what we would now call a “centre of excellence” in Bletchley Park. And that helped to bring together some of our greatest minds, along with several thousand support staff!

Postscript

After I started to write this post, I was multitasking on a Teams call. I should have concentrated on just one thing. Instead, I went to open a DocuSign link from the company HR department and fell foul of a phishing simulation exercise. I’m normally pretty good at spotting these things but this time I was distracted. As a result, I clicked the (potentially credible) link without checking it. If you want an illustration of how fallible humans are, that’s one right there!

Featured image: author’s own.

Breaking down and planning big tasks (e.g. for exam revision)


In common with many young people in households across England and Wales, for the last few weeks, both my sons have been taking their end-of-school exams (Scottish schoolchildren finished theirs a few weeks ago).

My youngest son had more than 20 exams for his GCSEs; my eldest had eight for his A-Levels. On the lead up to these (and between them), there was a lot of revision to be done.

Creating a plan

Back when they were sitting mock/trial exams, we noticed that schools don’t teach young people how to plan. At least not based on my experience of two state secondary schools in Milton Keynes and Northampton. They might provide a list of topics, or even a per-subject “revision timetable” but my wife realised fairly early on that our sons could just see some dates, and a massive task ahead of them.

So, we sat them down, and helped them to work things through. Using Excel of course (other project management tools are available, but probably overkill)!

  1. First, look how many weeks there are until the exams. The days are your columns. Use borders/shading to see the weeks and weekends.
  2. Then, look at the subjects you need to cover. Those “swimlanes” are the rows. Break each swimlane into 3 rows: daytime; after school; evening.
  3. Then block out the time for actual school, part-time work, sports activities, holidays, etc.
  4. You can now see the amount of time that’s available for revision/study and populate each spare block with one or more topics to cover within each subject row.

I expected some push-back, but was amazed how positively they took on the advice (to the extent that they seemed to work it out for themselves and created their own plans when it came to the final exams).

It’s simple project management!

What we taught them to do was effectively basic project management: using a Gantt chart to illustrate the schedule for completing a set of tasks, along with the resource availability and constraints.

This is a life skill but also a business skill. It amazes me that this isn’t taught in schools (even pretty good ones). Or perhaps it is, but it’s lost in the teenage air of nonchalance!

Featured image by Eric Rothermel from Unsplash.

This is what happened when I asked ChatGPT to write a blog post…


In recent weeks, I’ve been watching the buzz of activity about large language models and generative AI. So I decided to see what would happen if I asked OpenAI’s ChatGPT to write a blog post for me. I gave it the following prompt:

Write a blog post of up to 800 words that discusses the use of generative AI tools like ChatGPT to automate human activities Use writing a blog post as an example. Include some commentary about the potential social impacts of using AI to generate content. Also discuss whether there are any concerns that need to be considered - such as whether AI-generated content is accurate.

“Generative AI tools like ChatGPT have been making waves in the field of automation and artificial intelligence. These tools have the ability to generate human-like text, helping to automate certain human activities such as writing, customer service, and more. While the potential benefits of these tools are clear, there are also concerns about their impact on society and the accuracy of the content they produce.

One of the most significant benefits of generative AI tools like ChatGPT is their ability to automate repetitive or time-consuming tasks. For example, a blogger who regularly posts new content may use ChatGPT to generate article outlines, saving time and effort. Customer service teams can use generative AI tools to provide quick and accurate responses to frequently asked questions, improving the overall customer experience.

Another benefit of generative AI tools is their ability to create content in multiple languages, allowing businesses to expand their reach into new markets. These tools can also be used to generate personalized content for individuals, such as email newsletters or social media posts, which can improve engagement and lead to higher conversions.

However, the use of generative AI tools also raises concerns about the potential social impacts of using AI to generate content. One concern is that these tools could lead to a loss of jobs, particularly in industries where writing and content creation are essential. While automation has always been a part of technological progress, there is a risk that AI could replace more jobs than it creates.

Another concern is that AI-generated content could be used to spread misinformation and propaganda. With the ability to create convincing, human-like text, there is a risk that malicious actors could use these tools to create fake news stories or manipulate public opinion. This could have serious consequences for democracy and social stability.

Accuracy is another concern when it comes to generative AI tools. While these tools are designed to create human-like text, they may not always produce accurate or reliable information. This could be particularly problematic in fields like journalism or science, where accuracy is essential. As AI tools become more sophisticated, it will be important to ensure that the content they produce is factually correct and trustworthy.

Despite these concerns, generative AI tools like ChatGPT have the potential to revolutionize the way we create and consume content. As with any new technology, it is important to approach these tools with caution and to carefully consider their potential impact on society. However, if used responsibly, these tools could improve efficiency, accuracy, and creativity in a variety of industries.

In the case of writing blog posts, for example, using ChatGPT to generate outlines or ideas could be a useful starting point, but it is important to remember that these tools are not a substitute for human creativity and expertise. While AI-generated content may be faster and more efficient, it may lack the nuance and personality that comes from a human writer.

To ensure that AI-generated content is accurate and reliable, it is important to use reliable sources of information and to fact-check any content that is produced. This is particularly important when it comes to sensitive or controversial topics, where misinformation can have serious consequences.

Overall, the use of generative AI tools like ChatGPT to automate human activities is a promising development that has the potential to improve efficiency and creativity in a variety of fields. However, it is important to approach these tools with caution and to carefully consider their potential impact on society. With responsible use and careful oversight, generative AI tools could become an important part of our technological landscape, helping us to achieve more than we ever thought possible.”

Is it good enough?

As it happens, the resulting text is quite believable, but it’s not actually a very good post. A different AI, the one from Yoast SEO that helps with the readability of my content, highlighted that there aren’t enough subheadings. Also, more than half the sentences are a bit long (more than 20 words). And it’s using too much passive voice (I do that too, so I guess it could be considered authentic!).

Garbage In, Garbage Out (GIGO)

I tried asking ChatGPT to improve its writing, based on the points above, but I didn’t get far:

Re-write the above with shorter sentences to make it easier to read, and use sub-headings.

And then:

Try again, and this time use try to use active voice instead of passive voice... and make the subheadings a bit snappier!

It made some changes, but really the way I see tools like this working is as a very quick first draft for review and update by a human writer. We just have to be careful not to let those who don’t understand or who don’t value writing say “just get an AI to write it in a few seconds”.

Featured image by Alexandra_Koch from Pixabay.

Timeless technology


In recent days, I’ve been thinking about tech that has become ubiquitous. Like the IBM Personal Computer – which is now well over 40 years old and I still use a derivative of it every day. But then I started to think about tech that’s no longer in daily use but yet which still seems modern and futuristic…

…like Concorde

Concorde may not have become as world-dominant as was originally intended but, for a while, the concept of supersonic flying was the height (pun absolutely intended) of luxury air travel. Sadly, changing market demands, soaring costs, environmental impacts, and the Paris crash of flight AF4590 in 2000 sealed its fate. The plane’s operators (British Airways and Air France) agreed to end commercial flights of the jet from 2003.

The elegant lines and delta wings still look as great today as they did in 1969. And supersonic commercial flights may even be returning to the skies by the end of the decade.

…the British Rail Advanced Passenger Train

British Rail’s Advanced Passenger Train (Experimental) – or APT-E – of 1972 is like a silver dart. Just as you don’t have to be a plane-spotter to appreciate Concorde, the APT-E’s sleek lines scream “fast”, and in 1975 it set a new British speed record of just over 150 miles an hour.

BR APT-E in 1972

The APT project was troublesome but the technology it developed lived on in other forms. The idea of a High Speed Train (HST) developed into the famous Inter-City 125. That was introduced in 1976 and is only now being withdrawn from service. Meanwhile, tilting train technology is used for high speed trains on traditional lines – most notably the Pendolinos on the UK’s West Coast Main Line.

…and Oxygène

Last night, I was relaxing by idly flicking through YouTube recommendations when it surfaced an amazing view of the early-to-mid-1970s electronic instruments that Jean-Michel Jarre used to create his breakthrough album: Oxygène. And, as I found earlier this evening, it’s still a great soundtrack to go for a run. Listening on my earphones made me feel like I was in a science fiction film!

Modern electronic artists may use different synthesizers and keyboards but the technology Jean-Michel Jarre used smashed down doors. Oxygène was initially rejected by record companies and, in this Guardian article, Jarre says:

“Oxygène was initially rejected by record company after record company. They all said: ‘You have no singles, no drummer, no singer, the tracks last 10 minutes and it’s French!’ Even my mother said: ‘Why did you name your album after a gas and put a skull on the cover?'”

Jean-Michel Jarre

Nowadays, electronic music – often instrumental – is huge. After I played the whole Oxygène album on my run, Spotify followed up with yet more great tracks. Visage (Fade to Grey), Moby (Go), OMD (Joan of Arc)… let’s see where it goes next!

What other timeless tech is out there?

I’ve written about three technologies that are around 50 years old now. Each one has lived on in a new form whilst remaining a timeless classic. What else have I missed? And what technology from today will we look back on so favourably in the future?

Featured images: British Airways Concorde G-BOAC by Eduard Marmet CC BY-SA 3.0 and The British Rail APT-E in the RTC sidings between tests in 1972 by Dave Coxon Public Domain.

Formatting cells in Excel if they match a value in another column


A few months ago, I wrote about using Excel to sort a table of names that had been created in Word. And last year, I mentioned a dynamic report I’d created with a colleague, stringing together a few formulae, some data validation and some conditionally formatted cells. Well, today, that same colleague came to me and asked to borrow my Excel skills again.

He had a list of potential sales opportunities and wanted to highlight any cells in the column of client names that matched a list of primary accounts on another sheet. It sounded “do-able”, with some conditional formatting and a formula.

I like a challenge – and it’s as close to any development work as I get these days – so I got stuck in.

It seems the function I needed was =MATCH(), although now I’m wondering if =VLOOKUP() might have been more appropriate.

The actual formula used in my conditional formatting rule (applied to data in column E) was this:

=MATCH($E2,'Primary Account List'!$A$4:$A$34,0)

Basically, it’s saying: for the value in this cell, have a look at the data in the Primary Account List sheet, cells A4-A34, and if there’s a match, apply the formatting (bold, orange). I put $E1 in at first and the lookup was one cell out (row 1 contains the column headers). E2 is the start of my list, but the same formula is used in the conditional formatting rule that covers all the cells in the column.
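If you’re recreating this, a =COUNTIF()-based rule does the same job and some may find it easier to read. This is just an alternative I’d consider, not what I actually used in the workbook:

=COUNTIF('Primary Account List'!$A$4:$A$34,$E2)>0

It returns TRUE whenever the client name in column E appears anywhere in the primary account list, which is all a conditional formatting rule needs.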

Featured image: author’s own.