Monthly retrospective: May 2025

I’ve been struggling to post retrospectives this year – they are pretty time-consuming to write. But, you may have noticed the volume of content on the blog increasing lately. That’s because I finally have a workflow with ChatGPT prompts that help me draft content quickly, in my own style. (I even subscribe to ChatGPT now, and regular readers will know how I try to keep my subscription count down.) Don’t worry – it’s still human-edited (and there are parts of the web that ChatGPT can’t read – like my LinkedIn, Instagram and even parts of this blog) so it should still be authentic. It’s just less time-consuming to write – and hopefully better for you to read.

On the blog…

Home Assistant tinkering (again)

I’ve been continuing to fiddle with my smart home setup. This month’s project was replacing the ageing (and now unsupported) Volvo On Call integration in Home Assistant with the much better maintained HA Volvo Cars HACS integration. It works brilliantly – once you’ve jumped through the hoops to register for an API key via Volvo’s developer portal.

And no, that doesn’t mean I can now summon my car like KITT in Knight Rider – but I can check I locked it up and warm it up remotely. Which is almost as good. (As an aside, I saw KITT last month at the DTX conference in Manchester.)

Software-defined vehicles

On the subject of cars, I’ve been reflecting on how much modern cars depend on software – regardless of whether they’re petrol, diesel or electric. The EV vs. ICE debate often centres on simplicity and mechanics (fewer moving parts in an EV), but from my experience, the real pain points lie in the digital layer.

Take my own car (a Volvo V60, 2019 model year). Mechanically it’s fine and it’s an absolute luxury compared with the older cars that my wife and sons drive, but I’ve seen:

  • The digital dashboard reboot mid-drive
  • Apple CarPlay refusing to connect unless I “reboot” the vehicle
  • Road sign recognition systems confidently misreading speed limits

Right now, it’s back at the body shop (at their cost, thankfully) for corrosion issues on a supposedly premium marque. My next car will likely be electric – but it won’t be the drivetrain that convinces me. It’ll be the software experience. Or, more realistically, the lack of bad software. Though, based on Jonathan Phillips’ experience, some new car software ships with alarming typos in the UI – which suggests a lack of testing…

Thinking about the impact of generative AI

This update isn’t meant to be about AI – but it seems it is – because it’s become such a big part of my digital life now. And, increasingly, it’s something I spend more time discussing with my clients.

AI isn’t new. We’ve had robotic process automation (RPA), machine learning, data science and advanced analytics for years. I even studied neural networks at Poly’ in the early 1990s. But it’s generative AI that’s caught everyone’s imagination – and their budgets.

In Episode 239 of the WB-40 podcast (AI Leadership), I listened to Matt Cockbill talk about how it’s prompting a useful shift in how we think about technology. Less about “use cases” and more about “value cases” – how tech can improve outcomes, streamline services, and actually help achieve what the organisation set out to do.

The rush to digitise during COVID saw huge amounts of spending – enabling remote working or entrenching what was already there (hello, VDI). But now it feels like the purse strings are tightening, and some of that “why are we doing this again?” thinking is creeping back in. Just buying licences and rolling out tools is easy. Changing the way people work and deliver value? That’s the real work.

Meal planning… with a side of AI

I’ve also been experimenting with creating an AI-powered food coach to help me figure out what to eat, plan ahead, and avoid living off chocolate Hobnobs and toasted pitta. Still early days – but the idea of using an assistant to help nudge me towards healthier, simpler food is growing on me.

Reading: The Midnight Library

I don’t read much fiction – I’m more likely to be found trawling through a magazine or scrolling on my phone – but Matt Haig’s “The Midnight Library” really got me. OK, so technically, I didn’t read it – it was an impulse purchase to use some credits before cancelling my Audible account – but it was a great listen. Beautifully read by Carey Mulligan, it’s one of those rare books that manages to be both dark and uplifting. Some reviews suggest that not everyone feels the same way – and my reading it at a time of grief and loss may have had an impact – but I found it to be one of my best reads in a long time.

Without spoiling anything, the idea of a liminal space between life and death – where you can explore the infinite versions of yourself – is quietly brilliant. Highly recommended. So much so that I bought another copy (dead tree edition) for my wife.

On LinkedIn this month…

It’s been a lively month over on LinkedIn, with my posts ranging from AI hype to the quirks of Gen-Z slang (and a fair dose of Node4 promotion). These are just a few of the highlights – follow me to get the full experience:

  • Jony and Sam’s mysterious new venture
    I waded into the announcement from Jony Ive and Sam Altman with, let’s say, a healthy dose of scepticism. A $6.5bn “something” was teased with a bland video and a promo image that felt more 80s album cover than product launch. It may be big. But right now? Vapourware.
  • Is the em dash trolling us?
    I chipped in on the debate about AI-written content and the apparent overuse of em dashes (—) – often flagged as an “AI tell”, usually by people who a) don’t understand English grammar or b) don’t know where LLMs learned to write. (I am aware that I incorrectly use en dashes in these posts, because people seem to find them less “offensive”.) But what if the em dash is trolling us?
  • Skibidi-bibidi-what-now?
    One of the lighter moments came with a post about Gen-Z/Gen-Alpha slang. As a Gen-Xer with young adult kids, I found a “translator” of sorts – and it triggered a few conversations about how language evolves. No promises I’ll be dropping “rizz” into meetings just yet. Have a look.
  • Politeness and prompting
    Following a pub chat with Phil Kermeen, I shared a few thoughts on whether being polite to AI makes a difference. TL;DR: it does. Here’s the post.
  • Mid-market momentum
    Finally, there have been lots of posts around the Node4 2025 Mid-Market Report. It was a big effort from a lot of people, including me, and I’m really proud of what we’ve produced. It’s packed with insights, based on bespoke research with over 600 IT and business leaders.

Photos

A few snaps from my Insta’ feed…

For more updates…

That’s all for now. I probably missed a few things, but it’s a decent summary of what I’ve been up to at home and at work. I no longer use X, but follow me on LinkedIn (professional), Instagram (visual) and this blog for more updates – depending on which content you like best. Maybe even all three!

Next month…

A trip to Hamburg (to the world’s largest model railway); ramping up the work on Node4’s future vision; and hopefully I’ll fill in some of the gaps between January and May’s retrospectives!

Featured image by ChatGPT.

Who gets paid when the machines take over?

Yesterday evening, I was at the Bletchley AI User Group in Milton Keynes. One of the talks was from Stephanie Stasey (/in/missai) (aka Miss AI), titled “Gen AI vs. white collar workers and trad wives – building a robot to make my bed”.

It was delivered as a monologue – which sounds negative, but really isn’t. In fact, it was engaging, sharp, and packed with food for thought. Stephanie brought a fresh perspective to a topic we’re all thinking about: how AI is reshaping both the world of work and the way we live at home.

The labour that goes unnoticed (but not undone)

One part of the talk touched on “trad wives” – not a term I was especially familiar with, but the theme resonated.

If you’d asked my wife and me in our 20s how we’d divide up household tasks, we might have offered up a fair and balanced plan. But real life doesn’t always match the theory.

These days, we both work part-time – partly because unpaid labour (childcare, cooking, washing, cleaning, all the life admin) still needs doing. And there don’t seem to be enough hours when the laptop is closed.

The system isn’t broken – it’s working exactly as designed

The point I’ve been turning over for a while is this: it feels like we’re on the edge of something big.

We could be on the brink of a fundamental shift in how we think about work – if those in power wanted to make radical changes. I’ll avoid a full political detour, though I’m disheartened by the rise of the right and how often “ordinary people” are reminded of their place. (My youngest son calls me a champagne socialist – perhaps not entirely unfairly.)

Still, AI presents us with a rare opportunity to do things differently.

But instead of rethinking how work and value are distributed, we’re told to brace for disruption. The current narrative is that AI is coming for our jobs. Or a variation on that theme: “Don’t worry,” we’re told, “it won’t take your job – but someone using AI might”. That line’s often repeated. It’s catchy. But it’s also glib – and not always true.

I’m close enough to retirement that the disruption shouldn’t hit me too hard. But for my children’s generation? The impact could be massive.

What if we taxed the agents?

So here’s a thought: what if we taxed the AI agents?

If a business creates an agent to do the work a person would normally do – or could reasonably do – then that agent is taxed, like a human worker would be. It’s still efficient, still scalable, but the benefits are shared.

And how would we live if the jobs go away? That’s where Universal Basic Income (UBI) comes in (funded by taxes on agents, as well as on human effort).

Put simply, UBI provides everyone with enough to cover their basic needs – no strings attached. People can still work (and many will). For extra income. For purpose. For contribution. It just doesn’t have to be 9-to-5, five days a week. It could be four. Or two. The work would evolve, but people wouldn’t be left behind. It also means that the current, complex, and often unjust benefits system could be removed (perhaps with some exceptions, but certainly for the majority).

What could possibly go right?

So yes, the conversation around AI is full of what could go wrong. But what if we focused on what could go right?

We’ve got a window here – a rare one – to rethink work, contribution, and value. But we need imagination. And leadership. And a willingness to ask who benefits when the machines clock in.

Further reading on UBI

If you’re interested in UBI and how it might work in practice, here are some useful resources:

Featured image: author’s own.

Does vibe coding have a place in the world of professional development?

I’ve been experimenting with generative AI lately – both in my day job and on personal projects – and I thought it was time to jot down some reflections. Not a deep think piece, just a few observations about how tools like Copilot and ChatGPT are starting to shape the way I work.

In my professional life, I’ve used AI to draft meeting agendas, prepare documents, sketch out presentation outlines, and summarise lengthy reports. It’s a co-pilot in the truest sense – it doesn’t replace me, but it often gives me a head start. That said, the results are hit and miss, and I never post anything AI-generated without editing. Sometimes the AI gives me inspiration. Other times, it gives me American spelling and questionable grammar.

But outside work is where things got interesting.

I accidentally vibe coded

It turns out there’s a name for what I’ve been doing in my spare time: vibe coding.

First up, I wanted to connect a microcontroller to an OLED display and to control the display with a web form and a REST API. I didn’t know exactly how to do it, but I had a vague idea. I asked ChatGPT. It gave me code, wiring instructions, and step-by-step guidance to flash the firmware. It didn’t work out of the box – but with a few nudges to fix a compilation error and rework the wiring, I got it going.
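For flavour, here’s a rough Python sketch of the API shape involved – not the actual firmware (which targeted a microcontroller), and the endpoint and field names are my own invention: a tiny HTTP service where the web form POSTs a message and the display loop GETs it back.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DISPLAY = {"text": ""}  # stands in for the OLED's frame buffer


class MessageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The web form posts JSON like {"text": "..."} to update the display
        if self.path == "/message":
            length = int(self.headers["Content-Length"])
            DISPLAY["text"] = json.loads(self.rfile.read(length))["text"]
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

    def do_GET(self):
        # The display loop (or a browser) reads the current message back
        if self.path == "/message":
            body = json.dumps(DISPLAY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the console quiet
        pass
```

Run it with `HTTPServer(("0.0.0.0", 8080), MessageHandler).serve_forever()` and the display side simply polls `GET /message`. The real firmware did the same dance, just in microcontroller code.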

Then, I wanted to create a single-page website to showcase a custom GPT I’d built. Again, ChatGPT gave me the starter template. I published it to Azure Static Web Apps, with GitHub for source control and a CI/CD pipeline. All of it AI-assisted.

Both projects were up and running quickly – but finishing them took a lot more effort. You can get 80% of the way with vibes, but the last 20% still needs graft, knowledge, or at the very least, stubborn persistence. And the 80% is the quick part – the 20% takes the time.

What is vibe coding?

In short: it’s when you code without fully knowing what you’re doing. You rely on generative AI tools to generate snippets, help debug errors, or explain unfamiliar concepts. You follow the vibe, not the manual.

And while that might sound irresponsible, it’s increasingly common – especially as generative AI becomes more capable. If you’re solving a one-off problem or building a quick prototype, it can be a great approach.

I should add some context: I do have a Computer Studies degree, and I can code. But aside from batch scripts and a bit of PowerShell, I haven’t written anything professionally since my 1992/93 internship – and that was in COBOL.

So, yes, I have some idea of what’s going on. But I’m still firmly in vibe territory when it comes to ESP32 firmware or HTML/CSS layout.

The good, the bad, and the undocumented

Vibe coding has clear advantages:

  • You can build things you wouldn’t otherwise attempt.
  • You learn by doing – with AI as your tutor.
  • You get to explore new tech without wading through outdated forum posts.

But it also has its pitfalls:

  • The AI isn’t always right (and often makes things up).
  • Debugging generated code can be a nightmare.
  • If you don’t understand what the code does, maintaining it is difficult – if not impossible.
  • AI doesn’t always follow best practices – and those change over time.
  • It may generate code that’s based on copyrighted sources. Licensing isn’t always clear.

That last pair is increasingly important. Large language models are trained on public code from the Internet – but not everything online is a good example. Some of it is outdated. Some of it is inefficient. Some of it may not be free to use. So unless you know what you’re looking at (and where it came from), you risk building on shaky ground.

Where next?

Generative AI is changing how we create, code, and communicate. But it’s not a magic wand. It’s a powerful assistant – especially for those of us who are happy to get stuck in without always knowing where things will end up.

Whether I’ve saved any time is up for debate. But I’ve definitely done more. Built more. Learned more.

And that feels like progress.

A version of this post was originally published on the Node4 blog.

Featured image by James Osborne from Pixabay.

Building my own train departure board (because why not?)

In the UK, we’re lucky to have access to a rich supply of transport data. From bus timetables to cycle hire stats, there’s a load of open data just waiting to be used in clever ways. Some of the more interesting data – at least for geeks like me – is contained in the National Rail data feeds. These provide real-time information about trains moving around the country. Every late-running service, every platform change, every cancelled train… it’s all there, in near real-time.

There are already some excellent tools built on top of this data. You may have come across some of these sites:

  • RealTimeTrains: essential for anyone who wants to go beyond the standard passenger information displayed at the station.
  • Live Departures Info: perfect for an always-on browser display – e.g. on digital signage.
  • LED Departure Board: provides a view of upcoming departures or arrivals at a chosen station.

There are even physical displays, like those from UK Departure Boards. These look to be beautifully made and ideal for an office or hallway wall. But they’re not cheap. And that got me thinking…

Why not build my own?

Armed with a Raspberry Pi Zero W and an inexpensive OLED display, I decided to have a go at making my own.

A quick bit of Googling turned up an excellent website by Jonathan Foot, whose DIY departure board guidance gave me exactly what I needed. It walks through how to connect everything up, pull real-time train data, and output it to a screen. There’s even a GitHub repo for the code. Perfect.

Well, almost.

A slightly different display

Jonathan recommends a particular OLED display but I thought it was a bit on the pricey side. In the spirit of experimentation (and budget-conscious tinkering), I opted for a 3.12″ OLED display (256×64, SSD1322, SPI) from AliExpress. I think it’s the same – just from another channel.

This wasn’t entirely straightforward.

The display I received was described as SPI-compatible, but it wasn’t actually configured for SPI out of the box. I sent the first one back. Then I realised they’re all like that – you have to modify the board yourself.

Breaking out the soldering iron

There were no jumpers to move. No handy DIP switch to flick. Instead, I had to convert the board from 80xx (parallel) mode to 4SPI (four-wire serial) mode. This involved removing a resistor (R6), then soldering a link between two pads (R5). Not the hardest job in the world, but definitely not plug-and-play either.

This wasn’t ideal. I’m terrible at soldering, and I’d deliberately bought versions of the Raspberry Pi and the display with pluggable headers. But hey, I’d got this far. The worst that could happen was blowing up a £12 display, right?

The modifications that I made to the display. (The information I needed is printed as a table on the back of the board.)

Once that was done, though – magic! The display came to life with data from my local station, and a rolling list of upcoming arrivals and departures. It’s a surprisingly satisfying thing to see working for the first time, especially knowing what went into making it happen.
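The formatting step is fiddlier than it sounds: a 256×64 panel only gives you a handful of fixed-width rows. Here’s a toy Python sketch of that step – my own illustration, not Jonathan Foot’s code – squeezing departure tuples into display rows:

```python
def format_departures(departures, width=32):
    """Render (due, destination, expected) tuples as fixed-width rows.

    Assumes a font that fits `width` characters across the panel; shows
    at most three services, truncating destinations that don't fit.
    """
    rows = []
    for due, destination, expected in departures[:3]:
        status = "On time" if expected == due else f"Exp {expected}"
        # time first, destination padded to fill, status right-aligned
        gap = width - len(due) - 1 - len(status) - 1
        rows.append(f"{due} {destination[:gap]:<{gap}} {status}")
    return rows
```

Feed it something like `format_departures([("10:02", "London Euston", "10:02"), ("10:15", "Crewe", "10:21")])` and each row comes back exactly 32 characters wide, ready to blit to the screen.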

What’s next?

All that’s left now is to print a case (or find someone with a 3D printer who owes me a favour). For around £50 in total, I’ve got a configurable, real-time train departure board that wouldn’t look out of place in a study, hallway or even the living room (subject to domestic approval, of course).

It’s been a fun little side project. A mix of software tinkering, a bit of hardware hacking, and that moment of joy when it all works together. And if you’ve ever looked at those expensive display boards and thought, I bet I could make one of those – well, now you know… you probably can.

Featured image: author’s own.

Nancy’s legacy: how losing someone can wake you up to life

Today is Nancy Wallace’s funeral. Nancy was my [wife’s] sister-in-law – and although I can’t be there in person (Nancy lived in the US, and I need to be in the UK this week), I’ve found myself pausing. Reflecting. Remembering.

Even though we lived on different continents and didn’t see each other often, I always enjoyed Nancy’s company. She was warm, supportive, and carried with her a perspective shaped by a life well lived – both through her travels and the way she connected with people. More than once, Nancy gently challenged my thinking or offered a different perspective. She had that gift.

Writing this post – something deeply personal, yet shared with the world – I struggled to find the right words to describe our relationship, but this passage from her obituary captured it perfectly:

“Nancy’s friends, family, and colleagues will remember her as a radiant spirit, always present, always caring, and always eager to bring people together. As a friend beautifully commented, we never thought that Nancy would leave the party early. But apparently, she had other places to be and people to befriend.”

I can just hear her saying that – “other places to be and people to befriend”. That was Nancy.

Her death came as a shock to us all. She was only 59. Fit, active, full of plans. And that’s the thing – when someone like Nancy dies unexpectedly, it shakes you. It makes you stop. It forces you to think about life – which inevitably means thinking about death.

This isn’t the first time I’ve had one of these moments…

What my Dad’s death taught me

When my Dad died suddenly at 63, it knocked me sideways. I was in my late-30s and pretty sedentary – I had the kind of lifestyle that comes with a desk job and not enough movement. But that event triggered something. I threw myself into fitness, ran regularly and lost over 20kg as part of my “fit at forty” challenge. Later, I transitioned from running to cycling and over the following decade or so I rode hundreds of miles across the UK and Europe: London to Paris; coast to coast across England; the full length of Wales.

But old habits crept back in. More accurately, my sugar addiction – and yes, it really is an addiction – reasserted itself. The weight came back. By 50, I was in therapy, dealing with anxiety, depression, and a deeply complicated relationship with food.

This isn’t a pity post

Let me be clear: I’m not writing this for sympathy.

I have a good life. A loving wife. Two fantastic sons, both full of ambition and potential. And I’m back in a job that I enjoy. Financially, we’re not rolling in cash, but we’re not scraping by either.

Several years ago, I moved to a four-day working week – a conscious choice to reclaim some time. Not just to rest, but to live a little more. To be present rather than just productive.

And yet, I struggle.

Because I care. Because I want to give my family stability. Because I remember watching my Dad get made redundant in his 50s and the resulting challenges finding work again. Because – even with a stable job – the cost of living keeps creeping up. Because I’ve spent so much time and energy making sure we’re OK if something goes wrong, that I’ve often forgotten to enjoy what’s going right.

And because, the older I get, the harder it is to shift the weight. It’s not just about counting calories. It’s far more complex than that – hormones, metabolism, mental health, habits. A lifetime of habits and patterns.

So, what does this have to do with Nancy?

Nancy was just six years older than me, and only four years short of the age my Dad was when he died. And, while I’d like to think I still have plenty of life in me, the truth is that none of us really knows how long we’ve got.

Nancy and my brother-in-law, Simon, had plans. Big ones. But she never got to see them through.

We only get one life. And I’ve realised I need to start living mine more fully.

For the last couple of decades, my focus has been on family, mortgage, job security. Sensible, responsible stuff. Important stuff. But I’ve also been holding back. Waiting for the “right time” to do the things I dream of – travel more, work differently, embrace new experiences. There is a balancing act – between building up the pension pot and actually living the life I’m saving for.

Nancy’s passing has reminded me that the “right time” might never come unless I make it come.

So I need to make some changes. Tackle the sugar addiction. Get my fitness back. Be around long enough – and well enough – to support my sons as they step out into their adult lives. And then? Maybe it’s time to be a little selfish. Or at least, to live a little differently. Lighter. More intentionally.

Nancy may have left the party early, but she left behind a message – that it’s time to start living. Message received, loud and clear.

Featured image, from the family collection – Nancy, and her beloved canine companion, Ziggy, by the water – where she loved to be.

Postmortem: deploying my static website with Azure Static Web Apps (eventually)

This all started out as a bit of vibe coding* in ChatGPT…

Yesterday, whilst walking the dog, I was listening to the latest episode of WB-40. Something Julia Bellis said gave me an idea for a simple custom GPT to help people (well, mostly me) eat better. ChatGPT helped me to create a custom GPT – which we named The Real Food Coach.

With the GPT created, I asked ChatGPT for something else: help me build a one-page website to link to it. Within minutes I had something presentable: HTML, styling, fonts, icons – all generated. Pretty slick.

When it came to hosting, ChatGPT suggested something I hadn’t used previously: Azure Static Web Apps, rather than the Azure Storage Account route I’d used for hosting in the past. It sounded modern and neat – automatic GitHub integration, free SSL, global CDN. So I followed its advice.

ChatGPT was great. Until it wasn’t.

A quick win turns into a slow burn

The proof of concept came together quickly – code committed to GitHub, site created in Azure, workflow generated. All looked good. But the deploys failed. Then failed again. And again.

What should have taken 10 minutes quickly spiralled into a full evening of troubleshooting.

The critical confusion

The issue came down to two settings that look similar – but aren’t:

  • Deployment source – where your code lives (e.g. GitHub)
  • Deployment authorisation policy – how Azure authenticates deployments (either via GitHub OIDC or a manually managed deployment token)

ChatGPT had told me to use GitHub for both. That was the mistake.

Using GitHub as the authorisation method relies on Azure injecting a secret (AZURE_STATIC_WEB_APPS_API_TOKEN) into GitHub, but that never happened. I tried regenerating it, reauthorising GitHub, even manually wiring in deployment tokens – all of which conflicted with the setup Azure had created.

The result? Deploys that failed with:

“No matching Static Web App was found or the API key was invalid”

Eventually, after several rounds of broken builds, missing secrets, and deleting and recreating the app, I questioned the advice ChatGPT had originally given. Sure enough, it confirmed that yes – the authorisation policy should have been set to Deployment Token, not GitHub.

Thanks, ChatGPT. Bit late.

The right setup

Once I created the app with GitHub as the deployment source and Deployment Token as the authorisation policy, everything clicked.

I copied the token from Azure, added it to GitHub secrets, updated the workflow to remove unnecessary build steps, and redeployed.

Success.

Custom domain and tidy-up

Pointing my subdomain to the Static Web App was easy enough. I added the TXT record for domain verification, removed it once verified, and then added the CNAME. SSL was provisioned automatically by Azure.

I now have a clean, simple landing page pointing to my custom GPT – fast to load, easy to maintain, and served via CDN with HTTPS.

Lessons learned

  • ChatGPT can take you far, fast – but it can also give you confidently wrong advice. Check the docs, and question your co-pilot.
  • Azure Static Web Apps is fantastic for a simple website – I’m even using the free tier for this.
  • Authorisation and deployment are not the same thing. Get them wrong, and everything breaks – even if the rest looks correct.
  • Start again sooner – sometimes it’s faster to delete and recreate than to debug a half-working config.
  • DNS setup was smooth, but your DNS provider might need you to delete the TXT record after verification before you can create a CNAME.

Where is this website?

You can check out The Real Food Coach at realfood.markwilson.co.uk – and chat with the GPT at chat.openai.com/g/g-682dea4039b08191ad13050d0df8882f-the-real-food-coach.

*Joe Tomkinson told me that’s what it is. I’d heard of vibe coding but I thought it was something real developers do. Turns out it’s more likely to be numpties like me…

A decade at risual… and beyond at Node4

Today marks ten years since I joined risual – now part of Node4 – a milestone that’s prompted a bit of reflection.

Not “institutionalised” – just evolving

Ten years is a long time in any role – especially in tech. At the interview back in 2015, I was asked if my previous 9.5-year tenure meant I was “institutionalised”. I pushed back – not because I’d stayed in one company, but because I’d held a variety of roles and worked with a broad range of end clients. That kept things fresh.

That same pattern has played out in my current employment. I started out at risual as a Consultant (that was the rule, regardless of background – everyone joined as a Consultant). From there, I moved into an Architecture role, and then – somewhat reluctantly – into management. I didn’t chase people leadership, but faced the choice: manage or be managed. Leading the Architecture team brought new challenges – and more than a few grey hairs.

A new chapter with Node4

After risual was sold in 2022, I found myself on a new path again – and in 2023 I moved into Node4’s Office of the CTO. This is not my first time in an OCTO and it’s a role that plays far more to my strengths, drawing together technical insight and strategic thinking, and communicating that to clients and colleagues alike.

Of course, my tenure hasn’t all been smooth sailing. At one point quite early on, I was told (by one of risual’s founding directors) that I was “approaching career peak”. I’ve never quite accepted that – not out of vanity, but because I know I still have more to contribute. Maybe I’ve lost some of the naivety of my 20s, but I’ve gained a more seasoned, grounded view.

Big ambitions, bumpy roads

I joined risual because I wanted to escape the constraints of a large enterprise and make a difference. And I believe I did – even if not always as effectively as I’d hoped. Some moments had real impact; others came with frustration. Anyone who’s lived through rapid business growth (followed by contraction), a global pandemic, and a business sale will understand the pressure. Strong personalities, shifting priorities, and an increasing focus on EBITDA all shaped that period.

But I learned a huge amount in those years – about myself, about the business, and about the kind of leader I am (and want to become). I worked with some brilliant people and saw first-hand how good – and bad – decisions shape real organisations.

From grief to growth

After the sale to Node4, some colleagues grieved the loss of the risual they knew. Others struggled to adapt – something I’ve also seen in other acquired businesses. But one colleague said something that stuck with me: “going somewhere else won’t bring risual back – risual no longer exists – but you can stick around and see what you can make of the new opportunity.”

That struck a chord. I’d already chosen to stay – not to chase what was lost, but to build something new at Node4. It hasn’t always been easy, but I feel like I’m in a good place again. I’m using my skills in ways that have real impact. I’m working with great people. And I’m helping to shape the direction, not just responding to it.

Celebrating the journey – and looking ahead

Ten years on, I’m not “institutionalised”. If anything, I’ve become more aware of the need to adapt, stay curious, and choose work that aligns with my values and strengths.

I’ve had the privilege of working with fantastic people, navigating all kinds of challenges, and growing along the way. It’s been a decade full of lessons, laughter, the occasional sigh of exasperation – and more than a few late-night slide decks.

And now? I’m proud of what I’ve done so far – and excited for what’s still to come.

Here’s to ten years… and the next chapter still to be written.

Featured image by ChatGPT.

Why Net Promoter Score (NPS) might not be the feedback tool you think it is

Earlier today, I received one of those “tell us how we did” emails from Churchill – the company I’ve used for home insurance for a few years now. Nothing unusual there; we’ve all had them. You can spot them a mile off – always a scale from 0-10 with a “how likely are you to recommend us” type of question. What caught my attention though was the way the feedback was framed and how my neutral score was interpreted as negative. It reminded me (again) why I think Net Promoter Score (NPS) is, frankly, overused and often misused.

What is NPS supposed to measure?

NPS was originally developed by Fred Reichheld at Bain & Company, with the noble aim of predicting customer loyalty from a single question: “How likely are you to recommend us to a friend or colleague?”. Bain built a system around it to help companies earn customer loyalty and inspire employees, on the premise that loyalty would be a key driver of sales growth. The thinking was: if people are willing to stick their neck out and make a recommendation, they must really rate the product or service.

In theory? Sensible. In practice? Hmm.

A neutral score, treated as negative

I gave Churchill a 5 out of 10. Not because I had a bad experience – in fact, I’ve never made a claim. And that’s the point: I literally can’t rate how good their service is because I haven’t needed it. I pay my premium, they take my money, and (thankfully) we leave it at that. That’s a 5 in my book – neutral.

Apparently not.

Their automated system then asked me, “Why would you be unlikely to recommend Churchill?”. Because, in the NPS world, anything below a 7 counts as a detractor. So my middle-of-the-road score – which in British terms is essentially a polite nod of approval – flags me as someone with a grudge. That’s not only culturally tone-deaf, it’s also wildly misleading.

Cultural bias in customer feedback scoring

Let’s talk numbers for a moment. In the UK, where understatement is practically a national pastime, we rarely hand out 9s and 10s. Those are reserved for the truly exceptional – the sort of experience where someone goes so far above and beyond that we feel compelled to wax lyrical about them at dinner parties. A 7? That’s solid. Respectable. Better than most.

But in the land of NPS, that still doesn’t make you a “promoter”. NPS deems a 7 or an 8 as passive.
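The arithmetic behind this is brutally simple – and it’s worth seeing just how harshly it treats the middle of the scale. Here’s a minimal sketch of the standard NPS calculation (my own illustration, not any official implementation):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Ten polite British 7s and one enthusiastic 9 -- every respondent content:
print(nps([7] * 10 + [9]))   # 9: the 7s count for nothing
# My single, neutral 5:
print(nps([5]))              # -100: as negative as the score can get
```

Note that a room full of satisfied 7s and 8s scores exactly zero – and one middling 5 on its own is indistinguishable, in NPS terms, from a furious 0.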

The real problem with NPS: it lacks context

This is where Net Promoter Score falls down. It takes a subjective experience and tries to quantify it with a crude scale that lacks nuance. Worse, it lumps “neutral” in with “actively dissatisfied,” which isn’t just lazy analysis – it can lead to poor decision-making.

There’s plenty of research that questions the validity of NPS as a predictor of business success. And its use today often bears little resemblance to the original intent. One such example is from the Journal of the Academy of Marketing Science, in a 2022 paper entitled “The Use of Net Promoter Score (NPS) to predict sales growth: insights from an empirical investigation”. That paper highlights methodological issues in the original research, problems with the NPS calculation method, and limitations in NPS’s predictive power and scope.

How NPS is misapplied in modern customer experience

NPS was meant to be a long-term loyalty indicator, not a knee-jerk reaction tool after every interaction. Yet these days it’s been reduced to a mandatory field at the end of every customer journey – from buying a sofa to updating your email address.

It’s become a checkbox exercise, often disconnected from the actual service experience. And that’s a shame, because meaningful feedback is still incredibly valuable – when it’s asked for in the right way, and interpreted with care.

We can do better than NPS

I’m not saying all customer feedback is pointless – far from it. Listening to your customers is essential. But let’s not pretend Net Promoter Score is a silver bullet. Context matters. Culture matters. And treating a nuanced response like mine as a black mark against your brand doesn’t do anyone any favours.

So Churchill, if you’re reading: I don’t dislike you. I’m just British.

Working with legacy tech: accessing old web portals that use an insecure TLS version

In my last post, I wrote about importing MiniDV tape content to a modern computer. That leads nicely into today’s topic… because modern computers tend not to have huge amounts of local storage. We generally don’t need it, because we store our files in the cloud, and only use the local drive as a cache. But what about when you’re importing large amounts of data (say video), and you want somewhere to stage it locally, with a little more space?

I was about to throw away an old NetGear ReadyNAS Duo unit (that’s been in disgrace ever since a disk failure taught me the hard way that RAID1 is not a backup…), but then I thought it might be useful to stage some video content, before moving it somewhere safer.

Accessing the ReadyNAS

First problem was knowing what IP address it had. I managed to find that using NetGear’s RAIDar utility. But, to change the IP address (or any other settings), I needed to use the admin console. And that gave me a problem: my browser refused to connect to the site, saying that the connection was not secure and that it uses an unsupported protocol.

Well, it’s better than a modern, cutesy “Something went wrong” message. It gave me a clue as to the problem – an SSL version or cipher mismatch – which sounds like out-of-date TLS. Indeed it is, and Gøran B. Aasen wrote about the challenge in March 2022, along with a potential solution for certain ReadyNAS devices.

I’m not bothered about upgrading Apache to support TLS 1.2 – but I did still need to administer the device. I tried messing around with browser settings in Edge, Chrome and Firefox but had no luck. The transition period is over. TLS 1.0 is not supported at all. Then I had an idea… what if I installed an older browser version? And instead of installing it, what if I used a portable app version?

Tada!

Firefox 73.0.1 being used to access the ReadyNAS admin console

So, here we go, Firefox 73.0.1 from PortableApps, via SourceForge. And I’m successfully accessing the ReadyNAS admin console.
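If you’d rather script the connection than keep an old browser around, Python’s ssl module can be told to accept TLS 1.0 for a single, throwaway context. This is a sketch of the idea, not something I used myself – and the @SECLEVEL adjustment is only needed on newer OpenSSL builds:

```python
import ssl

# A one-off context that tolerates the NAS's ancient TLS 1.0 -- the
# system defaults everywhere else stay untouched.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1   # allow TLS 1.0 here only
ctx.check_hostname = False                   # self-signed cert on the NAS
ctx.verify_mode = ssl.CERT_NONE
# Newer OpenSSL builds also refuse legacy ciphers at the security-level
# layer, so drop that too (skip if your build doesn't support it):
try:
    ctx.set_ciphers("DEFAULT:@SECLEVEL=0")
except ssl.SSLError:
    pass
```

A one-off `urllib.request.urlopen("https://<nas-ip>/", context=ctx)` should then reach the admin console – assuming the device’s old Apache will still negotiate a cipher the client offers. The same “use it briefly, then close it” risk logic applies here as to the old browser.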

The risk statement

For security folks who will tell me why this is a really bad idea: I know. So here’s the disclaimer. You’re only at risk while you’re running that old browser – and because it’s a portable app, it was never installed on your system. You’ll only use it to access this one website, and when you’re not accessing that site, you’ll have closed it down, right? That way you’re taking a calculated risk and mitigating it by minimising the time the old software is running.

As for publishing an internal IP on the web… yes, you’ve got me there…

Featured image: author’s own.

Working with legacy tech: importing MiniDV tape content to a modern Mac

It’s a common problem: ageing media on legacy technology; valuable content that needs to be preserved. For me, that content is my travels to Australia and South Africa when I was in my 20s; getting married; and early days with our children (who are now adults). And whilst losing some of those videos (never watched or edited) would simply be annoying, the footage of the-tiny-people-who-grew-into-men would be devastating to lose.

So I have a project, to import as much of the content as I can to a modern computing system. I even used it as a justification to buy myself a new MacBook Pro (it’s “man maths” – don’t ask*).

What’s the problem?

  1. My digital video camera is a Sony DCR-TRV900E from circa 1998. The media are MiniDV cassettes (digital, but read serially). The interface is Firewire 400.
  2. My Mac has Thunderbolt 3 ports, using the USB-C form factor.

I needed to connect these two things.

And the hardware solution?

The Internet is full of videos about this – and it’s a cable and adapter setup that is so Heath Robinson-esque it shouldn’t work. But it seems to. For now. My task is to import the content before it stops working!

  1. First up, the Firewire 400 cable from the camcorder. Firewire 400 is also known as IEEE1394 or Sony i.LINK. I already had a suitable cable, possibly supplied with the camera.
  2. Then, a Firewire 400 to 800 adapter. I used this one from Amazon. That’s the inexpensive part.
  3. Next, Firewire 800 to Thunderbolt 2. Apple no longer sells this adapter so it’s expensive if you can find one. There are some very-similar looking ones on AliExpress, but they were only a little less expensive than the genuine Apple one that I found here: Apple Thunderbolt to FireWire 800 Adapter Dongle A1463. I paid £85 (ouch).
  4. Finally, Thunderbolt 2 to Thunderbolt 3. These are still available from Apple (Thunderbolt 3 (USB-C) to Thunderbolt 2 Adapter), but are another £49 so I saved a few quid by buying one second-hand.

The whole setup looks like this (£150 of electronics… but priceless memories, remember…):

How to import the footage

  1. Connect the Camcorder to the Mac, using the various cables and adapters.
  2. Put the Camcorder into Video Tape Recorder (VTR) mode and insert a tape. Rewind it to the start.
  3. Fire up iMovie on the Mac and create/open a library.
  4. Click Import Media (from the File menu). Select the camera (e.g. DCR-TRV900E) and you should see the first frame.
  5. Click Import.

Then, sit back and wait as the contents of the video are read, serially, and imported to iMovie. Other video editing packages may also work, but I used what I already had installed with macOS. Just remember that the transfer happens in real-time – but it’s also an opportunity to get nostalgic about times past. You’ll also need to stop the import when you get to the end of the footage.
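If you’re wondering how much local staging space you’ll need (the reason I resurrected that old NAS), DV’s on-tape data rate is roughly 28.8 Mbit/s – about 25 Mbit/s of video plus audio and subcode – which works out at around 13 GB per hour of tape. A quick back-of-envelope check:

```python
# DV on-tape data rate: ~25 Mbit/s of video, plus audio and subcode --
# roughly 28.8 Mbit/s (about 3.6 MB/s) in total.
rate_mbit_per_s = 28.8
tape_seconds = 60 * 60           # one standard 60-minute MiniDV tape
gigabytes = rate_mbit_per_s * tape_seconds / 8 / 1000
print(f"~{gigabytes:.0f} GB per hour of tape")  # ~13 GB
```

So a shoebox of tapes adds up quickly – worth knowing before you start a multi-tape import onto a nearly-full internal drive.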

Now, does anyone have an Advanced Photo System (APS) film attachment for a Sony Coolscan IV ED film scanner?

*I do have functioning Mac Minis from 2006 and 2012 with Firewire connections, but the 2006 Mini is long-unsupported and the versions of macOS and iMovie it has are alarmingly dated. The 2012 one is better, but I found I was getting application crashes on import. That drove me to invest in a solution that would work with my M4 MacBook Pro…

Featured image: author’s own.