Learning to be intelligent about artificial intelligence

This content is 1 year old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This week promises to be a huge one in the world of Artificial Intelligence (AI). I should caveat that: almost every week brings a barrage of news about AI. And, depending on which articles you read, AI is either going to:

  • Take away all our jobs or create exciting new jobs.
  • Solve global issues like climate change or hasten climate change through massive data centre power and water requirements.
  • Lead to the demise of society as we know it or create a new utopia.

A week of high profile AI events

So, why is this week so special?

  1. First of all, the G7 nations have agreed a set of Guiding Principles and a Code of Conduct on AI. This has been lauded by the European Commission as complementing the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.
  2. Then, starting on Wednesday, the UK is hosting an AI Safety Summit at “the home of computing”, Bletchley Park. The summit is already proving controversial, with some – including Dr Sue Black, who famously championed saving Bletchley Park from redevelopment – questioning the diversity of the attendees.
  3. The same day, Microsoft’s AI Copilots will become generally available to Enterprise users, and there’s a huge buzz around how the $30/user/month Copilot plays against other offers like Bing Chat Enterprise ($5/user/month), or even using public AI models.

All just another week in AI news. Or not, depending on how you view these things!

Is AI the big deal that it seems to be?

It’s only natural to ask questions about the potential that AI offers (specifically generative AI – gAI). It’s a topic that I covered in a recent technology advice note that I wrote.

In summary, I said that:

“gAI tools should be considered as assistive technologies that can help with researching, summarising and basic drafting but they are not a replacement for human expertise.

We need to train people on the limitations of gAI. We should learn lessons from social media, where nuanced narratives get reduced to polarised soundbites. Newspaper headlines do the same, but social media industrialised things. AI has the potential to be transformative. But we need to make sure that’s done in the right way.

Getting good results out of LLMs will be a skill – a new area of subject matter expertise (known as “prompt engineering”). Similarly, questioning the outputs of GANs to recognise fake imagery will require new awareness and critical thinking.”

Node 4 Technology Advice Note on Artificial Intelligence, September 2023.

Even as I’m writing this post, I can see a BBC headline that asks “Can Rishi Sunak’s big summit save us from AI nightmare?”. My response? Betteridge’s law probably applies here.

Could AI have saved a failed business?

Last weekend, The Sunday Times ran an article about the failed Babylon Healthcare organisation, titled “The app that promised an NHS ‘revolution’ then went down in flames”. The article is behind a paywall, but I’ve seen some extracts.

Two things appear to have caused Babylon’s downfall (at least in part). Not only did Babylon attract young and generally healthy patients to its telehealth services, but it also offered frictionless access.

So, it caused problems for traditional service providers, leaving them with an older, more frequently ill, and therefore more expensive segment of the population. And it caused problems for itself: who would have thought that if you offer people unlimited healthcare, they will use it?!

(In some cases, creating friction in provision of a service is a deliberate policy. I’m sure this is why my local GP doesn’t allow me to book appointments online. By making me queue up in person for one of a limited number of same-day appointments, or face a lengthy wait in a telephone queue, I’m less likely to make an appointment unless I really need it.)

The article talks about the pressures on Babylon to increase its use of artificial intelligence. It also seems to come to the conclusion that, had today’s generative AI tools been around when Babylon was launched, it would have been more successful. That’s a big jump, written by a consumer journalist, who seems to be asserting that generative AI is better at predicting health outcomes than expert system decision trees.

We need to be intelligent about how we use Artificial Intelligence

Let me be clear: generative AI makes stuff up. Literally. gAIs like ChatGPT work by predicting and generating the next word based on the previous words – basically, on probability. And sometimes they get it wrong.
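To illustrate the principle (and only the principle – a real large language model is vastly more sophisticated, working over billions of learned parameters rather than simple word counts), here's a toy sketch in Python that "learns" which word follows which in a tiny corpus and then generates text by sampling from those observed probabilities:

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then generate text by sampling the next word from those observed pairs.
# Real LLMs use neural networks over tokens rather than word-pair counts,
# but the underlying idea - predict the next item probabilistically - is similar.

corpus = (
    "a meeting took place between the teams to discuss future technology "
    "integration and hybrid cloud strategy and the teams agreed to meet again"
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 12) -> str:
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(random.choice(candidates))  # frequency-weighted sampling
    return " ".join(words)

print(generate("the"))
```

Run it a few times and you'll get different – and frequently nonsensical – output, which is rather the point: there's no database of facts behind it, just probabilities.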

Last week, I asked ChatGPT to summarise some meeting notes. The summary it produced included a typo – a made-up word:

“A meeting took tanke place between Node4 & the Infrastructure team at <client name redacted> to discuss future technology integration, project workloads, cost control measures, and hybrid cloud strategy.”

Or, as one of my friends found when he asked ChatGPT to confirm a simple percentage calculation, it initially said one thing and then “changed its mind”!

Don’t get me wrong – these tools can be fantastic for creating drafts, but they do need to be checked. Many people seem to think that an AI generates a response from a database of facts and therefore must be correct.

In conclusion

As we traverse the future landscape painted by artificial intelligence, it’s vital that we arm ourselves with a sound understanding of its potential and limitations. AI has often been regarded as a silver bullet for many of our modern challenges, a shortcut to progress and optimised efficiency. But as we’ve explored in this blog post – whether it’s the G7 nations’ principles, Microsoft’s AI Copilot, or a fallen Babylon Healthcare – AI is not a one-size-fits-all solution. It’s a tool, often brilliant but fallible, offering us both unprecedented opportunities and new forms of challenges.

The promises brought by AI are enormous. This week’s events underscore the urgency to familiarise ourselves with AI, acknowledge its potential, and intelligently navigate its limitations. From a set of AI guiding principles on a global scale, to raising awareness on gAI, and analysing the role of AI in business successes and failures – it’s clear that being informed about AI is no longer an option but a necessity.

gAI tools, while transformative, need to be used as assistive technologies and not as replacements for human intellect and expertise. Embracing AI should not mean renouncing critical thinking and caution. So, as we interact with AI, let’s do it intelligently: asking the right questions, and recognising both its potential and its constraints. This will enable us to harness its power effectively, while avoiding over-reliance or the creation of new, unforeseen problems.

It’s time we stop viewing AI through a lens of absolute salvation or doom, and start understanding it as a dynamic field that requires thoughtful and knowledgeable engagement. Evolution in human tech culture will not be judged by the power of our tools, but by our ability to skilfully and ethically wield them. So, let’s learn to be intelligent about how we use artificial intelligence.

Postscript

That conclusion was written by an AI, and edited by a human.

Featured image: screenshot from the BBC website, under fair use for copyright purposes.

The 5 or 6 Rs of cloud transformation

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few years ago, a couple of colleagues showed me something they had been working on – a “5 Rs” approach to classifying applications for cloud transformation. It was adopted for use in client engagements but I decided it needed to be extended – there was no “do nothing” option, so I added “Remain” as a 6th R.

I later discovered that my colleagues were not the first to come up with this model. When challenged, they maintained that it was an original idea (and I was convinced someone had stolen our IP when I saw it used by another IT services organisation!). Research suggests Gartner defined 5Rs in 2010 and both Microsoft and Amazon Web Services have since created their own variations (5Rs in the Microsoft Cloud Adoption Framework and 6Rs in Amazon Web Services’ Application Migration Strategies). I’m sure there are other variations too, but these are the main ones I come across.

For reference, this is the description of the 6Rs that we use where I work, at risual:

  • Replace (or repurchase) – with an equivalent software as a service (SaaS) application.
  • Rehost – move to IaaS (lift and shift). This is relatively fast, with minimal modification but won’t take advantage of cloud characteristics like auto-scaling.
  • Refactor (or replatform/revise) – decouple and move to PaaS. This may provide lower hosting and operational costs together with auto-scaling and high availability by default.
  • Redesign (or rebuild/rearchitect) – redevelop into a cloud-aware solution. For example, if a legacy application is providing good value but cannot be easily migrated, the application may be modernised by rebuilding it in the cloud. This is the most complicated approach and will involve creating a new architecture to add business value to the core application through the incorporation of additional cloud services.
  • Remain (or retain/revisit) – for those cases where the “do nothing” approach is appropriate although, even then, there may be optimisations that can be made to the way that the application service is provided.
  • Retire – for applications that have reached the end of their lifecycle and are no longer required.

Right now, I’m doing some work with a client who is looking at how to transform their IT estate and the 5/6Rs have come into play. To help my client, who is also working with both Microsoft and AWS, I needed to compare our version with Gartner’s, Microsoft’s and AWS’… and this is what I came up with:

(Each entry is keyed by the risual term, with the Gartner, Microsoft and AWS equivalents shown in brackets.)

  • Replace (Gartner: Replace; Microsoft: Replace; AWS: Repurchase) – Whilst AWS uses a different term, the approach is broadly similar – look to replace/repurchase existing solutions with a SaaS alternative: e.g. Office 365, Dynamics 365, Salesforce, WorkDay, etc.
  • Rehost (Gartner: Rehost; Microsoft: Rehost; AWS: Rehost) – All are closely aligned in thinking – rehost is the “lift and shift” option, based on infrastructure as a service (IaaS), which is generally straightforward from a technical perspective but may not deliver the same long-term benefits as other cloud transformation methods.
  • Refactor (Gartner: Refactor; Microsoft: Refactor; AWS: Replatform) – Refactoring generally involves the adoption of PaaS – for example, making use of particular cloud frameworks, application hosting or database services; however, this may be at the expense of portability between clouds. The exception is AWS, which uses refactor in a slightly different context and replatform for what is referred to as “lift, tinker and shift”.
  • Revise (Gartner only) – Gartner’s revise relates to modifying existing code before refactoring or rehosting. risual, Microsoft and AWS would all consider this as part of the refactoring/replatforming.
  • Redesign (Gartner: Rebuild; Microsoft: Rebuild; AWS: Refactor/re-architect) – Gartner defines rebuilding as moving to PaaS, rebuilding the solution and rearchitecting the application. AWS groups its definitions of refactoring and rearchitecting, although its definition of refactor is closer to Microsoft/Gartner’s rebuild – adding features, scale, or performance that would otherwise be difficult to achieve in the application’s existing environment.
  • Rearchitect (Microsoft only) – Microsoft makes the distinction between rebuilding (creating a new cloud-native codebase) and rearchitecting (looking for cost and operational efficiencies in applications that are cloud-capable but not cloud-native) – for example, migrating from a monolithic architecture to a serverless architecture.
  • Remain (AWS: Retain/revisit) – Perhaps because their application transformation strategies assume that there is always some transformation to be done, Gartner and Microsoft do not have a remain/retain option. This can be seen as the “do nothing” approach but, as AWS highlights, it’s really a revisit, as “do nothing” is a holding state. Maybe the application will be deprecated soon – or was recently purchased/upgraded and so is not a priority for further investment. It is likely to be addressed by one of the other approaches at some point in the future.
  • Retire (AWS: Retire) – Sometimes, an application has outlived its usefulness – or just costs more to run than it delivers in value – and should be retired. Neither Gartner nor Microsoft recognise this within their 5Rs.
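If it's useful to work with that comparison as data (for example, when tagging entries in an application inventory), here's a minimal sketch in Python. The mapping is simply my own reading of the comparison above – not an official cross-reference published by risual, Gartner, Microsoft or AWS:

```python
# A sketch of the comparison above as a simple lookup table. The mapping is my
# own reading of the comparison, not an official cross-reference from any vendor.
R_EQUIVALENTS = {
    "Replace":  {"Gartner": "Replace",         "Microsoft": "Replace",             "AWS": "Repurchase"},
    "Rehost":   {"Gartner": "Rehost",          "Microsoft": "Rehost",              "AWS": "Rehost"},
    "Refactor": {"Gartner": "Refactor/Revise", "Microsoft": "Refactor",            "AWS": "Replatform"},
    "Redesign": {"Gartner": "Rebuild",         "Microsoft": "Rebuild/Rearchitect", "AWS": "Refactor/Re-architect"},
    "Remain":   {"Gartner": None,              "Microsoft": None,                  "AWS": "Retain/Revisit"},
    "Retire":   {"Gartner": None,              "Microsoft": None,                  "AWS": "Retire"},
}

def equivalent(risual_term: str, framework: str) -> str:
    """Translate one of the risual 6Rs into another framework's terminology."""
    term = R_EQUIVALENTS.get(risual_term, {}).get(framework)
    return term or f"no direct equivalent of '{risual_term}' in {framework}'s model"

print(equivalent("Redesign", "AWS"))    # Refactor/Re-architect
print(equivalent("Retire", "Gartner"))  # no direct equivalent of 'Retire' in Gartner's model
```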

Whichever 5 or 6Rs approach you take, it can be a useful way of categorising potential transformation opportunities – and I’m often surprised how the exercise exposes services that are still consuming resources long after their usefulness has ended.

Weeknote 21/2020: work, study (repeat)

This content is 5 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Another week in the socially-distanced economy. Not so much to write about this week as I spent most of it working or studying… and avoiding idiots who ignore the one-way system in the local supermarket…

Some more observations on remote working

It’s not often my tweets get as much engagement as this one did. So I’m putting it on the blog too (along with my wife’s response):

My “Build Box”

Unfortunately, I didn’t get to watch the Microsoft Build virtual event this week. I’m sure I’ll catch up later but it was great to receive this gift from Microsoft – it seems I was one of the first few thousand to register for the event:

Annual review

This week was my fifth anniversary of joining risual. Over that time I’ve watched the company grow and adapt, whilst trying to retain the culture that made it so strong in the early days. I don’t know if it’s possible to retain a particular culture as a business grows beyond a certain size, but I admire the attempts that are made – and one of the core tenets is an annual review with at least one, if not both, of the founding Directors.

For some, that’s a nerve-wracking experience but I generally enjoy my chat with Rich (Proud) and Al (Rogers), looking back on some of the key achievements of the last year and plans for the future. Three years ago, we discussed “career peak”. Two years ago it was my request to move to part-time working. Last year, it was my promotion to Principal Architect. This year… well, that should probably remain confidential.

One thing I found particularly useful in my preparation was charting the highs and lows of my year. It was a good way to take stock – which left me feeling a lot better about what I’d achieved over the last 12 months. For obvious reasons, the image below has had the details removed, but it should give some idea of what I mean:

Another exam ticked off the expanding list

I wrapped up the work week with another exam pass (after last week’s disappointment) – AZ-301 is finally ticked off the list… taking me halfway to being formally recognised as an Azure Solutions Architect Expert.

I’ll be re-taking AZ-300 soon. And then it looks like two more “Microsoft fundamentals” exams have been released (currently in Beta):

  • Azure AI Fundamentals (AI-900).
  • Azure Data Fundamentals (DP-900).

Both of these fit nicely alongside some of the topics I’ve been covering in my current client engagement so I should be in a position to attempt them soon.

Microsoft Surface Pro 3 refuses to power on: fixed with a handful of elastic bands

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This week didn’t start well (and it hasn’t got much better either) but Monday morning was a write-off, as the Microsoft Surface Pro 3 that I use for work wouldn’t “wake up”.

I’d used it on Friday, closed the “lid” (i.e. closed the tablet against the Type Cover) and left it on a table all weekend. Come Monday and it was completely dead. I tried charging it for a while. I tried Power and Volume Up/Down combinations. I tried holding the power button down for 30 secs (at which point the light on the charging cable flashed, but that was all).

After speaking to colleagues in our support team, it seemed I’d tried everything they could think of and we were sure it was some sort of battery failure (one of my customers has seen huge levels of battery failure on their Surface Books, suspected to be after they were kept in storage for an extended period without having been fully shut down).

I was ready for a long drive to Stafford to swap it for another device, hoping that OneDrive had all of my data synced and that I didn’t get the loan Dell laptop with the missing key (I’m sure that’s a warning to look after our devices…).

Then I found a post on the Windows Central Forums titled “Surface Pro 3 won’t turn back on! – possible solution when all hope is lost”.

All hope was indeed lost, so it had to be worth a read.

“My SP3 mysteriously stopped working yesterday morning. (Keep reading to the end for the solution that worked for me and maybe you too!)

It was fine the night before. […]

I spent the morning attempting to reboot the SP3. I thought maybe my charger wasn’t working even though I did see a white LED light on the adapter that connects to the Surface. I tried the hard reset, the 2-button reset, every combination of the volume up and down with the power button.

[…]

Finally, this morning, I caved in and call MS support. The tech said she would charge me $30 for a remote over the phone troubleshooting. I declined as I’ve tried everything I’ve found on the internet. Instead, I scheduled app with the MS store support in Garden City, NY (Roosevelt Field Mall).

I had the first or second app: 11:15am. The tech, I think his name was Adam, young guy in his 20’s. I told Adam my issue and that I’ve tried everything. I even had a USB LED light to show that the battery in my case wasn’t the problem. The USB LED light lit up for a few seconds when I pressed power. He said the problem was internal hardware and they there was no way to fix it. Since my SP3 was out of warranty, the only solution from MS was full replacement for $500. But, since I needed my files, a replacement won’t do me any good. So, the only other solution was have it sent to a third party data recovery place for $1000! They would basically destroy the SP3 and MS would then be unable to replace it.

Talk about bad options. Neither one seemed practical. I asked Adam if he’s seen this type of problem with any of the Surfaces before. He said maybe one or twice before. I was about to leave when another guy walked with his Surface, sat down next to me and said his Surface won’t boot up. I looked at Adam and I didn’t believe this was a rare issue with the Surface. MS probably train their techs to say that because they don’t want a class action law suit on their hand.

Anyway, just before I left, Adam, did say something, almost accidentally that I picked up. He said some guy had used a rubber band to hold down the power button for about a day and eventually the Surface woke up from sleep.

When I came home this afternoon, I was sure I had a $1100 paper weight with me. With nothing to lose, I took out some rubber bands and popsicle stick. I placed the popsicle stick flat against the power button and used the rubber band to apply pressure to keep the power button depressed the whole time. I can see the USB light connected to my Surface coming on and off as the power cycled. No sign of the Surface waking up.

Came back from dinner (that’s 5 hours later) and noticed the USB light didn’t come on and off any more. But still no sign the Surface was back. My 8 yr old sons comes into my office sees the contraption and says “what’s this” and pulls the popsicle stick off the Surface. I wasn’t even paying attention.

Lo and behold! the F—ing Surface logo flashed on the screen and booted up!!!!!
I immediately plugged in the charger and a backup HD and copied all my files!”

I was struggling to find any elastic bands at home but then, as the day’s post landed on my doormat, I thought “Royal Mail. Rubber bands!” and chased the postie down the street to ask if she had any spares. She was more than happy to give me a handful and so this was my setup (I don’t know what a “popsicle stick” is, but I didn’t need one):

A couple of hours later, I removed the bands and tried powering on the Surface Pro. I couldn’t believe it when it booted normally:

So, if your Surface Pro 3 (or possibly another Surface model) fails to power on, you might want to try this before giving up on it as a complete battery failure.

Quantum Computing 101

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

There’s been a lot of buzz around quantum computing over the last year or so, and there seems little doubt that it will provide the next major step forward in computing power. But it’s still largely theoretical – you can’t buy a quantum computer today. So, what does it really mean… and why should we care?

Today’s computers are binary. The transistors (tiny switches) that are contained in microchips are either off (0) or on (1) – just like a light switch. Quantum computing is based on entirely new principles. And quantum mechanics is difficult to understand – it’s counterintuitive – it’s weird. So let’s look at some of the basic concepts:

Superposition – a concept whereby, instead of a state being on or off, it’s on and off. At the same time. And it’s everything in the middle as well. Think of it as a scale from 0 to 1 and all the numbers in-between.
Qubit – a quantum bit (qubit) uses superposition so that, instead of working through a problem’s possibilities sequentially, we can compute on many of them in parallel.

More qubits are not necessarily better (although there is a qubit race taking place in the media)… the challenge is not about creating more qubits but better qubits, with better error correction.

Error correction – particles like electrons have a charge and a spin, so they point in a certain direction. Noise from other electrons makes them wiggle, so information in one leaks to others, which makes long calculations difficult. This is one of the reasons that quantum computers run at low temperatures.

Greek dancers hold their neighbour so that they move as one. One approach in quantum computing is to do the same with electrons so that only those at the end have freedom of motion – a concept called electron fractionalisation. This creates a robust building block for a qubit, one that is more like Lego (locking together) than a house of cards (loosely stacked).

Different teams of researchers are using different approaches to solve error correction problems, so not everyone’s qubits are equal! One approach is to use topological qubits for reliable computation, storage and scaling. Just like Inca quipus (a system of knots and braids used to encode information so it couldn’t be washed away, unlike chalk marks), topological qubits can braid information and create patterns in code.

Exponential scaling – once the error correction issue is solved, scaling is where the massive power of quantum computing can be unleashed.

A 4-bit classical computer has 16 possible configurations of 0s and 1s but can only be in one of these states at any time. A quantum register of 4 qubits can be in all 16 states at the same time – and compute on all of them at once!

Every n interacting qubits can handle 2^n bits of information in parallel, so:

  • 10 qubits = 1,024 (2^10) classical bits
  • 20 qubits ≈ 1 million (2^20)
  • 30 qubits ≈ 1 billion (2^30)
  • 40 qubits ≈ 1 trillion (2^40)
  • etc.

This means that the computational power of a quantum computer is potentially huge.
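To put some rough numbers against that, here's a short Python sketch showing the same exponential from the other direction: the memory a classical computer would need just to store the state of an n-qubit register, assuming 16 bytes per complex amplitude:

```python
# A rough illustration of the exponential scaling from the other direction:
# how much memory a classical computer would need just to *store* the state
# of an n-qubit register, assuming 16 bytes per complex amplitude (complex128).
def classical_simulation_cost(n_qubits: int) -> str:
    amplitudes = 2 ** n_qubits          # an n-qubit state has 2^n amplitudes
    size = float(amplitudes * 16)       # bytes needed to hold them all
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if size < 1024 or unit == "EiB":
            return f"{n_qubits} qubits -> {amplitudes:,} amplitudes (~{size:,.0f} {unit})"
        size /= 1024

for n in (10, 20, 30, 40, 50):
    print(classical_simulation_cost(n))
```

By 30 qubits that's already around 16 GiB, and by 50 qubits it's up in the petabytes – which is why, beyond a fairly modest size, quantum computers can't simply be simulated on classical hardware.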

What sort of problems need quantum computing?

We won’t be using quantum computers for general personal computing any time soon – Moore’s Law is doing just fine there – but there are a number of areas where quantum computing is better suited than classical computing approaches.

We can potentially use the massive quantum computing power to solve problems like:

  • Cryptography (making it more secure – a quantum computer could break the RSA 2048 algorithm that underpins much of today’s online commerce in around 100 seconds – so we need new models).
  • Quantum chemistry and materials science (nitrogen fixation, carbon capture, etc.).
  • Machine learning (faster training of models – quantum computing as a “co-processor” for AI).
  • and other intractable problems that are supercompute-constrained (improved medicines, etc.).

A universal programmable quantum computer

Microsoft is trying to create a universal programmable quantum computer – the whole stack – and they’re pretty advanced already.

Quantum computing may sound like the technology of tomorrow but the tools are available to develop and test algorithms today and some sources are reporting that a quantum computing capability in Azure could be just 5 years away.

Weeknote 17: Failed demos, hotel rooms, travel and snippets of exercise (Week 18, 2018)

This content is 7 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

This week, I’ve learned that:

  • I must trust my better judgement and never allow anyone to push me into demonstrating their products without a properly rehearsed demo and the right equipment…
  • There are people working in offices today who not only claim to be IT illiterate but seem to think that’s acceptable in the modern workplace:
  • That operations teams have a tremendous amount of power to disregard and even override recommendations provided by architects who are paid to provide solid technical advice.
  • That, in 2018, some conference organisers not only think an all-male panel is acceptable but are hostile when given feedback…

I’ve also:

  • Gone on a mini-tour of Southern England working in London, Bristol and Birmingham for the first four days of the week. It did include a bonus ride on a brand new train though and a stint in first class (because it was only £3 more than standard – I’ll happily pay the difference)!
  • Taken a trip down memory lane, revisiting the place where I started my full-time career in 1994 (only to be told by a colleague that he wasn’t even born in 1994):
  • Squeezed in a “run” (actually more like a slow shuffle) as I try to fit exercise around a busy work schedule and living out of a suitcase.
  • Managed to take my youngest son swimming after weeks of trying to make it home in time.
  • Written my first blog post that’s not a “weeknote” in months!
  • Picked up a writing tip to understand the use of the passive voice:

So the week definitely finished better than it started and, as we head into a long weekend, the forecast includes a fair amount of sunshine – hopefully I’ll squeeze in a bike ride or two!

Keeping up to date with developments in the Microsoft world

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

One of my customers asked me today how I keep up to date with developments in the Microsoft world.

The answer is “with great difficulty”, but I do have a few resources at my disposal. Rather than create a blog post which will quickly be out of date, here’s a OneNote Notebook that has the info (and is more likely to be kept up-to-date).

I also get a fair amount of information directly from Microsoft, either as a P-TSP or through my work at risual but some of that is under NDA. Hopefully the links in the OneNote (which I will expand over time) will help…

Future Decoded 2016 highlights (#FutureDecoded)

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Two years ago, I attended Future Decoded – Microsoft’s largest UK event, which has taken place each November for the last few years at the ExCeL centre in London. It’s a great opportunity to keep up to date with the developments in the Microsoft stack, with separate Business-focused and Technical-focused days and some really good keynote speakers as well as quality breakout sessions.

Future Decoded has particular significance for me because it’s where I “met” risual, who have been headline sponsors for the last 3 events. After the 2014 event, I decided to find out more about risual and, in May 2015 I finally joined the “risual family”. This year I was lucky enough to be on one of our five stands (one headline stand in the form of a Shoreditch pub, complete with risuAle, and one each for our solutions businesses in retail, justice, education and productivity). I had a fantastic (if very tiring) day connecting with former colleagues, customers, industry contacts and potential new customers – as we chatted about how risual could help them on their digital transformation journey.


Whilst I wasn’t able to attend many of the sessions (indeed, I was consulting with a customer in the north-east of England on the first day), I did manage to catch the day 2 keynote and was blown away by some of the developments around machine learning and artificial intelligence (maybe more on that in another post). I also noticed that the teams behind the Microsoft Business (@MSFTBusinessUK) and Microsoft Developer (@MSDevUK) Twitter handles were tweeting sketch notes, which I thought might be a useful summary of the event:

You can also catch all of the main announcements in these two Microsoft live blog posts from the event:

Why Microsoft customers don’t need to worry about EU-US Safe Harbour/Harbor

This content is 9 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

When the European Court of Justice judged the 15-year-old EU-US Safe Harbour/Harbor agreement to be invalid last October, Internet news sites started to report how terrible this was for EU companies placing data into cloud services offered (mostly) by American companies. For some, that may be true, but that assumes Safe Harbour is the only protection in place.

This week, IT news sites are at it again. The Register (the tabloid newspaper of IT news sites) has an article titled Safe Harbor 2.0: US-Europe talks on privacy go down to the wire, but the actual URI reveals a much more dramatic working title of “Safe Harbor countdown to Armageddon”. Sensationalist at best; some might even say irresponsible.

I’m no lawyer but, for my customers, who are implementing Microsoft cloud services, there seems to be nothing to worry about and I’ll explain why in this blog post. Of course, Microsoft is just one of many cloud services providers – and for others there may be valid concerns.

The United States Export.Gov website currently displays the following text regarding Safe Harbor:

“On October 6, 2015, the European Court of Justice issued a judgment declaring as ‘invalid’ the European Commission’s Decision 2000/520/EC of 26 July 2000 ‘on the adequacy of the protection provided by the safe harbour privacy principles and related frequently asked questions issued by the US Department of Commerce.’

In the current rapidly changing environment, the Department of Commerce will continue to administer the Safe Harbor program, including processing submissions for self-certification to the Safe Harbor Framework. If you have questions, please contact the European Commission, the appropriate European national data protection authority, or legal counsel.”

EU Model Clauses trump Safe Harbour

Microsoft President and Chief Legal Officer, Brad Smith, issued a statement on 6 October 2015. Quoting from that article:

“For Microsoft’s enterprise cloud customers, we believe the clear answer is that yes they can continue to transfer data by relying on additional steps and legal safeguards we have put in place. This includes additional and stringent privacy protections and Microsoft’s compliance with the EU Model Clauses, which enable customers to move data between the EU and other places – including the United States – even in the absence of the Safe Harbor. Both the ruling and comments by the European Commission recognized these types of steps earlier today.

Microsoft’s cloud services including Azure Core Services, Office 365, Dynamics CRM Online and Microsoft Intune all comply with the EU Model Clauses and hence are covered in this way.”

There’s also a follow-on post which talks in general terms about the wider issues and privacy beliefs but the key point is that Microsoft offers EU Model Clauses within its contracts, which go beyond Safe Harbour. Microsoft also has an FAQ on the EU Model Clauses that is worth a read.

Quoting again from the 6 October 2015 statement:

“We wanted to make sure all of our enterprise cloud customers receive this benefit so, beginning last year, we included compliance with the EU Model Clauses as a standard part of the contracts for our major enterprise cloud services with every customer. Microsoft cloud customers don’t need to do anything else to be covered in this way.”

That suggests to me that customers who have signed up to Azure Core Services, Office 365, Dynamics CRM Online or Intune since early 2014 already have greater privacy protection than was afforded by Safe Harbour – and that protection meets the EU’s current requirements. In short, Microsoft customers don’t need to worry about Safe Harbor (sic).

One month with the Surface Pro 3

This content is 10 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

When I started my current job and tweeted about my new “laptop” (a Microsoft Surface Pro 3), I was a little surprised at the reaction from some people, including one of my friends, whose words were along the lines of “give it a month and then tell me if you still like it…”

Well, it’s been a month, so here we go…

tl;dr: I really, really like it.

That’s not really much of a review though… so here are some of the things that are good, and some that are less so…

Starting out with the positives:

  • It’s a fully-featured PC. Every time I see someone comparing the Surface with an iPad I cringe. I tried using an iPad as my primary device and it didn’t work for me. I can see why it would for some people but I need to work with multiple applications and task switch, copy and paste text all of the time. The Surface Pro runs Windows 8.1 and does everything I expect of a Windows PC, plus the benefits of having a touch screen display and a tablet form factor.
  • The display is fantastic. Crisp, clear, 2160×1440 (as Ed Bott highlights, that would be called a retina display on an Apple device).
  • The type cover keyboard is really good. Backlit keys, easy to type on, a good size. Combined with the kickstand on the tablet itself, it becomes a fully-featured 12″ laptop and it’s far more stable than many tablet/cover/keyboard combinations.
  • I live in OneNote. I can draw with the Surface Pen now – and that is incredibly useful.
  • It’s light. I haven’t checked how light, but light enough to carry with ease.
  • The power supply is not too big – and it has a USB charging socket too. Having said that, I can usually get through the train in/out of London and a customer meeting on battery alone.

On the downside though:

  • There aren’t enough USB ports and the use of a Mini DisplayPort means I need to carry adaptors. To be fair, I carry quite a few for my other devices too.
  • The price of accessories is way over the top: type cover is a penny under £110; Surface Pen is £45; Docking station is £165. Really? Add that to the cost of the device itself and you could buy a pretty good laptop. (The Surface Pro 3 range starts at £639 but the Intel i5 model with 4GB RAM and 128GB of storage that I use is £849 and the top of the range Intel i7 with 8GB RAM and 512GB storage will set you back £1549).
  • The type cover trackpad is awful. I use a mouse. That’s how bad it is.
  • The pen takes some getting used to (this post from Microsoft helps) – and I ran through the first set of batteries in no time (this support page came in useful too).
  • I’ve had some worrying issues with resuming from standby, sometimes not resuming at all, sometimes having to go through a full reboot. I suspect that’s the Windows build it’s running though – I can’t blame the Surface for that…

I’m more than happy with the Surface Pro 3 (at least, I am until the Surface Pro 4 comes out!). I was given the choice between this and a Dell ultrabook and I’m pretty sure I made the right choice. Maybe if I was a developer and I needed a laptop which was effectively a portable server then that would be a different story – but for my work as a Consultant/Architect – it’s exactly what I need.

If you need a Windows PC, your work is mobile (and not too taxing in terms of hardware requirements), and your employer has the facilities for effective remote working, the Surface Pro 3 is worth a look. I’d even go as far as to say I would spend my own money on this device. That’s more than I can say about any company-supplied PC I’ve had to date.