Do we need another as-a-service to describe functions?

This content is 8 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Last week saw quarterly earnings reports for major cloud vendors and this tweet caught my eye:

You see, despite Azure growing by 93%, this suggests that Amazon has the cloud market sewn up. Except I’m not sure they do…

I think it would be interesting to see this separated into infrastructure-, platform- and software-as-a-service (IaaS/PaaS/SaaS). I suggest that would present three very different stories. And I’d expect that Amazon would only really be way out front for IaaS.

My friend and former colleague, Garry Martin (@GarryMartin) questioned the relevance of those “legacy” distinctions but I think they still have value today.

In the early days of what we now recognise as cloud computing, every vendor was applying their own brand of cloud-washing. It still happens today, with vendors claiming to offer IaaS when really they have a hosted service and a traditional delivery model.

Back in 2011, the US National Institute of Standards and Technology (NIST) defined cloud computing, including the service models of IaaS, PaaS and SaaS. Those service models, along with the (also abused) deployment models (public cloud, private cloud, etc.) have served us well but are they really legacy?

I don’t think they are. Six years is a long time in IT, let alone the cloud, but I think IaaS, PaaS and SaaS are as relevant today as they were when NIST wrote their definition.

When asked how “serverless” technologies like AWS Lambda, Azure Functions or Google Cloud Functions fit in, I say they’re just PaaS. Done right.

Some people want to add another service model/definition for Function-as-a-Service (FaaS). But why? What value does it add? Functions are just PaaS but we’ve finally evolved to a place where we’re moving past the point of caring what the code runs on and letting the cloud manage that for us. That’s what PaaS was supposed to have been doing for years (after all, should I really need to define a number of instances to run my web application? That all sounds a bit like virtual machines to me…)

To my mind, “serverless” is just the ultimate platform as a service and we really don’t need another service model to describe it.
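To make that concrete: in a function-as-a-service model, the unit of deployment really is just a function. Here’s a minimal sketch of an AWS Lambda-style handler in Python (illustrative only – the handler name and the event’s contents are whatever you wire up in your own configuration):

# A minimal "serverless" function: no servers, instances or scaling rules to
# define. The platform invokes the handler once per event and scales it for us.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}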

To quote a haiku from Onsi Fakhouri (@onsijoe):

“Here is my code
Run it in the cloud for me
I don’t care how”

Or, as Simon Wardley (@swardley) “fixed” this Cloud Foundry diagram:

Outlook gotcha: only cached data is exported to data file (.PST)


This weekend, a family project that required its own mailbox ended, meaning I could reduce the number of licences in my Exchange Online subscription. That’s straightforward enough but I wanted to take a backup copy of the email before cutting the mailbox loose.

From the last time I did any Exchange Online administration, I recalled that one of the limitations was that you can’t back up a mailbox to a PST file from PowerShell. That may have changed but the advice at the time was to back up to an Outlook data file (also known as a Personal Folder) in Outlook. It’s clunky but at least it’s functional.

At first, I couldn’t work out why not all of the data was being exported: only the items that were cached locally were included, not the ones that appeared if I clicked on “There are more items in this folder on the server/click here to view more on Microsoft Exchange”. Then I found a clue in a Spiceworks post from Joe Fenninger, where Joe says “Dont [sic] forget to download all [Office 365] content prior to export.”

I needed to adjust the cached mode settings for the mailbox to change how much email is kept offline (the “Mail to keep offline” slider in Outlook’s account settings, which I set to “All”). Once Outlook had finished synchronising, it could export all items to the Outlook Data File, rather than just the ones that were cached locally.

Securing the modern productive enterprise with Microsoft technology


“Cybercrime costs projected to reach $2 trillion by 2019” [Forbes, 2016]

99: The median number of days that attackers reside within a victim’s network before detection [Mandiant/FireEye M-Trends Report, 2017]

“More than 63% of all network intrusions are due to compromised user credentials” [Microsoft]

The effects of cybercrime are tremendous, impacting a company’s financial standing, its reputation and ultimately its ability to provide security of employment to its staff. Nevertheless, organisations can protect themselves: the risks of cyber-attack can be mitigated by applying people, process and technology.

Fellow risual architect Tim Siddle (@tim_siddle) and I have published a white paper that looks at how Microsoft technology can be used to secure the modern productive enterprise. The tools we describe are part of Office 365, Enterprise Mobility + Security, or enterprise editions of Windows 10. Together they can replace many point solutions and provide a holistic view, drawing on Microsoft’s massive intelligent security graph.

Read more in the white paper:

Securing the modern productive enterprise with Microsoft technology

Generating a GPX file for Strava after the tech let me down


This afternoon was glorious. The sun was shining and, even though it was a work day, the company I work for had arranged an afternoon out for staff at Cannock Chase (Go Ape). High ropes, Forest Segway, or Mountain biking activities were all available – right up my street!

I decided I’d like to ride a Segway but I was in the second group (which meant waiting around for an hour or so), so I took a bike out for a little ride whilst I was waiting. Unfortunately, I didn’t have my Garmin with me and my iPhone’s attempts to capture my movements on Strava were woeful.

Shortly after setting off on “Follow The Dog”, I lost the rest of the group (whilst messing around with Strava!) and decided that I would rather come back and ride another day with my son than ride on my own and (potentially) miss the Segway opportunity. But I still wanted to capture the details of the (admittedly short) ride…

Generating a GPX file to upload to Strava is straightforward enough – I used Mapometer for that. Unfortunately, though, Strava won’t accept GPX files that don’t contain time information.

The workaround is to estimate some time data and insert it into the file – which is where the excellent Gotoes site helped! Gotoes has several utilities for Strava and Garmin Connect including:

  • Combining FIT, GPX or TCX files
  • Merging heart rate and position files (FIT/TCX)
  • A bookmarklet to export GPX from Garmin Connect
  • The ability to upload to Strava via email

and… crucially for my needs, a tool to add timestamps to a GPX file.

Using this with an estimate of my total time and the known distance (so an estimated average speed), together with Gotoes’ ability to work out what my speed might have been at different points on the route, I came up with something approximate to put into Strava. I’ve hidden it from leaderboards – because it’s “fake data” – but it’s enough for me to track the distance and the fact I did go for a little bimble.
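If you’d rather not rely on a third-party site, the same trick is easy to script. Here’s a minimal sketch in Python that stamps evenly-spaced times onto each trackpoint of a simple GPX 1.1 file – cruder than Gotoes, which varies the estimated speed along the route – with file names and ride duration made up for illustration:

from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET

NS = "http://www.topografix.com/GPX/1/1"
ET.register_namespace("", NS)  # keep the default GPX namespace on output

def add_times(in_path, out_path, start, total_minutes):
    """Add a <time> element to every trackpoint, spread evenly over the ride."""
    tree = ET.parse(in_path)
    points = tree.getroot().findall(f".//{{{NS}}}trkpt")
    step = timedelta(minutes=total_minutes) / max(len(points) - 1, 1)
    for i, point in enumerate(points):
        time_el = ET.SubElement(point, f"{{{NS}}}time")
        time_el.text = (start + i * step).strftime("%Y-%m-%dT%H:%M:%SZ")
    tree.write(out_path, xml_declaration=True, encoding="UTF-8")

# Hypothetical example: a 30-minute ride starting at 2pm (UTC)
add_times("ride.gpx", "ride_timed.gpx",
          datetime(2017, 3, 1, 14, 0, tzinfo=timezone.utc), 30)

Evenly-spaced timestamps are obviously “fake data” too, which is another good reason to hide the resulting activity from leaderboards.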

Strangely, the iPhone’s GPS performed OK for the Segway ride (which I’ve recorded as an eBike and also hidden from leaderboards):

“You need to work less”. Musings on finding the elusive work-life balance


“You need to work less”, said David Hughes (@davidhughes) as we were discussing why I carried a power supply with my Surface Pro. This was in response to my observation that the device will get me through the work day but not through travel at each end as well.

“Actually, you have a point”, I thought. You see, weekdays are pretty much devoted to work and pseudo-work (blogging, social media, keeping up to date with tech, etc.) – except for meals, sleep, the couple of hours a week spent exercising, and a bit of TV in the evening.

David commented that he reads – rather than working – on the train (I tweet and email but really should read more). And when I asked how he organises his day, he introduced me to Todoist. It seems that having a task list is one thing but having a task list that works for you is something else.

Today was different. I knew I wanted to get a blog post out this morning, finish writing a white paper, and find time to break and meet with David in my favourite coffee shop. I’m terrible at getting up on working-from-home days (more typically working well into the evening instead) but I had managed to be at my desk by 7am and that meant that when I left the house mid-morning I’d already got half a day’s work in. For once, I’d managed some semblance of work-life balance. The afternoon was still pretty tough and I’m still working as we approach 7pm (my over-caffeinated state wasn’t good for writing!) but I met my objectives for the day.

Now I’ve added Todoist to my workflow, I’m hoping to be more focused, to wrap up each day and set priorities for the next. I need to stop trying to squeeze as much as I can into an ever-more-frantic existence and be ruthless about what can and can’t be achieved. Time will tell how successful I am, but it feels better already.

My first PowerApps app – a business mileage recorder


In common with many people who travel for work, I keep a record of my journeys so that I can claim mileage expenses. For the last couple of years, that record has been a spiral-bound notebook (for driving) and Strava (for cycling) – though I haven’t actually claimed any mileage for cycling yet! I wanted to replace my analogue system with a smartphone app and, following a conversation a few weeks ago with my colleague Brian Cain (@BrianCainUC), I decided to create something using Microsoft PowerApps.

For those who are unfamiliar with PowerApps, it’s a technology solution provided by Microsoft to help normal business users – people who are not developers – to create simple applications to connect systems and data. The resulting apps can run on mobile devices, as well as on Windows 10.

PowerApps is available in my Office 365 subscription (though I think there are other ways to sign up too) and I set to work creating my Mileage Recorder. A few minutes later I had something functional. Not long after that, I had tweaked it to be pretty much what I needed. So, I created an app in less than 30 minutes and it’s taken me three weeks to write this blog post! Hmm…

Creating my first PowerApps app

My app is a simple three-screen affair, taking a table in an Excel workbook on OneDrive for Business as its data source. PowerApps recognised the data types in the columns of the table and formatted them accordingly, then I tweaked things a little in PowerApps Studio.

PowerApps Mileage Recorder: home, view and edit screens

I haven’t looked in detail at the architecture used by PowerApps but, essentially, the PowerApps app itself provides a native OS wrapper for any apps that I create. This means my app will work on any platform where PowerApps is supported.

PowerApps Mileage Recorder

I can also create a direct link to the app on my phone’s home screen but the look and feel is that of a PowerApps app – not a native application. None of that is an issue – if I want more complex cross-platform apps then someone who can cut code (not me!) can use Xamarin – but for a simple app, PowerApps seems to do the job.

PowerApps/Mileage Recorder on iOS Home Screen

The PowerApps documentation helped me out a lot – and these were the tutorials I found most useful:

There’s also a useful Q&A on using PowerApps within an organisation.

I did have some challenges worth noting but none are show-stoppers:

  • The Windows 10 smartphone that I use for work doesn’t meet the PowerApps hardware requirements, which is a little bizarre, so I needed to use the app on my personal iPhone. I had created my PowerApp using my employer’s Office 365 tenant and a data source in my work OneDrive, but I also use the Outlook app on iOS to connect to my personal Office 365 tenant and this combination was causing challenges that required re-authentication. I couldn’t find an easy way to move the app between tenants (though I’m sure there is one), so I moved the data source to my own tenant and recreated the PowerApp. I’m pretty sure that there must be a proper way to import and export apps; I just haven’t found it yet!
  • The web version of the PowerApps Studio seems a bit flaky at times but it is still a preview. Installing the Universal Windows Platform (UWP) version on a Windows 10 PC worked flawlessly though, even without any admin rights on my company Surface.
  • I couldn’t work out how to make a date and time field work as a simple date field. I really don’t need to record the time of my journeys – just the date!
  • PowerApps doesn’t support formulae in Excel workbooks. Instead, I had to apply some logic in the app to calculate the miles travelled, which displays in my app but doesn’t get written back to the data source. I’m pretty sure that’s fixable – I just haven’t worked out how, yet…

Is it really a good idea to let users create their own apps?

In my customer conversations, it’s quite common to hear IT people saying they don’t want their users creating PowerApps. I can see why – after all, we’ve all seen Access databases and Excel spreadsheets become “business-critical applications” that then create issues for the IT department. For what it’s worth, my view is that if something is really business critical, the business will invest resources into developing something that’s properly supportable. If it doesn’t reach that bar, then it’s not a business-critical app – and why would you prevent users from generating their own tools that help them to work more effectively, albeit unsupported by corporate IT?

To put it another way, people will do what they need to do to get things done, with or without IT’s blessing – so why not give them the tools to do things in a manner that integrates well with existing (supported) applications and services?

I’ll be at Microsoft tomorrow, attending a training event around PowerApps and Flow. That should give me a good opportunity to build on the experience from creating my Mileage Recorder. Together with PowerBI (something else I really need to learn more about) these technologies provide a trilogy of tools to empower users to do more with data. And on that note, I should probably end this blog post, as I’m starting to sound like a Microsoft marketing representative…

Removing the ability to accidentally email colleagues from my personal mailbox in Office 365


For some time now, Outlook has supported the use of multiple Exchange servers inside a single profile. This is very useful because I can use a single client to connect to my work email (@risual.com), my Microsoft email (until recently), any email accounts that are provided by customers (e.g. for project purposes) and my personal email account.

There are a couple of gotchas though:

  • My employer uses Azure Information Protection (AIP) to classify email and the AIP client will not allow me to send a message unless it’s classified, regardless of whether I’m sending using my risual.com account or one of the others.
  • I have to be careful to make sure that I don’t accidentally send business email from my personal account. This isn’t a problem when responding to an existing message but it can happen if the focus is on my personal Inbox and I start a new message thinking “I just need to email so-and-so about something-or-other” (often out of hours).

The first of these is just a minor inconvenience – I just send as Unclassified if I’m not using my risual.com account. The second requires a little more thought – and my colleague Simon Bilton (@sabrisual) suggested creating a transport rule in Exchange Online (who said Engagement Managers aren’t technical?).

So, as of now, the following rule is in place:

<?xml version="1.0" encoding="utf-16" standalone="yes"?>
<rules name="TransportVersioned">
  <rule name="Prevent accidentally sending work email from personal account" id="a0f59e36-93f1-4f2e-bccb-3eddf0c097e1" format="cmdlet">
    <version requiredMinVersion="15.0.3.0">
      <commandBlock><![CDATA[New-TransportRule -Name 'Prevent accidentally sending work email from personal account' -Comments '
' -Mode Enforce -RecipientAddressContainsWords 'risual.com' -ExceptIfSentTo 'markw@risual.com' -SetAuditSeverity 'High' -RejectMessageReasonText 'This email contains recipients at risual.com and you are sending from your personal account' -RejectMessageEnhancedStatusCode '5.7.1']]></commandBlock>
    </version>
  </rule>
</rules>

This rejects email sent from my Exchange Online subscription to any risual.com address except markw@risual.com. That exception allows my wife (on the same server) to send email to me and still allows me to forward emails to myself at work (e.g. receipts for expenses using my personal email address).

I’ve tested by sending to both markw@risual.com (allowed) and mark@risual.com (blocked) so accidentally emailing someone at work from my personal address is no longer a concern!

Custom mail flow rule blocks email sent to work from personal mailbox

Office 365 data moves are now available for UK customers


Last year I wrote a post about data residency options for Office 365 customers in the UK. At the time, Microsoft was publishing a window for UK-based customers to request data moves between December 2016 and February 2017 but then the web page was updated to say “TBA”. Now, the how to request your data move page has been updated again (thanks to @gavinmorrison for the tip-off), giving UK customers six months between 15 March 2017 and 15 September 2017 to request a move to UK-hosting. Microsoft will then take up to 2 years to complete the move.

This is a one-time opportunity to request a data move (although tenants created after UK datacentre availability will already be hosted in the UK) but it’s only recommended if your organisation has strict data residency requirements. If you don’t see the option to move, it’s probably because:

  • You’re using the old Office 365 Admin Center – the option is only available (under Settings, Organization Profile, Data Residency Option) in the preview Admin Center.
  • Your tenant is not eligible for the move.
  • All of your data is already located in the new region.

Once you’ve started the move process, it cannot be cancelled.


Need an AAAA battery in a hurry? There may be six of them inside a PP3!


A couple of days ago, I was having issues with the Surface Pen that I use with my Surface Pro 3. Microsoft’s Troubleshooting Surface Pen page suggested I needed to replace the AAAA battery and, sure enough, a quick test on a battery tester confirmed that my battery was indeed flat.

I went to Amazon and bought a pack of 4 AAAA batteries and was pretty pleased to find I could get near-instant gratification, with the batteries delivered around 4 hours later!

Then, Gary Quigley (@quiggles) tweeted me to say that a Duracell 9V (PP3) battery has 6 AAAAs inside:

I had to test this out so, yesterday, I disassembled an old battery that was due to be recycled and, sure enough, there were 6 AAAA-sized cells!  In the image below you can see the disassembled PP3 on the right, with the old Duracell AAAA and the new Amazon Basics AAAA cells to the left:

AAAA batteries and similar cells inside a disassembled PP3 battery

Wikipedia suggests that not all PP3 batteries are constructed in this way, so “your mileage may vary” but it might be useful when I use up my current stock of AAAAs!

⚠️ Warning: disassembling batteries is probably not the smartest thing to do. I’m not responsible if you hurt yourself or others as a result of any action you take after reading this blog post.

Designing for failure does not necessarily mean multi-cloud


Earlier this week, Amazon Web Services’ S3 storage service suffered an outage that affected many websites (including popular sites to check if a website is down for everyone or just you!).

Unsurprisingly, this led to a lot of discussion about designing for failure – or not, it would seem in many cases, including the architecture behind Amazon’s own status pages:

The Amazon and Azure models are slightly different but in the past we’ve seen outages to the Azure identity system (for example) impact other Microsoft services (like Office 365). When that happened, Microsoft’s Office 365 status page didn’t update because of a caching/CDN issue. It seems Amazon didn’t learn from Microsoft’s mistakes!

Randy Bias (@RandyBias) is a former director of the OpenStack Foundation and a respected expert on many cloud concepts. Randy and I exchanged many tweets on the topic of the AWS outage but, after multiple replies, I thought a blog post might be more appropriate. You see, I hold the view that not all systems need to be highly available. Sometimes, failure is OK. It all comes down to requirements:

And, as my colleague Tim Siddle highlighted:

I agree. 100%.

So, what does that architecture look like? Well, it will vary according to the provider:

So, if we want to make sure our application can survive a region failure, there are ways to design around this. Just be ready for the solution we sold to the business, based on using commodity cloud services, to start looking rather expensive. Just as on-premises we typically have two datacentres with resilient connections, we’ll want to do the same in the cloud. But, just as not all systems are in all datacentres on-premises, that might also be the case in the cloud. If it’s a service for which some downtime can be tolerated, then we might not need to worry about a multi-region architecture. In cases where we’re not at all concerned about downtime, we might not even use an availability set.

At other times – e.g. if the application is a web service for which an outage would cause reputational or financial damage – we may have a requirement for higher availability. That’s where so many of the services impacted by Tuesday’s AWS outage went wrong:

Of course, we might spread resources around regions for other reasons too – like placing them closer to users – but that comes back to my point about requirements. If there’s a requirement for fast, low-latency access then we need to design in the dedicated links (e.g. AWS Direct Connect or Azure ExpressRoute) and we’ll probably have more than one of them too, each terminating in a different region, with load balancers and all sorts of other considerations.

Because a cloud provider could be one of those single points of failure, many people are advocating multi-cloud architectures. But, if you think multi-region is expensive, get ready for some seriously complex architecture and associated costs in a multi-cloud environment. Just as many enterprises use a single managed services provider in the on-premises world (albeit one with multiple datacentres), many of us will continue to use a single cloud provider in the cloud. Designing for failure does not necessarily mean multi-cloud.
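To stay with the S3 example, here’s a sketch of what single-provider resilience can look like in practice: enabling cross-region replication so that new objects are copied to a bucket in a second region. This is illustrative Python using boto3 – the bucket names, region and IAM role ARN are all made up, and the destination bucket needs versioning enabled too:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Replication requires versioning on the source bucket (and the destination).
s3.put_bucket_versioning(
    Bucket="my-site-us-east-1",
    VersioningConfiguration={"Status": "Enabled"},
)

# Copy every new object to a bucket in a second region, so a regional S3
# outage doesn't take all copies of our content with it.
s3.put_bucket_replication(
    Bucket="my-site-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",  # hypothetical
        "Rules": [{
            "Status": "Enabled",
            "Prefix": "",  # an empty prefix means "replicate everything"
            "Destination": {"Bucket": "arn:aws:s3:::my-site-eu-west-1"},
        }],
    },
)

Even that simple example hints at the cost conversation: a second bucket, replication traffic and the engineering to fail over to it – all within one provider, before multi-cloud even enters the picture.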

Of course, a single-cloud solution has its risks. Randy is absolutely spot on in his reply below:

It could be argued that one man’s “lock-in” is another’s “making the most of our existing technology investments”. If I have a Microsoft Enterprise Agreement, I want to make sure that I use the software and services that I’m paying for. And running a parallel infrastructure on another cloud is probably not doing that. Not unless I can justify to the CFO why I’m running redundant systems just in case one goes down for a few hours.

That doesn’t mean we can avoid designing with the future in mind. We must always have an exit strategy and, where possible, think about designing systems with a level of abstraction to make them cloud-agnostic.

Ultimately though it all comes back to requirements – and the ability to pay. We might like an Aston Martin but if the budget is more BMW then we’ll need to make some compromises – with an associated risk, signed off by senior management, of course.

[Updated 2 March 2017 16:15 to include the Mark Twomey tweet that I missed out in the original edit]