Providing fast mailbox access to Exchange Online in virtualised desktop scenarios


In last week’s post that provided a logical view on end user computing (EUC) architecture, I mentioned two sets of challenges that I commonly see with customers:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

What I didn’t say is that I’m seeing a lot of Microsoft customers who have a combination of these and who are refreshing parts of their EUC provisioning without looking at the whole picture – for example, moving email from Exchange to Exchange Online but not adopting other Office 365 workloads and not updating their Office client applications (most notably Outlook).

In the last month, I’ve seen at least three organisations who have:

  • An investment in non-persistent virtualised desktops (using technology products from Citrix and others).
  • A stated objective to move email to Exchange Online.
  • Office 365 Enterprise E3 or higher subscriptions (i.e. the licences for Office 365 ProPlus – the subscription-based, evergreen Office client) but no immediate intention to update Office from current levels (typically Office 2010).

These organisations are, in my opinion, making life unnecessarily difficult for themselves.

The technical challenges with such a solution come down to some basic facts:

  • If you move your email to the cloud, it’s further away in network terms. You will introduce latency.
  • Microsoft and Citrix both recommend caching Exchange mailbox data in Outlook.
  • Office 365 is designed to work with recent (2013 and 2016) versions of Office products. Previous versions may work, but with reduced functionality. For example, Outlook 2013 and later have the ability to control the amount of data cached locally – Outlook 2010 does not.

Citrix’s advice (in the Citrix Deployment Guide for Microsoft Office 365 for Citrix XenApp and XenDesktop 7.x) is to use Outlook Cached Exchange Mode; however, they also state “For XenApp or non-persistent VDI models the Cached Exchange Mode .OST file is best located on an SMB file share within the XenApp local network”. My experience suggests that, where Citrix customers do not use Outlook Cached Exchange Mode, users have a poor experience connecting to their mailboxes.

Often, a migration to Office 365 (e.g. to make use of cloud services for email, collaboration, etc.) is best combined with Office application updates. Whilst Outlook 2013 and later versions can control the amount of data that is cached, in a virtualised environment this represents a user experience trade-off between keeping login times down and reducing the impact of slow network access to the mailbox.

Put simply: you can’t have fast mailbox access to Exchange Online without caching on virtualised desktops, unless you want to add another layer of software complexity.

So, where does that leave customers who are unable or unwilling to follow Microsoft’s and Citrix’s advice? Effectively, there are two alternative approaches that may be considered:

  • The use of Outlook on the Web to access mailboxes using a browser. The latest versions of Outlook on the Web (formerly known as Outlook Web Access) are extremely well-featured and many users find that they are able to use the browser client to meet their requirements.
  • Third-party solutions, such as those from FSLogix, can be used to create “profile containers” for user data, such as cached mailbox data.

Using faster (SSD) disks for XenApp servers and improving the speed of the network connection (including the Internet connection) may also help but these are likely to be expensive options.

Alternatively, take a look at the bigger picture – go back to basics and look at how best to provide business users with a more flexible approach to end user computing.

Short takes: running apps from unidentified developers on a Mac; Dropbox stuck importing photos on a Mac; and virtual card numbers in Apple Wallet


A collection of snippets that don’t make a full blog post on their own…

Mac apps that won’t open because the developer is unidentified

Every now and again, I’ll download an app that gets flagged as unsigned on my Mac (“can’t be opened because it is from an unidentified developer”). It turns out that, if you hold down the Control key while clicking its icon, you can open it.
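
If you prefer Terminal, the same result can usually be achieved by clearing the quarantine flag that macOS sets on downloaded files. A rough example (the app path is hypothetical – and only do this for software you trust):

# remove the quarantine attribute so Gatekeeper stops flagging the app
xattr -d com.apple.quarantine /Applications/SomeDownloadedApp.app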

Dropbox (Mac) stuck importing photos

I use Dropbox to upload my photos from my phone (it names them nicely for me by date!) and then copy them across to OneDrive (where I have more storage). A few months ago, I had a problem where I couldn’t upload my photos to Dropbox. I’d plug my phone into a Mac and the import would never finish. It showed a camera icon and said it was importing photos but didn’t show any progress, as though the Dropbox app had hung. Looking around on the ‘net, this seems to be a common issue – but there’s no sign of Dropbox fixing it…

In the end, my workaround was to upload the images directly from my iPhone, which seemed to clear the bottleneck, whatever it was…

Virtual card numbers in an Apple Wallet

Those who use their mobile phone for contactless payments (Apple Pay, etc.) may not be aware that each registered card has a virtual card number – the 16-digit number used by the device is not the same as the number on the physical card. That’s why (for example), if you touch in to pay for travel in London using contactless on a card but finish the journey with contactless on your phone, Transport for London won’t realise that the two transactions are linked.

I’m not sure how to find the full card number for the device, but you can find the last 4 digits of the virtual card number by pressing the information (“i”) icon in the lower right of Apple Wallet. That will show a whole host of information, as well as the transaction history.

Device Account Number in Apple Wallet on iOS

A logical view on end user computing architecture


Over the last couple of years, I’ve worked with a variety of customers looking to transform the way they deliver end user computing services. Typically they fall into two camps:

  1. “We invested heavily in thin client technologies and now we’re finding them to be over-engineered and expensive with multiple layers of technology to manage and control.”
  2. “We have a managed Windows desktop running <insert legacy version of Windows and Office here> but the business wants more flexibility than we can provide.”

There are others too (like the ones who bought into a Mobile Device Management platform that’s no longer working for them) but the two examples above are by far and away the most common issues I see. When helping customers to understand their options for providing end user computing services, I like to step up a level from the technology – to get back to the logical building blocks of an end user computing solution. And, over time, I’ve developed and refined a diagram that seems to resonate pretty well with customers as a framework around which to build end user solutions.

Logical view on the end user computing landscape


Starting at the bottom left of the diagram, I’ll describe each of the main blocks in turn:

  • Identity and access: I start here because identity is absolutely key in any modern enterprise. If you’re still thinking about devices and operating systems – you’re doing it wrong (more on that later). Instead, the model is built around using someone’s identity to determine what applications they can access and how data is protected. Identity platforms work across cloud and on-premises environments, provide additional factors for authentication, self-service functionality (e.g. for password and group management), single sign-on to both corporate and cloud applications, integration with consumer and partner directory services and the ability to federate (i.e. to use a security token service to authenticate on-premises).
  • Data protection: with identity frameworks in place, let’s turn our attention to the data. Arguably there should be many more building blocks here but the main ones are around digital rights management, data loss prevention and endpoint security (firewalls, anti-virus, encryption, etc.).
  • Connectivity: until we all consume all of our services from the cloud, we generally need some form of connectivity to “the mothership”, whether that’s a client-less solution (like Microsoft DirectAccess) or another form of VPN. And of course that needs to run over some kind of network – typically a Wi-Fi or 4G connection but maybe Ethernet.
  • Devices: Arguably, there’s far too much attention paid to different types of devices here but there are considerations around form factor and ownership. Ultimately, with the correct levels of management control, it shouldn’t matter who owns the device but, for now, there’s a distinction between corporately-owned and user-owned devices. And what’s the “other” for? I use it as a placeholder to discuss embedded systems, etc.
  • Desktop operating system: Windows, MacOS, Linux… increasingly it doesn’t matter what the OS is as apps run cross-platform or even in a browser.
  • Mobile operating system: iOS, Android (maybe Windows Mobile). Again, it’s just a platform to run a browser – though there are considerations around native applications, app stores, etc. (we’ll come back to those in a short while).
  • Application delivery: this is where the “fun” starts. Often, this will be influenced by some technical debt – and many organisations will use more than one of the technologies listed. Apps may be locally installed – and they can be managed using a variety of management tools. In my world it’s System Center Configuration Manager, Intune and the major mobile app stores but, for others, there may be a different set of tools. Then there’s virtualised/containerised applications, remote desktops and published applications, trusted apps that run from a file share and, finally, the panacea that is a browser-delivered app. Which makes me think… maybe this diagram needs to consider add-ins and extensions… for now, let’s keep it simple.
  • Device and asset management: until we live in a world of entirely user-owned devices, there are assets to manage. Then, sadly, we have to control devices – whoever they belong to – whether that’s policy-driven device and application management, more traditional configuration management, or just the provision of a catalogue of approved applications. Then there’s alerting, perhaps backups (though maybe not if the data is stored away from devices) and something I’ve referred to as “desktop optimisation” which is really the management tools for some of the delivery methods and tools described elsewhere.
  • Productivity services: name your poison – Office 365 or G-Suite – it doesn’t matter; these are the things that people do in their productivity apps. You may disagree with some of the categories (would Slack fit into enterprise social networking, or is it team sites?) but ultimately it’s about an extensible set of productivity services that end users can consume.
  • Input/output services: I print very little but others print a lot. Similarly, there’s scanning to be done. The paperless office is still not here…
  • Environmental management: over time, this will fade away in favour of mobile device and application management solutions but, today, many organisations still need to consider how they control the configuration of desktop operating systems – in the Windows world that might mean Group Policy and for other platforms it could be scripted.
  • Business data and applications: all of the stuff above means nothing if organisations can’t unlock the potential of their data – whether it’s in the CRM or ERP system, end user-driven reporting and BI, workflow or another line of business system.
  • High availability and business continuity: You’ll notice that this block has no subcomponents. For me, it’s nothing more than a consideration. If the end user computing architecture has been designed to be device and platform agnostic, then replacing a device should be straightforward – no need to maintain whole infrastructures for business continuity purposes. Similarly, if all I need is a device with an Internet connection and a browser, then the high availability conversation moves away from the end user computing platform and into how we provide the services that end users need to access.

I’m sure the model will continue to develop over time – it’s far from perfect and some items will be de-emphasised over the years (for example the differentiation between mobile and desktop operating systems will become less important) whilst others will need to be added, but it seems a reasonable starting point around which to start a discussion.

Finding the PlanId for a Microsoft Planner Plan


Yesterday, I wrote about creating Microsoft Planner tasks from email using Microsoft Flow. At the time, my flow wasn’t quite working because, for some reason, Flow wouldn’t pull through the details of all of my plans. I even deleted and recreated a plan but Flow would only show me one. And entering a custom value with the name of my plan in my flow resulted in "Schema error for field PlanId in entity Task: Field failed schema validation".

That was until I found a very useful nugget of information in the PowerApps Community forums: to find the PlanId, open the corresponding plan in a browser – the last part of the URL contains it:

Finding the PlanID for a Microsoft Planner Plan

Put that into your flow and the corresponding list of BucketIds should then be visible:

Bucket Id located based on the Plan Id

Now my flow runs and puts the plain-text contents of an email into the subject of a new task. Unfortunately, I’m still working on how to populate other fields in the task and I think I may have hit the current limits of the Microsoft Flow-Planner integration.
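
If the Flow connector’s limits become too restrictive, the same PlanId and BucketId can be used to call the underlying Microsoft Graph API directly. As a rough sketch (assuming an OAuth access token in $TOKEN with the appropriate Planner permissions – the IDs below are placeholders):

# list the buckets in a plan (a handy way to confirm the PlanId and find the BucketIds)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/planner/plans/YOUR-PLAN-ID/buckets"

# create a new task in a given plan and bucket
curl -s -X POST "https://graph.microsoft.com/v1.0/planner/tasks" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"planId": "YOUR-PLAN-ID", "bucketId": "YOUR-BUCKET-ID", "title": "Interesting tweet to read later"}'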

Creating Microsoft Planner tasks from email using Microsoft Flow


Work is pretty hectic at the moment. To be honest, that’s not unusual but scanning through tweets at lunchtime or at the start/end of the day is not really happening. I tend to take a look in bed (a bad habit, I know) and often think “that looks interesting, I’ll read it tomorrow” or “I’ll retweet that, but in the daytime when my followers will see it”.  At the moment, my standard approach is to email the tweets to myself at work but, 9 times out of 10, they just sit in my Inbox and go no further.

So, I thought I’d set up a Kanban board in Microsoft Planner for interesting tweets (I already have one for future blog posts). That’s pretty straightforward but one of the drawbacks with Planner is that you can’t email tasks to the plan. That’s a pretty big omission in my view (and it seems I’m not alone) as I believe it’s something that can be done in Trello (which is the service that Planner is trying to compete with).

I got thinking, though: one of the other services that might help is Microsoft Flow. What if I could create a flow to receive an email (in my own mailbox), create an item in a plan, and then delete the email?

The first challenge was receiving the email. I set up a new email alias for my account but mail sent to interestingtweets@markwilson.it wouldn’t trigger the flow, because it’s a secondary address.

So, I switched to looking for a particular string in the subject of the email. That worked. But creating an item in the plan was failing with a “Bad Request” error. I took a look at the advice for troubleshooting a flow and, digging a little deeper, found the failure message "Schema error for field Assignments in entity Task: Field failed schema validation". That was because I was using dynamic content to assign the task to myself (so I removed that setting).

This left me with a different message: "Schema error for field Title in entity Task: Field failed schema validation". That turned out to be because I was using the message body as the task title and Planner was only happy if I sent it as plain text (not as HTML). I can convert the HTML to plain text in Flow, but multi-line content still fails validation…

So far, I’ve been able to successfully create tasks from single-line emails in one of my plans but not in the one I created for this purpose (it’s not appearing as a target and, if I enter the name manually, the flow fails with "Schema error for field PlanId in entity Task: Field failed schema validation")… I’ve made the plan publicly visible, so I’ll wait and see if that makes a difference (it hasn’t so far). If not, I may need to remove and recreate the plan.

So near, yet so far. And ideally, I’d be able to do something more intelligent with the task items (like to read links from the email and add them as links to the task in Planner) – maybe what I want is too much for Flow and I need to use a Logic App instead.

At the moment, this is what my Flow looks like:

Microsoft Flow to create a task in Microsoft Planner from an email

When I have it working with marking the email as read, I’ll change it over to deleting the email instead – after all, I don’t need an email and a task in Planner!

Using Mail Users/Contacts to redirect email in Exchange Online


Being a geek, I have a few domain names registered, including one that I use for my family’s email. I don’t pay for Exchange Online mailboxes for us all though. Instead, I have a full Office 365 subscription, my wife has Exchange Online and the children (and some other family members) use a variety of free accounts from Apple, Google, Microsoft, etc.

Earlier this evening, my son asked me to switch his family email from iCloud to GMail (he has an Android phone, and was getting annoyed with winmail.dat files in iCloud), so I had to unpick my email redirection method… which seemed like a good time to blog about it…

Obviously, my wife and I have full mailboxes in Exchange Online but other family members are set up as contacts. In the Office 365 Admin Center they show up as unlicenced users but if I drill down into the Exchange Admin Center I have some more control.

Each family member is set up as a contact/Mail User. Each contact has been set up with at least two email addresses:

  • user@myfamilydomain.com (that’s not the real name but it will do for this example);
  • user@externalmailprovider.com (i.e. user@icloud.com, user@gmail.com, user@hotmail.com).

By setting the primary email address (the one prefixed with upper case SMTP: rather than lower case smtp:) to user@gmail.com (or wherever), mail received at user@myfamilydomain.com is redirected to their “real” email address – so, for example, a contact might have SMTP:user@gmail.com as the primary address and smtp:user@myfamilydomain.com as a secondary one.

Exchange Online shows them as type Mail User and lists their external email address as the primary.

Bids, tenders, requests for information and word counts…


I won’t go into the details (internal company stuff that shouldn’t be on a blog) but at the moment I’m working on a lot of bids, tenders, requests for quotations, requests for information, etc., etc.

I’ve done this sort of work before and it’s not a great fit for me but sometimes it has to be done. It’s my turn. But I hadn’t realised until recently why it is that I struggle so much…

Over the years, I’ve learned to deal with ambiguity; I’ve learned how to respond without having all the facts. I can write convincing copy (at least I think I can) and I can usually spell (despite a colleague suggesting yesterday that I should check a dictionary because “siloes” didn’t look right to him and maybe it should be “silo’s” – arghhh!).

It was my wife who pointed out to me that the very same attributes and skills that help me as an architect (general pedantry; taking the time to consider the various consequences of choices made; a desire to put in place controls to get things right and to do them well) hinder me in a high-pressure sales environment where I don’t have time to think and where everything is urgent/important and needs to be done NOW (or very soon after now)…

…and relax. Because it’s Friday night. And, in a short while, I will have a beer, or a glass of wine, in my hand.

Anyway, what is the point of this drivel? The ranting ramblings of an Architect? No. Ah, yes, word counts.

Counting words in a document – or in a cell in a spreadsheet…

Lots of bid responses are limited in the number of words that can be accepted. Often, the tool I’m using is Microsoft Word and it’s pretty easy to show the word count for a document or part of a document. Sometimes though, I’m using a different tool to create a document. Like Microsoft Excel.

I was working on a form of response that lists several skills and requires a response of less than a hundred words for each. Sounds easy? Maybe, but thirty 100-word responses are still 3000 words… and only having 100 words to detail experience can be limiting sometimes.

I needed a method to count the number of words in a cell of the spreadsheet and, as usual, I found the answer online:

=IF(LEN(TRIM(A1))=0,0,LEN(TRIM(A1))-LEN(SUBSTITUTE(A1," ",""))+1)

Basically, this counts the spaces between words – comparing the length of the (trimmed) string with the length of the same string with all the spaces removed – and adds 1, or returns 0 if there is nothing in the cell (the TRIM function strips any extra spacing). For example, “thirty 100-word responses” is 25 characters long, or 23 characters with the spaces removed, so the formula returns 25-23+1 = 3 words. It’s pretty crude but, assuming no hyphenated words or solidi (oblique slashes), it will give a good enough count of the number of words in the cell. Definitely a time-saver for me…
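
As a quick sanity check, the same crude space-counting logic is what wc -w does in a Mac or Linux shell (the sample text here is just an example):

# count whitespace-separated words, much like the spreadsheet formula
echo "thirty 100-word responses are still 3000 words" | wc -w
# 7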

Bonus tip

Excel does have a spell checker – it’s just not very obvious. Just press F7 (or go to the Review tab and choose Spelling). This only works in the desktop client – not Excel Online.

Where did my blog go? And why didn’t I have backups?


Earlier this week I wrote a blog post about SQL Server. I scheduled it to go live the following lunchtime (as I often do) and then checked to see why IFTTT hadn’t created a new link in bit.ly for me to tweet (I do this manually since the demise of TwitterFeed).

To my horror, my blog had no posts. Not one. Nothing. Nada. All gone.

Where did all the blog posts go?

As I’m sure you can imagine, the loss of 13 years’ work invoked mild panic. But never mind, I have regular backups to Dropbox using the WordPress Backup to Dropbox plugin. Don’t I? Oh. It seems the last one ran in March.

History: Backup completed on Saturday March 11, 2017 at 02:24:40.

Ah.

Having done some research, it seems my backups had probably been failing since my hosting provider updated PHP on the server. My bad for not checking them more carefully. I probably hadn’t noticed because the backup was partially running (so I was seeing the odd file written to Dropbox) and I didn’t realise the job was crashing part way through.

Luckily for me, the story has a happy ending*. My hosting provider (ascomi) kept backups (thank you Simon!). Although they found that a WordPress restore failed (maybe the source of the data loss was a database corruption), they were able to restore from a separate SQL backup. All back in a couple of hours, except for the most recent post, which I can write again one evening.

So, what about fixing those backups, Mark?

Tonight, before writing another post, I decided to have a look at the broken backups.

I’m using WordPress Backup to Dropbox v4.7.1 on WordPress 4.8, both of which are the latest versions at the time of writing. The Backup Monitor log for my last (manual) attempt at backing up read:

22:55:57: A fatal error occured: The backup is having trouble uploading files to Dropbox, it has failed 10 times and is aborting the backup.
22:55:57: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/video-seo.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:56: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/configuration-service.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:56: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/banner/news-seo.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:55: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/editicon.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:54: Processed 736 files. Approximately 14% complete.
22:55:54: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/link-out-icon.svg’ to Dropbox: unexpected parameter ‘overwrite’
22:55:53: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/extensions-local.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:53: Error uploading ‘/home/mwilson/public_html/blog/wp-content/plugins/wordpress-seo/images/question-mark.png’ to Dropbox: unexpected parameter ‘overwrite’
22:55:48: Processed 706 files. Approximately 13% complete.
22:55:42: Processed 654 files. Approximately 12% complete.
22:55:36: Processed 541 files. Approximately 10% complete.
22:55:30: Processed 416 files. Approximately 8% complete.
22:55:24: Processed 302 files. Approximately 6% complete.
22:55:18: Processed 170 files. Approximately 3% complete.
22:55:12: Processed 56 files. Approximately 1% complete.
22:55:10: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/redirection-en_GB.mo’ to Dropbox: unexpected parameter ‘overwrite’
22:55:10: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/widget-logic-en_GB.mo’ to Dropbox: unexpected parameter ‘overwrite’
22:55:09: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/widget-logic-en_GB.po’ to Dropbox: unexpected parameter ‘overwrite’
22:55:09: Error uploading ‘/home/mwilson/public_html/blog/wp-content/languages/plugins/redirection-en_GB.po’ to Dropbox: unexpected parameter ‘overwrite’
22:55:06: SQL backup complete. Starting file backup.
22:55:06: Processed table ‘wp_yoast_seo_meta’.
22:55:06: Processed table ‘wp_yoast_seo_links’.
22:55:06: Processed table ‘wp_wpb2d_processed_files’.
22:55:06: Processed table ‘wp_wpb2d_processed_dbtables’.
22:55:06: Processed table ‘wp_wpb2d_premium_extensions’.
22:55:06: Processed table ‘wp_wpb2d_options’.
22:55:06: Processed table ‘wp_wpb2d_excluded_files’.
22:55:06: Processed table ‘wp_users’.
22:55:06: Processed table ‘wp_usermeta’.
22:55:06: Processed table ‘wp_terms’.
22:55:06: Processed table ‘wp_termmeta’.
22:55:06: Processed table ‘wp_term_taxonomy’.
22:55:06: Processed table ‘wp_term_relationships’.
22:55:05: Processed table ‘wp_redirection_logs’.
22:55:05: Processed table ‘wp_redirection_items’.
22:55:05: Processed table ‘wp_redirection_groups’.
22:55:05: Processed table ‘wp_redirection_404’.
22:55:03: Processed table ‘wp_ratings’.
22:55:03: Processed table ‘wp_posts’.
22:54:54: Processed table ‘wp_postmeta’.
22:54:49: Processed table ‘wp_options’.
22:54:48: Processed table ‘wp_links’.
22:54:48: Processed table ‘wp_feedfooter_rss_map’.
22:54:48: Processed table ‘wp_dynamic_widgets’.
22:54:48: Processed table ‘wp_comments’.
22:54:45: Processed table ‘wp_commentmeta’.
22:54:43: Processed table ‘wp_bad_behavior’.
22:54:43: Processed table ‘wp_auth0_user’.
22:54:43: Processed table ‘wp_auth0_log’.
22:54:43: Processed table ‘wp_auth0_error_logs’.
22:54:43: Starting SQL backup.
22:54:41: Your time limit is 90 seconds and your memory limit is 128M
22:54:41: Backup started on Tuesday July 25, 2017.

Hmm, a fatal error caused by an unexpected ‘overwrite’ parameter when uploading files to Dropbox… I can’t be the only one having this issue, surely?

Indeed not, as a quick Google search led me to a WordPress.org support forum post on how to tweak the WordPress Backup to Dropbox plugin for PHP 7. And, after making the following edits, I ran a successful backup:

“All paths are relative to $YOUR_SITE_DIRECTORY/wp-content/plugins/wordpress-backup-to-dropbox.

In file Dropbox/Dropbox/OAuth/Consumer/Curl.php: comment out the line:
$options[CURLOPT_SAFE_UPLOAD] = false;
(this option is no longer valid in PHP 7)

In file Dropbox/Dropbox/OAuth/Consumer/ConsumerAbstract.php: replace the test if (isset($value[0]) && $value[0] === '@') with if ($value instanceof CURLFile)

In file Dropbox/Dropbox/API.php: replace 'file' => '@' . str_replace('\\', '/', $file) . ';filename=' . $filename with 'file' => new CURLFile(str_replace('\\', '/', $file), "application/octet-stream", $filename)"

(Actually, a comment further down the post highlights that there’s a missing comma after $filename on that last edit, so it should be 'file' => new CURLFile(str_replace('\\', '/', $file), "application/octet-stream", $filename),)

So, that’s the backups fixed (thank you @smowton on WordPress.org). I just need to improve my monitoring of them to keep my blog online, and my blood pressure at sensible levels…
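
On the monitoring front, even something crude would have caught this months earlier – for example, a daily cron job that checks the age of the newest file in the Dropbox backup folder. A rough sketch (the folder path, age threshold and email address are all assumptions for illustration):

#!/bin/sh
# warn if nothing has been written to the backup folder in the last two days
BACKUP_DIR="$HOME/Dropbox/wpb2d"
if [ -z "$(find "$BACKUP_DIR" -type f -mtime -2 2>/dev/null)" ]; then
  echo "No WordPress backup files newer than two days in $BACKUP_DIR" \
    | mail -s "Blog backup warning" me@example.com
fi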

 

*I still have some concerns, because the data loss occurred the night after a suspected hacking attempt on my LastPass account, which seems to have been thwarted by second-factor authentication… at least LastPass say it was…

Serverless and the death of DevOps


A couple of weeks back, I took a trip to London after work to attend the latest CloudCamp meet-up. It’s been a while since I last went to CloudCamp but I was intrigued by the title of the event: “Serverless and the death of DevOps?”. The death of DevOps? Surely not. Most organisations I’m working with are only just getting their heads around what DevOps is. Some are still confusing a cultural change with some tools (hey, we’ll adopt some new tools and rebrand our AppDev function as DevOps). If anything, DevOps is at the top of the hype curve; it can’t possibly be dead!

Well, five minutes into the event, after Simon Wardley (@SWardley)’s introduction, I could see where he was coming from. Mix the following up with some “Wardley Mapping” and you can see that what’s being discussed is not really the death of DevOps (as a concept where development and operations teams work in a more integrated fashion) but the rise of a new cloud computing paradigm, in the form of “serverless” computing (AWS Lambda, Azure Functions, etc.):

  • Back in the beginning of computing, systems were hard-wired (e.g. Colossus).
  • Then, they evolved and we had custom-built computing (e.g. Leo) with the concept of applications and an operating system.
  • This evolved and new products (like the IBM 650) were born with novel architectural practices, based around the concept of compute as a product.
  • These systems had a high mean time to recover (MTTR) so the architecture of the day was designed around N+1, DR tests, scaling up.
  • Evolution continued and novel architectural practices became emerging practices, then good practice. Computing became more resilient.
  • Next came frameworks. We had applications and an emerging coding practice based around these frameworks, running on an operating system using good architectural practice, all built around the concept of compute as a product (a server).
  • All was happy.
  • Then along came the cloud. Compute was no longer a product but a utility. It brought new benefits of efficiency, pooling resources, agility. Computing had new sources of worth.
  • And organisations said “make my legacy cloudy” [actually, this is as far as many have got to…].
  • Some people asked “but shouldn’t architecture evolve too?” And, after the initial cries of “burn him, heretic”, a new novel architectural practice emerged, built around a low MTTR. It took seconds to get a new virtual machine, distributed systems were designed for failure, indeed chaos monkeys were introduced to the environment to introduce failure and ensure resilience. We introduced co-evolution (which has been practiced in other fields throughout history) and we called it DevOps.
  • This evolved until it became good architectural practice for the utility world and the old practices for a product world became legacy.
  • The legacy world was held back by inertia but the cloud was about user needs, measurement, automation, collaboration and fast feedback.
  • Then a new tribe began to rise up, using commodity operating systems and functions as a framework. This framework is becoming a utility and it will move from emerging to good practice, then best practice – and “serverless” will be the future.
  • The old world will become legacy. Even the wonderful world of “DevOps”.
  • But, for now, if we say that “DevOps” is legacy, the response will be “burn him, heretic”.

So that’s the rise of serverless and the “death of DevOps”.

[Simon Wardley does a much better job of this… hopefully, there’s a video out there of him explaining the above somewhere…]

Installing youtube-dl on a Mac to watch YouTube videos when working offline


My work pattern at the moment means that I’m spending a lot of time travelling on trains up and down the country (in fact, as this post is published, I’ll be somewhere between Bedford and Sheffield). A combination of fatigue and motion sickness means that this isn’t always a good opportunity to work on the train but it is potentially an opportunity to listen to podcasts or, unlike when I’m driving, to watch some videos. Unfortunately, travelling at 125 miles an hour with a varying quality of 4G data signal doesn’t always lend itself well to streaming from YouTube, etc.

That’s where youtube-dl comes in – I can download videos to my MacBook before I leave home, and watch at my leisure on the train. So, how do you get started?

Well, the Mac App Store website helped me. Following advice there, I issued two commands in Terminal: the first to install the Homebrew package manager for macOS, and the second to install youtube-dl:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null

brew install youtube-dl

So, with youtube-dl installed, how do I use it? The youtube-dl readme.md file has lots of information but it’s not exactly easy to digest.

I found that:

youtube-dl -F youtubeurl

would give me a list of available video formats for a given URL, and reading about YouTube media types led me to the all-important format 22 – MP4 video at 720p with H.264 encoding and AAC audio. That should play on a wide variety of devices (including QuickTime on my Mac).

Next, to download:

youtube-dl -f 22 https://www.youtube.com/channel/UCz7bkEsygaEKpim0wu_JaUQ

(this URL is the CloudTechTV channel).

That command brought down all of the videos in the channel but I can also download individual episodes, for example:

youtube-dl -f 22 https://www.youtube.com/watch?v=ymKSGTR55LQ
I can do something similar for other YouTube videos/channels (and even for some other video services) and build a library of videos to watch on my journeys, without needing to worry about an Internet connection.
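
To make building that library easier, youtube-dl can also keep track of what it has already fetched and file the downloads by channel – something along these lines (the output path is just an example, and the format selector falls back to the best MP4 at up to 720p if format 22 isn’t offered):

# download only new videos, named by channel and title, skipping anything already fetched
youtube-dl -f '22/best[ext=mp4][height<=720]' \
  -o "$HOME/Movies/YouTube/%(uploader)s/%(title)s.%(ext)s" \
  --download-archive "$HOME/Movies/YouTube/downloaded.txt" \
  https://www.youtube.com/channel/UCz7bkEsygaEKpim0wu_JaUQ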