Some thoughts on wearable tech…


Almost a year ago, I wrote a great blog post about the rise of wearable technology. I know it was great, possibly my best ever, and it was inspired by some thoughts on the journey to work, hurriedly scribbled on a piece of paper when I got to the office, waiting to be fleshed out with research and published. Except you won’t find it anywhere, because I accidentally put the scrap of paper into the confidential waste bin…

Wearable hype

Fast forward to January 2014 and it seems that wearable tech is at the peak of the hype curve. I haven’t actually checked the analyst reports – it’s just an observation based on the tech news that’s reached me of late, particularly since CES.

So, whether it’s personal health and fitness, location tracking, lifelogging, information on the move – or something else we haven’t thought of yet – wearable seems to be the first buzzword of 2014. But the key to making it successful is an ecosystem for devices to work together. My data is only useful when combined with other data – islands of Fitbit, Nike FuelBand, Google Glass (bad example – we’ll come back to that in a moment) and other data sources are really just of personal interest/vanity (a bit like me publishing my exercise activities on Facebook) until they are combined to actually mean something. Whilst Cisco’s vision of a connected world sounds a little too Orwellian for me, there is definitely something there – could wearable tech, combined with other machine-to-machine (M2M) communications (e.g. home automation), provide enough benefits to make us all sign up?

That’s where we come to Google. The current incarnation of Google Glass may be a bit clunky, but it will improve. Google’s acquisition of Nest is surely intended to provide a foothold in home M2M communications. Most people seem increasingly accepting of the ad agency’s services*, prepared to exchange their personal data for “free” services that are of value. Google is about data. Vast amounts of it – and we’re generating more and more!

The personal communications hub

So what’s at the centre of all of this tech – surely something is needed to act as a broker, to combine data feeds and act as a personal communications hub in an ever-connected world? Ah yes, communications hub – that will be our smartphones then. And forget worrying about where we’ll put the wearable clothing: as the technology develops, the sensors will become less obtrusive and less noticeable.


The smartphone, meanwhile, remains the voice, video and data device that we carry about our person at all times. Increasingly powerful, with better battery life and containing a plethora of sensors, it can interact with other devices about our person and provide the conduit for our personal data streams – to and from the ’net. Google is well placed with its online services, Android operating system, and recent steps into the wearable tech and M2M marketplaces (indeed, I’d argue that wearables are just one part of a much wider M2M market, and it will soon be difficult to separate them). The question is, in this “post-PC” world, what are Apple and Microsoft doing to follow? An Apple iWatch has been rumoured for years – and I’m sure it will offer a vastly improved experience compared with Samsung’s Galaxy Gear – but for now it’s just vapourware (or is that vapourwear?). Microsoft had the idea of building connected screens for its devices a few years ago but they just didn’t take off. Google’s Glass is the closest anyone has come to something that might just be acceptable in the marketplace, but I firmly believe that the watch/glasses/whatever-the-wearable-interface-is is simply that – an interface – something to use instead of pulling our smartphones out of our pockets.

One thing’s for sure, as wearables and M2M comms become more and more established, we’ll start to see some amazing uses for technology – as long as the privacy concerns can be overcome. It’s a bit too soon to say who will dominate, but short of a new entrant taking the market by storm, or an industry-wide federation of companies creating an ecosystem for smart devices, my money is on Google.

Random thoughts

This is certainly not my best ever blog post – perhaps it’s little more than a jumbled collection of thoughts – but at least I got something on the web this time (unlike the thoughts I had on virtual currencies in 2011, which never got further than an item on the “to do” list). Talking of random thoughts, that reminds me… somewhere I have a t-shirt with a built-in light-up graphic equaliser – is that an example of mid-2000s wearable tech?

* I believe that most of Google’s revenues still come from search – but clearly not in the UK, or else there would be corporation taxes to pay.

My email SLA


Returning to work this week after almost two weeks with my family was not pleasant. In particular, I knew that I had over 1500 items in my three inboxes (direct, copied, external) and I’d long since abandoned Inbox Zero (despite loving my mental state when I do get it working for me). I’d intended to use the last couple of days before Christmas to fix this, but found myself working on various crises until I finally logged off for the holidays (and afterwards too…).

This week, I’ve tweeted a couple of times on what might be called “productivity tips” – or teaching others how you expect to engage. It started out with an excellent email 101 post from Wes Miller (@getwired), which looks at something many organisations suffer from: too many meetings, and too much email. For me, the last paragraph says it all.

Then, last night, I saw that Alan Berkson (@berkson0) wrote an article for Social Media Today aimed at setting expectations for customer service. Even if you don’t interact directly with customers, it’s highly likely that you have “internal customers” – people in your organisation who rely on you to respond to their requests. So, I’ve taken his tip to update my email signature to set expectations re: replies – call it an “email SLA” if you like – after all, email is an asynchronous communication mechanism:

“Please note that, whilst I generally try to respond to emails sent directly to me within 24 hours, this is not always possible. If your message is urgent (i.e. requires same-day or next-day action), please feel free to call me and, if necessary, leave a message on my mobile phone.  My Calendar is also open to view. Messages on which I’m copied (CC or BCC) are assumed to be for information only and it may be longer before they are read/acted upon.”

Added to that, my out-of-office message is frequently set, even when I’m in the office, just to say “I’m really, really busy and these are the people who might be able to help whilst I can’t”.

One final point: whilst you’re setting expectations around email, share your calendar too… getting others to actually look at it before booking meetings or calling you – well, that’s another issue entirely…

Good old-fashioned incandescent light bulbs


The main light bulb blew in our living room earlier this week. Nothing unusual there – it happens. Except have you tried to buy a dimmable, bayonet-fitted, pearl-finish, low-energy, 100W-equivalent bulb? If anyone has found one, I’d be interested to know more because, for the life of me, I can’t find a modern replacement.

Luckily, our local independent hardware store (no relation) still stocks “old skool” light bulbs, so we’ve stocked up. I like to be more energy-efficient where it makes sense, but the major retailers who have stopped stocking these bulbs have done so because they were lobbied by government on the grounds that the bulbs are inefficient – except all that heat they give off is not going to waste… it just means the central heating doesn’t need to work so hard, surely?

Of course, I’d like to save money, just like the next man, but have you tried to navigate the maze of light bulbs in the average DIY store recently?

Unfortunately, it looks as though the dimmer switch has failed too as the new bulbs won’t fully illuminate in the living room (but are fine elsewhere)… that’ll be a job for me this weekend then…

Remembering the changes made to my blog’s theme


Every year I need to update the copyright notice on my blog, and every year I forget where to do it in the theme for the site.

I should use a dynamic copyright date, as suggested by James Bavington (@jamesbavington):


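The gist is a one-liner in the theme template, something like this (a minimal sketch – the start year is my assumption, not part of James’s suggestion):

&copy; 2004-<?php echo date('Y'); ?>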
Or perhaps this more elegant solution, highlighted to me by Garry Martin (@garrymartin) – but my late-night coding changes just broke my blog, so I did it the old way. Whilst I was at it, I applied an updated version of the theme, so this post is more of a “note to self” listing the things I need to change when theme updates come along:

  • Include the various items in the header.php file that are needed for Google (and others) site verification, Google Analytics, Facebook Like button, Google+ badge, better search results, etc.
  • Put back in the style.css stylesheet amendments that adjust colours, etc.
  • Edit the functions.php file to make sure that the footer_link() function has the correct attribution and copyright notice…

Could low-cost tablets actually knock the iPad off its perch?


Last weekend, my family went on a theatre trip to the pantomime. After Snow White had been rescued from her slumber by a charming prince, there was a short interlude whilst “Herbert the henchman” invited children with “golden tickets” onto the stage. Asked what she had received for Christmas, one six-year-old said “A Hudl”.

The bemused actor had not heard of a Hudl before, and she went on to explain: “it’s like an iPad, but without the button”.

Aside from amusing me that the Tesco device might actually have a name that could catch on with consumers (cf. the Kindle Fire HD that my kids referred to as “an iPad Kindle”), this got me thinking. Could the low-cost tablets from Tesco, Argos, et al be about to shake the iPad off its perch? I was reading a Which? report over Christmas which lauded the iPad Mini as a great small-form-factor tablet, but it’s expensive. Meanwhile, even my Mum has bought a £100 Acer tablet (I wish she’d spoken to me first, but never mind).

My father-in-law was amazed that six-year-olds would be given a tablet, but I highlighted that, at £120 (or as low as £60 with Clubcard vouchers), it was a consumable device – and that’s the beauty. It doesn’t have to be great, just good enough and cheap. After all, my very expensive 64GB 3G first-generation iPad was thrown on the scrap heap by Apple, with no more OS updates, after about two years. Why spend £700 when I can spend far less and upgrade more frequently? The Google Nexus may be technically superior, but buying a £120 tablet is very low risk.

Let me be clear: Apple has some great premium products – but with mass-market acceptance of Android, it has a problem. Whilst some of my friends have purchased iPad Minis for the family (one Christmas Day Facebook update read “Operation iPad Mini declared a success – never seen the children so quiet”), how many more will go for the low-cost option from the supermarket?

RasPi Wi-Fi


Way back in the autumn of 2012, I was getting all excited about my Raspberry Pi. I even hacked around to get it working over Wi-Fi but never got around to publishing the post!  So, a year and a bit later, here are a few notes based on some links I recorded at the time. Your mileage may vary (the Raspberry Pi has come a long way since then and I was running Debian Squeeze rather than Raspbian) but if you’re having difficulties getting RasPi Wi-Fi to work, hopefully some of this will help.

The Wi-Fi adapter that I used was an Edimax EW-7811Un nano USB adapter which I seem to recall I originally purchased from Maplin before returning it when I realised it was much less expensive online.  There are some good notes on the Raspberry Pi verified peripherals list that may help (much better than when I was working on this in 2012).

Tomasz Miklas’ post provided a ton of information on configuring the operating system to work with the adapter, as did this guide on elinux.org.  If you have trouble with the Realtek drivers, there’s a post that may help – you might want to read it in conjunction with these notes on the Raspberry Pi forums.  I also found that I had to use the sudo bash scriptname.sh command, rather than just sudo scriptname.sh. The final resource I found in my notes was Mr Engman’s “idiots guide” to RTL8188CUS Wi-Fi setup.
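For what it’s worth, the general shape of what I ended up with in /etc/network/interfaces was something like this (a sketch from memory – the SSID and passphrase are placeholders, and newer Raspbian images handle Wi-Fi configuration differently):

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    # WPA/WPA2 settings for the Edimax adapter – substitute your own network details
    wpa-ssid "MyHomeNetwork"
    wpa-psk "MyPassphrase"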

So, there you have it – ingredients and a rough sketch, but no guaranteed method, I’m afraid. I also found that Wi-Fi reliability depended on which other peripherals were plugged into the RasPi (for example, I use a cheap mini wireless keyboard and mouse set from Maplin) and I had some success with a powered USB hub (a Logik LT4HUB10). Since then, I’ve switched over to a 1500mA power supply from The Pi Hut, but I’m not sure it’s made much difference.

Confusion over accounts used to access Microsoft’s online services


I recently bought a new computer, for family use (the Lenovo Flex 15 that I was whinging about the other week finally turned up). As it’s a new PC, it runs Windows 8 (since upgraded to 8.1) and I log in with my “Microsoft account”. All good so far.

I set up local accounts for the kids, with parental controls (if you don’t use Windows Family Safety, then I recommend you do – no need for meddling government firewalls at ISP level when all of the major operating systems have parental controls built in; we just need to be taught to use them…). Then I decided that my wife also needed a “Microsoft account”, so she could be registered as a parent to view the reports and override settings as required.

Because my wife has an Office 365 mailbox, I thought she had a “Microsoft account” and I tried to use her Office 365 credentials. Nope… authentication error. It was only some time later (after quite a bit of frustration) that I realised that the “Organization account” used to access a Microsoft service like Office 365 is not the same as a “Microsoft account”. Mine had only worked because I have two accounts with the same username and password (naughty…) but they are actually two entirely separate identities. As far as I can make out, “organization accounts” use the Windows Azure Active Directory service whilst “Microsoft accounts” have their heritage in Microsoft Passport/Windows Live ID.

Tweeting my frustrations, I heard back from a number of online contacts – including journalists and MVPs – and it seems to be widely accepted that Microsoft’s online authentication is a mess.

As Jamie Thomson (@JamieT) commented to Alex Simons (@Alex_A_Simons – the Programme Director for Windows Azure Active Directory), if only every “organization account” could have a corresponding “Microsoft account” auto-provisioned, life would be a lot, lot simpler.

Collecting train tickets at the station? Seems it doesn’t matter which station you select…


At least once a month, I travel to Manchester for work. I tend to use the train, rather than drive because: it’s pretty straightforward; I can work on the journey; and I’m not so tired at the other end (although having the car with me can be more flexible at times).

Today is one of those days when I’m heading north but this time, instead of a straight out and back from Milton Keynes Central to Manchester Piccadilly, I need to be in Crewe tomorrow. That meant buying three single tickets – and even though my train from Manchester to Milton Keynes sometimes goes via Crewe, it cost more to break the journey than to go direct. That’s just one of the many vagaries of the British railway ticket system (and contrary to a popular money-saving tip)… go figure!

Anyway, the reason for this diatribe is that the Virgin Trains website defaulted to letting me collect my tickets from the “Fast Ticket” machine (a complete misnomer when it involves looking up and entering an 8-digit alphanumeric reference on a not-very-responsive touch screen using a non-QWERTY keyboard) at the origin of my last journey (i.e. Crewe) rather than my first (i.e. Milton Keynes Central).

In horror, after spending £150 on train tickets, I thought I would have to *drive* to Crewe to collect them! In a state of panic I called Virgin Trains (calls cost 4.5p a minute from a BT land line – on other networks you may need a small mortgage), who told me it doesn’t actually matter which station I collect the tickets from, as long as I have my payment card with me.  Bizarre! So why ask me which station I want to collect from then?!  (Maybe blame the Trainline.com back-end – or perhaps the rail ticketing systems…)

I didn’t trust the advice and didn’t want to be caught out whilst trying to catch the something-way-too-early train to Manchester this morning, so I headed to my local station to collect my tickets on Friday evening, just in case I needed to get someone at Virgin Trains to help me out.  Actually, I drove over twice because I forgot my credit card on the first occasion and left it next to my laptop on my desk, from where I’d bought the tickets (idiot)!

Anyway, the verdict is that it really doesn’t seem to matter which station you select to collect your tickets: you can collect them from any Fast Ticket machine at any station (as long as you have the card used to purchase them). Something that might be worth knowing if you ever find yourself panicking as a result of some poor UX design on a website…

Improving performance; managing expectations; being responsive; work in progress; and fear, uncertainty and doubt (#MKGN)


I can’t believe that the quarterly Milton Keynes Geek Night is nearly upon us again. I usually try to blog about the evening but I’ve failed spectacularly on recent attempts.  I might fail again with this week’s MKGN – not because I’m slow to get a blog post up but because the tickets “sold” out in something crazy like 2 minutes…

September’s Geek Night was up to the usual high standard (including the return of David Hughes – seems you can’t escape that easily!) but included one talk in particular that stood out above all of the others, when Ben Foxall (@BenjaminBenBen) showed us (literally) the other side of responsiveness… but we’ll come back to that in a moment.

Back to front performance

First up was Drew McLellan (@DrewM)’s take on “back to front” performance. You can catch the whole talk on Soundcloud but for me, as someone who runs a fairly shoddy WordPress site, it got me thinking about how performance is not just about optimising the user experience but also about the back end – perhaps summed up in one of the first points that Drew made:

“Website performance is about how your site feels.”

That may be obvious, but how many times have you heard people talking about optimisation of one part of a site in isolation, without considering the whole picture? As Drew highlighted, performance is a feature to build in – not a problem to fix – and it’s also factored into search engine algorithms.

Whilst many performance gains can be found by optimising the “front end” (i.e. browser-side), there are some “back-end” changes that should be considered too – sites need to be super-fast under normal load in order to be responsive under heavy load (quite simply, simultaneous requests affect responsiveness – they use memory, and the quicker you can process pages and release memory, the better!).

First up, consider hosting. Drew’s advice was:

  • Cheap hosting is expensive (shared hosting is cheap for a reason).
  • Shared hosting is the worst (rarely fast) – think about a virtualised or dedicated server solution instead. Choose on CPU first, then RAM – not disk space (disk is cheap, so a plan that allocates very little should be a red flag: it suggests lots of customers crammed onto one server).
  • Consider what your project has cost to build when buying hosting! Use the best you can afford – and if the host advertises with scantily clad ladies, they’re probably not very good (or to be encouraged).

Next, the content management system (CMS), where Drew says:

  • Think about the cost of external resources (going to a database or web API, for example). Often these are necessary costs, but they can be reduced with careful architecture.
  • Employ DRY coding (“don’t repeat yourself”) – make sure everything has only a single representation in code. Do things once, cache and reuse (unless you expect different results). For example, if something doesn’t change often (e.g. the post count by category on a blog), don’t calculate it on every page serve – instead, consider recalculating it when a post or category is added or removed (called denormalisation in database terms; see the sketch after this list). Be smart – consider how real-time the data needs to be, and whether people are making decisions based on it.
  • Do the work once – “premature optimization is the root of all evil” is actually a quote from 1974, when line-by-line optimisation was necessary.  Focus on the bottlenecks: “premature” should not be confused with “early” – if you know something will be a bottleneck, optimisation is not premature, it’s sensible.
  • Some frameworks focus on convention over configuration (the code works things out, reducing developer decisions) – which can lead to non-DRY code – so let’s make programming fun and allow the developer to work out the best way instead of burning CPU cycles. “Insanity is doing the same thing over and over again and expecting different results”.
  • The Varnish caching HTTP reverse proxy may be something to consider to speed up a web site (unfortunately, Drew ran out of time to tell us more – and my hosting provider found it caused problems for some other customers, so had to remove it after giving it a try for me).
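As an aside, here’s roughly what that denormalisation idea might look like on a WordPress site like this one – a minimal sketch (the function and option names are my own invention, not Drew’s code):

// functions.php: recalculate per-category post counts once, when content changes,
// rather than on every page view
function myblog_cache_category_counts() {
    $counts = array();
    foreach ( get_categories() as $category ) {
        $counts[ $category->slug ] = $category->count;
    }
    update_option( 'myblog_category_counts', $counts );
}
add_action( 'save_post', 'myblog_cache_category_counts' );
add_action( 'deleted_post', 'myblog_cache_category_counts' );

// At render time, just read the cached values – no per-request calculation
$counts = get_option( 'myblog_category_counts', array() );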

In summary, Drew told us to care about front-end optimisation: be careful about setting cookies and serve assets from cookieless domains; be smart about server headers; use CDNs to outsource traffic; GZip content; put JavaScript at the bottom of the page and minify it; test with PageSpeed and YSlow; and ignore the bits that make no sense for responsive web design. But, importantly, don’t forget the back end – hosting, CMS, stay DRY (do it once) – a few minutes configuring up front saves wasted time later, and optimise early. In short, front-end performance can’t make up for slow servers!
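For the GZip and server header points, on a typical Apache host that boils down to a few lines of .htaccess – a sketch, assuming mod_deflate and mod_expires are available:

# Compress text assets on the fly
AddOutputFilterByType DEFLATE text/html text/css application/javascript
# Serve static assets with far-future expiry headers so browsers cache them
ExpiresActive On
ExpiresByType text/css "access plus 1 month"
ExpiresByType application/javascript "access plus 1 month"
ExpiresByType image/png "access plus 1 month"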


Related reading: check out Keir Whitaker (@keirwhitaker)’s adventures with Google Page Speed in my write-up from MK Geek Night 4.

Managing client expectations

The first of the five-minute talks was from Christian Senior (@senoir – note the spelling of the Twitter handle: it’s senoir, not senior!). Christian spoke about managing client expectations. Whilst my notes from Christian’s talk are pretty brief (it was only five minutes, after all), it certainly struck a chord, even with an infrastructure guy like me.

Often, the difficult part is getting a client to understand what they are getting for their money (“after all, how hard can it really be?”, they ask!) – but key to that is understanding the customer’s requirements and making sure that’s what your service delivers. Right from the first encounter, find out about the customer (not just who they are, what they want and how much money they will spend – but also which browsers they use, which devices are available, etc.) and try to include that detail in a brief – the small things count too and can be deliverables (incidentally, it can be just as important to distinguish the non-deliverables from the deliverables). Most of all, don’t take things for granted. My favourite point of the talk, though, was “talk to customers in a language they understand!”

Or, to put it another way:

“Work in code, not talk in code!”

The other side of responsive

As I mentioned in my introduction, Ben Foxall (@BenjaminBenBen)’s five minute talk on “the other side” of responsive design was nothing short of stunning. If I ever manage to deliver a presentation that’s half as innovative as this, I’ll be a happy man.  Unfortunately, I’m not sure I can do it justice in words but, as we know from Sarah Parmenter (@Sazzy)’s talk at MK Geek Night 5, responsive websites provide the same content, constructed in different ways to serve to multiple devices appropriately.

  • Ben got us all to go to a URL, which reacted according to our devices.
  • He then showed how the site responded differently on a phone or a PC – choose a file from a PC, or take a photo on a phone.
  • He tweeted that photo.
  • He showed us the device capabilities (i.e. the available APIs – see the sketch after this list).
  • He updated his “slides” (in HTML5, of course), interactively.
  • And projected those slides in our browsers (via the link we all blindly clicked).
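To give a flavour of that capability detection, checking what a browser exposes is only a few lines of JavaScript – a rough sketch of the general technique (this is not Ben’s actual code):

// Probe for some of the device APIs a browser may (or may not) expose
var capabilities = {
  geolocation: 'geolocation' in navigator,
  orientation: 'DeviceOrientationEvent' in window,
  touch: 'ontouchstart' in window,
  camera: !!(navigator.getUserMedia || navigator.webkitGetUserMedia)
};
console.log(capabilities); // e.g. { geolocation: true, orientation: true, … }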

Actually – Ben did so much more than that. And thankfully he blogged about what he did and how he did it – I recommend you go take a look.

In summary, Ben wrapped up by saying that “responsiveness and the web needs to use the capabilities of all the devices and push the boundaries to do interesting things”.  If only more “responsive” designers pushed those boundaries…

One last thought on this topic (from Brad Frost, via Ben Foxall’s MK Geek Night talk) is contained in these three images (provided under a Creative Commons attribution license):


Work in progress

Following Ben’s talk was always going to be a tough gig.  I’m not sure that I really grokked Tom Underhill (@imeatingworms)’s “Work in Progress” although the gist seemed to be that technology gallops on and that we’re in a state of constant evolution with new tools, programs, apps, books, articles, courses, posts, people to follow (or not follow), etc., etc.

Whilst the fundamentals of human behaviour haven’t changed, what’s going on around us has – now we need more than just food and warmth – we “need” desktops, laptops, smartphones, pink smartphones, smart watches. Who knows what’s in the future in a world of continued change…

Constant change is guaranteed – in technology, social context and more. Tech is a great enabler, it could be seen as essential – but should never replace the message. Brands, experiences and products change lives based on the fundamentals of need.

Hmm…

Interlude

The one-minute talks were the usual mixed bag of shout-outs for jobs at various local agencies (anyone want to employ an ex-infrastructure architect who manages a team and really would like to do something exciting again… maybe something “webby”?), Code Club, the first meeting of Leamington Geeks, and upcoming conferences.

Fear, uncertainty and doubt

The final keynote was from Paul Robert Lloyd (@paulrobertlloyd), speaking on FUD – fear, uncertainty and doubt. Paul makes the point that these are all real human emotions – and asks what the consequences of abusing them are. He suggests that the web has been hijacked by commercial interests – not only monitoring behaviour but manipulating it too.

Some of the highlights from Paul’s talk make quite a reading list (one that I have in Pocket and will hopefully get around to one day):

  • Jonathan Harris’ Modern Medicine considers the ethical implications of software. Even a default setting can affect the daily behaviours of thousands of people. Facebook asks its designers about the “serotonin” of new features – i.e. how each one will affect how we behave.
  • As the web is largely unregulated, it’s attractive to those who want to increase their personal wealth, so we have to be optimistic that there are enough people working in the tech sector with a moral compass. Arguably, the Snowden leaks show that some people have integrity and courage. But Paul is uncertain that Silicon Valley is healthy – “normal” people don’t see customers as data points against which to test designs – for example, a team at Google couldn’t decide on a shade of blue, so they tested 41 shades (and border widths). Paul also made the point that the team was working under Marissa Mayer – for a more recent example, witness the Yahoo! logo changes…
  • Then there are the “evil” social networks where, as Charles Stross highlights, “Klout operates under American privacy law, or rather, the lack of it”.
  • Paul says that The Valley operates in a bubble – and that Americans (or at least startups) skew to the workaholic side of things, viewing weekends off as a privilege not a right. He also suggests that the problem is partly a lack of diversity – The Valley is basically a bunch of Stanford guys making things to fix their own problems. Very few start from a social problem and work backwards – so very few are enhancing society; they’re making widgets or enhancing what already exists. Funding can be an issue but governments are seeing the tech sector as an area of rapid growth and it’s probably good not to be aligned to a sector where you can launch start-ups without a business case!
  • Lanyrd shows that it is possible to start up outside The Valley (although they have been bought by Eventbrite so have to move) [TweetDeck is another example, although bought by Twitter] but Silicon Valley arrived by a series of happy accidents and good luck/fortune – it’s important that the new tech hubs shouldn’t be a facsimile of this.
  • We trust Yahoo! by putting photos on Flickr, but they also have form for removing content (e.g. GeoCities) – so what happens when your service is closed down? Is there something morally wrong with closing sites containing thousands of hours of individuals’ comments, posts, etc.? Shouldn’t we treat data like it matters, allow export capabilities and support data rescue?
  • Then there’s protecting our data from governments. Although conducted before the Snowden leaks, the Electronic Frontier Foundation’s annual survey asks “who has your back?” – and, although it’s still young, it seems companies are starting to take notice.
  • Choose your services wisely – we (the geeks) are early adopters – and we can stop using social networks too.  It’s easier to change services if data can be exported – but all too often that’s not the case so you need to own your own content.
  • We all have the power to change the web to the way we want to see it, says Paul – all we need is a text editor, an FTP client and some webspace. In the wake of the NSA revelations, Bruce Schneier writes in the Guardian about how those who love liberty have to fix the ’net.

Paul’s slides are available on Speaker Deck.

So, what’s next?

MK Geek Night #7 is on Thursday 5 December, featuring:

together with five-minute features from:


Even if I don’t manage to get there (or if I do and am a bit slow blogging), you can find out more on the MK Geek Night website, on Twitter (@MKGeekNight), or on Soundcloud (the MKGN stream).

Related reading: James Bavington has another write-up of MKGN #6.

[Update 7 December 2013: Added links to Paul Robert Lloyd’s slides and to James Bavington’s post]

Remote PowerShell to manage Exchange, even without the Exchange Management Shell installed


Following on from yesterday’s Exchange Admin Center/Outlook Web App tips, I thought I’d share another gem that came from Microsoft Exchange Premier Field Engineer and PowerShell author Mike Pfeiffer (@mike_pfeiffer) in the Microsoft Virtual Academy Core Solutions of Microsoft Exchange Server 2013 Jump Start course.

Sometimes, you’ll need to perform an operation on an Exchange Server and you won’t have the Exchange Management Shell installed.  You may be able to carry out the operation graphically using the Exchange Admin Center but, more likely, you’ll need to invoke a remote PowerShell session.

The magic commands (which need PowerShell v2 or later) use implicit remoting via the IIS PowerShell virtual directory (proxied via an Exchange server with the CAS role installed):

# Create a session against the PowerShell virtual directory on the Exchange CAS server
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://servername/powershell
# Import it locally to generate proxy functions for the Exchange cmdlets (implicit remoting)
Import-PSSession $session

After running these commands, you should be able to run Microsoft Exchange cmdlets, as long as you have the appropriate permissions assigned via Exchange’s Role-Based Access Control (RBAC) mechanism. I’ve used the same approach previously to connect to Exchange Online (Office 365) using remote PowerShell.

A couple of additional points to note: because you’re running a remote PowerShell session, you’ll also need the script execution policy to allow RemoteSigned scripts; and don’t forget to tear down the session when you’re done, using Remove-PSSession $session.
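Putting it all together, a complete round trip looks something like this (a sketch – servername is a placeholder, and Get-Mailbox is just an example of a cmdlet you might run):

# One-off: allow locally-created and remotely-signed scripts (run as administrator)
Set-ExecutionPolicy RemoteSigned

# Connect, work, then tidy up
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://servername/powershell
Import-PSSession $session
Get-Mailbox -ResultSize 10   # any Exchange cmdlet your RBAC roles permit
Remove-PSSession $session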