Confusion over accounts used to access Microsoft’s online services


I recently bought a new computer, for family use (the Lenovo Flex 15 that I was whinging about the other week finally turned up). As it’s a new PC, it runs Windows 8 (since upgraded to 8.1) and I log in with my “Microsoft account”. All good so far.

I set up local accounts for the kids, with parental controls (if you don’t already use Windows Family Safety, then I recommend you do – there’s no need for meddling government firewalls at ISP level when all of the major operating systems have parental controls built in; we just need to be taught to use them…). Then I decided that my wife also needed a “Microsoft account”, so she could be registered as a parent to view the reports and over-ride settings as required.

Because my wife has an Office 365 mailbox, I assumed she had a “Microsoft account” and tried to use her Office 365 credentials. Nope… authentication error. It was only some time later (after quite a bit of frustration) that I realised that the “organization account” used to access a Microsoft service like Office 365 is not the same as a “Microsoft account”. Mine had only worked because I have two accounts with the same username and password (naughty…), but they are actually two entirely separate identities. As far as I can make out, “organization accounts” use the Windows Azure Active Directory service, whilst “Microsoft accounts” have their heritage in Microsoft Passport/Windows Live ID.

Tweeting my frustrations I heard back from a number of online contacts – including journalists and MVPs – and it seems to be widely accepted that Microsoft’s online authentication is a mess.

As Jamie Thomson (@JamieT) commented to Alex Simons (@Alex_A_Simons – the Programme Director for Windows Azure Active Directory), if only every “organization account” could have a corresponding “Microsoft account” auto-provisioned, life would be a lot, lot simpler.

Collecting train tickets at the station? Seems it doesn’t matter which station you select…


At least once a month, I travel to Manchester for work. I tend to use the train rather than drive because it’s pretty straightforward, I can work on the journey, and I’m not so tired at the other end (although having the car with me can be more flexible at times).

Today is one of those days when I’m heading north but this time, instead of a straight out and back from Milton Keynes Central to Manchester Piccadilly, I need to be in Crewe tomorrow. That meant buying three single tickets – and even though my train from Manchester to Milton Keynes sometimes goes via Crewe, it cost more to break the journey than to go direct. That’s just one of the many vagaries of the British railway ticket system (and contrary to a popular money-saving tip)… go figure!

Anyway, the reason for this diatribe is that the Virgin Trains website defaulted to letting me collect my tickets from the “Fast Ticket” machine (a complete misnomer when it involves looking up and entering an 8-digit alphanumeric reference on a not-very-responsive touch screen using a non-QWERTY keyboard) at the origin of my last journey (i.e. Crewe) rather than my first (i.e. Milton Keynes Central).

In horror, after spending £150 on train tickets, I thought I would have to *drive* to Crewe to collect them! In a state of panic I called Virgin Trains (calls cost 4.5p a minute from a BT land line – on other networks you may need a small mortgage), who told me it doesn’t actually matter which station I collect the tickets from, as long as I have my payment card with me.  Bizarre! So why ask me which station I want to collect from then?!  (Maybe blame the Trainline.com back-end – or perhaps the rail ticketing systems…)

I didn’t trust the advice and didn’t want to be caught out whilst trying to catch the something-way-too-early train to Manchester this morning, so I headed to my local station to collect my tickets on Friday evening, just in case I needed to get someone at Virgin Trains to help me out. Actually, I drove over twice, because on the first occasion I’d forgotten my credit card, having left it next to my laptop on my desk, from where I’d bought the tickets (idiot)!

Anyway, the verdict is that it really doesn’t seem to matter which station you select to collect your tickets at – you can collect them from any Fast Ticket machine at any station (as long as you have the card used to purchase them). Something worth knowing if you ever find yourself panicking as a result of some poor UX design on a website…

Improving performance; managing expectations; being responsive; work in progress; and fear, uncertainty and doubt (#MKGN)


I can’t believe that the quarterly Milton Keynes Geek Night is nearly upon us again. I usually try to blog about the evening but I’ve failed spectacularly on recent attempts.  I might fail again with this week’s MKGN – not because I’m slow to get a blog post up but because the tickets “sold” out in something crazy like 2 minutes…

September’s Geek Night was up to the usual high standard (including the return of David Hughes – seems you can’t escape that easily!) but included one talk in particular that stood out above all of the others, when Ben Foxall (@BenjaminBenBen) showed us (literally) the other side of responsiveness… but we’ll come back to that in a moment.

Back to front performance

First up was Drew McLellan (@DrewM)’s take on “back to front” performance. You can catch the whole talk on Soundcloud but for me, as someone who runs a fairly shoddy WordPress site, it got me thinking about how performance is not just about optimising the user experience but also about the back end – perhaps summed up in one of the first points that Drew made:

“Website performance is about how your site feels.”

That may be obvious, but how many times have you heard people talking about optimisation of one part of a site in isolation, without considering the whole picture? As Drew highlighted, performance is a feature to build in – not a problem to fix – and it’s also factored into search engine algorithms.

Whilst many performance gains can be found by optimising the “front end” (i.e. browser-side), there are some “back-end” changes that should be considered too – sites need to be super-fast under normal load in order to be responsive under heavy load (quite simply, simultaneous requests affect responsiveness – they consume memory, and the quicker you can process pages and release that memory, the better!).

First up, consider hosting. Drew’s advice was:

  • Cheap hosting is expensive (shared hosting is cheap for a reason).
  • Shared hosting is the worst (rarely fast) – think about a virtualised or dedicated server solution instead. Choose on CPU, then RAM – not disk space (disk is cheap, so a plan with little of it allocated is a red flag, suggesting lots of people crammed onto one server).
  • Consider what your project has cost to build when buying hosting! Use the best you can afford – and if they advertise with scantily clad ladies, they’re probably not very good (or to be encouraged).

Next, the content management system (CMS), where Drew says:

  • Think about the cost of external resources (going to database or web API, for example). Often these are necessary costs but can be reduced with careful architecture.
  • Employ DRY coding (don’t repeat yourself) – make sure everything has only a single representation in code. Do things once, then cache and reuse (unless you expect different results). For example, if something doesn’t change often (e.g. the post count by category on a blog), don’t calculate it on every page serve – instead, consider calculating it when adding or removing a post or category (called denormalisation in database terms). Be smart: how real-time does the data need to be? And are people making decisions based on it?
  • Do the work once – “premature optimization is the root of all evil” is actually a quote from 1974, when line-by-line optimisation was necessary.  Focus on the bottlenecks: “premature” should not be confused with “early” – if you know something will be a bottleneck, optimisation is not premature, it’s sensible.
  • Some frameworks focus on convention over configuration (code works things out, reduces developer decisions) – can lead to non-DRY code – so let’s make programming fun and allow the developer to work out the best way instead of burning CPU cycles.  “Insanity is doing the same thing over and over again and expecting different results”.
  • The Varnish caching HTTP reverse proxy may be something to consider to speed up a website (unfortunately, Drew ran out of time to tell us more – and my hosting provider found it caused problems for some other customers, so had to remove it after giving it a try for me).

In summary, Drew told us to care about front-end optimisation: be careful about setting cookies and serve assets from cookieless domains; be smart about server headers; use CDNs to offload traffic; GZip content; put JavaScript at the bottom of the page and minify it; test with PageSpeed and YSlow; and ignore any bits that make no sense for responsive web design. But, importantly, don’t forget the back end – hosting, the CMS, staying DRY (do it once) – a few minutes configuring up front saves wasted time later, and optimise early. In short, front-end performance can’t make up for slow servers!


Related reading: check out Kier Whitaker (@KierWhitaker)’s adventures with Google Page Speed in my write-up from MK Geek Night 4.

Managing client expectations

The first of the five-minute talks was from Christian Senior (@senoir – note the spelling of the Twitter handle, it’s senoir not senior!). Christian spoke about managing client expectations. Whilst my notes from Christian’s talk are pretty brief (it was only five minutes, after all), it certainly struck a chord, even with an infrastructure guy like me.

Often, the difficult part is getting a client to understand what they are getting for their money (“after all, how hard can it really be?”, they ask!) – key to that is understanding the customer’s requirements and making sure that’s what your service delivers. Right from the first encounter, find out about the customer – not just who they are, what they want and how much money they will spend, but which browsers they use, which devices they have available, and so on – and try to include that detail in a brief. The small things count too and can be deliverables (incidentally, it can be just as important to distinguish the non-deliverables as the deliverables). Most of all, don’t take things for granted. My favourite point of the talk, though, was “talk to customers in a language they understand!”

Or, to put it another way:

“Work in code, not talk in code!”

The other side of responsive

As I mentioned in my introduction, Ben Foxall (@BenjaminBenBen)’s five minute talk on “the other side” of responsive design was nothing short of stunning. If I ever manage to deliver a presentation that’s half as innovative as this, I’ll be a happy man.  Unfortunately, I’m not sure I can do it justice in words but, as we know from Sarah Parmenter (@Sazzy)’s talk at MK Geek Night 5, responsive websites provide the same content, constructed in different ways to serve to multiple devices appropriately.

  • Ben got us all to go to a web page, which reacted according to our devices.
  • He then showed how the site responded differently on a phone or a PC – choose a file from a PC, or take a photo on a phone.
  • He tweeted that photo.
  • He showed us the device capabilities (i.e. the available APIs).
  • He updated his “slides” (in HTML5, of course), interactively.
  • And projected those slides in our browsers (via the link we all blindly clicked).

Actually – Ben did so much more than that. And thankfully he blogged about what he did and how he did it – I recommend you go take a look.

In summary, Ben wrapped up by saying that “responsiveness and the web needs to use the capabilities of all the devices and push the boundaries to do interesting things”.  If only more “responsive” designers pushed those boundaries…

One last thought on this topic (from Brad Frost, via Ben Foxall’s MK Geek Night talk) was contained in three images (provided under a Creative Commons attribution licence):

[Three images by Brad Frost]

Work in progress

Following Ben’s talk was always going to be a tough gig.  I’m not sure that I really grokked Tom Underhill (@imeatingworms)’s “Work in Progress” although the gist seemed to be that technology gallops on and that we’re in a state of constant evolution with new tools, programs, apps, books, articles, courses, posts, people to follow (or not follow), etc., etc.

Whilst the fundamentals of human behaviour haven’t changed, what’s going on around us has – now we need more than just food and warmth: we “need” desktops, laptops, smartphones, pink smartphones, smart watches. Who knows what’s in the future in a world of continued change…

Constant change is guaranteed – in technology, social context and more. Tech is a great enabler, it could be seen as essential – but should never replace the message. Brands, experiences and products change lives based on the fundamentals of need.

Hmm…

Interlude

The one-minute talks were the usual mixed bag: shout-outs for jobs at various local agencies (anyone want to employ an ex-infrastructure architect who manages a team and really would like to do something exciting again… maybe something “webby”?), Code Club, the first meeting of Leamington Geeks, and upcoming conferences.

Fear, uncertainty and doubt

The final keynote was from Paul Robert Lloyd (@paulrobertlloyd), speaking on FUD – fear, uncertainty and doubt. Paul makes the point that these are all real human emotions – and asks what the consequences of abusing them are. He suggests that the web has been hijacked by commercial interests – not only monitoring behaviour but manipulating it too.

Some of the highlights from Paul’s talk make quite a reading list (one that I have in Pocket and will hopefully get around to one day):

  • Jonathan Harris’ “modern medicine” considers the ethical implications of software. Even a default setting can affect the daily behaviours of thousands of people. Facebook asks its designers about the “serotonin” of new features – i.e. how they will affect the way we behave.
  • As the web is largely unregulated, it’s attractive to those who want to increase their personal wealth, so we have to be optimistic that there are enough people working in the tech sector with a moral compass. Arguably, the Snowden leaks show that some people have integrity and courage. But Paul is uncertain that Silicon Valley is healthy – “normal” people don’t see customers as data points against which to test designs – for example, a team at Google couldn’t decide on a shade of blue so they tested 41 shades (and border widths). Paul also made the point that the team was working under Marissa Mayer – for a more recent example, witness the Yahoo! logo changes…
  • Then there are the “evil” social networks where, as Charles Stross highlights, “Klout operates under American privacy law, or rather, the lack of it”.
  • Paul says that The Valley operates in a bubble – and that Americans (or at least startups) skew to the workaholic side of things, viewing weekends off as a privilege, not a right. He also suggests that the problem is partly a lack of diversity – The Valley is basically a bunch of Stanford guys making things to fix their own problems. Very few start from a social problem and work backwards – so very few are enhancing society; they’re making widgets or enhancing what already exists. Funding can be an issue, but governments see the tech sector as an area of rapid growth – and it’s probably not a good sign that you can launch start-ups without a business case!
  • Lanyrd shows that it is possible to start up outside The Valley (although they have been bought by Eventbrite so have to move) [TweetDeck is another example, although bought by Twitter] but Silicon Valley arrived by a series of happy accidents and good luck/fortune – it’s important that the new tech hubs shouldn’t be a facsimile of this.
  • We trust Yahoo! by putting photos on Flickr, but they also have form for removing content (e.g. GeoCities) – so what happens when your service is closed down? Is there something morally wrong with closing sites containing thousands of hours of individuals’ comments, posts, etc.? Shouldn’t we treat data like it matters, allow export capabilities and support data rescue?
  • Then there’s protecting our data from governments. Although conducted before the Snowden leaks, the Electronic Frontier Foundation’s annual survey asks “who has your back?” – and, although it’s still young, it seems companies are starting to take notice.
  • Choose your services wisely – we (the geeks) are early adopters – and we can stop using social networks too.  It’s easier to change services if data can be exported – but all too often that’s not the case so you need to own your own content.
  • We all have the power to change the web to the way we want to see it, says Paul – all we need is a text editor, an FTP client and some webspace. In the wake of the NSA revelations, Bruce Schneier writes in the Guardian how those who love liberty have to fix the ‘net.

Paul’s slides are available on Speaker Deck.

So, what’s next?

MK Geek Night #7 is on Thursday 5 December, with the usual line-up of keynote speakers and five-minute talks.


Even if I don’t manage to get there (or if I do and am a bit slow blogging), you can find out more on the MK Geek Night website, on Twitter (@MKGeekNight), or on Soundcloud (via the MKGN stream).

Related reading: James Bavington has another write-up of MKGN #6.

[Update 7 December 2013: Added links to Paul Robert Lloyd’s slides and to James Bavington’s post]

Remote PowerShell to manage Exchange, even without the Exchange Management Shell installed


Following on from yesterday’s Exchange Admin Center/Outlook Web App tips, I thought I’d share another gem that came from Microsoft Exchange Premier Field Engineer and PowerShell author Mike Pfeiffer (@mike_pfeiffer) in the Microsoft Virtual Academy Core Solutions of Microsoft Exchange Server 2013 Jump Start course.

Sometimes, you’ll need to perform an operation on an Exchange Server and you won’t have the Exchange Management Shell installed.  You may be able to carry out the operation graphically using the Exchange Admin Center but, more likely, you’ll need to invoke a remote PowerShell session.

The magic commands (which need PowerShell v2 or later) use implicit remoting via the IIS PowerShell virtual directory (proxied via an Exchange server with the CAS role installed):

# Create a session via the PowerShell virtual directory on a server with the CAS role
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://servername/powershell
# Import the Exchange cmdlets into the local PowerShell session (implicit remoting)
Import-PSSession $session

After running these commands, you should be able to run Microsoft Exchange cmdlets, as long as you have the appropriate permissions assigned via Exchange’s Role Based Access Control mechanism. I’ve used the same approach previously to connect to Exchange Online (Office 365) using remote PowerShell.
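
For reference, the Exchange Online connection is similar, but authenticates explicitly over HTTPS. This is a sketch based on the endpoint published at the time of writing (ps.outlook.com) – check the current documentation before relying on it:

# Prompt for Office 365 credentials
$cred = Get-Credential
# Exchange Online uses Basic authentication over SSL and may redirect to the correct server
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session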

A couple of additional points to note: because you’re running a remote PowerShell session, you’ll also need the script execution policy to allow RemoteSigned scripts; also, don’t forget to tear down the session when you’re done, using Remove-PSSession $session.
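
Putting those points together, a complete round trip looks something like this (a minimal sketch – the execution policy change assumes you’re happy to allow remote signed scripts for the current user):

# Allow the imported module to run (only needed if the current policy is more restrictive)
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri http://servername/powershell
Import-PSSession $session
# …run Exchange cmdlets here, subject to your RBAC permissions…
# Tear down the session when finished
Remove-PSSession $session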

Customising the behaviour of Exchange 2013 web apps (ECP/EAC/OWA)


I’ve been spending quite a bit of time refreshing my technical knowledge of Microsoft Exchange recently and, aside from the detail of how the product works, I came across a couple of little nuggets that could be useful for admins working with the product.

Force a connection to the Exchange 2013 Admin Center (EAC)

If your organisation has a combination of Exchange 2013 and previous releases (2007/2010) and your mailbox is on an older version of Exchange, then accessing https://servername/ecp will result in the CAS proxying your connection to the mailbox role on a legacy server. Exchange 2007 users will be presented with an error, whilst those who have their mailbox on Exchange 2010 will see the old (yellow) Exchange Control Panel (ECP) rather than the new (blue) Exchange Admin Center (EAC). To force a connection to an Exchange 2013 server and access the EAC, add ?ExchClientVer=15 to the URI.
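
For example (where servername is a placeholder for one of your Exchange 2013 Client Access servers):

https://servername/ecp/?ExchClientVer=15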

Change the view in Outlook Web App (OWA)

Outlook Web App (OWA) will try to detect the device that you’re using and format the display to match; however, this can be overridden with a few additions to the URL (see the example after this list):

  • ?layout=desktop is the standard three-pane view.
  • ?layout=twide forces the Touch Wide (two-pane) view.
  • ?layout=tnarrow selects the Touch Narrow (single-pane) view.
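
For example, to force the single-pane view (again, servername is a placeholder):

https://servername/owa/?layout=tnarrow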

Migrating contacts from iOS to Android


Last month I blogged about migrating SMS messages from an iPhone to an Android handset. I ignored my contacts because I figured that ActiveSync would do that for me – and it does, except that my Galaxy S3 Mini is subject to mobile device management policies and we use the TouchDown client for ActiveSync access to Exchange, so that’s where the contacts end up. Whilst TouchDown can export contacts to the phone book on the device, I only found that out after I’d migrated them a different way, so I thought I’d write a quick post about the options.

Many Android users will be Gmail users. If you fall into that camp then it’s pretty easy – sync to Google Contacts via iTunes. An alternative (regardless of whether you use Gmail) is to export the contacts from iCloud as a vCard (.VCF file). This can be imported in various places – including Gmail – or, as I did, directly on an Android handset. The hongkiat.com post on transferring iPhone contacts to Android uses a Google account to sync the contacts onto the device. I elected to use Dropbox to get the .VCF file into the local storage on my phone, then imported the contacts from there, using the Import/Export option in the Options menu in my contact list.

Lenovo found lacking with lost laptop


Even though I work in the application services business of a rival PC manufacturer, I’ve always had a soft spot for ThinkPads and Lenovo is (was) one of the quality PC brands I would recommend to friends and colleagues.  Until last week…

Our kids increasingly need to use a computer (a real one with a proper screen, so not a netbook – and not my iPad) for school work. They can’t use my work PC (against policy) and, for similar reasons, we don’t really like them using my wife’s (again, it’s her business asset). Add to that the fact that some of the IT policies at work make it increasingly difficult for me to use my corporate laptop for anything (work-related or otherwise!), and we decided that our Christmas present to ourselves would be a family computer.

Because a mouse will soon be to my kids what a command-line interface is to most of my peers (not including any IT admins or developers reading this blog), I wanted a touch-screen computer and I didn’t want to spend much over £500, which ruled out any ultrabook. The touchscreen requirement and modest budget mean no Macs either. Then I found Lenovo’s “affordable 15.6″ dual-mode notebook” – the Flex 15 – at a penny under £550, with the note that it “ships within 2-3 business days”. And the form factor means that, whilst a touchscreen desktop failed to pass spousal approval, a laptop that doubles up as a picture frame will quite happily sit in our kitchen/dining/family room without being considered unnecessary gadgetry.

I would have liked to customise the specification but that option wasn’t available so I placed an order for the stock version and it was duly processed by Lenovo’s UK reseller, Digital River (or analogue stream as I will now think of them…).

A couple of hours later I got a confirmation, which said:

“Dear Mark Wilson,

Thank you for ordering from the Lenovo Online Store powered by Digital River.

Please note, systems that are built to order can take 1 to 2 weeks to build and ship, plus 3 – 6 days for delivery.

Systems that are not built to order and were purchased with predefined specifications *will ship within 2-3 business days*.

Accessory options will typically ship within 2-3 business days and therefore may result in multiple deliveries when purchased with a system.

The following is a summary of your order. If you paid by credit card, please look for DRI*Lenovo on your credit card billing statement.”

[The bold text was added for emphasis by me]

A day or so later, I saw that a mouse I’d ordered for my son (he won’t need it but Mrs W insisted) had been shipped, but no word on the PC. The shipping notes suggested the full value had been charged to my credit card (as it happens, only the cost of the mouse had been), so I called the number on the order confirmation email, navigated the IVR system and waited on hold before being greeted, in German, by someone who doesn’t work on the Lenovo account. She suggested I should call back in 30 minutes as her colleague who does work with Lenovo was busy! I asked if they could call me instead and she took my details. Surprise, surprise – no call. Since then, I’ve called twice more and each time have been told that they can’t provide an estimated shipping date but will escalate for me. Whatever that means, clearly it wasn’t done, because the next time I called I was told that “no ticket had been opened”.

In parallel, I’ve been communicating with the Lenovo UK social media team (@Lenovo_UK) who were helpful at first but then when I asked for progress told me to be patient, following up a few hours later to say they had tried to call (they did – twice, within two minutes, from a blocked number so I can’t call them back) and advising me that another team will send an email (they haven’t). Sorry guys – that’s not “trying”, that’s a pathetic attempt to contact me once before fobbing me off…

The thing is, I don’t mind if I’m told it’s on a ship from China (or wherever) and will take two weeks, but the website still says “ships in 2-3 business days” – and so does the order confirmation – yet the reseller doesn’t know when it will ship. Which means I don’t know if it will ship.

Perhaps I’d be better off writing a letter to Father Christmas…

[Update, 25 November 2013 16:00 – I received a shipping confirmation from Digital River this afternoon.  Still not had the promised contact from Lenovo, or any explanation as to what caused the delay though]

Side by side installation of Office 2013 – watch out for Outlook


For a while now, I’ve been running two versions of Office on my corporate laptop with no problems – Office 2007 from our corporate “gold brick” image and Office 2010 (mostly for functionality I’ve got very used to in Outlook).  After a recent “Patch Tuesday” I started to see some strange behaviour whereby, depending on the method of invocation used, sometimes a 2007 version of an Office application would open, and sometimes a 2010 version.

I’ve had the media and keys for Office 2013 for a while (a properly licensed copy but not supported by our IT department) so I decided to remove 2007 and install 2013.  Because I figured the new UI would take a while to get used to (actually, it hasn’t) and because I wasn’t sure if any macros, etc. would run in the latest versions of Word and Excel (still a possibility), I elected to install 2013 alongside the existing 2010 installation.

It all went swimmingly, until I started having issues with Outlook, which was quite happily connected to our Exchange servers but told me it wasn’t whenever I wanted to update my out of office settings or view a colleague’s calendar. I went looking for Outlook 2010, and found it wasn’t there any more…

Of course, being me, the first thing I did was tweet my bemusement and, this being Twitter (and despite it being 9pm on a Friday night), I quickly got some responses that told me why (thanks Aaron and Garry).

For those who can be bothered to RTFM, check out Microsoft knowledge base article 2784668 (“Information about how to use Office 2013 suites and programs (MSI deployment) on a computer that is running another version of Office”) or, for a workaround, there’s a TechNet forum post called “Outlook 2010 gone in side-by-side installation with 2013”.

<tl;dr>

Outlook 2013 cannot coexist with any earlier version of Outlook. Unless you want to try a complex click-to-run setup…

Authentication issues with SharePoint in Windows Explorer mode resolved with browser proxy settings


Every now and again I get infuriated by our Microsoft Office SharePoint Server (2007) platform as it prompts for credentials (before failing to authenticate and repeating the process) when I go to open a document library in Windows Explorer mode.  Today I found the cause of that issue.

I’d been working at Microsoft’s offices yesterday and had disabled the proxy server settings in my browser.  After returning home and VPNing to our network, I was able to access both Internet and intranet resources as normal and I forgot about the proxy server change. Only when trying to work out why I was being asked for authentication as I tried to use SharePoint in Windows Explorer mode did I remember to turn it back on again – after which everything worked as it should.

It may be peculiar to our infrastructure, or it may be a wider issue that’s worth mentioning so, if you experience authentication issues when trying to open a SharePoint library in Windows Explorer mode, double-check your browser’s proxy server settings!

“Rogue” retention policies in Exchange Online after false positive junk mail is moved to the Inbox


My Office 365 tenant was recently upgraded to the “Wave 15” version of the service, meaning that my email is now hosted on Exchange 2013, rather than 2010 (Microsoft has provided an article that helps users to understand which version of the service they are on).

Unfortunately, since the upgrade, an awful lot of my legitimate email is getting trapped as junk.  After moving it back to the Inbox, I noticed that one of the items displayed a message about retention policies, highlighting that it would expire in 30 days.

I don’t use retention policies (with gigabytes of empty space in my mailbox, I don’t need to), so I thought this was a little strange – until I realised it was a side effect of the message having previously been flagged as junk, where a retention policy is set to remove mail after a month. I then found that the Managed Folder Assistant (which applies the retention policies) only runs every 7 days on Exchange Online, but can be forced in PowerShell.

Sure enough, once I’d eventually managed to connect to Office 365 in PowerShell and run the Start-ManagedFolderAssistant -Identity mailboxalias command, the email was no longer flagged for expiry.
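
For reference, the end-to-end sequence was something like this (a sketch – mailboxalias is a placeholder for the mailbox to process, and ps.outlook.com was the endpoint published at the time):

$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session
# Force the Managed Folder Assistant to process the mailbox now, rather than waiting up to 7 days
Start-ManagedFolderAssistant -Identity mailboxalias
# Clean up
Remove-PSSession $session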

There’s more information on setting up and managing retention policies in Exchange Online with Windows PowerShell on the Outlook.com help pages.