Google Analytics: Homing in on the visits that count

This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Every week I create a report that looks at a variety of social media metrics, including visits to the Fujitsu UK and Ireland CTO Blog. It’s developing over time – I’m also working on a parallel activity with some of my marketing colleagues to create a social media listening dashboard – but my Excel spreadsheet, with metrics cobbled together from a variety of sources and measured against some defined KPIs, seems to be doing the trick for now.

One thing that’s been frustrating me is that I know a percentage of our visits are from employees and, frankly, I don’t care about their visits to our blog.  Nor for that matter do I want my own visits (mostly administrative) to show in the stats that I take from Google Analytics.

I knew it should be possible to filter internal users and, earlier this week, I had a major breakthrough.

I created an advanced segment that checked the page (to filter out one blog from the rest of the content on the site) and the source (to filter out anyone whose referral source contained certain keywords – for example, our company name!). I then tested the segment and, hey presto – I could see how many visits matched each condition, as well as the combined result. Now I can concentrate on those visits that really matter.

Google Analytics advanced segment settings to remove internal referrals
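To make the segment condition concrete: the source check is essentially a regular-expression match against the referral source. Here’s a minimal sketch of the idea in Python – the pattern and the sample data are invented for illustration, not my actual filter:

import re

# Hypothetical "internal" referral pattern - substitute your own
# company name and keywords (this is not my real filter)
internal_sources = re.compile(r"fujitsu|intranet", re.IGNORECASE)

visits = [
    {"page": "/cto-blog/post", "source": "google.co.uk"},
    {"page": "/cto-blog/post", "source": "intranet.example.com"},
]

# keep only the visits whose referral source doesn't match
external = [v for v in visits if not internal_sources.search(v["source"])]
print(len(external))  # 1 - only the google.co.uk visit survives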

Of course, this only relates to referrals, so it doesn’t help me where internal users access the content from an email link (even if I could successfully filter out all the traffic that comes via the company proxy servers – which I haven’t managed so far – some users access the content directly whilst working from home), but it’s a start.

The other change was one I made a few months ago: defining a number of filters to adjust the reporting.

Unfortunately filters do not apply retrospectively, so it’s worth defining these early in the life of a website.

Keeping Windows alive with curated computing

This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Like it or loathe it, there’s no denying that the walled garden approach Apple has adopted for application development on iOS (the operating system used for the iPhone, iPad and now new iPods) has been successful. Forrester Research talk about this approach using the term “Curated Computing” – a general term for an environment where there is a gatekeeper controlling the availability of applications for a given platform. So, does this reflect a fundamental shift in the way that we buy applications? I believe it does.

Whilst iOS, Android (Google’s competing mobile operating system) and Windows Phone 7 (the new arrival from Microsoft) have all adopted the curated computing approach (albeit with Apple exercising tighter control over entry to its AppStore than the others), the majority of the world’s computers are rather less mobile. And they run Windows. Unfortunately, Windows’ biggest strength (its massive ecosystem of compatible hardware and software) is also its nemesis – a whole load of the applications that run on Windows are, to put it bluntly, a bit crap!

This is a problem for Microsoft. On the one hand, it gives their operating system a bad name (somewhat unfairly, in my opinion – Windows is associated with its infamous “Blue Screen of Death”, yet we rarely hear about Linux/Mac OS X kernel panics or iOS lockups); on the other hand, it’s the same broad device and application support that has made Windows such a success over the last 20 years.

What we’re starting to see is a shift in the way that people approach personal computing. Over the next few years there will be an explosion in the number of mobile devices (smart phones and tablets) used to access corporate infrastructure, along with a general acceptance of bring your own computer (BYOC) schemes – maybe not for all organisations but for a significant number. And that shift gives us the opportunity to tidy things up a bit.

Remove the apps at the left side of the diagram and only the good ones will be left...

A few weeks ago, Jon Honeyball was explaining a concept to me and, like many of the concepts that Jon puts forward, it makes perfect sense (and infuriates me that I’d never looked at things this way before). If we think about the quality of software applications, we can consider that, statistically, they follow a normal distribution. That is to say, the applications on the left of the curve tend towards the software that we don’t want on our systems – from malware through to poorly-coded applications. Meanwhile, on the right of the curve are the better applications, right through to the Microsoft and Adobe applications that are in broad use and generally set a high standard in terms of quality. The peak of the curve represents the point with the most apps – basically, most applications can be described as “okay”. What Microsoft has to do is lose the leftmost 50% of applications from this curve, instantly raising the quality bar for Windows applications. One way to do this is curated computing.
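To put rough numbers on Jon’s model, here’s a quick Python sketch with invented figures, assuming application quality really does follow a normal distribution:

import random
import statistics

# model application "quality" as a normal distribution
# (mean 50, standard deviation 15 - illustrative numbers only)
random.seed(42)
quality = [random.gauss(50, 15) for _ in range(100000)]

median = statistics.median(quality)
curated = [q for q in quality if q >= median]  # drop the leftmost 50%

print(round(statistics.mean(quality)))  # ~50: the "okay" peak
print(round(statistics.mean(curated)))  # ~62: the quality bar rises

Removing the lower half shifts the average quality of what remains well to the right – which is exactly what a gatekeeper is for.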

Whilst Apple have been criticised for the lack of transparency in their application approval process (and there are some bad applications available for iOS too), this is basically what they have managed to achieve through their AppStore.

If Microsoft can do the same with Windows Phone 7, and then take that operating system and apply it to other device types (say, a tablet – or even the next version of their PC client operating system) they might well manage to save their share of the personal computing marketplace as we enter the brave new world of user-specific, rather than device-specific computing.

At the moment, the corporate line is that Windows 7 is Microsoft’s client operating system but, even though some Windows 7 tablets can be expected, they miss the mark by some way.

Time after time, we’ve seen Microsoft stick to their message (i.e. that their way is the best and that everyone else is wrong), right up to the point when they announce a new product or feature that seems like a complete U-turn.  That’s why I wouldn’t be too surprised to see them come up with a new approach to tablets in the medium term… one that uses an application store model and a new user interface. One can only live in hope.

Yikes! My computer can tell websites where I live (thanks to Google)

This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few months ago there was a furore as angry Facebook users rallied against the social networking site’s approach to sharing our personal data. Some people even closed their accounts but at least Facebook’s users choose the information that they post on the site. OK, so I guess someone else may tag me in an image, but it’s basically up to me to decide whether I want something to be made available – and I can always use fake information if I choose to (I don’t – information like my date of birth, place of birth, and my mother’s maiden name is all publicly available from government sources, so why bother to hide it?).

Over the last couple of weeks, though, I’ve been hearing about Google being able to geolocate a device based on information that their Streetview cars collected. Not the Wi-Fi traffic that was collected “by mistake”, but information about Wi-Fi networks in a given neighbourhood, collected to create a geolocation database. Now, I don’t really mind that Google has a picture of my house on Streetview… although we were having building work done at the time, so the presence of a builder’s skip on my drive does drag down the impression of my area a little!

What I was shocked to find was that Firefox users can access this database to find out quite a lot about the location of my network (indeed, any browser that supports the Geolocation API can) – in my case it’s only accurate to within about 30-50 metres, but that’s pretty close! I didn’t give consent for Google to collect this – in effect they have been “wardriving” the streets of Britain (and elsewhere). And if you’re thinking “that’s OK, my Wi-Fi is locked down” – well, so is mine – I use WPA2 and only allow certain MAC addresses to connect, but the very existence of the Wi-Fi access point provides some basic information to clients.

Whilst I’m not entirely happy that Google has collected this information, it’s been done now, and being able to geolocate myself could be handy – particularly as PCs generally don’t have GPS hardware and location-based services will become increasingly prevalent over the coming years.  In addition, Firefox asks for my consent before returning the information required for the database lookup (that’s a requirement of the W3C’s Geolocation API)  and it’s possible to turn geolocation off in Firefox (presumably it’s as simple in other browsers too).

What’s a little worrying is that a malicious website can grab the MAC address of a user’s router, after which it’s just a simple API call to find out where the user is (as demonstrated at the recent Black Hat conference).  The privacy and security implications of this are quite alarming!
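Conceptually, the database that makes this possible is just a lookup from access point identifiers to coordinates – something like this toy Python sketch (the MAC addresses and coordinates below are entirely made up):

# toy wardriving-style geolocation database: map access point
# MAC addresses (BSSIDs) to coordinates - all values invented
geo_db = {
    "00:11:22:33:44:55": (52.08, -0.76),  # a fictitious suburban street
    "66:77:88:99:aa:bb": (51.51, -0.13),  # somewhere in central London
}

def locate(router_mac):
    """Return (lat, lon) if a survey car has seen this access point."""
    return geo_db.get(router_mac.lower())

print(locate("00:11:22:33:44:55"))  # (52.08, -0.76) - to within tens of metres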

One thing’s for sure: Internet privacy is an oxymoron.

So, where exactly is Silverstone?

This content is 14 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

As I carried out the school run this morning (on foot), I noticed that there seemed to be a lot of extra traffic in town today.  Then I realised it was due to the British Grand Prix, which is taking place in Silverstone this weekend.  The A43 is closed for non-event traffic, which puts pressure on the M1, which in turn pushes traffic onto other roads in the area, including the main road through our town.

It’s not really a problem – the A43 is only closed for 3 days a year and the knock-on effect of having the home of British Motor Racing on our doorstep is more engineering jobs in the local area (it’s a shame that Aston Martin moved out of Newport Pagnell in favour of their new HQ at Gaydon) – indeed, I’m pretty pleased that Donington Park couldn’t get its act together, so the British Grand Prix has stayed at Silverstone!

What was interesting though was a local BBC news story last night… BBC East was covering a (non-) story about whether Silverstone is in Northamptonshire, or in Buckinghamshire.  It’s simple – Silverstone village is in Northamptonshire, but the circuit spans the county boundary.  Why that’s the cause of such controversy is, frankly, a mystery but, then again, I know what a big deal crossing the county boundary is to some locals (for reference, I was born in Northamptonshire and now live within the Milton Keynes unitary authority, in what was once Buckinghamshire… and it will take 3 generations to be accepted as a true local in the town where I now live!).

What amused me was one of the people that the BBC interviewed for its package on last night’s news – asked which county Silverstone is in, he responded that it’s in Buckinghamshire because, when he googled for the weather, Buckingham was the nearest town! I found it quite amusing that people today are happy to judge geographic boundaries by Google search results, rather than by a map…

So, if you’re at the British Grand Prix this weekend (I’d love to be there but will be watching on television instead), it seems that the part of the circuit from Becketts Corner to somewhere near Village Corner is in Buckinghamshire; otherwise you’re in Northamptonshire (the Rose of the Shires).

Ordnance Survey Map showing Silverstone circuit spanning the county boundary between Northamptonshire and Buckinghamshire

Image produced from the Ordnance Survey Get-a-map service. Image reproduced with kind permission of Ordnance Survey and Ordnance Survey of Northern Ireland.

Connecting an E.ON EnergyFit Monitor to Google PowerMeter

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

The energy company that I buy my electricity and gas from (E.ON) is currently running an “EnergyFit” promotion where they will send you an energy monitor for free. A free gadget sounded like a good idea (I have a monitor but it’s the plug-in type – this one monitors the whole house) so I applied for one to help me reduce our family’s spiralling energy costs (and, ahem, to help us reduce our environmental footprint).

The EnergyFit package appeared on my doorstep sometime over the weekend and setup was remarkably easy – there’s a transmission unit that loops around the main electricity supply cable (without any need for an electrician) and a DC-powered monitor that connects to this using a wireless technology called C2, which works in the 433MHz spectrum (not the 2.4GHz band used by some cordless phones, Wi-Fi networks, baby monitors, etc.). Within a few minutes of following E.ON’s instructions, I had the monitor set up and recording our electricity usage.

The monitor is supplied with E.ON’s software to help track electricity usage over time and it seems to work well – as long as you download the data from the monitor (using the supplied USB cable) every 30 days (that’s the limit of the monitor’s internal memory).

I wondered if I could get this working with Google PowerMeter too (Microsoft Hohm is not currently available in the UK) and, sure enough, I did.  This is what I had to do:

  1. Head over to the Google PowerMeter website.
  2. Click the link to Get Google PowerMeter.
  3. At this point you can either sign up with a utility company, or select a device.  The E.ON-supplied device that I have is actually from a company called Current Cost so I selected them from the device list and clicked through to their website.
  4. Once on the Current Cost website, click the button to check that your device will work with Google PowerMeter.
  5. The E.ON EnergyFit monitor is an Envi device – click the Activate button.
  6. Complete the registration form in order to download the software required to connect the monitor to Google.
  7. Install the software, which includes a registration process with Google to obtain an authorisation key used for the device connection.
  8. After 10 minutes of data upload, you should start to see your energy usage appear on the Google PowerMeter website.

Of course, these instructions work today, but both the Google and Current Cost websites are subject to change – I can’t help out if they do, but you should find the information you need here.

There are some gotchas to be aware of:

  • The monitor doesn’t keep time very well (mine drifts by about 3 minutes a day!).
  • Configuring the monitor (and downloading data to the E.ON software) requires some arcane keypress combinations.
  • According to the release notes supplied with the Current Cost software, it only caches data for 2 hours so, if your PC is switched off for longer than that (perhaps to save energy!), there will be gaps in the data that Google sees (whereas the E.ON EnergyFit software can download up to 30 days of information stored in the monitor).
  • You can’t run both the E.ON EnergyFit and the Current Cost Google PowerMeter applications at the same time – only one can be connected to the monitor.

If your energy company doesn’t supply power monitors, then there are a variety of options for purchase on the Google PowerMeter website.

Google Calendar Sync’s Outlook version check means it won’t work with the 2010 technical preview

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Yesterday I wrote about how I love Outlook 2010. I still do. Sadly Google Calendar Sync is coded to check for the version of Outlook in use and it doesn’t like version 14.

Error: Google Calendar Sync supports Microsoft Outlook 2003 and 2007 only. Your version is 14.0.0.4006.
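I can only guess at how Google has implemented the check, but the behaviour is consistent with a simple whitelist on Outlook’s major version number – a speculative sketch in Python:

# speculative reconstruction of the kind of check that rejects Outlook 2010:
# Outlook 2003 reports major version 11, Outlook 2007 reports 12,
# and the 2010 technical preview reports 14
SUPPORTED_MAJOR_VERSIONS = {11, 12}

def is_supported(outlook_version):
    major = int(outlook_version.split(".")[0])
    return major in SUPPORTED_MAJOR_VERSIONS

print(is_supported("14.0.0.4006"))  # False - hence the error above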

Oh well… guess that’s the price I pay for living at the cutting edge of IT!

Microsoft takes the wrapper off some more Office 2010 features as the Technical Preview is released

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Office productivity applications are pretty dull really. Or at least we like to think so, until a new suite comes along and we get excited about the new features. Three years ago, I wrote about how I was looking forward to the 2007 Microsoft Office System. These days I use Office 2007 every day and I really appreciate some of the new functionality. Even if I do still struggle with the ribbon from time to time, it does make sense – and going back to Office 2003 for a few weeks whilst my work notebook PC was being repaired was not a pleasant experience.

Microsoft gave us a sneak preview of the next version of Office (codenamed Office 14) at PDC 2008. Then they announced that it would be known as Office 2010 and today, at the Worldwide Partner Conference (WPC) in New Orleans, they showed us a few more of the features to expect when the product ships in the first half of next year and announced that the product has reached its Technical Preview milestone.

Today’s keynote only included a short section on Office (Ed Bott has some more information in his blog post about the Office 2010 debut) but Takeshi Numoto (Corporate Vice President for Office) demonstrated:

  • Outlook: receives the ribbon interface (as do all Office applications, including SharePoint); voicemail in the inbox with a speech-to-text preview so it can be read without playing the audio; click on a section of the text in the preview and Outlook will play just that part of the message; contact cards include integration with OCS for click to call, e-mail or send an instant message; a conversation view (familiar to Google Mail users) aids dealing with Inbox clutter as it allows a few conversations to be acted on at once – and works across folders; conversations can be ignored (“a Mute button for Outlook”); QuickSteps can be applied to common e-mail tasks – e.g. message archival, forwarding, or the creation of a meeting invite; finally, MailTips warns that someone is out of the office before you send them mail, or that mail is being sent outside the organisation, or to a large distribution group. (Some features may require Exchange Server 2010, Office Communications Server 2007 R2, or Office SharePoint Server 2010.)
  • Excel: new business intelligence tools for analysis are provided as part of Microsoft’s “democratisation of BI” – putting more useful tools into the hands of more people – including mini-charts in a single cell (sparklines) and slicers to drill through data.
  • PowerPoint: new transitions; video becomes a first-class citizen – insert footage and edit inside PowerPoint, including recolouring and the application of effects such as border, reflection, etc.; a new backstage view allows organisation of all features and commands for the entire file, including compression, seeing who is editing the file, and allowing integration with line-of-business applications; more SmartArt (building on Office 2007); slides can be advanced whilst presenting across the web, in a browser and even on a smartphone!
  • Office Web Applications: demonstrating Excel running in a browser – looking the same as the full client (complete with ribbon); multiple users working on a file simultaneously with synchronised updates appearing in one another’s views; works in Internet Explorer, Firefox and Safari.
  • SharePoint: More information will be made available at the SharePoint conference in October.

Whilst there are some cool new features here (and there are enhancements to other Office applications, like Word and Visio too), the most significant part of Office 2010 is the web application (webapp) functionality. Microsoft announced that webapps will be available in three ways:

  1. For consumers: free of charge via Windows Live.
  2. For businesses that require management and control: as a Microsoft Online Service.
  3. For volume license customers: Office webapps running on-premises.

With over 400 million Windows Live customers, plus business customers with Software Assurance, around half a billion users will have immediate access to Office web applications on the day of launch. In short, Microsoft wants to make the 2010 Office System the best productivity experience whatever the device – making the most of the power of a PC, the mobility of a phone and the ubiquity of a web browser.


Office web applications are clearly aimed at competing with offerings from Google (and others) but, as a Google Apps user who is collaborating on a pretty simple budget spreadsheet (for some home improvements) with someone else (my wife), I find Google Spreadsheets very basic and I can’t wait to see what I need to make rich Office functionality work across browsers with the Microsoft solution.

I’m sure some of the features demonstrated today will be missing from the technical preview (I should find out soon, as should anyone else who is accepted for it) but, as someone who collaborates with others, working on multiple computers, with multiple operating systems and a plethora of browsers, I expect to be a very happy chap when Office 2010 finally makes it to my screen.

If Microsoft Windows and Office are no longer relevant then why are #wpc09 and Office 2010 two of the top 10 topics on Twitter right now?

This content is 15 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Every now and again, I read somebody claiming that Microsoft is no longer relevant in our increasingly online and connected society and how we’re all moving to a world of cloud computing and device independence where Google and other younger and more agile organisations are going to run our lives. Oh yes and it will also be the year of Linux on the desktop!

Then I spend an afternoon listening to a Microsoft conference keynote, like the PDC ones last Autumn/Fall (announcing Windows Azure and the next generation of client computing), or today’s Worldwide Partner Conference and I realise Microsoft does have a vision and that, under Ray Ozzie’s leadership, they do understand the influence of social networks and other web technologies. That’ll be why, as I’m writing this, the number 6 and 7 topics on Twitter are Office 2010 and #wpc09.

Office 2010 and #WPC09 trending on Twitter

Competition is good (I’m looking forward to seeing how the new Ubuntu Google OS works out and will probably run it on at least one of my machines) but I’m really heartened by some of this afternoon’s announcements (which I’ll write up in another blog post).

Meanwhile, for those who say that Windows 7 will be Microsoft’s last desktop operating system, perhaps this excerpt from a BBC interview with Ray Ozzie will be enough to convince them that the concept of an operating system is not dead… it’s just changing shape:

(Credit is due to Michael Pietroforte at 4sysops for highlighting the existence of this video footage.)

Archive Google Mail to a Mac using getmail

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Late last year I questioned the wisdom of trusting critical data to the cloud and cited Google Mail as an example. Whilst the Google Mail service is generally reliable, there have been some well-publicised instances of failure (including data loss). I shouldn’t be too alarmed by that – for many things in life you get what you pay for and I pay Google precisely nothing (although they do get to build up a pretty good profile of my interests against which to target advertising…). So, dusting off the motto from my Scouting days (“Be Prepared”), I set about creating a regular backup of my Google Apps mail – just in case it ever ceased to exist!

I already use the Apple Mail application (mail.app) for IMAP access but I have some concerns about mail.app – it’s failed to send messages (and not stored a draft either) on at least two occasions and basically I don’t trust it! But using Mac OS X (derived from BSD Unix) means that I also have access to various Unix tools (e.g. getmail) and that means I can take a copy of my Google Mail and store it in maildir or mbox format for later retrieval, on a schedule that I set.

The first step is to install some Unix tools on the Mac. I chose MacPorts (formerly known as DarwinPorts). After running the 1.7.0 installer, I fired up a terminal and entered the following commands:

su - Administrator
cd /opt/local/bin
sudo ./port -d selfupdate

This told me that my installation of MacPorts was already current, so I set about installing the getmail port:

sudo ./port install getmail

The beauty of this process is that it also installed all the prerequisite packages (expat, gperf, libiconv, ncursesw, ncurses, gettext and python25). Having installed getmail, I followed George Donnelly’s advice to create a hidden folder for getmail scripts and a maildir folder for my GmailArchive – both inside my home directory:

# getmail looks for its rc files in ~/.getmail by default
mkdir ~/.getmail
# a maildir always contains the three subdirectories new, tmp and cur
mkdir ~/GmailArchive/ ~/GmailArchive/new ~/GmailArchive/tmp ~/GmailArchive/cur

I then created and edited a getmail configuration file at ~/.getmail/getmailrc.gmailarchive and entered the following settings:

# how to fetch the mail: IMAP over SSL from Google's server
[retriever]
type = SimpleIMAPSSLRetriever
server = imap.gmail.com
username = googleaccountname
password = googleaccountpassword

# where to put it: the maildir created earlier
[destination]
type = Maildir
path = ~/GmailArchive/

# chatty output, no extra headers, and a log file for troubleshooting
[options]
verbose = 2
received = false
delivered_to = false
message_log = ~/.getmail/gmail.log

I tested this by running:

/opt/local/bin/getmail -ln --rcfile getmailrc.gmailarchive

but was presented with an error message:

Configuration error: SSL not supported by this installation of Python

That was solved by running:

sudo ./port install py25-socket-ssl

(which installed zlib, openssl and py25-socket-ssl), after which I could re-run the getmail command and watch as my terminal session was filled with messages being downloaded (and the folder at ~/GmailArchive/new started to fill up). Then I saw a problem – even though I have a few thousand messages, I noticed that getmail was only ever downloading the contents of my Inbox.

Eventually, I solved this by adding the following line to the [retriever] section of the getmail configuration file:

mailboxes = ("[Google Mail]/All Mail",)

This took a while to work out because many blog posts on the subject suggest that the mailbox name will include [GMail] but I found I needed to use [Google Mail] (I guess that could be the difference between GMail and the Google Mail service provided as part of Google Apps). After making the change I was able to download a few thousand messages, although it took a few tries (the good news is that getmail will skip messages it has already retrieved). Strangely, although the Google Mail web interface says that there are 3268 items in my All Mail folder, getmail finds 5320 (and, thankfully, doesn’t seem to include the spam, which would only account for 1012 of the difference anyway).

In addition, the getmail help text explains that multiple mailboxes may be selected by adding to the tuple of quoted strings but, if there is just a single value, a trailing comma is required.
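That quirk makes more sense when you remember that getmail is written in Python, where it’s the comma – not the brackets – that makes a tuple:

# why the trailing comma matters: in Python, parentheses alone
# don't make a tuple - the comma does
single = ("[Google Mail]/All Mail",)  # a one-element tuple
oops = ("[Google Mail]/All Mail")     # just a parenthesised string!

print(type(single))  # <class 'tuple'>
print(type(oops))    # <class 'str'>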

Having tested manual mail retrieval, I set up a cron job to retrieve mail on a schedule. Daily would have been fine for backup purposes but I could also schedule a more frequent job to pull updates every few minutes:

crontab -e

launched vim to edit the cron table and I added the following line:

4,14,24,34,44,54 * * * * /opt/local/bin/getmail -ln --rcfile getmailrc.gmailarchive

I then opened up a terminal window and (because running lots of terminal windows makes me feel like a real geek) ran:

tail -f ~/.getmail/gmail.log

to watch as messages were automatically downloaded every 10 minutes at 4, 14, 24, 34, 44, and 54 minutes past the hour.

This also means that I get 6 messages an hour in my local system mailbox (/var/mail/username) to tell me how the cron job ran, so I chose to disable e-mail alerting for the cron job by appending >/dev/null 2>&1 to the crontab entry.

Many of the posts on this subject suggest using POP to download the mail, but Google limits POP transfers, so it will require multiple downloads. Peng.u.i.n writes that IMAP should help to alleviate this (although that wasn’t my experience). He also suggests using several mbox files (instead of a single mbox file or a maildir) to back up mail (e.g. one file per calendar quarter) and Matt Cutts suggests backing up to mbox and maildir formats simultaneously:

[destination]
type = MultiDestination
destinations = ('[mboxrd-destination]', '[maildir-destination]')

[mboxrd-destination]
type = Mboxrd
path = ~/GmailArchive.mbox

[maildir-destination]
type = Maildir
path = ~/GmailArchive/

If you do decide to use an mbox file, then it will need to be created first using:

touch ~/GmailArchive.mbox

In Chris Latko’s post on pulling mail out of Gmail and retaining the labels, he describes some extra steps – notably that the timestamps on archived mail are replaced with the time of archiving, so he has a PHP script to read each message and restore the original modification time.

Aside from the MacPorts installation, the process is the same on a Unix/Linux machine and, for Windows users, Gina Trapani has written about backing up GMail using fetchmail with Cygwin as the platform.

A real use for Google Maps Street View

This content is 16 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

If all has gone to plan, by the time you read this, I’ll just have returned from a romantic weekend in Paris with Mrs. W. In itself, that’s not particularly relevant to a technology blog but, whilst booking the hotel for the weekend, I found Google Maps incredibly useful. Not just because the search results were integrated with people’s reviews on Trip Advisor and other such sites but also because Google Maps Street View really came into its own.

If you’re reading this in the US, then you’re probably wondering what all the fuss is about. Well, here in the UK, Street View is not yet available (Google’s cameras have started to photograph the country, much to the dismay of privacy campaigners) but for me to have a look at our prospective hotels (albeit on a very grey day) was really useful and provided a real-world view (to compare with the hotel website’s slightly more enticing images).

The map shows a link to street view:

Google Maps France with a link to street view

And this is what it looks like:

Google Maps France in street view

Previously I’d failed to see any use for this technology. Now I can’t wait for it to come to the UK.