This content is 13 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.
My sons’ netbook was initially perfect for young children (portable, cheap, small keyboard) but its 1024×576 screen resolution is becoming too restrictive and, with no Flash Player, some of the main websites they use (Club Penguin, CBeebies) don’t work on the iPad either. Setting up an external monitor each time they want to use the computer is not really practical, so I needed to find another option – for now, that option is recycling the laptop that my wife replaced a couple of years ago (and which has been in the loft ever since…)
The laptop in question is an IBM ThinkPad T40 – a little long in the tooth but with a 1.5GHz Pentium M and 2GB of RAM it runs OK, although hundreds of Windows XP updates have left it feeling a little sluggish. Vista and 7 are too heavyweight so I decided to install Ubuntu (although I might also give ChromeOS a shot).
Unfortunately, the Ubuntu 12.04 installer stalled, complaining about a lack of hardware support:
This kernel requires the following features not present on the CPU:
pae
Unable to boot – please use a kernel appropriate for your CPU
So much for Linux being a lightweight operating system, suitable for use on old hardware (in fairness, other distributions would have worked). It turns out that this is a known issue and there are a few workarounds – the one that worked for me was to use the non-PAE mini.iso installer (I wasn’t prompted to select the generic Linux kernel, but I did have to select the Ubuntu Desktop option).
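Incidentally, if you want to check whether a CPU advertises PAE support before burning an installer, the flag is listed in /proc/cpuinfo on any running Linux system. A quick check (note that some early Pentium M chips actually support PAE without advertising the flag – which is exactly why installers like this one trip up):

grep -qw pae /proc/cpuinfo && echo "PAE flag present" || echo "No PAE flag"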
Last night I wrote a blog post about the “traffic lights” for my office. The trouble with the original setup was that the light was set to a particular state until I edited the code and uploaded a new version to the Arduino.
I was so excited about having something working that I hadn’t actually finished reading the Arduino Programming For Beginners: The Traffic Light Controller post that I referenced. Scroll down a bit further and James Bruce extends his traffic light sequence to simulate someone pressing a button to change the lights (e.g. to cross the road).
I worked on this to come up with something similar – using a variation on James’ wiring diagram (with 2 LEDs rather than 3, pins 11 and 12 for my LEDs and pin 2 for the pushbutton switch), I now have a setup which waits for the button to be pressed, then lights the red LED until the button is pressed again (when it switches to green), and so on.
My code is available on GitHub but here’s the current version (just in case I lose that version as I get to grips with source control…):
/*
Red/green LED indicator with pushbutton control
Based on http://www.makeuseof.com/tag/arduino-traffic-light-controller/
*/

// Pins for coloured LEDs
int red = 11;
int green = 12;
int light = 0;       // Current state: 0 is not set; 1 is green; 2 is red
int button = 2;      // Pushbutton on pin 2
int buttonValue = 0; // Button defaults to 0 (LOW)

void setup(){
  // Set up pins with LEDs as output devices and switch for input
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
  pinMode(button,INPUT);
}

void loop(){
  // Read the value of the pushbutton switch
  buttonValue = digitalRead(button);
  if (buttonValue == HIGH){
    changeLights();
    delay(15000); // Wait 15 seconds before reading again
  }
}

void changeLights(){
  // Change the lights based on current value: 0 is not set; 1 is green; 2 is red
  switch (light) {
    case 1:
      turnLightRed();
      break;
    case 2:
      turnLightGreen();
      break;
    default:
      turnLightRed();
  }
}

void turnLightGreen(){
  // Turn off the red and turn on the green
  digitalWrite(red,LOW);
  digitalWrite(green,HIGH);
  light = 1;
}

void turnLightRed(){
  // Turn off the green and turn on the red
  digitalWrite(green,LOW);
  digitalWrite(red,HIGH);
  light = 2;
}
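One refinement worth mentioning: mechanical pushbuttons “bounce”, generating several rapid on/off transitions per press. The 15-second delay above masks this, but if you shorten it, a software debounce is the usual fix. Here’s a minimal sketch of the idea (my own illustration reusing the button and changeLights() names from above, not code from James’ post):

void loop(){
  static int lastReading = LOW;        // Previous raw reading from the button
  static unsigned long lastChange = 0; // Time (ms) when the reading last changed

  int reading = digitalRead(button);
  if (reading != lastReading){
    lastChange = millis();             // Reading changed - restart the timer
    lastReading = reading;
  } else if (reading == HIGH && millis() - lastChange > 50){
    changeLights();                    // Stable press for 50ms - act on it
    while (digitalRead(button) == HIGH); // Wait for release so one press = one change
  }
}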
Now I’m wondering how long it will be before the kids work out that they too can change the status (like pushing the button at a pedestrian crossing!)…
A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:
schema.org – HTML microformats for tagging web content, supported by major search engines
Kasabi dataset archive – Archive of datasets as they stood when the Kasabi shutdown was announced – see http://blog.kasabi.com/2012/07/16/archive-of-datasets/ for more info (available until 30 July 2012)
“Debranding” a Nokia Lumia phone – I’ve not tried this (YMMV) but it looks like a useful reference for anyone whose Lumia has been branded by their mobile operator (mine was bought SIM-free so isn’t) to get it back to a default Nokia state.
Scott’s solution uses something called a Busylight but a) I’m too tight to spend €49 on this and b) the geek in me thinks “surely I can rig up something using a few LEDs?”. One of the comments on Scott’s post led me to an open source project called RealStatus but that uses an expensive USB HID for the LEDs so doesn’t really move me much further forward…
I decided that I should use my Arduino instead… with the added bonus that involving the children in the project might get them “onboard” too… the trouble is that my electronics prototyping skills are still fairly rudimentary.
As it happens, that’s not a problem – I found an Arduino traffic light program for beginners and, as I don’t have a yellow LED right now, I adapted it to what I do have – a simple red/green status (my son and I had fun trying different resistors to adjust the brightness of the LEDs).
You can see the breadboard view here (generated with Fritzing – I’m still working out how to turn this into a schematic) and below is my Arduino code (which is also available on GitHub – although I might have worked on it a bit by the time you read this):
// Pins for coloured LEDs
int red = 12;
int green = 13;

void setup(){
  // Set up pins as output devices
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
}

void loop(){
  // Change the lights (comment/uncomment as required)
  // turnLightRed();
  turnLightGreen();
}

void turnLightGreen(){
  // Turn off the red and turn on the green
  digitalWrite(red,LOW);
  digitalWrite(green,HIGH);
}

void turnLightRed(){
  // Turn off the green and turn on the red
  digitalWrite(green,LOW);
  digitalWrite(red,HIGH);
}
It’s pretty simple really – I just call the turnLightRed() or turnLightGreen() function according to whether I am ready to accept visitors. In itself, that’s a bit limited but the next step will be to work out how to send commands to the Arduino over USB (for some integration with my instant messaging client, perhaps) or even using a messaging service (Twitter?) and some network connectivity… more research required!
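On that note, the most accessible route is probably the Arduino’s built-in serial-over-USB connection. Here’s a minimal sketch of the idea (my own illustration of where I’m heading, not tested code from this post): the Arduino listens for single-character commands – ‘r’ for red/busy, ‘g’ for green/available – sent over the serial port at 9600 baud:

// Pins for coloured LEDs (matching the sketch above)
int red = 12;
int green = 13;

void setup(){
  pinMode(red,OUTPUT);
  pinMode(green,OUTPUT);
  Serial.begin(9600);           // Serial-over-USB at 9600 baud
}

void loop(){
  if (Serial.available() > 0){
    char command = Serial.read();
    if (command == 'r'){        // 'r' = busy
      digitalWrite(green,LOW);
      digitalWrite(red,HIGH);
    } else if (command == 'g'){ // 'g' = available
      digitalWrite(red,LOW);
      digitalWrite(green,HIGH);
    }
  }
}

Anything that can write to the serial port (the Arduino IDE’s serial monitor, or a small script on the PC) could then flip the light without re-uploading the sketch – which is exactly the hook an instant messaging integration would need.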
Tonight’s Digital Surrey was, as usual, a huge success with a great speaker (Google’s @EdParsons) in a fantastic venue (Farnham Castle). Ed spoke about the future of geospatial data – about annotating our world to enhance the value that we can bring from mapping tools today but, before he spoke of the future, he took a look at how we got to where we are.
What is geospatial information? And how did we get to where we are today?
Geospatial information is very visual, which makes it powerful for telling stories – and one of the most famous and powerful images is that of the Earth viewed from space: the “blue marble”. This emotive image has been used many times but has only been personally witnessed by around 20 people, starting with the Apollo 8 crew, 250,000 miles from home, looking at their own planet. We can recreate this view with tools like Google Earth, which allows us to explore the planet and look at humankind’s activities. Indeed, about 1 billion people use Google Maps/Google Earth every week – that’s about a third of the Internet population, roughly equivalent to Facebook and Twitter combined [just imagine how successful Google would be if they were all Google+ users…]. Using that metric, we can say that geospatial data is now pervasive – a huge shift over the last 10 years as it has become more accessible (although much of the technology has been around longer).
The annotated world is about going beyond the image and pulling out otherwise invisible information, so, in a digital sense, it’s now possible to have a map of 1:1 scale or even beyond. For example, in Google Maps we can look at Street View and even see annotations of buildings. This can be augmented with further information (e.g. restrictions on the directions in which we can drive, details about local businesses) to provide actionable insight. Google also harvests information from the web to create place pages (something that could be considered ethically dubious, as it draws people away from the websites of the businesses involved) but it can also provide additional information from image recognition – for example, identifying the locations of public wastebins or adding details of parking restrictions (literally from text recognition on road signs). The key to the annotated world is collating and presenting information in a way that’s straightforward and easy to use.
Using other tools in the ecosystem, mobile applications can be used to easily review a business and post it via Google+ (so that it appears on the place page); or Google MapMaker may be used by local experts to add content to the map (subject to moderation – and the service is not currently available in the UK…).
So, that’s where we are today… we’re getting more and more content online, but what about the next 10 years?
A virtual (annotated) world
Google and others are building a virtual world in three dimensions. In the past, Google Earth pulled data from many datasets (e.g. building models, terrain data, etc.) but future 3D imagery will be based on photographs (just as, apparently, Nokia have done for a while). We’ll also see 3D data being used to navigate inside buildings as well as outside. In one example, Google is working with John Lewis, who have recently installed Wi-Fi in their stores, to use this to determine a user’s location and combine it with maps to navigate the store. The system is accurate to about 2-3 metres [and sounds similar to Tesco’s “in store sat-nav” trial] and apparently it’s also available in London railway stations, the British Museum, etc.
Ed made the point that the future is not driven by paper-based cartography, although this was challenged in the Q&A later: attendees highlighted that we still use ancient maps today, and that our digital archives are not likely to last that long.
Moving on, Ed highlighted that Google now generates map tiles on the fly (it used to take 6 weeks to rebuild the map) and new presentation technologies allow for client-side rendering of buildings – for example, St Paul’s Cathedral in London. With services such as Google Now (on Android), contextual information may be provided, driven by location and personalisation.
With Google’s Project Glass, that becomes even more immersive, with augmented reality driven by the annotated world.
Someone also mentioned to me the parody video, which also raises some good points.
Seriously, Project Glass makes Apple’s Siri look way behind the curve – and for those who consider the glasses to be a little uncool, I would expect them to become much more “normal” over time – built into a normal pair of shades, or even into prescription glasses… certainly no more silly than those Bluetooth earpieces that we used to use!
Of course, there are privacy implications to overcome but, consider what people share today on Facebook (or wherever) – people will share information when they see value in it.
Big data, crowdsourcing 2.0 and linked data
At this point, Ed’s presentation moved on to talk about big data. I’ve spent most of this week co-writing a book on this topic (I’ll post a link when it’s published) and nearly flipped when I heard the normal big data marketing rhetoric (the 3 Vs: volume, velocity and variety) being churned out. Putting aside the hype, Google should know quite a bit about big data (its search engine is a great example and the company has done a lot of work in this area) and the annotated world has to address many of the big data challenges, including:
Data integration.
Data transformation.
Near-real-time analysis using rules to process data and take appropriate action (complex event processing).
Semantic analysis.
Historical analysis.
Search.
Data storage.
Visualisation.
Data access interfaces.
Moving back to Ed’s talk, what he refers to as “Crowdsourcing 2.0” is certainly an interesting concept. Citing Vint Cerf (Internet pioneer and Google employee), Ed said that there are an estimated 35bn devices connected to the Internet – and our smartphones are great examples, crammed full of sensors. These sensors can be used to provide real-time information for the annotated world: average journey times based on GPS data, for example; or even weather data if future smartphones were to contain a barometer.
Linked data is another topic worthy of note which, at its most fundamental level, is about making the web more interconnected. There’s been a lot of work done on ontologies, categorising content, etc. [Plug: I co-wrote a white paper on the topic earlier this year] but Google, Yahoo, Microsoft and others are supporting schema.org as a collection of microformats – tags that websites can use to mark up content in a way that’s recognised by major search providers. For example, a tag like <span itemprop="addressCountry">Spain</span> might be used to indicate that Spain is a country, with further tags to show that Barcelona is a city and that the Nou Camp is a place to visit.
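To make that concrete, here’s roughly what such mark-up might look like using the schema.org Place and PostalAddress types (my own illustrative snippet, not an example from Ed’s talk):

<div itemscope itemtype="http://schema.org/Place">
  <span itemprop="name">Nou Camp</span> is in
  <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="addressLocality">Barcelona</span>,
    <span itemprop="addressCountry">Spain</span>
  </span>.
</div>

A search engine that understands schema.org can then treat “Spain” as a country and “Barcelona” as a locality, rather than as undifferentiated text.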
Ed’s final thoughts
Summing up, Ed reiterated his view that paper maps are dead and that they will be replaced with more personalised information (of which location is a key contextual component). However, if we want the advantages of this, we need to share information – with organisations that we trust, and where we know what will happen with that information.
Mark’s final thoughts
The annotated world is exciting and has stacks of potential, if we can overcome one critical stumbling point that Ed highlighted (and I tweeted):
In order to create a more useful, personal, contextual web, organisations need to gain our trust to share our information #DigitalSurrey
Unfortunately, there are many who will not trust Google – and I find it interesting that Google is an advocate of consuming open data to add value to its products but I see very little being put back in terms of data sets for others to use. Google’s argument is that it spent a lot of money gathering and processing that data; however it could also be argued that Google gets a lot for free and maybe there is a greater benefit to society in freely sharing that information in a non-proprietary format (rather than relying on the use of Google tools). There are also ethical concerns with Google’s gathering of Wi-Fi data, scraping website content and other such issues but I expect to see a “happy medium” found, somewhere between “Don’t Be Evil” and “But we are a business after all”…
Thanks as always to everyone involved in arranging and hosting tonight’s event – and to Ed Parsons for an enlightening talk!
Those who were watching my Twitter stream last Friday and Saturday will have followed my saga with Apple and their apparent disregard for customer service or the law when my iPad developed a fault… “Apple?” you say, “but aren’t they renowned for their fantastic customer service?”. Well, they do have a reputation but my experience suggests it’s not deserved, at least not here in the UK…
I waited a few days before writing this post as anyone who criticises Apple is laid open to a barrage of abuse. Even so, I thought it was appropriate to share – and, by “cooling off”, I’m hoping to be objective.
What’s the problem?
A few months ago, I noticed a greenish glow on a small portion of the screen of my iPad, which I purchased in July 2010. It was particularly visible on dark areas with the brightness turned up (e.g. when using the iPad in a dark room). So, I booked an appointment at the Genius Bar in the Milton Keynes Apple Store to see what could be done to repair/replace the defective screen. I arrived on time and, whilst it was certainly busy, there were lots of blue t-shirts doing what, to a bystander, appeared to be very little. I’m sure they all had their own jobs but, after waiting until 20 minutes past my appointment time, I was seen – not by one of the staff at the Genius Bar, but by the guy who had been performing some kind of co-ordination role on the shop floor until that point. He took my iPad away, then came back to say that it was over a year old and so out of warranty – repair wasn’t an option and a refurbished replacement would cost £199. I was given the option of speaking to a manager and I did, but he was equally unhelpful – and apparently unwilling to move an inch, even when I pointed out that the UK’s Sale of Goods Act gives me some rights here…
More support required
I went home and found a statement on the Apple website about Apple Products and EU Statutory Warranty, which directed me to call AppleCare. I opened a support case and, the next morning, I spoke to an Apple representative who listened, logged the call details, but ultimately advised me to contact the point of purchase (the Apple Store in Solihull). Solihull is an hour’s drive away so I called the store, who said I could visit any Apple Retail location and I headed to Milton Keynes, where I had made a Genius Bar appointment in anticipation.
Five minutes before my appointment, AppleCare called and said they had spoken to the store and could handle a “consumer law” complaint on my behalf, so I didn’t need to go to the store. Ten minutes after that, they called again and said they couldn’t after all; 15 minutes later, they said EU consumer law doesn’t apply in the UK (it doesn’t – but the UK Sale of Goods Act does!) and that I should contact the local Trading Standards department. By then I was at the store again, where I spent the next couple of hours (including almost an hour waiting to be seen, as AppleCare’s earlier advice meant I’d missed my Genius Bar appointment and was on standby), eventually being convinced to part with money to replace my iPad (more on that in a moment).
So how is this Apple’s problem?
Those in the US and elsewhere may well be thinking, “so you wanted Apple to repair or replace a product that was out of warranty – are you for real?” but in Europe, consumer law is on our side.
The relevant EU directive (1999/44/EC) requires a minimum two-year guarantee on consumer goods, but the UK hasn’t adopted it because our own laws provide even better cover – the Sale of Goods Act gives consumers up to six years to pursue claims. Although UK law does not specify how long a product should last (all products and manufacturers are different), a product is considered faulty if it stops working properly in less time than a reasonable person would expect it to last. A screen defect within two years is not something that Apple (or any reasonable person) should expect, so I believe that Apple should have offered me a free repair, or a replacement with the same or a similar product, at no cost.
Instead, Apple tried to pass the buck. Initially I was batted back and forth between AppleCare (Apple’s support channel) and Apple Retail (who sold me the iPad). At one point I was advised to contact the actual store where my iPad was purchased (not my local store). Finally, Apple Retail attempted to pass me on to my local Trading Standards department and when I said that the problem was between Apple and myself, not with Milton Keynes Council (the Trading Standards authority in this case), the store manager started talking about me pursuing action in the small claims court, in a “David and Goliath” fashion, playing the part of “the small man” against the big company (and yes, those are quotes!). The arrogance of Apple’s retail management and of the company as a whole, which seems to put itself above the law is, frankly, astounding.
A compromise?
Eventually, one of the managers in the Apple Store in Milton Keynes offered me a replacement iPad, but it cost me £69 – a discount from the £199 originally quoted, down to the price that I would have paid for AppleCare had I taken it out at the time of purchase. I didn’t take AppleCare because consumer law covers me against product defects, my home insurance covers me against accidental damage, and the Internet covers me for technical support. In short, I shouldn’t need to buy an extended warranty (AppleCare), and I’m still unhappy at having paid for something that should have been free of charge, if only Apple were prepared to accept the rule of law.
“Apple set themselves up as the tech company that is way ahead of everyone else in the industry, but their after sales service is worse than mediocre. I used to be a fanboy.”
I think that just about sums it up!
I’m still tempted to contact the Trading Standards department at Milton Keynes Council – and maybe I will sue Apple for costs but, to be honest, my time is worth more than the £69 I paid for the replacement iPad and I’ve already spent several hours speaking to AppleCare, travelling back and forth to my local Apple Store, or hanging around waiting to be seen. Do I really need that hassle? No, I don’t, but there is a principle at stake here – the world’s largest company appears to be ignoring the rule of law – so maybe I should take this further. If I do, I’m sure you’ll read about it here…
Please don’t misunderstand me: nine times out of ten, clip-art, over-use of slide animations/transitions and sound effects in PowerPoint presentations are naff. No – worse than that – they’re often completely unnecessary and, in some ways, remind me of the early days of desktop publishing, when it seemed to be necessary to use 20 fonts on a single page… just because they were there.
Thankfully, these days (most) people have reined themselves in and seem to steer clear of the “embellishments”, maybe using a single transition style throughout a whole deck and the occasional build, perhaps with the odd animation – and some decent stock images. Even so, I recently found myself wanting to use sound in a PowerPoint animation.
I could work out how to add a sound to the slide transition but there was nothing obvious for individual animation steps. After some googling, it turns out that the trick is to select the barely-noticeable dropdown arrow on a custom animation and then click Effect Options, after which the option to enhance the animation with sound becomes visible. I was using PowerPoint 2007 – it might be different with other versions – but, be warned, with great power comes great responsibility. Or something like that.
Long-time readers of my blog will know that I used to manage the Fujitsu UK and Ireland CTO Blog (which we’ve recently closed, although the content remains in place for posterity) and I’m still getting the comment notifications (mostly spam). Many of the posts have HTTP 301 redirects to either my blog or David Smith‘s (I found a great WordPress plugin for that – Redirection) but, for those that remain, I wanted to turn off comments. Doing this individually for each post seemed unnecessarily clunky but there is, apparently, no way to do it globally from the WordPress user interface (with database access it would have been straightforward, but I don’t have that level of access).
There is a plug-in that globally disables all comments – named, rather aptly, Disable Comments – except that the blog is part of a multi-site (network) install and I’m not sure what the broader impact would be…
No bother, I found a workaround – simply set all of the posts to close comments after a certain number of days. The theme that someone has applied to the site (since I stopped working with it) doesn’t seem to respect that, and still leaves a comment button visible, but anyone with a well-developed theme should be OK…
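For reference, that setting lives under Settings > Discussion (“Automatically close comments on posts older than X days”) and, if I remember the option names correctly, maps to the close_comments_for_old_posts and close_comments_days_old entries in the wp_options table – handy to know for anyone scripting this who does have database access.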
What a week! Believe it or not, my job isn’t one as a SharePoint administrator, although sometimes it feels like it is (and recent blog posts might suggest otherwise)!
The thing about SharePoint is that even non-developers like me can string together a few webparts and lists to create something reasonably useful. Last week, one of the sites that I’m managing as part of my day job was migrated from its incubation location as a subsite of my team’s portal, to join some related content on another site. It should have been fairly straightforward, but sadly that wasn’t the case…
Updating hyperlinks
I knew that I needed to edit the hyperlinks on some of my pages but I forgot that the webparts showing views of my lists would also need to be changed, meaning that the data was actually being displayed from the old site (still left in place until I knew that the migration had been a success). I couldn’t find a way to just edit a link to the list, so I had to replace each webpart with a new version, picking up the appropriate list (and view). That was issue number 1 sorted.
Corrupted lists with orphaned columns
Most of the lists in my site came across without issue but the largest one (30-odd columns and around 350 items) was generating an error, with the expected data replaced by “One or more field types are not installed properly. Go to the list settings page to delete these fields”. I found various articles about this, but they all seemed to relate to migrations from SharePoint 2007 to 2010.
Then, as I was digging around in the site, I found a “hidden” column that wasn’t visible in the list settings but could be seen when editing list views. The name of the column was familiar though – it matched a workflow that someone had created on the original (source) site but which wasn’t attached to the list. Apparently, when SharePoint starts a workflow on a list for the first time, it adds a workflow status column to the default view of the list and it seems that “orphaned” workflow status columns are not unheard of. I tried creating another column with the same name (to then delete it again) but, predictably, that wasn’t possible (although creating a new workflow with the same name did create a duplicate column, which was deleted when the workflow was removed).
I still had an issue in that attempting to add a new item to the list resulted in “an unexpected error has occurred” messages, which are far from helpful!
A colleague had spotted that the NewForm.aspx and EditForm.aspx forms were incorrectly linked (the list had been renamed at some point and for some reason the original list name was still being used in the path to the forms) but that was easily corrected in SharePoint Designer. Even so, adding or editing an item in the list was generating errors and I was running out of ideas (as were my colleagues).
I thought that I had spotted all of the differences between the two copies of the list (source and target sites) so I conceded defeat and started to recreate the list from scratch (before copying in the data in datasheet view – I know I can import/export via Excel, but that sometimes results in incorrect column types that can’t be edited). That’s when I found some more corrupted lookup columns. I couldn’t edit them (at least not through the SharePoint web interface) so, again, I deleted and recreated them before repopulating the data. All of a sudden, my site was working as it should have been – hooray!
Wrap-up
All of the problems I had were avoidable, with the benefit of hindsight, but I’m hoping not to have to migrate too many SharePoint sites in future. I expect I won’t be the last person to go through this process though and hopefully the experiences I’ve written about here will come up on a search when others are looking for help…
Last night I tried again, reflashing my Pi’s SD card using the Debian 6 “Squeeze” distro from the Raspberry Pi downloads page. There are various tools to do this (I used Win32DiskImager, also recommended on the downloads page, although the Softpedia download site is a UX disaster; Linux and Mac users already have dd, and there is a Windows port of dd that Element 14 are distributing).
Edit the partition table with sudo fdisk -cu /dev/mmcblk0, entering the following at the fdisk prompt:
p (to view the partition table)
d (to delete a partition)
3 (to select partition 3)
d (to delete a partition)
2 (to select partition 2)
n (to create a new partition)
p (to make it a primary partition)
2 (to create partition 2)
157696 (to set the starting position to match the old partition table – see the output from the p command earlier)
Press the Enter key (to set the maximum available partition size)
w (to write the partition table)
Then, reboot: sudo shutdown -r now
After logging in again, resize the partition: sudo resize2fs /dev/mmcblk0p2
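Assuming all went well, df -h should now show the root filesystem filling (almost) the whole card – a quick sanity check before installing anything else.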
Back in the command line, I wanted to install a Twitter client (so that my Pi can tweet status updates) and Twidge is my favourite (CLI-based) client on a Linux system. Romilly Cocking has written about installing Twidge on the Pi (Tweety Pi!) but I found I needed to run sudo apt-get update before the sudo apt-get install twidge command would complete successfully (without the update, there were lots of 404 errors for missing dependencies). As I was running Terminal inside the LXDE environment, I could use Midori to authorise Twidge via the Twitter API, completing the twidge setup process, before sending a couple of tweets. If you don’t like Midori, I couldn’t find a suitable version of Firefox but I understand Chromium can be installed on the RasPi using the sudo apt-get install chromium-browser command.
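For anyone following along, the basic flow looks something like this (from memory – check Twidge’s own documentation if it misbehaves):

sudo apt-get update
sudo apt-get install twidge
twidge setup (interactive – provides an authorisation URL for the Twitter API)
twidge update "Hello world from my Raspberry Pi"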
I’m much happier with the Pi now it’s running Debian – and tonight’s activity involves creating a case for it out of an old business card box (an Altoids tin won’t fit!) – watch this space for more details!