Keeping my low-power server cool

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

6.30am, sometime over the Christmas holidays. After being woken by one of our sons, my wife informs me that there’s a strange noise coming from one of the computers in the office… Bleary-eyed, I stumble to my desk and shut down the machine before returning to my slumber.

Thankfully, it was just a noisy fan and not (yet) another hard disk failure, but it did require attention, which involved learning a little more than I should need to know about the innards of a PC… so I’m blogging the key points for future reference.

Hardcore gamers need serious cooling for their PCs. Thankfully, mine is the “low-power server” that I built a few years ago, so the requirements are a lot more modest – indeed, this machine has only two 40mm fans: one on the case and one on the mainboard.

I initially swapped the case fan for one I picked up from Maplin (I could have got one cheaper online, but not once I’d taken into account shipping for such a small item) but found it was the one on the Intel D945GCLF2 board that was making most of the noise. So I put the Maplin unit there instead (it’s not the CPU that needs cooling, but the inefficient Northbridge/GPU that accounts for most of the power consumption on this board – the Atom 330 uses only about 8W and is passively cooled).

Unfortunately, the screws that fixed the OEM fan to the heatsink wouldn’t fit the replacement, so I used a piece of plastic-coated wire instead, poked through the holes and twisted tight – it’s functional, at least.

With the case fan also making a racket, I found that it only did so when sucking air into the case (the fan seems to brush against the case when attached). I’d assumed that a fan on the bottom of a case should bring in cold air, with hot air rising to the holes at the top of the case. So I flipped the fan over (I’m not sure which way it was originally pointing) and it’s now blowing air out of the bottom (the only place to fix a fan). Fingers crossed, it’s doing something… monitoring with Open Hardware Monitor tells me my CPU is fine, but SpeedFan suggests something else is running a little warm!
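
As an aside (and an addition to the original post): Open Hardware Monitor publishes its sensor readings over WMI, so it’s easy to keep an eye on temperatures from a script – handy for checking whether a fan change has actually made a difference. A minimal Python sketch, assuming Open Hardware Monitor is running and the third-party wmi package is installed:

```python
# Minimal sketch: read the temperature sensors that Open Hardware Monitor
# publishes over WMI (the app must be running in the background).
# Requires the third-party "wmi" package (pip install wmi); Windows only.
import wmi

c = wmi.WMI(namespace=r"root\OpenHardwareMonitor")
for sensor in c.Sensor():
    if sensor.SensorType == "Temperature":
        print(f"{sensor.Name}: {sensor.Value} °C")
```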

DIY home electrics

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

I’m fortunate enough to live in a pleasant market town which generally has a low crime rate.  Unfortunately, recent months have seen a significant increase in the number of burglaries and, with Thames Valley Police seemingly mystified as to who the culprits are (other than suspecting that they are coming in “across the border from Northamptonshire”!), I started to look into ways to increase the security of our home.

Of course, if someone wants to get into your house they will find a way but the advice we’ve been given can be paraphrased as “make sure your house is less attractive than the alternative” and, although I already have several security measures in place, an extra security light (with PIR) on the front drive was an inexpensive modification (and also quite handy when arriving home in the dark).

In the UK, Part P of the Building Regulations has brought electrical work under local authority control, but that doesn’t outlaw DIY electrical work entirely. It simply means that work needs to be carried out to a particular standard, and it distinguishes between major (notifiable) and minor works. As my household electrics were professionally upgraded a few years ago (including extensive re-wiring of most of the ground floor and a new consumer unit), I know that they are in good shape and felt reasonably confident in my ability to run a fused spur in our garage from the existing ring main (many projects would be “notifiable” – this is not).

It took me a few hours – the hardest part was getting cable clips to attach to the blockwork/mortar that makes up the interior walls of our garage – but I got there in the end. For a description of the electrical changes, there’s some good advice on the ‘net, like the description of the project at lets-do-diy.com. Unfortunately, there’s also a fair bit of scaremongering out there – this post on the IET forums is a great example, with one user asking whether the original poster is qualified and highlighting that a circuit could be overloaded, and others saying that any circuit can be overloaded – but that’s the point of adding a fuse where the rating of the cable changes! Others point out that there are degrees of experience and that qualification has very little to do with competence. From my perspective, it’s good to see that electricians are no different to us IT bods – still dealing with the fallout from bodged DIY jobs and squabbling over the value of certifications versus experience!

Tweaking the display on a Samsung TV for use as a computer monitor

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A few weeks ago, I bought my first flat-screen TV. The old (c.1998) Sony Trinitron still works, but it was starting to lose its colour a little around the edges and was, frankly, taking up a huge chunk of the living room, so I splashed out and bought a Samsung UE37ES6300 from John Lewis.

I’m not bothered about 3D pictures but the Smart TV (Internet-connected) functionality is a huge bonus. Meanwhile, the availability of HDMI ports (no VGA on this year’s model) led me to hook up my old Mac Mini as a permanently connected place for Internet access in the living room (although the requirement is rapidly dropping as more and more Samsung Apps become available – Spotify appeared last night!).

Using a DVI-to-HDMI cable, the Mac was able to detect the 1080p display, but it enabled overscan, which meant I was losing the edges of the picture. Turning off overscan helped, but didn’t use the whole display (and was also a bit fuzzy). With a bit of help from a friend (who, coincidentally, had come over and hooked his Linux machine up to the display), I worked out that the solution is to leave overscan enabled on the computer but to set the TV Picture Size to Screen Fit. I’m not sure I can see much difference between 50Hz PAL and 60Hz NTSC but, seeing as this is a European model, I left the computer set to 50Hz PAL.

This resolved the display size but it was still not as sharp as I would expect for a native resolution display. Switching the Picture Mode from Standard to Movie made a big difference (although the colours were a little muted and there was a slight magenta cast) so I started to look at the differences between the two profiles.  Now I’ve tweaked the Standard profile to bring down the sharpness from the default of 50 to 20 and turned off the Dynamic Contrast in the TV’s Advanced Settings and I think I’m pretty much there.

So, there you have it. I haven’t tried a Windows PC yet, but those settings seem to work well with the Mac – and the result is a much improved digital display output.

No sound from Google Chrome: Adobe Flash issue and workaround

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Adobe Flash has no place in the modern web.  Unfortunately there are many sites that still use it, so it can’t be ignored entirely. This weekend I found I had no sound in my browser and it turned out to be Flash-related.  This is what I found…

No sound in Google Chrome

Over the week, I tend to accumulate open browser tabs of things that look interesting but which I haven’t got time to read/watch in the working day. Written content is simple enough (it gets saved to Pocket, and then usually goes unread there instead); videos are less straightforward.

Anyway, I’d finally got round to watching a video link I’d been sent and found that I had no sound. Strange. Windows sound was working – I could test from Control Panel and in other apps – so it seemed to be a problem with YouTube in my browser (Google Chrome).

A bit of digging turned up a Google Groups post that sounded similar. Whilst the issue was reported to affect Flash 11.3 and I’m running 11.5.31.2, I did follow a link to Adobe’s Flash Player 11.3 Audio Update, which suggested I knock my sound quality down to 16-bit, 44,100Hz (CD quality). That did the trick – and is perfectly fine for playing MP3s and YouTube videos…
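
As an aside (my addition, not part of Adobe’s advice): a quick way to confirm that the sound device is happy at CD quality is to generate and play a 16-bit, 44,100Hz test tone. A minimal sketch using only the Python standard library:

```python
# Generate a 2-second, 16-bit, 44,100 Hz (CD quality) test tone - the same
# rate the Adobe workaround suggests - to check the sound device plays it.
import math
import struct
import wave

RATE = 44100     # samples per second (CD quality)
SECONDS = 2
FREQ = 440.0     # A4 test tone

with wave.open("test_tone.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 2 bytes per sample = 16-bit
    f.setframerate(RATE)
    for i in range(RATE * SECONDS):
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * i / RATE))
        f.writeframes(struct.pack("<h", sample))

# On Windows, play it back with the standard library:
# import winsound; winsound.PlaySound("test_tone.wav", winsound.SND_FILENAME)
```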

What are all of these Flash versions anyway?

As Michael Horowitz explains in a Defensive Computing post, Flash versioning is, to put it mildly, a mess. Added to that, chrome://flash tells me that I’m using something called Pepper Flash, which I never installed but which, it turns out, has been part of Google Chrome since version 21 (I’m on 23.0.1271.95) to provide better sandboxing, among other things. You can check which version of Flash is installed (and the latest available version) on Adobe’s website, and Michael also has information at his Flash Tester site.

SharePoint datasheet mode crashes Internet Explorer

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Back in the summer, I wrote about creating dashboards in SharePoint using some borrowed JavaScript in a web part to display calculated columns of HTML. I needed to create another dashboard recently, so I reused my old technique but then, today, I found that I could no longer edit my list in SharePoint’s Datasheet mode. Each time I tried, Internet Exploder crashed, blaming the problem on the Data Execution Prevention (DEP) functionality that is meant to prevent malicious code from being executed in memory.

Of course, being SharePoint (well, 2007 at least), I couldn’t use an alternative browser, but I was pretty sure the issue was related to the HTML generated and placed in a calculated column in my list. By creating a new view that excluded the problematic column (i.e. the one containing the HTML), I was able to edit as normal, without a browser crash.

Short takes: Amazon Web Services 101, Adobe Marketing Cloud and Milton Keynes Geek Night (#MKGN)

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…

Amazon Web Services 101

One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!

I already blogged about Amazon’s reference architecture for utility computing, but I wanted to mention Ryan Shuttleworth’s (@RyanAWS) explanation of how Amazon Web Services (AWS) came about.

Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business; it grew out of building a service-oriented infrastructure for a scalable development environment – initially to provide development services to internal teams, and then to expose the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more, and it moved towards the AWS mission to:

“Enable business and developers to use web services* to build scalable, sophisticated applications”

*What people now call “the cloud”

In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.

Adobe Marketing Cloud

Most people will be familiar with Adobe for their design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to become for marketers what Creative Suite has become for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating technology to create a set of analytics, social, advertising, targeting and web experience management solutions, plus a real-time dashboard.

Milton Keynes Geek Night

MK Geek Night #mkgn

The third event I attended this week was the quarterly Milton Keynes Geek Night (this was the third one) – and this did not disappoint – it was well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).

The evening kicked off with Dave Addey (@DaveAddey), of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires, and posed three questions to ask about your idea:

  • Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
  • Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
  • And how should you make it? (cross platform framework, native app, HTML, or hybrid?)

Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.

Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:

  1. Increase sales by driving traffic.
  2. Improve your brand awareness by working with others.
  3. Increase innovation, by allowing others to interface with your platform.
  4. Create partnerships, with symbiotic relationships to develop complementary products.
  5. Create satisfied customers – by focusing on the part you’re good at and letting others build on it with their expertise.

Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.

And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:

  1. Improve the process – collaborate from the outset.
  2. Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
  3. Learn to let go – not everything can be perfect.

Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.

In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:

  • Free – with consequential loss of privacy.
  • Paid – with consequential loss of audience (i.e. niche) and user experience.
  • Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.

This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture in that open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source.  Without this, Aral says, we don’t have an alternative to Twitter, Facebook, whatever-the-next-big-platform-is doing what they want to with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.

The next MK Geek Night will be on 21 March, and the date is already in my diary (just waiting for the Eventbrite notice!).

Photo credit: David Hughes, on Flickr. Used with permission.

[Amazon’s] Reference architecture for utility computing

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

Earlier this week, I attended an Amazon Web Services (AWS) 101 briefing, delivered by Amazon UK’s Ryan Shuttleworth (@RyanAWS). Although I’ve been watching the “Journey into the AWS cloud” series of webcasts too, it was a really valuable session and, when the videos are released to the web, well worth watching for an introduction to the AWS cloud.

One thing I particularly appreciate about Ryan’s presentations is that he approaches things from an architectural view. It’s a refreshing change from the evangelists I’ve met at other companies who generally market software by talking about features (maybe even with some design considerations/best practice or coding snippets) but rarely seem to mention reference architectures or architectural patterns.

During his presentation, Ryan presented a reference architecture for utility computing and, even though this version relates to AWS services, it’s a pretty good model for re-use (in fact, the beauty of such a reference architecture is that the contents of each box could be swapped out for other components, without affecting the overall approach – maybe I should revisit this post and slot in the Windows Azure components!).

So, what’s in each of these boxes?

  • AWS global infrastructure: consists of regions that group facilities geographically, each with physically separated availability zones, plus edge locations (e.g. for content distribution).
  • Networking: Amazon provides Direct Connect (a dedicated connection to the AWS cloud) to integrate with existing assets, VPN connections and Virtual Private Clouds (your own slice of networking inside EC2), together with Route 53 (a highly available and scalable global DNS service).
  • Compute: Amazon’s Elastic Compute Cloud (EC2) allows the creation of instances (Linux or Windows) to use as you like, based on a range of instance types with different pricing – scaling up and down, even auto-scaling. Elastic Load Balancing allows the distribution of EC2 workloads across instances in multiple availability zones (there’s a code sketch of compute and storage after this list).
  • Storage: Simple Storage Service (S3) is the main storage service (Dropbox, Spotify and others run on it) – designed for write-once, read-many applications. Elastic Block Store (EBS) can be used to provide persistent storage behind an EC2 instance (e.g. a boot volume) and supports snapshotting; it’s replicated within an availability zone (so there’s no need for RAID). There’s also Glacier for long-term archival of data, AWS Import/Export for bulk uploads/downloads to/from AWS, and the AWS Storage Gateway to connect on-premises and cloud-based storage.
  • Databases: Amazon’s Relational Database Service (RDS) provides database-as-a-service capabilities (MySQL, Oracle, or Microsoft SQL Server). There’s also DynamoDB – a provisioned-throughput NoSQL database for fast, predictable performance (fully distributed and fault-tolerant) – and SimpleDB for smaller NoSQL datasets.
  • Application services: Simple Queue Service (SQS) for reliable, scalable message queuing (for application decoupling); Simple Workflow Service (SWF) to coordinate processing steps across applications and to integrate AWS and non-AWS resources, managing distributed state in complex systems; CloudSearch – an elastic search engine based on Amazon’s A9 technology, providing auto-scaling and a sophisticated feature set (equivalent to Solr); and CloudFront, a worldwide content delivery network (CDN) to easily distribute content to end users with a single DNS CNAME.
  • Deployment and admin: Elastic Beanstalk allows one-click deployment from Eclipse, Visual Studio and Git for rapid deployment of applications, with all AWS resources auto-created; CloudFormation is a scripting framework for AWS resource creation that automates stack creation in a repeatable way. There’s also Identity and Access Management (IAM), software development kits, Simple Email Service (SES), Simple Notification Service (SNS), ElastiCache, Elastic MapReduce, and the CloudWatch monitoring framework.
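
To make the compute and storage boxes a little more concrete, here’s a minimal sketch (my addition, not from Ryan’s talk) using boto3 – the Python SDK that AWS provides today, which post-dates this post. It assumes credentials are already configured; the AMI ID and bucket name are placeholders:

```python
# A minimal sketch of the EC2 (compute) and S3 (storage) services using
# boto3, the AWS SDK for Python. Assumes AWS credentials are already
# configured; the AMI ID and bucket name are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
s3 = boto3.client("s3")

# Launch a single small instance
response = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched", response["Instances"][0]["InstanceId"])

# Write an object to S3 - the "write once, read many" store
s3.put_object(
    Bucket="my-example-bucket",  # hypothetical bucket name
    Key="hello.txt",
    Body=b"Hello from AWS",
)
```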

I suppose if I were to re-draw Ryan’s reference architecture, I’d include support (AWS Support) as well as some payment/billing services (after all, this doesn’t come for free) and the AWS Marketplace, for finding and using software applications on the AWS cloud.

One more point: security and compliance (security and service management are not shown, as they are effectively layers that run through all of the components in the architecture). If you implement this model in the cloud, who is responsible? Well, if you contract with Amazon, they are responsible for the AWS global infrastructure and foundation services (compute, storage, database, networking). Everything on top of that (the customisable parts) is up to the customer to secure. Other providers may take a different approach.

Useful links: November 2012

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

A list of items I’ve come across recently that I found potentially useful, interesting, or just plain funny:

Website moving to a new server…

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

My hosting provider has told me that they are moving this website to a new server over the weekend.

All being well, the move will be transparent, but I will also need to point the domain names at new DNS servers so, if I disappear offline for a while on Sunday night, please bear with me – I should be back again once the interwebs have updated…
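
As an aside, a few lines of Python (my addition – nothing to do with the hosting provider’s process) will show when a domain starts resolving to its new address; the hostname and expected IP below are placeholders:

```python
# Quick check of DNS propagation: does the domain resolve to the new
# server's address yet? Hostname and IP are hypothetical placeholders.
import socket

HOST = "example.com"          # hypothetical domain
EXPECTED_IP = "203.0.113.10"  # hypothetical new server address

try:
    resolved = socket.gethostbyname(HOST)
    status = "moved" if resolved == EXPECTED_IP else "still on the old server"
    print(f"{HOST} resolves to {resolved} ({status})")
except socket.gaierror as e:
    print(f"Lookup failed: {e}")
```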

HomePlug Ethernet, part 1

This content is 12 years old. I don't routinely update old blog posts as they are only intended to represent a view at a particular point in time. Please be warned that the information here may be out of date.

As more and more computing devices are being allowed into my living room (Xbox, Smart TV, etc.), I’m starting to find that the Wi-Fi in our house, which seems fine for basic surfing, email, social media, etc., is struggling more and more when it comes to streaming video content.

It could be a problem with my Wi-Fi setup, but I have a pretty good access point, located in a reasonably central position (albeit upstairs), and an Apple AirPort Express acting as a repeater, connected to some speakers in our garden room. I have a feeling that the TV and Xbox are picking up the AirPort Express rather than the main access point (there’s no way to tell on the AirPort Express, as its diagnostics are almost non-existent) and the lengthy Wi-Fi journey between access points may be the cause of my problems. I could redesign the network, but it works for streaming Spotify to the garden room/kitchen, so I started to consider alternatives.

Creating CAT5E/6 cable runs around the house is just too disruptive (I did consider it when we extended a few years ago, but it was quite expensive too), so I started to look at running Ethernet over the household electrical system with HomePlug devices.

A bit of crowdsourcing (asking around on Twitter) turned up quite a bit of advice:

  • Devolo dLAN devices seemed to be well-regarded and I nearly bought a dLAN 500 AVtriple+ starter kit.
  • A few people mentioned the TP-Link Powerline products too.
  • Some people told me to go for faster connections (500Mbps) and that slower devices may be limited by 10/100Mbps Ethernet connections.
  • Others suggested higher speeds are more vulnerable to overheating and interference (that was another common theme – depending on the household wiring it seems you might not get very close to the stated maximum).

Ultimately, whatever I use will mostly be streaming content from the Internet (BBC iPlayer, etc.) over my ADSL connection (which runs at about 6Mbps downstream), so the home network shouldn’t be the bottleneck once I get off Wi-Fi and onto some copper.
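
As an aside (and my addition to this post), it will be worth measuring the actual throughput once the adapters are in. iperf is the usual tool for this, but a rough-and-ready test needs nothing more than Python’s standard library: run the sketch below in “server” mode on one machine, then as “client <server-ip>” on another. The port and transfer size are arbitrary choices:

```python
# Rough TCP throughput test between two LAN machines, e.g. to compare
# Wi-Fi against HomePlug. Usage: "python speed.py server" on one machine,
# "python speed.py client <server-ip>" on the other. Requires Python 3.8+.
import socket
import sys
import time

PORT = 5001
CHUNK = 64 * 1024
TOTAL = 100 * 1024 * 1024  # client sends 100 MB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"{received / 1e6:.1f} MB in {secs:.1f}s "
              f"= {received * 8 / secs / 1e6:.1f} Mbit/s from {addr[0]}")

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        sent, payload = 0, b"\x00" * CHUNK
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)

if __name__ == "__main__":
    if len(sys.argv) > 2 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```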

I mentioned that I nearly bought the Devolo kit, so why didn’t I? Well, just as I was getting ready to purchase, Power Ethernet (@PowerEthernet) picked up on my tweet and suggested I take a look at their product, which is really rather neat…

Instead of plugging into a socket (either with or without pass-through power), the Power Ethernet units replace a standard UK double socket to provide a single power socket and four 200Mbps Ethernet ports. You need a pair (of course), but they work together to create an encrypted (AES-128) mesh network that’s compatible with the HomePlug Alliance AV standard.

Professional installation is recommended but, as Paul Ockenden (@PaulOckenden) highlights in his PCPro article:

“Most competent DIYers should be able to replace an existing two-gang socket with a Power Ethernet faceplate, and indeed the IEE Wiring Regulations do allow for a confident consumer to do this. For a new installation, however, or if you lack the confidence, you’ll need to consult a qualified electrician.”

I haven’t installed mine yet – I only collected them from the Royal Mail today – but I intend to report back when I’ve had a chance to play. In the meantime, Jonathan Margolis (@SimplyBestTech) wrote a short but sweet piece for the FT. PC Pro’s full review suggests they are a bit pricey (almost £282 for a pair, including VAT) but Girls n Gadgets’ Leila Gregory (@Swanny) found them on Amazon at closer to £80 each (as did I).

I’ll write more when I’ve had a chance to use them for a bit…