x64 finally comes of age

To be honest, I got a bit confused by the various 64-bit CPUs (why, for example, didn’t Intel and HP’s Itanium take off when AMD’s AMD64 did, and why does Itanium 2 now look set to succeed?), but whatever the hardware issues, it seems that x64 software has finally come of age. Paul Thurrott reports in WinInfo Daily Update (part of his Windows IT Pro magazine network) that, at the IT Forum this week, Microsoft announced that the Longhorn Server wave of products will be 64-bit only (except Longhorn Server itself, which will be available in both 32- and 64-bit flavours). That means that, for example, the next version of Exchange Server (codenamed Exchange 12) will only run on a 64-bit platform. There’s no news yet about what is happening on the desktop (except that, like Windows XP, Windows Vista looks set to be available in both 32- and 64-bit editions), but it seems I’d better start saving for a new PC…

Processor area networking

Yesterday, I was at a very interesting presentation from Fujitsu-Siemens Computers. It doesn’t really matter who the OEM was – it was the concept that grabbed me, and I’m sure IBM and HP will also be looking at this and that Dell will jump on board once it hits the mass market. That concept was processor area networking.

We’ve all got used to storage area networks (SANs) in recent years – the concept being to separate storage from servers so that a pool of storage can be provided as and when required.

Consider an e-mail server with 1500 users and 100Mb mailbox limits. When designing such a system, it is necessary to separate the operating system, database, database transaction logs and message transfer queues for recoverability and performance. The database might also be split for fast recovery of VIPs’ mailboxes, but the basic requirement is to provide up to 150Gb of storage for the database (1500 users x 100Mb). A further 110% of capacity is then required for database maintenance, and all of a sudden the disk space needed for the database jumps to 315Gb – and that doesn’t include the operating system, database transaction logs or message transfer queues!
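
A quick back-of-an-envelope calculation makes the point (a Python sketch using the figures above; the 110% overhead is the maintenance allowance already mentioned):

# Mailbox database sizing using the figures above
users = 1500
mailbox_limit_mb = 100

database_mb = users * mailbox_limit_mb          # 150,000Mb, i.e. 150Gb
maintenance_overhead = 1.10                     # a further 110% for offline maintenance
provisioned_mb = database_mb * (1 + maintenance_overhead)

print(f"Database size:      {database_mb / 1000:.0f}Gb")      # 150Gb
print(f"With 110% overhead: {provisioned_mb / 1000:.0f}Gb")   # 315Gb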

Single instance storage might reduce this number, as would the fact that most users won’t have a full mailbox, but most designers will provide the maximum theoretical capacity “just in case” because to provision it later would involve: gaining management support for the upgrade; procuring the additional hardware; and scheduling downtime to provide the additional storage (assuming the hardware is able to physically accommodate the extra disks).

Multiply this out across an organisation and that is a lot of storage sitting around “just in case”, increasing hardware purchase and storage management costs in the process. Then consider the fact that storage hardware prices are continually dropping and it becomes apparent that the additional storage could probably have been purchased at a lower price when it was actually needed.

Using a SAN, coupled with an effective management strategy, storage can be dynamically provisioned (or even deprovisioned) on a “just in time” basis, rather than specifying every server with extra storage to cope with anticipated future requirements. No longer is 110% extra storage capacity required on the e-mail server in case the administrator needs to perform offline defragmentation – they simply ask the SAN administrator to provision that storage as required from the pool of free space (which is still needed, but is smaller than the sum of all the free space on all of the separate servers across the enterprise).
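
To see why the shared pool can be smaller, consider a purely hypothetical estate of ten such e-mail servers, each of which would traditionally carry its own 110% maintenance headroom even though only a couple will ever run offline maintenance at the same time (the concurrency figure is an assumption for illustration):

# Hypothetical comparison: per-server headroom vs. a shared SAN pool
servers = 10
database_gb = 150                               # database size per server, as above
headroom_per_server_gb = database_gb * 1.10     # 110% maintenance headroom each

# Traditional approach: every server carries its own headroom
dedicated_headroom_gb = servers * headroom_per_server_gb

# SAN approach: assume at most two servers run offline maintenance concurrently
concurrent_maintenance = 2
pooled_headroom_gb = concurrent_maintenance * headroom_per_server_gb

print(f"Dedicated headroom: {dedicated_headroom_gb:.0f}Gb")   # 1650Gb
print(f"Pooled headroom:    {pooled_headroom_gb:.0f}Gb")      # 330Gb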

Other advantages include the co-location of all mission-critical data (instead of it being spread around a number of diverse server systems) and the ability to manage that data effectively for disaster recovery and business continuity service provision. Experienced SAN administrators are required to manage the storage, but there are associated manpower savings elsewhere (e.g. no longer having to manage backups for a diverse set of servers, each holding its own mission-critical data).

A SAN is only part of what Fujitsu-Siemens Computers are calling the dynamic data centre, moving away from the traditional silos of resource capability.

Processor area networking (PAN) takes the SAN storage concept and applies it to the processing capacity provided for data centre systems.

So, taking the e-mail server example further, it is unlikely that all of an organisation’s e-mail would be placed on a single server, and as the company grows (organically or by acquisition), additional capacity will be required. Traditionally, each server would be specified with spare capacity (within the finite constraints of the number of concurrent connections that can be supported) and, over time, new servers would be added to handle the growth. In an ideal world, mailboxes would be spread across a farm of inexpensive servers, rapidly bringing new capacity online and moving mailboxes between servers to marry demand with supply.
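
Purely as an illustration of the kind of placement logic that implies (the mailbox sizes and server names are hypothetical), a simple greedy approach that always puts the next mailbox on the least-loaded server keeps a farm evenly balanced as capacity is added:

# Hypothetical sketch: greedy mailbox placement across a farm of servers,
# always placing the next mailbox on the least-loaded server
mailbox_sizes_mb = [80, 25, 100, 60, 10, 95, 40, 70, 30, 55]   # made-up sizes
servers = {"server1": 0, "server2": 0, "server3": 0}            # Mb used per server

for size in mailbox_sizes_mb:
    least_loaded = min(servers, key=servers.get)
    servers[least_loaded] += size

for name, used in servers.items():
    print(f"{name}: {used}Mb")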

Many administrators will acknowledge that servers typically average only 20% utilisation. By removing all input/output (I/O) capabilities from the server, diskless processing units (effectively blade servers) can be provided instead. These are connected to control blades, which manage the processor area network and divert I/O to the SAN or the network as appropriate.
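
As a rough, hypothetical illustration of what that utilisation figure means once processing capacity is pooled (the 60% target is my assumption, not a vendor figure):

# Hypothetical consolidation sketch based on the ~20% average utilisation figure
import math

traditional_servers = 50
average_utilisation = 0.20      # typical average utilisation, as quoted above
target_utilisation = 0.60       # assumed safe target for a pooled blade farm

# Total useful work, expressed in "fully-utilised server" equivalents
useful_work = traditional_servers * average_utilisation

# Blades needed to carry that work at the target utilisation
blades_needed = math.ceil(useful_work / target_utilisation)

print(f"Useful work:   {useful_work:.0f} server-equivalents")
print(f"Blades needed: {blades_needed} (vs. {traditional_servers} traditional servers)")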

Using such an infrastructure in a data centre, along with middleware (to provide virtualisation, automation and integration technologies), it is possible to move away from silos of resource and be completely flexible about how services are allocated to servers, responding to peaks in demand (while acknowledging that there will always be requirements for separation by business criticality or security).

Egenera’s BladeFrame technology is one implementation of processor area networking and, last week, Fujitsu-Siemens Computers and Egenera announced an EMEA-wide deal to integrate Egenera BladeFrame technology with Fujitsu-Siemens servers.

I get the feeling that processor area networking will be an interesting technology area to watch. With virtualisation rapidly becoming accepted as an approach for flexible server provision (and not just for test and development environments), the PAN approach is a logical extension to this and it’s only a matter of time before PANs become as common as SANs are in today’s data centres.

How about this for a test system…

In one of the SQL Server sessions at last week’s Microsoft Technical Roadshow, Michael Platt showed the first three minutes or so from an MSDN Channel 9 video. In it, we saw one of the systems at Microsoft’s labs in Redmond where ISVs and OEMs assist the SQL Server team with their performance testing and benchmarking – an HP Integrity Superdome system with 64 64-bit Intel Itanium 2 CPUs, 1Tb of RAM and a couple of thousand 18.2Gb disks. Why so many small disks? Apparently it’s about providing parallel read capacity to increase the overall system throughput and hence run the CPUs at their limits.
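
The reasoning is easy to see with some entirely hypothetical numbers (the per-disk transfer rate below is an assumption, not a benchmark figure): aggregate read throughput scales roughly with the number of spindles, so thousands of small disks can feed the CPUs far faster than a smaller number of large disks of similar total capacity:

# Hypothetical illustration of why many small spindles beat a few large ones
# for raw read throughput (the per-disk rate is an assumption, not benchmark data)
per_disk_throughput_mb_s = 40                   # assumed sequential read rate per spindle

small_disks, small_size_gb = 2000, 18.2
large_disks, large_size_gb = 250, 146           # roughly the same total capacity

for disks, size_gb in ((small_disks, small_size_gb), (large_disks, large_size_gb)):
    capacity_tb = disks * size_gb / 1000
    aggregate_mb_s = disks * per_disk_throughput_mb_s   # ignores controller limits
    print(f"{disks} x {size_gb}Gb disks: ~{capacity_tb:.1f}Tb capacity, "
          f"~{aggregate_mb_s}Mb/s aggregate read rate")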

The whole system cost in the region of $5.1m, and the full details of the benchmark tests may be found on the Transaction Processing Performance Council website.

Interestingly, one of the problems encountered during the benchmarking was running out of power to spin up all of the disks and having to install a new power distribution unit at a cost of $250,000!

HP lights-out configuration utility

One of the most significant additions to server hardware in recent years has been the inclusion of on-board management facilities. HP, IBM and Dell all have their own hardware implementations, but I’ve been looking at a great piece of software for the Compaq/HP Remote Insight Lights-Out Edition (RILOE) cards – the HP Lights-Out Configuration Utility (cpqlocfg.exe). This can be used (along with appropriate security credentials and an XML configuration file) to remotely manage servers from the command line, for example:

cpqlocfg -s ipaddress -v -f poweron.xml

poweron.xml is a modified version of one of the HP-supplied sample scripts; it logs on to the server, sets write access and turns the power on. Full documentation on the scripting interface is available from the HP website.
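
The same command lends itself to scripting across a batch of machines. Here’s a minimal Python sketch, assuming cpqlocfg is on the path, that the same poweron.xml and credentials are valid for every card, and using an obviously hypothetical list of RILOE addresses:

# Minimal sketch: power on a batch of servers via their RILOE cards,
# re-using the documented cpqlocfg -s address -v -f script.xml syntax
import subprocess

riloe_addresses = ["192.168.0.101", "192.168.0.102", "192.168.0.103"]  # hypothetical

for address in riloe_addresses:
    result = subprocess.run(
        ["cpqlocfg", "-s", address, "-v", "-f", "poweron.xml"],
        capture_output=True, text=True
    )
    status = "OK" if result.returncode == 0 else f"failed ({result.returncode})"
    print(f"{address}: {status}")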

Tracking down IBM BIOS updates

The IBM website is not always the easiest to navigate and I spent ages today tracking down the latest BIOS for a number of servers. To save someone else the same trouble in future, the quickest way to find the latest BIOS for a PC or server is to search for +flash +BIOS +update +modelnumber.

More search tips are available from the IBM website.