Weeknote 17: Failed demos, hotel rooms, travel and snippets of exercise (Week 18. 2018)

This week, I’ve learned that:

  • I must trust my better judgement and never allow anyone to push me into demonstrating their products without a properly rehearsed demo and the right equipment…
  • There are people working in offices today who not only claim to be IT illiterate but seem to think that’s acceptable in the modern workplace.
  • That operations teams have a tremendous amount of power to disregard and even override recommendations provided by architects who are paid to provide solid technical advice.
  • That, in 2018, some conference organisers not only think an all-male panel is acceptable but are hostile when given feedback…

I’ve also:

  • Gone on a mini-tour of Southern England working in London, Bristol and Birmingham for the first four days of the week. It did include a bonus ride on a brand new train though and a stint in first class (because it was only £3 more than standard – I’ll happily pay the difference)!
  • Taken a trip down memory lane, revisiting the place where I started my full-time career in 1994 (only to be told by a colleague that he wasn’t even born in 1994).
  • Squeezed in a “run” (actually more like a slow shuffle) as I try to fit exercise around a busy work schedule and living out of a suitcase.
  • Managed to take my youngest son swimming after weeks of trying to make it home in time.
  • Written my first blog post that’s not a “weeknote” in months!
  • Picked up a writing tip to understand the use of the passive voice.

So the week definitely finished better than it started and, as we head into a long weekend, the forecast includes a fair amount of sunshine – hopefully I’ll squeeze in a bike ride or two!

Designing a private cloud infrastructure

A couple of months ago, Facebook released a whole load of information about its servers and datacentres in a programme it calls the Open Compute Project. At around the same time, I was sitting in a presentation at Microsoft, where I was introduced to some of the concepts behind their datacentres.  These are not small operations – Facebook’s platform currently serves around 600 million users and Microsoft’s various cloud properties account for a good chunk of the Internet, with the Windows Azure appliance concept under development for partners including Dell, HP, Fujitsu and eBay.

It’s been a few years since I was involved in any datacentre operations and it’s interesting to hear how times have changed. Whereas I knew about redundant uninterruptible power sources and rack-optimised servers, the model is now about containers of redundant servers and the unit of scale has shifted.  An appliance used to be a 1U (pizza box) server with a dedicated purpose but these days it’s a shipping container full of equipment!

There’s also been a shift from keeping the lights on at all costs, towards efficiency. Hardly surprising, given that the IT industry now accounts for around 3% of the world’s carbon emissions and we need to reduce the environmental impact.  Google’s datacentre design best practices are all concerned with efficiency: measuring power usage effectiveness; measuring and managing airflow; running warmer datacentres; using “free” cooling; and optimising power distribution.
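
As a quick aside on the first of those practices: power usage effectiveness (PUE) is simply the ratio of total facility energy to the energy delivered to IT equipment. A minimal sketch of the calculation (the figures are invented purely for illustration):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy.
    1.0 would mean every watt reaches the IT equipment; traditional datacentres are often nearer 2.0."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures only: 1.5 means half as much energy again goes on cooling, power losses, etc.
print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5
```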

So how do Microsoft (and, presumably others like Amazon too) design their datacentres? And how can we learn from them when developing our own private cloud operations?

Some of the fundamental principles include:

  1. Perception of infinite capacity.
  2. Perception of continuous availability.
  3. Drive predictability.
  4. Taking a service provider approach to delivering infrastructure.
  5. Resilience over redundancy mindset.
  6. Minimising human involvement.
  7. Optimising resource usage.
  8. Incentivising the desired resource consumption behaviour.

In addition, the following concepts need to be adopted to support the fundamental principles:

  • Cost transparency.
  • Homogenisation of physical infrastructure (aggressive standardisation).
  • Pooling compute resource.
  • Fabric management.
  • Consumption-based pricing.
  • Virtualised infrastructure.
  • Service classification.
  • Holistic approach to availability.
  • Computer resource decay.
  • Elastic infrastructure.
  • Partitioning of shared services.

In short, provisioning the private cloud is about taking the same architectural patterns that Microsoft, Amazon, et al use for the public cloud and implementing them inside your own data centre(s). Thinking service, not server, to develop an internal infrastructure as a service (IaaS) proposition.

I won’t expand on all of the concepts here (many are self-explanatory), but some of the key ones are:

  • Create a fabric with resource pools of compute, storage and network, aggregated into logical building blocks.
  • Introduce predictability by defining units of scale and planning activity based on predictable actions (e.g. certain rates of growth).
  • Design across fault domains – understand what tends to fail first (e.g. the power in a rack) and make sure that services span these fault domains.
  • Plan upgrade domains (think about how to upgrade services and move between versions so service levels can be maintained as new infrastructure is rolled out).
  • Consider resource decay – what happens when things break?  Think about component failure in terms of service delivery and design for that. In the same way that a hard disk has a number of spare sectors that are used when others are marked bad (and eventually too many fail, so the disk is replaced), take a unit of infrastructure and leave faulty components in place (but disabled) until a threshold is crossed, after which the unit is considered faulty and is replaced or refurbished (see the sketch below).
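
To make that resource decay idea concrete, here’s a minimal Python sketch – the class name, container size and 20% threshold are all invented for illustration. Failed components are disabled in place, and only when the failure ratio crosses the threshold is the whole unit flagged for replacement or refurbishment:

```python
class InfrastructureUnit:
    """A pool of components (e.g. servers in a container) managed as a single unit of scale."""

    def __init__(self, name, component_count, decay_threshold=0.20):
        self.name = name
        self.component_count = component_count
        self.decay_threshold = decay_threshold  # fraction of failures tolerated before replacement
        self.failed = set()

    def mark_failed(self, component_id):
        """Record a failed component; it is disabled in place rather than repaired immediately."""
        self.failed.add(component_id)

    @property
    def decayed(self):
        """True once enough components have failed that the whole unit should be swapped out."""
        return len(self.failed) / self.component_count >= self.decay_threshold


# Hypothetical container of 2,500 servers: individual failures are simply tolerated...
unit = InfrastructureUnit("container-01", component_count=2500)
for server in range(400):
    unit.mark_failed(server)
print(unit.decayed)  # False until 20% (500 servers) have failed, at which point the unit is replaced
```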

A smaller company with a small datacentre may still think in terms of server components – larger organisations may be dealing with shipping containers.  Regardless of the size of the operation, the key to success is thinking in terms of services, not servers, and designing public cloud principles into private cloud implementations.

Spending money to increase organisational agility at a time of economic uncertainty

Last week I attended a Microsoft TechNet event in Birmingham, looking at Microsoft systems management and the role of System Center. The presenter was Gordon McKenna, who has a stack of experience with the System Center products (in particular System Center Operations Manager, for which he is an MVP) but had only a limited time to talk about the products as the first half of his session was bogged down in Microsoft’s Infrastructure Optimisation (IO) marketing.

I’ve written about IO before and, since then, I’ve found that the projected savings were often difficult to relate to a customer’s particular business model, but as an approach for producing a plan to increase the agility of the IT infrastructure, the Microsoft IO model can be a useful tool.

For me, the most interesting part of Gordon’s IO presentation was seeing the analysis of over 15,000 customers with more than 500 employees who have taken part in these studies. I’m sure that most IT Managers would consider their IT to be fairly standardised, but the figures seem to suggest otherwise:

| Capability | Basic | Standardised | Rationalised | Dynamic |
|---|---|---|---|---|
| Core | 91.1% | 8.7% | 0.2% | 0.0% |
| Identity and Access Management | 27.9% | 62.8% | 5.9% | 3.4% |
| Desktop, Device and Server Management | 90.1% | 8.9% | 0.7% | 0.3% |
| Security and Networking | 30.5% | 65.0% | 3.0% | 1.4% |
| Data Protection and Recovery | 28.0% | 34.9% | 37.1% | 0.0% |
| Security Process | 81.1% | 11.9% | 7.0% | 0.0% |
| ITIL/COBIT-based Management Process | 68.9% | 20.7% | 7.1% | 3.3% |
| Business Productivity | | | | |
| Unified Communications (Conferencing, Instant Messaging/Presence, Messaging, Voice) | 96.3% | 3.5% | 0.2% | 0.0% |
| Collaboration (Collaborative Workspaces and Portals) | 76.5% | 21.3% | 1.4% | 0.7% |
| Enterprise Content Management (Document and Records Management, Forms, Web Content Management) | 86.7% | 12.9% | 0.3% | 0.2% |
| Enterprise Search | 95.0% | 4.1% | 0.4% | 0.5% |
| Business Intelligence (Data Warehousing, Performance Management, Reporting and Analysis) | 88.3% | 10.5% | 0.8% | 0.3% |
| Application Platform | | | | |
| User Experience (Client and Web Development) | 75.3% | 20.9% | 2.5% | 1.3% |
| Data Management (Data Infrastructure and Data Warehouse) | 87.6% | 11.7% | 0.6% | 0.0% |
| SOA and Business Process (Process Workflow and Integration) | 80.7% | 18.0% | 1.2% | 0.1% |
| Development (Application Lifecycle Management, Custom Applications, Development Platform) | 72.2% | 26.2% | 1.4% | 0.3% |
| Business Intelligence (Data Warehousing, Performance Management, Reporting and Analysis) | 88.3% | 10.5% | 0.8% | 0.3% |

As we enter a world recession/depression and money is tight, it’s difficult to justify increases in IT spending – but it’s potentially a different story if those IT projects can help to increase organisational agility and reduce costs overall.

Alinean founder and CEO, Tom Pisello, has identified what he calls “simple savvy savings” to make anyone a cost-cutting hero. In a short white paper, he outlines nine projects that could each save money (and at the same time move the organisation further across the IO model):

  • Server virtualisation (to reduce infrastructure investments, energy and operations overhead costs, and to help improve server administration).
  • Database consolidation.
  • Improved storage management.
  • Leveraging licensing agreements to save money.
  • Implement server systems management to reduce administration costs.
  • Virtualised desktop applications to help reduce application management costs.
  • Save on PC engineering costs through PC standardisation and security management.
  • Unified communications.
  • Collaboration.

Looking at the figures quoted above, it seems that many IT organisations have a way to go in order to deliver the flexibility and return on investment that their businesses demand, and a few of these projects could be a step in the right direction.

The delicate balance between IT security, supportability and usability

There is a delicate balance between IT security, supportability and usability. Just like the project management trilogy of fastest time, lowest cost and highest quality, you cannot have all three. Or can you?

Take, for example, a fictitious company with an IT-savvy user who has a business requirement to run non-standard software on his (company-supplied) notebook PC. This guy doesn’t expect support – at least not in the sense that the local IT guys will resolve technical problems with the non-standard build but he does need them to be able to do things like let his machine access the corporate network and join the domain. Why does he need that? Because without it, he has to authenticate individually for every single application. In return, he is happy to comply with company policies and to agree to run the corporate security applications (anti-virus, etc.). Everyone should be happy. Except it doesn’t work that way because the local IT guys are upset when they see something different. Something that doesn’t fit their view of the normal world – the way things should be.

I can understand that.

But our fictitious user’s problem goes a little further. In their quest to increase network security, the network administrators have done something in Cisco-land to implement port security. Moving between network segments (something you might expect to do with a laptop) needs some time for the network to catch up and allow the same MAC address to be used in a different part of the network. And then, not surprisingly, the virtual switch in the virtualisation product on this non-standard build doesn’t work when connected to the corporate LAN (it’s fine on other networks). What is left is a situation whereby anything outside the norm is effectively unsupportable.

Which leaves me thinking that the IT guys need to learn that IT is there to support the business (not the other way around).

Of course, this fictitious company and IT-savvy user are real. I’ve just preserved their anonymity by not naming them here, but discovering this (very real) situation has led me to believe that company-standard notebook builds are not the way to go. What we need is to think outside the box a little.

Three years ago, I blogged about using a virtual machine (VM) for my corporate applications and running this on a non-standard host OS. Technologies exist (e.g. VMware ACE) to ensure that the VM can only be used in the way that it should be. It could be the other way around (i.e. to give developers a virtual machine with full admin rights and let them do their “stuff” on top of a secured base build) but in practice I’ve found it works better with the corporate applications in the VM and full control over the host. For example, I have a 64-bit Windows Server 2008 build in order to use technologies like Hyper-V (which I couldn’t do inside a virtual machine) but our corporate VPN solution requires a 32-bit Windows operating system and some of our applications only work with Internet Explorer 6 – this is easily accommodated using a virtual machine for access to those corporate applications that do not play well with my chosen client OS.

So why not take this a step further? Why do users need a company PC and a home PC? Up until now the justification has been twofold:

  • Security and supportability – clearly separating the work and personal IT elements allows each to be protected from the other for security purposes. But for many knowledge workers, life is not split so cleanly between work and play. I don’t have “work” and “home” any more. I don’t mean that my wife has kicked me out and I sleep under a desk in the office but that a large chunk of my working week is spent in my home office and that I often work at home in the evenings (less so at weekends). The 9 to 5 (or even 8 to 6) economy is no more.
  • Ownership of an asset – “my” company-supplied notebook PC is not actually “mine”. It’s a company asset, provided for my use as long as I work for the company. When I leave, the asset, together with all associated data, is transferred back to the company.

But if work and home are no longer cleanly separated, why can’t we resolve the issue of ownership so that I can have a single PC for work and personal use?

Take a company car as an analogy – I don’t drive different cars for work and for home but I do have a car leased for me by the company (for which I am the registered keeper and that I am permitted to use privately). In the UK, many company car schemes are closing and employees are being given an allowance instead to buy or lease a personal vehicle that is then available for business use. There may be restrictions on the type of vehicle – for example, it may need to be a 4 or 5 door hatchback, saloon or estate car (hatchback, sedan or station-wagon for those of you who are reading this in other parts of the world) rather than a 2-seater sports car or a motorbike.

If you apply this model to the IT world, I could be given an allowance for buying or leasing a PC. The operating system could be Windows, Mac OS X or Linux – as long as it can run a virtual machine with the corporate applications. The IT guys can have their world where everything is a known quantity – it all lives inside a VM – where there will be no more hardware procurement to worry about and no more new PC builds when our chosen vendor updates their product line. It will need the IT guys to be able to support a particular virtualisation solution on multiple platforms but that’s not insurmountable. As for corporate security, Windows Server 2008 includes network access protection (NAP) – Cisco have an equivalent technology known as network access control (NAC) – and this can ensure that visiting PCs are quarantined until they are patched to meet the corporate security requirements.

So it seems we can have security, supportability, and usability. What is really required is for IT managers and architects to think differently.

Now is the time to start planning for Windows Server 2008

I recently attended a presentation at which the CA (formerly Computer Associates) Director of Strategic Alliances, Dan Woolley, spoke about how CA is supporting Windows Server 2008.

CA is not a company that I associate with bringing products to market quickly and I understand why companies are often reluctant to invest in research and development in order to support new operating system releases.  Hardware and software vendors want to see customer demand before they invest – just look at the debacle with Windows Vista driver support! There are those that blame Microsoft for a lack of device support in Windows Vista but Microsoft worked with vendors for years (literally) before product launch and even now, a full year later, many items of hardware and software have issues because they have not been updated. That’s not Microsoft’s fault but a lack of foresight from others.

It’s true that some minor releases are probably not worth the effort, but supporting a new version of Windows, or a new version of a major server product like Exchange Server or SQL Server should be a no-brainer.  Shouldn’t it?

It’s the same with 64-bit driver support (although Microsoft is partly to blame there, as their own 64-bit products seem to lag behind the 32-bit counterparts – hopefully that will change with the "Longhorn" wave of products).

Dan Woolley’s presentation outlined the way that CA views new product releases and, whilst his view was that they are ready when the customers are ready, from my perspective it felt like a company justifying why they wait to provide new product versions.

He made the point that CIOs expect infrastructure to be extensible, stable, flexible and predictable (they abhor change as the impact of change on thousands of customers, users, and servers is difficult to understand) and how they:

  • Deliver services to facilitate corporate business (so require a stable infrastructure platform).
  • Work to maximise IT investments.

That may be true but Woolley didn’t consider the cost of running on legacy systems.  Last year I was working with a major accounting firm that was desperate to move away from NT because the extended support costs were too high (let alone the security risks of running on an operating system for which no new patches are being developed).  As recently as 2005, I worked with a major retailer whose back office systems in around 2000 outlets are running on Windows NT 3.51 and whose point of sale system depends on FoxPro for MS-DOS!  Their view is that it works and that wholesale replacement of the infrastructure is expensive.  The problem was that they were struggling to obtain spare parts for legacy hardware and modern hardware didn’t support the old software.  They were literally running on borrowed time (and still are, to the best of my knowledge).

CA’s view is that, when it comes to product deployment, there are five types of organisation:

  • Innovators – investigating new products in the period before they are launched – e.g. Microsoft’s Technology Adoption Programme (TAP) customers.
  • Early adopters – who work with new products from the moment they are launched up to around about the 9 month point.
  • General adoption – product deployment between 9 months and 4 years.
  • Late adopters – deploying products towards the end of their mainstream support (these organisations are probably running Windows 2000 and are only now looking at a move to Windows Server 2003).
  • Laggards – the type of customers that I described earlier.

Looking at the majority of that group, there are a number of deployment themes:

  • Inquiring (pre-launch).
  • Interest and testing (post-launch).
  • Budgeting (~4 months after launch).
  • Prototyping and pilots (~1 year after launch).
  • Deployment (~18 months after launch).
  • Replacement/upgrade programmes (~5 years after launch – coincidentally at the end of Microsoft’s mainstream support phase).
  • Migration (7+ years after launch – onto another platform altogether).

What is interesting though is that there are also two distinct curves for product deployment:

  • Sold licenses.
  • Deployed enterprise licenses.

This is probably because of the way that project financing works.  I know from bitter experience that it’s often better to buy what I need up front and deploy later because if I wait until the moment that I need some more licenses, the budget will not be forthcoming.  It seems that I’m not alone in this.

CA view their primary market as the customers on a general/late adoption timescale.  That sounds to me like a company trying to justify why its products are always late to market.  Windows Server 2008 will launch early next year and serious partners need to be there with products to work with the new operating system right from the launch – innovators expect a few problems along the way but when I’m trying to convince customers to be early adopters I don’t want to be held back by non-existent management agents, backup clients, etc.

CA’s view supports the "wait for service pack 1" mentality but then Woolley closed his presentation by stating that CA builds on Microsoft platforms because they consider them to be extensible, stable, flexible and predictable and because they will allow the delivery of service to facilitate corporate business imperatives and maximise IT investments.  He stated that CA has been working with Microsoft on Windows Server 2008 architecture reviews, design previews, TAPs and logo testing but if they are truly supportive of Microsoft’s new server operating system, then why do they consider their primary market as not being ready for another year?

Once upon a time hardware led software but, in today’s environment, business is supported by software and the hardware is just something on which to run it. That means we have to consider a services model.  Microsoft’s move towards regular monthly patches supports this (they are certainly less focused on service packs, with the last service pack for Windows XP – the client operating system with the largest installed base – having shipped over three years ago).

Windows Server 2008 is built on shared code with Windows Vista, so the early hiccups with device support should already have been ironed out.  That means that Windows Server 2008 should not be viewed as "too new" or "too disruptive" (it will actually ship at service pack 1 level) and, all being well, the adoption curve may be quicker than some think.

A light-hearted look at infrastructure optimisation

I’ve written before about Microsoft infrastructure optimisation (IO), including a link to the online self-assessment tool but I had to laugh when I saw James O’Neill’s post on the subject. I’m sure that James won’t mind me repeating his IO quiz here – basically the more answers from the right side of the table, the more basic the IT operations are and the more from the left side, the more dynamic they are. Not very scientific (far less so than the real analysis tools) and aimed at a lower level than a real IO study but amusing anyway:

| Prompt | More dynamic | In between | More basic |
|---|---|---|---|
| The rest of my company… | …involves the IT department in their projects. | …accepts IT guys have a job to do. | …tries to avoid anyone from IT. |
| My team… | …all hold some kind of product certification. | …read books on the subject. | …struggle to stay informed. |
| What worries me most in the job is… | …fire, flood or other natural disaster. | …what an audit might uncover. | …being found out. |
| My department reminds me of… | …’Q branch’ from a James Bond movie. | …Dilbert’s office. | …trench warfare. |
| Frequent tasks here rely on… | …automated processes. | …a checklist. | …Me. |
| What I like about this job is… | …delivering on the promise of technology. | …it’s indoors and the hours are OK. | …I can retire in 30 years. |
| If asked about Windows Vista I… | …can give a run down of how its features would play here. | …repeat what the guy in PC World told me. | …change the subject. |
| New software generally is… | …an opportunity. | …a challenge. | …something we ban. |
| My organization sees “software as a service” as a way to… | …do more things. | …do the same things, more cheaply. | …do the same things without me. |
| Next year this job will be… | …different. | …the same. | …outsourced. |

ROI, TCO and other TLAs that the bean counters use

A few days ago I posted a blog entry about Microsoft infrastructure optimisation and I’ve since received confirmation of having been awarded ValueExpert certification as a result of the training I received on the Alinean toolset from Alinean’s CEO and recognised ROI expert Tom Pisello.

One of the things that I’ve always found challenging for IT infrastructure projects has been the ability to demonstrate a positive return on investment (ROI) – most notably when I was employed specifically to design and manage the implementation of a European infrastructure refresh for a major fashion design, marketing and retail organisation and then spent several months waiting for the go-ahead to actually do something as the CIO (based in the States) was unconvinced of the need to spend that sort of money in Europe. I have, on occasion, used a house-building analogy whereby a solid IT infrastructure can be compared to the plumbing, or the electrical system – no-one would consider building a house without these elements (at least not in the developed world) and similarly it is necessary to spend money on IT infrastructure from time to time. Sadly that analogy isn’t as convincing as I’d like, and for a long while the buzzword has been “total cost of ownership” (TCO). Unfortunately this has been misused (or oversold) by many vendors and those who control the finances are wary of TCO arguments. With even ROI sometimes regarded as a faulty measurement instrument, it’s interesting to look at some of the alternatives (although it should be noted that I’m not an economist).

Looking first at ROI:

ROI = (benefits – investments)/investments x 100%

Basically, ROI is looking to see that the project will return a greater financial benefit than is invested in order to carry it out. On the face of it, ROI clearly describes the return that can be expected from a project relative to the cost of investment. That’s all well and good but for IT projects the benefits may vary from year to year, may be intangible, or (more likely) will be realised by individual business units using the IT, rather than by the IT department itself – they may not even be attributed to the IT project. Furthermore, because all the benefits and costs are cumulative over the period of analysis, ROI fails to take into account the value of money (what would happen if the same amount was invested elsewhere) or reflect the size of the required investment.
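
As a worked (and entirely hypothetical) illustration of that simple ROI calculation, here’s a minimal Python sketch – the figures are invented purely for the example:

```python
def roi(benefits: float, investments: float) -> float:
    """Simple ROI: (benefits - investments) / investments x 100%."""
    return (benefits - investments) / investments * 100

# Hypothetical project: £100,000 invested, £150,000 of cumulative benefits over the analysis period
print(roi(benefits=150_000, investments=100_000))  # 50.0 (%)
```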

To try and address the issue of the value of money, net present value (NPV) can be used – a figure used to determine what money spent today would be worth in future, given a defined interest rate. The NPV formula is a complex one as it takes into account the cost of money now, next year, the year after and so on for the entire period of analysis:

NPV = I0 + I1/(1+r) + I2/(1+r)^2 + … + In/(1+r)^n

(where In is the net benefit (cash flow) in year n and r is the cost of capital – known as the discount rate).

The theory is that future benefits are worth less than they would be today (as the value of money will have fallen). Furthermore, the discount rate may be adjusted to reflect risk (risk-adjusted discount rate). NPV will give a true representation of savings over the period of analysis and will recognise (and penalise) projects with large up-front investments but it doesn’t highlight how long it will take to achieve a positive cash flow and is only concerned with the eventual savings, rather than the ratio of investment to savings.

That’s where risk-adjusted ROI comes in:

Risk-adjusted ROI = NPV (benefits – investments) / NPV (investments).

By adjusting the ROI to reflect the value of money, a much more realistic (if less impressive) figure is provided for the real return that a project can be expected to realise; however, just like ROI, it fails to highlight the size of the required investment. Risk-adjusted ROI does demonstrate that some thought has been put into the financial viability of a project and as a consequence it can aid with establishing credibility.
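
A minimal Python sketch shows how NPV and risk-adjusted ROI fit together – the cash flows and the 8% discount rate are assumptions invented for the example:

```python
def npv(cash_flows, r):
    """Net present value: cash_flows[0] is year 0, each later year is discounted at rate r."""
    return sum(cf / (1 + r) ** year for year, cf in enumerate(cash_flows))

# Hypothetical project: £120,000 invested up front, £50,000 of benefits per year for four years
investments = [120_000, 0, 0, 0, 0]
benefits = [0, 50_000, 50_000, 50_000, 50_000]
net = [b - i for b, i in zip(benefits, investments)]

r = 0.08  # assumed discount rate (cost of capital)
print(npv(net, r))                              # NPV of the net cash flows over the analysis period
print(npv(net, r) / npv(investments, r) * 100)  # risk-adjusted ROI, as a percentage
```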

Internal rate of return (IRR) is the discount rate used in NPV calculations in order to drive the NPV formula to zero. If that sounds like financial gobbledegook to you (as it does to me), then think of it as the value that an investment would need to generate in order to be equivalent to an alternative investment (e.g. keeping the money in the bank). Effectively, IRR is about opportunity cost and, whilst it is not widely used, a CFO may use it as a means of comparing project returns, although it neither indicates the level of up-front investment required nor the financial value of any return. Most critically (for me), it’s a fairly crude measure that fails to take into account strategic vision or any dependencies between projects (e.g. if the green light is given to a project that relies on the existence of some infrastructure with a less impressive IRR).

Finally, the payback period. This one’s simple – it’s a measure of the amount of time that is taken for the cumulative benefits in a project to outweigh the cumulative investments (i.e. to become cash flow positive) – and because it’s so simple, it’s often one of the first measures to be considered. Sadly it is flawed as it fails to recognise the situation where a project initially appears to be cash flow positive but then falls into the red due to later investments (such as an equipment refresh). It also focuses on fast payback rather than strategic investments.
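
Continuing the same hypothetical cash flows, here’s a rough sketch of how IRR and the simple payback period could be computed – the bisection search is just one way to find the rate numerically, not a prescribed method, and neither function is a substitute for a proper financial library:

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """The discount rate that drives NPV to zero, found by bisection.
    Assumes a conventional profile (outflow first, inflows later), so NPV falls as the rate rises."""
    def npv(r):
        return sum(cf / (1 + r) ** year for year, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """First year in which the cumulative net cash flow turns positive (None if it never does)."""
    cumulative = 0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative > 0:
            return year
    return None

net = [-120_000, 50_000, 50_000, 50_000, 50_000]  # hypothetical net cash flows per year
print(irr(net))             # internal rate of return (as a decimal)
print(payback_period(net))  # years until the cumulative cash flow turns positive
```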

As can be seen, none of these measures is perfect but each organisation will have its preferred method of measuring the success (or failure) of an IT project in financial terms. Indeed, each measure may be useful at a different level in an organisation – an IT manager focused on annual budgets may not be concerned with the cost of money but will want a fast payback and an impressive ROI, whereas the CIO may be more concerned with the risk-adjusted ROI and the CFO may only consider IRR. It’s worth doing some homework on the hurdle rates that exist within an organisation and, whilst I may not be an expert in financial management, sometimes it doesn’t hurt to understand the language that the bean counters use.

Microsoft infrastructure optimisation

I don’t normally write about my work on this blog (at least not directly) but this post probably needs a little disclaimer as, a few months ago, I started a new assignment working in my employer’s Microsoft Practice and, whilst I’m getting involved in all sorts of exciting stuff, it’s my intention that a large part of this work will involve consultancy engagements to help customers understand the opportunities for optimising their infrastructure. Regardless of my own involvement in this field, I’ve intended to write a little about Microsoft’s infrastructure optimisation (IO) model since I saw Garry Corcoran of Microsoft UK present at the Microsoft Management Summit highlights event back in May… this is a little taster of what IO (specifically Core IO) is about.

Based on the Gartner infrastructure maturity model, the Microsoft infrastructure optimisation model is broken into three areas around which IT and security process is wrapped:

  • Core infrastructure optimisation.
  • Business productivity infrastructure optimisation.
  • Application platform infrastructure optimisation.

Organisations are assessed on a number of capabilities and judged to be at one of four levels (compared with seven in the Gartner model):

  • Basic (“we fight fires” – IT is a cost centre) – an uncoordinated, manual infrastructure, knowledge not captured.
  • Standardised (“we’re gaining control” – IT becomes a more efficient cost centre) – a managed IT infrastructure with limited automation and knowledge capture.
  • Rationalised (IT is a business enabler) – managed and consolidated IT infrastructure with extensive automation, knowledge captured for re-use.
  • Dynamic (IT is a strategic asset) – fully automated management, dynamic resource usage, business-linked SLAs, knowledge capture and use automated.

It’s important to note that an organisation can be at different levels for each capability and that the capability levels should not be viewed as a scorecard – after all, for many organisations, IT supports the business (not the other way around) and basic or standardised may well be perfectly adequate, but the overall intention is to move from IT as a cost centre to a point where the business value exceeds the cost of investment. For example, Microsoft’s research (carried out by IDC) indicated that by moving from basic to standardised the cost of annual IT labour per PC could be reduced from $1320 to $580 and rationalisation could yield further savings down to $230 per PC per annum. Of course, this needs to be balanced with the investment cost (however that is measured). Indeed, many organisations may not want a dynamic IT infrastructure as this will actually increase their IT spending; however the intention is that the business value returned will far exceed the additional IT costs – the real aim is to improve IT efficiencies, increase agility and to shift the investment mix.
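
To put those per-PC figures into context, a trivial bit of arithmetic for a hypothetical 1,000-PC estate (the estate size is invented; the per-PC costs are the IDC figures quoted above):

```python
# Annual IT labour cost per PC at each optimisation level (IDC figures quoted above)
basic, standardised, rationalised = 1320, 580, 230
pcs = 1_000  # hypothetical estate size

print(f"Basic -> standardised saves ${(basic - standardised) * pcs:,} per year")
print(f"Standardised -> rationalised saves a further ${(standardised - rationalised) * pcs:,} per year")
```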

Microsoft and its partners make use of modelling tools from Alinean to deliver infrastructure optimisation services (and new models are being released all the time). Even though this is clearly a Microsoft initiative, Alinean was formed by ex-Gartner staff and the research behind core IO was conducted by IDC and Wipro. Each partner has its own service methodology wrapped around the toolset but the basic principles are similar. An assessment is made of where an organisation currently is and where it wants to be. Capability gaps are assessed and further modelling can help in deriving those areas where investment has the potential to yield the greatest business benefit and what will be required in order to deliver such results.

It’s important to note that this is not just a technology exercise – there is a balance to be struck between people, processes and technology. Microsoft has published a series of implementer resource guides to help organisations to make the move from basic to standardised, standardised to rationalised and from rationalised to dynamic.

Links

Core infrastructure self-assessment.
Microsoft infrastructure optimisation journey.

Towards operational excellence on the Microsoft platform

I was sorting out my den/office this weekend and came across a Microsoft operational excellence resource CD. The concept seems quite good (although the content seemed a little out of date, even bearing in mind that it had sat in a pile of “stuff to look at when I have time” for 10 months); however, the operational excellence section of the Microsoft website is worth a look.

More operational advice buried deep in the Microsoft website

Last month I blogged about some useful operational advice on the Microsoft website and I’ve just found a load more in the MSDN library. Specifically, I was looking at the advice for Microsoft BizTalk Server 2004 operations but there is a whole load of guidance there for pretty much all of the Microsoft server products (although I should also point out that in true Microsoft style, each product group has structured its information differently).