I’m often asked why Microsoft’s datacentre regions for Azure in Europe are named as they are:
The Netherlands is “West Europe”
Ireland is “North Europe”
(then there are country-specific regions in the UK and Germany too…)
But Ireland and the Netherlands are (sort of) on the same latitude. So how does that work?
MVP Martina Grom explains it like this:
#azure regions in Europe, simply explained: Dublin is more North, so it is Azure North. Amsterdam is …. West, so its is Azure West. :-) pic.twitter.com/YdHMFNDSEz
I suspect the backstory behind the naming is more down to the way that the United Nations divides Europe up for statistical purposes [source: Wikipedia article on Northern Europe]:
On this map, North Europe is the dark blue area (including the UK and Ireland), whilst West Europe is the cyan area from France across to Austria and Germany (including the Benelux countries).
Last week, I wrote about the architectural work my team is doing to standardise the way we deploy technology. But not everything I do for my customers is a “standard” solution. I also get involved in strategic consulting, stepping away from the solution and looking at the business challenges an organisation is facing, then looking at how Microsoft technologies can be applied to address those issues. Sometimes that’s in the form of one of our core transformational propositions and sometimes I’m asked to deliver something more ad hoc.
One such occasion was a few weeks ago when I worked with a customer to augment their IT team, which doesn’t include an architecture capability. They are preparing for an IT transformation programme, triggered by an office move but also by a need for flexibility in a rapidly growing business. As they’ve matured their IT Service Management approach, they’ve begun to think about technology strategy too, and they wanted some help to create a document, based on a template that had been provided by another consultant.
I like this kind of work – and I’m pretty pleased with the outcome too. What we came up with was a pictorial representation of the IT landscape with a current state (“as-is”) view on the left, a future state (“to-be”) view on the right, and a transformational view of priorities and dependencies (arranged into “swim-lanes”) in the centre.
I’m sure many organisations have similar approaches, but this really was a case of a picture being worth a thousand words. Here, the period covered was short – just two years – but I also suggested adding another view on the right, showing the target state further out, to give a view of the likely medium-term position (e.g. five years).
Each representation of the IT landscape has a number of domains/services (which will eventually relate to service catalogue items, once defined) and, within each service, the main components, colour-coded as follows (there’s a rough sketch in code after this list):
Red (Retire): components that exist but which should be retired (for example, where the technologies used have reached the end of their lifecycle). These must not be used in new solutions.
Amber (Tolerate): components that exist but for which the supporting technologies are reaching the end of their lifecycle or are not strategic. These may only be used in new solutions with approval from senior IT management (i.e. the Head of IT – or the Chief Technology Officer, if there is one).
Green (Mainstream): These are the core building blocks for new solutions.
Blue (Emerging): These are the components and technologies that are being considered, being implemented, or are expected to become part of the landscape within the period being modelled.
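For anyone who prefers to see this kind of taxonomy written down, here’s a minimal sketch in Python of how the colour-coded statuses might be modelled. The names and example component are entirely made up for illustration – this isn’t a real tool, just the classification expressed as code:

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStatus(Enum):
    """The four colour-coded component statuses from the strategy picture."""
    RETIRE = "red"         # must not be used in new solutions
    TOLERATE = "amber"     # new use only with senior IT management approval
    MAINSTREAM = "green"   # core building blocks for new solutions
    EMERGING = "blue"      # under consideration, being implemented, or expected

@dataclass
class Component:
    name: str
    service: str             # the domain/service the component sits within
    status: LifecycleStatus

# Hypothetical example: an ageing file server flagged for retirement
example = Component(name="Legacy file server", service="Data services",
                    status=LifecycleStatus.RETIRE)
print(f"{example.name} ({example.service}): {example.status.name}")
```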
It’s important to recognise that this view of the technology strategy is just a point-in-time snapshot. It can’t be left as a static document and needs to be reviewed periodically. Even so, it gives some guidance from which to generate a plan of activities, so that the target vision can become reality.
I’m sure it’s not new, but it would be good to know the origin of this approach if anyone has used something similar in the past!
I was recently discussing Azure infrastructure services with a customer who has implemented a solution based on Azure Service Manager (ASM – also known as classic mode) but is now looking to move to Azure Resource Manager (ARM).
Moving to ARM has some significant benefits. For a start, we move to declarative, template-driven deployment (infrastructure as code). Under ASM we had programmatic infrastructure deployment, where we wrote scripts to say “Dear Azure, here’s a list of everything I want you to do, in excruciating detail” and deployment ran serially. With ARM we say “Dear Azure, here’s what I want my environment to look like – go and make it happen” and, because Azure knows the dependencies (they are defined in the template), it can deploy resources in parallel. ARM deployments are also idempotent:
If a resource is not present, it will be created.
If a resource is present but has a different configuration, it will be adjusted.
If a resource is present and correctly configured, it will be used.
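To make the declarative model a little more concrete, here’s a minimal sketch in Python of the reconciliation behaviour described in the list above. To be clear, this is not how ARM is implemented (ARM templates are JSON documents and the comparison happens inside Azure) – it’s just the “compare desired state with actual state” idea, with made-up resource names:

```python
# Illustrative sketch of declarative, idempotent deployment (not real ARM code).
desired = {
    "vnet-prod": {"type": "virtualNetwork", "addressSpace": "10.0.0.0/16"},
    "vm-web01": {"type": "virtualMachine", "size": "Standard_D2"},
}
actual = {  # what already exists in the environment (made up for the example)
    "vm-web01": {"type": "virtualMachine", "size": "Standard_A1"},
}

for name, config in desired.items():
    if name not in actual:
        print(f"{name}: not present - creating")             # rule 1: create
    elif actual[name] != config:
        print(f"{name}: configuration differs - adjusting")  # rule 2: adjust
    else:
        print(f"{name}: present and correct - using as-is")  # rule 3: reuse
```

Because the template also declares the dependencies between resources, Azure can work out which of these operations are safe to run in parallel.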
ASM is not deprecated, but new features are only coming to ARM and won’t be back-ported. Even Azure AD now runs under ARM (one of the last services to come across), so there really is very little reason to use ASM.
For a better understanding of what ITIL is, try this:
Certification
ITIL certification is available at several levels (foundation, practitioner, intermediate, expert and master), with foundation being the entry level. The ITIL Foundation syllabus details the knowledge that candidates are expected to demonstrate in order to pass the exam.
Axelos has published three “top tips” articles around the ITIL Foundation certification.
ITIL defines IT service management as:
“The implementation and management of quality IT services that meet the needs of the business. IT service management is performed by IT service providers, through an appropriate mix of people, process and information technology.”
ITIL is an IT management framework describing best practice for IT service management. It’s a library of five books (plus other resources):
Service Strategy.
Service Design.
Service Transition.
Service Operation.
Continual Service Improvement (CSI).
There is also complementary guidance (e.g. release control and validation) – broken out by industry vertical (e.g. health, government) or by technology architecture (e.g. cloud, networking, software).
Basic terminology
Baselines – a starting/reference point, used to look back or to get back to a known state:
ITSM Baseline – used to measure progress against a service improvement plan (SIP).
Configuration Baseline – used for remediation/back-out from a change.
Performance Baseline – the response measured before a service improvement is made.
Business case – justification for spending money (planning tool):
Costs.
Benefits.
Risk.
Potential problems.
Capabilities – the ability to carry out activities (functions and processes).
Functions – team of people and tools carrying out activities:
Who will do something – e.g.:
Service Desk.
Technical Management.
Application Management.
IT Operations Management.
IT Service Management – capabilities providing value to customers in the form of services.
Process – co-ordinated activities to produce an outcome that provides value:
How to do something.
Process Owner vs. Manager
Owner is responsible and accountable for making sure the process does what it should.
Manager is responsible for operational management of the process (reports to the owner).
Resources:
IT infrastructure.
People.
Money.
Tangible assets (used to deliver service, cf. capabilities which are intangible).
Service – means to deliver value
Manage costs on the provider side whilst delivering value to the customer.
Service will have a service strategy.
Service owner – responsible for delivering the service. Also responsible for CSI.
ITSM and Services
The organisation has strategic goals and objectives.
Core business processes are the activities that produce results for the organisation (reliant on the vision).
The IT service organisation exists to execute the core business processes.
IT service management provides repeatable, managed and controlled processes to deliver services.
The IT technical layer – computers, networking, etc. – underpins everything.
Each layer supports the levels above.
Services:
“A means of delivering value to customers by facilitating outcomes customers want to achieve without the ownership of specific costs or risks”
Processes and Functions
Processes:
“A structured set of activities designed to accomplish a specific objective. A process takes one or more defined inputs and turns them into defined outputs.”
Trigger.
Activity.
Dependency.
Sequence.
Processes:
Are measured.
Have specific results.
Meet expectations.
Respond to a specific trigger.
Processes have practitioners, managers and owners (accountable for making sure the process is fit for purpose, including definition of the process).
Functions:
“Grouping of roles that are responsible for performing a defined process or activity.”
Service Desk.
Technical Management.
Application Management.
Facilities Management.
IT Operations Control.
Functions interact and have dependencies.
Responsibility Assignment Matrix (RAM chart) – e.g. RACI:
Responsible.
Accountable.
Consulted.
Informed.
Used to map processes to roles (a made-up example follows).
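As a completely made-up illustration of how RACI maps activities to roles (real assignments will vary by organisation), here’s a small sketch in Python – the activities and roles are hypothetical:

```python
# Hypothetical RACI assignments: activity -> role -> R/A/C/I
raci = {
    "Raise request for change": {
        "Change initiator": "Responsible",
        "Change manager": "Accountable",
        "Service owner": "Consulted",
        "Service desk": "Informed",
    },
    "Authorise change": {
        "Technical management": "Responsible",
        "Change manager": "Accountable",
        "Change advisory board": "Consulted",
        "Service desk": "Informed",
    },
}

for activity, assignments in raci.items():
    print(activity)
    for role, assignment in assignments.items():
        print(f"  {role}: {assignment}")
```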
ITIL Service Lifecycle
Earlier versions of ITIL used to focus on the processes. Version 3 focuses on why those processes are necessary. For foundation level, candidates need to know the objectives, rather than the detail.
The idea for a service is conceptualised, then the service is designed, transitioned into production and maintained through operation – always looking for ways to improve a process or service for customers (and deliver more value).
Services will be born and also retired (death).
The service catalogue details the things on offer, together with service levels.
The processes (and functions) for each stage of the ITIL service lifecycle are:
Service strategy: business relationship management; service portfolio management; financial management; demand management*; IT strategy*.
Service design: design coordination; supplier management; information security management; service catalogue management; IT service continuity management; availability management; capacity management; service level management.
Service transition: transition planning and support; knowledge management; release and deployment management; service asset and configuration management; change management; change evaluation**; service validation and testing**.
Service operation (processes): event management; request fulfilment; access management; problem management; incident management.
Service operation (functions): service desk; IT operations management; application management; technical management.
Continual service improvement: the seven-step improvement process.
* Not required at foundation level.
** Not discussed in detail at foundation level.
The next post in this series will follow soon, looking at service strategy.
These notes were written and published prior to sitting the exam (so this post doesn’t breach any NDA). They are intended as an aid and no guarantee is given or implied as to their suitability for others hoping to pass the exam.
ITIL® is a registered trademark of AXELOS Limited.
I’m not massively into collecting and curating digital video content – I have some family movies, and I stream content from BBC iPlayer, Amazon Video, etc. – pretty normal stuff. Even so, there are times that I think I could use the tech available to me in a better way – and there are times when I find I can do something that I didn’t previously know about!
Today was one of those days. I was studying for an exam and wanted to watch some videos – in the comfort of my living room rather than on a PC – and I was sure there must be a way. I had copies on my Synology NAS but, somewhat frustratingly, the Plex media server wasn’t picking them up (and I wanted to be watching the videos, not playing with Plex!).
Then, when I right-clicked on a video file in Windows Explorer, I spotted an option to “Cast to Device”, which included options for my Samsung TV and also my Bose speakers – though I think the choices will depend on the Digital Living Network Alliance (DLNA) devices that are available on the local network. I selected the TV and found I could create a playlist of videos to watch from the comfort of the sofa – and, even better, the TV remote can be used to pause/resume playback (the PC was in a different room).
Now I’m studying in comfort (well, maybe not – I gave up the sofa and lay on the floor with another PC to take notes!) and streaming media across the home network using Windows and DLNA.
I got a bit of a surprise in my email recently, when I saw that someone had nominated this blog for the UK Blog Awards 2017. That’s a nice touch – after 14 years and well over 2000 posts (some that even I now regard as drivel and some that people find useful), it’s exactly the kind of feedback that keeps me going!
The site has no marketing team (just me) and no social media campaign (just my website and my Twitter feed @markwilsonit) – and now it’s down to the public vote, where I’m up against all of the other entrants in the Digital and Technology category, vying for a place on the shortlist of eight blogs.
IT architecture is a funny old game… you see, no-one does it the same way. Sure, we have frameworks and there’s a lot of discussion about “how” to “architect” (is that even a verb?) but there is no defined, broadly adopted process that I’m aware of.
A few years ago, whilst working for a large systems integrator, I was responsible for parts of a technology standardisation programme that was intended to use architecture to drive consistency in the definition, design and delivery of solutions. We had a complicated system of offerings, a technology strategy, policies, architectural principles, a taxonomy, patterns, architecture advice notes, “best practice”, and a governance process with committees. It will probably come as no surprise that there was a fair amount of politics involved – some “not invented here” and some skunkworks projects with divisions defining their own approach because the one from our CTO Office “ivory tower” didn’t fit well.
I’m not writing this to bad-mouth a previous employer – that would be terribly bad form – but I honestly don’t believe that the scenario I’ve described would be significantly different in any large organisation. Politics is a fact of life when working in a large enterprise (and some smaller ones too!). And what we created was, at its heart, sound. I might have preferred a different technical solution to manage it (rather than a clunky portfolio application based on SharePoint lists and workflow) but I still think the principles were solid.
Fast-forward to 2016 and I’m working in a much smaller but rapidly-growing company and I’m, once again, trying to drive standardisation in our solutions (working with my peers in the Architecture Practice). This time I’m taking a much more lightweight approach and, I hope, bringing key stakeholders in our business on the journey too.
We have:
Standards: levels of quality or attainment used as a measure or model. These are what we consider as “normal”.
Principles: fundamental truths or propositions that serve as a foundation for a system or behaviour. These are the rules when designing or architecting a system – our commandments.
We’ve kept these simple – there are a handful of standards and around a dozen principles – but they seem to be serving us well so far.
Then, there’s our reference architecture. The team has defined three levels (there’s a rough sketch in code after this list):
An overall reference model that provides a high-level structure, with domains around which we can build a set of architecture patterns.
The technical architecture – with an “architecture pattern” per domain. At this point, the patterns are still technology-agnostic – for example a domain called “Datacentre Services” might include “Compute”, “Storage”, “Location”, “Scalability” and so on. Although our business is purely built around the Microsoft platform, any number of products could theoretically be aligned to what is really a taxonomy of solution components – the core building blocks for our solutions.
“Design patterns” – this is where products come into play, describing the approach we take to implementing each component, with details of what it is, why it would be used, some examples, one or more diagrams with a pattern for implementing the solution component and some descriptive text including details such as dependencies, options and lifecycle considerations. These patterns adhere to our architectural standards and principles, bringing the whole thing full-circle.
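To illustrate how the three levels relate to one another, here’s a rough sketch in Python. The domain, component and product names are invented for the example (our actual reference architecture is confidential), so treat this purely as a picture of the structure:

```python
from dataclasses import dataclass, field

@dataclass
class DesignPattern:
    """Level 3: a product-specific approach for implementing one component."""
    component: str
    product: str       # products only appear at this level
    rationale: str

@dataclass
class ArchitecturePattern:
    """Level 2: technology-agnostic components within a single domain."""
    domain: str
    components: list[str]
    design_patterns: list[DesignPattern] = field(default_factory=list)

# Level 1 is the reference model: the overall set of domains.
reference_model = [
    ArchitecturePattern(
        domain="Datacentre Services",
        components=["Compute", "Storage", "Location", "Scalability"],
        design_patterns=[
            DesignPattern(component="Compute", product="Hyper-V",
                          rationale="Standard virtualisation platform"),
        ],
    ),
]
```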
It’s fair to say that what we’ve done so far is about technology solutions – there’s still more to be done to include business processes and on towards Enterprise Architecture but we’re heading in the right direction.
I can’t blog the details here – this is my personal blog and our reference architecture is confidential – but I’m pleased with what we’ve created. Defining all of the design patterns is laborious but will be worthwhile. The next stage is to make sure that all of the consulting teams are on board and aligned (during which I’m sure there will be some changes made to reflect the views of the guys who live and breathe technology every day – rather than just “arm waving” and “colouring in” as I do!) – but I’m determined to make this work in a collaborative manner.
Our work will never be complete – there’s a balance to strike between “standardisation” and “innovation” (an often mis-used word, hence the quotation marks). Patterns don’t have to be static – and we have to drive forward and adopt new technologies as they come on stream – not allowing ourselves to stagnate in the comfort of “but that’s how we’ve always done it”. Nevertheless, I’m sure that this approach has merit – if only through reduced risk and improved consistency of delivery.
Where is the hash key? (essential for tweeting!) – Ctrl+Alt+3 creates a lovely # (no, it’s not a “pound” – a pound is £, unless we’re talking about weight, when it’s lb).
And, talking of currency, a Euro symbol (€) is Right Alt+4 (just as in Windows), not on the 2 key (as printed on my keyboard).
The trainer I’ve gone for is a Tacx Vortex and it’s proved pretty easy to set up. Ideally I’d use a spare wheel with a trainer tyre but I don’t have one and my tyres are already looking a bit worn – I may change them when the bike comes out again in the spring, meaning I can afford to wear them out on the trainer first! All I had to do was swap out my quick release skewer for the one that comes with the trainer and my bike was easily mounted.
Calibration was a simple case of using the Tacx Utility app on my iPhone – which finds the trainer via Bluetooth and can also be used for firmware upgrades (it’s available for Android too). All you have to do is cycle up to a given speed and away you go!
I found that the Tacx Utility would always locate my trainer but the Tacx Cycling app was less reliable. Ultimately that’s not a problem because I use the Zwift virtual cycling platform (more on that in a moment) and the Zwift Mobile Link app allows the PC to find my trainer via Wi-Fi and Bluetooth. There is one gotcha though – the second time I used the trainer, I spent a considerable time trying to get things working with Zwift. In the end I found that:
The Tacx apps couldn’t be running at the same time as Zwift Mobile Link.
My phone had a tendency to roam onto another Wi-Fi network (the phone and the PC have to be on the same network for the mobile link to work).
My Bose Soundlink Mini II speakers were also interfering with the Bluetooth connection so if I wanted to listen to music whilst cycling then a cable was needed!
I’m guessing that none of this would be an issue if I switched to ANT+ – which is what my Garmin Edge 810 uses. The trick when using the Garmin is to go into the bike profile and look for sensors. Just remember to turn GPS off when using it on a stationary bike (or else no distance is recorded). Also, remember that:
“[…] when doing an indoor activity or when using devices that do not have GPS capability, the [Garmin Speed and Cadence Sensor] will need to be calibrated manually by entering a custom wheel size within the bike profile to provide accurate speed and distance.” [Garmin Support]
And, talking of ANT+ – one thing I couldn’t work out before I bought my trainer was whether I needed an ANT+ dongle for Zwift. Well, the answer is “no”, as the Zwift Mobile Link app works beautifully as a bridge for my trainer – it’s worth checking the Zwift website to see which trainers work with the platform though (and any other gear that may be required).
I’ll probably write another post about Zwift but, for now, check out:
In the meantime, it’s worth mentioning that I started out riding on a 14 day/50km trial. I was about to switch to a paid subscription but I found out Strava Premium members get 2 months’ Zwift free* and, as that’s half the price of Zwift, I’ve upgraded my Strava for a couple of months instead!
So, with the trainer set up in the garage (though it’s easy to pop the bike off it if we do have some winter sunshine), I can keep my miles up through the winter, which should make the training much, much easier in the spring – that’s the idea anyway!
*It now looks as though the Strava Premium/Zwift offer has been limited to just November and December 2016 – though I’m sure it will come around again!
18 months ago, I joined the team at risual – and I’ve been meaning to write this blog post since, well, since just about the time I joined the team at risual!
In previous roles, I’ve used a mish-mash of communications systems that, despite the claims of Cisco et al., have been anything but unified. For example, using Lync for IM and presence, with a hokey client bridge called CUCILync to pass through presence information etc. from the Cisco Call Manager PBX; then a separate WebEx system for conferencing – but with no ability to host PSTN and VoIP callers in the same conference (each conference had to be set up as either VoIP or PSTN). Quite simply, there was too much friction.
Working for a Microsoft-only consultancy means that I use tools from one vendor – Microsoft. That means Skype for Business (formerly Lync) is my one-stop-shop for instant messaging, presence, desktop sharing, web conferencing, etc. and that it’s integrated with the PSTN and with Exchange (for voicemail and missed call notifications).
The one fly in the ointment is that my company mobile phone is on the EE network. EE’s 4G is fast in urban areas, but the GPRS signal for calls can be pretty poor and the frequencies used mean that signals don’t pass through walls very well, limiting indoor coverage. One of the worst locations is my home, which is my primary work location when I’m not required to be consulting at a customer site.
Here, the flexibility of my Skype for Business setup helps out:
By setting a divert on my company mobile phone to a DDI that routes the call to work, I can let Skype for Business simultaneously ring the Skype for Business clients running on my devices and the PSTN number for my personal phone (which, unsurprisingly, is on a network that works at home!). Then, I can answer as a PSTN call or as a VoIP call (depending on whether I have my headset connected!). If I can’t take the call, a missed call notification or voicemail ends up in my Inbox, including contact details if the caller was in the company directory.
Ever so occasionally, a call is diverted to my personal voicemail rather than transferring back to work and into Exchange, but that only seems to happen if I’m already engaged on the mobile. 90% of the time, my missed calls and voicemails end up in my Inbox – where they are processed in line with my email.
With new options for hosting Skype for Business Online (in the Microsoft cloud, as part of Office 365), it’s shaping up to be a credible alternative to more expensive systems from Cisco, Avaya, etc. and as an end user I’m impressed at the flexibility it offers me. What’s more, based on the conversations I have with my clients, it seems that the “but is Lync really a serious PBX replacement?” stigma is wearing off…