Introduction to Microsoft .NET Standard


A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential, but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me, and it seems fine to reproduce it here for the benefit of others.

This is part 2 of a 2-part series. In part 1, I explained .NET versions and the lightweight, cross-platform Microsoft .NET Core SDK. This post moves on to look at standardising Microsoft .NET.

Where does .NET Standard fit in?

Hopefully, in part 1, I showed that .NET Core is a fast, lightweight set of development tools that runs across platforms. But you might also have heard of .NET Standard.

Over the years, the .NET landscape has become pretty complicated (the discussion on versions in part 1 just scratches the surface) and we should really think of “.NET” as a big marketing umbrella with three main implementations:

  1. Microsoft .NET Framework is really the .NET (Full) Framework – it runs on Windows clients and servers.
  2. Microsoft .NET Core is for server applications with no UI and is ideally suited to microservices, Docker containers, web apps and more. It runs on Windows, macOS and multiple Linux distributions.
  3. Mono (led by Xamarin) is an open source cleanroom implementation of .NET – written without access to the source code. Its developers looked at the interfaces/shape of the framework and created new implementations, which means it has the benefit of running almost anywhere (e.g. in “exotic places” like a Sony PlayStation, or on tiny devices).

That means .NET code can, theoretically, run anywhere. Except that versioning gets in the way… and that’s where .NET Standard fits in.

.NET Standard is a target. Not a platform. Not a runtime. It’s an “agreement”.

If that’s difficult to understand, here’s the analogy that Scott Hanselman used in a recent webcast: Android developers don’t target an Android OS version (e.g. v4.3), they target an API level (e.g. 15). If a new API level is released that provides new capabilities, a developer can move to the new level, but their app might not work on older devices.

Microsoft has something like this in Portable Class Libraries, except that PCLs are a lowest common denominator – the intersection at the centre of a Venn diagram. .NET Standard is about running anywhere: if developers target their code at a standard, they can be sure it will run wherever that standard is supported.

The .NET Platform Support table demonstrates this, indicating the minimum version of each platform that’s needed to implement a given version of .NET Standard.

[Image: .NET Platform Support table, as at July 2017]

For example, if you want to run on .NET Framework 4.5 and .NET Core 1.0, the highest .NET Standard version you can use is .NET Standard 1.1; if you want to run on Windows 8.1, you can’t go above .NET Standard 1.2; and so on. The higher the version, the more APIs are available; the lower the version, the more platforms implement the standard. In short: target a framework for specific Windows APIs; target a .NET Standard for portability and flexibility.
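
To make that concrete, here’s a minimal sketch of what targeting .NET Standard looks like in practice (the exact moniker depends on the version you pick – netstandard1.4 here is just an illustration): a class library declares the standard it targets in its .csproj file, and it will then run on any platform that implements that standard.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard1.4 is implemented by .NET Framework 4.6.1, .NET Core 1.0 and others -->
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>
</Project>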


Introduction to Microsoft .NET Core


A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential, but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me, and it seems fine to reproduce it here for the benefit of others.

This is part 1 of a 2-part series, looking at .NET versions and the Microsoft .NET Core SDK. Tomorrow’s post moves on to look at Microsoft .NET Standard.

.NET versions

The first thing to understand is that .NET versioning is a mess [my words, not Scott’s]:

On a Windows machine with Visual Studio installed, open a developer command prompt and type clrver -all: chances are you’ll see versions 2.0 and 4.0. Then cd C:\Windows\Microsoft.NET\Framework and list the folders: on my Windows 10/Office 2016 machine I can see versions 1.0.3705, 1.1.4322, 2.0.50727 and 4.0.30319.
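
If you want to try it, the steps look something like this (the version folders shown in the comment are from my machine – yours may differ):

rem clrver ships with the Windows SDK/Visual Studio developer tools
clrver -all

rem List the installed .NET Framework CLR folders
cd C:\Windows\Microsoft.NET\Framework
dir /b
rem v1.0.3705  v1.1.4322  v2.0.50727  v4.0.30319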

So where is .NET 3.0? Well, 3.0 and 3.5 were the Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF) releases, running on v2.0 of the Common Language Runtime (CLR). They are actually part of Windows!

All of this makes developing for the .NET Framework tricky. There is no true side-by-side running for .NET 4.x – the 4.x releases are in-place updates at the CLR level – so if your app is developed for 4.6 and others on the same server target 4.0, you may have a hard time convincing IT operations people to update, and chances are you’ll need to drop down to 4.0.

That (together with cross-platform capabilities) is where .NET Core comes in.

Creating simple applications with the .NET Core SDK

.NET Core lives in C:\Program Files\dotnet and is implemented around a command-line driver (dotnet.exe). With a few simple commands we can create and run a console application:

dotnet new console

dotnet restore

dotnet run

That’s three steps to Hello World!
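
For the curious, the Program.cs that dotnet new console generates is about as small as a C# program gets (this is a sketch of the 1.x template output – the namespace will reflect your project folder’s name, and HelloWorld here is hypothetical):

using System;

namespace HelloWorld // hypothetical project name
{
    class Program
    {
        static void Main(string[] args)
        {
            // The template does just one thing: print a greeting
            Console.WriteLine("Hello World!");
        }
    }
}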

It’s just as simple for a web application:

dotnet new web

dotnet restore

dotnet run

Then browse to http://localhost:5000.

One difference in the web app is the presence of the following line in the .csproj file:

<Project Sdk="Microsoft.NET.Sdk.Web">

This signifies a “meta package” – a package of packages that avoids explicitly listing multiple package references (for a cleaner project file).

Unlike with the (full) .NET Framework, we can specify the version of .NET Core to use in the .csproj file, meaning that multiple versions of the SDK can be installed side by side and used with whatever libraries are needed:

<TargetFramework>netcoreapp1.1</TargetFramework>
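
Putting those two things together, a newly-created web app’s .csproj might look something like this (a sketch – package names and versions varied across .NET Core 1.x releases, so treat the specifics as illustrative):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- One meta package reference in place of dozens of individual packages -->
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.2" />
  </ItemGroup>
</Project>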

We can also add a reference to another project with dotnet add reference path-to-project.csproj. Adding a package with dotnet add package packagename not only adds the reference but also restores it – downloading the package and checking compatibility. Meanwhile, dotnet new sln creates a solution file – something that would be complex to do by hand.
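
By way of a worked example (MyApp and MyLibrary are made-up names, purely for illustration), a small solution can be assembled entirely from the command line:

dotnet new sln
dotnet new classlib -o MyLibrary
dotnet new console -o MyApp
dotnet sln add MyApp/MyApp.csproj MyLibrary/MyLibrary.csproj
dotnet add MyApp/MyApp.csproj reference MyLibrary/MyLibrary.csproj
dotnet add MyApp/MyApp.csproj package Newtonsoft.Json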

Why revert to a CLI?

So we’ve seen that you can be a successful .NET programmer without a visual editor: we can build an entire project from the command line. .NET Core is not just about involving the Linux community with a cross-platform version of .NET; it’s also about speed. Scott used the analogy of the CLI as a “2D” version of the tooling – somewhere to validate ideas quickly before working on the full “3D” GUI version.

In the background, the .NET Core SDK uses the NuGet package manager (dotnet add package), MSBuild (dotnet build) and VSTest (dotnet new xunit and dotnet test).
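
For example, a quick test loop (a sketch, using the xunit template) exercises MSBuild and VSTest without Visual Studio in sight:

dotnet new xunit -o MyTests
cd MyTests
dotnet restore
dotnet test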

Visual Studio gives a better user interface, but we can develop the basics very quickly using a command line and then get the best of both worlds – both CLI and GUI. What .NET Core does is lower the barrier to entry: getting up and running writing Microsoft .NET code in just a few minutes, on Windows, macOS or Linux.


Short takes: Amazon Web Services 101, Adobe Marketing Cloud and Milton Keynes Geek Night (#MKGN)


What a crazy week. On top of a busy work schedule, I’ve also found myself at some tech events that really deserve a full write-up but, for now, will have to make do with a summary…

Amazon Web Services 101

One of the events I attended this week was a “lunch and learn” session to give an introduction/overview of Amazon Web Services – kind of like a breakfast briefing, but at a more sociable hour of the day!

I already blogged about Amazon’s reference architecture for utility computing, but I also wanted to mention Ryan Shuttleworth’s (@RyanAWS) explanation of how Amazon Web Services (AWS) came about.

Contrary to popular belief, AWS didn’t grow out of spare capacity in the retail business; it grew out of building a service-oriented infrastructure for a scalable development environment – initially to provide development services to internal teams, and then to expose the Amazon catalogue as a web service. Over time, Amazon found that developers were hungry for more, and it moved towards the AWS mission to:

“Enable business and developers to use web services* to build scalable, sophisticated applications”

*What people now call “the cloud”

In fact, far from being the catalyst for AWS, Amazon’s retail business is just another AWS customer.

Adobe Marketing Cloud

Most people will be familiar with Adobe for design and print products, whether that’s Photoshop, Lightroom, or a humble PDF reader. I was invited to attend an event earlier this week to hear about the Adobe Marketing Cloud, which aims to become for marketers what the Creative Suite has become for design professionals. Whilst the use of “cloud” grates with me as a blatant abuse of a buzzword (if I’m generous, I suppose it is a SaaS suite of products…), Adobe has been acquiring companies (I think I heard $3bn mentioned as the total cost) and integrating technology to create a set of analytics, social, advertising, targeting and web experience management solutions with a real-time dashboard.

Milton Keynes Geek Night

[Photo: MK Geek Night #mkgn]

The third event I attended this week was the quarterly Milton Keynes Geek Night (this was the third one) – and this did not disappoint – it was well up to the standard I’ve come to expect from David Hughes (@DavidHughes) and Richard Wiggins (@RichardWiggins).

The evening kicked off with Dave Addey (@DaveAddey), of UK Train Times app fame, talking about what makes a good mobile app. Starting out from a 2010 Sunday Times article about the app gold rush, Dave explained why few people become smartphone app millionaires – and how to assess whether your idea stands a chance:

  • Is your mobile app idea really a good idea? (i.e. is it universal, is it international, and does it have lasting appeal – or, put bluntly, will you sell enough copies to make it worthwhile?)
  • Is it suitable to become a mobile app? (will it fill “dead time”, does it know where you go and use that to add value, is it “always there”, does it have ongoing use)
  • And how should you make it? (cross platform framework, native app, HTML, or hybrid?)

Dave’s talk warrants a blog post of its own – and hopefully I’ll return to the subject one day – but, for now, those are the highlights.

Next up were the 5 minute talks, with Matt Clements (@MattClementsUK) talking about empowering business with APIs to:

  1. Increase sales by driving traffic.
  2. Improve your brand awareness by working with others.
  3. Increase innovation, by allowing others to interface with your platform.
  4. Create partnerships, with symbiotic relationships to develop complementary products.
  5. Create satisfied customers – by focusing on the part you’re good at and letting others build on it with their expertise.

Then Adam Onishi (@OnishiWeb) gave a personal, and honest, talk about burnout, its effects, recognising the problem, and learning to deal with it.

And Jo Lankester (@JoSnow) talked about real-world responsive design and the lessons she has learned:

  1. Improve the process – collaborate from the outset.
  2. Don’t forget who you’re designing for – consider the users, in which context they will use a feature, and how they will use it.
  3. Learn to let go – not everything can be perfect.

Then, there were the usual one-minute slots from sponsors and others with a quick message, before the second keynote – from Aral Balkan (@Aral), talking about the high cost of free.

In an entertaining talk, loaded with sarcasm, profanity (used to good effect) but, most of all, intelligent insight, Aral explained the various business models we follow in the world of consumer technology:

  • Free – with consequential loss of privacy.
  • Paid – with consequential loss of audience (i.e. niche) and user experience.
  • Open – with consequential loss of good user experience, and a propensity to allow OEMs and operators to mess things up.

This was another talk that warrants a blog post of its own (although I’m told the session audio was recorded – so hopefully I’ll be able to put up a link soon) but Aral moved on to talk about a real alternative with mainstream consumer appeal that happens to be open. To achieve this, Aral says we need a revolution in open source culture in that open source and great user experience do not have to be mutually exclusive. We must bring design thinking to open source. Design-led open source.  Without this, Aral says, we don’t have an alternative to Twitter, Facebook, whatever-the-next-big-platform-is doing what they want to with our data. And that alternative needs to be open. Because if it’s just free, the cost is too high.

The next MK Geek Night will be on 21 March and the date is already in my diary (I’m just waiting for the Eventbrite notice!).

Photo credit: David Hughes, on Flickr. Used with permission.

Storing Arduino code in the cloud


Earlier this week I blogged about some of the stuff I’d been doing with Arduino and I mentioned that my code is up on GitHub. What I didn’t mention is how it got there…

I use the Arduino IDE on a netbook running Ubuntu Linux (other development tools are available) and, a few weeks ago, I stumbled across an interesting-sounding hack to store sketches (Arduino code) in the cloud. The tool that makes this happen is David Vondle’s Upload and Retrieve Source project. There’s a good description in Dave’s blog post about the project, which clears up the parts I struggled with (such as the location of the gistCredentials.txt file, used to store your GitHub credentials – it’s in the ~/.arduino folder on my system – and the fact that your username also needs to be included in a comment inside the sketch). Of course, you’ll need to create an account at GitHub first, but you don’t need to know anything about the various git commands to manage source code once you have created the account.

The only downside I’ve found (aside from plain text passwords) is that there is only one project for each Arduino – if you re-use the Arduino with another circuit, the new sketch will be stored in the same gist (although version control will let you retrieve old sketches, if you know which is which).


Short takes: the rise of the personal cloud; what’s in an app; and some thoughts on Oracle


A few highlights from my week that didn’t grow into blog posts of their own…

Oracle: complete, open and integrated

I was at an Oracle event earlier this week when I heard the following comment that amused me somewhat (non-attributed to protect the not-so-innocent):

“Oracle likes to say they are complete, open and integrated – and they are:

  • Complete – as in ‘we have a long price list’.
  • Open – as in ‘we’re not arrogant enough to think you buy everything from us’.
  • Integrated – as in ‘if we own both sides of the connection we’ll sell you the integration’.”

I don’t have enough experience of working with Oracle to know how true that is, but it certainly fits the impression I have of the company… I wonder what the Microsoft equivalent would be…

The rise of the Personal Cloud

I’ve been catching up with reading the paper copy of Computing that arrives every fortnight (and yes, I do prefer the dead tree edition – I wouldn’t generally read the online content without it). One of the main features on 22 March was about the rise of the personal cloud – a contentious topic among some, but one to ignore at your peril, as I highlighted in a recent post.

One quote I particularly liked from the article was this one:

“The personal cloud isn’t so much the death of the PC as its demotion. The PC has become just another item in a growing arsenal of access devices.”

Now all we need is for a few more IT departments to wake up to this, and architect their enterprise to deliver device-agnostic services…

What’s app?

In another Computing article, I was reading about some of the technologies that Barclays is implementing and it was interesting to read COO Shaygan Kheradpir’s view on apps:

“Many […] tasks that happen on the front-line are […] app-oriented […].

And what are apps? They are deep and narrow. They’re not like PC applications, which are broad and shallow. You want apps to do one, often complex, task.”

Sounds like Unix to me! (but also pretty much nails why mobile apps work so well in small packages.)

Resources to help get started developing Windows Phone apps


Yesterday, I tweeted to see if anyone had any resources to help a non-programmer create a Windows Phone app (I haven’t written any code in anger since I graduated 18 years ago, although I used to write BASIC, 68000 assembler, Modula-2, Turbo Pascal, COBOL, C, and C++):

“Are there any good, idiot-proof, guides for people like me who haven’t written code for 20 years to learn how to create a Windows Phone app?” – Mark Wilson (@markwilsonit)

I got quite a few responses asking me to share the findings so here’s what’s been suggested so far:

Now to find some time to make my application idea a reality…

Writing SOLID code


I’m not a coder: the last time I wrote any functioning code was at Uni’ in the early 1990s.  I can adapt other people’s scripts and, given enough time, write my own. I can knock up a bit of XHTML and CSS but writing real applications?  Nope.  Not my game.

Every now and again though, I come up against development topics and I do find them interesting, if a little baffling. I guess that’s how devs feel when I talk infrastructure.

From 2004 to 2005, I worked for a company called Conchango (now part of EMC Consulting) – I had a great time there, but the company’s focus had shifted from infrastructure to web design and Java/.NET development (which, by the way, they were rather good at – with an impressive client list). Whilst I was there, it seemed that all I heard about was “agile” or “XP” (extreme programming… nothing to do with Windows XP) – approaches that were taking the programming world by storm at the time.

Then, a few weeks ago, I had a similar experience at an Edge User Group meeting, where Ian Cooper (a C# MVP) was talking about the SOLID principles.  Not being a coder, most of this went right over my head (I’m sure it would make perfect sense to my old mates from Uni’ who did follow a programming route, like Steve Knight), but it was interesting nonetheless – and, in common with some of the stuff I heard about in my days at Conchango, I’m sure the basic principles of what can go wrong with software projects could be applied to my infrastructure projects (with a little tweaking perhaps):

  • Rigidity – difficult to change.
  • Fragility – change one thing, breaks something else.
  • Immobility – e.g. one part of the solution is welded into the application.
  • Viscosity – wading through treacle, maintaining someone else’s software.
  • Needless complexity – why did we do it that way?
  • Needless repetition – Ctrl+C Ctrl+V is not an ideal programming paradigm!
  • Opacity – it made sense to the original developer… but not now!

Because of these issues, maintenance quickly becomes a problem in software development, and Robert C Martin (@unclebobmartin – a leading figure in the agile software development movement that grew out of extreme programming techniques) codified the SOLID principles in his book Agile Principles, Patterns and Practices in C# (there is also a Hanselminutes podcast where Scott Hanselman and “Uncle Bob” Martin discuss the SOLID principles, and a follow-up where they discuss the relevance of SOLID). These principles are:

  • Single responsibility principle
  • Open/closed principle
  • Liskov substitution principle
  • Interface segregation principle
  • Dependency inversion principle

This is the point where my brain starts to hurt, but please bear with me as I attempt to explain the rest of the contents of Ian’s presentation (or listen to the Hanselminutes podcast)!

The single responsibility principle

This principle states that a class should have only one reason to change.

Each responsibility is an axis of change and, when the requirements change, that change will be manifested through a change in responsibility among the classes. If a class assumes more than one responsibility, that class will have more than one reason to change – hence the single responsibility principle.

Applying this principle gives a developer a single concept to code for (also known as separation of concerns) so, for example, instead of having a GUI to display a purchase order, this may be separated into GUI, controller, and purchase order: the controller’s function is to get the data from the appropriate place, the GUI is only concerned with displaying that data, and the purchase order is not concerned with how it is displayed.
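
As a hedged C# sketch (all of the type names here are mine, purely for illustration), that separation might look like this:

// The purchase order holds data only – it knows nothing about display or storage
public class PurchaseOrder
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// The controller's single responsibility: get the data from the appropriate place
public class PurchaseOrderController
{
    public PurchaseOrder Get(int id)
    {
        return new PurchaseOrder { Id = id, Total = 0m }; // e.g. load from a database instead
    }
}

// The GUI's single responsibility: display the data
public class PurchaseOrderView
{
    public void Render(PurchaseOrder order)
    {
        System.Console.WriteLine("Order " + order.Id + ": " + order.Total);
    }
}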

The open/closed principle

This principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.

The thinking here is that, when a single change to a program results in a cascade of changes to dependent modules, the design becomes rigid but, if the open/closed principle is applied well, further changes are achieved by adding new code, not by changing old code that already works.

Some may think that it’s impossible to be both open to extension and closed to change: the keys here are abstraction and composition.

For example, a financial model may have different rounding rules for different markets.  This can be implemented with local rounding rules rather than changing the model each time the model is applied to a different market.
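
In hedged C# terms (the names are invented for illustration), the rounding rule becomes an abstraction that the model composes, so supporting a new market means adding a class rather than editing code that already works:

// The model is closed for modification: it never changes when a market is added...
public interface IRoundingRule
{
    decimal Round(decimal value);
}

// ...but open for extension: each market supplies its own rule
public class UkRoundingRule : IRoundingRule
{
    public decimal Round(decimal value)
    {
        return System.Math.Round(value, 2, System.MidpointRounding.AwayFromZero);
    }
}

public class FinancialModel
{
    private readonly IRoundingRule _rounding;

    public FinancialModel(IRoundingRule rounding)
    {
        _rounding = rounding;
    }

    public decimal Price(decimal raw)
    {
        return _rounding.Round(raw);
    }
}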

The Liskov substitution principle

This principle (attributed to Barbara Liskov) states that subtypes must be substitutable for their base types. Unfortunately, attempts to fix Liskov substitution problems often result in violations of the open/closed principle but, in essence, the validity of a model can be expressed only in terms of its clients. For example, if there is a type called Bird (which has wings and can fly), what happens to Penguin and Emu when an attempt is made to implement the Fly method? We need to be able to call Fly for a Penguin and handle it appropriately, so there are effectively two solutions: change the type hierarchy; or refactor the type to express it differently – Fly may become Move, or we could have a flightless bird type and a running bird type.
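
A hedged C# sketch of that second solution (the types are invented for illustration): Fly becomes Move, so every subtype can honour the base type’s contract:

// Express the behaviour that all birds really share...
public abstract class Bird
{
    public abstract void Move();
}

public class Swallow : Bird
{
    public override void Move() { /* fly */ }
}

// ...and a Penguin remains substitutable for Bird: Move() waddles or swims
public class Penguin : Bird
{
    public override void Move() { /* waddle or swim */ }
}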

The interface segregation principle

The interface segregation principle says that clients should not be forced to depend on methods they do not use.

Effectively, this means that clients should not be affected by changes that don’t concern them (fat types couple disparate parts of the application). In essence, each interface should have the smallest set of features that meets its clients’ requirements, which may mean implementing multiple interfaces within a class.
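
A hedged C# sketch (again, the names are mine): rather than one fat interface, each client depends only on the small interface it actually uses:

// Small, client-focused interfaces instead of one fat "machine" type
public interface IPrinter
{
    void Print(string document);
}

public interface IScanner
{
    void Scan(string document);
}

// A class can still implement several small interfaces...
public class MultiFunctionDevice : IPrinter, IScanner
{
    public void Print(string document) { /* ... */ }
    public void Scan(string document) { /* ... */ }
}

// ...but a client that only prints is unaffected by changes to scanning
public class ReportGenerator
{
    private readonly IPrinter _printer;

    public ReportGenerator(IPrinter printer)
    {
        _printer = printer;
    }

    public void Run()
    {
        _printer.Print("report");
    }
}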

The dependency inversion principle

The dependency inversion principle states that high-level modules should not depend on low-level modules – both should depend on abstractions. In addition, abstractions should not depend on details; details should depend upon abstractions. This is sometimes known as the Hollywood principle (“don’t call us, we’ll call you”). So, where is the inversion? If a class structure is considered as a tree, with the classes at the leaves and the abstractions at the trunk, we should depend on the trunk, not the leaves – effectively inverting the tree and grasping it by the roots (inversion of control).
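
One final hedged C# sketch (invented names again): the high-level module and the low-level detail both depend on an abstraction, and the detail is handed in rather than created internally:

// The abstraction sits at the trunk – both sides depend on it
public interface IMessageSender
{
    void Send(string message);
}

// The low-level detail depends on the abstraction
public class EmailSender : IMessageSender
{
    public void Send(string message) { /* send an email */ }
}

// The high-level module also depends only on the abstraction
public class OrderProcessor
{
    private readonly IMessageSender _sender;

    public OrderProcessor(IMessageSender sender)
    {
        _sender = sender; // the detail is injected: "don't call us, we'll call you"
    }

    public void Complete()
    {
        _sender.Send("Order complete");
    }
}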

Summing it up

I hope I’ve understood Ian’s presentation well enough to do it justice here but, to sum up: the SOLID principles help to match computer science concepts such as cohesion and polymorphism to actual development practice. Or, for dummies like me: write your code according to these principles and you can avoid problems later, when the inevitable changes need to be made.

As an infrastructure guy, I don’t fully understand the details, but I can grasp the high level concepts and see that it’s not really so different to my world.  Look at a desktop delivery architecture for a large enterprise:

  • We abstract data from user objects, applications, operating system and hardware (which is possibly virtualised) and this gives us flexibility to extend (or substitute) parts of the infrastructure (well, that’s the idea anyway).
  • We only include those components that we actually want to use (no games, or other consumer functionality). 
  • We construct servers with redundancy then build layers of software to construct service platforms (which is a kind of dependency inversion). 

OK, so the infrastructure links may seem tenuous, but the principle is sound.  Sometimes it’s good for us infrastructure guys to take a look at how developers work…