“dotnet: command not found” after installing the Microsoft .NET Core SDK on a Mac

Whilst installing the Microsoft .NET Core SDK on my MacBook last week, I found that the instructions on the Microsoft website were not quite complete.

Microsoft tells us to run a few commands to install OpenSSL:

  1. Install Homebrew (it was already on my system)
  2. Then run:

brew update
brew install openssl
mkdir -p /usr/local/lib
ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/
ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/
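
To check that the links are in place before going any further, a quick listing (standard ls, nothing .NET-specific) should show them pointing at the Homebrew-installed libraries:

ls -l /usr/local/lib/libcrypto.1.0.0.dylib /usr/local/lib/libssl.1.0.0.dylib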

After this, you should be able to download and install the .NET Core SDK package (version 1.0.4 seems to be the latest version of the SDK at the time of writing, which includes .NET Core 1.0 and 1.1).

Then, in theory, running the dotnet command should be all that’s required but, for me, it resulted in an error:

-bash: dotnet: command not found

The fix, it seems, is to create another symbolic link:

ln -s /usr/local/share/dotnet/dotnet /usr/local/bin/

After that, dotnet ran as expected.
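
For anyone who wants to confirm the fix, a couple of quick checks will show where the command now resolves from and which SDK version is installed:

which dotnet      # should return /usr/local/bin/dotnet
dotnet --version  # e.g. 1.0.4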

For reference, my MacBook is running MacOS Sierra version 10.12.5 (16F73).

Introduction to Microsoft .NET Standard

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 2 of a 2-part series. In part 1, I explained .NET versions and the lightweight, cross-platform Microsoft .NET Core SDK. This post moves on to look at standardising Microsoft .NET.

Where does .NET Standard fit in?

Hopefully, in part 1, I showed how .NET Core is a fast, lightweight set of development tools that runs across platforms. But you might also have heard of .NET Standard.

Over the years, the .NET landscape has become pretty complicated (the discussion on versions in part 1 just scratches the surface) and we should really think of “.NET” as a big umbrella for marketing purposes, with three main instances:

  1. Microsoft .NET Framework is really the .NET (Full) Framework – it runs on Windows clients and servers.
  2. Microsoft .NET Core is for server applications with no UI and is ideally suited to microservices, Docker containers, web apps and more. It runs on Windows, MacOS and multiple Linux distributions.
  3. Mono (led by Xamarin) is an open source cleanroom implementation of .NET – written without access to the code. Its developers looked at the interfaces/shape and made new versions, which means it has the benefit of running anywhere (e.g. in “exotic places” like a Sony PlayStation, or on tiny devices).

That means .NET code can, theoretically, run anywhere. Except that the versioning gets in the way… and that’s where .NET Standard fits in.

.NET Standard is a target. Not a platform. Not a runtime. It’s an “agreement”.

If that’s difficult to understand, here’s the analogy that Scott Hanselman used in a recent webcast: Android developers don’t target an Android OS version (e.g. v4.3); they target an API level (e.g. 15). If a new API level is released that provides new capabilities, a developer can move to it, but their app might not work on older devices.

Microsoft has something like this in Portable Class Libraries, except they are the lowest common denominator – the centre of a Venn diagram. .NET Standard is about running anywhere so that if a developer targets their application to a standard, they can be sure it will run wherever that standard is supported.

The .NET Platform Support table demonstrates this and indicates the minimum version of the platform that’s needed to implement that .NET Standard.

[.NET Platform Support table, as at July 2017]

For example, if you want to run on .NET Framework 4.5 and .NET Core 1.0, the highest .NET Standard version you can use is .NET Standard 1.1; if you want to run on Windows 8.1 you can’t go above .NET Standard 1.2; and so on. The higher the version, the more APIs are available; the lower the version, the more platforms implement the standard. Developers target a framework for specific Windows APIs and target a .NET Standard version for portability and flexibility.
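
As a concrete sketch of that first example (the project file below is illustrative, not the exact output of any particular tool), a class library that needs to run on both .NET Framework 4.5 and .NET Core 1.0 would target .NET Standard 1.1 in its .csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard1.1 is the highest standard implemented by both .NET Framework 4.5 and .NET Core 1.0 -->
    <TargetFramework>netstandard1.1</TargetFramework>
  </PropertyGroup>
</Project>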

Introduction to Microsoft .NET Core

A few months ago, I was (along with about 70 other rMVPs) privileged to be on a Skype call with Scott Hanselman (@shanselman) as he gave an overview of Microsoft .NET Core. Some of what was discussed was confidential but the general overview of how .NET Core fits with the full .NET Framework and with Mono/Xamarin was a great education for a non-developer like me and it seems fine to reproduce it here for the benefit of others.

This is part 1 of a 2-part series, looking at .NET versions and the Microsoft .NET Core SDK. Tomorrow’s post moves on to look at Microsoft .NET Standard.

.NET versions

The first thing to understand is that .NET versioning is a mess [my words, not Scott’s]:

On a Windows machine with Visual Studio installed, open a developer command prompt and take a look (see the commands below). Chances are clrver will show versions 2.0 and 4.0 and, on my Windows 10/Office 2016 machine, the Framework directory contains versions 1.0.3705, 1.1.4322, 2.0.50727 and 4.0.30319.
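
For example (clrver ships with the Visual Studio developer tools, so it needs a developer command prompt; the directory listing works anywhere):

clrver -all
cd C:\Windows\Microsoft.NET\Framework
dir /b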

So where is .NET 3.0? Well, 3.0 and 3.5 were really Windows Presentation Foundation (WPF) and Windows Communication Foundation (WCF), running on v2.0 of the Common Language Runtime (CLR). They are actually part of Windows!

All of this makes developing for the .NET Framework tricky. There’s no true side-by-side running for .NET 4.x – versioning is at the CLR level – so if your app is developed for 4.6 and others on the same servers are built for 4.0, you may have a hard time convincing IT operations people to update, and chances are you’ll need to drop down to 4.0.

That (together with cross-platform capabilities) is where .NET Core comes in.

Creating simple applications with the .NET Core SDK

.NET Core is installed in C:\Program Files\dotnet and is implemented as a driver (dotnet.dll). With a few simple commands we can create and run a console application:

dotnet new console

dotnet restore

dotnet run

That’s three steps to Hello World!

It’s just as simple for a web application:

dotnet new web

dotnet restore

dotnet run

Then browse to http://localhost:5000.
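
Or, to check it without a browser, from a second terminal:

curl http://localhost:5000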

One difference in the web app is the presence of the

<Project Sdk="Microsoft.NET.Sdk.Web">

line in the .csproj file. This signifies a “meta package” – a package of packages – which avoids explicitly listing multiple package references (for a cleaner project file).

Unlike in the (full) .NET Framework, we can specify the version of .NET Core to use in the .csproj file, meaning that multiple versions of the SDK can be used with whatever variety of libraries are needed:

<TargetFramework>netcoreapp1.1</TargetFramework>
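
Putting those two elements together, a minimal web app project file might look something like this (a sketch – the exact packages and versions generated by the tooling may differ):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- the "meta package": one reference that pulls in the ASP.NET Core stack -->
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.2" />
  </ItemGroup>
</Project>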

We can also add references to libraries with dotnet add reference libraryname. Adding a package with dotnet add package packagename not only adds the reference but also restores, downloads and checks compatibility. Meanwhile, dotnet new sln creates a solution file – something that would be complex to do manually.
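
As an illustration (the project names here are hypothetical), a small multi-project setup can be built up entirely from the command line:

dotnet new sln                 # create an empty solution file
dotnet new console -o MyApp    # a console application
dotnet new classlib -o MyLib   # a class library
dotnet sln add MyApp/MyApp.csproj MyLib/MyLib.csproj
dotnet add MyApp/MyApp.csproj reference MyLib/MyLib.csproj
dotnet add MyApp/MyApp.csproj package Newtonsoft.Json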

Why revert to a CLI?

So we’ve seen that you can be a successful .NET programmer without a visual editor – we can build an entire project system from the command line. .NET Core is not just about involving the Linux community with a cross-platform version of .NET; it’s also about speed. Scott used the analogy of the CLI being a “2D” version to validate before working on the full “3D” GUI version.

Under the covers, the .NET Core SDK uses the NuGet package manager (dotnet add package), MSBuild (dotnet build) and VSTest (dotnet new xunit and dotnet test).
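
For example, the test workflow looks like this (a sketch – the xunit template generates a sample test that should pass straight away):

dotnet new xunit -o MyTests   # create an xUnit test project
cd MyTests
dotnet restore                # pull down xunit and the VSTest-based runner
dotnet test                   # build and run the tests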

Visual Studio gives a better user interface but we can develop the basics very quickly using a command line and then get the best of both worlds – both CLI and GUI. What .NET Core does is lower the barrier to entry – getting up and running writing Microsoft .NET code in just a few minutes, on Windows, MacOS or Linux.

Bose SoundLink Mini II speakers turn off at low volume levels

Listening to music a couple of nights ago (streamed from Spotify on my MacBook, though I’m not sure how relevant that is), I found that my Bose SoundLink Mini II speakers kept turning off after 5 minutes (running on battery power, connected with a cable). Spotify kept playing but the sound stopped until I turned the speakers off/on again.

I hadn’t seen this issue before – and I was using the same 3.5mm AUX cable setup that I often use with our small TV (to improve its sound quality), so I hit the interwebs to see what I could find…

Some hunting around suggested that the issue may have been the low volume level on my MacBook (I was in the room directly under my youngest son’s bedroom, after bedtime).

“The Speaker does have a power-save mode, but it will generally only enter this when no audio is detected. The most likely explanation here is that the speaker is getting a very weak signal […] and boosting it enormously with its internal amp.

[…]
If you are using a headphone jack or similar […], try increasing the […] output level while turning the speaker’s volume down. This should provide a stronger signal on the AUX port which would prevent the speaker from sleeping automatically.”

Sure enough, increasing the volume on the MacBook to around level 4-5 and decreasing the volume on the speakers seems to stop the power-down. Indeed, to make sure this was the case, I turned the MacBook’s volume back down to 1 and waited for the music to cut out… then, when it did, I just increased the volume to around level 4-5 again and the speakers came alive!

On a related note… I stumbled across these Spotify tips and tricks that might be useful…

Reducing the time taken for a Garmin Edge 25 to find a satellite signal

Regular readers will know that cycling is one of my hobbies (with my eldest son looking like he may follow in the same direction…).

I have a Garmin Edge 810 cycle computer to use on my big rides but for commutes (e.g. on the Brompton) I use a smaller unit – an Edge 25. The Edge 25 is a cracking little unit, with all the basic functionality I’d expect and Bluetooth connectivity, but one of the issues I’ve found is that it can be slow to pick up a GPS signal.

I think I may have made a breakthrough though, thanks to a comment in this review from Average Joe Cyclist:

“Satellite Acquisition on the Garmin Edge 25
The Garmin Edge 25 can connect to both GPS and GLONASS satellites. As it has more satellites to choose from, it can lock in faster. I know that Garmin Edge bike computers with only GPS can be frustratingly slow to lock in, so this is important. It was a very happy surprise to find GLONASS on such a relatively cheap bike computer as the Garmin Edge 25. This is obviously a huge selling point for this tiny bike computer.

Note: GPS and GLONASS are different kinds of satellite systems – the GPS was developed by the USA, and the GLONASS is Russian.”

Sure enough, I checked my settings and GLONASS was off. So I turned it on and limited testing suggests that it may now be faster to pick up a satellite. Time will tell, as will experience with the second Edge 25 that’s in the post for my son to use…

Some more reading suggests that using GLONASS and GPS together may affect battery life but could also improve accuracy. If satellite lock-in is still slow, then a master reset may be required. To reset the Edge 25:

  1. Power on the device whilst holding the two right-side buttons down.
  2. Release the top button when you hear the first beep.
  3. Release the bottom button when you hear the second beep.

[Garmin Edge software version, as viewed in Garmin Connect]

I also upgraded the firmware (unfortunately breaking the troubleshooting rule of only changing one thing at a time…), which got me thinking: “what firmware did I have before?”. It seems the way to tell is to view an activity in Garmin Connect, where the details of the device used to upload the data are shown on the right-hand side.

Short takes: SSH, custom ports, root and Synology NASs

This blog has been much neglected of late… I’d like to get more time to write – I have literally hundreds of part-written posts, some of which are now just collections of links for me to unpick…

In the meantime, a couple of snippets that may be useless, or may help someone one day…

Using SSH with a custom port number

My Synology NAS complains about poor security if I leave SSH enabled on port 22. It’s fine if I change it to another port though (security by obscurity!). Connecting then needs a bit more work, as it’s ssh user@ipaddress -p portnumber (found via the askubuntu forums).
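
To save typing the flags every time, an entry in ~/.ssh/config does the same job (the host alias, address and port number below are examples):

Host nas
    HostName 192.168.1.10
    Port 2222
    User admin

After that, ssh nas is all that’s needed.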

Logging on to a Synology NAS from SSH as root

On a related topic, I recently needed to SSH to my NAS as root (not admin). ssh root@ipaddress -p portnumber wasn’t authenticating correctly and then I found Synology’s advice on how to login to DSM with root permission via SSH/Telnet. It seems I have to first log on as admin, then sudo -i to elevate to root.
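
So the working sequence looks something like this (address and port number are examples):

ssh admin@192.168.1.10 -p 2222   # log on as admin first
sudo -i                          # then elevate to root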

Synology Hyper Backup and DSM update failures

I have a Synology DS916+ NAS and, for the last 9 months or so, I’ve been using it to back up my photos to Microsoft Azure. I’ve realised that they are being backed up in a format that’s unique to Synology’s Hyper Backup program, so I should probably see if there is an alternative that backs up the files in their native format. More worryingly though, this afternoon I noticed that backups had been failing for a few days. The logs weren’t much help (no detailed information) and a search on the ‘net didn’t turn much up either. For reference, this was the (very high-level) information in the logs when viewed in the Hyper Backup GUI:

Information 2017/07/08 03:00:02 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Backup task started.
Error 2017/07/08 03:00:33 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Exception occured while backing up data.
Error 2017/07/08 03:00:36 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Failed to backup data.
Error 2017/07/08 03:00:36 SYSTEM [Azure Blob] [Backup Photos to Microsoft Azure] Failed to run backup task.

(Since then, I’ve found how to view detailed backup logs on a Synology NAS, thanks to a blog post by Jonathan Mumm, though in this case, the logs didn’t shine much of a light on the problem for me.)

I wondered if there were any DSM updates available that might fix things but, when I checked for updates, I got a message to say “Insufficient capacity for update. The system partition requires at least 400MB”. Googling suggested lots of manual file deletion and I was sure this was just a buildup of temp files (maybe to do with the failed backup), so I decided to reboot. After all, what do you do when a computer isn’t working as expected? Turn it off and on again!

After rebooting, attempts to update no longer produced an error (simply confirming that I’m up-to-date with DSM 6.1.2-15132) and the backup is now running nicely (it will take a few hours to complete as I added a few months’ worth of iPhone photos to the NAS earlier in the week, around about the time the backups started failing…)

VPN, DirectAccess or Windows 10 auto-trigger VPN profile?

On a recent consulting gig, I found myself advising a customer who was keen to deploy Microsoft DirectAccess (DA) in place of their legacy virtual private network (VPN) solution. As a DirectAccess user (who used Cisco AnyConnect VPN at my last place of work), I have to say the convenience of being always connected to the company network without any interaction on my part is awesome. I’m sure the IT guys like that they can always access my PC for management purposes too…

The trouble with DirectAccess is that it doesn’t seem to have a published roadmap. So, should I really be advising my customers to use a technology that doesn’t appear to be under active development? First of all, it’s not been deprecated: DirectAccess is still a supported feature in Windows Server 2016 (it’s part of the Remote Access server role) – so it still has a future. Annoyingly, it’s not a supported workload on Azure (leading to on-premises deployments), but we can’t have everything…

Now for the question of whether to use DA or a traditional VPN. Well, Microsoft MVP Richard Hicks (@RichardHicks) has written a fantastic blog post that goes through this in detail. Rather than paraphrasing, I’ll suggest that you go and read Richard’s post on DirectAccess vs. VPN.

But that’s not the whole picture… you see, Windows 10 has a new auto-triggered VPN profile capability that I’m sure will, in time, replace DirectAccess. So, where does that fit in?

Great response there from Richard. Then my colleague Steve Harwood (@steveeh) joined in, advising that Auto VPN still requires a VPN profile and infrastructure but gets initiated when a Universal Windows Platform (UWP) or desktop app is started or stopped; DirectAccess, meanwhile, has the benefit of being always-on, avoiding the need to expose management/compliance systems publicly.

Actually, it gets a bit better with the Windows 10 Anniversary Update (Redstone 1/1607), which has the Always On VPN profile option, but we’re still Windows-only at this point. Richard has recommended a DirectAccess alternative for Windows, MacOS, iOS and Android.

So if the question is “should you deploy DirectAccess?”, the answer is “maybe”. It’s a Windows Enterprise-only solution but, if you have other clients in your enterprise, you might want to consider alternatives instead of or alongside DA.

Auto-responder for blog marketing requests…

Having a popular blog is great. Mine’s probably not as popular as it once was – mostly that’s because I don’t get the time to write all the content that I would like to – but there are still more than 2000 posts here, so I do see a reasonable volume of traffic.

Unfortunately, that also means I get a lot of emails (sometimes several a day) asking me to add a link/feature some content/something else – much of which is clearly scripted bulk email. And not replying only results in multiple chaser emails… so I’m fighting back with my own scripted response (I actually got the idea from a journalist who provided advice to PR teams to help them only pitch items he’d be interested in…):

“Hi,

You’re receiving this email because you recently emailed about the website at markwilson.it/markwilson.co.uk. Thanks for getting in touch; however, I receive several emails each day that take a lot of time to respond to (or multiple chaser emails if I don’t respond) so please don’t be offended by this automatic reply.

  • If you’re looking to place ads on my site, please don’t ask me what I would charge. Instead, please make me an offer. I don’t really know what the market rates are but you probably do. Please also include details of the page you’d like to advertise on, the landing page you would like and the period you would like to advertise for. I’ll only advertise sites that I think will be relevant to my readers so please don’t be offended if I don’t reply.
  • If you have a great resource that you’re sure would improve my content, please consider that markwilson.it is a blog. I’m not going to go back and edit posts from months or years past but you could always leave a comment on a post instead, as long as it’s genuine and not just spam.
  • If you’re offering to create content, please note that the content on the site is all written by me or by one of a very small number of trusted colleagues or family. I do not feature content written by others to promote their goods and services. If you’re starting out as a writer, I wish you well but would politely suggest you write on a public platform – or maybe start your own blog.

Thanks for your understanding.

Mark”

It’ll probably make no difference at all… but at least I can legitimately ignore repeated requests from senders who haven’t acted on my reply…

Removing the residue left behind by stickers on a laptop

Yesterday, I was at an event where, during a discussion on developers becoming evangelistic on their various technology choices, another delegate referred to the “stickers and t-shirt” brigade. That made me laugh (and he was joking) as my Surface Pro wears quite a few stickers (though I struggle to find good ones for Microsoft products…) and only the night before I’d been removing one that was dragging down the overall tone.

After removing a large sticker that was starting to look a bit scruffy (using a plastic spatula to try and prise it away in pieces), I was left with a lot of sticky mess. Inspired by a WikiHow article on removing stickers from a laptop, I first used some cooking oil, then some window-cleaning fluid and finally a baby wipe to remove the glue, leaving the surface clean.

These may sound like strange materials for removing stickers but I didn’t want to risk anything stronger as it might damage the paint on the device (which isn’t actually mine – it’s my work PC). The end result is some pristine new real estate for a new batch of stickers (maybe there will be some at the Azure Red Shirt Developer event tomorrow…).