The Fastest NuGet Package Ever Published (Probably)
::: tip Updated 2020-07-08 12:16
I forgot to mention how we can use labels to help automatically draft release notes, so I've updated the post with a few extra screenshots and descriptions.
:::
So, you want to publish a new NuGet package? You just want to get your code up onto nuget.org as quickly as possible, but there is so much you have to set up to get there. Not any more! I'll show you how you can create a new project and publish a NuGet package, with all the bells and whistles, in a couple of minutes.
We'll start off by creating a new GitHub repository using the new GitHub CLI. Unfortunately, the CLI is interactive, so after executing a command you have to answer some questions instead of being able to pass flags. In this case, we enter Y to tell it to clone the repository.
gh repo create RehanSaeed/FastestNuGet --public
# Select 'Y' to create a local directory
cd FastestNuGet
The next step is to install the Dotnet Boxed project templates and then create a new project using the NuGet template. There are a lot of optional features you can toggle in this project template, which you can review by looking at the output of the dotnet new nuget --help command.
dotnet new --install Boxed.Templates
dotnet new nuget --help
dotnet new nuget --name FastestNuGet --title "Project Title" --description "Project Description" --github-username RehanSaeed --github-project FastestNuGet
Next we'll commit and push our newly created project to the master branch.
git add .
git commit -m "Initial"
git push --set-upstream origin master
As soon as we do this, we'll see two GitHub Actions have started.
The Build GitHub Action has completed several actions, which you can see below. Note that these actions were completed on Windows, macOS and Ubuntu Linux. This ensures that your code builds and passes tests on all platforms.
This resulted in a NuGet package being packaged up and pushed to GitHub Packages. This is a nice place to store pre-release packages that you can use for testing.
The other, Release Drafter, GitHub Action created a draft release for us in GitHub Releases.
Next we need to create some default labels that we can apply to pull requests. This will help us create automatic release notes for any NuGet packages we release. The bug, enhancement and maintenance labels will categorise changes in our release notes. The major, minor and patch labels will automatically generate a Semantic Versioning 2.0 compliant version number for us. Unfortunately, we can't use the GitHub CLI to create these for us at this time, so we'll have to do it manually.
Now it's time to make a change and submit a new pull request (PR) to our repository. Notice I'm adding a major and an enhancement label to the pull request.
git switch --create some-change
git add .
git commit -m "Some change"
git push --set-upstream origin some-change
gh pr create --fill --label major --label enhancement
Next, I'll check that the pull request passed all eight of its continuous integration build checks and merge the pull request.
If we go back to GitHub Releases, we'll see that our draft GitHub release was automatically updated with details of our pull request! Notice that the enhancement label also caused our pull request to be categorised under 'New Features'.
Next, we'll want to publish an official release of our NuGet package to nuget.org but first, we need to get hold of a NuGet API key from nuget.org and add it as a secret named NUGET_API_KEY in GitHub secrets.
Finally, I'll edit the release and change the tag name and display name for the release to 1.0.0. Normally, the major, minor and patch labels we applied earlier would generate this version for us but this is the first ever Git tag, so we'll need to do it ourselves.
In my last post 'The Easiest Way to Version NuGet Packages' I talked more about how we are using MinVer to take Git tags and version our DLLs and NuGet packages.
Now bask in the glory of seeing your NuGet package on nuget.org. I also just noticed there is a Black Lives Matter (BLM) banner on the site! Those lives certainly do matter; check out my recent post on Racism in Software Development and Beyond for my take on the subject.
That's not all! We didn't just push one NuGet package, we also pushed its symbols to the nuget.org symbol server. The NuGet package is also signed and has Source Link support, so developers can debug code in your NuGet package. If you look at the main README of your project, you'll see a badge showing the status of the latest GitHub Action run on the master branch, and finally you'll also see a graph showing how long each GitHub Action run took and its status over time.
You can take a look at the repository at RehanSaeed/FastestNuGet to see all of the above in action.
The Complete Script
Here is the complete script we ran to get from starting a new project to publishing on NuGet. I took lots of screenshots along the way but overall, you can do all this in about two minutes assuming you have everything installed.
gh repo create RehanSaeed/FastestNuGet --public
# Select 'Y' to create a local directory
cd FastestNuGet
dotnet new --install Boxed.Templates
dotnet new nuget --name FastestNuGet --title "Project Title" --description "Project Description" --github-username RehanSaeed --github-project FastestNuGet
git add .
git commit -m "Initial"
git push --set-upstream origin master
# View GitHub Actions Continuous Integration Build
start https://github.com/RehanSaeed/FastestNuGet/actions
# View NuGet Package Published to GitHub Packages
start https://github.com/RehanSaeed/FastestNuGet/packages
# Create major, minor, patch, bug, enhancement, maintenance labels
start https://github.com/RehanSaeed/FastestNuGet/labels
git switch --create some-change
git add .
git commit -m "Some change"
git push --set-upstream origin some-change
gh pr create --fill --label major --label enhancement
# View and Complete Pull Request
start https://github.com/RehanSaeed/FastestNuGet/pull/1
# Add NUGET_API_KEY to GitHub Secrets
start https://github.com/RehanSaeed/FastestNuGet/settings/secrets
# View and Publish Updated Draft Release
start https://github.com/RehanSaeed/FastestNuGet/releases
# View NuGet Package Published to NuGet
start https://www.nuget.org/packages/FastestNuGet/
Conclusions
I hope this Dotnet Boxed project template accelerates the development of your next NuGet package. There are lots of optional features of the NuGet project template I haven't even shown, like support for Azure Pipelines and AppVeyor continuous integration builds and more, so please do go and take a look.
Automating .NET Security Updates
Every few weeks Microsoft pushes out a .NET SDK update to patch zero-day security vulnerabilities. It's important to keep up to date with these to ensure that your software is protected. The problem is, keeping up to date is a manual and boring process, but what if you could automate it?
In this post, I'll talk through how you can get most of the way to a fully automated solution with the last hurdle requiring some of your help.
Single Source of Truth
The first problem we need to solve is enforcing a specific version of the .NET SDK to be used to build our code. We can do this by adding a global.json file to the root of our repository. We can set the .NET SDK version in it like so:
{
  "sdk": {
    "version": "3.1.402"
  }
}
::: warning Security vs Convenience
If a developer doesn't have the version of the .NET SDK you've specified in your global.json file, Visual Studio will fail to load the projects and show a pretty good error in the output window telling you to update the SDK. It would be nice if it also contained a link to the exact SDK install you needed, to smooth the experience.
:::
Continuous Integration
Continuous integration servers like GitHub Actions, Azure Pipelines or AppVeyor all have a version of the .NET SDK pre-installed for your convenience. However, when a new version is released it takes them days to update to the latest version.
In my opinion, it's just better to install the .NET SDK yourself, which is pretty easy to do. The trick is to read the .NET SDK version number from the global.json file, so that there is a single source of truth for the version number and it's easier to update.
It's worth noting that this adds a few seconds to your build time. However, if the build server already has the version installed which is usually true, it's very quick.
GitHub Actions
For GitHub Actions, we can use the first party actions/setup-dotnet GitHub action to install the .NET SDK. You can provide it a hard-coded version number but it turns out that omitting this causes it to look up the version number from any global.json file it finds.
- name: 'Install .NET Core SDK'
  uses: actions/setup-dotnet@v1
Azure Pipelines
Azure Pipelines has a similar first party UseDotNet task that can install the .NET SDK. It's a bit more verbose, as you need to set the useGlobalJson flag to true.
- task: UseDotNet@2
  displayName: 'Install .NET Core SDK'
  inputs:
    packageType: 'sdk'
    useGlobalJson: true
PowerShell
.NET ships with a PowerShell script and a Bash script to install the .NET SDK. They both take an argument you can pass to tell them to use the global.json file to read the version number. Here is a short cross-platform PowerShell 7 (previously known as PowerShell Core) script that you can use:
if ($isWindows) {
    Invoke-WebRequest "https://dot.net/v1/dotnet-install.ps1" -OutFile "./dotnet-install.ps1"
    ./dotnet-install.ps1 -JSonFile global.json
}
else {
    Invoke-WebRequest "https://dot.net/v1/dotnet-install.sh" -OutFile "./dotnet-install.sh"
    sudo chmod u+x dotnet-install.sh
    sudo ./dotnet-install.sh --jsonfile global.json
}
AppVeyor
AppVeyor has some issues with installing the .NET SDK using the PowerShell and Bash scripts. For reasons I'm not too clear on, you have to set the installation directory. So here is the updated script I use for that:
if ($isWindows) {
    Invoke-WebRequest "https://dot.net/v1/dotnet-install.ps1" -OutFile "./dotnet-install.ps1"
    ./dotnet-install.ps1 -JSonFile global.json -InstallDir 'C:\Program Files\dotnet'
}
else {
    Invoke-WebRequest "https://dot.net/v1/dotnet-install.sh" -OutFile "./dotnet-install.sh"
    sudo chmod u+x dotnet-install.sh
    if ($isMacOS) {
        sudo ./dotnet-install.sh --jsonfile global.json --install-dir '/Users/appveyor/.dotnet'
    } else {
        sudo ./dotnet-install.sh --jsonfile global.json --install-dir '/usr/share/dotnet'
    }
}
Dependabot
Dependabot is an amazing tool that GitHub recently acquired. It automatically submits pull requests to your repository to update packages of various kinds including NuGet and NPM packages.
This is where I need your help. The Dependabot GitHub repository has an open issue (dependabot-core#2442) to do the same for the .NET SDK version in the global.json file. Upvoting the issue will really help raise its profile and get it implemented.
Conclusions
Security is hard. Keeping up to date is important but a never-ending, boring chore. It doesn't have to be that way. With a little extra work, we can get close to making a .NET SDK update a three-character commit every few weeks, and with your help, maybe even that can be automated with Dependabot.
Muhammad Rehan Saeed: Developer at Microsoft | Leonardo Tuna Podcast
I did a fun podcast episode with Leonardo Tuna. We talked about the difficulty of getting a job at Microsoft, some of my work there, JavaScript frontend frameworks, Vue vs React and my interesting experiences writing software for education at Bridge International Academies.
Leonardo is also looking for other developers from all walks of life who are interested in recording podcast episodes, so if you're interested give him a ping.
Deep Dive into Open Telemetry for .NET
- Open Telemetry - Deep Dive into Open Telemetry for .NET
- Open Telemetry - Optimally Configuring Open Telemetry for ASP.NET Core
Open Telemetry is an open source specification, with tools and SDKs used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces). Open Telemetry is backed by the Cloud Native Computing Foundation (CNCF), which backs a mind-boggling array of popular open source projects. It's worth looking at the CNCF Landscape to see what I really mean. The SDKs support all the major programming languages, including C# and ASP.NET Core.
In this post, I'm going to discuss what Open Telemetry is all about, why you'd want to use it and how to use it with .NET specifically. With a typical application, there are three sets of data that you usually want to record: metrics, logs and traces. Let's start by discussing what they are.
Logging
Provides insight into application-specific messages emitted by processes. In a .NET application, Open Telemetry support can easily be added if you use ILogger for logging, which lives in the Microsoft.Extensions.Logging NuGet package. You'd typically already use this if you're building an ASP.NET Core application.
Metrics
Provide quantitative information about processes running inside the system, including counters, gauges, and histograms. Support for metrics in Open Telemetry is still under development and being finalised at the time of writing. Examples of metrics are:
- Percentage CPU usage.
- Bytes of memory used.
- Number of HTTP requests.
Tracing
Also known as distributed tracing, this records the start and end times of individual operations alongside any ancillary data relevant to the operation. An example of this is recording a trace of an HTTP request in ASP.NET Core. You might record the start and end time of a request/response, and the ancillary data would be the HTTP method, scheme, URL etc.
If an ASP.NET Core application makes database calls and HTTP requests to external APIs, these could also be recorded, as long as the database and APIs, which are in totally separate processes, also support recording Open Telemetry tracing. It's possible to follow the trace of an HTTP request from a client, down to your API, down to a database and all the way back again. This allows you to get a deep understanding of where the time is being spent or, if there is an exception, where it is occurring.
Jaeger
Collecting metrics, logs and traces is only half of the equation; the other half is exporting that data to various applications that know how to collect Open Telemetry formatted data, so you can view it. The endgame is to be able to see your data in an easily consumable fashion using nice visualisations, so you can spot patterns and solve problems.
The two main applications that can collect and display Open Telemetry compatible trace data are Jaeger and Zipkin. Zipkin is a bit older and doesn't have as nice a UI, so I'd personally recommend Jaeger. It looks something like this:
The above image shows the trace from a 'frontend' application. You can see how it makes calls to MySQL, Redis and external API's using HTTP requests. The length of each line shows how long it took to execute. You can easily see all of the major operations executed in a trace from end to end. You can also drill into each individual line and see extra information relevant to that part of the trace. I'll show you how you can run Jaeger and collect Open Telemetry data in my next blog post.
Spans
Each line in the Jaeger screenshot above is called a span, which in .NET is represented by the System.Diagnostics.Activity type. It has a unique identifier and start and end times, along with a parent span's unique identifier, so it can be connected to other spans in a tree structure representing an overall trace. Finally, a span can also contain other ancillary data that I will discuss further on.
::: tip
Unfortunately, .NET's naming has significantly deviated from the official Open Telemetry specification, resulting in quite a lot of confusion on my part. Happily, I've been through that confusion, so you don't have to!

My understanding is that .NET already contained a type called Activity, so the .NET team decided to reuse it instead of creating a new Span type like you'd expect. This means that a lot of the naming does not match up with the Open Telemetry specification. From this point forward, you can use the words 'span' and 'activity' interchangeably.
:::
Recording your own traces using spans is pretty simple. First, we must create an ActivitySource from which spans or activities can be recorded. This just contains a little information about the source of the spans created from it.
private static ActivitySource activitySource = new ActivitySource(
    "companyname.product.library",
    "semver1.0.0");
Then we can call StartActivity to start recording, and finally call Dispose to stop recording the span.
using (var activity = activitySource.StartActivity("ActivityName"))
{
    // Pretend to do some work.
    await LongRunningAsync().ConfigureAwait(false);
} // The activity is stopped automatically at the end of this block when it is disposed.
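Spans also nest automatically: if you start a span while another is current on the same thread or async flow, the new span becomes its child. Here's a minimal sketch of my own (assuming the activitySource defined above and at least one listener, since StartActivity returns null when nothing is listening):

using (var parent = activitySource.StartActivity("Parent"))
using (var child = activitySource.StartActivity("Child"))
{
    // The child's ParentId points at the parent's Id, forming the tree structure of a trace.
    Console.WriteLine(child?.ParentId == parent?.Id); // True when both spans are recorded.
}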
Events
Along with our span we can record events. These are timestamped events that occur at a single point in time within your span.
using (var activity = activitySource.StartActivity("ActivityName"))
{
    await LongRunningOperationAsync().ConfigureAwait(false);
}

public async Task LongRunningOperationAsync()
{
    await Task.Delay(1000).ConfigureAwait(false);

    // Log timestamped events that can take place during an activity.
    Activity.Current?.AddEvent(new ActivityEvent("Something happened."));
}
Within the LongRunningOperationAsync method, we don't have access to the current span. One way to get hold of it would be to pass it in as a method parameter. However, a better way that decouples the two operations is to use Activity.Current, which gives you access to the current span within the currently running thread.
One common pitfall I can foresee is that Activity.Current could be null due to the caller deciding not to create a span for some reason. Therefore, we use the null conditional operator ?. to only call AddEvent if the current span is not null.
Attributes
Attributes are name value pairs of data that you can record as part of an individual span. The attribute names have a loose standard for how they are put together that I'll talk about further on.
::: tip
Tags in .NET are called Attributes in the Open Telemetry specification.
:::
using (var activity = activitySource.StartActivity("ActivityName"))
{
    await LongRunningOperationAsync().ConfigureAwait(false);
}

public async Task LongRunningOperationAsync()
{
    await Task.Delay(1000).ConfigureAwait(false);

    // Log an attribute containing arbitrary data.
    Activity.Current?.SetTag("http.method", "GET");
}
You can add new attributes or update existing attributes using the Activity.SetTag method. There is also an Activity.AddTag method, but that adds the attribute without checking whether one with the same name already exists, which can leave you with duplicates, so I'd avoid using it.
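As a quick illustration (a sketch of my own, using a made-up retry.count attribute name), calling SetTag twice with the same name simply overwrites the value:

// SetTag adds the attribute when it's missing and updates it when it's present.
Activity.Current?.SetTag("retry.count", 1);
Activity.Current?.SetTag("retry.count", 2); // Overwrites the previous value.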
IsRecording
IsRecording is a flag on a span that returns true if the end time of the span has not yet been set and false if it has, thus signifying whether the span has ended. In addition, it can also be set to false if the application is sampling Open Telemetry spans i.e. you don't want to collect a trace for every single execution of the code, but might only want a trace for, say, 10% of executions to reduce the significant overhead of collecting telemetry.
::: tip
The Activity.IsAllDataRequested property in .NET is called IsRecording in the Open Telemetry specification.
:::
using (var activity = activitySource.StartActivity("ActivityName"))
{
    await LongRunningOperationAsync().ConfigureAwait(false);
}

public async Task LongRunningOperationAsync()
{
    await Task.Delay(1000).ConfigureAwait(false);

    // It's possible to optionally request more data from a particular span.
    var activity = Activity.Current;
    if (activity != null && activity.IsAllDataRequested)
    {
        activity.SetTag("http.url", "http://www.mywebsite.com");
    }
}
It's worth reading a bit more about Open Telemetry sampling for more details. In most real world applications, collecting telemetry for every execution of your code is prohibitively expensive and unrealistic, so you will likely be using some form of sampling. Therefore, the IsRecording/IsAllDataRequested flag becomes something you should probably always check (as in the above example) before you add events or attributes to your span.
Trace Semantic Conventions
Note the attribute names http.method and http.url I used in the above examples. There are certain commonly used attribute names that have been standardised in the Open Telemetry specification. Standardised attribute names use a lowercase snake_case syntax with . separator characters. Standardising the names of commonly used attributes gives applications like Jaeger the ability to show nice UI customisations. Attribute names have been categorised under a few different buckets; it's worth spending some time taking a look at them:
- General: General semantic attributes that may be used in describing different kinds of operations.
- HTTP: Spans for both HTTP client and server.
- Database: Spans for SQL and NoSQL client calls.
- RPC/RMI: Spans for remote procedure calls (e.g., gRPC).
- Messaging: Spans for interaction with messaging systems (queues, publish/subscribe, etc.).
- FaaS: Spans for Function as a Service (e.g., AWS Lambda).
- Exceptions: Attributes for recording exceptions associated with a span.
Exporting Telemetry
There are many plugins for exporting data collected using Open Telemetry which I'll discuss in my next blog post about using Open Telemetry in ASP.NET Core. Therefore, it's highly unlikely that you'd need to manually write your own code to consume data collected using Open Telemetry.
However, if you're interested, then Jimmy Bogard has a very well written blog post about using ActivitySource and ActivityListener to listen to any incoming telemetry. In short, you can easily subscribe to consume Open Telemetry data like so:
using var subscriber = DiagnosticListener.AllListeners.Subscribe(
    listener =>
    {
        Console.WriteLine($"Listener name {listener.Name}");
        listener.Subscribe(kvp => Console.WriteLine($"Received event {kvp.Key}:{kvp.Value}"));
    });
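Since the paragraph above mentions ActivityListener, here's a minimal sketch (assuming .NET 5 or later) of listening to spans directly, rather than going through DiagnosticListener:

using System.Diagnostics;

var listener = new ActivityListener
{
    // Listen to every ActivitySource in the process.
    ShouldListenTo = source => true,
    // Sample everything, so all spans are recorded.
    Sample = (ref ActivityCreationOptions<ActivityContext> options) =>
        ActivitySamplingResult.AllDataAndRecorded,
    // Called whenever a span ends.
    ActivityStopped = activity => Console.WriteLine($"{activity.DisplayName} took {activity.Duration}"),
};
ActivitySource.AddActivityListener(listener);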
Crossing the Process Boundary
Earlier on, I spoke about how it's possible to record a trace across process boundaries, for example, collecting a trace from a client application through to a database and an API, both running in separate processes. Given what you now know about recording spans above, how is this possible?
This is where the W3C Trace Context standard comes in. It defines a series of HTTP headers that pass information from one process to another about any trace that is currently being recorded. There are two HTTP headers defined in the specification:
- traceparent - Contains the version, trace-id, parent-id and trace-flags in an encoded form separated by dashes.
  - version - The version of Open Telemetry being used, which is always 00 at the time of writing.
  - trace-id - The unique identifier of the trace.
  - parent-id - The unique identifier of the span which is acting as the current parent span.
  - trace-flags - A set of flags for the current trace which determines whether the current trace is being sampled and the trace level.
- tracestate - Vendor-specific data represented by a set of name/value pairs.
I'm not sure why, but the HTTP headers are defined in lower-case. Here is an example of what these headers look like in an HTTP request:
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
tracestate: asp=00f067aa0ba902b7,redis=t61rcWkgMzE
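If you ever need to hook a span up to an incoming traceparent header yourself, .NET can parse it for you. A sketch under the assumption that you're on .NET 5 or later and have the activitySource from earlier (the IncomingRequest span name is just an example):

using System.Diagnostics;

var traceParent = "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01";
if (ActivityContext.TryParse(traceParent, traceState: null, out var parentContext))
{
    // Start a server span that continues the incoming trace.
    using var activity = activitySource.StartActivity("IncomingRequest", ActivityKind.Server, parentContext);
}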
If you're interested in what it looks like to actually implement the W3C Trace Context, Jimmy Bogard has been implementing Open Telemetry for NServiceBus and shows how it can be done.
Baggage
Similar to attributes, baggage is another way we can add data as name/value pairs to a trace. The difference is that baggage travels across process boundaries using a baggage HTTP header, as defined in the W3C Baggage specification. It is also added to all spans in a trace.
baggage: userId=alice,serverNode=DF:28,isProduction=false
Similar to the way attributes can be recorded using the AddTag and SetTag methods, with baggage we can use the AddBaggage method. For some reason, a SetBaggage method that would also update baggage does not exist.
using (var activity = activitySource.StartActivity("ActivityName"))
{
    await LongRunningOperationAsync().ConfigureAwait(false);
}

public async Task LongRunningOperationAsync()
{
    await Task.Delay(1000).ConfigureAwait(false);

    // Add baggage containing arbitrary data to the entire trace.
    Activity.Current?.AddBaggage("http.method", "GET");
}
So why would you use baggage over attributes? Well, if you have a globally unique identifier for a particular trace, like a user ID, order ID or some session ID, it might be useful to add it as baggage because it's relevant to all spans in your trace. However, you must be careful not to add too much baggage because it will add overhead when making HTTP requests.
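Because baggage flows to every span in the trace, a downstream operation can read a value that was added far upstream. A tiny sketch (the userId key is just the example from the header above):

// Read a baggage item that was added by an upstream span or service.
var userId = Activity.Current?.GetBaggageItem("userId");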
You're Already Using It
The .NET team, in their wisdom, decided to take quite a large gamble on Open Telemetry. They not only repurposed their Activity type to represent a span, but they also instrumented several libraries, so you don't have to.

The HttpClient already adds the W3C Trace Context HTTP headers from the current span automatically if a trace is being recorded. Also, an ASP.NET Core application already reads W3C Trace Context HTTP headers from incoming requests and populates the current span with that information.
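One caveat worth hedging on: in .NET Core 3.x the default Activity ID format was still the older hierarchical one, with W3C only becoming the default in .NET 5. If you're on the older runtime, you can opt in to W3C IDs explicitly at startup:

using System.Diagnostics;

// Opt in to the W3C Trace Context ID format (the default from .NET 5 onwards).
Activity.DefaultIdFormat = ActivityIdFormat.W3C;
Activity.ForceDefaultIdFormat = true;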
Since the .NET team has made it so easy to collect telemetry and integrated the Activity type into the base class libraries, I expect a lot of other libraries and applications to follow this example.
The ILogger interface from the Microsoft.Extensions.Logging NuGet package, commonly used in ASP.NET Core applications, is also able to collect logs compatible with Open Telemetry.
Up Next
I've discussed how Open Telemetry is all about collecting logs, metrics and trace data, and gone fairly deep into collecting trace data. In my next post, I'll cover how you can optimally configure ASP.NET Core and Open Telemetry traces and logs. I'll also fire up Jaeger and show how you can get an ASP.NET Core app to export Open Telemetry data to it.
Open Telemetry for ASP.NET Core
- Open Telemetry - Deep Dive into Open Telemetry for .NET
- Open Telemetry - Configuring Open Telemetry for ASP.NET Core
- Open Telemetry - Exporting Open Telemetry Data to Jaeger
- Open Telemetry - Optimally Configuring Open Telemetry for ASP.NET Core
Configuring Open Telemetry for ASP.NET Core is a fairly simple process. In this post, I'll show you the simplest setup for tracing Open Telemetry in ASP.NET Core and then move to a more fully featured example.
To begin with, we'll just be exporting our Open Telemetry traces to the debug output so we can see what is being recorded but we'll soon move on to exporting to Jaeger in another post where we can see nice visualisations of our traces.
The Simplest Setup
Open Telemetry for ASP.NET Core ships as several NuGet packages. The OpenTelemetry.Extensions.Hosting package is the required core package to add Open Telemetry to your application.
You can optionally add packages beginning with OpenTelemetry.Instrumentation.* to collect extra span attributes, e.g. the OpenTelemetry.Instrumentation.AspNetCore package adds span attributes for the current request and response.
You can also optionally add packages beginning with OpenTelemetry.Exporter.* to export trace data, e.g. the OpenTelemetry.Exporter.Console package exports all trace data to the console or debug output of your application.
<ItemGroup Label="Package References">
  <PackageReference Include="OpenTelemetry.Exporter.Console" Version="1.0.1" />
  <PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc2" />
  <PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc2" />
</ItemGroup>
In our Startup class's ConfigureServices method, we can add Open Telemetry support with just a few lines of code using the AddOpenTelemetryTracing method.
public class Startup
{
    private readonly IWebHostEnvironment webHostEnvironment;

    public Startup(IWebHostEnvironment webHostEnvironment) =>
        this.webHostEnvironment = webHostEnvironment;

    public virtual void ConfigureServices(IServiceCollection services)
    {
        // ...omitted
        services.AddOpenTelemetryTracing(
            builder =>
            {
                builder
                    .SetResourceBuilder(ResourceBuilder
                        .CreateDefault()
                        .AddService(webHostEnvironment.ApplicationName))
                    .AddAspNetCoreInstrumentation();

                if (webHostEnvironment.IsDevelopment())
                {
                    builder.AddConsoleExporter(options => options.Targets = ConsoleExporterOutputTargets.Debug);
                }
            });
    }

    public virtual void Configure(IApplicationBuilder application)
    {
        // ...omitted
    }
}
The SetResourceBuilder method is your opportunity to add a set of common attributes to all spans created in the application. In the above case, we've added an application name.
The AddAspNetCoreInstrumentation method is where we enable the collection of attributes relating to ASP.NET Core requests and responses.
Finally, we use AddConsoleExporter to export the trace data to the debug output. You could also output to the console, but there is a lot of trace data and the console is already outputting log information, which results in duplication, so I prefer not to do that. Note that we only do this if we are running in the development environment.
The Trace Output
If we now start the application and execute a request/response cycle, we can see the following in our IDE's debug output window:
[09:14:35 INF] HTTP GET /favicon-32x32.png responded 200 in 0.7606 ms
Activity.Id: 00-674c4d4b579bc64baa18b3bcd86c2de4-9d35fca101782841-01
Activity.DisplayName: /favicon-32x32.png
Activity.Kind: Server
Activity.StartTime: 2021-02-11T09:14:35.0633272Z
Activity.Duration: 00:00:00.0113650
Activity.TagObjects:
    http.host: localhost:5001
    http.method: GET
    http.path: /favicon-32x32.png
    http.url: https://localhost:5001/favicon-32x32.png
    http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36
    http.status_code: 200
    otel.status_code: UNSET
Resource associated with Activity:
    service.name: ApiTemplate
    service.instance.id: dd70a756-4347-4794-aed5-0a5e94bca423
The first line of the debug output is actually from our log output (I had only enabled information level logs). The second line is where the Open Telemetry trace starts, and it is broken up into several sections. Most of the trace output is pretty self-explanatory and describes the request/response pretty well, including the span ID, path, response status code, start time and duration of the span.

What I personally found surprising is the last section. In the last section, we get the application name that we set up in the SetResourceBuilder call, but we also get a unique identifier for the current instance of the application. This can be useful if we were running multiple instances of the application in Kubernetes or Docker Swarm, for example.
Up Next
I've shown a basic example of setting up Open Telemetry and discussed the defaults of what trace data is collected in ASP.NET Core. In my next post, I'll cover how you can fire up Jaeger and show how you can get an ASP.NET Core app to export Open Telemetry data to it.
Exporting Open Telemetry Data to Jaeger
- Open Telemetry - Deep Dive into Open Telemetry for .NET
- Open Telemetry - Configuring Open Telemetry for ASP.NET Core
- Open Telemetry - Exporting Open Telemetry Data to Jaeger
- Open Telemetry - Optimally Configuring Open Telemetry for ASP.NET Core
As I talked about in my first post, the end goal is to get nice visualisations from our Open Telemetry data, so we can spot patterns and learn something from the behaviours of our applications.
In this post, I'll show how we can export the Open Telemetry traces, logs and metrics that we've collected to Jaeger and view them in the Jaeger dashboard.
Open Telemetry Protocol (OTLP)
There are actually two methods of exporting our telemetry to Jaeger. The first uses Jaeger's proprietary protocol and is not something I plan to cover in this post. The second uses the Open Telemetry Protocol (OTLP), which is an open standard that we can use to export Open Telemetry data to any application that supports it.
Jaeger is a pretty complex application that splits its responsibilities into several separate binaries. It splits the collection of telemetry into a binary called a 'collector'. If we want Jaeger to collect telemetry using the Open Telemetry Protocol, we need to use the Jaeger Open Telemetry collector. I'm not going to cover how to set up Jaeger in a full production setup. Instead, I'll be using the jaegertracing/opentelemetry-all-in-one Docker image, which makes running Jaeger with the Open Telemetry collector as easy as running this command:
docker run --name jaeger -p 13133:13133 -p 16686:16686 -p 4317:55680 -d --restart=unless-stopped jaegertracing/opentelemetry-all-in-one
I've also opened up a few ports to the Docker container:
- 4317 - Open Telemetry Protocol (OTLP) receiver where we expect to receive Open Telemetry data.
- 16686 - Dashboard where users can see visualisations.
- 13133 - Jaeger health check.
::: warning
The default port used by the Open Telemetry Protocol has recently been changed from 55680 to 4317. This change has not yet been made in Jaeger, which still uses the older port number.
:::
Exporting from .NET
In the last post, I showed how we can collect Open Telemetry data in an ASP.NET Core application. I'm going to build on that example to export to Jaeger. To start with, we'll need to add an additional NuGet package called OpenTelemetry.Exporter.OpenTelemetryProtocol:
<PackageReference Include="OpenTelemetry.Exporter.Console" Version="1.0.1" />
<PackageReference Include="OpenTelemetry.Exporter.OpenTelemetryProtocol" Version="1.0.1" />
<PackageReference Include="OpenTelemetry.Extensions.Hosting" Version="1.0.0-rc2" />
<PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" Version="1.0.0-rc2" />
Then we can use the AddOtlpExporter method to configure where to export to. In this case, I'm exporting to http://localhost:4317, where port 4317 is the default port used by the Open Telemetry Protocol. Ideally, we'd retrieve this value from configuration, but I'm keeping things simple in this example.
services.AddOpenTelemetryTracing(
    builder =>
    {
        builder
            .SetResourceBuilder(ResourceBuilder
                .CreateDefault()
                .AddService(webHostEnvironment.ApplicationName))
            .AddAspNetCoreInstrumentation()
            .AddOtlpExporter(options => options.Endpoint = new Uri("http://localhost:4317"));

        if (webHostEnvironment.IsDevelopment())
        {
            builder.AddConsoleExporter(options => options.Targets = ConsoleExporterOutputTargets.Debug);
        }
    });
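As mentioned above, you'd ideally read the endpoint from configuration rather than hard-coding it. A sketch, assuming you've also injected an IConfiguration into the Startup class and added a hypothetical Otlp:Endpoint key to appsettings.json:

// Replace the hard-coded endpoint above with a value read from configuration.
.AddOtlpExporter(options => options.Endpoint = new Uri(this.configuration["Otlp:Endpoint"]))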
The Dashboard
We can now fire up the Jaeger dashboard in our browser, which we can access at http://localhost:16686. If we execute some request/response cycles in our application where we have added Open Telemetry support, we can see the telemetry for each request/response:
If we drill down into a particular request/response trace, we can view the spans (in this simple example, there is only one) and all attributes associated with the span. This is the same data we saw in the debug output from my previous post:
Up Next
In this post, I've shown how you can quickly fire up Jaeger and how you can get an ASP.NET Core app to export Open Telemetry data to it. In my next and final post, I'll discuss optimally configuring Open Telemetry for an ASP.NET Core application.
A System for Grouping & Sorting CSS Properties
There are no hard and fast rules for code style and, as I've written about before, it can get ugly when people have opposing opinions on the subject. In CSS, which I'm quite fond of writing, I believe the answer is mostly given to us by using Prettier, the opinionated code formatter. Unfortunately, Prettier does not sort CSS properties for you and never will, so this post is one solution (not the correct solution, because there is no correct solution).
There are automated tools like postcss-sorting that can help with this but I think it'd be difficult to use in real life because there will always be exceptions to the hard coded rules.
But why even bother to group and sort CSS properties? Well, I think it makes sense for two reasons. The first is that it can make it quicker to scan the CSS and find what you need. The second is that if you're working in a team environment, it can make it easier to work on CSS that has one overarching, consistent style.
Grouping CSS Properties
I believe you can split CSS properties into a few groups:
- Parent layout
- Layout
- Box Model
- Positioning
- Display
Here is an example of the five groups in real life:
.card {
  /* Parent Layout */
  grid-area: card;

  /* Layout */
  display: grid;
  align-items: center;
  gap: 10px;
  grid-template-areas:
    "header header"
    "content content";
  grid-template-columns: 1fr 1fr;
  grid-template-rows: 1fr 1fr;
  justify-items: center;

  /* Box Model */
  box-sizing: border-box;
  width: 100px;
  height: 100px;
  margin: 10px;
  padding: 10px;

  /* Positioning */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  z-index: 10;

  /* Display */
  background-color: red;
  border: 10px solid green;
  color: white;
  font-family: sans-serif;
  font-size: 16px;
  text-align: center;
}
Parent Layout
The parent layout is any CSS layout property that affects or comes from the parent element. This usually boils down to grid-area if you're using grid-template-areas, which you totally should because it allows you to change the layout of child elements without modifying the child elements' CSS too much.
.card {
  /* Parent Layout */
  grid-area: card;

  /* ... */
}
Layout
CSS layout properties determine how the contents of the CSS class will be laid out. The common case is that you're using CSS Grid or FlexBox and want to group their respective properties together where they make the most sense.
I think it makes the most sense to start with the display property, because that determines the type of layout, followed by the other properties in alphabetical order.
.card {
  /* ... */

  /* Layout */
  display: grid;
  align-items: center;
  gap: 10px;
  grid-template-areas:
    "header header"
    "content content";
  grid-template-columns: 1fr 1fr;
  grid-template-rows: 1fr 1fr;
  justify-items: center;

  /* ... */
}
Box Model
CSS properties that affect the box model can come next. Again, I'm using alphabetical order, except for width and height, where it makes more sense for them to go together, with width always being first (there are a lot of exceptions to the rules in CSS).
.card {
  /* ... */

  /* Box Model */
  box-sizing: border-box;
  margin: 10px;
  padding: 10px;
  width: 100px;
  height: 100px;

  /* ... */
}
Positioning
CSS properties related to position come next. Similar to display, we put position at the top and follow in alphabetical order. Again, there is an exception to be made here with top, right, bottom and left, which follow the order that margin and padding values take.
.card {
  /* ... */

  /* Positioning */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  z-index: 10;

  /* ... */
}
Display
Finally, there are the CSS display properties, which affect the look and feel. This is also a kind of 'Other' category, where you can place remaining properties which don't make sense in the other groups.
.card {
  /* ... */

  /* Display */
  background-color: red;
  border: 10px solid green;
  color: white;
  font-family: sans-serif;
  font-size: 16px;
  text-align: center;
}
Final Comments
This is just one method of grouping and ordering CSS properties that I've found useful in real life projects. There is no correct answer to this problem, and I think the problem space is probably too complex for a tool like Prettier to do the work for you, because there will always be exceptions to the rules.
CSS General Rules of Thumb
Learning CSS is difficult and, as someone who has tried to teach CSS to others, it's also difficult to point to good teaching resources. There isn't a simple video course I can point to and say "go and watch this". I think part of the problem is that there are so many ways to do things in CSS and also that there are so many little tricks you have to learn. As yet, the best advice I've been able to give is to go and read the last few years' worth of CSS Tricks blog posts, but that isn't really an easy or quick task, or even one that most people would do.
In this post, I wanted to give some super simple general rules of thumb that developers who are new to CSS can follow and get pretty far fairly quickly. I also wanted a public resource I could point to when I was reviewing CSS in pull requests. As these are rules of thumb, they won't be applicable everywhere but they should work in the general case.
Standards
There are lots of ways of doing things in CSS. Here are some friendly defaults that make a good start.
Block Element Modifier (BEM)
You should add CSS classes to any HTML elements you want to add CSS styles to. This means coming up with a naming convention for these CSS class names. Block Element Modifier (BEM) is a great naming convention to get you started. It's worth noting that there are many conventions out there, the key is to pick one and stay consistent.
/* Block component */
.button {
}
/* Element that depends upon the block */
.button__price {
}
/* Modifier that changes the style of the block */
.button--orange {
}
.button--big {
}
CSS Code Style
Keeping your CSS organised can help make reading it easier for yourself and others. Code style is a very subjective topic and everyone has their own opinions. I talk more about this here.
Layout
I find that people have a lot of trouble laying out content the way they want. This is a huge topic but here are some basic things I found useful.
HTML
If your HTML isn't great, you are going to have a tough time with your CSS. This is usually because there are lots of extra div or span elements that you don't need. Get your HTML right first and avoid adding HTML elements just to get the layout right.
It's also a good idea to learn about Semantic HTML. Every HTML tag has a meaning for search engines and those who use assistive technologies like screen readers to navigate the web (there are more people using these than you think).
Grid vs FlexBox
As a general rule, CSS Grid can cater to 80% of your layout needs, so learn it well. In particular, pay close attention to grid-template-areas, which is a little more verbose to set up but makes your CSS Grid layout much easier to read and more flexible, because changing the layout means only changing the CSS for the container instead of each and every child of the container.
When you want to wrap content CSS FlexBox has your back.
Display Block vs Inline
Understanding the different display modes (inline, block, inline-block, grid, flex, etc.) is a must. In particular, pay attention to the first two because they are the default for a lot of HTML elements.
If your element is hugging the top instead of taking all the available space, it's probably because it's using display: block and your good friend display: grid will fix that for you.
Heights and Widths
Get your hands away from the keyboard and put them down very slowly. If you are setting height and width, it's usually the wrong thing to do and makes your layout brittle and unwilling to flex.
Rather than explicitly setting a size, try to allow the contents of the container to set the size for you. This means changing the way you think about layout. In general there is an ordering to the CSS properties you should use for sizing your elements:
1. auto-fit/auto-fill and minmax() - Used in conjunction with CSS Grid's grid-template-columns, these will allow your grid to become responsive.
2. grid-gap or gap (in newer browsers) - Used with CSS Grid and FlexBox (FlexBox only supports gap). This allows you to add spacing between elements in your container.
3. padding - It's generally preferable to use CSS properties that only affect the current element, as opposed to parent elements like margin does.
4. margin
5. min-height/max-height and min-width/max-width
6. height and width - Use these with care if you really know what you're doing.
Browser Support
The web is a wild west; there are many features in CSS which have varying degrees of support in different browsers and the many different versions of each browser. Whenever you're considering using a new CSS feature you found online, it's a good idea to search caniuse.com to see if it's supported in the web browsers you want to target for your application.
Colours
A colour in CSS can be written in several ways. Below, I'm setting the colour white in a few different ways. You should prefer hsl because it's easiest for a human to understand how it works.
color: white;
color: #ffffff;
color: rgb(255, 255, 255);
color: hsl(0, 0%, 100%);
Accessibility
Making your web application accessible is a must, so let's talk about how.
Accessible HTML
I've talked about HTML before but I'll repeat it here because it's so important. You must learn Semantic HTML and get your HTML right before thinking about the CSS.
Outline
Never set the outline on a focusable element to none. People need those outlines to see what they've focused on.
/* Never do this! */
.foo {
  outline: none;
}
Accessible Colours
Ensure that the background-color and color you've chosen are accessible. It's pretty easy to do and you can read more here.
Closing Thoughts
CSS seems on the face of it to be a simple programming language (and yes, it is one) but it has a lot of depth to it once you start using it. If you want to learn more, reading the CSS Tricks blog posts is a great way to learn. There are also some decent courses on Frontend Masters, although you do have to pay to view those.
Web Component Custom Element Gotchas
::: tip Update (04 May 2021)
Chris Holt from the FAST UI team at Microsoft got in touch with me with an alternative workaround to using a wrapper element when required to use a semantic HTML element like a section, so I've updated that section below.
:::
Recently I've been writing web components and found several gotchas that make working with them that much more difficult. In this post, I'll describe some gotchas you can experience when using web components.
This post is framework agnostic, but I've been using a lightweight library called FAST Element built by Microsoft. It is similar to Google's LitElement in that it provides a very lightweight wrapper around native web component APIs. Overall, the experience has been interesting, but I'm not sure I'm willing to give up on Vue just yet. This post was written based on my experiences with it.
Non-Web Components
When writing a non-web component called custom-component using a framework like Vue, React or Angular, you quite often end up with HTML and CSS that looks like this:
<div class="custom-component">
  <h1>Hello</h1>
  <p>World</p>
</div>

.custom-component {
  // ...
}
The rendered HTML from these frameworks looks exactly the same as above. However, when writing web components, you have an additional custom HTML element rendered into the DOM, with the contents of the element being rendered into the shadow DOM, which can introduce bugs for the unwary developer.
<custom-component>
  <!-- Shadow DOM -->
  <div class="custom-component">
    <h1>Hello</h1>
    <p>World</p>
  </div>
</custom-component>
The Wrapper div
The first gotcha we encounter is that we now have an extra HTML element that we don't actually need. Extra DOM elements mean higher memory usage and slower performance. As I'll discuss in a moment, it can also mean a more complex layout. The fix for this is simple: we can remove the wrapper div inside our component, so our code now becomes:
<h1>Hello</h1>
<p>World</p>
:host {
  // ...
}
We can use the :host pseudo-selector to style the custom-component HTML element, and now when our component is rendered we get the following:
<custom-component>
  <!-- Shadow DOM -->
  <h1>Hello</h1>
  <p>World</p>
</custom-component>
Semantic HTML
Removing the wrapper div is all well and good, but what if it's not a div but a semantic HTML element like section or article? Both of these tags have specific meanings for screen readers and search engines, and we must use these tags to support them. Well, in this case, we have to bring back our wrapper element like so, and encounter our second gotcha:
<section class="custom-component">
  <h1>Hello</h1>
  <p>World</p>
</section>
Our HTML will now be rendered as:
<custom-component>
  <section class="custom-component">
    <h1>Hello</h1>
    <p>World</p>
  </section>
</custom-component>
Now if we want to style the component, we have to target the .custom-component class, where we can place most of our styles, but we also need to target the :host to change some defaults.
The default value of display in a custom HTML element like <custom-element> is actually inline, which is usually not what you will want (margin, padding and border will not work as you expect), so you'll need to explicitly set your own default. This is our third gotcha! I think it makes sense to be explicit and do this for every web component.
In addition, if the content of your web component does not extend beyond the boundary of the component itself, it's a good idea to add contain: paint for a small performance boost (the Mozilla docs have more on contain).
:host {
  display: block;
  contain: paint;
}

.custom-component {
  // ...
}
Template Element
One alternative to using a section above, pointed out by Chris Holt, is to use a template HTML element, which gives you the ability to add custom HTML attributes to the custom-component element itself.
<template role="section">
  <h1>Hello</h1>
  <p>World</p>
</template>
Our HTML will now be rendered as:
<custom-component role="section">
  <h1>Hello</h1>
  <p>World</p>
</custom-component>
In the example above, I've added role="section" to tell search engines and screen readers to treat the custom-component HTML element like a section element. With this approach, we no longer have a wrapper HTML element, which should help improve performance and lower memory usage (particularly on low powered phones). We also get the advantage of not having to add extra styles for the wrapper element. The downside is that we have to use the role attribute.
:host {
  display: block;
  contain: paint;
  // ...
}
Final Thoughts
The promise of web components is that they are lightweight and fast to run. The downside seems to be that there is more to think about when building a web component, as opposed to a standard framework-based component using Vue, React or Angular.
Code Coverage & Frontend Testing
I was recently asked an interesting question about unit test code coverage and frontend testing by a colleague:
Policies describe 80% plus unit test coverage and our React devs are pushing back a lot, arguing there is little logic in React and it would be a waste of time. Rehan, any advice/pointers for us on this?
Code coverage in tests is always a controversial topic among developers, and I don't think there is a 'correct' answer. The answer you're going to get from most people is 'it depends' 🤷🏼.
If you're developing a Mars rover, where one bad line of code could mean mission over, or flight navigation software, go ahead and go for 100% code coverage, let alone 80%. The question is: how tolerant are you to the risk of a bug in the frontend React code? If something doesn't work, how long will it take you to fix it and how much will that cost you, versus the cost of writing extra tests?
Jest
Specifically regarding frontend unit testing, my team has been using Jest, an excellent unit testing framework built by Facebook, which I highly recommend. Jest is special because it's a single NPM package with everything you need rolled into it, including an assertion library, mocking framework and code coverage tools.
To reduce the burden of writing tests you can leverage snapshot testing. A snapshot test takes very few lines of code to write and generates a file containing the rendered HTML for a Vue/React/Angular/Web/Other component. My team writes at least one snapshot test for every component we write and it hasn't been too onerous for us.
I just ran the code coverage tool built into Jest on one of our projects and it came to 78.7% coverage. It's worth mentioning that our projects are mostly very simple and don't contain much complex branching logic. Jest also allows you to set a code coverage limit, which will fail a build if code coverage drops below a certain level set by you. Here is an example of configuring Jest in your package.json file to do just that:
"jest": {
"coverageThreshold": {
"global": {
"branches": 70,
"functions": 70,
"lines": 70,
"statements": -10
}
}
}
Cypress
In addition, if you want a robust system, it's also worth setting up some integration or functional tests. I recommend a tool called Cypress for this job. According to the testing pyramid, you need fewer of these tests compared to unit tests, as they can be more brittle.
I've found them very useful for making sure that you don't release something completely broken (happens more often in the wild than any dev would like you to think). In fact, I use Cypress for this very blog to verify every commit to make sure I don't accidentally break it.
.NET Boxed Visual Studio Integration
A few weeks ago, Scott Hanselman blogged about creating dotnet new based projects directly from Visual Studio. Unfortunately, at that time Visual Studio 16.9 didn't properly support full solution templates and only supported project templates.
Happily, Microsoft just released Visual Studio 16.10, and one of the things they didn't talk about was that it now adds a user interface for creating solutions from dotnet new templates.
Given that I author the .NET Boxed solution and item templates, I thought I'd run through how it's done.
Step by Step
The first step is to install a dotnet new based solution/project/item template NuGet package. Sadly, this step is still command line only, but there are plans to add a UI so you can search for and install templates all through Visual Studio.
dotnet new --install Boxed.Templates
Next we can fire up Visual Studio and go to the 'New Project' dialogue. You can select '.NET Boxed' from the 'Project type' menu on the top right to see all .NET Boxed project templates.
The next step is where we can give the project a name as usual and decide where we want to store it on disk.
Next, we get to the interesting new bit, where we can select from the many options that .NET Boxed templates provide:
Finally, we can hit 'Create' and start getting productive in Visual Studio.
That's it! Simple, isn't it?
The Windows Package Manager
Winget is a package manager for Windows, a bit like apt for Linux or the open source Chocolatey for Windows. Version 1.1 of the Windows Package Manager (winget) was recently released. I've had my eye on it for a while now and it's only recently gotten good enough to use for real.
It now has the ability to install Windows Store applications, and its library of apps that you can search for and install has gotten quite big. My PowerShell script to get a new machine started quickly, with all the essential applications that I use as a .NET/web developer, is shown below.
# Environment Variables
[System.Environment]::SetEnvironmentVariable('DOTNET_CLI_TELEMETRY_OPTOUT', '1', [EnvironmentVariableTarget]::Machine)
# Windows Features
# List features: Get-WindowsOptionalFeature -Online
Enable-WindowsOptionalFeature -Online -FeatureName 'Containers' -All
Enable-WindowsOptionalFeature -Online -FeatureName 'Microsoft-Hyper-V' -All
Enable-WindowsOptionalFeature -Online -FeatureName 'VirtualMachinePlatform' -All
# Office
winget install --id '9MSPC6MP8FM4' # Microsoft Whiteboard
start "https://github.com/zufuliu/notepad2/releases"
# Utilities
winget install --id '7zip.7zip' --interactive --scope machine
winget install --id 'XP89DCGQ3K6VLD' # Microsoft Power Toys
winget install --id '9NJ3KMH29VGJ' # Enpass
winget install --id 'WinSCP.WinSCP' --interactive --scope machine
winget install --id '9WZDNCRFJ3PV' # Windows Scan
# Peripherals
winget install --id 'Elgato.ControlCenter' --interactive --scope machine
winget install --id 'Elgato.StreamDeck' --interactive --scope machine
# Browsers
winget install --id 'Google.Chrome' --interactive --scope machine
winget install --id 'Mozilla.Firefox' --interactive --scope machine
# Communication
winget install --id 'Microsoft.Teams' --interactive --scope machine
winget install --id 'OpenWhisperSystems.Signal' --interactive --scope machine
winget install --id '9WZDNCRDK3WP' # Slack
winget install --id '9WZDNCRFJ140' # Twitter
winget install --id 'XP99J3KP4XZ4VV' # Zoom
# Images
winget install --id '9N3SQK8PDS8G' # Screen To Gif
start https://www.getpaint.net/download.html # Paint.NET not yet available on winget
# Media
winget install --id 'XPDM1ZW6815MQM' # VLC
winget install --id 'plex.plexmediaplayer' --interactive --scope machine
winget install --id 'OBSProject.OBSStudio' --interactive --scope machine
winget install --id 'dev47apps.DroidCam' --interactive --scope machine
winget install --id 'XSplit.VCam' --interactive --scope machine
# Terminal
winget install --id 'Microsoft.WindowsTerminal' --interactive --scope machine
winget install --id 'Microsoft.Powershell' --interactive --scope machine
winget install --id 'JanDeDobbeleer.OhMyPosh' --interactive --scope machine
winget install --id '9P9TQF7MRM4R' # Windows Subsystem for Linux Preview
winget install --id '9NBLGGH4MSV6' # Ubuntu
winget install --id '9P804CRF0395' # Alpine
# Git
winget install --id 'Git.Git' --interactive --scope machine
winget install --id 'GitHub.GitLFS' --interactive --scope machine
winget install --id 'GitHub.cli' --interactive --scope machine
winget install --id 'Axosoft.GitKraken' --interactive --scope machine
# Azure
winget install --id 'Microsoft.AzureCLI' --interactive --scope machine
winget install --id 'Microsoft.AzureCosmosEmulator' --interactive --scope machine
winget install --id 'Microsoft.AzureDataStudio' --interactive --scope machine
winget install --id 'Microsoft.AzureStorageEmulator' --interactive --scope machine
winget install --id 'Microsoft.AzureStorageExplorer' --interactive --scope machine
# Tools
winget install --id 'Docker.DockerDesktop' --interactive --scope machine
winget install --id 'Microsoft.PowerBI' --interactive --scope machine
winget install --id 'Telerik.Fiddler' --interactive --scope machine
# IDEs
winget install --id 'Microsoft.VisualStudio.2022.Enterprise' --interactive --scope machine
winget install --id 'Microsoft.VisualStudioCode' --interactive --scope machine
# Frameworks
winget install --id 'OpenJS.NodeJS' --interactive --scope machine
winget install --id 'Microsoft.dotnet' --interactive --scope machine
A few things to note in my script. All apps with random-looking IDs like 9P9TQF7MRM4R are Windows Store applications. Secondly, for non-Windows Store applications I always use the --interactive flag because:
Don’t accept the defaults!
I never want a shortcut added to my desktop, extra toolbars or system tray icons, so never accept the defaults and always manually select the options you want in the installer. Maybe one day we can set the options we want from winget itself (we can dream!).
Finally, I set the scope of the installation to machine as opposed to user. I'm not sure which installers respect this setting, but I always want all applications available to whoever is using the machine.
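As an aside, if you'd rather not maintain a script like this by hand, winget can also snapshot the packages installed on one machine and replay them on another. A rough sketch (the file name is just an example; check winget export --help for the exact flags in your version):
# Snapshot the currently installed packages to a JSON file
winget export --output packages.json
# Re-install that list of packages on a new machine
winget import --import-file packages.json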
The Problem with C# 10 Implicit Usings
::: tip Update (2021-10-14) Mark Rendle made an interesting suggestion on Twitter after seeing this blog post. I've updated the post below with his code. :::
Yesterday I livestreamed myself upgrading a project to .NET 6 and C# 10. Along the way I tried a new C# 10 feature called implicit using statements and discovered that it isn't quite as straightforward as I first thought, and that you should probably not use it under certain circumstances.
Here is the live stream for those who are interested (I'm eager to get any feedback on how I'm presenting, as it's not a natural skill for me):
https://www.youtube.com/watch?v=FjnS4oF8K3E
What are Implicit Using Statements?
Adding the line below to your .csproj project file turns the feature on:
<ImplicitUsings>enable</ImplicitUsings>
Once enabled, depending on the type of project you have created, you'll have the following global using statements added to your project implicitly.
SDK | Default namespaces
---|---
Microsoft.NET.Sdk | System, System.Collections.Generic, System.IO, System.Linq, System.Net.Http, System.Threading, System.Threading.Tasks
Microsoft.NET.Sdk.Web | System.Net.Http.Json, Microsoft.AspNetCore.Builder, Microsoft.AspNetCore.Hosting, Microsoft.AspNetCore.Http, Microsoft.AspNetCore.Routing, Microsoft.Extensions.Configuration, Microsoft.Extensions.DependencyInjection, Microsoft.Extensions.Hosting, Microsoft.Extensions.Logging
Microsoft.NET.Sdk.Worker | Microsoft.Extensions.Configuration, Microsoft.Extensions.DependencyInjection, Microsoft.Extensions.Hosting, Microsoft.Extensions.Logging
Sounds great, so now you can delete a large portion of the using statements in your project, right? Well, not so fast. Here are some problems I discovered along the way.
Build Errors
I discovered the first problem while multi-targeting a class library project for a NuGet package. I had targeted .NET 4.7.2 as well as other target frameworks like .NET 6 for backwards compatibility, and found that System.Net.Http could not be found. It turns out I hadn't referenced that particular NuGet package for .NET 4.7.2 and was now getting a build error.
I could add the System.Net.Http NuGet package for .NET 4.7.2 on its own and that would solve the problem, but I really didn't like having the overhead of another unnecessary package reference. It also means extra work for me: maintaining the version number myself or relying on tools like Dependabot and Renovate to submit PRs that upgrade it for me.
<ItemGroup Label="Package References (.NET 4.7.2)" Condition="'$(TargetFramework)' == 'net472'">
<PackageReference Include="System.Net.Http" Version="4.3.4" />
</ItemGroup>
Mark Rendle on Twitter suggested another workaround after seeing this blog post. His suggestion was to remove the offending using statement in the .csproj file.
<ItemGroup>
<Using Remove="System.Net.Http" />
</ItemGroup>
This looks awfully strange to me. I'm not sure how I feel about adding or removing namespaces from C# project files yet; it doesn't seem very discoverable. So in this particular case, I'm happy to avoid implicit using statements for now.
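Incidentally, the same MSBuild item works in the other direction: you can declare your own project-wide usings from the project file. A small sketch (System.Text.Json is just an illustrative namespace choice, not something the SDK adds):
<ItemGroup>
  <Using Include="System.Text.Json" />
</ItemGroup>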
What Usings Were Added?
The second problem is trying to understand which usings have been added. As you can see from the table above, you could go and look in the documentation to figure this out, but that's slow and time consuming. Another alternative is to build your project and then look in its obj directory under:
My.Project\obj\Debug\net472\My.Project.GlobalUsings.g.cs
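That generated file is just a plain list of global using directives. For the Microsoft.NET.Sdk case it looks something like the snippet below (the exact contents depend on your SDK and settings):
// <auto-generated/>
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;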
That's not ideal either. I think Visual Studio should ideally show you these using statements somehow.
Conclusions
Implicit usings are enabled by default in the latest blank project templates shipped with .NET. Overall this is a cool feature that can remove the need for many duplicated lines of code in your project but I think there is a little too much magic going on here for my liking, so I think I'll be more careful about using this feature in the future.
Optimally Configuring Open Telemetry Tracing for ASP.NET Core
- Open Telemetry - Deep Dive into Open Telemetry for .NET
- Open Telemetry - Configuring Open Telemetry for ASP.NET Core
- Open Telemetry - Exporting Open Telemetry Data to Jaeger
- Open Telemetry - Optimally Configuring Open Telemetry Tracing for ASP.NET Core
Configuring tracing in Open Telemetry for ASP.NET Core can be a fairly simple process, but never accept the defaults! There is always more we can do to make improvements.
In this post, I'll show you how you can take the simplest setup for Open Telemetry tracing I showed you in 'Configuring Open Telemetry for ASP.NET Core' and move to a more fully featured example.
Simplest Setup
Here is a reminder of the simple setup I showed you in 'Configuring Open Telemetry for ASP.NET Core':
public virtual void ConfigureServices(
IServiceCollection services,
IWebHostEnvironment webHostEnvironment)
{
// ...omitted
services.AddOpenTelemetryTracing(
builder =>
{
builder
.SetResourceBuilder(ResourceBuilder
.CreateDefault()
.AddService(webHostEnvironment.ApplicationName))
.AddAspNetCoreInstrumentation();
if (webHostEnvironment.IsDevelopment())
{
builder.AddConsoleExporter(
options => options.Targets = ConsoleExporterOutputTargets.Debug);
}
});
}
And the tracing output you can expect for a request/response cycle:
Activity.Id: 00-dde96d459fee4144a83818e054e221b1-cac69896c1bcd14f-01
Activity.DisplayName: /favicon-32x32.png
Activity.Kind: Server
Activity.StartTime: 2021-02-01T10:28:25.4637044Z
Activity.Duration: 00:00:00.0086712
Activity.TagObjects:
http.host: localhost:5001
http.method: GET
http.path: /favicon-32x32.png
http.url: https://localhost:5001/favicon-32x32.png
http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36
http.status_code: 200
otel.status_code: UNSET
service.name: ApiTemplate
service.instance.id: defe9269-04f2-4b49-a05c-ebddf2112993
telemetry.sdk.name: opentelemetry
telemetry.sdk.language: dotnet
telemetry.sdk.version: 1.0.0.0
The Kitchen Sink Setup
This time we're going to set up a more advanced configuration. We're going to start off by adding a lot more information to the ResourceBuilder we pass to the SetResourceBuilder method above.
private static ResourceBuilder GetResourceBuilder(IWebHostEnvironment webHostEnvironment)
{
var version = Assembly
.GetExecutingAssembly()
.GetCustomAttribute<AssemblyFileVersionAttribute>()!
.Version;
return ResourceBuilder
.CreateEmpty()
.AddService(webHostEnvironment.ApplicationName, serviceVersion: version)
.AddAttributes(
new KeyValuePair<string, object>[]
{
new("deployment.environment", webHostEnvironment.EnvironmentName),
new("host.name", Environment.MachineName),
})
.AddEnvironmentVariableDetector();
}
This time we want to start with an empty resource builder by calling CreateEmpty. We then add the application name and version, which we can retrieve from the current assembly. You may have multiple versions of your application running over time and it's important to have a way to differentiate between them.
We then add a few attributes to every span, including the environment name and machine name. The attribute names here are standardised, as defined by the Open Telemetry specification. I decided that the environment the application is running in and the machine it's running on are important things to know when troubleshooting issues.
Finally, we add an environment variable detector, which we can use to add further attributes to every span using environment variables. ResourceBuilder.CreateDefault already included this in the simple example above, but since we started with an empty resource builder we need to add it explicitly. Here is a PowerShell example of how you can add additional attributes to every span using the OTEL_RESOURCE_ATTRIBUTES environment variable:
$env:OTEL_RESOURCE_ATTRIBUTES = 'key1=value1,key2=value2'
We can now plug GetResourceBuilder into our code below and add a few more goodies:
public virtual void ConfigureServices(
IServiceCollection services,
IWebHostEnvironment webHostEnvironment)
{
// ...omitted
services.AddOpenTelemetryTracing(
builder =>
{
builder
.SetResourceBuilder(GetResourceBuilder(webHostEnvironment))
.AddAspNetCoreInstrumentation(
options =>
{
options.Enrich = Enrich;
options.RecordException = true;
});
if (webHostEnvironment.IsDevelopment())
{
builder.AddConsoleExporter(
options => options.Targets = ConsoleExporterOutputTargets.Debug);
}
});
}
private static void Enrich(Activity activity, string eventName, object obj)
{
if (obj is HttpRequest request)
{
var context = request.HttpContext;
activity.AddTag("http.flavor", GetHttpFlavour(request.Protocol));
activity.AddTag("http.scheme", request.Scheme);
activity.AddTag("http.client_ip", context.Connection.RemoteIpAddress);
activity.AddTag("http.request_content_length", request.ContentLength);
activity.AddTag("http.request_content_type", request.ContentType);
var user = context.User;
if (user.Identity?.Name is not null)
{
activity.AddTag("enduser.id", user.Identity.Name);
activity.AddTag(
"enduser.scope",
string.Join(',', user.Claims.Select(x => x.Value)));
}
}
else if (obj is HttpResponse response)
{
activity.AddTag("http.response_content_length", response.ContentLength);
activity.AddTag("http.response_content_type", response.ContentType);
}
}
public static string GetHttpFlavour(string protocol)
{
if (HttpProtocol.IsHttp10(protocol))
{
return "1.0";
}
else if (HttpProtocol.IsHttp11(protocol))
{
return "1.1";
}
else if (HttpProtocol.IsHttp2(protocol))
{
return "2.0";
}
else if (HttpProtocol.IsHttp3(protocol))
{
return "3.0";
}
throw new InvalidOperationException($"Protocol {protocol} not recognised.");
}
Next, we configure AddAspNetCoreInstrumentation to enrich the spans with additional information about the current request, response and the user (if any) using standardised attributes. Finally, we record details of exceptions from our controllers which would otherwise be lost. This outputs the following:
Activity.Id: 00-3d0f70e71a8e6e5e87f156bdcf94b8c9-ccdd8d23a2e3ba93-01
Activity.ActivitySourceName: OpenTelemetry.Instrumentation.AspNetCore
Activity.DisplayName: /favicon-32x32.png
Activity.Kind: Server
Activity.StartTime: 2022-02-03T10:52:47.6513334Z
Activity.Duration: 00:00:00.0077181
Activity.TagObjects:
http.host: localhost:5001
http.method: GET
http.target: /favicon-32x32.png
http.url: https://localhost:5001/favicon-32x32.png
http.user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36
http.flavor: 2.0
http.scheme: https
http.client_ip: ::1
http.request_content_length:
http.request_content_type:
http.status_code: 200
otel.status_code: UNSET
http.response_content_length: 628
http.response_content_type: image/png
deployment.environment: Development
host.name: REHANS-MACHINE
service.name: ApiTemplate
service.version: 5.1.1.0
service.instance.id: 4e364d08-4965-4d83-8afa-70769074ab0d
This time around you can see we've collected a lot more information. Now, this may not be 'optimal' for your application. Collecting additional information comes at a performance and monetary cost, so it's up to you to judge what extra information is useful to you, but I think most of the above is pretty essential, basic information that would be valuable while debugging any issue.
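One last tweak worth considering: the console exporter is only useful in development. In production you'll want to export spans to a backend like Jaeger, which I covered earlier in this series. As a rough sketch (assuming the OpenTelemetry.Exporter.Jaeger package and a Jaeger agent running locally; adjust the host and port for your setup), the else branch of the setup above could look like this:
else
{
    // Ship spans to a Jaeger agent instead of the console in production.
    builder.AddJaegerExporter(
        options =>
        {
            options.AgentHost = "localhost";
            options.AgentPort = 6831;
        });
}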
Wrapping Up
In this post, I showed a simple example of how you can configure Open Telemetry tracing and then went on to show a more advanced, real-world example.
Open Telemetry is gaining popularity and traction, with even GitHub adopting it. So far in this blog series we've only discussed the basics of Open Telemetry, and tracing in particular. When Open Telemetry metrics and logs come out of alpha/beta, I'll write another post discussing how to configure those.
Live Streaming .NET
I started live streaming my software development learnings simultaneously on YouTube and Twitch a few months ago. I'm by no means a professional and only have a couple of hundred subscribers and a few thousand views at this time, but I've had a lot of fun learning and sharing my learnings with the world.
After streaming for a while, I discovered that I hadn't been writing much code at work, as my role at Microsoft has evolved into more project management style work and leading a team of developers. So it's been rather liberating to force myself to allocate an hour to a live stream where I can learn something new or update one of my many open source projects with some new feature. I've realised that I do indeed really like writing, thinking and talking about code, and I want to continue to do more of it going forward.
I put together a few YouTube playlists of the live streams I've done so far.
.NET 6 and C# 10
This is where it all started, just before the release of .NET 6 and C# 10. There were a lot of hidden and not so obvious features in this release that I haven't seen many blog posts or videos cover. I've been gathering these gems for the last year and talk about each one at length in this YouTube playlist.
https://www.youtube.com/playlist?list=PLUAZAVKVXTmQEF67lddyErymHlBDaPpjU
Twitter Snowflake IDs
Twitter uses Snowflake IDs to generate unique identifiers for tweets. In this YouTube playlist I deep dive into how Twitter does this and use ASP.NET Core minimal APIs to build an API around it. I think Twitter Snowflake IDs are a really cool way of generating clean-looking, globally unique IDs, and it's worth looking into them as an alternative to ugly GUIDs; there's a small sketch of the bit layout after the playlist link below.
https://www.youtube.com/playlist?list=PLUAZAVKVXTmTS0Z1z-fmHv0jaxbF_Tys-
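For a flavour of what the playlist digs into: a Snowflake ID packs a timestamp, a machine ID and a sequence counter into a single 64-bit integer, which makes the IDs roughly time-sortable. A minimal sketch, assuming Twitter's original 41/10/12 bit split and a custom epoch (the class and method names here are mine, not Twitter's):
public static class SnowflakeId
{
    // 41 bits of milliseconds since a custom epoch, then 10 bits of
    // machine ID (0-1023), then 12 bits of sequence counter (0-4095).
    public static long Create(long millisecondsSinceEpoch, long machineId, long sequence) =>
        (millisecondsSinceEpoch << 22) | (machineId << 12) | sequence;
}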
Pulumi, Azure and .NET
Pulumi is a tool used to build, deploy, and manage your cloud applications using pretty much any language on any cloud. I used Pulumi with .NET to play around with Azure Container Apps, which I found to be promising but very early in its development. I'm also now looking into creating an Azure Kubernetes Service cluster using Pulumi.
https://www.youtube.com/playlist?list=PLUAZAVKVXTmTAb2Vko40UMnnRLLW51UhS
Feedback
Since I'm new to this, I'd love to hear your feedback and suggestions. Oh, and don't forget to subscribe and smash that like button! Sorry, it's obligatory to say that once you post videos to YouTube. I can't get over how silly it sounds when I say it.
I Was Awarded as an Open UK Honouree
Generally speaking, I'm not the sort of person who receives awards or prizes, so I was rather surprised when, out of the blue, I was contacted by Open UK to receive a medal signifying my inclusion in their 2022 honours list.
I had never heard of Open UK before, so I was initially very sceptical, but it seems to be a legitimate organisation that is funded by the likes of arm, GitHub, Google, Huawei, Microsoft and Red Hat, and partners with several open source organisations including the Linux Foundation.
I did a bunch of reading before I accepted any award to make sure they weren't doing anything shady, and found that they seem to be lobbying the UK government in favour of open source, which is fine by me!
The 2022 #openukgennext #openukhonouree list is made up of individuals with broad ranging experience in Open Technology identified as being ones to watch in the UK!
They hail from all walks of Open Source Software, Open Hardware and Open Data. This is the list of those to watch for the future of Open Technology. All are earmarked as leading the next generation of Open Technology whether through social media, their jobs, community contributions, policy or in education.
The British Honours system is something very specific to the UK and a means of rewarding an individual for their achievement or service. Medals are used within this system to recognise an activity or long or valuable service.
Congratulations to all of those listed. Enjoy the recognition of our New Year’s Honour from your peers at OpenUK and we look forward to seeing all that you will achieve in Open Technology through 2022 and beyond.
This is the second year the award has been handed out. I'm not certain what the selection criteria were, apart from the fact that I'm a developer in the UK, but I do have a fairly active GitHub profile, so I suspect that is how they found me.
Overall, I'm very thankful that I was recognised (albeit without knowing exactly why) and am happy to add the Open UK medal to my small collection of Microsoft MVP awards from before I became a Microsoft employee.
On the Etiquette of Pull Request Comments
Picture the scene. A hard-working newbie developer is mashing their keyboard for hours trying to get something working smoothly. Eventually, perseverance, and maybe more than a little help from StackOverflow, leads to success. The developer carefully commits their code and crafts a pull request, ready for the world to bask in its glory.
The next day the developer eagerly opens up their pull request and sees a dozen or so comments that all look something like this:
nit: Fix this.
You may look at this comment and not see any problem with it. I'd like to suggest that it's an example of somewhat bad etiquette. Let's discuss the etiquette of pull request comments.
Being Short
I have a limited number of keystrokes left before I die. Are my fellow developers not worthy of a share of those strokes? We're all in a rush to get something done, but spending the time to write a full comment is not a waste of time; it shows that you care!
Fix this.
What needs to be fixed? How does it need to be fixed? Why does it need to be fixed? Answering the what, why and how is important to getting your point across.
In cases like this, saving yourself some keystrokes by assuming knowledge on the part of the developer submitting the PR is often a mistake. By not fully explaining yourself, you increase the chances of further questions being asked, or even of direct contact via messaging systems like Slack or Microsoft Teams. Since PR reviews are asynchronous in nature, this can increase the time it takes to review a PR by hours or even days.
Code Style
If you're writing pull request comments that are about code style, consider using a tool like StyleCop or Prettier to automate this. It also ends any conflict in a team about the style of code the team should adopt and makes everyone's code look the same for optimal readability. Thus you can avoid having comments like this in your pull requests:
Prefix your fields with _.
Jargon
The first time I saw 'nit:' in a PR review, I had to google its meaning. Evidently, it means that the comment is a minor point, evoking the spectre of a blood-sucking parasite. It saves the writer around seven keystrokes but always evokes a mild sense of disgust in me personally. Why not save someone that Google search (and the disgust) and just write 'Minor point:'?
nit: This can be done better.
Passive Aggressive
The way you word your PR comments can come across as passive aggressive if you're not careful. Is the commenter below suggesting that I hadn't considered the use of 'X'? What if I already did and discounted it?
Use X here.
Below are a couple of better ways to phrase the above comment. They assume nothing about the developer's intent and simply ask a question.
Have you considered doing X here? Have you thought about X? Would X help here?
Emote 😆
Text is an imperfect, low-bandwidth means of communication. One way to increase that bandwidth is to use emoji to get an emotional point across along with your technical one. It also brings a sense of fun to what can otherwise come across as something quite negative, since you are, after all, picking out all the mistakes you can spot in someone's hard work.
Looks like there is a 🐛 crawling on this line.
Positive Comments
Not every comment has to be a negative one picking out a bug or mistake. It's useful to call out a positive change too. Other reviewers in the team may see it and pick up on the practice.
🚀 This is awesome, we need to do this elsewhere.
Bring Evidence
I sometimes see comments to the effect of the one below. Why is 'X' better? Says who?
X is a better way to do this.
Now imagine there was a link below to a blog post providing evidence as to why 'X' is better. We'll even use better language and the odd emoji:
Have you considered using X, it's 🔥?
Teachable Moments
Every PR comment is a teachable moment where you have the opportunity to take a little extra time and explain why you are requesting a particular change. As I've said above, take the time to add a link to a relevant article or blog post, but I've found that sometimes writing a paragraph or two can also help.
One thing to watch out for, though, is that PR comments don't scale! If you find yourself making the same comment on a second person's PR, it's time to think about writing a blog post (as I've done in the past), a wiki entry or some documentation somewhere, and sharing that with your team.