
Writing Code while Asleep


This is a quick post about something I tend to do quite often that has really helped me be more productive and I think has helped the wider developer community in some small way too. It's something that I haven't seen other developers I've worked with do, so I thought I'd call it out here.

The Problem Statement

When I come into work for a day of developing software, I generally get into the zone (if I don't have any distractions...) and am hacking away at something. Sometimes (quite often really but I'm loath to admit it) I get stuck. I'm probably working with some new language, framework or technology and I can't figure out how to do a thing.

At this point I might try a few things that pop into my head, go to definition on some code to read the comments, even read the documentation if there is any. These are all good strategies to solve a problem. At what point though do you throw your hands up in despair and give up? One hour later? Maybe four? Maybe a couple of days? I've done all three in the past!

It might be chance, I don't know but usually when I get to this point, it's lunch time or time to go home and spend time with the family. What do you do now? You're stuck and you're going to have to come back to your desk at some point and bang your head against the problem for a second day. Maybe a clear and fresh mind can solve the problem? It's scary and maybe a bit sad how often I think of a solution to a problem I was having while walking home or lying in bed.

But what if even that does not help? The thought of having to go to work the next day where you will have to try again to solve a seemingly unsolvable problem can be quite depressing sometimes.

The Hive Mind can Help

The solution is to ask the kind people of the internet for help! Then go home and get some sleep. The chances are that come the morning, somebody has solved your problem for you! This is so obvious that I feel kind of stupid for even writing it, but in my experience when people hit a problem, they don't always ask for help. In fact, I have a lot of anecdotal evidence for this:

Stack Overflow

Ask your question on Stack Overflow. The undisputed number one resource for every developer.

I have reviewed a lot of CVs in the last two years and there is a growing trend to list your GitHub and Stack Overflow profiles on there. Even if your profile is not listed, I can sometimes find it anyway (the internet is a stalker's dream).

In all the Stack Overflow profiles I've seen, there is a worrying trend. Very few people actually ask many Stack Overflow questions! The thing is, asking questions is the easiest way to get points and build a very nice Stack Overflow profile too, so it's silly not to do it.

In my three years actively using Stack Overflow (I was a lurker for a while), I've asked 191 questions and answered 143. I'm a bit behind in contributing but even my questions will help people as there will be others who had the same question and got an answer quickly because I had already asked it. In total, this has netted me almost 9,000 imaginary internet reputation, 7 gold badges, 86 silver ones and 152 bronze. Even if I had not answered any questions, and only asked them, I think I would have had a healthy reputation score of a few thousand.

It's amazing how quickly you can sometimes get answers to your questions using Stack Overflow too, the fastest I've seen is literally 30 seconds! It's such an amazing resource, you literally have people sitting there waiting for you to ask a question so they can get imaginary internet points!

I get around 5,000 visitors to this blog every week at the moment, and it's surprising to me how many people contact me directly asking for help. The emails are always the same: "I read your blog post and I'm working on project X, I need help urgently because I have some deadline". No supporting code, just a vague hint of what the problem might be, as if I can divine the solution through some kind of telepathy. I help these people where I can but I've started to feel I'm hindering them by doing so; they need to learn to use Stack Overflow just like everybody else, so that's where I've started pointing them lately.

GitHub and Forums

If you're dealing with a project that uses GitHub or a forum of some kind, use it! Find an existing GitHub issue or forum post and add a comment to it or open a new issue if one cannot be found. One or more developers will get a notification of your problem and they might even point you in the right direction. Once again, it's amazing how quickly you can get a reply sometimes.

Once again, I've seen a lot of GitHub profiles and very few people use the issues section to ask questions. You often have the developers who literally wrote the code that you're using answer your question. What can be better than that?

Conclusions

This post sounds silly but developers don't generally ask for help for some reason. I used to work with a great junior developer who started with only a little VBScript knowledge and would ask for help whenever he needed it, sometimes a dozen or more times a day. It was occasionally hard to get my own work done but it was worth it because after a while he started to get really good, and then we had two minds to get work done and come up with ideas instead of one. Ask for help when you need it!


The Dotnet Watch Tool Revisited


I talked about using the dotnet watch tool with Visual Studio some time ago. Since then, a lot has changed with the Visual Studio tooling and .NET Core 2.0, which broke the use of dotnet watch in Visual Studio, hence this post.

The dotnet watch tool is a file watcher for .NET that restarts the application when changes in the source code are detected. This is super useful when you just want to hack away at code and see the changes instantly when you refresh your browser. It increases productivity by tightening the inner loop: the time taken to write some code and then see its effects. I also like using this tool because it opens a console window which lets you see all of your logs flashing by.

Dotnet Watch Run Console

::: warning In both cases you have to be careful to start the application by clicking Debug -> Start Without Debugging or hitting the ||CTRL+F5|| keyboard shortcut. :::

.NET Core 2.0 vs 2.1

Setting up the 'dotnet watch' tool is as easy as installing the Microsoft.DotNet.Watcher.Tools NuGet package if you are using .NET Core 2.0. If you are using .NET Core 2.1 or above, this tool comes pre-installed in the .NET Core SDK.

Now, using PowerShell, you can navigate to your project folder, run the dotnet watch run command and you're set. But using the command line is a bit lame if you are using Visual Studio; we can do one better.

launchSettings.json

The launchSettings.json file is used by Visual Studio to launch your application and controls what happens when you hit ||F5||. It turns out you can add additional launch settings here to launch the application using the dotnet watch tool. You can do so by adding a new launch configuration as I've done at the bottom of this file:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:5000/",
      "sslPort": 44300
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44300"
      }
    },
    "dotnet run": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44300"
      }
    },
    // dotnet watch run must be run without the Visual Studio debugger using CTRL+F5.
    "dotnet watch run": {
      "commandName": "Executable",
      "executablePath": "dotnet",
      "workingDirectory": "$(ProjectDir)",
      "commandLineArgs": "watch run",
      "launchBrowser": true,
      "launchUrl": "http://localhost:5000/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        "ASPNETCORE_HTTPS_PORT": "44300"
      }
    }
  }
}

Notice that I renamed the second launch profile (which already exists in the default template) to dotnet run because that's actually the command it's running and makes more sense.

The dotnet watch launch profile is running the dotnet watch run command as an executable and using the current working directory of the project. Now we can see the new launch profile in the Visual Studio toolbar like so:

Dotnet Watch in the Visual Studio Toolbar

.NET Boxed Templates

I have updated the .NET Boxed family of project templates with this feature built in. Happy coding!

.NET Boxed


.NET Boxed is a set of project templates with batteries included, providing the minimum amount of code required to get you going faster. Right now it includes API and GraphQL project templates.

ASP.NET Core API Boxed

The default ASP.NET Core API Boxed options will give you an API with Swagger, ASP.NET Core versioning, HTTPS and much more enabled right out of the box. You can totally turn any of that off if you want to; the point is that it's up to you.

ASP.NET Core API Boxed Preview

ASP.NET Core GraphQL Boxed

If you haven't read about or learned GraphQL yet, I really suggest you go and follow their short online tutorial. It's got some distinct advantages over standard RESTful APIs (and some disadvantages, but in my opinion the advantages carry more weight).

Once you've done that, the next thing I suggest you do is to create a project from the ASP.NET Core GraphQL Boxed project template. It implements the GraphQL specification using GraphQL.NET and a few other NuGet packages. It also comes with a really cool GraphQL playground, so you can practice writing queries, mutations and subscriptions.

ASP.NET Core GraphQL Boxed Preview

This is the only GraphQL project template that I'm aware of at the time of writing and it's pretty fully featured with sample queries, mutations and subscriptions.

ASP.NET Core Boilerplate

.NET Boxed used to be called ASP.NET Core Boilerplate. That name was kind of forgettable and there was another great project that had a very similar name. I put off renaming for a long time because it was too much work but I finally relented and got it done.

In the end I think it was for the best. The new .NET Boxed branding and logo are much better and I've opened it up to .NET project templates in general, instead of just ASP.NET Core project templates.

Thanks to Jon Galloway and Jason Follas for helping to work out the branding.

How can I get it?

  1. Install the latest .NET Core SDK.
  2. Run dotnet new --install "Boxed.Templates::*" to install the project template.
  3. Run dotnet new api --help to see how to select the features of the project.
  4. Run dotnet new api --name "MyTemplate" along with any other custom options to create a project from the template.

Boxed Updates

There are new features and improvements planned on the GitHub projects tab. ASP.NET Core 2.1 is coming out soon, so look out for updates which you can see in the GitHub releases tab when they go live.

Migrating to Entity Framework Core Seed Data


I was already using Entity Framework Core 2.0 and had written some custom code to enter some static seed data to certain tables. Entity Framework 2.1 added support for data seeding which manages your seed data for you and adds them to your Entity Framework Core migrations.

The problem is that if you've already got data in your tables, when you add a migration containing seed data, you will get exceptions thrown as Entity Framework tries to insert data that is already there. Entity Framework is naive, it assumes that it is the only thing editing the database.

Migrating to using data seeding requires a few extra steps that aren't documented anywhere and weren't obvious to me. Let's walk through an example, assuming we have the following model and database context:

public class Car
{
    public int CarId { get; set; }

    public string Make { get; set; }

    public string Model { get; set; }
}

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions options)
        : base(options)
    {
    }

    public DbSet<Car> Cars { get; set; }
}

We can add some seed data by overriding the OnModelCreating method on our database context class. You need to make sure your seed data matches the existing data in your database.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Car>().HasData(
        new Car() { CarId = 1, Make = "Ferrari", Model = "F40" },
        new Car() { CarId = 2, Make = "Ferrari", Model = "F50" },
        new Car() { CarId = 3, Make = "Lamborghini", Model = "Countach" });
}

If we run the following command to add a database migration:

dotnet ef migrations add AddSeedData

The generated code looks like this:

public partial class AddSeedData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 1, "Ferrari", "F40" });

        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 2, "Ferrari", "F50" });

        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 3, "Lamborghini", "Countach" });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 1);

        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 2);

        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 3);
    }
}

This is what you need to do:

  1. Comment out all of the InsertData lines in the generated migration (a sketch of this follows the list below).
  2. Run the migration on your database containing the existing seed data. This is effectively doing a null operation but records the fact that the AddSeedData migration has been run.
  3. Uncomment the InsertData lines in the generated migration so that if you run the migrations on a fresh database, seed data still gets added. For your existing databases, since the migration has already been run on them, they will not add the seed data twice.
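
To make step 1 concrete, here is a sketch of what the temporarily commented-out Up method looks like (the Down method can be left untouched):

protected override void Up(MigrationBuilder migrationBuilder)
{
    // Temporarily commented out so that applying this migration to an existing
    // database that already contains the seed data is effectively a no-op.
    ////migrationBuilder.InsertData(
    ////    table: "Cars",
    ////    columns: new[] { "CarId", "Make", "Model" },
    ////    values: new object[] { 1, "Ferrari", "F40" });

    // ...and likewise for the other InsertData calls.
}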

That's it, hope that helps someone.

Optimally Configuring Entity Framework Core


Let's talk about configuring your Entity Framework Core DbContext for a moment. There are several options you might want to consider turning on. This is how I configure mine in most micro services:

public virtual void ConfigureServices(IServiceCollection services) =>
    services.AddDbContextPool<MyDbContext>(
        options => options
            .UseSqlServer(
                this.databaseSettings.ConnectionString,
                x => x.EnableRetryOnFailure())
            .ConfigureWarnings(x => x.Throw(RelationalEventId.QueryClientEvaluationWarning))
            .EnableSensitiveDataLogging(this.hostingEnvironment.IsDevelopment())
            .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking))
    ...

EnableRetryOnFailure

EnableRetryOnFailure enables retries for transient exceptions. So what is a transient exception? Entity Framework Core has a SqlServerTransientExceptionDetector class that defines that. It turns out that any SqlException with a very specific list of SQL error codes or TimeoutExceptions are considered transient exceptions and thus, safe to retry.
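
EnableRetryOnFailure also has an overload that lets you tune the retry behaviour. The values below are purely illustrative and slot into the UseSqlServer call shown above:

.UseSqlServer(
    this.databaseSettings.ConnectionString,
    x => x.EnableRetryOnFailure(
        maxRetryCount: 5,                        // Give up after five retries.
        maxRetryDelay: TimeSpan.FromSeconds(30), // Cap the delay between retries.
        errorNumbersToAdd: null))                // Extra SQL error numbers to treat as transient.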

ConfigureWarnings

By default, Entity Framework Core will log warnings when it can't translate your C# LINQ code to SQL and it will evaluate parts of your LINQ query it does not understand in-memory. This is usually catastrophic for performance because this usually means that EF Core will retrieve a huge amount of data from the database and then filter it down in-memory.

Luckily in EF Core 2.1, they added support to translate the GroupBy LINQ method to SQL. However, I found out yesterday that you have to write Where clauses after GroupBy for this to work. If you write the Where clause before your GroupBy, EF Core will evaluate your GroupBy in-memory in the client instead of in SQL. The key is to know when this is happening.
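
To make the ordering concrete, here is a minimal sketch against a hypothetical Cars DbSet (the entity and grouping are illustrative). Per the behaviour described above, the first query shape can be translated to SQL in EF Core 2.1, while the second causes the GroupBy to be evaluated in-memory:

// Where after GroupBy: translatable to GROUP BY ... HAVING in SQL.
var translated = context.Cars
    .GroupBy(x => x.Make)
    .Where(g => g.Count() > 1)
    .Select(g => new { Make = g.Key, Count = g.Count() })
    .ToList();

// Where before GroupBy: the GroupBy falls back to client evaluation, which the
// QueryClientEvaluationWarning configuration turns into an exception instead of a silent slowdown.
var clientEvaluated = context.Cars
    .Where(x => x.Model != null)
    .GroupBy(x => x.Make)
    .Select(g => new { Make = g.Key, Count = g.Count() })
    .ToList();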

One thing you can do is throw an exception when you are evaluating a query in-memory instead of in SQL. That is what Throw on QueryClientEvaluationWarning is doing.

EnableSensitiveDataLogging

EnableSensitiveDataLogging enables application data to be included in exception messages. This can include SQL, secrets and other sensitive information, so I am only doing it when running in the development environment. It's useful to see warnings and errors coming from Entity Framework Core in the console window when I am debugging my application using the Kestrel webserver directly, instead of with IIS Express.

UseQueryTrackingBehavior

If you are building an ASP.NET Core API, each request creates a new instance of your DbContext which is then disposed at the end of the request. Query tracking keeps entities in memory for the lifetime of your DbContext so that if they are updated, any changes can be saved; this is a waste of resources if you are just going to throw away the DbContext at the end of the request. By passing NoTracking to the UseQueryTrackingBehavior method, you can turn off this default behaviour. Note that if you are performing updates to your entities, don't use this option; it's only for APIs that perform reads and/or inserts.
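
If most of your queries are reads but the odd one does need to update entities, you can keep NoTracking as the default and opt back in per query with EF Core's AsTracking extension method. A minimal sketch, assuming an async action and an illustrative Cars DbSet:

// Reads use the NoTracking default configured above.
var cars = await context.Cars.ToListAsync();

// The one query that needs to save changes opts back in to change tracking.
var car = await context.Cars
    .AsTracking()
    .SingleAsync(x => x.CarId == 1);
car.Model = "F40 LM";
await context.SaveChangesAsync();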

Connection Strings

You can also pass certain settings to connection strings. These are specific to the database you are using, here I'm talking about SQL Server. Here is an example of a connection string:

Data Source=localhost;Initial Catalog=MyDatabase;Integrated Security=True;Min Pool Size=3;Application Name=MyApplication

Application Name

SQL Server can log or profile queries that are running through it. If you set the application name, you can more easily identify the applications that may be causing problems in your database with slow or failing queries.

Min Pool Size

Creating database connections is an expensive process that takes time. You can specify that you want a minimum pool of connections that should be created and kept open for the lifetime of the application. These are then reused for each database call. Ideally, you need to performance test with different values and see what works for you. Failing that you need to know how many concurrent connections you want to support at any one time.
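
If you prefer to build the connection string in code rather than hand-crafting it in configuration, SqlConnectionStringBuilder keeps these settings strongly typed. This sketch mirrors the example connection string above:

using System.Data.SqlClient;

var builder = new SqlConnectionStringBuilder()
{
    DataSource = "localhost",
    InitialCatalog = "MyDatabase",
    IntegratedSecurity = true,
    MinPoolSize = 3,                   // Keep at least three connections open and ready.
    ApplicationName = "MyApplication", // Identifies this application in SQL Server logs and profiles.
};
string connectionString = builder.ConnectionString;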

The End...

It took me a while to craft this setup, I hope you find it useful. You can find out more by reading the excellent Entity Framework Core docs.

ASP.NET Core Hidden Gem - QueryHelpers


I discovered a hidden gem in ASP.NET Core a couple of weeks ago called QueryHelpers that can help to build up and parse URLs. Here's how you can use it to build a URL using the AddQueryString method:

var queryArguments = new Dictionary<string, string>()
{
    { "static-argument", "foo" },
};

if (someFlagIsEnabled)
{
    queryArguments.Add("dynamic-argument", "bar");
}

string url = QueryHelpers.AddQueryString("/example/path", queryArguments);

Notice that there are no question marks or ampersands in sight. Where this really shines is when you want to add multiple arguments and then need to write code to work out whether to add a question mark or ampersand.

It's also worth noting that the values of the query arguments are URL encoded for you too. The type also has a ParseQuery method to parse query strings but that's less useful to us as ASP.NET Core controllers do that for you.

Finally, .NET also has a type called UriBuilder that you should know about. It's more geared towards building up a full URL, rather than a relative URL as I'm doing above. It has a Query property that you can use to set the query string but it's only of type string, so much less useful than QueryHelpers.AddQueryString.
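
For comparison, here is a quick sketch of ParseQuery and UriBuilder side by side (the URLs are made up):

// ParseQuery gives you a dictionary of query keys to values.
Dictionary<string, StringValues> arguments = QueryHelpers.ParseQuery("?static-argument=foo&dynamic-argument=bar");
StringValues staticArgument = arguments["static-argument"];

// UriBuilder works with full URLs and its Query property is just a string.
var uriBuilder = new UriBuilder("http://example.com/example/path")
{
    Query = "static-argument=foo",
};
Uri fullUrl = uriBuilder.Uri;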

Optimally Configuring ASP.NET Core HttpClientFactory


::: warning Update (20 August 2018) Steve Gordon kindly suggested a further optimisation to use ConfigureHttpClient. I've updated the code below to reflect this. :::

In this post, I'm going to show how to optimally configure a HttpClient using the new HttpClientFactory API in ASP.NET Core 2.1. If you haven't already I recommend reading Steve Gordon's series of blog posts on the subject since this post builds on that knowledge. You should also read his post about Correlation ID's as I'm making use of that library in this post. The main aims of the code in this post are to:

  1. Use the HttpClientFactory typed client. I don't know why the ASP.NET team bothered to provide three ways to register a client; the typed client is the one to use. It provides type safety and removes the need for magic strings.
  2. Enable GZIP decompression of responses for better performance. Interestingly, the HttpClient and ASP.NET Core do not support compression of GZIP requests, only responses. Some searching online a while ago suggested that this optimisation is not very common at all, which I thought was pretty unbelievable at the time.
  3. The HttpClient should time out after the server does not respond after a set amount of time.
  4. The HttpClient should retry requests which fail due to transient errors.
  5. The HttpClient should stop performing new requests for a period of time when a consecutive number of requests fail using the circuit breaker pattern. Failing fast in this way helps to protect an API or database that may be under high load and means the client gets a failed response quickly rather than waiting for a time-out.
  6. The URL, time-out, retry and circuit breaker settings should be configurable from the appsettings.json file.
  7. The HttpClient should send a User-Agent HTTP header telling the server the name and version of the calling application. If the server is logging this information, this can be useful for debugging purposes.
  8. The X-Correlation-ID HTTP header from the response should be passed on to the request made using the HttpClient. This would make it easy to correlate a request across multiple applications.

Usage Example

It doesn't really matter what the typed client HttpClient looks like, that's not what we're talking about but I include it for context.

public interface IRocketClient
{
    Task<TakeoffStatus> GetStatus(bool working);
}

public class RocketClient : IRocketClient
{
    private readonly HttpClient httpClient;

    public RocketClient(HttpClient httpClient) => this.httpClient = httpClient;

    public async Task<TakeoffStatus> GetStatus(bool working)
    {
        var response = await this.httpClient.GetAsync(working ? "status-working" : "status-failing");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<TakeoffStatus>();
    }
}

Here is how we register the typed client above with our dependency injection container. All of the meat lives in these three methods. AddCorrelationId adds a middleware written by Steve Gordon to handle Correlation ID's. AddPolicies registers a policy registry and the policies themselves (a policy is Polly's way of specifying how you want to deal with errors e.g. using retries, the circuit breaker pattern etc.). Finally, we add the typed HttpClient but with configuration options, so we can configure its settings from appsettings.json.

public virtual void ConfigureServices(IServiceCollection services) =>
    services
        .AddCorrelationId() // Add Correlation ID support to ASP.NET Core
        .AddPolicies(this.configuration) // Setup Polly policies.
        .AddHttpClient<IRocketClient, RocketClient, RocketClientOptions>(this.configuration, "RocketClient")
        ...;

The appsettings.json file below contains the base address for the endpoint we want to connect to, a time-out value of thirty seconds is used if the server is taking too long to respond and policy settings for retries and the circuit breaker.

The retry settings state that after a first failed request, another three attempts will be made (this means you can get up to four requests). There will be an exponentially longer back-off or delay between each request. The first retry request will occur after two seconds, the second after another four seconds and the third occurs after another eight seconds.

The circuit breaker states that it will allow 12 consecutive failed requests before breaking the circuit and throwing BrokenCircuitException for every attempted request. The circuit will be broken for thirty seconds.

Generally, my advice is when allowing a high number of exceptions before breaking, use a longer duration of break. When allowing a lower number of exceptions before breaking, keep the duration of break small. Another possibility I've not tried is to combine these two scenarios, so you have two circuit breakers. The circuit breaker with the lower limit would kick in first but only break the circuit for a short time, if exceptions are no longer thrown, then things go back to normal quickly. If exceptions continue to be thrown, then the other circuit breaker with a longer duration of break would kick in and the circuit would be broken for a longer period of time.
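
If you did want to try the two-circuit-breaker idea, a sketch might look like the following (untested, and the thresholds are made up). Polly lets you wrap policies so the sensitive breaker trips first and recovers quickly, while the tolerant breaker only trips, and stays broken for longer, if failures keep coming:

// Trips quickly but only breaks for a short time.
var quickCircuitBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(5));

// Tolerates more failures but breaks for much longer.
var slowCircuitBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 12,
        durationOfBreak: TimeSpan.FromMinutes(1));

// Wrap the two so both apply to every request, then register the combined policy as usual.
var combinedCircuitBreaker = Policy.WrapAsync(slowCircuitBreaker, quickCircuitBreaker);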

You can of course play with these numbers, what you set them to will depend on your application.

{
  "RocketClient": {
    "BaseAddress": "http://example.com",
    "Timeout": "00:00:30"
  },
  "Policies": {
    "HttpCircuitBreaker": {
      "DurationOfBreak": "00:00:30",
      "ExceptionsAllowedBeforeBreaking": 12
    },
    "HttpRetry": {
      "BackoffPower": 2,
      "Count": 3
    }
  }
}

Configuring Polly Policies

Below is the implementation of AddPolicies. It starts by setting up and reading a configuration section of type PolicyOptions from our appsettings.json file. It then adds the PolicyRegistry, which is where Polly stores its policies. Finally, we add a retry and a circuit breaker policy and configure them using the settings we've read from the PolicyOptions.

public static class ServiceCollectionExtensions
{
    private const string PoliciesConfigurationSectionName = "Policies";

    public static IServiceCollection AddPolicies(
        this IServiceCollection services,
        IConfiguration configuration,
        string configurationSectionName = PoliciesConfigurationSectionName)
    {
        var section = configuration.GetSection(configurationSectionName);
        services.Configure<PolicyOptions>(section);
        var policyOptions = section.Get<PolicyOptions>();

        var policyRegistry = services.AddPolicyRegistry();
        policyRegistry.Add(
            PolicyName.HttpRetry,
            HttpPolicyExtensions
                .HandleTransientHttpError()
                .WaitAndRetryAsync(
                    policyOptions.HttpRetry.Count,
                    retryAttempt => TimeSpan.FromSeconds(Math.Pow(policyOptions.HttpRetry.BackoffPower, retryAttempt))));
        policyRegistry.Add(
            PolicyName.HttpCircuitBreaker,
            HttpPolicyExtensions
                .HandleTransientHttpError()
                .CircuitBreakerAsync(
                    handledEventsAllowedBeforeBreaking: policyOptions.HttpCircuitBreaker.ExceptionsAllowedBeforeBreaking,
                    durationOfBreak: policyOptions.HttpCircuitBreaker.DurationOfBreak));

        return services;
    }
}

public static class PolicyName
{
    public const string HttpCircuitBreaker = nameof(HttpCircuitBreaker);
    public const string HttpRetry = nameof(HttpRetry);
}

public class PolicyOptions
{
    public CircuitBreakerPolicyOptions HttpCircuitBreaker { get; set; }
    public RetryPolicyOptions HttpRetry { get; set; }
}

public class CircuitBreakerPolicyOptions
{
    public TimeSpan DurationOfBreak { get; set; } = TimeSpan.FromSeconds(30);
    public int ExceptionsAllowedBeforeBreaking { get; set; } = 12;
}

public class RetryPolicyOptions
{
    public int Count { get; set; } = 3;
    public int BackoffPower { get; set; } = 2;
}

Notice that each policy is using the HandleTransientHttpError method which tells Polly when to apply the retry and circuit breakers. One important question is, what is a transient HTTP error according to Polly? Well, looking at the source code in the Polly.Extensions.Http GitHub repository, it looks like they consider any of the below as transient errors:

  1. Any HttpRequestException thrown. This can happen when the server is down.
  2. A response with a status code of 408 Request Timeout.
  3. A response with a status code of 500 or above.
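
In code, that list is roughly equivalent to the following policy builder (a sketch of the behaviour, not the library's actual source):

var transientHttpErrors = Policy<HttpResponseMessage>
    .Handle<HttpRequestException>()                              // e.g. the server is unreachable.
    .OrResult(response =>
        response.StatusCode == HttpStatusCode.RequestTimeout ||  // 408 Request Timeout.
        (int)response.StatusCode >= 500);                        // 500 and above.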

Configuring HttpClient

Finally, we can get down to configuring our HttpClient itself. The AddHttpClient method starts by binding the TClientOptions type to a configuration section in appsettings.json. TClientOptions is a derived type of HttpClientOptions which just contains a base address and time-out value. I'll come back to CorrelationIdDelegatingHandler and UserAgentDelegatingHandler in a moment.

We set the HttpClientHandler to be DefaultHttpClientHandler. This type just enables automatic GZIP and Deflate decompression of responses. Brotli support is being added soon, so watch out for that. Finally, we add the retry and circuit breaker policies to the HttpClient.

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddHttpClient<TClient, TImplementation, TClientOptions>(
        this IServiceCollection services,
        IConfiguration configuration,
        string configurationSectionName)
        where TClient : class
        where TImplementation : class, TClient
        where TClientOptions : HttpClientOptions, new() =>
        services
            .Configure<TClientOptions>(configuration.GetSection(configurationSectionName))
            .AddTransient<CorrelationIdDelegatingHandler>()
            .AddTransient<UserAgentDelegatingHandler>()
            .AddHttpClient<TClient, TImplementation>()
            .ConfigureHttpClient(
                (sp, options) =>
                {
                    var httpClientOptions = sp
                        .GetRequiredService<IOptions<TClientOptions>>()
                        .Value;
                    options.BaseAddress = httpClientOptions.BaseAddress;
                    options.Timeout = httpClientOptions.Timeout;
                })
            .ConfigurePrimaryHttpMessageHandler(x => new DefaultHttpClientHandler())
            .AddPolicyHandlerFromRegistry(PolicyName.HttpRetry)
            .AddPolicyHandlerFromRegistry(PolicyName.HttpCircuitBreaker)
            .AddHttpMessageHandler<CorrelationIdDelegatingHandler>()
            .AddHttpMessageHandler<UserAgentDelegatingHandler>()
            .Services;
}

public class DefaultHttpClientHandler : HttpClientHandler
{
    public DefaultHttpClientHandler() => this.AutomaticDecompression = 
        DecompressionMethods.Deflate | DecompressionMethods.GZip;
}

public class HttpClientOptions
{
    public Uri BaseAddress { get; set; }

    public TimeSpan Timeout { get; set; }
}

CorrelationIdDelegatingHandler

When I'm making a HTTP request from an API i.e. it's an API to API call and I control both sides, I use the X-Correlation-ID HTTP header to trace requests as they move down the stack. The CorrelationIdDelegatingHandler is used to take the correlation ID for the current HTTP request and pass it down to the request made in the API to API call. The implementation is pretty simple, it's just setting a HTTP header.

The power comes when you are using something like Application Insights, Kibana or Seq for logging. You can now take the correlation ID for a request and see the logs for it from multiple API's or services. This is really invaluable when you are dealing with a micro services architecture.

public class CorrelationIdDelegatingHandler : DelegatingHandler
{
    private readonly ICorrelationContextAccessor correlationContextAccessor;
    private readonly IOptions<CorrelationIdOptions> options;

    public CorrelationIdDelegatingHandler(
        ICorrelationContextAccessor correlationContextAccessor,
        IOptions<CorrelationIdOptions> options)
    {
        this.correlationContextAccessor = correlationContextAccessor;
        this.options = options;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains(this.options.Value.Header))
        {
            request.Headers.Add(this.options.Value.Header, correlationContextAccessor.CorrelationContext.CorrelationId);
        }

        // Else the header has already been added due to a retry.

        return base.SendAsync(request, cancellationToken);
    }
}

UserAgentDelegatingHandler

It's often useful to know something about the client that is calling your API for logging and debugging purposes. You can use the User-Agent HTTP header for this purpose.

The UserAgentDelegatingHandler just sets the User-Agent HTTP header by taking the API's assembly name and version attributes. You need to set the Version and Product attributes in your csproj file for this to work. The name and version are then placed along with the current operating system into the User-Agent string.

Now the next time you get an error in your API, you'll know the client application that caused it (if it's under your control).

public class UserAgentDelegatingHandler : DelegatingHandler
{
    public UserAgentDelegatingHandler()
        : this(Assembly.GetEntryAssembly())
    {
    }

    public UserAgentDelegatingHandler(Assembly assembly)
        : this(GetProduct(assembly), GetVersion(assembly))
    {
    }

    public UserAgentDelegatingHandler(string applicationName, string applicationVersion)
    {
        if (applicationName == null)
        {
            throw new ArgumentNullException(nameof(applicationName));
        }

        if (applicationVersion == null)
        {
            throw new ArgumentNullException(nameof(applicationVersion));
        }

        this.UserAgentValues = new List<ProductInfoHeaderValue>()
        {
            new ProductInfoHeaderValue(applicationName.Replace(' ', '-'), applicationVersion),
            new ProductInfoHeaderValue($"({Environment.OSVersion})"),
        };
    }

    public UserAgentDelegatingHandler(List<ProductInfoHeaderValue> userAgentValues) =>
        this.UserAgentValues = userAgentValues ?? throw new ArgumentNullException(nameof(userAgentValues));

    public List<ProductInfoHeaderValue> UserAgentValues { get; set; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        if (!request.Headers.UserAgent.Any())
        {
            foreach (var userAgentValue in this.UserAgentValues)
            {
                request.Headers.UserAgent.Add(userAgentValue);
            }
        }

        // Else the header has already been added due to a retry.

        return base.SendAsync(request, cancellationToken);
    }

    private static string GetProduct(Assembly assembly) => 
        GetAttributeValue<AssemblyProductAttribute>(assembly);

    private static string GetVersion(Assembly assembly)
    {
        var infoVersion = GetAttributeValue<AssemblyInformationalVersionAttribute>(assembly);
        if (infoVersion != null)
        {
            return infoVersion;
        }

        return GetAttributeValue<AssemblyFileVersionAttribute>(assembly);
    }

    private static string GetAttributeValue<T>(Assembly assembly)
        where T : Attribute
    {
        var type = typeof(T);
        var attribute = assembly
            .CustomAttributes
            .Where(x => x.AttributeType == type)
            .Select(x => x.ConstructorArguments.FirstOrDefault())
            .FirstOrDefault();
        return attribute == null ? string.Empty : attribute.Value.ToString();
    }
}

<PropertyGroup Label="Package">
  <Version>1.0.0</Version>
  <Product>My Application</Product>
  <!-- ... -->
</PropertyGroup>

Sample GitHub Project

I realize that was a lot of boilerplate code to write. It was difficult to write this as more than one blog post. To aid in digestion, I've created a GitHub sample project with the full working code.

The sample project contains two API's. One makes a HTTP request to the other. You can pass a query argument to decide whether the callee API will fail or not and try out the retry and circuit breaker logic. Feel free to play with the configuration in appsettings.json and see what options work best for your application.

PluralSight vs LinkedIn Learning vs FrontendMasters vs Egghead.io vs YouTube


I use a lot of video resources to keep up to date with .NET, JavaScript and other tech like Docker, Kubernetes etc. I've compiled here a list of these resources and my impressions of using them and often paying for them out of my own pocket. There is something useful in all of them but I tend to find that some cater for certain technologies better than others.

PluralSight

PluralSight is the service that tries to be all things for all people. It tended to be more .NET focused in the past but things are changing on that front. In more recent times, the breadth of topics covered has definitely gotten a lot wider. They've added IT operations courses for example (I recently watched a really good course on Bash, which is strangely not something that's easily available on the internet), as well as courses on Adobe products like Photoshop and Illustrator, which is handy.

In terms of software development, the courses are very high quality but they also take a lot of time for the authors to produce, so don't expect long in-depth courses on newer technologies e.g. the courses on Kubernetes and Vue.js have only recently been added and are certainly more on the 'Getting Started' end of the spectrum. However, in time I expect the portfolio to fill out.

There is also definitely still a .NET bias to the site, there aren't as many in-depth frontend JavaScript courses as I would like for example. The ones that do exist are not from the well known frontend developers in the community. Also, some of the courses can be quite old (The tech world does move so fast). You'd think you would find some decent courses on CSS for example but the courses available are pretty ancient.

They have apps for all the usual platforms that let you download video offline which is a must for me, for when I travel on the London underground. The monthly cost is not prohibitive for the quantity of courses available at $35 per month. I've paid for it in the past but get it free right now as a Microsoft MVP.

I'd recommend this as a primary source of information when learning some new technology.

LinkedIn Learning

I only discovered that LinkedIn Learning existed last year when I learned that Microsoft MVP's get it for free. Apparently LinkedIn Learning used to be called Lynda.com which I had heard of and trialled in the past. I've always thought of Lynda as a 'How to use X software' kind of resource. They've literally got hours and hours worth of courses on Adobe Photoshop for example.

I was surprised at how much content they actually have. The ground is a bit thin when it comes to .NET content however, and the courses that I have ended up watching are pretty short and to the point without a huge amount of depth. I think this varies a lot though; I've seen Adobe Illustrator courses that are 14 hours long!

In the end I've used LinkedIn Learning for learning Kubernetes, due to PluralSight's library being a bit thin on that subject and also GraphQL.NET where LinkedIn Learning has the only course available on the internet.

It costs $25 per month to subscribe, so it's cheaper than the other offerings. Overall, I probably wouldn't pay for this service if I didn't get it for free. At best, I might subscribe for a month at $30 to view a particular course. I also feel like I should be spending more time exploring their content.

Frontend Masters

Frontend Masters does exactly what it says on the tin. They get industry leading frontend professionals to present courses on HTML, CSS, and JavaScript mainly, although they also delve into the backend with Node.js, Mongo and GraphQL courses.

The quality and depth of these courses is extremely high. The format is unusual in that the expert is delivering the course to an actual audience of people and there are also question/answer sections at the end of each module. This means that the courses tend to be quite long. If you're like me and you want to know every gritty detail, then that's great.

The library of courses is not very large but I'd definitely recommend this service to anyone interested in frontend or GraphQL Node.js development. The price is quite steep at $39 per month, considering the smaller number of targeted courses available. I'm waiting to see if they have a sale at the end of the year to drop hard cash on this learning resource.

Egghead.io

Egghead.io is a unique learning resource. Its USP is that it serves a series of short two minute videos that make up a course. If you run the videos at 1.5x or 2x speed, you can be done learning something in 15 minutes! In the real world, each video was so concise and full of useful information that I found myself having to go back and watch things again. This is definitely the fastest way to learn something.

The content is similar to Frontend Masters i.e. it's mainly focused on the frontend, with a few forays into Node.js, ElasticSearch, Mongo and Docker. Although, they tend to have a focus on JavaScript frameworks.

The cost of this service is $300 per year but if you wait until the sale at the end of the year like I did, you can bag a subscription for $100 which I think is more reasonable. I'm coming up for renewal time and I'm not sure I will renew because I've pretty much watched all of the courses that I was interested in. Because the courses are very short and fairly limited in number, you can get through them pretty quickly. That said, it was definitely worth investing in a year's subscription. I might purchase a subscription again in a year or two when they add more content.

YouTube/Vimeo/Channel9

YouTube, Vimeo and Channel 9 have a wealth of videos that you should not ignore. Plus the best part is that it's all free. Here are some channels I find useful:

NDC Conferences

The NDC Conferences seem to never end. They take place three times a year (at last count) but they release videos all year round, so it's a never ending battle to keep up. For that reason, I've been trying to avoid watching them lately. The best place to watch them is on Vimeo where you can easily download them offline in high quality.

You have expert speakers who often repeat their talk multiple times, so you often end up wondering whether you've seen a talk already. The talks are often very high level and often non-technical talks about design, management, managing your career or just telling stories about how some software was built.

Honestly, it can be fun to watch but I don't feel like I learn a lot watching these talks, so I've been a lot more strict about what I do watch.

Google Developers

The Google Developers YouTube channel clogs up your feed with a lot of pointless two minute videos throughout the year. Then once a year, they hold their developer conference where the talks are actually interesting. The videos are Google focused, so think Chrome, JavaScript, Workbox and Android.

Microsoft Conferences

Microsoft holds developer conferences like Build and Ignite all the time. You can watch them on Channel 9 or YouTube. Microsoft builds a lot of tech, so talks are fairly varied.

Azure Friday

Azure Friday is available on YouTube or Channel 9 and lets you keep up to date with Microsoft Azure's constantly evolving cloud platform. The videos are short and released once a week or so.

CSS Day

CSS Day is a conference that runs every year where CSS experts stand up and deliver a talk on a particular subject, often regarding some new CSS feature or a feature that has not yet been standardised. It's well worth watching; none of the resources above do a good job of covering CSS in my opinion, except maybe Frontend Masters to some extent.

.NET Foundation

The .NET Foundation videos can be found on YouTube. It's really two channels combined. One for .NET in general and one for ASP.NET.

The .NET videos typically have very in depth discussions about what features to add to the .NET Framework. They also sometimes release a video explaining some new features of .NET. Not something I watch often but worth keeping an eye on occasionally.

The ASP.NET Community Stand-up releases a video on most Tuesdays discussing new features being added to ASP.NET Core or sometimes .NET Core in general. Always worth watching.

Heptio

The Heptio YouTube channel is a bit like the ASP.NET Community Stand-up for Kubernetes. There are new videos every week but they vary a lot from beginner to extreme expert level and it's difficult to tell what the level is going to be. If you're interested in Kubernetes, it's worth watching the first 10 minutes of every show, so you can keep up to date with what's new in Kubernetes.

Grab a Bargain

With the Christmas period approaching, most of the paid for services will offer some kind of sale. Now is the time to keep an eye out for that and grab a bargain.


Is ASP.NET Core now a Mature Platform?


::: tip Update (12 January 2019) It seems that Damian Edwards (the ASP.NET Core Project Manager) likes this post and agrees with the points I've made! It's great to hear that he is in alignment with my thoughts and that's a great indication that the pain points of the platform will get solved in the future. Take a look at what he says in the ASP.NET Community Stand-up below: :::

https://www.youtube.com/watch?v=ho-VF2dAszI

The Upgrade Train

I started using ASP.NET Core back when it was still called ASP.NET 5 and it was still in beta. In those early days every release introduced a sea change. The betas were not betas at all but more like alpha quality bits. I spent more time than I'd like just updating things to the latest version with each release.

Compared to the past, updates are moving at a glacial pace. Compared to the full fat .NET Framework though, it's been like moving from a camel to an electric car. When releases do come there is still a lot in each release. If you have a number of micro services using ASP.NET Core, it's not quick to get them all updated. Also, it's not just ASP.NET Core but all of the satellite assemblies built on top of .NET Core that keep changing too, things like Serilog and Swashbuckle.

What about other platforms? Well, I'm familiar with Node.js and the situation there is bordering on silly. Packages are very unstable and constantly being rev'ed. Keeping up and staying on latest is a constant battle almost every day. Each time you upgrade a package, there is also a danger that you will break something. With .NET Core, there are fewer packages and they are much more stable.

Overall, things move fast in software development in general and for me that's what keeps it interesting. ASP.NET Core is no exception.

Show me the API's!

.NET Core and ASP.NET Core started out very lightweight. There were few API's available. You often had to roll your own code, even for basic features that should exist.

In today's world, a lot of API's have been added and where there are gaps, the community has filled them in many places. The .NET Framework still has a lot of API's that have not been ported across yet. A lot of these gaps are Windows specific and I'm sure a lot will be filled in the .NET Core 3.0 time frame.

When I make a comparison with Node.js and take wider community packages into consideration, I'd say that .NET Core has fewer API's. Image compression API's don't even exist on .NET for example. We were late to the party with Brotli compression which was recently added to .NET Core and is soon going to be added to the ASP.NET Core compression middleware, so we'll get there eventually. We have GraphQL.NET which is very feature rich but it still lags behind the JavaScript Apollo implementation slightly where it has first party support (Perhaps that comparison is a little unfair as GraphQL is native to Node.js). When I wanted to add Figlet font support to Colorful.Console (Figlet fonts let you draw characters using ASCII art), I had to base my implementation off of a JavaScript one. I'm not the only one who translates JavaScript code to C# either.

With all this said, Node.js and JavaScript in general have their own unique problems, otherwise I'd be using them instead of being a mainly .NET Core developer.

It's Open Sauce

Making .NET Core and ASP.NET Core open source has made a huge difference. We'd all occasionally visit the .NET Framework docs to understand how an API worked but today the place to go is GitHub where you can not only see the code but read other peoples issues and even raise issues of your own. There is often someone who has been there and done it all before you.

Not only that but a huge community has grown up with bloggers and new projects being more commonplace. It cannot be overstated how much this change has improved a developer's standard of living. Just take a look at the brilliant discoverdot.net site where you can see 634 GitHub .NET projects for all the evidence you need.

Feel the Powa!

ASP.NET Core's emphasis on performance is refreshing. It's doing well in the TechEmpower benchmarks with more improvements in sight. It's nice to get performance boosts from your applications every time you upgrade your application without having to do any work at all yourself.

While the platform is miles ahead of Node.js, there are newer languages like Go that are also quite nice to write code for but blazing fast too. However, I'm not sure you can be as productive writing Go as with .NET Core. Also, you've got to use the right tool for the job. There are definitely cases where Go does a better job.

One interesting effort that I've been keeping an eye on for some time now is .NET Native where C# code is compiled down to native code instead of an intermediate language. This means that the intermediate language does not need to be JIT'ed and turned into machine code at runtime which speeds up execution the first time the application is run. A nice side effect of doing this is that you also end up with a single executable file. You get all the benefits of a low level language like Go or Rust with none of the major drawbacks! I've been expecting this to hit for some time now but it's still not quite ready.

Security is Boring but Important

This is a subject that most people have never thought about much. It's trivial for an evil doer to insert some rogue code into an update to a package and have that code running in applications soon after. In fact that's what happened with the event-stream NPM package recently. I highly recommend reading Jake Archibald's post "What happens when packages go bad".

What about .NET Core? Well, .NET is in the fairly rare position of having a large number of official packages written by and maintained by Microsoft. This means that you need fewer third party packages and in fact you can sometimes get away with using no third party dependencies whatsoever. What this also means is that the third party dependencies you do end up using also have fewer dependencies of their own in turn.

NuGet also recently added support for signed packages, which stops packages from being tampered with between NuGet's server and your build machine.

Overall this is all about reducing risk. There will always be a chance that somebody will do something bad. I'd argue that there is less of a risk of that happening on the .NET platform.

Who is using it?

Bing.com is running on ASP.NET Core and a site doesn't get much bigger than that. Stack Overflow is working on their transition to .NET Core. The Orchard CMS uses .NET Core. Even WordPress and various PHP applications can be run on .NET Core these days using peachpie.

What's Still Missing?

First of all, let me say that every platform has gaps that are sometimes filled by the community. There are several missing API's that seem obvious to me but have yet to be built or improved enough. Here are a few basic examples of things that could be improved and where maybe the small team of 20 ASP.NET Core developers (Yes, their team is that small and they've done a tremendous job of building so much with so few resources, so they definitely deserve a pat on the back) could perhaps better direct their efforts.

Caching Could be Better

The response caching still only supports in-memory caching. If you want to cache to Redis using the IDistributedCache, bad luck. Even if you go with it and use the in-memory cache, if you're using cookies or the Authorization HTTP header, you've only got more bad luck as response caching turns itself off in those cases. Caching is an intrinsic part of the web, we need to do a better job of making it easier to work with.

Everyone is Partying with Lets Encrypt

Security is hard! HTTPS is hard! Dealing with certificates is hard! What if you could use some middleware and supply it with a couple of lines of configuration and never have to think about any of it ever again? Isn't that something you'd want? Well, it turns out that Nate McMaster has built a LetsEncrypt middleware that does just that but he needs some help to persuade his boss to build the feature, so up-vote this issue.

Microsoft seems a bit late to the party; it's also one of the top voted feature requests on Azure's User Voice.

HTTP/2 and HTTP/3

HTTP/2 support in ASP.NET Core is available in 2.2 but it's not battle tested so you can't run it at the edge, wide open to the internet for fear of getting hacked.

HTTP/3 (formerly named QUIC) support has been talked about and the ground work for it has already been done so that the Kestrel web server can support multiple protocols easily. Let's see how quickly we can get support.

One interesting thing about adding support for more protocols to ASP.NET Core is that most people can't make use of them or don't need to. ASP.NET Core applications are often hidden away behind a reverse proxy web server like IIS or NGINX which implements these protocols itself. Even using something like Azure App Service means that you run behind a special fork of IIS. So I've been thinking, what is the point? Well, you could use Kubernetes to expose your ASP.NET Core app over port 80 and get the performance boost of not having to use a reverse proxy web server as a man in the middle. Also, contrary to popular belief, Kubernetes can expose multiple ASP.NET Core applications over port 80 (at least Azure AKS can).

Serving Static Files

Serving static files is one of the most basic features. There are a few things that could make this a lot better. You can't use the authorization middleware to limit access to static files but I believe that's changing in ASP.NET Core 3.0. Serving GZIP'ed or Brotli'ed content is a must today. Luckily dynamic Brotli compression will soon be available. What's not available is serving pre-compressed static files.

Is It A Mature Platform?

There is a lot less churn. There are a lot of open source projects you can leverage. A large enough developer base has now grown up, so you see a lot more GitHub projects, Stack Overflow questions, bloggers like myself and companies who make their living from the platform.

There seems to be a trend at the moment where people are jumping ship from long standing platforms and languages to brand new ones. Android developers have jumped from Java to Kotlin (and have managed to delete half their code in the process, Java is so verbose!). The poor souls who wrote Objective-C have jumped to Swift. Where once applications would be written in C++, they are now written in Go or Rust. Where once people wrote JavaScript, they are still writing JavaScript (TypeScript has taken off but not completely)...ok that has not changed. .NET Core seems to be the only one that has bucked the trend and tried to reinvent itself completely while not changing things too much, and still succeeded in the process.

So yes, yes it is, is my answer.

A Simple and Fast Object Mapper

I have a confession to make...I don't use Automapper. For those who don't know, Automapper is the number one object to object mapper library on NuGet by far. It takes properties from one object and copies them to another. I couldn't name the second place contender and, looking on NuGet, nothing else comes close. This post talks about object mappers, why you might not want to use Automapper and introduces a faster, simpler object mapper that you might want to use instead.

Why use an Object Mapper

This is a really good question. Most of the time, it boils down to using Entity Framework. Developers want to be good citizens and not expose their EF Core models in the API surface area because this can have really bad security implications (See overposting here).

I have received a lot of comments at this point in the conversation saying "Why don't you use Dapper instead. Then you don't need model classes for your data layer, you can just go direct to your view model classes via Dapper". Dapper is really great, don't get me wrong but it's not always the right tool for the job, there are distinct disadvantages to using Dapper instead of EF Core:

  1. I have to write SQL. That's not so bad (You should learn SQL!) but it takes time to context switch and you often find yourself copying and pasting code back and forth from SQL Management Studio or Azure Data Studio (I've started using it, you should too). It just makes development a bit slower, that's all.
  2. EF Core can be run in-memory (see the sketch after this list), making for very fast unit tests. With Dapper, I have to run functional tests against a real SQL Server database, which is slow, brittle and a pain to set up. Before each test, you need to ensure the database is set up with just the right data so your tests are repeatable, otherwise you end up with flaky tests. Don't underestimate the power of this point.
  3. EF Core Migrations can automatically generate the database for me. With Dapper, I have to use external tools like Visual Studio Database Projects, DbUp or Flyway to create my database. That's an extra headache at deployment time. EF Core lets you cut out the extra time required to manage all of that.
  4. EF Core Migrations can automatically handle database migrations for me. Migrating databases is hard! Keeping track of what state the database is in and making sure you've written the right ALTER TABLE scripts is extra work that can be automated. EF Core handles all that for me. Alternatively, Visual Studio Database Projects can also get around this problem.
  5. I can switch database provider easily. Ok...ok...nobody does this in the real world and I can only think of one case where this happened. People always mention this point though for some reason.
  6. EF Core defaults to using the right data types, while on the other hand human beings...have too often chosen the wrong data types and then paid the penalties later on when the app is in production. Use NVARCHAR instead of VARCHAR and DATETIMEOFFSET instead of DATETIME2 or even DATETIME people! I've seen professional database developers make these mistakes all the time. Automating this ensures that the correct decision is made all the time.
  7. EF Core is not that much slower than using Dapper. We're not talking about orders of magnitude slower as it was with EF6. Throwing away all of the above benefits for slightly better speed is not a trade-off that everyone can make though, it depends on the app and situation.
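
To illustrate point two, here is a minimal sketch of swapping a real SQL Server connection for the EF Core in-memory provider in a test; AppDbContext is just a placeholder name for your own DbContext:

using Microsoft.EntityFrameworkCore;

public static class TestDbContextFactory
{
    // Each test gets its own uniquely named in-memory database, so tests stay
    // isolated and repeatable without needing a real SQL Server instance.
    public static AppDbContext Create(string databaseName)
    {
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(databaseName)
            .Options;
        return new AppDbContext(options);
    }
}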

You need to use the right tool for the right job. I personally use Dapper, where there is an existing database with all the migrations etc. already handled by external tools and use EF Core where I'm working with a brand new database.

What is good about Automapper?

Automapper is great when you have a small project that you want to throw together quickly and the objects you are mapping to and from have the same or similar property names and structure.

It's also great for unit testing because once you've written your mapper, testing it is just a matter of adding a one-liner to check that all the properties in your object have a mapping set up for them.

Finally if you use Automapper with Entity Framework, you can use the ProjectTo method which uses the property mapping information to limit the number of fields pulled back from your database making the query a lot more efficient. I think this is probably the biggest selling point of Automapper. The alternative is to write your own Entity Framework Core projection.
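
As a rough sketch of what that looks like (AppDbContext, Customers and CustomerViewModel are hypothetical names here, not part of Automapper itself):

using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;

public class CustomerQuery
{
    private readonly AppDbContext context;
    private readonly IConfigurationProvider mapperConfiguration;

    public CustomerQuery(AppDbContext context, IConfigurationProvider mapperConfiguration)
    {
        this.context = context;
        this.mapperConfiguration = mapperConfiguration;
    }

    // ProjectTo translates the mapping into a SELECT of only the mapped columns,
    // instead of pulling whole Customer entities back and mapping them in memory.
    public IQueryable<CustomerViewModel> GetCustomers() =>
        this.context.Customers.ProjectTo<CustomerViewModel>(this.mapperConfiguration);
}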

What is wrong with Automapper?

Cezary Piatek writes a very good rundown of some of the problems when using Automapper. I'm not going to repeat what he says but here is a short description:

  1. In the real world, mapping between identical or similar classes is not that common.
  2. If you have similar classes you are mapping between, there is no guarantee that they will not diverge, requiring you to write increasingly complex Automapper code or rewriting the mapping logic without Automapper.
  3. Finding all usages of a property no longer works when using Automapper unless you explicitly map every property, lowering discoverability.
  4. If you have a complex scenario, Jimmy Bogard (the author of the tool) suggests not using Automapper:
    • DO NOT use AutoMapper except in cases where the destination type is a flattened subset of properties of the source type.
    • DO NOT use AutoMapper to support a complex layered architecture.
    • AVOID using AutoMapper when you have a significant percentage of custom configuration in the form of Ignore or MapFrom.
  5. If you're mapping from database models to view models in an API, then dumping your database schema out as JSON makes for a bad API. You usually want more complex nested objects.
  6. How much time does it really save? Object mapping code is the simplest code a developer can write, I can do it without thinking and knock a few mappings out in a couple of minutes.
  7. Automapper is complex. It has a massive documentation site just to show you how to use it, and just check out the 29 point list of guidelines on how to use it. Why should copying values from one object to another need to be so complex?

A Simple and Fast Object Mapper

I wrote an object mapper library that consists of a couple of interfaces and a handful of extension methods to make mapping objects slightly easier. The API is super simple and very light and thus fast. You can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project. Let's look at an example. I want to map to and from instances of these two classes:

public class MapFrom
{
    public bool BooleanFrom { get; set; }
    public DateTimeOffset DateTimeOffsetFrom { get; set; }
    public int IntegerFrom { get; set; }
    public string StringFrom { get; set; }
}

public class MapTo
{
    public bool BooleanTo { get; set; }
    public DateTimeOffset DateTimeOffsetTo { get; set; }
    public int IntegerTo { get; set; }
    public string StringTo { get; set; }
}

The implementation for an object mapper using the .NET Boxed Mapper is shown below. Note the IMapper interface which is the heart of the .NET Boxed Mapper. There is also an IAsyncMapper if for any reason you need to map between two objects asynchronously, the only difference being that it returns a Task.

public class DemoMapper : IMapper<MapFrom, MapTo>
{
    public void Map(MapFrom source, MapTo destination)
    {
        destination.BooleanTo = source.BooleanFrom;
        destination.DateTimeOffsetTo = source.DateTimeOffsetFrom;
        destination.IntegerTo = source.IntegerFrom;
        destination.StringTo = source.StringFrom;
    }
}
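
The asynchronous flavour looks much the same. This is a sketch based on the description above rather than the exact package surface, so double check the IAsyncMapper signature in your version of Boxed.Mapping; it's useful when a mapping needs to await something like a database lookup:

using System.Threading;
using System.Threading.Tasks;

public class AsyncDemoMapper : IAsyncMapper<MapFrom, MapTo>
{
    public Task MapAsync(MapFrom source, MapTo destination, CancellationToken cancellationToken)
    {
        // Nothing to await in this trivial example, so just complete synchronously.
        destination.BooleanTo = source.BooleanFrom;
        destination.DateTimeOffsetTo = source.DateTimeOffsetFrom;
        destination.IntegerTo = source.IntegerFrom;
        destination.StringTo = source.StringFrom;
        return Task.CompletedTask;
    }
}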

And here is an example of how you would actually map a single object, array or list:

public class UsageExample
{
    private readonly IMapper<MapFrom, MapTo> mapper = new DemoMapper();

    public MapTo MapOneObject(MapFrom source) => this.mapper.Map(source);

    public MapTo[] MapArray(List<MapFrom> source) => this.mapper.MapArray(source);

    public List<MapTo> MapList(List<MapFrom> source) => this.mapper.MapList(source);
}

I told you it was simple! Just a few convenience extension methods bundled together with an interface that makes it just ever so slightly quicker to write object mapping than rolling your own implementation. If you have more complex mappings, you can compose your mappers in the same way that your models are composed.
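
For example, a parent mapper can take a child mapper as a constructor dependency and delegate to it. This is only a sketch; OrderFrom, OrderTo, CustomerFrom and CustomerTo are hypothetical models used to show the shape:

public class OrderMapper : IMapper<OrderFrom, OrderTo>
{
    private readonly IMapper<CustomerFrom, CustomerTo> customerMapper;

    public OrderMapper(IMapper<CustomerFrom, CustomerTo> customerMapper) =>
        this.customerMapper = customerMapper;

    public void Map(OrderFrom source, OrderTo destination)
    {
        destination.Id = source.Id;
        // Delegate the nested object to its own mapper, mirroring how the models compose.
        destination.Customer = this.customerMapper.Map(source.Customer);
    }
}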

Performance

Keeping things simple makes the .NET Boxed Mapper fast. I put together some benchmarks using Benchmark.NET which you can find here. The baseline is hand written mapping code and I compare that to Automapper and the .NET Boxed Mapper.

I even got a bit of help from the great Jon Skeet himself on how to improve the performance of instantiating an instance when using the generic new() constraint which it turns out is pretty slow because it uses Activator.CreateInstance under the hood.
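
One common workaround (I'm not claiming this is exactly what was suggested in that answer) is to cache a compiled delegate that calls the parameterless constructor directly, instead of relying on new T():

using System;
using System.Linq.Expressions;

public static class Factory<T>
    where T : new()
{
    // new T() under the new() constraint compiles down to Activator.CreateInstance<T>(),
    // which is comparatively slow. A compiled expression that calls the constructor
    // directly is built once and then costs little more than a delegate invocation.
    public static readonly Func<T> Create =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

// Usage: var destination = Factory<MapTo>.Create();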

Object to Object Mapping Benchmark

This benchmark measures the time taken to map from a MapFrom object to the MapTo object which I show above.

Simple object to object mapping benchmark

| Method      | Runtime | Mean       | Ratio | Gen 0/1k Op | Allocated Memory/Op |
|-------------|---------|------------|-------|-------------|---------------------|
| Baseline    | Clr     | 7.877 ns   | 1.00  | 0.0178      | 56 B                |
| BoxedMapper | Clr     | 25.431 ns  | 3.07  | 0.0178      | 56 B                |
| Automapper  | Clr     | 264.934 ns | 31.97 | 0.0277      | 88 B                |
| Baseline    | Core    | 9.327 ns   | 1.00  | 0.0178      | 56 B                |
| BoxedMapper | Core    | 17.174 ns  | 1.84  | 0.0178      | 56 B                |
| Automapper  | Core    | 158.218 ns | 16.97 | 0.0279      | 88 B                |

List Mapping Benchmark

This benchmark measures the time taken to map a List of MapFrom objects to a list of MapTo objects.

List to list mapping benchmark

| Method      | Runtime | Mean      | Ratio | Gen 0/1k Op | Allocated Memory/Op |
|-------------|---------|-----------|-------|-------------|---------------------|
| Baseline    | Clr     | 1.833 us  | 1.00  | 2.0542      | 6.31 KB             |
| BoxedMapper | Clr     | 3.295 us  | 1.80  | 2.0523      | 6.31 KB             |
| Automapper  | Clr     | 10.569 us | 5.77  | 2.4872      | 7.65 KB             |
| Baseline    | Core    | 1.735 us  | 1.00  | 2.0542      | 6.31 KB             |
| BoxedMapper | Core    | 2.237 us  | 1.29  | 2.0523      | 6.31 KB             |
| Automapper  | Core    | 3.220 us  | 1.86  | 2.4872      | 7.65 KB             |

Speed

It turns out that Automapper does a really good job on .NET Core in terms of speed but is quite a bit slower on .NET Framework. This is probably down to the intrinsic improvements in .NET Core itself. .NET Boxed is quite a bit faster than Automapper on .NET Framework, but the difference on .NET Core is much smaller, at around one and a half times in the list mapping benchmark. The .NET Boxed Mapper is also very close to the baseline but is a bit slower. I believe that this is due to the use of method calls on interfaces, whereas the baseline mapping code is only using method calls on concrete classes.

Zero Allocations

.NET Boxed allocates no memory beyond the mapped objects themselves, while Automapper allocates a small amount extra per mapping. Since object mapping is a fairly common operation, these small differences can add up over time and cause pauses in the app while the garbage collector cleans up the memory. There seems to be a trend I've seen in .NET for having zero allocation code. If you care about that, then this might help.

Conclusions

What I've tried to do with the .NET Boxed Mapper is fill a niche which I thought Automapper was not quite filling: a super simple and fast object mapper that's just a couple of interfaces and extension methods to help you along the way and provide a skeleton on which to hang your code. If Automapper fits your app better, go ahead and use that. If you think it's useful, you can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project.

Securing ASP.NET Core in Docker

Some time ago, I blogged about how you can get some extra security when running Docker containers by making their file systems read-only. This ensures that should an attacker get into the container somehow, they won't be able to change any files. This only works with certain containers that support it however and unfortunately, at that time ASP.NET Core did not support running in a Docker container with a read-only file system. Happily, this is now fixed!

Lets see an example. I created a brand new hello world ASP.NET Core project and added this Dockerfile:

FROM microsoft/dotnet:2.2-sdk AS builder
WORKDIR /source
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish --output /app/ --configuration Release

FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "ReadOnlyTest.dll"]

I build the Docker image using this command:

docker build -t read-only-test .

If I run this image with a read-only file system:

docker run --rm --read-only -it -p 8000:80 read-only-test

This outputs the following error as read-only file systems are not supported by default:

Failed to initialize CoreCLR, HRESULT: 0x80004005

If I now run the same image with the COMPlus_EnableDiagnostics environment variable turned off:

docker run --rm --read-only -it -p 8000:80 -e COMPlus_EnableDiagnostics=0 read-only-test

The app now starts! The COMPlus_EnableDiagnostics environment variable (which is undocumented) turns off debugging and profiling support, so I would not bake this environment variable into the Dockerfile. For some reason these features need a read/write file system to work properly. If you'd like to try this yourself, you can checkout all the code in this repo.
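
If you prefer Docker Compose, the equivalent settings look something like this; the service name is just a placeholder matching the example above:

version: "3.7"
services:
  web:
    image: read-only-test
    # Mount the container file system as read-only.
    read_only: true
    ports:
      - "8000:80"
    environment:
      # Disable debugging and profiling support so CoreCLR can start read-only.
      - COMPlus_EnableDiagnostics=0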

Git Cloning the Windows OS Repo

::: warning Disclaimer I'm a Microsoft employee but my opinions in this personal blog post are my own and nothing to do with Microsoft. The information in this blog post is already publicly available and I talk in very general terms. :::

I recently had the unique opportunity to git clone the Windows OS repository. For me as a developer, I think that has got to be a bucket list (a list of things to do before you die) level achievement!

A colleague who was doing some work in the repo was on leave and the task of completing the job unexpectedly fell on me to finish up. I asked around to see if anyone had any pointers on what to do and I was pointed towards an Azure DevOps project. The first thing I naively tried was running:

git clone https://microsoft.fake.com/foo/bar/os

This gave me the very helpful error:

remote: This repo requires GVFS. Ensure the version of git you are using supports GVFS.
fatal: protocol error: bad pack header

This triggered a memory in the dark recesses of my mind about GVFS (Git Virtual File System). The Windows OS repository is around 250GB in size. When you consider that there are tens or maybe hundreds of developers committing changes every day, you are not going to have a very pleasant developer experience if you just used Git and tried to pull all 250GB of files. So GVFS abstracts away the file system and only downloads files when you try to access them.

The Windows OS has a very large and thorough internal Wiki. This wiki has sections covering all areas of the Windows OS going back for years. After a short time searching the wiki I discovered a very thorough getting started guide for new developers.

The getting started guide involves running some PowerShell files which install a very specific but recent version of Git and setting up GVFS. Interestingly, you can also optionally point your Git client at a cache server to speed up git commands. There are a few cache servers all over the world to choose from. Finally, there is a VS Code extension specific to the OS repo that gives you some extra intelli-sense, very fancy.

Even though pulling the code using GVFS should in theory only pull what you need at any given time, it still took a fair amount of time to get started. Standard git commands still worked but took tens of seconds to execute, so you had to be pretty sure of what you were doing.

At this point a colleague warned against using 'find in files', as this would cause GVFS to pull all files to disk. I think search would do the same. An alternative approach I used instead was to search via the Azure DevOps website where you can view all files in any repo.

Once I'd had a chance to have a root around the repo, I realised that it was probably the largest folder structure I'd ever seen. There are many obscure sounding folders like 'ds' and 'net'. The reason for the wiki's existence became clear.

Another random thing I found was that the repo contains an 'src' folder just like a lot of other repositories. There are a tonne of file extensions I've never seen or heard of before, and there are binaries checked into the repo, which seems suboptimal on the face of it. I even found the Newtonsoft.Json binary in there.

I was pleasantly surprised to see an .editorconfig file in the repo. It turns out that spaces are preferred over tabs and line endings are CRLF (I don't know what else I expected).

There is a tools folder with dozens of tools in it. In fact, I had to use one of these tools to get my job done. The tool I used was a package manager a bit like NuGet. You can use a CLI tool to version and upload a folder of files. This made sense. The OS repo is not a mono repo in that it doesn't contain every line of code in Windows. There are many other repositories that package up and upload their binaries using this tool.

Some further reading on this package manager and I discovered that the Windows OS does some de-duplication of files to save space. I'm guessing they still have to fit Windows onto a DVD (How quaint, do people still use DVD's?), so file size is important.

While trying to figure out how to use the package manager, I accidentally executed a search through all packages. Text came streaming down the page like in the Matrix. Eventually I managed to fumble the right keys on the keyboard to cancel the search.

Once I'd finished with my changes I checked in and found that I had to rebase because newer commits were found on the server. I re-based as normal, except for the very long delay in executing git commands.

Once I'd finally pushed the branch containing my changes up to the server, I created a pull request in Azure DevOps. As soon as I'd done that, I got inundated with emails from Azure Pipelines telling me that a build had started and various reviewers had been added to my pull request.

The Azure Pipelines build only took 25 minutes to complete. A quick look shows a bunch of builds with five hours or more. I'm guessing that my changes had only gone through a cursory initial build to make sure nothing was completely broken.

A few days later I got a notification telling me my pull request had been merged. All I did was change a few config files and upload a package or two, but it was an interesting experience none the less.

.gitattributes Best Practices

.gitignore

If you've messed with Git for long enough, you're aware that you can use the .gitignore file to exclude files from being checked into your repository. There is even a whole GitHub repository with nothing but pre-made .gitignore files you can download. If you work with anything vaguely in the Microsoft space with Visual Studio, you probably want the 'Visual Studio' .gitignore file.

.gitattributes

There is a lesser known .gitattributes file that can control a bunch of Git settings that you should consider adding to almost every repository as a matter of course.

Line Endings

If you've studied a little computer science, you'll have seen that operating systems use different characters to represent line endings in text files. Windows uses a Carriage Return (CR) followed by the Line Feed (LF) character, while Unix based operating systems use the Line Feed (LF) alone. All of this has its origin in typewriters, which is pretty amazing given how antiquated they are. I recommend reading the Newline Wikipedia article for more on the subject.

Newline characters often cause problems in Git when you have developers working on different operating systems (Windows, Mac and Linux). If you've ever seen a phantom file change where there are no visible changes, that could be because the line endings in the file have been changed from CRLF to LF or vice versa.

Git can actually be configured to automatically handle line endings using a setting called autocrlf. This automatically changes the line endings in files depending on the operating system. However, you shouldn't rely on people having correctly configured Git installations. If someone with an incorrect configuration checked in a file, it would not be easily visible in a pull request and you'd end up with a repository with inconsistent line endings.
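
For reference, the setting in question is configured per machine like this, which is exactly why you can't rely on it being consistent across a whole team:

# On Windows: convert LF to CRLF on checkout and back to LF on commit.
git config --global core.autocrlf true

# On Linux/macOS: leave files untouched on checkout but normalize CRLF to LF on commit.
git config --global core.autocrlf input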

The solution to this is to add a .gitattributes file at the root of your repository and set the line endings to be automatically normalised like so:

# Set default behaviour to automatically normalize line endings.
* text=auto

# Force bash scripts to always use lf line endings so that if a repo is accessed
# in Unix via a file share from Windows, the scripts will work.
*.sh text eol=lf

The second line is not strictly necessary. It hard codes the line endings for bash scripts to be LF, so that they can be executed via a file share. It's a practice I picked up from the corefx repository.

Git Large File System (LFS)

It's pretty common to want to check binary files into your Git repository. Building a website, for example, involves images, fonts, maybe some compressed archives too. The problem with these binary files is that they bloat the repository a fair bit. Every time you check in a change to a binary file, you've now got both versions saved in Git's history. Over time this bloats the repository and makes cloning it slow. A much better solution is to use Git Large File System (LFS). LFS stores binary files in a separate file system. When you clone a repository, you only download the latest copies of the binary files and not every single changed version of them.

LFS is supported by most source control providers like GitHub, Bitbucket and Azure DevOps. It's a plugin to Git that has to be separately installed (it's a checkbox in the Git installer) and it even has its own CLI command 'git lfs', so you can run queries and operations against the files in LFS. You can control which files fall under LFS's remit in the .gitattributes file like so:

 # Archives
*.7z filter=lfs diff=lfs merge=lfs -text
*.br filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text

# Documents
*.pdf filter=lfs diff=lfs merge=lfs -text

# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.ico filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.pdf filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.psd filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text

# Fonts
*.woff2 filter=lfs diff=lfs merge=lfs -text

# Other
*.exe filter=lfs diff=lfs merge=lfs -text 

So here I've added a whole list of file extensions for various file types I want to be controlled by Git LFS. I tell Git that I want to filter, diff and merge using the LFS tool and finally the -text argument tells Git that this is not a text file, which is a strange way to tell it that it's a binary file.

A quick warning about adding LFS to an existing repository with existing binary files checked into it. The existing binary files will stay in Git rather than LFS unless you rewrite Git history, which would be bad and which you shouldn't do unless you are the only developer. Instead, you will have to add a one off commit to take the latest versions of all binary files and add them to LFS. Everyone who uses the repository will also have to re-clone the repository (I found this out the hard way in a team of 15 people. Many apologies were made over the course of a week). Ideally you add this from day one and educate developers about Git's treatment of binary files, so people don't check in any binary files not controlled by LFS.
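
As a rough sketch, that one off migration looks something like the commands below; double check the git lfs documentation for your version before running them, as the exact flags have changed over time:

# Install the LFS hooks into the repository.
git lfs install

# Track the binary file types (this updates .gitattributes).
git lfs track "*.png" "*.zip"

# Create a single new commit that moves the latest versions of the matching
# files into LFS without rewriting the existing history.
git lfs migrate import --no-rewrite "*.png" "*.zip"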

Binary Files

When talking about the .gitattributes file, you will quite often hear some people talk about explicitly listing all binary files instead of relying on Git to auto-detect binary files (yes Git is clever enough to do that) like this:

# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary

As you saw above, we already do this with Git LFS but if you don't use LFS, read on as you may need to explicitly list binary files in certain rare circumstances.

I was interested, so I asked a Stack Overflow question and got great answers. If you look at the Git source code, it checks the first 8,000 bytes of a file to see if it contains a NUL character. If it does, the file is assumed to be binary. However, there are cases where you may need to do it explicitly:

  • UTF-16 encoded files could be mis-detected as binary.
  • Some image format or file that consists only of printable ASCII bytes. This is pretty weird and sounds unlikely to happen.

Final Form

This is what the final .gitattributes file I copy to most repositories looks like:

###############################
# Git Line Endings            #
###############################

# Set default behaviour to automatically normalize line endings.
* text=auto

# Force bash scripts to always use lf line endings so that if a repo is accessed
# in Unix via a file share from Windows, the scripts will work.
*.sh text eol=lf

###############################
# Git Large File System (LFS) #
###############################

# Archives
*.7z filter=lfs diff=lfs merge=lfs -text
*.br filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text

# Documents
*.pdf filter=lfs diff=lfs merge=lfs -text

# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.ico filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.pdf filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.psd filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text

# Fonts
*.woff2 filter=lfs diff=lfs merge=lfs -text

# Other
*.exe filter=lfs diff=lfs merge=lfs -text

Conclusions

All of the above are bits and pieces I've put together over time. Are there any other settings that should be considered best practice and added to any .gitattributes file?

Unit Testing dotnet new Templates

As I talked about in my previous post some time ago about dotnet new project templates, it's possible to enable feature selection, so that developers can toggle certain features of a project template on or off. This is not a feature that many templates in the wild use a lot. Quite often I've seen templates have no optional features or only a few. One reason is that it gets very complicated to test that toggling your optional features doesn't break the generated project in some way by stopping it from building for example. This is why I decided to write a small unit test helper library for dotnet new project templates. It is unit test framework agnostic and can work with xUnit, NUnit, MSTest or any other unit test framework.

Example Usage

Below is an example showing how you can use it inside an xUnit test project.

public class ApiTemplateTest
{
    public ApiTemplateTest() => DotnetNew.Install<ApiTemplateTest>("ApiTemplate.sln").Wait();

    [Theory]
    [InlineData("StatusEndpointOn", "status-endpoint=true")]
    [InlineData("StatusEndpointOff", "status-endpoint=false")]
    public async Task RestoreAndBuild_CustomArguments_IsSuccessful(string name, params string[] arguments)
    {
        using (var tempDirectory = TempDirectory.NewTempDirectory())
        {
            var dictionary = arguments
                .Select(x => x.Split('=', StringSplitOptions.RemoveEmptyEntries))
                .ToDictionary(x => x.First(), x => x.Last());
            var project = await tempDirectory.DotnetNew("api", name, dictionary);
            await project.DotnetRestore();
            await project.DotnetBuild();
        }
    }

    [Fact]
    public async Task Run_DefaultArguments_IsSuccessful()
    {
        using (var tempDirectory = TempDirectory.NewTempDirectory())
        {
            var project = await tempDirectory.DotnetNew("api", "DefaultArguments");
            await project.DotnetRestore();
            await project.DotnetBuild();
            await project.DotnetRun(
                @"Source\DefaultArguments",
                async (httpClient, httpsClient) =>
                {
                    var httpResponse = await httpsClient.GetAsync("status");
                    Assert.Equal(HttpStatusCode.OK, httpResponse.StatusCode);
                });
        }
    }
}

The first thing it does in the constructor is install the dotnet new project templates in your solution. It needs to know the name of the solution file. It then walks the sub-directory tree below your solution file and installs all project templates for you.

If we then look at the first unit test, we first need a temporary directory where we can create a project from our dotnet new project template. We will generate a project from the template in this directory and then delete the directory at the end of the test. We then run dotnet new with the name of a project template, the name we want to give to the generated project and any custom arguments that particular project template supports. Using xUnit, I've parametrised the arguments, so we can run multiple tests while tweaking the arguments for each test. Running dotnet new returns a project object which contains some metadata about the project that we've just created and which we can use to run further dotnet commands against.

Finally, we run dotnet restore and dotnet build against the project. So this test ensures that toggling the StatusEndpointOn option on our project template doesn't stop the generated project from restoring NuGet packages or building successfully.

The second unit test method is where it gets really cool. If the project template is an ASP.NET Core project, we can use dotnet run to start the project listening on some random free ports on the machine. The unit test framework then gives you two HttpClient instances (one for HTTP and one for HTTPS) with which to call your newly generated project. In summary, not only can you test that the generated projects build, you can test that the features in your generated project work as they should.

This API is pretty similar to the ASP.NET Core TestHost API that also gives you an HttpClient to test the API with. The difference is that this framework is actually running the app using the dotnet run command. I have experimented with using the TestHost API to run the generated project in memory, so it could be run a bit faster, but the .NET Core APIs for dynamically loading DLL files need some work, which .NET Core 3.0 might solve.

Where To Get It?

You can download the Boxed.DotnetNewTest NuGet package or see the source code on GitHub.

What dotnet new Could Be

The 'dotnet new' CLI command is a great way to create projects from templates in dotnet. However, I think it could provide a much better experience than it currently does. I also suspect it isn't used much, mostly because templates authored for the dotnet new experience are not included in the Visual Studio File -> New Project experience. For template authors, the experience of developing templates could do with some improvements. I tweeted about it this morning and got asked to write a short gist about what could be improved, so this is that list.

Any plans to improve the dotnet new templating engine? Lots of unfixed bugs. Lots of rough edges needing smoothing. A 'dotnet new ui' command to create projects using a visual editor would be cool

Why do I Care?

I author a Swagger API, GraphQL API, Microsoft Orleans and NuGet project templates in my Dotnet Boxed project. The project currently has 1,900 stars on GitHub and the Boxed.Templates NuGet package has around 12,149 downloads at the time of writing. The Dotnet Boxed templates are also some of the more complex templates using dotnet new. They all have a dozen or more optional features.

Visual Studio Integration

In the past, I also authored the ASP.NET Core Boilerplate project templates which are published as a Visual Studio extension. This extension currently has 159,307 installs which is an order of magnitude more than the 12,149 installs of my dotnet new based Boxed.Templates NuGet package.

I have read in the dotnet/templating GitHub issues that there is eventually going to be Visual Studio integration in which you'd be able to search and install dotnet new based templates on NuGet, and then create projects from those templates much as you would with Visual Studio today. Given the download counts of my two projects, this would be the number one feature I'd like to see implemented.

You could create a Visual Studio extension that wraps your dotnet new templates but having messed around with them in the past, it's a lot of effort. I'm in the template making business, not in the extension making business. Also, given the above rumour, I've held off going this route.

NuGet/Visual Studio Marketplace Integration

Currently there is no way to search for a list of all dotnet new based project templates on NuGet or on the Visual Studio marketplace. There is this list buried in the dotnet/templating GitHub project but the only people who are going to find that are template authors. It would be great if there was some kind of marketplace or store to find templates, rate them, provide feedback etc.

dotnet new ui

If you've seen the Vue CLI, it has a magical UI for creating projects from its templates. This is the benchmark by which I now measure all project creation experiences. Just take a look at its majesty:

Vue CLI Create a New Project

Imagine executing dotnet new ui, then seeing a nice browser dialogue popup like the one above where you could find, install and even create projects from templates. Creating a project would involve entering the name of your project, the directory where you want it to be saved and then toggling any custom options that the project template might offer.

That last bit is where having a UI shines. There aren't many dotnet new templates that use the templating engine to its full potential and have additional optional features. When you use the current command line experience, it's unwieldy and slow to set custom options. Having a custom UI with some check boxes and drop downs would be a far quicker and more delightful experience.

Missing Features

There are a bunch of cool missing or half implemented features in the dotnet new templating engine that could use finishing. Chief among these are called post actions. These are a set of custom actions that can be performed once your project has been created.

As far as I can work out, the only post action that works is the one that restores all NuGet packages in your project. This was implemented because the basic Microsoft project templates wanted to use them but I understand that they no longer do for reasons unknown to me. Happily I still use this one and it works nicely.

Other post actions that are half implemented (they exist and you can use them but they just print content to the console) are for opening files in the editor, opening files or links in the web browser or even running arbitrary scripts. The last one has the potential for being a security risk however, so it would be better to have a healthy list of post actions for specific tasks. I'd love to be able to open the ReadMe.md file that ships with my project template.

In terms of new post actions, I'd really like to see one that removes and sorts using statements. I have a lot of optional pieces of code in my project templates, so I have to have a lot of #if #endif code to tell the templating engine which lines of code to remove. It's particularly easy to get this wrong with using statements, leaving you with a fresh project that doesn't compile because you've removed one too many using statements by accident. To avoid this, I created my own unit testing framework for dotnet new projects called Boxed.DotnetNewTest.
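
To make that concrete, conditional code in a C# file of a dotnet new template looks roughly like the snippet below, where StatusEndpoint would be a symbol defined in the template's template.json; the names here are purely illustrative:

#if (StatusEndpoint)
using ApiTemplate.Controllers;
#endif
using System;

public static class Program
{
    public static void Main()
    {
#if (StatusEndpoint)
        // The templating engine strips this block (and the using directive above)
        // when the project is created with status-endpoint=false. Get the #if
        // boundaries slightly wrong and you're left with an orphaned using statement.
        Console.WriteLine("Status endpoint enabled.");
#endif
        Console.WriteLine("Hello World!");
    }
}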

Docs, Docs & Docs

There is one page of documentation on how to create project templates in the official docs page. There is a bunch more in the dotnet/templating wiki and some crucial bits of information in comments of GitHub issues. In particular, there is precious little information about how to conditionally remove code or files based on options the user selects. There is also very little about post actions. It would be great if this could be tidied up.

Secondary to the docs are the GitHub issues. There are currently 168 open issues, with a large number having only one comment from the original author. Given the lack of documentation, having questions answered is really important.

Fixing Bugs

The latest version of the dotnet CLI has fixed some bugs but there are still a few that really get in the way of a great experience:

  • #1544/#348 - Running dotnet new foo --help outputs some pretty terrible looking text if you have any custom options.
  • #2208 - You cannot conditionally remove text from a file if it has no file extension, so that means Dockerfile, .gitignore, .editorconfig files.
  • #2209 - Complex conditionals fail if not wrapped in parentheses. I always forget to do this. There are no warnings; your template just won't work.
  • #1438 - Using conditional code in .csproj files requires some workarounds to work.

Conclusions

The Vue CLI has really shown how great a new project creation experience can be. With a bit of work, the dotnet new experience could be just as great.


ASP.NET Core Integration Testing & Mocking using Moq

If you want to run an integration test for your ASP.NET Core app without also testing lots of external dependencies like databases and the like, then the lengthy official 'Integration tests in ASP.NET Core' documentation shows how you can use stubs to replace code that talks to a database or some other external service. If you want to use mocks using Moq, this is where you run out of guidance and runway. It does in fact require a fair amount of setup to do it correctly and reliably without getting flaky tests.

Startup

The ConfigureServices and Configure methods in your application's Startup class must be virtual. This is so that we can inherit from this class in our tests and replace production versions of certain services with mock versions.

public class Startup
{
    private readonly IConfiguration configuration;
    private readonly IWebHostEnvironment webHostEnvironment;

    public Startup(
        IConfiguration configuration,
        IWebHostEnvironment webHostEnvironment)
    {
        this.configuration = configuration;
        this.webHostEnvironment = webHostEnvironment;
    }

    public virtual void ConfigureServices(IServiceCollection services) => ...

    public virtual void Configure(IApplicationBuilder application) => ...
}

TestStartup

In your test project, inherit from the Startup class and override the ConfigureServices method with one that registers the mock and the mock object with the IoC container.

I like to use strict mocks with MockBehavior.Strict; this ensures that nothing is mocked unless I specifically set up a mock.

public class TestStartup : Startup
{
    private readonly Mock<IClockService> clockServiceMock;

    public TestStartup(
        IConfiguration configuration,
        IWebHostEnvironment webHostEnvironment)
        : base(configuration, webHostEnvironment)
    {
        this.clockServiceMock = new Mock<IClockService>(MockBehavior.Strict);
    }

    public override void ConfigureServices(IServiceCollection services)
    {
        services
            .AddSingleton(this.clockServiceMock);

        base.ConfigureServices(services);

        services
            .AddSingleton(this.clockServiceMock.Object);
    }
}

CustomWebApplicationFactory

In your test project, write a custom WebApplicationFactory that configures the HttpClient and resolves the mocks from the TestStartup, then exposes them as properties, ready for our integration test to consume them. Note that I'm also changing the environment to Testing and telling it to use the TestStartup class for startup.

Note also that I've implemented IDisposable's Dispose method to verify all of my strict mocks. This means I don't need to verify any mocks manually myself. Verification of all mock setups happens automatically when xUnit is disposing the test class.

public class CustomWebApplicationFactory<TEntryPoint> : WebApplicationFactory<TEntryPoint>
    where TEntryPoint : class
{
    public CustomWebApplicationFactory()
    {
        this.ClientOptions.AllowAutoRedirect = false;
        this.ClientOptions.BaseAddress = new Uri("https://localhost");
    }

    public ApplicationOptions ApplicationOptions { get; private set; }

    public Mock<IClockService> ClockServiceMock { get; private set; }

    public void VerifyAllMocks() => Mock.VerifyAll(this.ClockServiceMock);

    protected override void ConfigureClient(HttpClient client)
    {
        using (var serviceScope = this.Services.CreateScope())
        {
            var serviceProvider = serviceScope.ServiceProvider;
            this.ApplicationOptions = serviceProvider
                .GetRequiredService<IOptions<ApplicationOptions>>().Value;
            this.ClockServiceMock = serviceProvider
                .GetRequiredService<Mock<IClockService>>();
        }

        base.ConfigureClient(client);
    }

    protected override void ConfigureWebHost(IWebHostBuilder builder) =>
        builder
            .UseEnvironment("Testing")
            .UseStartup<TestStartup>();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            this.VerifyAllMocks();
        }

        base.Dispose(disposing);
    }
}

Integration Tests

I'm using xUnit to write my tests. Note that the generic type passed to CustomWebApplicationFactory is Startup and not TestStartup. This generic type is used to find the location of your application project on disk and not to start the application.

I set up a mock in my test and I've implemented IDisposable to verify all mocks for all my tests at the end, but you can do this step in the test method itself if you like.

Note also, that I'm not using xUnit's IClassFixture to only boot up the application once as the ASP.NET Core documentation tells you to do. If I did so, I'd have to reset the mocks between each test and also you would only be able to run the integration tests serially one at a time. With the method below, each test is fully isolated and they can be run in parallel. This uses up more CPU and each test takes longer to execute but I think it's worth it.

public class FooControllerTest : CustomWebApplicationFactory<Startup>
{
    private readonly HttpClient client;
    private readonly Mock<IClockService> clockServiceMock;

    public FooControllerTest()
    {
        this.client = this.CreateClient();
        this.clockServiceMock = this.ClockServiceMock;
    }

    [Fact]
    public async Task GetFoo_Default_Returns200OK()
    {
        this.clockServiceMock
            .Setup(x => x.UtcNow)
            .Returns(new DateTimeOffset(2000, 1, 1, 0, 0, 0, TimeSpan.Zero));

        var response = await this.client.GetAsync("/foo");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}

xunit.runner.json

I'm using xUnit. We need to turn off shadow copying, so any separate files like appsettings.json are placed in the right place beside the application DLL file. This ensures that our application running in an integration test can still read the appsettings.json file.

{
  "shadowCopy": false
}

appsettings.Testing.json

Should you have configuration that you want to change just for your integration tests, you can add an appsettings.Testing.json file into your application. This configuration file will only be read in our integration tests because we set the environment name to 'Testing'.

Working Examples

If you'd like to see an end to end working example of how this all works. You can create a project using the Dotnet Boxed API project template or the GraphQL project template.

Conclusions

I wrote this because there is little to no information on how to combine ASP.NET Core with Moq in integration tests. I've messed about with using IClassFixture as the ASP.NET Core documentation tells you to do and it's just not a good idea with Moq which needs a clean slate before each test. I hope this stops others going through much pain.

Choosing a Static Site Generator

I recently rebuilt this blog using a static site generator called Gridsome which is based on Vue.js and GraphQL. This is the story of all the static site generators I tried or read up on, how I moved from WordPress to Gridsome and what I discovered along the way.

I want blogging to be as low friction as possible. Quite frankly if there is even a little friction, I'll stop writing posts which is what has kind of happened over the last couple of years where my output has definitely dropped.

WordPress was Giving Me a Bad Time

I was using WordPress in the past which was achingly slow and a major barrier to writing posts. Thinking of writing a post in WordPress just put me off it altogether.

WordPress stores its posts in HTML. You can use markdown using the new Gutenberg plugin but if you want to edit the post after the fact, you're back to HTML.

The other issue is that WordPress gets hacked all the time because people don't keep it and any installed plugins up to date. However, I found over the years that when I upgraded a plugin, there was a 50% chance that something was going to break and another 20% chance that I wouldn't find out about it until a week later.

Plugins were the bane of my life. My code formatting plugin became unsupported without an easy alternative that I could migrate to. This meant I had to stick to an older version of PHP and wait for another plugin to add a migration path.

Finally, I was paying for hosting on Azure Web apps which isn't the simplest or cheapest option I could have gone with. It was time to move...

Static vs Dynamic Sites

So why switch from a dynamic site like WordPress, to a statically generated site? I found so many reasons, some of which I hadn't thought of before I made the switch.

The obvious ones are that a static site is going to be faster and cheaper to run. A nice side effect of being faster is that search engines will also give you a little more link juice and rank you higher.

The biggest win for me was that I can now finally own my own content. It's strange to say that hosting my own WordPress site was not owning my own content but the fact is, I was not fully in control of the content that WordPress was generating. I became painfully aware of this after I had exported my posts from WordPress to markdown.

I had used certain plugins that formatted content strangely or I was using short codes which don't translate well. There were a myriad of issues. The amazing thing was that I could use VS Code to easily find and replace broken formatting with the help of some regular expression magic (I know enough to be dangerous!). The downside was that I had to manually go back, fix and check every blog post I'd ever written.

At the end of the day, my content is now simple markdown files formatted to my liking. There is nothing simpler than a text file and I am finally in control. If I ever need to pick up and switch to a different static file generator, I can pick-up my content which is in an open, easy to move format and move it easily. I'm never going back.

The other thing that I really loved was building the site itself. I have a folder of bookmarks containing cool little things you can do on the web that I have collected over the years. I put nearly all of it into practice, which was a lot of fun but did take me a couple of months to finish. It turns out, I love HTML, CSS and JavaScript but particularly CSS where visual feedback is instant and very satisfying.

Picking a Static Site Generator

Surveying the static site generator landscape was a dizzying experience. There didn't seem to be a clear winner at first, so this is what I found:

::: warning I certainly didn't do an in-depth review of each static site generator. These are my personal views based on the limited knowledge, limited time looking at each project and in-built human biases I have. :::

Jekyll

The one that started it all, Jekyll is backed by GitHub and has been going for a long time, which means it has a large community and lots of plugins. Hosting with GitHub pages was easy, since there were some nice integrations.

My only problem with it was that it was built on Ruby which in my experience a year ago, does not play well on Windows. I hope that has changed but I suppose you could use the Windows Subsystem for Linux (WSL) to run Ruby instead.

Khalid Abuhakmeh uses Jekyll and his blog looks pretty amazing, so well worth a look.

Gatsby

Gatsby is built with React and GraphQL. Plugins can be used to hook up various data sources like markdown files, and GraphQL makes querying extremely simple.

I'm not much of a React fan personally but if you are, it has a huge following, so community support and plugins are easy to find. Plus I cannot overstate how much simpler the GraphQL plugins make it to consume arbitrary content. Knowing GraphQL is important but it's not too difficult to learn.

Statiq

Statiq is built on ASP.NET Core and uses Razor to write views. It's a very new static site generator and I believe is an evolution of another project called Wyam. If you're not too knowledgeable in web technologies and live in the C# and .NET space, this may be a really good choice for you.

Hugo

I've heard a lot of good things about Hugo. It's built on Go. The community is pretty large and the project is pretty popular. If I hadn't gone with Gridsome, I would have liked to spend some more time with Hugo.

My only issue with Hugo, having used Kubernetes Helm templates, was that I found the Go templating quite difficult to read due to the liberal use of brackets everywhere. However, the system is fairly intuitive to use.

{{ define "main" }}
    <main aria-role="main">
      <header class="homepage-header">
        <h1>{{.Title}}</h1>
        {{ with .Params.subtitle }}
        <span class="subtitle">{{.}}</span>
        {{ end }}
      </header>
      <div class="homepage-content"></div>
      <div>
        {{ range first 10 .Site.RegularPages }}
            {{ .Render "summary"}}
        {{ end }}
      </div>
    </main>
{{ end }}

VuePress

I took a pretty deep dive into Vuepress which is built on Vue.js. I'm a huge fan of Vue.js due to its single file components which allow you to write HTML, JavaScript and CSS in a single file. React has a lot to say about the first two but leaves CSS up to you, which has led to the whole CSS-in-JS movement. Personally, I think writing simple CSS or SCSS is where it's at.

In the end though, I found that VuePress is not really geared towards building blogs, it's more geared towards documentation sites for open source projects. It does a really good job in that space.

Nuxt.js/Next.js

Nuxt.js is a well known framework which uses Vue.js in combination with server side rendering. Next.js is a similar project for React. What some don't know is that you can also create static sites using these tools.

Nuxt.js recently released Nuxt Content which allows you to drive content from Markdown files. This came too late for me to try but I'd certainly take a deeper look the next time.

Gridsome

Gridsome is the Vue.js equivalent of Gatsby, in that it also uses the power of GraphQL.

There are a lot of plugins that can connect your static site to various sources of data using GraphQL. Want a data driven site? Want to consume some markdown, JSON, random images, content from WordPress or Ghost? Just install a plugin and write a simple GraphQL query.

In the end, I chose Gridsome for its simplicity. It's just HTML, SCSS and JavaScript at the end of the day (albeit in a single file component). It's the closest solution to the web and also why I really enjoy Vue.js.

Hosting Your Static Site

Netlify

In my reading, I realized that a lot of people are using Netlify. It has a lot of nice little extra features which I personally didn't need, but they're worth a look. Netlify has a free tier but does ramp up to being quite costly after that, so beware and make sure your site doesn't use a lot of bandwidth.

GitHub Pages

I chose to host my site on GitHub pages. It's free until GitHub thinks you're abusing it and asks you to move. It doesn't have any bells or whistles but it works. There are two issues I wish they would fix:

  1. Your GitHub project has to end with .github.io.
  2. The branch containing the actual static files to be served has to be called master.

With all that said, it's simple and there was one less thing I had to worry about, since my content was already on GitHub.

Conclusions

I picked Gridsome and am pretty happy but you may have a different knowledge set and something else might suit you. Whatever makes you happy! The key is that writing blog posts needs to be low friction, so that you actually end up doing it.

In my next post, I'll talk about the features I look to build in a blog and which of them I think are essential to a good blog site.

Spicing up your Browser Console

Wouldn't it be cool if, when you opened the browser console up on a site, you saw a cool secret message? There are many sites that do this, with quite a few businesses advertising frontend development jobs in this way. I wanted to join in on the fun, so...I did. Here is my story of two hours I'm not getting back.

I've done some work on Colorful.Console which is an amazing C# console library that lets you write text in ASCII art using figlet fonts. I wanted to do the same for my blog. I was too lazy to use Colorful.Console and used a random online generator I found. I tried a couple of different fonts out and came up with this JavaScript code:

const consoleOptions = 'background: #ffffff; color: #6b17e8';

// Standard Figlet Font
console.log('%c  ____      _                 ', consoleOptions);
console.log('%c |  _ \\ ___| |__   __ _ _ __  ', consoleOptions);
console.log('%c | |_) / _ \\ \'_ \\ / _` | \'_ \\ ', consoleOptions);
console.log('%c |  _ <  __/ | | | (_| | | | |', consoleOptions);
console.log('%c |_| \\_\\___|_| |_|\\__,_|_| |_|', consoleOptions);

// o8 Figlet Font
console.log('%c oooooooooo             oooo                              ', consoleOptions);
console.log('%c  888    888 ooooooooo8  888ooooo    ooooooo   oo oooooo  ', consoleOptions);
console.log('%c  888oooo88 888oooooo8   888   888   ooooo888   888   888 ', consoleOptions);
console.log('%c  888  88o  888          888   888 888    888   888   888 ', consoleOptions);
console.log('%c o888o  88o8  88oooo888 o888o o888o 88ooo88 8o o888o o888o', consoleOptions);

For the standard font, I had to escape quite a few characters using a back slash \, so watch out for that. The results in a browser console were pretty terrible and hard to read...

My name rendered in ASCII art using a figlet font

Notice that I passed options to the console.log API to set the background and foreground colour of the text. The Chrome browser adds a lot of space between lines and the font just looks a little anaemic and hard to read. I rooted around the Character Map app in Windows, to see if I could find a more substantial set of characters that would show up more brightly instead of using dashes, pipes and numbers. Then I found these: ▀ ▄ █ ▌ ▐ ▲ ► ▼ ◄.

My Final Form

I took the o8 figlet font text above and simply did a find and replace on it. I replaced the 8 character with █ and I also replaced the o character with ▄:

console.log('%c ▄▄▄▄     ▄▄▄▄            ▄▄▄▄                                                                ▄▄▄▄', consoleOptions);
console.log('%c  ████▄   ███ ▄▄▄▄  ▄▄▄▄   ███▄▄▄▄▄    ▄▄▄▄▄▄▄   ▄▄ ▄▄▄ ▄▄▄▄   ▄▄ ▄▄▄ ▄▄▄▄    ▄▄▄▄▄▄▄    ▄▄▄▄▄███ ', consoleOptions);
console.log('%c  ██ ███▄█ ██  ███   ███   ███   ███   ▄▄▄▄▄███   ███ ███ ███   ███ ███ ███   ▄▄▄▄▄███ ███    ███ ', consoleOptions);
console.log('%c  ██  ███  ██  ███   ███   ███   ███ ███    ███   ███ ███ ███   ███ ███ ███ ███    ███ ███    ███ ', consoleOptions);
console.log('%c ▄██▄  █  ▄██▄  ███▄██ █▄ ▄███▄ ▄███▄ ██▄▄▄██ █▄ ▄███▄███▄███▄ ▄███▄███▄███▄ ██▄▄▄██ █▄  ██▄▄▄███▄', consoleOptions);

console.log('%c ▄▄▄▄▄▄▄▄▄▄             ▄▄▄▄                              ', consoleOptions);
console.log('%c  ███    ███ ▄▄▄▄▄▄▄▄▄█  ███▄▄▄▄▄    ▄▄▄▄▄▄▄   ▄▄ ▄▄▄▄▄▄  ', consoleOptions);
console.log('%c  ███▄▄▄▄██ ███▄▄▄▄▄▄█   ███   ███   ▄▄▄▄▄███   ███   ███ ', consoleOptions);
console.log('%c  ███  ██▄  ███          ███   ███ ███    ███   ███   ███ ', consoleOptions);
console.log('%c ▄███▄  ██▄█  ██▄▄▄▄███ ▄███▄ ▄███▄ ██▄▄▄██ █▄ ▄███▄ ▄███▄', consoleOptions);

console.log('%c ▄▄▄▄▄▄▄▄█                                          ▄▄▄▄ ', consoleOptions);
console.log('%c ███           ▄▄▄▄▄▄▄   ▄▄▄▄▄▄▄▄▄█ ▄▄▄▄▄▄▄▄▄█  ▄▄▄▄▄███ ', consoleOptions);
console.log('%c ███▄▄▄▄▄▄    ▄▄▄▄▄███  ███▄▄▄▄▄▄█ ███▄▄▄▄▄▄█ ███    ███ ', consoleOptions);
console.log('%c         ███ ███    ███ ███        ███        ███    ███ ', consoleOptions);
console.log('%c ▄██▄▄▄▄███   ██▄▄▄██ █▄  ██▄▄▄▄███  ██▄▄▄▄███  ██▄▄▄███▄', consoleOptions);

This seemed to work great and created a cool effect:

My name rendered in ASCII art using a figlet font

Conclusions

All you need to do now is copy and paste that text somewhere in the main part of your app. Now, when someone opens the browser console, they'll see your cool surprise. You can hit F12 right now and take a look at mine.

Racism in Software Development & Beyond

If you're of North African, Middle Eastern or South Asian origin, you have to send up to 90% more job applications than your white counterparts in the United Kingdom. This level of discrimination has been unchanged since the 1960s. Let that sink in for a moment.

I read this statistic in a Guardian article last year and it's been on my mind ever since. The study was carried out by the Centre for Social Investigation at Nuffield College, University of Oxford. You can read their full report here.

Those carrying out the study applied to nearly 3200 jobs, randomly varying the minority background of fictitious job applicants while holding their skills, qualifications and work experience constant.

They looked at high and low skill jobs. What I found particularly interesting was that the high skill jobs they looked at included software engineers. They found no difference between applying for high or low skill jobs: you were going to be equally discriminated against either way.

Chart showing that Middle Eastern & North African applicants need to send 90% more job applications

If you don't live in the UK, the study points out that the situation is similar in other countries. Spain, Germany, the Netherlands and Norway are called out by name, and here is a similar study about the United States. Racism is a human condition; it affects everyone.

If you are from a majority ethnic group, the next time you're at work (I hope we defeat this virus soon and can get back), take a look around at your colleagues. If one of them is from a minority, they were either very lucky or perhaps had to work harder than you to get the same job. They may even have had to be more qualified than you.

Economic Racism > Physical Racism

When physical or verbal racist abuse occurs, people know about it, they can deal with it and hopefully move on. When people are discriminated against economically by not being able to get a job or perhaps getting a lower paid job, there is no way for them to know that they've been discriminated against.

We know that if you're an ethnic minority in Britain, you're more likely to live in poverty, and we know that living in poverty means that you're more likely to have physical and mental health problems, you're less likely to do well in education, you're more likely to go to prison and you are liable to die sooner.

COVID-19 has revealed the contrast even more starkly. People from Black, Asian and minority ethnic (BAME) backgrounds have suffered higher rates of death than their white counterparts. If you're Black, you're 3.9 times more likely to die. In a report the government tried to hide, it was revealed that historical racism plays a part.

Hit a man in the face and he'll bleed for a day, hit a man in the pocket and he'll bleed for a lifetime.

This is Personal

I was born and bred in East London but my parents are originally from Pakistan and I happen to be a Muslim too. As one of those people who has to send 40% more job applications according to the report above, this is kind of personal.

I have experienced direct verbal racist abuse before (most people from an ethnic minority have at least once) but it's rare, for me at least. I have also received abuse online on Twitter and even GitHub, but it's par for the course online.

I do not know...I cannot know if I have ever been discriminated against when applying for jobs. If I had a different name, would I have been offered more chances? I don't think I would have but the statistics say differently. It did take me years of applying to get a job at Microsoft but so did one of my colleagues and in the end I did get one.

With that said I'm also one of the lucky ones. If I think back to the people that hired me and gave me a chance in my first few jobs, I'd really like to thank them.

What Can be Done?

Clearly there are many problems but I would hate to leave things there without suggesting potential solutions, so here are three.

1. Anonymous Job Applications

Anonymous job applications, where names are stripped from applications, are not as far-fetched as you might think. They were trialled by 100 British firms in 2012, backed by the former deputy prime minister Nick Clegg. Some still use the special software that enables this system but there is resistance from some employers.

Eventually, a candidate will have to meet someone in an in-person interview, where there is no hiding their ethnicity, but at least it gets more ethnic minority candidates a foot in the door that, according to the evidence, they clearly are not getting otherwise.

2. Ethnicity Pay Gap Reporting

In the UK, if your company has more than 250 employees, you must report the gender pay gap between your male and female employees. This has highlighted the disparity, with eight out of ten employers paying men more than women.

Employers should be required to do the same for ethnicity. There is reportedly a £3.2bn gap in wages between ethnic minorities and their white colleagues doing the same jobs.

Email your local member of parliament and urge them to support requiring large firms to publish ethnicity pay gap figures, and perhaps anonymous job applications too.

3. Stop Putting Racists in Power

My prime minister, Boris Johnson, once called black people "piccaninnies" with "watermelon smiles". When writing about the continent of Africa he said:

The problem is not that we were once in charge, but that we are not in charge anymore.

He's been anti-semitic too, writing an entire book about powerful Jews controlling the media and influencing elections.

He's made Islamophobic remarks about Muslim women looking like bank robbers and letterboxes, which caused an uptick in violence towards Muslim women, with racists sometimes using Mr Johnson's exact words. He says this was apparently an effort to help them. I hope he doesn't try helping anyone else.

That's not even the half of it; he's also insulted or abused half a dozen other groups of people. It's been fun to watch right-wing politicians and political commentators contort themselves into strange shapes trying to defend his blatant racism.

The sad thing is, if you put a racist in power, many others follow them in their wake. The Conservative party is now riddled with racist members, councillors and even members of parliament. The far-right group Britain First has said that 5,000 of its members have joined the Conservative party.

The solution for me is clear. If you put a racist in charge, they build a whole pyramid of racists underneath them.

Conclusions

As many others have said, it's not enough to be a passive non-racist in society. Half a century on, structural and institutional racism still pervades our society. It is necessary for us all to work actively against racism. We also need to recognise that a lot of the inequalities we see come from implicit biases that all human beings have, including ourselves. If we recognise that we ourselves are deficient, perhaps we can do something about it.

The Easiest Way to Version NuGet Packages

The easiest way to version NuGet packages using semantic versioning in my opinion is to use MinVer. Getting started is literally as easy as adding the MinVer NuGet package. Getting finished is not too much more than that.

In this post I'll discuss the semantic versioning 2.0 standard and show you how you can semantically version your NuGet packages, and the DLLs within them, using MinVer and Git tags.

What is Semantic Versioning?

Semantic versioning is the idea that each part of a version number has some intrinsic meaning. Let's break down an example of a full version number into its constituent parts:

1.2.3-preview.0.4+b34215d3d2539837ac3e20fc3111ba7d46670064
  • 1 - The major version number. Incrementing this means that a major breaking change has occurred.
  • 2 - The minor version number. Incrementing this means that a non-breaking change has occurred.
  • 3 - The patch version number. Incrementing this means that a patch or fix has been issued for a bug.
  • preview (Optional) - This determines that the build is a pre-release build. This pre-release label is often set to alpha or beta.
  • 0 (Optional) - This is the pre-release version number.
  • 4 (Optional) - The Git height or the number of commits since the last non-pre-release build.
  • b34215d3d2539837ac3e20fc3111ba7d46670064 (Optional) - The Git SHA or hash of the current commit.

Isn't that cool? Every part of that version number has significance. Just by looking at version numbers, we can determine:

  • Whether something was fixed, enhanced or broken when comparing one version to the previous one.
  • Whether it is a release or pre-release version.
  • Which commit the code was built with.
  • How many commits were made after the last release.
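
If you ever need to pull these parts out of a version string yourself, here is a minimal sketch in C# (my own illustration with a simplified regular expression and a made-up class name, not anything that MinVer or NuGet ships):

using System;
using System.Text.RegularExpressions;

public static class SemVerExample
{
    // A simplified pattern for the SemVer 2.0 format described above:
    // major.minor.patch[-prerelease][+metadata]
    private static readonly Regex SemVer = new Regex(
        @"^(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)" +
        @"(?:-(?<prerelease>[0-9A-Za-z\.\-]+))?" +
        @"(?:\+(?<metadata>[0-9A-Za-z\.\-]+))?$");

    public static void Main()
    {
        var match = SemVer.Match("1.2.3-preview.0.4+b34215d3d2539837ac3e20fc3111ba7d46670064");

        Console.WriteLine(match.Groups["major"].Value);      // 1
        Console.WriteLine(match.Groups["minor"].Value);      // 2
        Console.WriteLine(match.Groups["patch"].Value);      // 3
        Console.WriteLine(match.Groups["prerelease"].Value); // preview.0.4
        Console.WriteLine(match.Groups["metadata"].Value);   // b34215d3d2539837ac3e20fc3111ba7d46670064
    }
}

In real code, a library such as NuGet.Versioning can parse these strings for you; the sketch is only there to make the structure above concrete.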

Versioning the Wrong Way

In the past I've tried to generate version numbers in quite a few different ways, none of which was very satisfactory and none of which conformed to semantic versioning 2.0. I've tried using the current date and time to generate a version number. This tells you when the package was created but nothing more.

[Year].[Month].[Day].[Hour][Minutes]
2020.7.2.0908

I've also generated version numbers based on the automatically incrementing continuous integration (CI) build number, but how do you turn one number into three? Well, in my case I hard-coded the major and minor versions and used the CI build number for the patch version. Using this method lets you tie a package version back to a CI build and, by inference, a Git commit, but it's less than ideal.

[Hard Coded].[Hard Coded].[CI Build Number]
1.2.3

MinVer

MinVer leans on Git tags to help version your NuGet packages and the assemblies within them. Let's start by adding the MinVer NuGet package to a new class library project:

<ItemGroup Label="Package References">
  <PackageReference Include="MinVer" PrivateAssets="All" Version="2.3.0" />
</ItemGroup>

We'll need an initial version number for our NuGet package, so I'll tag the current commit as 0.0.1 and push the tag to my Git repository. Then I'll build the project:

git tag -a 0.0.1 -m "Initial"
git push --tags
dotnet build

If you now use an IL decompiler tool like dnSpy (which is free and open source) to take a peek inside the resulting DLL, you'll notice the following version assembly level attributes have been automatically added:

dnSpy showing assembly level attributes

[assembly: AssemblyVersion("0.0.0.0")]
[assembly: AssemblyFileVersion("0.0.1.0")]
[assembly: AssemblyInformationalVersion("0.0.1+362b09133bfbad28ef8a015c634efdb35eb17122")]

If you now run dotnet pack to build a NuGet package, you'll notice that it has the correct version. Note that 0.0.1 is a release version of our NuGet package i.e. something we might want to push to nuget.org in this case.

NuGet package with release version set to 0.0.1

Now let's make a random change in our repository and then rebuild and repack our NuGet package:

git add .
git commit -m "Some changes"
dotnet build
dotnet pack

Now MinVer has automatically generated a pre-release version of our NuGet package. The patch version has been automatically incremented and a pre-release label of preview has been given, with a pre-release version of 0. We also have a Git height of one, because we have made one commit since our last release, and we still have the Git commit SHA too:

NuGet package with pre-release version set to 0.0.2-preview.0.1

If we crack open our DLL and view its assembly level attributes again, we'll see more details:

[assembly: AssemblyVersion("0.0.0.0")]
[assembly: AssemblyFileVersion("0.0.2.0")]
[assembly: AssemblyInformationalVersion("0.0.2-preview.0.1+7af23ee0f769ddf0eb8991d59ad09dcbc8d82855")]

Now at this stage you could make some more commits and you'd see the major, minor and patch versions stay the same, while the Git height and Git SHA within the pre-release version change. Eventually though, you will want to get another release version of your NuGet package ready. Well, this is as simple as creating another Git tag:

git tag -a 0.0.2 -m "The next amazing version"
git push --tags
dotnet build

Now you can simply take the latest 0.0.2 release and push it to nuget.org.
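
As an aside, the tags I've used above have no prefix, but if you prefer to tag releases like v0.0.2, MinVer can be told to strip the prefix. At the time of writing this is done with the MinVerTagPrefix MSBuild property, though do check the MinVer readme for the exact properties supported by the version you're using:

<PropertyGroup Label="MinVer">
  <!-- Tell MinVer that release tags look like v0.0.2 rather than 0.0.2. -->
  <MinVerTagPrefix>v</MinVerTagPrefix>
</PropertyGroup>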

Nerdbank.GitVersioning

There is a more popular competitor to MinVer called Nerdbank.GitVersioning, which is part of the .NET Foundation and is worth talking about because it works slightly differently. It requires you to have a version.json file in your repository containing the version information, instead of using Git tags.

{
  "$schema": "https://raw.githubusercontent.com/dotnet/Nerdbank.GitVersioning/master/src/NerdBank.GitVersioning/version.schema.json",
  "version": "1.0-beta"
  // This file can get very complicated...
}

In my opinion, this is not as nice. Git tags are an underused feature of Git and using them to tag release versions of your packages is a great use case. Git allows you to check out code from a tag, so you can easily view the code in a package just by knowing its version.

git checkout 0.0.1

Having the version number in a file also means lots of commits just to edit the version number.

Conclusions

MinVer is an easy way to version your NuGet packages and DLLs. It also comes with a CLI tool that you can use to version other things like Docker images, which I'll cover in another post. If you'd like to see an example of MinVer in action, you can try my Dotnet Boxed NuGet package project template by running a few simple commands to create a new project:

dotnet new --install Boxed.Templates
dotnet new nuget --name "MyProject"
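
If you'd like to see the version number MinVer would calculate without building anything (to version a Docker image, for example), the CLI I mentioned can be tried with something along these lines, assuming the minver-cli global tool package and that you run it from the root of your repository:

dotnet tool install --global minver-cli
minver

Running minver should print the same version that MinVer would otherwise have stamped into your NuGet package and assemblies.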