Channel: Muhammad Rehan Saeed

.NET Boxed


.NET Boxed is a set of project templates with batteries included, providing the minimum amount of code required to get you going faster. Right now it includes API and GraphQL project templates.

.NET Boxed Icon

ASP.NET Core API Boxed

The default ASP.NET Core API Boxed options will give you an API with Swagger, ASP.NET Core versioning, HTTPS and much more enabled right out of the box. You can totally turn any of that off if you want to; the point is that it's up to you.

ASP.NET Core API Boxed Preview

ASP.NET Core GraphQL Boxed

If you haven’t read about or learned GraphQL yet, I really suggest you go and follow their short online tutorial. It’s got some distinct advantages over standard RESTful APIs (and some disadvantages, but in my opinion the advantages carry more weight).

Once you’ve done that, the next thing I suggest you do is to create a project from the ASP.NET Core GraphQL Boxed project template. It implements the GraphQL specification using GraphQL.NET and a few other NuGet packages. It also comes with a really cool GraphQL playground, so you can practice writing queries, mutations and subscriptions.

ASP.NET Core GraphQL Boxed Preview

This is the only GraphQL project template that I’m aware of at the time of writing and it’s pretty fully featured with sample queries, mutations and subscriptions.

ASP.NET Core Boilerplate

.NET Boxed used to be called ASP.NET Core Boilerplate. That name was kind of forgettable and there was another great project that had a very similar name. I put off renaming for a long time because it was too much work but I finally relented and got it done.

In the end I think it was for the best. The new .NET Boxed branding and logo are much better and I’ve opened it up to .NET project templates in general, instead of just ASP.NET Core project templates.

Thanks to Jon Galloway and Jason Follas for helping to work out the branding.

How can I get it?

  1. Install the latest .NET Core SDK.
  2. Run dotnet new --install "Boxed.Templates::*" to install the project template.
  3. Run dotnet new api --help to see the features of the project that you can select.
  4. Run dotnet new api --name "MyTemplate" along with any other custom options to create a project from the template.

Boxed Updates

There are new features and improvements planned on the GitHub projects tab. ASP.NET Core 2.1 is coming out soon, so look out for updates which you can see in the GitHub releases tab when they go live.



Migrating to Entity Framework Core Seed Data


I was already using Entity Framework Core 2.0 and had written some custom code to insert static seed data into certain tables. Entity Framework Core 2.1 added support for data seeding, which manages your seed data for you and adds it to your Entity Framework Core migrations.

The problem is that if you’ve already got data in your tables, when you add a migration containing seed data, you will get exceptions thrown as Entity Framework tries to insert data that is already there. Entity Framework is naive; it assumes that it is the only thing editing the database.

Migrating to using data seeding requires a few extra steps that aren’t documented anywhere and weren’t obvious to me. Let’s walk through an example. Assume we have the following model and database context:

public class Car
{
    public int CarId { get; set; }

    public string Make { get; set; }

    public string Model { get; set; }
}

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions options)
        : base(options)
    {
    }

    public DbSet<Car> Cars { get; set; }
}

We can add some seed data by overriding the OnModelCreating method on our database context class. You need to make sure your seed data matches the existing data in your database.

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Car>().HasData(
        new Car() { CarId = 1, Make = "Ferrari", Model = "F40" },
        new Car() { CarId = 2, Make = "Ferrari", Model = "F50" },
        new Car() { CarId = 3, Make = "Lamborghini", Model = "Countach" });
}

If we run the following command to add a database migration, the generated code looks like this:

dotnet ef migrations add AddSeedData

public partial class AddSeedData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 1, "Ferrari", "F40" });

        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 2, "Ferrari", "F50" });

        migrationBuilder.InsertData(
            table: "Cars",
            columns: new[] { "CarId", "Make", "Model" },
            values: new object[] { 3, "Lamborghini", "Countach" });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 1);

        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 2);

        migrationBuilder.DeleteData(
            table: "Cars",
            keyColumn: "CarId",
            keyValue: 3);
    }
}

This is what you need to do:

  1. Comment out all of the InsertData lines in the generated migration (sketched below).
  2. Run the migration on your database containing the existing seed data. This is effectively a no-op but it records the fact that the AddSeedData migration has been run.
  3. Uncomment the InsertData lines in the generated migration so that seed data still gets added if you run the migrations on a fresh database. Your existing databases will not get the seed data twice because the migration has already been recorded as run against them.
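
To make step 1 concrete, here is a rough sketch of what the temporarily commented out Up method looks like (only the first InsertData call is shown; the Down method stays as generated):

public partial class AddSeedData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Temporarily commented out so that this migration is a no-op against
        // databases that already contain the seed data.
        // migrationBuilder.InsertData(
        //     table: "Cars",
        //     columns: new[] { "CarId", "Make", "Model" },
        //     values: new object[] { 1, "Ferrari", "F40" });
        // ...and so on for the other InsertData calls.
    }
}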

That’s it; I hope that helps someone.


Optimally Configuring Entity Framework Core


Let’s talk about configuring your Entity Framework Core DbContext for a moment. There are several options you might want to consider turning on. This is how I configure mine in most microservices:

public virtual void ConfigureServices(IServiceCollection services) =>
    services.AddDbContextPool<MyDbContext>(
        options => options
            .UseSqlServer(
                this.databaseSettings.ConnectionString,
                x => x.EnableRetryOnFailure())
            .ConfigureWarnings(x => x.Throw(RelationalEventId.QueryClientEvaluationWarning))
            .EnableSensitiveDataLogging(this.hostingEnvironment.IsDevelopment())
            .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking))
    ...

EnableRetryOnFailure

EnableRetryOnFailure enables retries for transient exceptions. So what is a transient exception? Entity Framework Core has a SqlServerTransientExceptionDetector class that defines that. It turns out that any SqlException with one of a very specific list of SQL error numbers, or any TimeoutException, is considered transient and thus safe to retry.
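
If the defaults don’t suit you, EnableRetryOnFailure also has an overload that lets you tune the behaviour. A minimal sketch of swapping it into the configuration above (the numbers here are just illustrative):

.UseSqlServer(
    this.databaseSettings.ConnectionString,
    x => x.EnableRetryOnFailure(
        maxRetryCount: 5,                        // Give up after five retries.
        maxRetryDelay: TimeSpan.FromSeconds(30), // Cap the delay between attempts.
        errorNumbersToAdd: null))                // No extra SQL error numbers to treat as transient.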

ConfigureWarnings

By default, Entity Framework Core will log warnings when it can’t translate your C# LINQ code to SQL and it will evaluate parts of your LINQ query it does not understand in-memory. This is usually catastrophic for performance because this usually means that EF Core will retrieve a huge amount of data from the database and then filter it down in-memory.

Luckily in EF Core 2.1, they added support to translate the GroupBy LINQ method to SQL. However, I found out yesterday that you have to write Where clauses after GroupBy for this to work. If you write the Where clause before your GroupBy, EF Core will evaluate your GroupBy in-memory in the client instead of in SQL. The key is to know when this is happening.
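
To illustrate the ordering described above, here is a minimal sketch reusing the Car model and a context instance from the earlier seed data post; the second query shape is the one described above as triggering client evaluation:

// Filtering on the grouping after GroupBy translates to SQL in EF Core 2.1.
var translated = context.Cars
    .GroupBy(x => x.Make)
    .Where(x => x.Count() > 1)
    .Select(x => new { Make = x.Key, Count = x.Count() });

// Placing the Where before the GroupBy causes the GroupBy to be evaluated in-memory.
var clientEvaluated = context.Cars
    .Where(x => x.Model != null)
    .GroupBy(x => x.Make)
    .Select(x => new { Make = x.Key, Count = x.Count() });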

One thing you can do is throw an exception when a query is evaluated in-memory instead of in SQL. That is what throwing on QueryClientEvaluationWarning in the ConfigureWarnings call above does.

EnableSensitiveDataLogging

EnableSensitiveDataLogging enables application data to be included in exception messages. This can include SQL, secrets and other sensitive information, so I am only doing it when running in the development environment. It’s useful to see warnings and errors coming from Entity Framework Core in the console window when I am debugging my application using the Kestrel webserver directly, instead of with IIS Express.

UseQueryTrackingBehavior

If you are building an ASP.NET Core API, each request creates a new instance of your DbContext, which is then disposed at the end of the request. Query tracking keeps track of entities in memory for the lifetime of your DbContext so that any changes to them can be saved. This is a waste of resources if you are just going to throw away the DbContext at the end of the request. By passing NoTracking to the UseQueryTrackingBehavior method, you can turn off this default behaviour. Note that if you are performing updates to your entities, don’t use this option; it is only for APIs that perform reads and/or inserts.
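
NoTracking is only the default; if a particular query genuinely needs change tracking, it can opt back in per query. A small sketch, reusing the Car model and a context instance from the earlier seed data post (carId is just a placeholder variable):

// Opt a single query back into change tracking so the update below can be saved.
var car = context.Cars
    .AsTracking()
    .Single(x => x.CarId == carId);
car.Model = "F40 LM";
context.SaveChanges();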

Connection Strings

You can also pass certain settings to connection strings. These are specific to the database you are using; here I’m talking about SQL Server. Here is an example of a connection string:

Data Source=localhost;Initial Catalog=MyDatabase;Integrated Security=True;Min Pool Size=3;Application Name=MyApplication

Application Name

SQL Server can log or profile queries that are running through it. If you set the application name, you can more easily identify the applications that may be causing problems in your database with slow or failing queries.

Min Pool Size

Creating database connections is an expensive process that takes time. You can specify that you want a minimum pool of connections that should be created and kept open for the lifetime of the application. These are then reused for each database call. Ideally, you should performance test with different values and see what works for you. Failing that, you need to know how many concurrent connections you want to support at any one time.

The End…

It took me a while to craft this setup, I hope you find it useful. You can find out more by reading the excellent Entity Framework Core docs.


ASP.NET Core Hidden Gem – QueryHelpers


I discovered a hidden gem in ASP.NET Core a couple of weeks ago called QueryHelpers that can help to build up and parse URLs. Here’s how you can use it to build a URL using the AddQueryString method:

var queryArguments = new Dictionary<string, string>()
{
    { "static-argument", "foo" },
};

if (someFlagIsEnabled)
{
    queryArguments.Add("dynamic-argument", "bar");
}

string url = QueryHelpers.AddQueryString("/example/path", queryArguments);

Notice that there are no question marks or ampersands in sight. Where this really shines is when you want to add multiple arguments and would otherwise need to write code to work out whether to add a question mark or an ampersand.

It’s also worth noting that the values of the query arguments are URL encoded for you too. The type also has a ParseQuery method to parse query strings but that’s less useful to us as ASP.NET Core controllers do that for you.
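
For completeness, here is a quick sketch of ParseQuery, which returns a Dictionary<string, StringValues> (StringValues comes from Microsoft.Extensions.Primitives):

var values = QueryHelpers.ParseQuery("static-argument=foo&dynamic-argument=bar");
string staticArgument = values["static-argument"];                // "foo", via the implicit StringValues conversion.
bool hasDynamicArgument = values.ContainsKey("dynamic-argument"); // true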

Finally, .NET also has a type called UriBuilder that you should know about. It’s more geared towards building up a full URL, rather than a relative URL as I’m doing above. It has a Query property that you can use to set the query string but it’s only of type string, so much less useful than QueryHelpers.AddQueryString.


Optimally Configuring ASP.NET Core HttpClientFactory


Update (20-08-2018): Steve Gordon kindly suggested a further optimisation to use ConfigureHttpClient. I’ve updated the code below to reflect this.

In this post, I’m going to show how to optimally configure a HttpClient using the new HttpClientFactory API in ASP.NET Core 2.1. If you haven’t already, I recommend reading Steve Gordon’s series of blog posts on the subject since this post builds on that knowledge. You should also read his post about Correlation IDs as I’m making use of that library in this post. The main aims of the code in this post are to:

  1. Use the HttpClientFactory typed client. I don’t know why the ASP.NET team bothered to provide three ways to register a client; the typed client is the one to use. It provides type safety and removes the need for magic strings.
  2. Enable GZIP decompression of responses for better performance. Interestingly, HttpClient and ASP.NET Core do not support GZIP compression of requests, only decompression of responses. Some searching online a while ago suggested that compressing requests is not a very common optimisation at all, which I found pretty unbelievable at the time.
  3. The HttpClient should time out after the server does not respond after a set amount of time.
  4. The HttpClient should retry requests which fail due to transient errors.
  5. The HttpClient should stop performing new requests for a period of time when a consecutive number of requests fail, using the circuit breaker pattern. Failing fast in this way helps to protect an API or database that may be under high load and means the client gets a failed response quickly rather than waiting for a timeout.
  6. The URL, timeout, retry and circuit breaker settings should be configurable from the appsettings.json file.
  7. The HttpClient should send a User-Agent HTTP header telling the server the name and version of the calling application. If the server is logging this information, this can be useful for debugging purposes.
  8. The X-Correlation-ID HTTP header from the current request should be passed on to any request made using the HttpClient. This makes it easy to correlate a request across multiple applications.

Usage Example

It doesn’t really matter what the typed client HttpClient looks like; that’s not what we’re talking about, but I include it for context.

public interface IRocketClient
{
    Task<TakeoffStatus> GetStatus(bool working);
}

public class RocketClient : IRocketClient
{
    private readonly HttpClient httpClient;

    public RocketClient(HttpClient httpClient) => this.httpClient = httpClient;

    public async Task<TakeoffStatus> GetStatus(bool working)
    {
        var response = await this.httpClient.GetAsync(working ? "status-working" : "status-failing");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<TakeoffStatus>();
    }
}

Here is how we register the typed client above with our dependency injection container. All of the meat lives in these three methods. AddCorrelationId adds middleware written by Steve Gordon to handle Correlation IDs. AddPolicies registers a policy registry and the policies themselves (a policy is Polly’s way of specifying how you want to deal with errors, e.g. using retries, the circuit breaker pattern etc.). Finally, we add the typed HttpClient but with configuration options, so we can configure its settings from appsettings.json.

public virtual void ConfigureServices(IServiceCollection services) =>
    services
        .AddCorrelationId() // Add Correlation ID support to ASP.NET Core
        .AddPolicies(this.configuration) // Setup Polly policies.
        .AddHttpClient<IRocketClient, RocketClient, RocketClientOptions>(this.configuration, "RocketClient")
        ...;

The appsettings.json file below contains the base address of the endpoint we want to connect to, a timeout value of thirty seconds that is used if the server takes too long to respond, and policy settings for the retries and the circuit breaker.

The retry settings state that after a first failed request, another three attempts will be made (this means you can get up to four requests). There will be an exponentially longer backoff or delay between each request. The first retry request will occur after two seconds, the second after another four seconds and the third occurs after another eight seconds.

The circuit breaker states that it will allow 12 consecutive failed requests before breaking the circuit and throwing a BrokenCircuitException for every attempted request. The circuit will be broken for thirty seconds.

Generally, my advice is when allowing a high number of exceptions before breaking, use a longer duration of break. When allowing a lower number of exceptions before breaking, keep the duration of break small. Another possibility I’ve not tried is to combine these two scenarios, so you have two circuit breakers. The circuit breaker with the lower limit would kick in first but only break the circuit for a short time; if exceptions are no longer thrown, then things go back to normal quickly. If exceptions continue to be thrown, then the other circuit breaker with a longer duration of break would kick in and the circuit would be broken for a longer period of time.
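
For what it’s worth, here is a rough sketch of that two circuit breaker idea using Polly’s policy wrapping. It’s not part of the sample project and the numbers are purely illustrative; the wrapped policy could then be registered in place of the single HttpCircuitBreaker policy.

// A sensitive breaker that trips quickly but only breaks for a short time,
// wrapped inside a more tolerant breaker that breaks for much longer.
var quickToTripBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 3,
        durationOfBreak: TimeSpan.FromSeconds(5));
var slowToTripBreaker = HttpPolicyExtensions
    .HandleTransientHttpError()
    .CircuitBreakerAsync(
        handledEventsAllowedBeforeBreaking: 12,
        durationOfBreak: TimeSpan.FromMinutes(2));
var combinedCircuitBreaker = Policy.WrapAsync(slowToTripBreaker, quickToTripBreaker);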

You can of course play with these numbers, what you set them to will depend on your application.

{
  "RocketClient": {
    "BaseAddress": "http://example.com",
    "Timeout": "00:00:30"
  },
  "Policies": {
    "HttpCircuitBreaker": {
      "DurationOfBreak": "00:00:30",
      "ExceptionsAllowedBeforeBreaking": 12
    },
    "HttpRetry": {
      "BackoffPower: 2
      "Count": 3
    }
  }
}

Configuring Polly Policies

Below is the implementation of AddPolicies. It starts by reading a configuration section of type PolicyOptions from our appsettings.json file. It then adds the Polly PolicyRegistry, which is where Polly stores its policies. Finally, we add a retry and a circuit breaker policy and configure them using the settings we’ve read from the PolicyOptions.

public static class ServiceCollectionExtensions
{
    private const string PoliciesConfigurationSectionName = "Policies";

    public static IServiceCollection AddPolicies(
        this IServiceCollection services,
        IConfiguration configuration,
        string configurationSectionName = PoliciesConfigurationSectionName)
    {
        var section = configuration.GetSection(configurationSectionName);
        services.Configure<PolicyOptions>(section);
        var policyOptions = section.Get<PolicyOptions>();

        var policyRegistry = services.AddPolicyRegistry();
        policyRegistry.Add(
            PolicyName.HttpRetry,
            HttpPolicyExtensions
                .HandleTransientHttpError()
                .WaitAndRetryAsync(
                    policyOptions.HttpRetry.Count,
                    retryAttempt => TimeSpan.FromSeconds(Math.Pow(policyOptions.HttpRetry.BackoffPower, retryAttempt))));
        policyRegistry.Add(
            PolicyName.HttpCircuitBreaker,
            HttpPolicyExtensions
                .HandleTransientHttpError()
                .CircuitBreakerAsync(
                    handledEventsAllowedBeforeBreaking: policyOptions.HttpCircuitBreaker.ExceptionsAllowedBeforeBreaking,
                    durationOfBreak: policyOptions.HttpCircuitBreaker.DurationOfBreak));

        return services;
    }
}

public static class PolicyName
{
    public const string HttpCircuitBreaker = nameof(HttpCircuitBreaker);
    public const string HttpRetry = nameof(HttpRetry);
}

public class PolicyOptions
{
    public CircuitBreakerPolicyOptions HttpCircuitBreaker { get; set; }
    public RetryPolicyOptions HttpRetry { get; set; }
}

public class CircuitBreakerPolicyOptions
{
    public TimeSpan DurationOfBreak { get; set; } = TimeSpan.FromSeconds(30);
    public int ExceptionsAllowedBeforeBreaking { get; set; } = 12;
}

public class RetryPolicyOptions
{
    public int Count { get; set; } = 3;
    public int BackoffPower { get; set; } = 2;
}

Notice that each policy is using the HandleTransientHttpError method, which tells Polly when to apply the retry and circuit breaker. One important question is: what is a transient HTTP error according to Polly? Well, looking at the source code in the Polly.Extensions.Http GitHub repository, it looks like they consider any of the below as transient errors:

  1. Any HttpRequestException thrown. This can happen when the server is down.
  2. A response with a status code of 408 Request Timeout.
  3. A response with a status code of 500 or above.

Configuring HttpClient

Finally, we can get down to configuring our HttpClient itself. The AddHttpClient method starts by binding the TClientOptions type to a configuration section in appsettings.json. TClientOptions is a derived type of HttpClientOptions which just contains a base address and timeout value. I’ll come back to CorrelationIdDelegatingHandler and UserAgentDelegatingHandler.

We set the primary HttpClientHandler to be DefaultHttpClientHandler. This type just enables automatic GZIP and Deflate decompression of responses. Brotli support is being added soon, so watch out for that. Finally, we add the retry and circuit breaker policies to the HttpClient.

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddHttpClient<TClient, TImplementation, TClientOptions>(
        this IServiceCollection services,
        IConfiguration configuration,
        string configurationSectionName)
        where TClient : class
        where TImplementation : class, TClient
        where TClientOptions : HttpClientOptions, new() =>
        services
            .Configure<TClientOptions>(configuration.GetSection(configurationSectionName))
            .AddTransient<CorrelationIdDelegatingHandler>()
            .AddTransient<UserAgentDelegatingHandler>()
            .AddHttpClient<TClient, TImplementation>()
            .ConfigureHttpClient(
                (sp, options) =>
                {
                    var httpClientOptions = sp
                        .GetRequiredService<IOptions<TClientOptions>>()
                        .Value;
                    options.BaseAddress = httpClientOptions.BaseAddress;
                    options.Timeout = httpClientOptions.Timeout;
                })
            .ConfigurePrimaryHttpMessageHandler(x => new DefaultHttpClientHandler())
            .AddPolicyHandlerFromRegistry(PolicyName.HttpRetry)
            .AddPolicyHandlerFromRegistry(PolicyName.HttpCircuitBreaker)
            .AddHttpMessageHandler<CorrelationIdDelegatingHandler>()
            .AddHttpMessageHandler<UserAgentDelegatingHandler>()
            .Services;
}

public class DefaultHttpClientHandler : HttpClientHandler
{
    public DefaultHttpClientHandler() =>
        this.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
}

public class HttpClientOptions
{
    public Uri BaseAddress { get; set; }
    
    public TimeSpan Timeout { get; set; }
}

CorrelationIdDelegatingHandler

When I’m making a HTTP request from an API, i.e. it’s an API to API call and I control both sides, I use the X-Correlation-ID HTTP header to trace requests as they move down the stack. The CorrelationIdDelegatingHandler is used to take the correlation ID for the current HTTP request and pass it down to the request made in the API to API call. The implementation is pretty simple; it’s just setting a HTTP header.

The power comes when you are using something like Application Insights, Kibana or Seq for logging. You can now take the correlation ID for a request and see the logs for it from multiple APIs or services. This is really invaluable when you are dealing with a microservices architecture.

public class CorrelationIdDelegatingHandler : DelegatingHandler
{
    private readonly ICorrelationContextAccessor correlationContextAccessor;
    private readonly IOptions<CorrelationIdOptions> options;

    public CorrelationIdDelegatingHandler(
        ICorrelationContextAccessor correlationContextAccessor,
        IOptions<CorrelationIdOptions> options)
    {
        this.correlationContextAccessor = correlationContextAccessor;
        this.options = options;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains(this.options.Value.Header))
        {
            request.Headers.Add(this.options.Value.Header, correlationContextAccessor.CorrelationContext.CorrelationId);
        }

        // Else the header has already been added due to a retry.

        return base.SendAsync(request, cancellationToken);
    }
}

UserAgentDelegatingHandler

It’s often useful to know something about the client that is calling your API for logging and debugging purposes. You can use the User-Agent HTTP header for this purpose.

The UserAgentDelegatingHandler just sets the User-Agent HTTP header by taking the API’s assembly name and version attributes. You need to set the Version and Product attributes in your csproj file for this to work. The name and version are then placed along with the current operating system into the User-Agent string.

Now the next time you get an error in your API, you’ll know the client application that caused it (if it’s under your control).

public class UserAgentDelegatingHandler : DelegatingHandler
{
    public UserAgentDelegatingHandler()
        : this(Assembly.GetEntryAssembly())
    {
    }

    public UserAgentDelegatingHandler(Assembly assembly)
        : this(GetProduct(assembly), GetVersion(assembly))
    {
    }

    public UserAgentDelegatingHandler(string applicationName, string applicationVersion)
    {
        if (applicationName == null)
        {
            throw new ArgumentNullException(nameof(applicationName));
        }

        if (applicationVersion == null)
        {
            throw new ArgumentNullException(nameof(applicationVersion));
        }

        this.UserAgentValues = new List<ProductInfoHeaderValue>()
        {
            new ProductInfoHeaderValue(applicationName.Replace(' ', '-'), applicationVersion),
            new ProductInfoHeaderValue($"({Environment.OSVersion})"),
        };
    }

    public UserAgentDelegatingHandler(List<ProductInfoHeaderValue> userAgentValues) =>
        this.UserAgentValues = userAgentValues ?? throw new ArgumentNullException(nameof(userAgentValues));

    public List<ProductInfoHeaderValue> UserAgentValues { get; set; }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        if (!request.Headers.UserAgent.Any())
        {
            foreach (var userAgentValue in this.UserAgentValues)
            {
                request.Headers.UserAgent.Add(userAgentValue);
            }
        }

        // Else the header has already been added due to a retry.

        return base.SendAsync(request, cancellationToken);
    }

    private static string GetProduct(Assembly assembly) => GetAttributeValue<AssemblyProductAttribute>(assembly);

    private static string GetVersion(Assembly assembly)
    {
        var infoVersion = GetAttributeValue<AssemblyInformationalVersionAttribute>(assembly);
        if (infoVersion != null)
        {
            return infoVersion;
        }

        return GetAttributeValue<AssemblyFileVersionAttribute>(assembly);
    }

    private static string GetAttributeValue<T>(Assembly assembly)
        where T : Attribute
    {
        var type = typeof(T);
        var attribute = assembly
            .CustomAttributes
            .Where(x => x.AttributeType == type)
            .Select(x => x.ConstructorArguments.FirstOrDefault())
            .FirstOrDefault();
        return attribute == null ? string.Empty : attribute.Value.ToString();
    }
}

<PropertyGroup Label="Package">
  <Version>1.0.0</Version>
  <Product>My Application</Product>
  <!-- ... -->
</PropertyGroup>

Sample GitHub Project

I realize that was a lot of boilerplate code to write. It was difficult to write this as more than one blog post. To aid in digestion, I’ve created a GitHub sample project with the full working code.

The sample project contains two APIs. One makes a HTTP request to the other. You can pass a query argument to decide whether the callee API will fail or not and try out the retry and circuit breaker logic. Feel free to play with the configuration in appsettings.json and see what options work best for your application.


PluralSight vs LinkedIn Learning vs FrontendMasters vs Egghead.io vs YouTube


I use a lot of video resources to keep up to date with .NET, JavaScript and other tech like Docker, Kubernetes etc. I’ve compiled here a list of these resources and my impressions of using them and often paying for them out of my own pocket. There is something useful in all of them but I tend to find that some cater for certain technologies better than others.

PluralSight

PluralSight is the service that tries to be all things for all people. It tended to be more .NET focused in the past but things are changing on that front. In more recent times, the breadth of topics covered has definitely gotten a lot wider. They’ve added IT operations courses for example (I recently watched a really good course on Bash, which strangely is not something that’s easily available on the internet), as well as courses on Adobe products like Photoshop and Illustrator, which is handy.

In terms of software development, the courses are very high quality but they also take a lot of time for the authors to produce, so don’t expect long in-depth courses on newer technologies e.g. the courses on Kubernetes and Vue.js have only recently been added and are certainly more on the ‘Getting Started’ end of the spectrum. However, in time I expect the portfolio to fill out.

There is also definitely still a .NET bias to the site; there aren’t as many in-depth frontend JavaScript courses as I would like, for example. The ones that do exist are not from the well known frontend developers in the community. Also, some of the courses can be quite old (the tech world does move so fast). You’d think you would find some decent courses on CSS for example, but the courses available are pretty ancient.

They have apps for all the usual platforms that let you download video offline which is a must for me, for when I travel on the London underground. The monthly cost is not prohibitive for the quantity of courses available at $35 per month. I’ve paid for it in the past but get it free right now as a Microsoft MVP.

I’d recommend this as a primary source of information when learning some new technology.

LinkedIn Learning

I only discovered that LinkedIn Learning existed last year when I learned that Microsoft MVP’s get it for free. Apparently LinkedIn Learning used to be called Lynda.com which I had heard of and trialed in the past. I’ve always thought of Lynda as a ‘How to use X software’ kind of resource. They’ve literally got hours and hours worth of courses on Adobe Photoshop for example.

I was surprised at how much content they actually have. The ground is a bit thin when it comes to .NET content, however, and the courses that I have ended up watching are pretty short and to the point, with not a huge amount of depth. I think this varies a lot though; I’ve seen Adobe Illustrator courses that are 14 hours long!

In the end I’ve used LinkedIn Learning for learning Kubernetes, due to PluralSight’s library being a bit thin on that subject and also GraphQL.NET where LinkedIn Learning has the only course available on the internet.

It costs $25 per year to subscribe, so it’s cheaper than the other offerings. Overall, I probably wouldn’t pay for this service if I didn’t get it for free. At best, I might subscribe for a month at $30 to view a particular course. I also feel like I should be spending more time exploring their content.

Frontend Masters

Frontend Masters does exactly what it says on the tin. They get industry leading frontend professionals to present courses on HTML, CSS, and JavaScript mainly, although they also delve into the backend with Node.js, Mongo and GraphQL courses.

The quality and depth of these courses is extremely high. The format is unusual in that the expert is delivering the course to an actual audience of people and there are also question/answer sections at the end of each module. This means that the courses tend to be quite long. If you’re like me and you want to know every gritty detail, then that’s great.

The library of courses is not very large but I’d definitely recommend this service to anyone interested in frontend or GraphQL Node.js development. The price is quite steep at $39 per month, considering the smaller number of targeted courses available. I’m waiting to see if they have a sale at the end of the year before I drop hard cash on this learning resource.

Egghead.io

Egghead.io is a unique learning resource. Its USP is that it serves a series of short two minute videos that make up a course. If you run the videos at 1.5x or 2x speed, you can be done learning something in 15 minutes! In the real world, each video is so concise and full of useful information that I found myself having to go back and watch things again. This is definitely the fastest way to learn something.

The content is similar to Frontend Masters, i.e. it’s mainly focused on the frontend, with a few forays into Node.js, ElasticSearch, Mongo and Docker, although they tend to have a focus on JavaScript frameworks.

The cost of this service is $300 per year but if you wait until the sale at the end of the year like I did, you can bag a subscription for $100, which I think is more reasonable. I’m coming up for renewal time and I’m not sure I will renew because I’ve pretty much watched all of the courses that I was interested in. Because the courses are very short and fairly limited in number, you can get through them pretty quickly. That said, it was definitely worth investing in a year’s subscription. I might purchase a subscription again in a year or two when they add more content.

YouTube/Vimeo/Channel9

YouTube, Vimeo and Channel 9 have a wealth of videos that you should not ignore. Plus the best part is that it’s all free. Here are some channels I find useful:

NDC Conferences

The NDC Conferences seem to never end. They take place three times a year (at last count) but they release videos all year round, so it’s a never ending battle to keep up. For that reason, I’ve been trying to avoid watching them lately. The best place to watch them is on Vimeo where you can easily download them offline in high quality.

You have expert speakers who often repeat their talk multiple times, so you often end up wondering whether you’ve seen a talk already. The talks are often very high level and often non-technical talks about design, management, managing your career or just telling stories about how some software was built.

Honestly, it can be fun to watch but I don’t feel like I learn a lot watching these talks, so I’ve been a lot more strict about what I do watch.

Google Developers

The Google Developers YouTube channel clogs up your feed with a lot of pointless two minute videos throughout the year. Then once a year, they hold their developer conference, where the talks are actually interesting. The videos are Google focused, so think Chrome, JavaScript, Workbox and Android.

Microsoft Conferences

Microsoft holds developer conferences like Build and Ignite all the time. You can watch them on Channel 9 or YouTube. Microsoft builds a lot of tech, so talks are fairly varied.

Azure Friday

Azure Friday is available on YouTube or Channel 9 and lets you keep up to date with Microsoft Azure’s constantly evolving cloud platform. The videos are short and released once a week or so.

CSS Day

CSS Day is a conference that runs every year where CSS experts stand up and deliver a talk on a particular subject, often regarding some new CSS feature or some feature that has not yet been standardised. Well worth watching; none of the resources above do a good job of covering CSS in my opinion, except maybe Frontend Masters to some extent.

.NET Foundation

The .NET Foundation videos can be found on YouTube. It’s really two channels combined. One for .NET in general and one for ASP.NET.

The .NET videos typically have very in depth discussions about what features to add to the .NET Framework. They also sometimes release a video explaining some new features of .NET. Not something I watch often but worth keeping an eye on occasionally.

The ASP.NET Community Standup releases a video on most Tuesdays discussing new features being added to ASP.NET Core or sometimes .NET Core in general. Always worth watching.

Heptio

The Heptio YouTube channel is a bit like the ASP.NET Community Standup for Kubernetes. There are new videos every week but they vary a lot from beginner to extreme expert level and it’s difficult to tell what the level is going to be. If you’re interested in Kubernetes, it’s worth watching the first 10 mins of every show, so you can keep up to date with what’s new in Kubernetes.

Grab a Bargain

With the Christmas period approaching, most of the paid for services will offer some kind of sale. Now is the time to keep an eye out for that and grab a bargain.


Is ASP.NET Core now a Mature Platform?


The Upgrade Train

I started using ASP.NET Core back when it was still called ASP.NET 5 and it was still in beta. In those early days every release introduced a sea change. The betas were not betas at all but more like alpha quality bits. I spent more time than I’d like just updating things to the latest version with each release.

Compared to those early days, updates now arrive at a glacial pace; compared to the full fat .NET Framework though, it’s been like moving from a camel to an electric car. When releases do come, there is still a lot in each release. If you have a number of microservices using ASP.NET Core, it’s not quick to get them all updated. Also, it’s not just ASP.NET Core but all of the satellite assemblies built on top of .NET Core that keep changing too, things like Serilog and Swashbuckle (by the way, should I be looking into NSwag? It seems to be the new hotness).

What about other platforms? Well, I’m familiar with Node.js and the situation there is bordering on silly. Packages are very unstable and constantly being rev’ed. Keeping up and staying on latest is a constant battle almost every day. Each time you upgrade a package, there is also a danger that you will break something. With .NET Core, there are fewer packages and they are much more stable.

Overall, things move fast in software development in general and for me that’s what keeps it interesting. ASP.NET Core is no exception.

Show me the APIs!

.NET Core and ASP.NET Core started out very lightweight. There were few APIs available. You often had to roll your own code, even for basic features that should exist.

In today’s world, a lot of APIs have been added and where there are gaps, the community has filled them in many places. The .NET Framework still has a lot of APIs that have not been ported across yet. A lot of these gaps are Windows specific and I’m sure a lot will be filled in the .NET Core 3.0 timeframe.

When I make a comparison with Node.js and take wider community packages into consideration, I’d say that .NET Core has fewer APIs. Image compression APIs don’t even exist on .NET, for example. We were late to the party with Brotli compression, which was recently added to .NET Core and is soon going to be added to the ASP.NET Core compression middleware, so we’ll get there eventually. We have GraphQL.NET, which is very feature rich but still lags slightly behind the JavaScript Apollo implementation, which has first party support (perhaps that comparison is a little unfair as GraphQL is native to Node.js). When I wanted to add Figlet font support to Colorful.Console (Figlet fonts let you draw characters using ASCII art), I had to base my implementation off a JavaScript one. I’m not the only one who translates JavaScript code to C# either.

With all this said, Node.js and JavaScript in general have their own unique problems, otherwise I’d be using them instead of being a mainly .NET Core developer.

It’s Open Sauce

Making .NET Core and ASP.NET Core open source has made a huge difference. We’d all occasionally visit the .NET Framework docs to understand how an API worked, but today the place to go is GitHub, where you can not only see the code but read other people’s issues and even raise issues of your own. There is often someone who has been there and done it all before you.

Not only that, but a huge community has grown up, with bloggers and new projects becoming more commonplace. It is hard to overstate how much this change has improved a developer’s standard of living. Just take a look at the brilliant discoverdot.net site, where you can see 634 GitHub .NET projects, for all the evidence you need.

Feel the Powa!

ASP.NET Core’s emphasis on performance is refreshing. It’s doing well in the TechEmpower benchmarks with more improvements in sight. It’s nice to get performance boosts from your applications every time you upgrade your application without having to do any work at all yourself.

While the platform is miles ahead of Node.js, there are newer languages like Go that are also quite nice to write code for and blazing fast too. However, I’m not sure you can be as productive writing Go as with .NET Core. Also, you’ve got to use the right tool for the job; there are definitely cases where Go does a better job.

One interesting effort that I’ve been keeping an eye on for some time now is .NET Native where C# code is compiled down to native code instead of an intermediate language. This means that the intermediate language does not need to be JIT’ed and turned into machine code at runtime which speeds up execution the first time the application is run. A nice side effect of doing this is that you also end up with a single executable file. You get all the benefits of a low level language like Go or Rust with none of the major drawbacks! I’ve been expecting this to hit for some time now but it’s still not quite ready.

Security is Boring but Important

This is a subject that most people have never thought about much. It’s trivial for an evil doer to insert some rogue code into an update to a package and have that code running in apps soon after. In fact that’s what happened with the event-stream NPM package recently. I highly recommend reading Jake Archibald’s post “What happens when packages go bad”.

What about .NET Core? Well, .NET is in the fairly rare position of having a large number of official packages written and maintained by Microsoft. This means that you need fewer third party packages and in fact you can sometimes get away with using no third party dependencies whatsoever. It also means that the third party dependencies you do end up using have fewer other dependencies in turn.

NuGet also recently added support for signed packages, which stops packages from being tampered with between NuGet’s server and your build machine.

Overall this is all about reducing risk. There will always be a chance that somebody will do something bad. I’d argue that there is less of a risk of that happening on the .NET platform.

Who is using it?

Bing.com is running on ASP.NET Core and a site doesn’t get much bigger than that. StackOverflow is working on their transition to .NET Core. The Orchard CMS uses .NET Core. Even WordPress and various PHP apps can be run on .NET Core these days using peachpie.

What’s Still Missing?

First of all, let me say that every platform has gaps that are sometimes filled by the community. There are several missing APIs that seem obvious to me but have yet to be built or improved enough. Here are a few basic examples of things that could be improved and where maybe the small team of 20 ASP.NET Core developers (yes, their team is that small and they’ve done a tremendous job of building so much with so few resources, so they definitely deserve a pat on the back) could perhaps better direct their efforts.

Caching Could be Better

The response caching still only supports in-memory caching. If you want to cache to Redis using the IDistributedCache, bad luck. Even if you go with it and use the in-memory cache, if you’re using cookies or the Authorization HTTP header, you’ve only got more bad luck as response caching turns itself off in those cases. Caching is an intrinsic part of the web, we need to do a better job of making it easier to work with.

Everyone is Partying with Let’s Encrypt

Security is hard! HTTPS is hard! Dealing with certificates is hard! What if you could use some middleware and supply it with a couple of lines of configuration and never have to think about any of it ever again? Isn’t that something you’d want? Well, it turns out that Nate McMaster has built a Let’s Encrypt middleware that does just that but he needs some help to persuade his boss to build the feature, so upvote this issue.

Microsoft seems a bit late to the party; it’s also one of the top voted feature requests on Azure’s User Voice.

HTTP/2 and HTTP/3

HTTP/2 support in ASP.NET Core is available in 2.2 but it’s not battle tested so you can’t run it at the edge, wide open to the internet for fear of getting hacked.

HTTP/3 (formerly named QUIC) support has been talked about and the groundwork for it has already been done so that the Kestrel web server can support multiple protocols easily. Let’s see how quickly we can get support.

One interesting thing about adding support for more protocols to ASP.NET Core is that most people can’t make use of them or don’t need to. ASP.NET Core apps are often hidden away behind a reverse proxy web server like IIS or NGINX which implements these protocols itself. Even using something like Azure App Service means that you run behind a special fork of IIS. So I’ve been thinking, what is the point? Well, you could use Kubernetes to expose your ASP.NET Core app over port 80 and get the performance boost of not having to use a reverse proxy web server as a man in the middle. Also, contrary to popular belief, Kubernetes can expose multiple ASP.NET Core apps over port 80 (at least Azure AKS can).

Serving Static Files

Serving static files is one of the most basic features. There are a few things that could make this a lot better. You can’t use the authorization middleware to limit access to static files but I believe that’s changing in ASP.NET Core 3.0. Serving GZIP’ed or Brotli’ed content is a must today. Luckily dynamic Brotli compression will soon be available. What’s not available is serving pre-compressed static files.

Is It A Mature Platform?

There is a lot less churn. There are a lot of open source projects you can leverage. A large enough developer base has now grown up, so you see a lot more GitHub projects, StackOverflow questions, bloggers like myself and companies who make their living from the platform.

There seems to be a trend at the moment where people are jumping ship from long standing platforms and languages to brand new ones. Android developers have jumped from Java to Kotlin (and have managed to delete half their code in the process, Java is so verbose!). The poor souls who wrote Objective-C have jumped to Swift. Where once apps would be written in C++, they are now written in Go or Rust. Where once people wrote JavaScript, they are still writing JavaScript (TypeScript has taken off but not completely)…ok, that has not changed. .NET Core seems to be the only one that has bucked the trend and tried to reinvent itself completely while not changing things too much, and it is still succeeding in the process.

So yes, yes it is, is my answer.


A Simple and Fast Object Mapper


I have a confession to make…I don’t use Automapper. For those who don’t know Automapper is the number one object to object mapper library on NuGet by far. It takes properties from one object and copies them to another. I couldn’t name the second place contender and looking on NuGet, nothing else comes close. This post talks about object mappers, why you might not want to use Automapper and introduces a faster, simpler object mapper that you might want to use instead.

Why use an Object Mapper

This is a really good question. Most of the time, it boils down to using Entity Framework. Developers want to be good citizens and not expose their EF Core models in the API surface area because this can have really bad security implications (See overposting here).

I’ve received a lot of comments at this point in the conversation saying “Why don’t you use Dapper instead? Then you don’t need model classes for your data layer, you can just go direct to your view model classes via Dapper”. Dapper is really great, don’t get me wrong, but it’s not always the right tool for the job; there are distinct disadvantages to using Dapper instead of EF Core:

  1. I have to write SQL. That’s not so bad (you should learn SQL!) but it takes time to context switch and you often find yourself copying and pasting code back and forth from SQL Management Studio or Azure Data Studio (I’ve started using it, you should too). It just makes development a bit slower, that’s all.
  2. EF Core can be run in-memory, making for very fast unit tests (see the sketch after this list). With Dapper, I have to run functional tests against a real SQL Server database, which is slow, brittle and a pain to set up. Before each test, you need to ensure the database is set up with just the right data so that your tests are repeatable, otherwise you end up with flaky tests. Don’t underestimate the power of this point.
  3. EF Core Migrations can automatically generate the database for me. With Dapper, I have to use external tools like Visual Studio Database Projects, DbUp or Flyway to create my database. That’s an extra headache at deployment time. EF Core lets you cut out the extra time required to manage all of that.
  4. EF Core Migrations can automatically handle database migrations for me. Migrating databases is hard! Keeping track of what state the database is in and making sure you’ve written the right ALTER TABLE scripts is extra work that can be automated. EF Core handles all that for me. Alternatively, Visual Studio Database Projects can also get around this problem.
  5. I can switch database provider easily. Ok…ok…nobody does this in the real world and I can only think of one case where this happened. People always mention this point though for some reason.
  6. EF Core defaults to using the right data types, while on the other hand human beings…have too often chosen the wrong data types and then paid the penalties later on when the app is in production. Use NVARCHAR instead of VARCHAR and DATETIMEOFFSET instead of DATETIME2 or even DATETIME people! I’ve seen professional database developers make these mistakes all the time. Automating this ensures that the correct decision is made all the time.
  7. EF Core is not that much slower than using Dapper. We’re not talking about orders of magnitude slower as it was with EF6. Throwing away all of the above benefits for slightly better speed is not a tradeoff that everyone can make though, it depends on the app and situation.
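
Here is the sketch promised in point 2: a minimal example of running EF Core against the in-memory provider in a unit test, assuming the Microsoft.EntityFrameworkCore.InMemory package and reusing the Car model and ApplicationDbContext from the seed data post above.

var options = new DbContextOptionsBuilder<ApplicationDbContext>()
    .UseInMemoryDatabase(databaseName: "CarsUnitTest")
    .Options;

using (var context = new ApplicationDbContext(options))
{
    // Arrange some test data without touching a real SQL Server instance.
    context.Cars.Add(new Car() { CarId = 1, Make = "Ferrari", Model = "F40" });
    context.SaveChanges();
}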

You need to use the right tool for the right job. I personally use Dapper, where there is an existing database with all the migrations etc. already handled by external tools and use EF Core where I’m working with a brand new database.

What is good about Automapper?

Automapper is great when you have a small project that you want to throw together quickly and the objects you are mapping to and from have the same or similar property names and structure.

It’s also great for unit testing because once you’ve written your mapper, testing it is just a matter of adding a one liner to test that all the properties in your object have a mapping setup for them.

Finally if you use Automapper with Entity Framework, you can use the ProjectTo method which uses the property mapping information to limit the number of fields pulled back from your database making the query a lot more efficient. I think this is probably the biggest selling point of Automapper. The alternative is to write your own Entity Framework Core projection.
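
A hand-rolled projection is just a Select into the destination type, so EF Core only pulls back the columns you actually need. A minimal sketch, assuming a hypothetical CarViewModel and reusing the Car model and a context instance from the earlier posts:

// Only the Make and Model columns are read from the database.
var viewModels = context.Cars
    .Select(x => new CarViewModel()
    {
        Make = x.Make,
        Model = x.Model,
    })
    .ToList();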

What is wrong with Automapper?

Cezary Piatek writes a very good rundown of some of the problems when using Automapper. I’m not going to repeat what he says but here is a short summary:

  1. In the real world, mapping between identical or similar classes is not that common.
  2. If you have similar classes you are mapping between, there is no guarantee that they will not diverge, requiring you to write increasingly complex Automapper code or rewriting the mapping logic without Automapper.
  3. Finding all usages of a property no longer works when using Automapper unless you explicitly map every property, lowering discoverability.
  4. If you have a complex scenario, Jimmy Bogard (the author of the tool) suggests not using Automapper:
    • “DO NOT use AutoMapper except in cases where the destination type is a flattened subset of properties of the source type”
    • “DO NOT use AutoMapper to support a complex layered architecture”
    • “AVOID using AutoMapper when you have a significant percentage of custom configuration in the form of Ignore or MapFrom”
  5. If you’re mapping from database models to view models in an API, then dumping your database schema out as JSON makes for a bad API. You usually want more complex nested objects.
  6. How much time does it really save? Object mapping code is the simplest code a developer can write, I can do it without thinking and knock a few mappings out in a couple of minutes.
  7. Automapper is complex. It has a massive documentation site just to show you how to use it, and just check out the 29 point list of guidelines on how to use it. Why should copying values from one object to another need to be so complex?

A Simple and Fast Object Mapper

I wrote an object mapper library that consists of a couple of interfaces and a handful of extension methods to make mapping objects slightly easier. The API is super simple and very light and thus fast. You can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project. Let’s look at an example. I want to map to and from instances of these two classes:

public class MapFrom
{
    public bool BooleanFrom { get; set; }
    public DateTimeOffset DateTimeOffsetFrom { get; set; }
    public int IntegerFrom { get; set; }
    public string StringFrom { get; set; }
}

public class MapTo
{
    public bool BooleanTo { get; set; }
    public DateTimeOffset DateTimeOffsetTo { get; set; }
    public int IntegerTo { get; set; }
    public string StringTo { get; set; }
}

The implementation for an object mapper using the .NET Boxed Mapper is shown below. Note the IMapper interface which is the heart of the .NET Boxed Mapper. There is also an IAsyncMapper if for any reason you need to map between two objects asynchronously, the only difference being that it returns a Task.

public class DemoMapper : IMapper<MapFrom, MapTo>
{
    public void Map(MapFrom source, MapTo destination)
    {
        destination.BooleanTo = source.BooleanFrom;
        destination.DateTimeOffsetTo = source.DateTimeOffsetFrom;
        destination.IntegerTo = source.IntegerFrom;
        destination.StringTo = source.StringFrom;
    }
}

And here is an example of how you would actually map a single object, array or list:

public class UsageExample
{
    private readonly IMapper<MapFrom, MapTo> mapper = new DemoMapper();
    
    public MapTo MapOneObject(MapFrom source) => this.mapper.Map(source);
    
    public MapTo[] MapArray(List<MapFrom> source) => this.mapper.MapArray(source);
    
    public List<MapTo> MapList(List<MapFrom> source) => this.mapper.MapList(source);
}

I told you it was simple! Just a few convenience extension methods bundled together with an interface that makes it just ever so slightly quicker to write object mapping than rolling your own implementation. If you have more complex mappings, you can compose your mappers in the same way that your models are composed.
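
As a sketch of that composition (all of the Order and OrderItem types here are hypothetical), a parent mapper can simply take a child IMapper as a constructor dependency and use the MapList extension method on the child collection:

public class OrderMapper : IMapper<OrderFrom, OrderTo>
{
    private readonly IMapper<OrderItemFrom, OrderItemTo> itemMapper;

    public OrderMapper(IMapper<OrderItemFrom, OrderItemTo> itemMapper) =>
        this.itemMapper = itemMapper;

    public void Map(OrderFrom source, OrderTo destination)
    {
        destination.Id = source.Id;
        // Delegate the child collection to the composed item mapper.
        destination.Items = this.itemMapper.MapList(source.Items);
    }
}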

Performance

Keeping things simple makes the .NET Boxed Mapper fast. I put together some benchmarks using Benchmark.NET which you can find here. The baseline is hand written mapping code and I compare that to Automapper and the .NET Boxed Mapper.

I even got a bit of help from the great Jon Skeet himself on how to improve the performance of instantiating an instance when using the generic new() constraint, which it turns out is pretty slow because it uses Activator.CreateInstance under the hood.
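The exact code isn’t shown in this post, but a minimal sketch of the compiled-expression approach (assuming nothing about the library’s internals) looks like this; it builds and caches a delegate for the parameterless constructor instead of relying on Activator.CreateInstance:

using System;
using System.Linq.Expressions;

public static class Factory<T>
    where T : new()
{
    // Compile '() => new T()' once per closed generic type and cache the delegate.
    private static readonly Func<T> CreateInstance =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();

    // Cheaper than writing 'new T()' in generic code, which the compiler turns into Activator.CreateInstance<T>().
    public static T Create() => CreateInstance();
}

Calling Factory<MapTo>.Create() then hands back a new destination instance without paying the reflection cost on every map.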

Object to Object Mapping Benchmark

This benchmark measures the time taken to map from a MapFrom object to the MapTo object which I show above.

Simple object to object mapping benchmark
Method       Runtime        Mean  Ratio  Gen 0/1k Op  Allocated Memory/Op
Baseline     Clr        7.877 ns   1.00       0.0178                 56 B
BoxedMapper  Clr       25.431 ns   3.07       0.0178                 56 B
Automapper   Clr      264.934 ns  31.97       0.0277                 88 B
Baseline     Core       9.327 ns   1.00       0.0178                 56 B
BoxedMapper  Core      17.174 ns   1.84       0.0178                 56 B
Automapper   Core     158.218 ns  16.97       0.0279                 88 B

List Mapping Benchmark

This benchmark measures the time taken to map a List of MapFrom objects to a list of MapTo objects.

object list to object list mapping benchmark
Method       Runtime        Mean  Ratio  Gen 0/1k Op  Allocated Memory/Op
Baseline     Clr        1.833 us   1.00       2.0542              6.31 KB
BoxedMapper  Clr        3.295 us   1.80       2.0523              6.31 KB
Automapper   Clr       10.569 us   5.77       2.4872              7.65 KB
Baseline     Core       1.735 us   1.00       2.0542              6.31 KB
BoxedMapper  Core       2.237 us   1.29       2.0523              6.31 KB
Automapper   Core       3.220 us   1.86       2.4872              7.65 KB

Speed

It turns out that Automapper does a really good job on .NET Core in terms of speed but is quite a bit slower on .NET Framework. This is probably down to the intrinsic improvements in .NET Core itself. .NET Boxed is quite a bit faster than Automapper on .NET Framework, but the difference on .NET Core is much smaller, at around one and a half times in the list mapping benchmark. The .NET Boxed Mapper is also very close to the baseline but is a bit slower. I believe that this is due to the use of method calls on interfaces, whereas the baseline mapping code only uses method calls on concrete classes.

Zero Allocations

.NET Boxed adds zero memory allocations beyond the mapped objects themselves, while Automapper allocates a small amount extra per mapping. Since object mapping is a fairly common operation, these small differences can add up over time and cause pauses in the app while the garbage collector cleans up the memory. There seems to be a trend in .NET towards zero allocation code. If you care about that, then this might help.

Conclusions

What I’ve tried to do with the .NET Boxed Mapper is fill a niche which I thought Automapper was not quite filling: a super simple and fast object mapper that’s just a couple of interfaces and extension methods to help you along the way and provide a skeleton on which to hang your code. If Automapper fits your app better, go ahead and use that. If you think it’s useful, you can use the Boxed.Mapping NuGet package or look at the code on GitHub in the Dotnet-Boxed/Framework project.

The post A Simple and Fast Object Mapper appeared first on Muhammad Rehan Saeed.


Securing ASP.NET Core in Docker


Some time ago, I blogged about how you can get some extra security when running Docker containers by making their file systems read-only. This ensures that should an attacker get into the container somehow, they won’t be able to change any files. This only works with certain containers that support it however and unfortunately, at that time ASP.NET Core did not support running in a Docker container with a read-only file system. Happily, this is now fixed!

Lets see an example. I created a brand new hello world ASP.NET Core project and added this Dockerfile:

FROM microsoft/dotnet:2.2-sdk AS builder
WORKDIR /source
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish --output /app/ --configuration Release

FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "ReadOnlyTest.dll"]

I build the Docker image using this command:

docker build -t read-only-test .

If I run this image with a read-only file system:

docker run --rm --read-only -it -p 8000:80 read-only-test

This outputs the following error as read-only file systems are not supported by default:

Failed to initialize CoreCLR, HRESULT: 0x80004005

If I now run the same image with the COMPlus_EnableDiagnostics environment variable turned off:

docker run --rm --read-only -it -p 8000:80 -e COMPlus_EnableDiagnostics=0 read-only-test

The app now starts! The undocumented COMPlus_EnableDiagnostics environment variable turns off debugging and profiling support, which for some reason need a read/write file system to work properly. Since it disables those features, I would not bake this environment variable into the Dockerfile; set it only where you actually want a read-only file system. If you’d like to try this yourself, you can check out all the code in this repo.

The post Securing ASP.NET Core in Docker appeared first on Muhammad Rehan Saeed.

Git Cloning the Windows OS Repo


Disclaimer: I’m a Microsoft employee but my opinions in this personal blog post are my own and nothing to do with Microsoft. The information in this blog post is already publicly available and I talk in very general terms.

I recently had the unique opportunity to git clone the Windows OS repository. For me as a developer, I think that has got to be a bucket list (a list of things to do before you die) level achievement!

A colleague who was doing some work in the repo was on leave and the task of finishing the job unexpectedly fell to me. I asked around to see if anyone had any pointers on what to do and I was pointed towards an Azure DevOps project. The first thing I naively tried was running:

git clone https://microsoft.fake.com/foo/bar/os

This gave me the very helpful error:

remote: This repository requires GVFS. Ensure the version of git you are using supports GVFS.
fatal: protocol error: bad pack header

This triggered a memory in the dark recesses of my mind about GVFS (Git Virtual File System). The Windows OS repository is around 250GB in size. When you consider that there are tens or maybe hundreds of developers committing changes every day, you are not going to have a very pleasant developer experience if you just used Git and tried to pull all 250GB of files. So GVFS abstracts away the file system and only downloads files when you try to access them.

The Windows OS has a very large and thorough internal Wiki. This wiki has sections covering all areas of the Windows OS going back for years. After a short time searching the wiki I discovered a very thorough getting started guide for new developers.

The getting started guide involves running some PowerShell files which install a very specific but recent version of Git and setting up GVFS. Interestingly, you can also optionally point your Git client at a cache server to speed up git commands. There are a few cache servers all over the world to choose from. Finally, there is a VS Code extension specific to the OS repo that gives you some extra intelli-sense, very fancy.

Even though pulling the code using GVFS should in theory only pull what you need at any given time, it still took a fair amount of time to get started. Standard git commands still worked but took tens of seconds to execute, so you had to be pretty sure of what you were doing.

At this point a colleague warned against using ‘find in files’, as this would cause GVFS to pull all files to disk. I think search would do the same. An alternative approach I used instead was to search via the Azure DevOps website where you can view all files in any repo.

Once I’d had a chance to have a root around the repo, I realised that it was probably the largest folder structure I’d ever seen. There are many obscure sounding folders like ‘ds’ and ‘net’. The reason for the wiki’s existence became clear.

Other random things I found: the repo contains an 'src' folder just like a lot of other repos. There are a tonne of file extensions I've never seen or heard of before, and there are binaries checked into the repo, which seems suboptimal on the face of it. I even found the Newtonsoft.Json binary in there.

I was pleasantly surprised to see an .editorconfig file in the repo. It turns out that spaces are preferred over tabs and line endings are CRLF (I don’t know what else I expected).

There is a tools folder with dozens of tools in it. In fact, I had to use one of these tools to get my job done. The tool I used was a package manager a bit like NuGet. You can use a CLI tool to version and upload a folder of files. This made sense. The OS repo is not a mono repo in that it doesn't contain every line of code in Windows. There are many other repos that package up and upload their binaries using this tool.

Some further reading on this package manager and I discovered that the Windows OS does some deduplication of files to save space. I’m guessing they still have to fit Windows onto a DVD (How quaint, do people still use DVD’s?), so file size is important.

While trying to figure out how to use the package manager, I accidentally executed a search through all packages. Text came streaming down the page like in the Matrix. Eventually I managed to fumble the right keys on the keyboard to cancel the search.

Once I’d finished with my changes I checked in and found that I had to rebase because newer commits were found on the server. I rebased as normal, except for the very long delay in executing git commands.

Once I’d finally pushed the branch containing my changes up to the server, I created a pull request in Azure DevOps. As soon as I’d done that, I got inundated with emails from Azure Pipelines telling me that a build had started and various reviewers had been added to my pull request.

The Azure Pipelines build only took 25 minutes to complete. A quick look showed a bunch of builds taking five hours or more, so I'm guessing that my changes had only gone through a cursory initial build to make sure nothing was completely broken.

A few days later I got a notification telling me my PR had been merged. All I did was change a few config files and upload a package or two, but it was an interesting experience none the less.

The post Git Cloning the Windows OS Repo appeared first on Muhammad Rehan Saeed.

.gitattributes Best Practices


.gitignore

If you’ve messed with Git for long enough, you’re aware that you can use the .gitignore file to exclude files from being checked into your repository. There is even a whole GitHub repository with nothing but premade .gitignore files you can download. If you work with anything vaguely in the Microsoft space with Visual Studio, you probably want the ‘Visual Studio’ .gitignore file.

.gitattributes

There is a lesser known .gitattributes file that can control a bunch of Git settings and that you should consider adding to almost every repository as a matter of course.

Line Endings

If you’ve studied a little computer science, you’ll have seen that operating systems use different characters to represent the end of a line in text files. Windows uses a Carriage Return (CR) followed by a Line Feed (LF), while Unix based operating systems use the Line Feed (LF) alone. All of this has its origin in typewriters, which is pretty amazing given how antiquated they are. I recommend reading the Newline Wikipedia article for more on the subject.

Newline characters often cause problems in Git when you have developers working on different operating systems (Windows, Mac and Linux). If you’ve ever seen a phantom file change where there are no visible changes, that could be because the line endings in the file have been changed from CRLF to LF or vice versa.

Git can actually be configured to automatically handle line endings using a setting called autocrlf. This automatically changes the line endings in files depending on the operating system. However, you shouldn’t rely on people having correctly configured Git installations. If someone with an incorrect configuration checked in a file, it would not be easily visible in a PR and you’d end up with a repository with inconsistent line endings.

The solution to this is to add a .gitattributes file at the root of your repository and set the line endings to be automatically normalised like so:

# Set default behavior to automatically normalize line endings.
* text=auto

# Force bash scripts to always use lf line endings so that if a repo is accessed
# in Unix via a file share from Windows, the scripts will work.
*.sh text eol=lf

The second line is not strictly necessary. It hard codes the line endings for bash scripts to be LF, so that they can be executed via a file share. It’s a practice I picked up from the corefx repository.

Git Large File System (LFS)

It’s pretty common to want to check binary files into your Git repository. Building a website, for example, involves images, fonts, maybe some compressed archives too. The problem with these binary files is that they bloat the repository a fair bit. Every time you check in a change to a binary file, both versions are saved in Git’s history. Over time this bloats the repository and makes cloning it slow. A much better solution is to use Git Large File System (LFS). LFS stores binary files in a separate file system. When you clone a repository, you only download the latest copies of the binary files and not every single changed version of them.

LFS is supported by most source control providers like GitHub, Bitbucket and Azure DevOps. It is a plugin to Git that has to be installed separately (it’s a checkbox in the Git installer) and it even has its own CLI command ‘git lfs’, so you can run queries and operations against the files in LFS. You can control which files fall under LFS’s remit in the .gitattributes file like so:

# Archives
*.7z filter=lfs diff=lfs merge=lfs -text
*.br filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text

# Documents
*.pdf filter=lfs diff=lfs merge=lfs -text

# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.ico filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.psd filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text

# Fonts
*.woff2 filter=lfs diff=lfs merge=lfs -text

# Other
*.exe filter=lfs diff=lfs merge=lfs -text

So here I’ve added a whole list of file extensions for various file types I want to be controlled by Git LFS. I tell Git that I want to filter, diff and merge using the LFS tool and finally the ‘-text’ argument tells Git that this is not a text file, which is a strange way to tell it that it’s a binary file.

A quick warning about adding LFS to an existing repository that already has binary files checked into it. Those existing binary files are stored in Git, not LFS, and moving them would mean rewriting Git history, which is a bad idea unless you are the only developer. Instead, you will have to add a one off commit to take the latest versions of all binary files and add them to LFS. Everyone who uses the repository will also have to re-clone it (I found this out the hard way in a team of 15 people; many apologies were made over the course of a week). Ideally you add this from day one and educate developers about Git’s treatment of binary files, so people don’t check in any binary files not controlled by LFS.

Binary Files

When talking about the .gitattributes file, you will quite often hear some people talk about explicitly listing all binary files instead of relying on Git to auto-detect binary files (yes Git is clever enough to do that) like this:

# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary

As you saw above, we already do this with Git LFS but if you don’t use LFS, read on as you may need to explicitly list binary files in certain rare circumstances.

I was interested, so I asked a StackOverflow question and got great answers. If you look at the Git source code, it checks the first 8,000 bytes of a file to see if it contains a NUL character. If it does, the file is assumed to be binary. However, there are cases where you may need to do it explicitly:

  • UTF-16 encoded files could be mis-detected as binary.
  • Some image format or file that consists only of printable ASCII bytes. This is pretty weird and sounds unlikely to happen.

Final Form

This is what the final .gitattributes file I copy to most repositories looks like:

###############################
# Git Line Endings            #
###############################

# Set default behavior to automatically normalize line endings.
* text=auto

# Force bash scripts to always use lf line endings so that if a repo is accessed
# in Unix via a file share from Windows, the scripts will work.
*.sh text eol=lf

###############################
# Git Large File System (LFS) #
###############################

# Archives
*.7z filter=lfs diff=lfs merge=lfs -text
*.br filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text

# Documents
*.pdf filter=lfs diff=lfs merge=lfs -text

# Images
*.gif filter=lfs diff=lfs merge=lfs -text
*.ico filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.psd filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text

# Fonts
*.woff2 filter=lfs diff=lfs merge=lfs -text

# Other
*.exe filter=lfs diff=lfs merge=lfs -text

Conclusions

All of the above are bits and pieces I’ve put together over time. Are there any other settings that should be considered best practice and added to any .gitattributes file?

The post .gitattributes Best Practices appeared first on Muhammad Rehan Saeed.

Unit Testing dotnet new Templates


As I talked about in my previous post some time ago about dotnet new project templates, it’s possible to enable feature selection, so that developers can toggle certain features of a project template on or off. Not many templates in the wild make heavy use of this; quite often I’ve seen templates with no optional features or only a few. One reason is that it gets very complicated to test that toggling your optional features doesn’t break the generated project in some way, for example by stopping it from building. This is why I decided to write a small unit test helper library for dotnet new project templates. It is unit test framework agnostic and can work with xUnit, NUnit, MSTest or any other unit test framework.

Example Usage

Below is an example showing how you can use it inside an xUnit test project.

public class ApiTemplateTest
{
    public ApiTemplateTest() => DotnetNew.Install<ApiTemplateTest>("ApiTemplate.sln").Wait();

    [Theory]
    [InlineData("StatusEndpointOn", "status-endpoint=true")]
    [InlineData("StatusEndpointOff", "status-endpoint=false")]
    public async Task RestoreAndBuild_CustomArguments_IsSuccessful(string name, params string[] arguments)
    {
        using (var tempDirectory = TempDirectory.NewTempDirectory())
        {
            var dictionary = arguments
                .Select(x => x.Split('=', StringSplitOptions.RemoveEmptyEntries))
                .ToDictionary(x => x.First(), x => x.Last());
            var project = await tempDirectory.DotnetNew("api", name, dictionary);
            await project.DotnetRestore();
            await project.DotnetBuild();
        }
    }

    [Fact]
    public async Task Run_DefaultArguments_IsSuccessful()
    {
        using (var tempDirectory = TempDirectory.NewTempDirectory())
        {
            var project = await tempDirectory.DotnetNew("api", "DefaultArguments");
            await project.DotnetRestore();
            await project.DotnetBuild();
            await project.DotnetRun(
                @"Source\DefaultArguments",
                async (httpClient, httpsClient) =>
                {
                    var httpResponse = await httpsClient.GetAsync("status");
                    Assert.Equal(HttpStatusCode.OK, httpResponse.StatusCode);
                });
        }
    }
}

The first thing it does in the constructor is install the dotnet new project templates in your solution. It needs to know the name of the solution file. It then walks the sub-directory tree below your solution file and installs all project templates for you.

If we then look at the first unit test, we first need a temporary directory where we can create a project from our dotnet new project template. We will generate a project from the template in this directory and then delete the directory at the end of the test. We then run dotnet new with the name of a project template, the name we want to give to the generated project and any custom arguments that particular project template supports. Using xUnit, I’ve parameterised the arguments, so we can run multiple tests while tweaking the arguments for each test. Running DotnetNew returns a project object which contains some metadata about the project we’ve just created, and we can also use it to run further dotnet commands against the project.

Finally, we run dotnet restore and dotnet build against the project. So this test ensures that toggling the StatusEndpointOn option on our project template doesn’t stop the generated project from restoring NuGet packages or building successfully.

The second unit test method is where it gets really cool. If the project template is an ASP.NET Core project, we can use dotnet run to start the project listening on some random free ports on the machine. The unit test framework then gives you two HttpClient instances (one for HTTP and one for HTTPS) with which to call your newly generated project. In summary, not only can you test that the generated projects build, you can test that the features in your generated project work as they should.

This API is pretty similar to the ASP.NET Core TestHost API that also gives you a HttpClient to test the API with. The difference is that this framework actually runs the app using the dotnet run command. I have experimented with using the TestHost API to run the generated project in memory, so it could run a bit faster, but the .NET Core APIs for dynamically loading DLL files need some work, which .NET Core 3.0 might solve.

Where To Get It?

You can download the Boxed.DotnetNewTest NuGet package or see the source code on GitHub.

The post Unit Testing dotnet new Templates appeared first on Muhammad Rehan Saeed.

What dotnet new Could Be


.NET Boxed

The ‘dotnet new‘ CLI command is a great way to create projects from templates in dotnet. However, I think it could provide a much better experience than it currently does. I also suspect it isn’t used much, mostly because templates authored for the ‘dotnet new’ experience are not included in the Visual Studio File -> New Project experience. For template authors, the experience of developing templates could do with some improvements. I tweeted about it this morning and got asked to write a short gist about what could be improved, so this is that list.

Why do I Care?

I author Swagger API, GraphQL API and Microsoft Orleans project templates in my Dotnet Boxed project. The project currently has 1,900 stars on GitHub and the Boxed.Templates NuGet package has around 12,149 downloads at the time of writing. The Dotnet Boxed templates are also some of the more complex templates using ‘dotnet new’. They all have a dozen or more optional features.

Visual Studio Integration

In the past, I also authored the ASP.NET Core Boilerplate project templates which are published as a Visual Studio extension. This extension currently has 159,307 installs which is an order of magnitude more than the 12,149 installs of my ‘dotnet new’ based Boxed.Templates NuGet package.

I’ve read in the dotnet/templating GitHub issues that there is eventually going to be Visual Studio integration in which you’d be able to search and install ‘dotnet new’ based templates on NuGet, and then create projects from those templates much as you would with Visual Studio today. Given the download counts of my two projects, this would be the number one feature I’d like to see implemented.

You could create a Visual Studio extension that wraps your ‘dotnet new’ templates but having messed around with them in the past, it’s a lot of effort. I’m in the template making business, not in the extension making business. Also, given the above rumour, I’ve held off going this route.

NuGet/Visual Studio Marketplace Integration

Currently there is no way to search for a list of all ‘dotnet new’ based project templates on NuGet or on the Visual Studio marketplace. There is this list buried in the dotnet/templating GitHub project but the only people who are going to find that are template authors. It would be great if there was some kind of marketplace or store to find templates, rate them, provide feedback etc.

dotnet new ui

If you’ve seen the Vue CLI, it has a magical UI for creating projects from its templates. This is the benchmark by which I now measure all project creation experiences. Just take a look at its majesty:

Vue CLI Create a New Project

Imagine executing ‘dotnet new ui’, then seeing a nice browser dialogue popup like the one above where you could find, install and even create projects from templates. Creating a project would involve entering the name of your project, the directory where you want it to be saved and then toggling any custom options that the project template might offer.

That last bit is where having a UI shines. There aren’t many ‘dotnet new’ templates that use the templating engine to its full potential and have additional optional features. When you use the current command line experience, it’s unwieldy and slow to set custom options. Having a custom UI with some check boxes and drop downs would be a far quicker and more delightful experience.

Missing Features

There are a bunch of cool missing or half implemented features in the ‘dotnet new’ templating engine that could use finishing. Chief among these are post actions: a set of custom actions that can be performed once your project has been created.

As far as I can work out, the only post action that works is the one that restores all NuGet packages in your project. This was implemented because the basic Microsoft project templates wanted to use them but I understand that they no longer do for reasons unknown to me. Happily I still use this one and it works nicely.

Other post actions that are half implemented (they exist and you can use them, but they just print content to the console) are for opening files in the editor, opening files or links in the web browser or even running arbitrary scripts. The last one has the potential to be a security risk however, so it would be better to have a healthy list of post actions for specific tasks. I’d love to be able to open the ReadMe.md file that ships with my project template.

In terms of new post actions, I’d really like to see one that removes and sorts using statements. I have a lot of optional pieces of code in my project templates, so I have to have a lot of #if #endif code to tell the templating engine which lines of code to remove. It’s particularly easy to get this wrong with using statements, leaving you with a fresh project that doesn’t compile because you’ve removed one too many using statements by accident. To avoid this, I created my own unit testing framework for dotnet new projects called Boxed.DotnetNewTest.
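To show the kind of conditional code I mean, here is a hedged sketch of a template source file. ‘StatusEndpoint’ is a made-up template symbol and the MyTemplate namespace is illustrative only:

// 'StatusEndpoint' is a template symbol rather than a real compilation constant, so the
// templating engine keeps or strips these blocks when a project is generated from the template.
#if (StatusEndpoint)
using MyTemplate.HealthChecks;
#endif
using Microsoft.Extensions.DependencyInjection;

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddProjectServices(this IServiceCollection services)
    {
#if (StatusEndpoint)
        services.AddHealthChecks();
#endif
        return services;
    }
}

Remove one using statement too many from a file like this and the generated project no longer compiles, which is exactly the class of mistake Boxed.DotnetNewTest is designed to catch.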

Docs, Docs & Docs

There is one page of documentation on how to create project templates in the official docs. There is a bunch more in the dotnet/templating wiki and some crucial bits of information in comments on GitHub issues. In particular, there is precious little information about how to conditionally remove code or files based on options the user selects. There is also very little about post actions. It would be great if this could be tidied up.

Secondary to the docs are the GitHub issues. There are currently 168 open issues, with a large number having only one comment from the original author. Given the lack of documentation, having questions answered is really important.

Fixing Bugs

The latest version of the dotnet CLI has fixed some bugs but there are still a few that really get in the way of a great experience:

  • #1544/#348 – Running dotnet new foo --help outputs some pretty terrible looking text if you have any custom options.
  • #2208 – You cannot conditionally remove text from a file if it has no file extension, so that means Dockerfile, .gitignore, .editorconfig files.
  • #2209 – Complex conditionals fail if not wrapped in parentheses. I always forget to do this. There are no warnings; your template just won’t work.
  • #1438 – Using conditional code in csproj files requires some workarounds to work.

Conclusions

The Vue CLI has really shown how great a new project creation experience can be. With a bit of work, the ‘dotnet new’ experience could be just as great.

The post What dotnet new Could Be appeared first on Muhammad Rehan Saeed.

ASP.NET Core Integration Testing & Mocking using Moq


Microsoft .NET Logo

If you want to run an integration test for your ASP.NET Core app without also testing lots of external dependencies like databases and the like, then the lengthy official ‘Integration tests in ASP.NET Core‘ documentation shows how you can use stubs to replace code that talks to a database or some other external service. If you want to use mocks, this is where you run out of guidance and runway. It does in fact require a fair amount of setup to do it correctly and reliably without getting flaky tests.

Startup

The ConfigureServices and Configure methods in your application’s Startup class must be virtual. This is so that we can inherit from this class in our tests and replace production versions of certain services with mock versions.

public class Startup
{
    private readonly IConfiguration configuration;
    private readonly IWebHostEnvironment webHostEnvironment;

    public Startup(IConfiguration configuration, IWebHostEnvironment webHostEnvironment)
    {
        this.configuration = configuration;
        this.webHostEnvironment = webHostEnvironment;
    }

    public virtual void ConfigureServices(IServiceCollection services) =>
        ...

    public virtual void Configure(IApplicationBuilder application) =>
        ...
}

TestStartup

In your test project, inherit from the Startup class and override the ConfigureServices method with one that registers the mock and the mock object with IoC container.

I like to use strict mocks with MockBehavior.Strict; this ensures that nothing is mocked unless I specifically set up a mock.
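The examples below mock a small clock service. A minimal sketch of what that interface and a production implementation might look like is shown here; the exact shape is assumed, with the UtcNow member implied by the mock setup further down:

using System;

public interface IClockService
{
    DateTimeOffset UtcNow { get; }
}

// A production implementation that Startup would register with the IoC container.
public class ClockService : IClockService
{
    public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}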

public class TestStartup : Startup
{
    private readonly Mock<IClockService> clockServiceMock;

    public TestStartup(IConfiguration configuration, IWebHostEnvironment webHostEnvironment)
        : base(configuration, webHostEnvironment)
    {
        this.clockServiceMock = new Mock<IClockService>(MockBehavior.Strict);
    }

    public override void ConfigureServices(IServiceCollection services)
    {
        services
            .AddSingleton(this.clockServiceMock);

        base.ConfigureServices(services);

        services
            .AddSingleton(this.clockServiceMock.Object);
    }
}

CustomWebApplicationFactory

In your test project, write a custom WebApplicationFactory that configures the HttpClient and resolves the mocks from the TestStartup, then exposes them as properties, ready for our integration test to consume them. Note that I’m also changing the environment to Testing and telling it to use the TestStartup class for startup.

Note also that I’ve implemented IDisposable‘s Dispose method to verify all of my strict mocks. This means I don’t need to verify any mocks manually myself. Verification of all mock setups happens automatically when xUnit is disposing the test class.

public class CustomWebApplicationFactory<TEntryPoint> : WebApplicationFactory<TEntryPoint>
    where TEntryPoint : class
{
    public CustomWebApplicationFactory()
    {
        this.ClientOptions.AllowAutoRedirect = false;
        this.ClientOptions.BaseAddress = new Uri("https://localhost");
    }

    public ApplicationOptions ApplicationOptions { get; private set; }

    public Mock<IClockService> ClockServiceMock { get; private set; }

    public void VerifyAllMocks() => Mock.VerifyAll(this.ClockServiceMock);

    protected override void ConfigureClient(HttpClient client)
    {
        using (var serviceScope = this.Services.CreateScope())
        {
            var serviceProvider = serviceScope.ServiceProvider;
            this.ApplicationOptions = serviceProvider.GetRequiredService<IOptions<ApplicationOptions>>().Value;
            this.ClockServiceMock = serviceProvider.GetRequiredService<Mock<IClockService>>();
        }

        base.ConfigureClient(client);
    }

    protected override void ConfigureWebHost(IWebHostBuilder builder) =>
        builder
            .UseEnvironment("Testing")
            .UseStartup<TestStartup>();

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            this.VerifyAllMocks();
        }

        base.Dispose(disposing);
    }
}

Integration Tests

I’m using xUnit to write my tests. Note that the generic type passed to CustomWebApplicationFactory is Startup and not TestStartup. This generic type is used to find the location of your application project on disk and not to start the application.

I setup a mock in my test and I’ve implemented IDisposable to verify all mocks for all my tests at the end but you can do this step in the test method itself if you like.

Note also that I’m not using xUnit’s IClassFixture to boot the application only once, as the ASP.NET Core documentation tells you to do. If I did, I’d have to reset the mocks between each test, and you would only be able to run the integration tests serially one at a time. With the method below, each test is fully isolated and they can be run in parallel. This uses up more CPU and each test takes longer to execute, but I think it’s worth it.

public class FooControllerTest : CustomWebApplicationFactory<Startup>
{
    private readonly HttpClient client;
    private readonly Mock<IClockService> clockServiceMock;

    public FooControllerTest()
    {
        this.client = this.CreateClient();
        this.clockServiceMock = this.ClockServiceMock;
    }

    [Fact]
    public async Task GetFoo_Default_Returns200OK()
    {
        this.clockServiceMock.Setup(x => x.UtcNow).Returns(new DateTimeOffset(2000, 1, 1, 0, 0, 0, TimeSpan.Zero));

        var response = await this.client.GetAsync("/foo");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}

xunit.runner.json

I’m using xUnit. We need to turn off shadow copying, so that separate files like appsettings.json are placed in the right place beside the application DLL file. This ensures that our application running in an integration test can still read the appsettings.json file.

{
  "shadowCopy": false
}

appsettings.Testing.json

Should you have configuration that you want to change just for your integration tests, you can add an appsettings.Testing.json file to your application. This configuration file will only be read in our integration tests because we set the environment name to ‘Testing’.
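For reference, the environment specific file gets picked up because the host layers configuration files by environment name. A rough sketch of the equivalent manual setup (an illustration, not code from the template) looks like this:

using System.IO;
using Microsoft.Extensions.Configuration;

public static class TestConfiguration
{
    // "Testing" matches the UseEnvironment("Testing") call in CustomWebApplicationFactory,
    // so appsettings.Testing.json overrides appsettings.json only in integration tests.
    public static IConfiguration Build(string environmentName) =>
        new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{environmentName}.json", optional: true, reloadOnChange: true)
            .Build();
}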

Working Examples

If you’d like to see an end to end working example of how this all works. You can create a project using the Dotnet Boxed API project template or the GraphQL project template.

Conclusions

I wrote this because there is little to no information on how to combine ASP.NET Core with Moq in integration tests. I’ve messed about with using IClassFixture as the ASP.NET Core documentation tells you to do, and it’s just not a good idea with Moq, which needs a clean slate before each test. I hope this saves others from going through the same pain.

The post ASP.NET Core Integration Testing & Mocking using Moq appeared first on Muhammad Rehan Saeed.

Just Start - The First Blog Post


I thought I should begin this first blog post with a few words on what I hope to achieve. I started creating this website because I wanted to create a space where I could post interesting things I found or learned whilst working as a software developer or just generally in life. I hope to not just blog about software development, best practices or cool code snippets, but also look into some of the other “soft skills” a developer might need, such as software design and aesthetics which I have a particular interest in.

I have reached a point where I've helped build some pretty cool stuff in my career so far. I now want to share some of these things with the wider community. Hopefully, I'll have written some code which is of use to someone. Feel free to drop a comment now and again…


ConfigureAwait in Task Parallel Library (TPL)


The Task Parallel Library in conjunction with the async and await keywords are great but there are some subtleties which you should consider. One of these is the use of the ConfigureAwait method.

If I wanted to get a list of the titles of the new posts from my RSS feed I could write the following code:

private async Task<IEnumerable<string>> GetBlogTitles()
{
    // Current Thread = UI Thread
    HttpClient httpClient = new HttpClient();

    // GetStringAsync = ThreadPool Thread
    string rss = await httpClient.GetStringAsync("https://rehansaeed.com/feed/");

    // Current Thread = UI Thread
    List<string> blogTitles = XDocument.Parse(rss)
        .Descendants("item")
        .Elements("title")
        .Select(x => x.Value)
        .ToList();

    // Current Thread = UI Thread
    return blogTitles;
}

public async Task UpdateUserInterface()
{
    // Current Thread = UI Thread
    IEnumerable<string> blogTitles = await this.GetBlogTitles();

    // Current Thread = UI Thread
    this.ListBox.ItemsSource = blogTitles;
}

If I was to call this method, then the entire method would execute on the calling thread, except the call to GetStringAsync, which would go off and do its work on a ThreadPool thread; we then come back onto the original thread and do all our XML manipulation.

Now if this was a client WPF or WinRT application which has a UI thread, all of the XML manipulation we are doing would be done on the UI thread. This is placing extra burden on the UI thread which could mean application freeze ups if the UI thread is being heavily taxed. The solution is simple, we add ConfigureAwait(false) to the end of the call we are making to get the RSS XML. So now our new code looks like this:

private async Task<IEnumerable<string>> GetBlogTitles()
{
    // Current Thread = UI Thread
    HttpClient httpClient = new HttpClient();

    // GetStringAsync = ThreadPool Thread
    string rss = await httpClient.GetStringAsync("https://rehansaeed.com/feed/").ConfigureAwait(false);

    // Current Thread = ThreadPool Thread
    List<string> blogTitles = XDocument.Parse(rss)
        .Descendants("item")
        .Elements("title")
        .Select(x => x.Value)
        .ToList();

    // Current Thread = ThreadPool Thread
    return blogTitles;
}

public async Task UpdateUserInterface()
{
    // Current Thread = UI Thread
    IEnumerable<string> blogTitles = await this.GetBlogTitles();

    // Current Thread = UI Thread
    this.ListBox.ItemsSource = blogTitles;
}

So now all our XML manipulation is done on the ThreadPool thread along with the HTTP GET we are doing using the HttpClient. Notice however, that when we return the blog titles to the calling method we are back on the UI thread. Each time you do an await, the default behaviour is to continue on the thread we started with. By adding ConfigureAwait(false), we are overriding this behaviour to continue on whatever thread the Task was running on.

For more on the Task Parallel Library (TPL) I highly recommend reading Stephen Toub's blog.

Stop The Brace Wars, Use StyleCop


There is an on-going war among developers. This silent war has claimed countless hours of developer time, wasted in pointless meetings and in millions of small skirmishes over the style of each developer's written code. This post outlines a peace treaty, a way forward if you will, but first I will outline the problem.

Underscores Versus the 'this' Keyword

This is the main battlefront, where most time is wasted and where developers are most entrenched in their forward positions: whether to use underscores for your field names or the this keyword. I myself am in the this camp, but neither has a clear advantage on the battlefield. Underscores make it marginally quicker to access your fields using intelli-sense, while the this keyword makes it quicker to differentiate instance members from static members.

private int _property;

public int Property
{
    get { return _property; }
}

private int property;

public int Property
{
    get { return this.property; }
}

The Brace War

This lesser known conflict is where JavaScript styling has leaked into C#. The default formatting rules in Visual Studio usually quash this conflict, but there are still those who see white space as wasted space and will go the extra mile by changing the Visual Studio settings to 'fix' this problem. I personally stick to the defaults and find the other method hard to read; a small sacrifice of a few extra lines is worth the gain in readability.

public int Property
{
    get { return this.property; }
}

public int Property {
    get { return this.property; }
}

Field, Property, Constructor and Method Ordering

Some (like me) like to have all fields, properties, constructors and methods separated into their own groups. Others prefer fields grouped with properties, and members from implemented interfaces kept together. Again, there is no real right way; the former allows quick navigation to find what you need, while the latter keeps members which relate to each other close together.

Using Statements Inside or Outside the Namespace

Here is one area where there is a clear advantage in one camp. As outlined in this Stack Overflow post, adding using statements inside the namespace can sometimes save a few seconds of troubleshooting. Yet even here, Visual Studio lets us down by having using statements outside the namespace as the default.
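For illustration, here are the two placements side by side; the namespaces are made up and the subtle difference is in how the names inside the using directives themselves get resolved (see the linked Stack Overflow post for the details):

// Using statements outside the namespace (the Visual Studio default).
using System;

namespace Outside.Example
{
    public class Foo
    {
    }
}

// Using statements inside the namespace.
namespace Inside.Example
{
    using System;

    public class Foo
    {
    }
}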

Stepping on People's Feet

Looking at another developer's code can be an interesting experience. The style, look and feel of any code can vary wildly, even when written in the same language. Many times, I have found it difficult to find what I'm looking for, and it takes time to adjust to each unique style.

Many a time, these large differences occur even within the same team, reducing productivity. This leads to the inevitable 'standards' meetings, where a team of developers sits in a room and discusses underscores versus this and other differences at length. My own experience is that each side is entrenched, hunkered down in their positions and not wanting to change their writing style. In the end, either the majority wins out or people continue working in their own way and everyone gets used to it.

The Solution For Peace

I would argue that as there is no clear superior coding style, it is a pointless waste of time arguing over it. However, there is something to be said for a commons coding style. This is where StyleCop comes in. It is a set of style rules which can be applied to your C# code.

There need be no lengthy discussion or arguing over it. Keep the default rules (turn off the comment rules if you choose) and you quickly have a set of standards that can be applied and tested, not just in your team but universally by all C# developers around the world.

Download some sample code from GitHub and be at ease with the familiar look of the hopefully well designed code. I paint a rosy picture, but I see more and more developers using StyleCop. It takes a week to get used to the change in your writing style; I myself switched from underscores to this and have never looked back. You can too.

Reactive Extensions (Rx) - Part 1 - Replacing C# Events


For those who have not tried Reactive Extensions (Rx) yet, I highly recommend it. If I had to describe it in a few words it would be 'Linq to events'. If you have not already learned about it, this is by far the best resource on learning its intricacies.

I have spent a lot of time reading about Reactive Extensions but what I have not found in my research is examples or pointers on how or even where it should be used in preference to other code. One area where you should definitely consider using Reactive Extensions is as a direct replacement for bog standard C# events, which have been around since C# 1.0. This post will explain how.

Exposing an Event

Here is an example of a standard C# event using the standard recommended pattern:

public class JetFighter
{
    public event EventHandler<JetFighterEventArgs> PlaneSpotted;

    public void SpotPlane(JetFighter jetFighter)
    {
        EventHandler<JetFighterEventArgs> eventHandler = this.PlaneSpotted;
        if (eventHandler != null)
        {
            eventHandler(this, new JetFighterEventArgs(jetFighter));
        }
    }
}
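The JetFighterEventArgs class isn't shown here, but judging by how it is used above and further down, a minimal sketch of it would look something like this:

using System;

public class JetFighterEventArgs : EventArgs
{
    public JetFighterEventArgs(JetFighter spottedPlane) =>
        this.SpottedPlane = spottedPlane;

    public JetFighter SpottedPlane { get; }
}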

Now this is how you replace it using Reactive Extensions:

public class JetFighter
{
    private Subject<JetFighter> planeSpotted = new Subject<JetFighter>();

    public IObservable<JetFighter> PlaneSpotted => this.planeSpotted.AsObservable();

    public void SpotPlane(JetFighter jetFighter) => this.planeSpotted.OnNext(jetFighter);
}

So far it's all pretty straightforward, we have replaced the event with a property returning IObservable<T>. Raising the event is a simple matter of calling the OnNext method on the Subject class. Finally, we do not return our Subject<T> directly in our PlaneSpotted property, as someone could cast it back to Subject<T> and raise their own events! Instead we use the AsObservable method which returns a middle man. So far so good.

Reactive Extensions also has the added concept of errors and completion, which C# events do not have. These are optional added concepts and not required for replacing C# events directly but worth knowing about, as they add an extra dimension to events which may be useful to you.

The first concept is dealing with errors. What happens if there is an exception while you are spotting the plane and you want to notify your subscribers that there is a problem? Well you can do that, like this:

public void SpotPlane(JetFighter jetFighter)
{
    try
    {
        if (string.Equals(jetFighter.Name, "UFO"))
        {
            throw new Exception("UFO Found");
        }

        this.planeSpotted.OnNext(jetFighter);
    }
    catch (Exception exception)
    {
        this.planeSpotted.OnError(exception);
    }
}

Here we are using the OnError method to notify all the events subscribers that there has been an exception.

So what about the concept of completion? Well, that's just as simple. Suppose that you have spotted all the planes and you want to notify all your subscribers that there will be no more spotted planes. You can do that like this:

public void AllPlanesSpotted() => this.planeSpotted.OnCompleted();

So now all the code put together looks like this:

public class JetFighter
{
    private Subject<JetFighter> planeSpotted = new Subject<JetFighter>();

    public IObservable<JetFighter> PlaneSpotted => this.planeSpotted.AsObservable();

    public void AllPlanesSpotted() => this.planeSpotted.OnCompleted();

    public void SpotPlane(JetFighter jetFighter)
    {
        try
        {
            if (string.Equals(jetFighter.Name, "UFO"))
            {
                throw new Exception("UFO Found");
            }

            this.planeSpotted.OnNext(jetFighter);
        }
        catch (Exception exception)
        {
            this.planeSpotted.OnError(exception);
        }
    }
}

Consuming an Event

Consuming the Reactive Extensions events is just as easy and this is where you start to see the real benefits of Reactive Extensions. This is how you subscribe and unsubscribe (often forgotten, which can lead to memory leaks) to a standard C# event:

public class BomberControl : IDisposable
{
    private readonly JetFighter jetFighter;

    public BomberControl(JetFighter jetFighter)
    {
        this.jetFighter = jetFighter;
        this.jetFighter.PlaneSpotted += this.OnPlaneSpotted;
    }

    public void Dispose() =>
        this.jetFighter.PlaneSpotted -= this.OnPlaneSpotted;

    private void OnPlaneSpotted(object sender, JetFighterEventArgs e)
    {
        JetFighter spottedPlane = e.SpottedPlane;
    }
}

I'm not going to go into it in too much detail, you subscribe using += and unsubscribe using -= operators.

This is how the same thing can be accomplished using Reactive Extensions:

public class BomberControl : IDisposable
{
    private readonly IDisposable planeSpottedSubscription;

    public BomberControl(JetFighter jetFighter) =>
        this.planeSpottedSubscription = jetFighter.PlaneSpotted.Subscribe(this.OnPlaneSpotted);

    public void Dispose() =>
        this.planeSpottedSubscription.Dispose();

    private void OnPlaneSpotted(JetFighter jetFighter)
    {
        JetFighter spottedPlane = jetFighter;
    }
}

The key things to note here are: first, the use of the Subscribe method to register for plane spotted events; second, the subscription to the event is stored in an IDisposable which can later be disposed of to un-register from the event. This is where things get interesting: since we now have an IObservable<T>, we can use all kinds of Linq queries on it like this:

jetFighter.PlaneSpotted.Where(x => string.Equals(x.Name, "Eurofighter")).Subscribe(this.OnPlaneSpotted);

So in the above line of code, I'm using a Linq query to only register to events where the name of the spotted plane is Eurofighter. There are a lot more Linq methods you can use but that's beyond the scope of this post and also where you should take a look at this website.

Conclusions

Reactive Extensions (Rx) is a pretty large library which does a lot of stuff which overlaps with other libraries like the Task Parallel Library (TPL). It brings no new capabilities but does bring new ways to do things (much like Linq), while writing less code and with more elegance. It can be confusing coming to it as a newcomer, as to where exactly it can be used effectively. Replacing basic events with IObservable<T> is definitely one area where we can leverage its power.

Reactive Extensions (Rx) - Part 2 - Wrapping C# Events


Sometimes it is not possible to replace a C# event with a Reactive Extensions (Rx) event entirely. This is usually because we are implementing an interface which has a C# event and we don't own the interface.

However, as I'll show in this post, it's possible to create IObservable<T> wrappers for C# events and even to hide the C# events entirely from consumers of the class.

The method of wrapping a C# event depends on the type of event handler used. Below are the three types of event handler and the method of wrapping each with an observable event.

Wrapping an EventHandler C# Event

The FromEventPattern method is used to wrap the event. Notice we have to specify delegates for subscribing (+=) and unsubscribing (-=) to the event.

public event EventHandler BunnyRabbitsAttack;

public IObservable<object> WhenBunnyRabbitsAttack
{
    get
    {
        return Observable
            .FromEventPattern(
                h => this.BunnyRabbitsAttack += h,
                h => this.BunnyRabbitsAttack -= h)
            .Select(x => x.EventArgs);
}

Wrapping an EventHandler<TEventArgs> C# Event

This example is much the same as the last, except we have to deal with the event arguments. The FromEventPattern method returns an EventPattern<T> object, which contains the sender and the event arguments. We're only interested in the contents of the event arguments, so we use a Select to return just the BunnyRabbits property.

public event EventHandler<BunnyRabbitsEventArgs> BunnyRabbitsAttack;

public IObservable<BunnyRabbits> WhenBunnyRabbitsAttack
{
    get
    {
        return Observable
            .FromEventPattern<BunnyRabbitsEventArgs>(
                h => this.BunnyRabbitsAttack += h,
                h => this.BunnyRabbitsAttack -= h)
            .Select(x => x.EventArgs.BunnyRabbits);
    }
}

Wrapping a Custom Event Handler C# Event

Some C# events use a custom event handler. In this case we have to specify the type of the event handler as a generic argument in the FromEventPattern method.

public event BunnyRabbitsEventHandler BunnyRabbitsAttack;

public IObservable<BunnyRabbits> WhenBunnyRabbitsAttack
{
    get
    {
        return Observable
            .FromEventPattern<BunnyRabbitsEventHandler, BunnyRabbitsEventArgs>(
                h => this.BunnyRabbitsAttack += h,
                h => this.BunnyRabbitsAttack -= h)
            .Select(x => x.EventArgs.BunnyRabbits);
    }
}

Hiding Existing Events Using Explicit Interface Implementation

The disadvantage of the above approach is that we now have two ways to access our event. One with the old style C# event and the other with our new Reactive Extensions event. With a bit of trickery we can hide the C# event in some cases.

The INotifyPropertyChanged interface is very commonly used by XAML developers. It has a single event called PropertyChanged. To hide the PropertyChanged C# event we can explicitly implement the interface (Click here for details on implicit versus explicit implementations of interfaces). Secondly, we wrap the event as we did before.

Now the PropertyChanged C# event can only be accessed by first casting the object to INotifyPropertyChanged (binding in XAML languages, which uses this interface, continues to work). Our new Reactive Extensions observable event is now the default way of subscribing to property changed events.

public abstract class NotifyPropertyChanges : INotifyPropertyChanged
{
    event PropertyChangedEventHandler INotifyPropertyChanged.PropertyChanged
    {
        add { this.propertyChanged += value; }
        remove { this.propertyChanged -= value; }
    }

    private event PropertyChangedEventHandler propertyChanged;

    public IObservable<string> WhenPropertyChanged
    {
        get
        {
            return Observable
                .FromEventPattern<PropertyChangedEventHandler, PropertyChangedEventArgs>(
                    h => this.propertyChanged += h,
                    h => this.propertyChanged -= h)
                .Select(x => x.EventArgs.PropertyName);
        }
    }

    protected void OnPropertyChanged(string propertyName) =>
        this.propertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
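To show how a consumer would use this, here is a hedged usage sketch; the Person class and the console subscription are illustrative examples only:

using System;
using System.Reactive.Linq;

public class Person : NotifyPropertyChanges
{
    private string name;

    public string Name
    {
        get => this.name;
        set
        {
            this.name = value;
            this.OnPropertyChanged(nameof(this.Name));
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var person = new Person();

        // Subscribers use the Rx event; the C# PropertyChanged event is only
        // visible after casting to INotifyPropertyChanged.
        person.WhenPropertyChanged
            .Where(propertyName => propertyName == nameof(Person.Name))
            .Subscribe(propertyName => Console.WriteLine($"{propertyName} changed."));

        person.Name = "Muhammad";
    }
}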

Summing Up

So it may not always be possible to get rid of, dare I say it, legacy C# events, but we can certainly wrap them with Reactive Extensions observables and even hide them altogether.

Reactive Extensions (Rx) - Part 3 - Naming Conventions


Standard C# events do not have any real naming convention, except using the English language to suggest that something has happened e.g. PropertyChanged. Should a property returning an IObservable<T> have a naming convention? I'm not entirely certain, but I'll explain the convention I have used and why.

C# events are easily differentiated from properties and methods in a class because they have a different icon in Visual Studio Intelli-Sense. Visual Studio does not give IObservable<T> properties any such differentiation. This may change in the future if Microsoft decides to integrate Reactive Extensions (Rx) more deeply into Visual Studio.

The second reason for using a naming convention is that I often wrap existing C# events with a Reactive Extensions event. It's not possible to have the same name for a C# event and an IObservable<T> property.

You will have noticed already, if you've looked at my previous posts, that I prefix the property name with the word 'When'. I believe this nicely indicates that an event has occurred and also groups all our Reactive Extensions event properties together under Intelli-Sense.

public IObservable<string> WhenPropertyChanged
{
    get { ... }
}

I have read in a few places people suggesting that so called 'Hot' and 'Cold' (See here for an explanation) observables should have different naming conventions. I personally feel that this is an implementation detail and I can't see why the subscriber to an event would need to know that an event was 'Hot' or 'Cold' (Prove me wrong). Also, trying to teach this concept to other developers and get them to implement it would mean constantly looking up the meanings (I keep forgetting myself), whereas using 'When' is a nice simple concept which anyone can understand.

This is a pretty open question at the moment. What are your thoughts on the subject?
