
ASP.NET Core Fluent Interface Extensions


Last week Khalid Abuhakmeh wrote a very interesting blog post called Middleware Builder for ASP.NET Core which I highly recommend you read. In it, he attempts to write some extension methods to help with writing the Configure method in your ASP.NET Core Startup class with a fluent interface. I've taken his blog post to heart and gone on a mission to 'fluent all the things' in ASP.NET Core.

IApplicationBuilder and ILoggerFactory

This is an example of what your current Configure method might look like in a typical ASP.NET Core Startup class:

public void Configure(
    IApplicationBuilder application, 
    IHostingEnvironment environment, 
    ILoggerFactory loggerFactory)
{
    if (environment.IsDevelopment())
    {
        // Do stuff on your local machine.
        loggerFactory
            .AddConsole(...)
            .AddDebug();
        application.UseDeveloperExceptionPage();
    }
    else
    {
        // Do stuff when running in your production environment.
        loggerFactory.AddSerilog(...);
        application.UseStatusCodePagesWithReExecute("/error/{0}/");
    }

    if (environment.IsStaging())
    {
        // Do stuff in the staging environment.
        application.UseStagingSpecificMiddleware(); 
    }

    application
        .UseStaticFiles()
        .UseMvc();
}

And this is the same code using the shorter and prettier fluent interface style:

public void Configure(
    IApplicationBuilder application, 
    IHostingEnvironment environment, 
    ILoggerFactory loggerFactory)
{
    loggerFactory
        .AddIfElse(
            environment.IsDevelopment(),
            x => x.AddConsole(...).AddDebug(),
            x => x.AddSerilog(...));

    application
        .UseIfElse(
            environment.IsDevelopment(),
            x => x.UseDeveloperExceptionPage(),
            x => x.UseStatusCodePagesWithReExecute("/error/{0}/"))
        .UseIf(
            environment.IsStaging(),
            x => x.UseStagingSpecificMiddleware())
        .UseStaticFiles()
        .UseMvc();
}

In the above code, you can see that I've added UseIf and UseIfElse extension methods to IApplicationBuilder, which let us use the fluent interface. What you'll also notice is that ILoggerFactory also has AddIf and AddIfElse extension methods.
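
To give you an idea of what is going on under the covers, here is a minimal sketch of what a UseIf extension method might look like (an illustration only, not necessarily the exact Boxed.AspNetCore implementation):

using System;
using Microsoft.AspNetCore.Builder;

public static class ApplicationBuilderExtensions
{
    public static IApplicationBuilder UseIf(
        this IApplicationBuilder application,
        bool condition,
        Func<IApplicationBuilder, IApplicationBuilder> action)
    {
        // Only apply the supplied middleware configuration if the condition is true.
        if (condition)
        {
            application = action(application);
        }

        return application;
    }
}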

IConfigurationBuilder

I didn't stop there; I added similar AddIf and AddIfElse extension methods for IConfigurationBuilder. Here is a Startup constructor before and after applying them:

public Startup(IHostingEnvironment hostingEnvironment)
{
    this.hostingEnvironment = hostingEnvironment;
    var configurationBuilder = new ConfigurationBuilder()
        .SetBasePath(hostingEnvironment.ContentRootPath)
        .AddJsonFile("config.json")
        .AddJsonFile($"config.{hostingEnvironment.EnvironmentName}.json", optional: true);

    if (hostingEnvironment.IsDevelopment())
    {
        configurationBuilder.AddUserSecrets();
    }

    this.configuration = configurationBuilder
        .AddEnvironmentVariables()
        .AddApplicationInsightsSettings(developerMode: !hostingEnvironment.IsProduction())
        .Build();
}

public Startup(IHostingEnvironment hostingEnvironment)
{
    this.hostingEnvironment = hostingEnvironment;
    this.configuration = new ConfigurationBuilder()
        .SetBasePath(hostingEnvironment.ContentRootPath)
        .AddJsonFile("config.json")
        .AddJsonFile($"config.{hostingEnvironment.EnvironmentName}.json", optional: true)
        .AddIf(
            hostingEnvironment.IsDevelopment(),
            x => x.AddUserSecrets())
        .AddEnvironmentVariables()
        .AddApplicationInsightsSettings(developerMode: !hostingEnvironment.IsProduction())
        .Build();
}

IServiceCollection

As if that wasn't enough, I also added the same AddIf and AddIfElse extension methods to IServiceCollection. In my experience, these are used less often but I've added them for completeness.
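
Here is a minimal sketch of how they might be used in ConfigureServices (the FakeEmailSender registration is hypothetical, purely for illustration):

public void ConfigureServices(IServiceCollection services)
{
    services
        .AddIf(
            this.hostingEnvironment.IsDevelopment(),
            // Hypothetical registration used only in development, for illustration.
            x => x.AddSingleton<IEmailSender, FakeEmailSender>())
        .AddMvc();
}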

Fluent me up!

You can get these extension methods and much more by installing the Boxed.AspNetCore NuGet package or by creating a project using the .NET Boxed project templates. Finally, if you are so inclined, you can also take a look at the code for these extension methods in the .NET Boxed Framework project.


NGINX for ASP.NET Core In-Depth


There are only two things a web server needs to be... fast... really fast... and secure.

Muhammad Rehan Saeed

About NGINX

NGINX (pronounced engine-x) is a popular open source web server. It can act as a reverse proxy server for TCP, UDP, HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache.

NGINX in fact overtook Apache as the most popular web server among the top 1000 websites. After playing with it for a while now, I have to say that I can see why.

There are two flavours of NGINX. The first is the open source version, which is free; the other is called NGINX Plus, which provides some more advanced features (all of which can be replicated with open source plugins but with a lot of effort) and proper support, but at the cost of a few thousand dollars.

There is a Windows version of NGINX, but I wouldn't recommend using it in production as it doesn't perform as well as the Linux version and is not as well tested. You can, however, use it to try out NGINX.

Alternatively, if you are running the Windows 10 Anniversary Update, you can install Bash for Windows and install the Linux version. However, the process is not that straightforward. Again, the caveat is that it can only be used for testing and not in production.

IIS vs NGINX

NGINX has no UI; it's all command line driven, but don't let that put you off, as the CLI only has three commands you actually need:

  1. Check my NGINX config (nginx -t).
  2. Load my NGINX config (nginx -s reload).
  3. Load a specific config file (nginx -c [nginx.conf file path]). By default, the nginx.conf file in the NGINX installation folder is used.

IIS on the other hand does have a UI and what a travesty it is. It hasn't really changed for several years and really needs a usability study to hack it to pieces and start again.

The command line experience for IIS is another matter. It has very powerful IIS extensions you can install, and the latest version of IIS even has an API that you can use to make simple HTTP calls to update it.

Configuration is where NGINX shines. It has a single super simple nginx.conf file which is pretty well documented. IIS is also actually pretty simple to configure if you only rely on the web.config file.

Setting up NGINX

The ASP.NET Core Documentation site has some very good documentation on how to get started on Ubuntu. Unfortunately, it's not as simple as just installing NGINX using apt-get install nginx, there are a few moving parts to the process and a lot more moving parts if you want to install any additional modules.

If you're on Windows, as I mentioned earlier, you have the option of installing NGINX using Bash on the Windows 10 Anniversary Update, but I couldn't get this working. Alternatively, you can download the NGINX executable for Windows. If you do this, beware that NGINX tries to start on port 80 and there are a number of things that already use that port on Windows:

  1. Skype uses port 80 (Why?), turn it off in the advanced settings.
  2. Turn off IIS.
  3. Stop the SQL Server Reporting Services service.

Once you have NGINX set up, you need to run your ASP.NET Core app using the Kestrel web server. Why does ASP.NET Core use two web servers? Well, Kestrel is not security hardened enough to be exposed on the internet, and it does not have all of the features that a full-blown web server like IIS or NGINX has. NGINX takes the role of a reverse proxy and simply forwards requests to the Kestrel web server. One day this may change. Reliably keeping your ASP.NET Core app running on Linux is also described in the ASP.NET Core Documentation.
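
For reference, here is a minimal sketch of a Program.cs that runs Kestrel on the port that the NGINX configuration below forwards requests to (the port number and the WebHostBuilder setup are assumptions based on a typical ASP.NET Core 1.x app):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public static class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()                                 // Use the Kestrel web server.
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseUrls("http://localhost:1025")             // Listen on the port NGINX proxies to.
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}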

Aiming For The Perfect Config File

You've got NGINX running; all you need now is an nginx.conf file to forward requests from the internet to your ASP.NET Core app running on the Kestrel web server.

I have taken the time to combine the recommendations from the HTML5 Boilerplate project, the ASP.NET Core NGINX documentation, the NGINX docs and my own experience to build the nginx.conf (and mime.types) files below, targeting .NET Core apps with the best performance and security in mind.

Not only that, but I've gone to extreme lengths to find out what every setting actually does and have written short comments describing each and every one. The config file is self-describing; from this point forward it needs no further explanation.

# Configure the Nginx web server to run your ASP.NET Core site efficiently.
# See https://docs.asp.net/en/latest/publishing/linuxproduction.html
# See http://nginx.org/en/docs/ and https://www.nginx.com/resources/wiki/

# Set another default user than root for security reasons.
# user                        xxx;

# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes            1;

# Maximum file descriptors that can be opened per process
# This should be > worker_connections
worker_rlimit_nofile        8192;

# Log errors to the following location. Feel free to change these.
error_log                    logs/error.log;
# Write the Nginx master process ID to the following location. Feel free to change this.
pid                            logs/nginx.pid;

events {

    # When you need > 8000 * cpu_cores connections, you start optimizing
    # your OS, and this is probably the point at where you hire people
    # who are smarter than you, this is *a lot* of requests.
    worker_connections        8000;

    # This sets up some smart queueing for accept(2)'ing requests
    # Set it to "on" if you have > worker_processes
    accept_mutex            off;

    # These settings are OS specific, by default Nginx uses select(2),
    # however, for a large number of requests epoll(2) and kqueue(2)
    # are generally faster than the default (select(2))
    # use epoll; # enable for Linux 2.6+
    # use kqueue; # enable for *BSD (FreeBSD, OS X, ..)

}

http {

    # Include MIME type to file extension mappings list.
    include                 mime.types;
    # The default fallback MIME type.
    default_type            application/octet-stream;

    # Format for our log files.
    log_format              main '$remote_addr - $remote_user [$time_local]  $status '
                                 '"$request" $body_bytes_sent "$http_referer" '
                                 '"$http_user_agent" "$http_x_forwarded_for"';

    # Log requests to the following location. Feel free to change this.
    access_log              logs/access.log  main;

    # The number of seconds to keep a connection open.
    keepalive_timeout       29;
    # Defines a timeout for reading client request body.
    client_body_timeout     10;
    # Defines a timeout for reading client request header.
    client_header_timeout   10;
    # Sets a timeout for transmitting a response to the client.
    send_timeout            10;
    # Limit requests from an IP address to five requests per second.
    # See http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
    limit_req_zone          $binary_remote_addr zone=one:10m rate=5r/s;

    # Disables emitting Nginx version in error messages and in the 'Server' HTTP response header.
    server_tokens           off;

    # To serve static files using Nginx efficiently.
    sendfile                on;
    tcp_nopush              on;
    tcp_nodelay             off;

    # Enable GZIP compression.
    gzip                    on;
    # Enable GZIP maximum compression level. Ranges from 1 to 9.
    gzip_comp_level         9;
    # Enable GZIP over HTTP 1.0 (The default is HTTP 1.1).
    gzip_http_version       1.0;
    # Disable GZIP compression for IE 1 to 6.
    gzip_disable            "MSIE [1-6]\.";
    # Enable GZIP compression for the following MIME types (text/html is included by default).
    gzip_types              # Plain Text
                            text/plain
                            text/css
                            text/mathml
                            application/rtf
                            # JSON & JavaScript
                            application/javascript
                            application/json
                            application/manifest+json
                            application/x-web-app-manifest+json
                            text/cache-manifest
                            # XML
                            application/atom+xml
                            application/rss+xml
                            application/xslt+xml
                            application/xml
                            # Fonts
                            font/opentype
                            font/otf
                            font/truetype
                            application/font-woff
                            application/vnd.ms-fontobject
                            application/x-font-ttf
                            # Images
                            image/svg+xml
                            image/x-icon;
    # Enables inserting the 'Vary: Accept-Encoding' response header.
    gzip_vary               on;

    # Sets configuration for a virtual server. You can have multiple virtual servers.
    # See http://nginx.org/en/docs/http/ngx_http_core_module.html#server
    server {

        # Listen for requests on specified port including support for HTTP 2.0.
        # See http://nginx.org/en/docs/http/ngx_http_core_module.html#listen
        listen                      80 http2 default;
        # Or, if using HTTPS, use this:
        # listen                      443 http2 ssl default;
        # Configure SSL/TLS
        # See http://nginx.org/en/docs/http/configuring_https_servers.html
        ssl_certificate             /etc/ssl/certs/testCert.crt;
        ssl_certificate_key         /etc/ssl/certs/testCert.key;
        ssl_protocols               TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers   on;
        ssl_ciphers                 "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
        ssl_ecdh_curve              secp384r1;
        ssl_session_cache           shared:SSL:10m;
        ssl_session_tickets         off;
        # Ensure your cert is capable before turning on SSL Stapling.
        ssl_stapling                on;
        ssl_stapling_verify         on;

        # The name of the virtual server where you can specify one or more domains that you own.
        server_name                    localhost;
        # server_name    example.com www.example.com *.example.com www.example.*;

        # Match incoming requests with the following path and forward them to the specified location.
        # See http://nginx.org/en/docs/http/ngx_http_core_module.html#location
        location / {

            proxy_pass              http://localhost:1025;

            # The default minimum configuration required for ASP.NET Core
            # See https://docs.asp.net/en/latest/publishing/linuxproduction.html?highlight=nginx#configure-a-reverse-proxy-server
            proxy_cache_bypass      $http_upgrade;
            # Turn off changing the URL's in headers like the 'Location' HTTP header.
            proxy_redirect          off;
            # Forwards the Host HTTP header.
            proxy_set_header        Host $host;
            # The Kestrel web server we are forwarding requests to only speaks HTTP 1.1.
            proxy_http_version      1.1;
            proxy_set_header        Upgrade $http_upgrade;
            # Adds the 'Connection: keep-alive' HTTP header.
            proxy_set_header        Connection keep-alive;

            # Sets the maximum allowed size of the client request body.
            client_max_body_size    10m;
            # Sets buffer size for reading client request body.
            client_body_buffer_size 128k;
            # Defines a timeout for establishing a connection with a proxied server.
            proxy_connect_timeout   90;
            # Sets a timeout for transmitting a request to the proxied server.
            proxy_send_timeout      90;
            # Defines a timeout for reading a response from the proxied server.
            proxy_read_timeout      90;
            # Sets the number and size of the buffers used for reading a response from the proxied server.
            proxy_buffers           32 4k;

        }

    }

}

types {

    # An expanded list of MIME type to file extension mappings for Nginx.

    # Data Interchange
    application/atom+xml                  atom;
    application/json                      json map topojson;
    application/ld+json                   jsonld;
    application/rss+xml                   rss;
    application/vnd.geo+json              geojson;
    application/xml                       rdf xml;

    # JavaScript
    application/javascript                js;

    # Manifest files
    application/manifest+json             webmanifest;
    application/x-web-app-manifest+json   webapp;
    text/cache-manifest                   appcache;

    # Media files
    audio/midi                            mid midi kar;
    audio/mp4                             aac f4a f4b m4a;
    audio/mpeg                            mp3;
    audio/ogg                             oga ogg opus;
    audio/x-realaudio                     ra;
    audio/x-wav                           wav;
    image/x-icon                          cur ico;
    image/bmp                             bmp;
    image/gif                             gif;
    image/jpeg                            jpeg jpg;
    image/png                             png;
    image/svg+xml                         svg svgz;
    image/tiff                            tif tiff;
    image/vnd.wap.wbmp                    wbmp;
    image/webp                            webp;
    image/x-jng                           jng;
    video/3gpp                            3gp 3gpp;
    video/mp4                             f4p f4v m4v mp4;
    video/mpeg                            mpeg mpg;
    video/ogg                             ogv;
    video/quicktime                       mov;
    video/webm                            webm;
    video/x-flv                           flv;
    video/x-mng                           mng;
    video/x-ms-asf                        asf asx;
    video/x-ms-wmv                        wmv;
    video/x-msvideo                       avi;

    # Microsoft Office
    application/msword                                                         doc;
    application/vnd.ms-excel                                                   xls;
    application/vnd.ms-powerpoint                                              ppt;
    application/vnd.openxmlformats-officedocument.wordprocessingml.document    docx;
    application/vnd.openxmlformats-officedocument.spreadsheetml.sheet          xlsx;
    application/vnd.openxmlformats-officedocument.presentationml.presentation  pptx;

    # Web Fonts
    application/font-woff                 woff;
    application/font-woff2                woff2;
    application/vnd.ms-fontobject         eot;
    application/x-font-ttf                ttc ttf;
    font/opentype                         otf;

    # Other
    application/java-archive              ear jar war;
    application/mac-binhex40              hqx;
    application/octet-stream              bin deb dll dmg exe img iso msi msm msp safariextz;
    application/pdf                       pdf;
    application/postscript                ai eps ps;
    application/rtf                       rtf;
    application/vnd.google-earth.kml+xml  kml;
    application/vnd.google-earth.kmz      kmz;
    application/vnd.wap.wmlc              wmlc;
    application/x-7z-compressed           7z;
    application/x-bb-appworld             bbaw;
    application/x-bittorrent              torrent;
    application/x-chrome-extension        crx;
    application/x-cocoa                   cco;
    application/x-java-archive-diff       jardiff;
    application/x-java-jnlp-file          jnlp;
    application/x-makeself                run;
    application/x-opera-extension         oex;
    application/x-perl                    pl pm;
    application/x-pilot                   pdb prc;
    application/x-rar-compressed          rar;
    application/x-redhat-package-manager  rpm;
    application/x-sea                     sea;
    application/x-shockwave-flash         swf;
    application/x-stuffit                 sit;
    application/x-tcl                     tcl tk;
    application/x-x509-ca-cert            crt der pem;
    application/x-xpinstall               xpi;
    application/xhtml+xml                 xhtml;
    application/xslt+xml                  xsl;
    application/zip                       zip;
    text/css                              css;
    text/html                             htm html shtml;
    text/mathml                           mml;
    text/plain                            txt;
    text/vcard                            vcard vcf;
    text/vnd.rim.location.xloc            xloc;
    text/vnd.sun.j2me.app-descriptor      jad;
    text/vnd.wap.wml                      wml;
    text/vtt                              vtt;
    text/x-component                      htc;

}

NGINX Modules

Like IIS, NGINX has modules that you can add to provide extra features. There are a number of them out there; I've listed two that I care about and you should too.

Installing modules is best done by downloading the NGINX source, as well as the modules you need, and then compiling the application. There is a feature called dynamic modules which lets you load additional, separate modules after installing NGINX, but the documentation suggests third party modules may not be supported, so I didn't try it out.

HTTP 2.0

The ngx_http_v2_module module lets you use HTTP 2.0. HTTP 2.0 gives your site a very rough ~3-5% performance boost, and that's before using any of its more advanced features, which not many people are using yet.

Brotli Compression

The ngx_brotli module lets NGINX use the Brotli compression algorithm. If you haven't heard about Brotli, you should take note. Brotli is a compression algorithm built by Google and is perhaps set to take over from GZIP as the compression algorithm of the web. It's already fully supported on Firefox, Chrome and Opera with only Edge lagging behind.

Depending on how much extra CPU power you want to use (it can max out your CPU at the highest compression levels, which could effectively DoS your site if someone makes too many requests, so be careful which compression level you choose), Brotli can compress files and save you around 10-20% bandwidth over what GZIP can do! Those are some significant savings.

.NET Boxed

I have updated the .NET Boxed project template, so you can now choose the web server (IIS or NGINX) you want to use. If you choose to use NGINX, you can have it pre-configured just for you, right out of the box.

Conclusions

The main reason I've been taking a serious look at NGINX is hard cash. Running Linux servers in the cloud can cost around half the price of a Windows server. Also, you can nab yourself some pretty big performance wins by using the modules I've listed.

There are some interesting overlaps between ASP.NET Core and NGINX. Both can be used to serve static files, set HTTP headers, GZIP responses, etc. I think ASP.NET Core is slowly going to take on more of the role that was traditionally the preserve of the web server.

The cool thing is that because ASP.NET Core is just C#, we'll have a lot of power to configure things using code. NGINX lets you do more advanced configuration using the Lua language and soon even JavaScript, but putting that logic in the app, where it belongs and where you can do powerful things, makes sense to me.

The Dotnet Watch Tool


The dotnet watch tool is a file watcher for .NET that restarts the application when changes in the source code are detected. If you are using IIS Express, then it actually does this restart for you already. The dotnet watch tool is only really useful if you like to run your app in the console. I personally prefer this over using IIS Express because I can see all my logs flashing by in the console like in the movies, which is occasionally useful if you get an exception.

Dotnet Watch Run Console

Warning: In both cases you have to be careful to start the application by clicking Debug -> Start Without Debugging or hitting the CTRL+F5 keyboard shortcut.

project.json

Setting up the dotnet watch tool is as easy as installing the Microsoft.DotNet.Watcher.Tools NuGet package into the tools section of your project.json file like so (You may need to manually restore packages as there is a bug in the tooling which doesn't restore packages if you only change the tools section):

{
  //...

  "tools": {
    "Microsoft.DotNet.Watcher.Tools": "1.0.0-preview2-final"
    //...
  },

  //...
}

Now, using PowerShell, you can navigate to your project folder and run the dotnet watch run command and you're set. But using the command line is a bit lame if you are using Visual Studio; we can do one better.

launchSettings.json

The launchSettings.json file is used by Visual Studio to launch your application and controls what happens when you hit F5. It turns out you can add additional launch settings here to launch the application using the dotnet watch tool. You can do so by adding a new launch configuration as I've done at the bottom of this file:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:8080/",
      "sslPort": 44300
    }
  },
  "profiles": {
    // Run the app using IIS Express. Use CTRL+F5 or Debug -> Start Without Debugging to edit code and refresh the browser 
    // to see your changes while the app is running.
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "https://localhost:44300/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    // Run the app in console mode using 'dotnet run'.
    "dotnet run": {
      "commandName": "Project",
      "commandLineArgs": "--server.urls http://*:8080",
      "launchBrowser": true,
      "launchUrl": "http://localhost:8080/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    // Use CTRL+F5 or Debug -> Start Without Debugging to use this launch profile. Launches the app using 'dotnet watch', 
    // which allows you to edit code and refresh the browser to see your changes while the app is running.
    "dotnet watch": {
      "executablePath": "C:\\Program Files\\dotnet\\dotnet.exe",
      "commandLineArgs": "watch run --server.urls http://*:8080",
      "launchBrowser": true,
      "launchUrl": "http://localhost:8080/",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Notice that I renamed the second launch profile (which already exists in the default template) to dotnet run because that's actually the command it's running and makes more sense.

The dotnet watch launch profile is running dotnet watch run but it's also passing in the server.urls argument which lets us override the port number. Now we can see the new launch profile in the Visual Studio toolbar like so:

Dotnet Watch in the Visual Studio Toolbar

.NET Boxed

If you read my blog posts, you'll be seeing a trend by now. I built the above feature into the .NET Boxed project templates by default so you can create a new project with this feature built-in, right out of the box. Happy coding!

Making Application Insights Fast & Secure


What is Application Insights?

It's an application monitoring tool available on Microsoft's Azure cloud that you can use to detect errors and usage in your application. For ASP.NET Core apps, it can do this for both your C# and JavaScript code. Its main competitors are New Relic and RayGun.

Implementing Application Insights

Following the Getting Started guide for ASP.NET Core applications requires you to add the following HTML helper to your _Layout.cshtml file:

<head>
    @* ...Omitted *@

    @Html.ApplicationInsightsJavaScript(TelemetryConfiguration) 
</head>

This HTML helper adds an inline script containing the minified JavaScript in snippet.js.

<script type="text/javascript">
    var appInsights=window.appInsights||function(config){{
        function i(config){{t[config]=function(){{var i=arguments;t.queue.push(function(){{t[config].apply(t,i)}})}}}}var t={{config:config}},u=document,e=window,o="script",s="AuthenticatedUserContext",h="start",c="stop",l="Track",a=l+"Event",v=l+"Page",y=u.createElement(o),r,f;y.src=config.url||"https://az416426.vo.msecnd.net/scripts/a/ai.0.js";u.getElementsByTagName(o)[0].parentNode.appendChild(y);try{{t.cookie=u.cookie}}catch(p){{}}for(t.queue=[],t.version="1.0",r=["Event","Exception","Metric","PageView","Trace","Dependency"];r.length;)i("track"+r.pop());return i("set"+s),i("clear"+s),i(h+a),i(c+a),i(h+v),i(c+v),i("flush"),config.disableExceptionTracking||(r="onerror",i("_"+r),f=e[r],e[r]=function(config,i,u,e,o){{var s=f&&f(config,i,u,e,o);return s!==!0&&t["_"+r](config,i,u,e,o),s}}),t
    }}({{
        instrumentationKey: '{0}'
    }});

    window.appInsights=appInsights;
    appInsights.trackPageView();
</script>

This script is responsible for:

  1. Containing the user's instrumentation key (the HTML helper adds this for you).
  2. Downloading the full application insights script asynchronously which actually does all the work.
  3. Recording any logs that occur while the full script is being downloaded.

The Problem

For most websites, this is fine and you can stop here. Here is what can be improved for the rest:

  1. The above adds 1KB to every HTML page. Moving this script into a separate file would mean that it could be cached in the browser the first time it was downloaded. A separate file could also be served from a CDN and distributed globally very quickly.
  2. If you are using a Content Security Policy (CSP) to secure your site, using inline scripts is a big no-no. You could use a nonce (a nonce means you can't cache the page, as each page becomes unique) or, even better, a hash of the script contents, but browser support for CSP 2.0 is not great. Using an external script would be the simplest option, as sketched below.
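
For illustration, once the script is external, a policy along these lines would allow the Application Insights scripts without permitting any inline script (a minimal sketch using inline ASP.NET Core middleware; adjust the sources to your own needs):

application.Use(async (context, next) =>
{
    // Allow scripts from our own origin and the Application Insights CDN, but no inline scripts.
    context.Response.Headers["Content-Security-Policy"] =
        "script-src 'self' https://az416426.vo.msecnd.net";
    await next();
});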

Making It Slightly Faster and More Secure

So what does it take to move the above snippet.js file into a separate file? Well, it turns out that you can get snippet.js from the applicationinsights-js NPM package which you can add to your package.json like so:

{
  "dependencies": {
    "applicationinsights-js": "1.0.5"
    // ...
  }
  // ...
}

The next step is to inject your instrumentation key into snippet.js and also the URL to the full application insights script which is missing from the snippet.js file in the NPM package. I do this using a gulp task like so:

var gulp = require('gulp'),
    sourcemaps = require('gulp-sourcemaps'),    // Creates source map files (https://www.npmjs.com/package/gulp-sourcemaps/)
    replace = require('gulp-replace-task'),     // String replace (https://www.npmjs.com/package/gulp-replace-task/)
    uglify = require('gulp-uglify');            // Minifies JavaScript (https://www.npmjs.com/package/gulp-uglify/)

gulp.task('build-app-insights-js',
    function() {
        return gulp
            .src('./node_modules/ApplicationInsights-JS/JavaScript/JavaScriptSDK/snippet.js')
            .pipe(sourcemaps.init())               // Set up the generation of .map source files for the JavaScript.
            .pipe(
                replace({                          // Carry out the specified find and replace.
                    patterns: [
                        {
                            // match - The string or regular expression to find.
                            match: 'CDN_PATH',
                            // replacement - The string or function used to make the replacement.
                            replacement: 'https://az416426.vo.msecnd.net/scripts/a/ai.0.js'
                        },
                        {
                           match: 'INSTRUMENTATION_KEY',
                           replacement: '11111111-2222-3333-4444-555555555555'
                        }
                    ],
                    usePrefix: false
                }))
            .pipe(uglify())                        // Minifies the JavaScript.
            .pipe(sourcemaps.write('.'))           // Generates source .map files for the JavaScript.
            .pipe(gulp.dest('./wwwroot/js/'));     // Saves the JavaScript file to the specified destination path.
});

Finally we can include the script in our HTML. Don't forget to include the crossorigin attribute on all your script tags, which allows full stack traces to be reported. You can read more about the crossorigin attribute here.

<script asp-append-version="true"
        crossorigin="anonymous"
        src="~/js/application-insights.js"></script>

Conclusion

As usual, all of the above is built into the ASP.NET Core Boilerplate project template, available as a Visual Studio extension, if you select the optional Application Insights feature.

SEO Friendly URL's for ASP.NET Core


For some reason there are not a lot of Search Engine Optimization (SEO) blog posts or projects out there. Taking a few simple steps can make your site rank higher in Google or Bing search results, so it's well worth doing. I have also written a few other SEO related blog posts.

What is an SEO Friendly URL?

This Moz blog post called '15 SEO Best Practices for Structuring URLs' is the best article on the subject of SEO friendly URLs I have found and it's well worth a read.

Essentially you want a simple short URL that tells the user what they are clicking on at a glance. It should also contain keywords pertaining to what is on the page for better Search Engine Optimization (SEO). In short, a page will appear higher up in search results if the term a user searches for appears in the URL. Your URL should look like this:

SEO Friendly URL Example

The URL contains an ID for a product and ends with a friendly title. The title contains alphanumeric characters with dashes instead of spaces. Note that the ID of the product is still included in the URL, to avoid having to deal with two friendly titles with the same name.

If you elect to omit the ID, then you have to do a lot of footwork to make things work. Firstly, you have to use the title as a kind of primary key to get the product data from your database and, secondly, you also have to figure out what to do when there are two pages with the same title. Each time you want to create a new title, you have to scan your data store to see if the title already exists and, if it does, either error and force the creation of a different title or make it unique by adding a number on the end (see the sketch below). This is a lot of work but does produce a nicer URL; the choice is yours.
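
Here is a minimal sketch of that uniqueness check (the ExistsAsync method on the repository is hypothetical, purely for illustration):

public async Task<string> GetUniqueFriendlyTitleAsync(string title)
{
    var friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle(title);
    var uniqueTitle = friendlyTitle;
    var suffix = 2;

    // Keep appending a number until the title no longer exists in the data store.
    while (await this.productRepository.ExistsAsync(uniqueTitle))
    {
        uniqueTitle = $"{friendlyTitle}-{suffix}";
        ++suffix;
    }

    return uniqueTitle;
}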

How to Build One

Take a look at the controller action below. It is a very simple example of how to use SEO friendly URLs. In our example we have a product class which has ID and title properties, where the title is just the name of the product.

[HttpGet("product/{id}/{title}", Name = "GetProduct")]
public IActionResult GetProduct(int id, string title)
{
    // Get the product as indicated by the ID from a database or some repository.
    var product = this.productRepository.Find(id);

    // If a product with the specified ID was not found, return a 404 Not Found response.
    if (product == null)
    {
        return this.NotFound();
    }

    // Get the actual friendly version of the title.
    string friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle(product.Title);

    // Compare the title with the friendly title.
    if (!string.Equals(friendlyTitle, title, StringComparison.Ordinal))
    {
        // If the title is null, empty or does not match the friendly title, return a 301 Permanent
        // Redirect to the correct friendly URL.
        return this.RedirectToRoutePermanent("GetProduct", new { id = id, title = friendlyTitle });
    }

    // The URL the client has browsed to is correct, show them the view containing the product.
    return this.View(product);
}

All the work is done by the FriendlyUrlHelper, which turns the product title (which may contain spaces, numbers or other special characters that would not be allowed in a URL without escaping them) into a lower-kebab-case title.

This generated friendly title is compared with the one that is passed in and, if it is different (someone may have omitted the friendly title or mis-spelled it), we perform a permanent redirect to the product with the same ID but now with the friendly title. This is important for SEO purposes; we want search engines to find only one URL for each product. Finally, if the friendly title matches the one passed in, we return the product view.

The FriendlyUrlHelper

The FriendlyUrlHelper was inspired by a famous Stack Overflow question 'How does Stack Overflow generate its SEO-friendly URLs?'. The full source code for it is shown below.

/// <summary>
/// Helps convert <see cref="string"/> title text to URL friendly <see cref="string"/>'s that can safely be
/// displayed in a URL.
/// </summary>
public static class FriendlyUrlHelper
{
    /// <summary>
    /// Converts the specified title so that it is more human and search engine readable e.g.
    /// http://example.com/product/123/this-is-the-seo-and-human-friendly-product-title. Note that the ID of the
    /// product is still included in the URL, to avoid having to deal with two titles with the same name. Search
    /// Engine Optimization (SEO) friendly URL's gives your site a boost in search rankings by including keywords
    /// in your URL's. They are also easier to read by users and can give them an indication of what they are
    /// clicking on when they look at a URL. Refer to the code example below to see how this helper can be used.
    /// Go to definition on this method to see a code example. To learn more about friendly URL's see
    /// https://moz.com/blog/15-seo-best-practices-for-structuring-urls.
    /// To learn more about how this was implemented see
    /// http://stackoverflow.com/questions/25259/how-does-stack-overflow-generate-its-seo-friendly-urls/25486#25486
    /// </summary>
    /// <param name="title">The title of the URL.</param>
    /// <param name="remapToAscii">if set to <c>true</c>, remaps special UTF8 characters like 'è' to their ASCII
    /// equivalent 'e'. All modern browsers except Internet Explorer display the 'è' correctly. Older browsers and
    /// Internet Explorer percent encode these international characters so they are displayed as '%C3%A8'. What you
    /// set this to depends on whether your target users are English speakers or not.</param>
    /// <param name="maxlength">The maximum allowed length of the title.</param>
    /// <returns>The SEO and human friendly title.</returns>
    /// <code>
    /// [HttpGet("product/{id}/{title}", Name = "GetDetails")]
    /// public IActionResult Product(int id, string title)
    /// {
    ///     // Get the product as indicated by the ID from a database or some repository.
    ///     var product = ProductRepository.Find(id);
    ///
    ///     // If a product with the specified ID was not found, return a 404 Not Found response.
    ///     if (product == null)
    ///     {
    ///         return this.HttpNotFound();
    ///     }
    ///
    ///     // Get the actual friendly version of the title.
    ///     var friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle(product.Title);
    ///
    ///     // Compare the title with the friendly title.
    ///     if (!string.Equals(friendlyTitle, title, StringComparison.Ordinal))
    ///     {
    ///         // If the title is null, empty or does not match the friendly title, return a 301 Permanent
    ///         // Redirect to the correct friendly URL.
    ///         return this.RedirectToRoutePermanent("GetProduct", new { id = id, title = friendlyTitle });
    ///     }
    ///
    ///     // The URL the client has browsed to is correct, show them the view containing the product.
    ///     return this.View(product);
    /// }
    /// </code>
    public static string GetFriendlyTitle(string title, bool remapToAscii = false, int maxlength = 80)
    {
        if (title == null)
        {
            return string.Empty;
        }

        int length = title.Length;
        bool prevdash = false;
        StringBuilder stringBuilder = new StringBuilder(length);
        char c;

        for (int i = 0; i < length; ++i)
        {
            c = title[i];
            if ((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9'))
            {
                stringBuilder.Append(c);
                prevdash = false;
            }
            else if (c >= 'A' && c <= 'Z')
            {
                // tricky way to convert to lower-case
                stringBuilder.Append((char)(c | 32));
                prevdash = false;
            }
            else if ((c == ' ') || (c == ',') || (c == '.') || (c == '/') ||
                (c == '\\') || (c == '-') || (c == '_') || (c == '='))
            {
                if (!prevdash && (stringBuilder.Length > 0))
                {
                    stringBuilder.Append('-');
                    prevdash = true;
                }
            }
            else if (c >= 128)
            {
                int previousLength = stringBuilder.Length;

                if (remapToAscii)
                {
                    stringBuilder.Append(RemapInternationalCharToAscii(c));
                }
                else
                {
                    stringBuilder.Append(c);
                }

                if (previousLength != stringBuilder.Length)
                {
                    prevdash = false;
                }
            }

            if (i == maxlength)
            {
                break;
            }
        }

        if (prevdash)
        {
            return stringBuilder.ToString().Substring(0, stringBuilder.Length - 1);
        }
        else
        {
            return stringBuilder.ToString();
        }
    }

    /// <summary>
    /// Remaps the international character to their equivalent ASCII characters. See
    /// http://meta.stackexchange.com/questions/7435/non-us-ascii-characters-dropped-from-full-profile-url/7696#7696
    /// </summary>
    /// <param name="character">The character to remap to its ASCII equivalent.</param>
    /// <returns>The remapped character</returns>
    private static string RemapInternationalCharToAscii(char character)
    {
        string s = character.ToString().ToLowerInvariant();
        if ("àåáâäãåąā".Contains(s))
        {
            return "a";
        }
        else if ("èéêëę".Contains(s))
        {
            return "e";
        }
        else if ("ìíîïı".Contains(s))
        {
            return "i";
        }
        else if ("òóôõöøőð".Contains(s))
        {
            return "o";
        }
        else if ("ùúûüŭů".Contains(s))
        {
            return "u";
        }
        else if ("çćčĉ".Contains(s))
        {
            return "c";
        }
        else if ("żźž".Contains(s))
        {
            return "z";
        }
        else if ("śşšŝ".Contains(s))
        {
            return "s";
        }
        else if ("ñń".Contains(s))
        {
            return "n";
        }
        else if ("ýÿ".Contains(s))
        {
            return "y";
        }
        else if ("ğĝ".Contains(s))
        {
            return "g";
        }
        else if (character == 'ř')
        {
            return "r";
        }
        else if (character == 'ł')
        {
            return "l";
        }
        else if (character == 'đ')
        {
            return "d";
        }
        else if (character == 'ß')
        {
            return "ss";
        }
        else if (character == 'Þ')
        {
            return "th";
        }
        else if (character == 'ĥ')
        {
            return "h";
        }
        else if (character == 'ĵ')
        {
            return "j";
        }
        else
        {
            return string.Empty;
        }
    }
}

The difference between my version and the one in the Stack Overflow answer is that mine optionally handles non-ASCII characters using the boolean remapToAscii parameter. This parameter remaps special UTF8 characters like è to their ASCII equivalent e. If there is no equivalent, then those characters are dropped. All modern browsers except Internet Explorer and Edge display the è correctly. Older browsers like Internet Explorer percent encode these international characters so they are displayed as %C3%A8. What you set this to depends on whether your target users are English speakers and if you care about supporting IE and Edge. I must say that I was hoping Edge would have added support so that remapToAscii could be turned off by default but I'm sorely disappointed.

Using the third parameter you can specify a maximum length for the title with any additional characters being dropped. Finally, the last thing to say about this method is that it has been tuned for speed.
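
For example, a call like the one below would be expected to produce a lower-kebab-case slug (the output shown in the comment is based on reading the code above):

// Produces "pro-asp-net-core-3rd-edition".
string friendlyTitle = FriendlyUrlHelper.GetFriendlyTitle("Pro ASP.NET Core: 3rd Edition!");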

Where Can I Get It?

This is a great little snippet of code to make your URLs human readable, while giving your site an SEO boost. It doesn't take much effort to use either. This helper class is available in the Boxed.AspNetCore NuGet package or you can look at the source code on the .NET Boxed Framework GitHub page.

Reactive Extensions (Rx) - Part 8 - Timeouts


In part six of this series of blog posts I talked about using Reactive Extensions for adding timeout logic to asynchronous tasks. Something like this:

public async Task<string> WaitForFirstResultWithTimeOut()
{
    Task<string> task = this.DownloadTheInternet();

    return await task
        .ToObservable()
        .Timeout(TimeSpan.FromMilliseconds(1000))
        .FirstAsync();
}

Last week I was working on a project and wanted to add a timeout to my task, but since it was an ASP.NET MVC project, I had no references to Reactive Extensions. After some thought, I discovered another possible method of performing a timeout which may help in certain circumstances.

using (var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMilliseconds(1000)))
{
    try
    {
        return await this.DownloadTheInternet(cancellationTokenSource.Token);
    }
    catch (OperationCanceledException exception)
    {
        Console.WriteLine("Timed Out");
    }
}

I'm using an overload of the CancellationTokenSource constructor which takes a timeout value, then passing the CancellationToken to DownloadTheInternet. This method should periodically check the CancellationToken to see if it has been cancelled and, if so, throw an OperationCanceledException. In this example you'd probably use HttpClient, which handles this for you if you give it the CancellationToken.
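
As an illustration, DownloadTheInternet might look something like this (a minimal sketch; the URL is a placeholder):

private async Task<string> DownloadTheInternet(CancellationToken cancellationToken)
{
    using (var httpClient = new HttpClient())
    {
        // HttpClient observes the token and throws an OperationCanceledException
        // (actually a TaskCanceledException) when the token is cancelled.
        HttpResponseMessage response = await httpClient.GetAsync(
            "http://example.com",
            cancellationToken);
        return await response.Content.ReadAsStringAsync();
    }
}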

The main reason why this method is better is that the task is actually being cancelled and stopped from doing any more work. In my Reactive Extensions example above, the task continues doing work but its result is just ignored.

Custom Project Templates Using dotnet new


Current dotnet new

If you run dotnet new today, you can create a simple console app. The command has very few options, including selecting the language you want to use (C#, VB or F#). However, this is all about to change. Sayed I. Hashimi and Mike Lorbetske, who work at Microsoft in the .NET tooling team, have been kind enough to show me what they've been working on with the intention of getting some feedback.

old dotnet new

dotnet new3

Microsoft is working on a new version of the dotnet new command with support for installing custom project templates from NuGet packages, zip files or folders. If you head over to the dotnet/templating GitHub repository you can follow the very simple instructions and try out a fairly complete version of this command which is temporarily called dotnet new3. The full dotnet new experience is due to be released in conjunction with Visual Studio 2017.

dotnet new3

If you take a look at the screenshot above, you'll notice that there are a lot more options available. You can list all installed project templates and install new ones too.

Creating New Templates

Creating a new project template involves taking a folder containing your project (Mine is called Api-CSharp) and adding a .template.config folder to it containing two files.

Custom project template example folder structure

Template Metadata

The template.json file is where you specify metadata about your project template. This metadata is displayed when someone lists their installed project templates. A really basic one looks like this:

{
  "author": "Muhammad Rehan Saeed (RehanSaeed.com)",
  "classifications": [ "WebAPI", "Boxed" ], // Tags used to search for the template.
  "name": "Dotnet Boxed API",
  "identity": "Dotnet.Boxed.Api.CSharp",    // A unique ID for the project template.
  "shortName": "api",                       // You can create the project using this short name instead of the one above.
  "tags": {
    "language": "C#"                        // Specify that this template is in C#.
  },
  "sourceName": "ApiTemplate",              // Name of the csproj file and namespace that will be replaced.
  "guids": [                                // GUID's used in the project that will be replaced by new ones.
    "837bc53e-0271-4e9c-b5b5-c60ea7a7c7b5",
    "113f2d04-69f0-40c3-8797-ba3f356dd812"
  ]
}

The templating repository's Wiki page talks about what all of the properties mean in a lot more detail, but I've added some basic comments for your understanding.

Installing Templates

Installing the above template from a folder is as easy as using the install command. You can also install templates from zip files and NuGet packages the same way.

dotnet new3 install

Template NuGet Packages

So how do you create a NuGet package containing a project template that's compatible with dotnet new? I'm assuming you are familiar with creating NuGet packages; if not, take a look at the NuGet documentation. You can create NuGet packages of your project templates by creating a Templates.nuspec file like the one below and placing all of your templates in a content folder beside it. The content folder is a special folder which NuGet understands to contain static files. If you look at the nuspec file below, you'll notice the packageType element. This is a new way to tell NuGet that this NuGet package contains project templates.

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>Boxed.Templates</id>
    <version>1.0.0</version>
    <description>My project description.</description>
    <authors>Muhammad Rehan Saeed (RehanSaeed.com)</authors>
    <packageTypes>
      <packageType name="Template" />
    </packageTypes>
  </metadata>
</package>

There is More!

What I've not told you is that it's possible to add features to your project template that developers can turn on or off using command line switches, a bit like Yeoman does for Node based NPM packages. As many of you will know, I already do this in my ASP.NET Core Boilerplate project template, but I came up with my own custom method. dotnet new makes this all a lot easier and I'll cover how to do it in a later blog post.

Why This is Better

Traditionally, to create project templates, you could use Visual Studio to create zip files containing your project template or, if you were brave, you could create Visual Studio extensions (VSIX) to enable installing them directly into Visual Studio and share them on the Visual Studio Marketplace.

This new method makes creating project templates about as easy as it's ever going to get and allows really easy sharing, versioning and personalization of project templates. At some point I envisage a website (possibly the Visual Studio Marketplace) where you could go and install these NuGet based project templates.

.NET Boxed API

I have been working on a brand new project template for building APIs using dotnet new, with a lot of help from the guys at Microsoft. My project templates are quite complex, so it's a good test of the system. The API comes jam-packed full of security, performance and best practice features and also implements Swagger right out of the box. You can try installing it with dotnet new from NuGet.

Conclusions

Overall I'm really impressed with where the new project templating system is headed. It's very easy to do something simple but also very powerful should you need to do something complicated. There are a few blog posts' worth of material here, so expect a few more posts in the coming weeks.

Cross-Platform DevOps for .NET Core


If you're a library author or writing a cross-platform application, then .NET Core is great, but it throws up the question: how do you test that your code works on all operating systems? Well, the answer is simple: you build and test your code on each platform.

This post builds on Andrew Lock's work where he shows in two blog posts how to build, test and deploy your .NET Core NuGet packages using AppVeyor (Windows) and Travis CI (Mac and Linux) continuous integration build systems.

In Andrew's blog posts, he writes PowerShell (Windows) or Bash (Mac and Linux) scripts to build, test and deploy his code. There were two problems here.

  1. Code is duplicated because you have to write your shell scripts twice.
  2. I've already grudgingly learned how to write PowerShell and done a little Bash but found both languages pretty ugly and difficult to use for more complex scenarios.

I only want to write my shell script once, I don't want to have to learn Bash in depth, and I don't want to write PowerShell if I can help it. Around the same time I was reading Andrew's blog posts, I read about Cake build.

Cake

Cake lets you write your build, test and deployment script in C#, and it provides lots of helper methods to get stuff done, making your script very terse. You can get syntax highlighting and IntelliSense for your Cake scripts by installing the Visual Studio or Visual Studio Code extensions.

Building and testing your .NET Core code using Cake is dead simple. Grab the build.cake, build.ps1 and build.sh files from the Cake Getting Started guide and drop them at the root of your project. Here is an example of my project and the files we'll be dealing with in this post:

Cake Files

The build.ps1 and build.sh files are shell scripts that download the Cake executable and execute the build.cake C# script. They also take any parameters that are passed to them and pass them on to your Cake script. Now paste the following into your build.cake file:

// Target - The task you want to start. Runs the Default task if not specified.
var target = Argument("Target", "Default");
// Configuration - The build configuration (Debug/Release) to use.
// 1. If command line parameter passed, use that.
// 2. Otherwise if an Environment variable exists, use that.
var configuration = 
    HasArgument("Configuration") ? Argument("Configuration") :
    EnvironmentVariable("Configuration") != null ? EnvironmentVariable("Configuration") : "Release";
// The build number to use in the version number of the built NuGet packages.
// There are multiple ways this value can be passed, this is a common pattern.
// 1. If command line parameter passed, use that.
// 2. Otherwise if running on AppVeyor, get its build number.
// 3. Otherwise if running on Travis CI, get its build number.
// 4. Otherwise if an Environment variable exists, use that.
// 5. Otherwise default the build number to 0.
var buildNumber =
    HasArgument("BuildNumber") ? Argument<int>("BuildNumber") :
    AppVeyor.IsRunningOnAppVeyor ? AppVeyor.Environment.Build.Number :
    TravisCI.IsRunningOnTravisCI ? TravisCI.Environment.Build.BuildNumber :
    EnvironmentVariable("BuildNumber") != null ? int.Parse(EnvironmentVariable("BuildNumber")) : 0;

// A directory path to an Artefacts directory.
var artefactsDirectory = Directory("./Artefacts");

// Deletes the contents of the Artefacts folder if it should contain anything from a previous build.
Task("Clean")
    .Does(() =>
    {
        CleanDirectory(artefactsDirectory);
    });

// Run dotnet restore to restore all package references.
Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
    {
        DotNetCoreRestore();
    });

// Find all csproj projects and build them using the build configuration specified as an argument.
 Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
    {
        var projects = GetFiles("./**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreBuild(
                project.GetDirectory().FullPath,
                new DotNetCoreBuildSettings()
                {
                    Configuration = configuration
                });
        }
    });

// Look under a 'Tests' folder and run dotnet test against all of those projects.
// Then drop the XML test results file in the Artefacts folder at the root.
Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
    {
        var projects = GetFiles("./Tests/**/*.csproj");
        foreach(var project in projects)
        {
            DotNetCoreTest(
                project.GetDirectory().FullPath,
                new DotNetCoreTestSettings()
                {
                    ArgumentCustomization = args => args
                        .Append("-xml")
                        .Append(artefactsDirectory.Path.CombineWithFilePath(project.GetFilenameWithoutExtension()).FullPath + ".xml"),
                    Configuration = configuration,
                    NoBuild = true
                });
        }
    });

// Run dotnet pack to produce NuGet packages from our projects. Versions the package
// using the build number argument on the script which is used as the revision number 
// (Last number in 1.0.0.0). The packages are dropped in the Artefacts directory.
Task("Pack")
    .IsDependentOn("Test")
    .Does(() =>
    {
        var revision = buildNumber.ToString("D4");
        foreach (var project in GetFiles("./Source/**/*.csproj"))
        {
            DotNetCorePack(
                project.GetDirectory().FullPath,
                new DotNetCorePackSettings()
                {
                    Configuration = configuration,
                    OutputDirectory = artefactsDirectory,
                    VersionSuffix = revision
                });
        }
    });

// The default task to run if none is explicitly specified. In this case, we want
// to run everything starting from Clean, all the way up to Pack.
Task("Default")
    .IsDependentOn("Pack");

// Executes the task specified in the target argument.
RunTarget(target);

At the top of the script, some arguments are defined. Values for these arguments can be passed to the shell scripts on the command line, they can come from environment variables, or they can come from continuous integration build systems that Cake knows about (it knows all the common ones, including TFS, TeamCity, Jenkins and Bamboo). In the above script I show how to get a build number from AppVeyor or Travis CI if the script is currently being run on those systems. This makes the code very short, terse and to the point.

The rest of the script is made up of a series of chained tasks which execute one after the other, starting with the task with no dependencies. Alternatively you can pass in a Target argument which specifies which task you'd like the script to start executing from. A key thing to note is that the script does not need to know about any file names or file paths, everything is done by convention.

One very important effect of using Cake is that your build script is easily testable. I've used many continuous integration systems that have their own proprietary tasks, and when a slow build fails, debugging it is a nightmare, since it can only be done on the build machine. Since Cake is just a script, you can run it on your local machine and test it to your heart's content, which gives you a quicker, tighter development loop.

AppVeyor

AppVeyor is my favourite CI system, but it only works if you are hosting your code in a Git-based repository and it only runs builds on Windows. All you need to do is sign up, enable AppVeyor for your Git repository and add an appveyor.yml file, which is in YAML format. Here is one of my commented appveyor.yml files:

version: '{build}'

pull_requests:
  # Do not increment build number for pull requests
  do_not_increment_build_number: true

nuget:
  # Do not publish NuGet packages for pull requests
  disable_publish_on_pr: true

environment:
  # Set the DOTNET_SKIP_FIRST_TIME_EXPERIENCE environment variable to stop wasting time caching packages
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: true
  # Disable sending usage data to Microsoft
  DOTNET_CLI_TELEMETRY_OPTOUT: true

build_script:
- ps: .\build.ps1

test: off

artifacts:
# Store NuGet packages
- path: .\Artefacts\**\*.nupkg
  name: NuGet
# Store xUnit Test Results
- path: .\Artefacts\**\*.xml
  name: xUnit Test Results

deploy:

# Publish NuGet packages
- provider: NuGet
  name: production
  api_key:
    secure: 73eFUWSfho6pxCy1VRP1H0AYh/SFiyEREV+/ATcoj0I+sSH9dec/WXs6H2Jy5vlS
  on:
    # Only publish from the master branch
    branch: master
    # Only publish if the trigger was a Git tag
    # git tag v0.1.0-beta
    # git push origin --tags
    appveyor_repo_tag: true

It basically executes the build.ps1 file at the root of my project and collects all the NuGet package and XML unit test result files in my artefacts folder. I also set some environment variables to turn off some lesser known .NET Core features for a faster build.

AppVeyor knows about NuGet and I use AppVeyor as my primary build system to publish my NuGet packages (you don't want AppVeyor and Travis CI both publishing your NuGet packages). Now, I could have created a task in my Cake file to publish NuGet packages and only executed that task when running on AppVeyor, but AppVeyor has a pretty easy-to-use configuration file, so I've chosen to do this step there instead.
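
For reference, here is a rough sketch of what that Cake task could look like. It's only an illustration of the alternative, not what I actually use, and the task name and NuGetApiKey environment variable are assumptions.

// Push any NuGet packages found in the Artefacts folder, but only when running on AppVeyor.
Task("PublishPackages")
    .IsDependentOn("Pack")
    .WithCriteria(() => AppVeyor.IsRunningOnAppVeyor)
    .Does(() =>
    {
        foreach (var package in GetFiles("./Artefacts/**/*.nupkg"))
        {
            NuGetPush(
                package,
                new NuGetPushSettings()
                {
                    Source = "https://www.nuget.org/api/v2/package",
                    ApiKey = EnvironmentVariable("NuGetApiKey")
                });
        }
    });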

To publish packages to NuGet, you sign-up and receive an API key. Of course, you don't want to share that with the whole world by checking it into GitHub or Bitbucket, so AppVeyor lets you encrypt it and paste the encrypted value into the appveyor.yml file.

Travis CI

Travis CI is very similar to AppVeyor but it targets both Mac and Linux. All you have to do is sign-up, turn on Travis for your repository and stick a .travis.yml file in the root of your project. Here is mine:

language: csharp
os:
  - linux
  - osx

# .NET CLI requires Ubuntu 14.04
sudo: required
dist: trusty
addons:
  apt:
    packages:
    - gettext
    - libcurl4-openssl-dev
    - libicu-dev
    - libssl-dev
    - libunwind8
    - zlib1g

# .NET CLI requires OSX 10.11
osx_image: xcode7.2

# Ensure that .NET Core is installed
dotnet: 1.0.0-preview2-1-003177
# Ensure Mono is installed
mono: latest

env:
    # Set the DOTNET_SKIP_FIRST_TIME_EXPERIENCE environment variable to stop wasting time caching packages
  - DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true
    # Disable sending usage data to Microsoft
  - DOTNET_CLI_TELEMETRY_OPTOUT=true

# You must run this command to give Travis permissions to execute the build.sh shell script:
# git update-index --chmod=+x build.sh
script:
  - ./build.sh

You'll notice that we are specifying that we want to build our code on both Mac and Linux. Travis CI will actually run one build for each operating system. We then specify some details about the versions of the operating systems we want to use and what we would like to install on them.

Once again, I set the .NET environment variables to make the build a bit quicker and finally we run the build.sh Bash script to kick things off. Note that you need to run the following command to give Travis permission to execute the build.sh file (This is Linux after all):

git update-index --chmod=+x build.sh

Another thing to note is that if you are still using the older xproj project system and your unit tests are using xUnit, then your tests will not run due to this bug. There is a very nasty workaround in the link.

Conclusions

If you want to learn how to add AppVeyor and Travis CI build status badges to your Git repository ReadMe or learn how to deploy to MyGet/NuGet using tags, I recommend going back to read Andrew's blog post which is still useful. If you're looking for more examples of Cake build scripts, you can take a look at the following Cake repositories:

  • Cake - Cake builds itself with Cake! They have a very complicated build setup. This repository is great for learning about Cake helper methods that you can use in your scripts.
  • Serilog.Exceptions - Builds, tests and deploys .NET Core NuGet packages.
  • .NET Boxed Framework - Builds, tests and deploys .NET Core NuGet packages.
  • .NET Boxed Templates - Builds, tests and deploys a dotnet new NuGet package.

Cleaning Up CSPROJ

::: tip TLDR I show how to make csproj XML concise and pretty for hand editing. :::

I used project.json since Beta 7 and got used to hand editing it, and I've continued that practice with .csproj files; I think you should too. Recent versions of Visual Studio have made a lot of performance improvements, but it's still a lot slower than hand editing a text file.

The NuGet package screen in Visual Studio is achingly slow, whereas bulk editing the file by hand takes seconds. I can update NuGet package references, package properties etc. all in one go, rather than visiting multiple disparate UIs in Visual Studio. Finally, I create new projects by copying and pasting an existing csproj and tweaking it, which is much faster than Visual Studio's New Project dialogue.

Install Project File Tools

The Project File Tools Visual Studio extension gives you IntelliSense for NuGet packages in the new csproj projects. Unfortunately, due to MSBuild being around for so long and being so complex, IntelliSense for the rest of the project XML consists of a massive list of possible properties, so it's less useful than it was in project.json.

dotnet migrate - Wow that's ugly!

After migrating my project.json projects to csproj using Visual Studio 2017 (you could also use the dotnet migrate command), I found that the XML generated was pretty ugly and contained superfluous elements you just don't need. Here is an example csproj library project straight after migration:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <Description>...</Description>
    <Copyright>Copyright © Muhammad Rehan Saeed. All rights Reserved</Copyright>
    <AssemblyTitle>Dotnet Boxed Framework</AssemblyTitle>
    <VersionPrefix>2.2.2</VersionPrefix>
    <Authors>Muhammad Rehan Saeed (RehanSaeed.com)</Authors>
    <TargetFrameworks>netstandard1.6;net461</TargetFrameworks>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
    <AssemblyName>Boxed.AspNetCore</AssemblyName>
    <AssemblyOriginatorKeyFile>../../../Key.snk</AssemblyOriginatorKeyFile>
    <SignAssembly>true</SignAssembly>
    <PublicSign Condition=" '$(OS)' != 'Windows_NT' ">true</PublicSign>
    <PackageId>Boxed.AspNetCore</PackageId>
    <PackageTags>ASP.NET;ASP.NET Core;MVC;Boxed;Muhammad Rehan Saeed;Framework</PackageTags>
    <PackageReleaseNotes>Updated to ASP.NET Core 1.1.2.</PackageReleaseNotes>
    <PackageIconUrl>https://raw.githubusercontent.com/Dotnet-Boxed/Framework/master/Images/Icon.png</PackageIconUrl>
    <PackageProjectUrl>https://github.com/Dotnet-Boxed/Framework</PackageProjectUrl>
    <PackageLicenseUrl>https://github.com/Dotnet-Boxed/Framework/blob/master/LICENSE</PackageLicenseUrl>
    <PackageRequireLicenseAcceptance>true</PackageRequireLicenseAcceptance>
    <RepositoryType>git</RepositoryType>
    <RepositoryUrl>https://github.com/Dotnet-Boxed/Framework.git</RepositoryUrl>
    <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
    <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
    <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\Framework\Framework.csproj" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Abstractions" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Caching.Abstractions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="1.1.1" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
    <PackageReference Include="StyleCop.Analyzers" Version="1.0.0">
      <PrivateAssets>All</PrivateAssets>
    </PackageReference>
  </ItemGroup>

  <ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.6' ">
    <PackageReference Include="System.Xml.XDocument" Version="4.3.0" />
  </ItemGroup>

  <ItemGroup Condition=" '$(TargetFramework)' == 'net461' ">
    <Reference Include="System.ServiceModel" />
    <Reference Include="System.Xml" />
    <Reference Include="System.Xml.Linq" />
    <Reference Include="System" />
    <Reference Include="Microsoft.CSharp" />
  </ItemGroup>

</Project>

Understanding new csproj Projects

The top of the project contains a new Sdk attribute. This imports some MSBuild targets and props files from your dotnet installation folder, shown below:

dotnet SDK's

If you root around in those files, you can find defaults for all kinds of settings. Here are some of the nuggets I discovered about the web projects:

  • The NETStandard.Library version 1.6.1 NuGet package is referenced for you by default.
  • The wwwroot folder is excluded from compilation but included in the published output.
  • web.config, .cshtml and .json files are published by default.
  • Server garbage collection is turned on by default using the ServerGarbageCollection setting.
  • PreserveCompilationContext is set to true by default.
  • node_modules, jspm_packages and bower_components are excluded by default.

AssemblyInfo.cs is Partially Dead

You don't need AssemblyInfo.cs anymore by default as the csproj Package settings also set many of the assembly attributes. In fact, you didn't really need it with project.json either but the default templates mostly included it for some reason. However, I still found I needed to resurrect it in some cases to use the InternalsVisibleTo attribute. InternalsVisibleTo allows my unit test projects to access internal members in my library project. After a dotnet migrate, you may see the following elements which stop certain assembly attributes from being generated. You can safely delete these.

<PropertyGroup>
  <!-- ...Omitted -->
  <GenerateAssemblyConfigurationAttribute>false</GenerateAssemblyConfigurationAttribute>
  <GenerateAssemblyCompanyAttribute>false</GenerateAssemblyCompanyAttribute>
  <GenerateAssemblyProductAttribute>false</GenerateAssemblyProductAttribute>
</PropertyGroup>
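
If you do need to resurrect AssemblyInfo.cs to use InternalsVisibleTo, the file can be as small as this (the assembly name below is just an example):

using System.Runtime.CompilerServices;

// Allow the unit test project to see internal members of this library.
[assembly: InternalsVisibleTo("Boxed.AspNetCore.Test")]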

Remove System.* References

You no longer need to explicitly add System.* references to your csproj. David Fowler recommends that you always reference the NETStandard.Library meta NuGet package, which gives you most System.* references. You get NETStandard.Library by default if you use the Sdk attribute at the top of the csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <!-- ...Omitted -->
</Project>

This meant that I could remove the entire code block below except System.ServiceModel because that reference is not given to you by the NETStandard.Library NuGet package.

<ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.6' ">
  <PackageReference Include="System.Xml.XDocument" Version="4.3.0" />
</ItemGroup>

<ItemGroup Condition=" '$(TargetFramework)' == 'net461' ">
  <Reference Include="System.ServiceModel" />
  <Reference Include="System.Xml" />
  <Reference Include="System.Xml.Linq" />
  <Reference Include="System" />
  <Reference Include="Microsoft.CSharp" />
</ItemGroup>

Turn Elements into Attributes

For some reason dotnet migrate produces overly verbose XML in some cases by outputting XML elements instead of attributes. I have a NuGet reference to StyleCop.Analyzers which is a build-time dependency, so I don't want it to be output to my bin directory. You do this by setting the PrivateAssets property, and you can turn this:

<PackageReference Include="StyleCop.Analyzers" Version="1.0.0">
  <PrivateAssets>All</PrivateAssets>
</PackageReference>

Into this:

<PackageReference Include="StyleCop.Analyzers" PrivateAssets="All" Version="1.0.0" />

Label your Sections

You can label your PropertyGroup and ItemGroup elements using the Label attribute:

<PropertyGroup Label="Package">
  <!-- NuGet package metadata omitted -->
</PropertyGroup>

So the question becomes, how should we label them? Well, the convention I use is to use the same label names as the ones in Visual Studio's project properties screen:

Project Properties Tabs

The End Result

This is what my csproj looks like at the end of all that. I've removed all the extra fluff you don't need and labelled the properties in a way that makes navigating the file with your eye that much quicker.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup Label="Build">
    <TargetFrameworks>netstandard1.6;net461</TargetFrameworks>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
    <CodeAnalysisRuleSet>../../../MinimumRecommendedRulesWithStyleCop.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>

  <PropertyGroup Label="Package">
    <VersionPrefix>2.2.2</VersionPrefix>
    <Authors>Muhammad Rehan Saeed (RehanSaeed.com)</Authors>
    <Product>Dotnet Boxed Framework</Product>
    <Description>...</Description>
    <Copyright>Copyright © Muhammad Rehan Saeed. All rights Reserved</Copyright>
    <PackageRequireLicenseAcceptance>true</PackageRequireLicenseAcceptance>
    <PackageLicenseUrl>https://github.com/Dotnet-Boxed/Framework/blob/master/LICENSE</PackageLicenseUrl>
    <PackageProjectUrl>https://github.com/Dotnet-Boxed/Framework</PackageProjectUrl>
    <PackageIconUrl>https://raw.githubusercontent.com/Dotnet-Boxed/Framework/master/Images/Icon.png</PackageIconUrl>
    <RepositoryUrl>https://github.com/Dotnet-Boxed/Framework.git</RepositoryUrl>
    <RepositoryType>git</RepositoryType>
    <PackageTags>ASP.NET;ASP.NET Core;MVC;Boxed;Muhammad Rehan Saeed;Framework</PackageTags>
    <PackageReleaseNotes>Updated to ASP.NET Core 1.1.2.</PackageReleaseNotes>
  </PropertyGroup>

  <PropertyGroup Label="Signing">
    <SignAssembly>true</SignAssembly>
    <AssemblyOriginatorKeyFile>../../../Key.snk</AssemblyOriginatorKeyFile>
    <PublicSign Condition=" '$(OS)' != 'Windows_NT' ">true</PublicSign>
  </PropertyGroup>

  <ItemGroup Label="Project References">
    <ProjectReference Include="..\Boilerplate\Boilerplate.csproj" />
  </ItemGroup>

  <ItemGroup Label="Package References">
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Abstractions" Version="1.1.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.2" />
    <PackageReference Include="Microsoft.Extensions.Caching.Abstractions" Version="1.1.1" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Binder" Version="1.1.1" />
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
    <PackageReference Include="StyleCop.Analyzers" PrivateAssets="All" Version="1.0.0" />
  </ItemGroup>

  <ItemGroup Condition=" '$(TargetFramework)' == 'net461' " Label=".NET 4.6.1 Package References">
    <Reference Include="System.ServiceModel" />
  </ItemGroup>

</Project>

dotnet new Feature Selection

In my last post I showed how to get started with using dotnet new to build project templates. In this post, I'm going to build on that knowledge and show how to add feature selection to your project template so developers can choose to add or remove bits of your template. If you check out my .NET Boxed API project template, you'll see that I have 17 features for you to set. If you run the help command against my template you'll see a description of each and instructions on how you can set them (I've cleaned up the CLI output below; the current help command's output is pretty awful, but this is being addressed in the next version of dotnet new).

PS C:\Users\rehan.saeed> dotnet new api --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:
  template  The template to instantiate.

Options:
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is
used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates

.NET Boxed API (C#)
Author: Muhammad Rehan Saeed (RehanSaeed.com)
Options:
  -Ti|--Title: The name of the project which determines the assembly product name. If the Swagger feature is enabled,
    shows the title on the Swagger UI.
    string - Optional
    Default: Project Title
  -D|--Description: A description of the project which determines the assembly description. If the Swagger feature is
    enabled, shows the description on the Swagger UI.
    string - Optional
    Default: Project Description
  -Au|--Author: The name of the author of the project which determines the assembly author, company and copyright
    information.
    string - Optional
    Default: Project Author
  -Sw|--Swagger: Swagger is a format for describing the endpoints in your API. Swashbuckle is used to generate a
    Swagger document and to generate beautiful API documentation, including a UI to explore and test operations,
    directly from your routes, controllers and models.
    bool - Optional
    Default: true
  -T|--TargetFramework: Decide which version of the .NET Framework to target.
    .NET Core         - Run cross platform (on Windows, Mac and Linux). The framework is made up of NuGet packages
                        which can be shipped with the application so it is fully stand-alone.
    .NET Framework    - Gives you access to the full breadth of libraries available in .NET instead of the subset
                        available in .NET Core but requires it to be pre-installed.
    Both              - Target both .NET Core and .NET Framework.
    Default: Both
  -P|--PrimaryWebServer: The primary web server you want to use to host the site.
    Kestrel        - A web server for ASP.NET Core that is not intended to be internet facing as it has not been
                     security tested. IIS or NGINX should be placed in front as reverse proxy web servers.
    WebListener    - A Windows only web server. It gives you the option to take advantage of Windows specific
                     features, like Windows authentication, port sharing, HTTPS with SNI, HTTP/2 over TLS
                     (Windows 10), direct file transmission, and response caching WebSockets (Windows 8).
    Default: Kestrel
  -Re|--ReverseProxyWebServer: The internet facing reverse proxy web server you want to use in front of the primary
    web server to host the site.
    Internet Information Services (IIS) - A flexible, secure and manageable Web server for hosting anything on the
                                          Web using Windows Server. Select this option if you are deploying your site
                                          to Azure web apps. IIS is preconfigured to set request limits for security.
    NGINX                               - A free, open-source, cross-platform high-performance HTTP server and
                                          reverse proxy, as well as an IMAP/POP3 proxy server. It does have a Windows
                                          version but its not very fast and IIS is better on that platform. If the
                                          HTTPS Everywhere feature is enabled, NGINX is pre-configured to enable the
                                          most secure TLS protocols and ciphers for security and to enable HTTP 2.0
                                          and SSL stapling for performance.
    Both                                - Support both reverse proxy web servers.
    Default: Both
  -C|--CloudProvider: Select which cloud provider you are using if any, to add cloud specific features.
    Azure    - The Microsoft Azure cloud. Adds logging features that let you see logs in the Azure portal.
    None     - No cloud provider is being used.
    Default: None
  -A|--Analytics: Monitor internal information about how your application is running, as well as external user
    information.
    Application Insights    - Monitor internal information about how your application is running, as well as
                              external user information using the Microsoft Azure cloud.
    None                    - Not using any analytics.
    Default: None
  -Ap|--ApplicationInsightsInstrumentationKey: Your Application Insights instrumentation key
    e.g. 11111111-2222-3333-4444-555555555555.
    string - Optional
    Default: APPLICATION-INSIGHTS-INSTRUMENTATION-KEY
  -H|--HttpsEverywhere: Use the HTTPS scheme and TLS security across the entire site, redirects HTTP to HTTPS and
    adds a Strict Transport Security (HSTS) HTTP header with preloading enabled. Configures the primary and reverse
    proxy web servers for best security and adds a development certificate file for use in your development environment.
    bool - Optional
    Default: true
  -Pu|--PublicKeyPinning: Adds the Public-Key-Pins (HPKP) HTTP header to responses. It stops man-in-the-middle
    attacks by telling browsers exactly which TLS certificate you expect. You must have two TLS certificates for this
    to work, if you get this wrong you will have performed a denial of service attack on yourself.
    bool - Optional
    Default: false
  -CO|--CORS: Browser security prevents a web page from making AJAX requests to another domain. This restriction is
    called the same-origin policy, and prevents a malicious site from reading sensitive data from another site.
    CORS is a W3C standard that allows a server to relax the same-origin policy. Using CORS, a server can explicitly
    allow some cross-origin requests while rejecting others.
    bool - Optional
    Default: true
  -X|--XmlFormatter: Choose whether to use the XML input/output formatter and which serializer to use.
    DataContractSerializer - The default XML serializer you should use. Requires the use of [DataContract] and
                             [DataMember] attributes.
    XmlSerializer          - The alternative XML serializer which is slower but gives more control. Uses the
                             [XmlRoot], [XmlElement] and [XmlAttribute] attributes.
    None                   - No XML formatter.
    Default: None
  -S|--StatusController: An endpoint that returns the status of this API and its dependencies, giving an indication
    of its health. This endpoint can be called by site monitoring tools which ping the site or by load balancers
    which can remove an instance of this API if it is not functioning correctly.
    bool - Optional
    Default: true
  -R|--RequestId: Require that all requests send the X-Request-ID HTTP header containing a GUID. This is useful where
    you have access to the client and server logs and want to correlate a request and response between the two.
    bool - Optional
    Default: false
  -U|--UserAgent: Require that all requests send the User-Agent HTTP header containing the application name and
    version of the caller.
    bool - Optional
    Default: false
  -Ro|--RobotsTxt: Adds a robots.txt file to tell search engines not to index this site.
    bool - Optional
    Default: true
  -Hu|--HumansTxt: Adds a humans.txt file where you can tell the world who wrote the application. This file is a good
    place to thank your developers.
    bool - Optional
    Default: true

As you can see from the output, there are a few different types of feature you can create. You can also choose to make a feature required or optional. An optional feature, if not specified by the user, will fall back to a default value. Here are the different types available:

  • bool - This feature can be turned on or off and has a default of true or false.
  • string - This can be used to do a string replacement in your template. It has a default value which you can set to any arbitrary value.
  • choice - This is a feature with two or more named choices. Each choice can have its own description. The default value must be one of the choices.
  • computed - These are feature flags that are computed based on other symbols.

Bool Symbols

You can create a boolean feature by adding a symbols section to your template.json file. If you look at the example below, I've specified an optional bool symbol with a default value of true.

{
  ...
  "symbols": {
    "Swagger": {
      "type": "parameter",
      "datatype": "bool",
      "isRequired": false,
      "defaultValue": "true",
      "description": "Your description..."
    }
  }
}

In your code, you can then use the symbol name, in this case Swagger as a pre-processor directive in C# code:

#if (Swagger)
Console.WriteLine("Swagger feature was selected");
#else
Console.WriteLine("Swagger feature was not selected");
#endif

This is really cool because you can still run the application as a template author and the project will still work. If you define a Swagger constant in your project properties, your feature will turn on or off too. This makes debugging your project template very easy as a template author.

If you want to use the symbol in files other than C#, where pre-processor directives do not exist, you can use the comment syntax specific to that file extension, so a JavaScript file would use the // syntax:

//#if (Swagger)
console.log('Swagger feature was selected');
//#else
console.log('Swagger feature was not selected');
//#endif

Most file extensions that have their own comment syntax have been catered for. For text files where there is no comment syntax or for any file extension that the templating engine doesn't know about you can use the # character:

#if (Swagger)
Swagger feature was selected
#else
Swagger feature was not selected
#endif

You can look at this code in the templating engine for a full list of supported file extensions and comment types.

String Symbols

String symbols can be used to do simple file replace operations.

{
  ...
  "symbols": {
    "Title": {
      "type": "parameter",
      "datatype": "string",
      "isRequired": false,
      "defaultValue": "Default Project Title",
      "replaces": "PROJECT-TITLE",
      "description": "Your description..."
    }
  }
}

The above symbol looks for a PROJECT-TITLE string and replaces it with whatever the user specifies or with the default value Default Project Title if the user doesn't set anything.

Choice Symbols

A choice symbol is useful when you have more than two options and can't use bool.

{
  ...
  "symbols": {
    "TargetFramework": {
      "type": "parameter",
      "datatype": "choice",
      "isRequired": false,
      "choices": [
        {
          "choice": ".NET Core",
          "description": "Your description..."
        },
        {
          "choice": ".NET Framework",
          "description": "Your description..."
        },
        {
          "choice": "Both",
          "description": "Your description..."
        }
      ],
      "defaultValue": "Both",
      "description": "Your description..."
    }
  }
}

In the example above, you have the choice of selecting a target framework, with a value of .NET Core, .NET Framework or Both. Each choice has its own description and the overall symbol also has its own description.

Computed Symbols

In the above example, you can't use the value '.NET Core' as a C# pre-processor variable because it contains a dot and a space. This is where a computed symbol comes in handy.

{
  ...
  "symbols": {
   "NETCore": {
      "type": "computed",
      "value": "(TargetFramework == \".NET Core\" || TargetFramework == \"Both\")"
    },
    "NETFramework": {
      "type": "computed",
      "value": "(TargetFramework == \".NET Framework\" || TargetFramework == \"Both\")"
    }
  }
}

Here I have set up two computed symbols which determine whether '.NET Core' or '.NET Framework' was selected, individually, in the previous choice symbol. I have named these symbols without a dot or space, i.e. NETCore and NETFramework, so I can use them as C# pre-processor symbols in the same way I showed above.
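
The computed symbols can then be used just like the bool symbol earlier, for example:

#if (NETCore)
Console.WriteLine(".NET Core was selected");
#endif
#if (NETFramework)
Console.WriteLine(".NET Framework was selected");
#endif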

Conditionally Deleting Files or Folders

You can also use symbols to delete certain files or folders. In this example, I've extended my bool symbol example to additionally remove two files and a folder if the feature is deselected by the user.

{
  ...
  "symbols": {
    "Swagger": {
      "type": "parameter",
      "datatype": "bool",
      "isRequired": false,
      "defaultValue": "true",
      "description": "Your description..."
    }
  },
  "sources": [
    {
      "modifiers": [
        {
          "condition": "(!Swagger)",
          "exclude": [
            "Constants/HomeControllerRoute.cs",
            "Controllers/HomeController.cs",
            "ViewModelSchemaFilters/**/*"
          ]
        }
      ]
    }
  ]
}

You do this by adding source modifiers. I've added one here with a condition and three file and folder exclusions. The exclusions use a globbing pattern.

What's Next?

There are several other useful features of the templating engine which I'll cover in a follow up post as this is starting to get quite long. Feel free to take a look at the source code for my API template to see a full example.

ASP.NET Core Lazy Command Pattern

::: tip TLDR Move your ASP.NET Core MVC action method logic into lazily loaded commands using the command pattern. :::

When writing your Controllers in ASP.NET Core, you can end up with a very long class if you're not careful. You may have written several action methods with a few lines of code in each, you may be injecting a few services into your controller and you may have commented your action methods to support Swagger. The point is, it's very easy to do. Here is an example:

[Route("[controller]")]
public class RocketController : Controller
{
    private readonly IPlanetRepository planetRepository;
    private readonly IRocketRepository rocketRepository;

    public RocketController(
        IPlanetRepository planetRepository,
        IRocketRepository rocketRepository)
    {
        this.planetRepository = planetRepository;
        this.rocketRepository = rocketRepository;
    }

    [HttpGet("{rocketId}")]
    public async Task<IActionResult> GetRocket(int rocketId)
    {
        var rocket = await this.rocketRepository.GetRocket(rocketId);
        if (rocket == null)
        {
            return this.NotFound();
        }
        return this.Ok(rocket);
    }

    [HttpGet("{rocketId}/launch/{planetId}")]
    public async Task<IActionResult> LaunchRocket(int rocketId, int planetId)
    {
        var rocket = await this.rocketRepository.GetRocket(rocketId);
        if (rocket == null)
        {
            return this.NotFound();
        }
        var planet = await this.planetRepository.GetPlanet(planetId);
        if (planet == null)
        {
            return this.NotFound();
        }
        this.rocketRepository.VisitPlanet(rocket, planet);
        return this.Ok(rocket);
    }
}

The Command Pattern

This is where the command pattern can come in handy. The command pattern moves logic from each action method and injected dependencies into their own class like so:

[Route("[controller]")]
public class RocketController : Controller
{
    private readonly Lazy<IGetRocketCommand> getRocketCommand;
    private readonly Lazy<ILaunchRocketCommand> launchRocketCommand;

    public RocketController(
        Lazy<IGetRocketCommand> getRocketCommand,
        Lazy<ILaunchRocketCommand> launchRocketCommand)
    {
        this.getRocketCommand = getRocketCommand;
        this.launchRocketCommand = launchRocketCommand;
    }

    [HttpGet("{rocketId}")]
    public Task<IActionResult> GetRocket(int rocketId) =>
        this.getRocketCommand.Value.ExecuteAsync(rocketId);

    [HttpGet("{rocketId}/launch/{planetId}")]
    public Task<IActionResult> LaunchRocket(int rocketId, int planetId) =>
        this.launchRocketCommand.Value.ExecuteAsync(rocketId, planetId);
}

public interface IGetRocketCommand : IAsyncCommand<int>
{
}

public class GetRocketCommand : IGetRocketCommand
{
    private readonly IRocketRepository rocketRepository;

    public GetRocketCommand(IRocketRepository rocketRepository) =>
        this.rocketRepository = rocketRepository;

    public async Task<IActionResult> ExecuteAsync(int rocketId)
    {
        var rocket = await this.rocketRepository.GetRocket(rocketId);
        if (rocket == null)
        {
            return new NotFoundResult();
        }
        return new OkObjectResult(rocket);
    }
}

All of the logic and dependencies in the controller get moved to the command, which now has a single responsibility. The controller now has a different set of dependencies; it now lazily injects one command per action method.

You may have noticed the IAsyncCommand interface. I keep four of these handy to inherit from. They all outline an ExecuteAsync method to execute the command and return an IActionResult, but they have a differing number of parameters. I personally feel that if you need more than three parameters, you should use a class to represent them, so I've put the limit at three parameters.

public interface IAsyncCommand
{
    Task<IActionResult> ExecuteAsync();
}
public interface IAsyncCommand<T>
{
    Task<IActionResult> ExecuteAsync(T parameter);
}
public interface IAsyncCommand<T1, T2>
{
    Task<IActionResult> ExecuteAsync(T1 parameter1, T2 parameter2);
}
public interface IAsyncCommand<T1, T2, T3>
{
    Task<IActionResult> ExecuteAsync(T1 parameter1, T2 parameter2, T3 parameter3);
}

Why so Lazy?

Why do we use Lazy<T>? Well, the answer is that if we have multiple action methods on our controller, we don't want to instantiate the dependencies for every action method if we are only planning on using one of them. Registering our Lazy commands requires a bit of extra work in our Startup.cs. We can register lazy dependencies like so:

public void ConfigureServices(IServiceCollection services)
{
    // ...Omitted
    services
        .AddScoped<IGetRocketCommand, GetRocketCommand>()
        .AddScoped(x => new Lazy<IGetRocketCommand>(
            () => x.GetRequiredService<IGetRocketCommand>()));
}
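
If you have a lot of commands, that registration code gets repetitive. A small extension method can tidy it up; this is just a sketch, and AddScopedLazy is not a built-in method:

public static class LazyServiceCollectionExtensions
{
    // Registers the service and a Lazy<TService> wrapper around it in one call.
    public static IServiceCollection AddScopedLazy<TService, TImplementation>(
        this IServiceCollection services)
        where TService : class
        where TImplementation : class, TService =>
        services
            .AddScoped<TService, TImplementation>()
            .AddScoped(x => new Lazy<TService>(
                () => x.GetRequiredService<TService>()));
}

With that in place, each command is registered with a single line, e.g. services.AddScopedLazy<IGetRocketCommand, GetRocketCommand>();.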

HttpContext and ActionContext

Now you might be thinking, how do I access the HttpContext or ActionContext if I want to set an HTTP header, for example? Well, you can use the IHttpContextAccessor or IActionContextAccessor interfaces for this purpose. You can register them in your Startup class like so:

public void ConfigureServices(IServiceCollection services)
{
    // ...Omitted
    services
        .AddSingleton<IHttpContextAccessor, HttpContextAccessor>()
        .AddSingleton<IActionContextAccessor, ActionContextAccessor>();
}

Notice that they can be registered as singletons. You can then use them to get hold of the HttpContext or ActionContext objects for the current HTTP request. Here is a really simple example.

public class SetHttpHeaderCommand : ISetHttpHeaderCommand
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public SetHttpHeaderCommand(IHttpContextAccessor httpContextAccessor) =>
        this.httpContextAccessor = httpContextAccessor;

    public async Task<IActionResult> ExecuteAsync()
    {
        this.httpContextAccessor.HttpContext.Response.Headers.Add("X-Rocket", "Saturn V");
        return new OkResult();
    }
}

Unit Testing

Another upside of the command pattern is that testing each command becomes super simple. You don't need to set up a controller with lots of dependencies that you don't care about. You only need to write test code for that single feature.
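
Here is a sketch of what such a test could look like using xUnit and Moq. The Rocket model and the repository method are assumed from the example above.

public class GetRocketCommandTest
{
    [Fact]
    public async Task ExecuteAsync_RocketNotFound_ReturnsNotFound()
    {
        // Arrange: the repository returns no rocket for any ID.
        var rocketRepository = new Mock<IRocketRepository>();
        rocketRepository
            .Setup(x => x.GetRocket(It.IsAny<int>()))
            .ReturnsAsync((Rocket)null);
        var command = new GetRocketCommand(rocketRepository.Object);

        // Act
        var result = await command.ExecuteAsync(1);

        // Assert: no controller, routing or unrelated dependencies needed.
        Assert.IsType<NotFoundResult>(result);
    }
}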

Conclusions

For a full working example, take a look at the .NET Boxed API project template which makes full use of the Lazy Command Pattern.

Structured Data using Schema.NET

What is Schema.org?

Schema.org defines a set of standard classes and their properties for objects and services in the real world. At the time of writing, there are nearly 700 classes defined by schema.org. This machine-readable format is a common standard used across the web for describing things.

Where is Schema.org Used?

Websites

Websites can define structured data in the head section of their HTML to enable search engines to show richer information in their search results. Here is an example of how Google can display extended metadata about your site in its search results.

Google Logo Structured Data Example

Using structured data in html requires the use of a script tag with a MIME type of application/ld+json like so:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "url": "http://www.example.com",
  "name": "Unlimited Ball Bearings Corp.",
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+1-401-555-1212",
    "contactType": "Customer service"
  }
}
</script>

Windows UWP Sharing

Windows UWP apps let you share data using schema.org classes. Here is an example showing how to share metadata about a book.

Enter Schema.NET

Schema.NET is Schema.org objects turned into strongly typed C# POCO classes for use in .NET. All classes can be serialized into JSON/JSON-LD. Here is a simple Schema.NET example that defines the name and URL of a website:

var website = new WebSite()
{
    AlternateName = "An Alternative Name",
    Name = "Your Site Name",
    Url = new Uri("https://example.com")
};
var jsonLd = website.ToString();

The code above outputs the following JSON-LD:

{
    "@context":"http://schema.org",
    "@type":"WebSite",
    "alternateName":"An Alternative Name",
    "name":"Your Site Name",
    "url":"https://example.com"
}

There are dozens more examples based on Google's Structured Data documentation with links to the relevant page in the unit tests of the Schema.NET project.

Classes & Properties

schema.org defines classes and properties, where each property can have a single value or an array of multiple values. Additionally, properties can have multiple types, e.g. an Address property could have a type of string or a type of PostalAddress, which has its own properties such as StreetAddress or PostalCode that break up an address into its constituent parts.

To facilitate this Schema.NET uses some clever C# generics and implicit type conversions so that setting a single or multiple values is possible and that setting a string or PostalAddress is also possible:

// Single string address
var organization = new Organization()
{
    Address = "123 Old Kent Road E10 6RL"
};

// Multiple string addresses
var organization = new Organization()
{
    Address = new List<string>()
    { 
        "123 Old Kent Road E10 6RL",
        "456 Finsbury Park Road SW1 2JS"
    }
};

// Single PostalAddress address
var organization = new Organization()
{
    Address = new PostalAddress()
    {
        StreetAddress = "123 Old Kent Road",
        PostalCode = "E10 6RL"
    }
};

// Multiple PostalAddress addresses
var organization = new Organization()
{
    Address = new List<PostalAddress>()
    {
        new PostalAddress()
        {
            StreetAddress = "123 Old Kent Road",
            PostalCode = "E10 6RL"
        },
        new PostalAddress()
        {
            StreetAddress = "456 Finsbury Park Road",
            PostalCode = "SW1 2JS"
        }
    }
};

This magic is all carried out using the Value<T>, Value<T1, T2>, Value<T1, T2, T3> etc. types. These types are all structs for best performance too.
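
To get a feel for how this works, here is a highly simplified illustration of a struct with implicit conversion operators. This is not Schema.NET's actual implementation, just a sketch of the technique:

public struct SingleOrList<T>
{
    private readonly IReadOnlyList<T> items;

    private SingleOrList(IReadOnlyList<T> items) => this.items = items;

    public IReadOnlyList<T> Items => this.items ?? Array.Empty<T>();

    // Allows: SingleOrList<string> address = "123 Old Kent Road E10 6RL";
    public static implicit operator SingleOrList<T>(T item) =>
        new SingleOrList<T>(new[] { item });

    // Allows: SingleOrList<string> addresses = new List<string>() { ... };
    public static implicit operator SingleOrList<T>(List<T> items) =>
        new SingleOrList<T>(items);
}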

Where to Get It?

Download the Schema.NET NuGet package or take a look at the code on GitHub. At some point I'll find the time to write a quick ASP.NET Core tag helper that wraps Schema.NET.

Keeping Up With Software Development

Keeping up with the changes in the software development industry always feels like a losing battle. Just when I feel like I've started to catch up and learn the things I need to know to do my job (and hobby), a new version of something is released or even worse, something totally revolutionary comes out which means we have to start learning from first principles again.

WebAssembly is one such technology which is a game changer but still at the prototype stage. When it does hit, prepare for the whirlwind. Prepare to throw away most of what you knew and re-learn that which is now deemed the state of the art and the way to do things.

At some point in history it was possible for a single human being to learn all scientific knowledge known to mankind if they could somehow get hold of the information. We've long since surpassed that point with a single technology like .NET, let alone software development in general.

Then you've got the cross-functional developers out there. The ones that try to learn everything required to build a complete application. I try to do this but I have the constant feeling that I'm behind, that I need to catch up. It's impossible to learn everything in any real depth, so you have to skim the surface of some technologies to get by. I wish I knew more about T-SQL, ElasticSearch, Webpack and pretty much all of the myriad of JavaScript frameworks.

I'm not complaining of course, having to learn is what keeps it interesting and what keeps us coming back for more.

The list below is my personal list of RSS feeds that I follow to keep up to date. I've put this list together over several years and use an RSS feed reader called Feedly which keeps track of the articles I've read. Hopefully, it helps you keep up to date.

Link Aggregators

These are sites which aggregate articles from various sources.

  • ASP.NET Community Spotlight (RSS) - Quality blog posts from the ASP.NET community.
  • CSS-Tricks (RSS) - Learn about CSS mainly but also HTML and JavaScript tricks and browser features and differences.
  • The Morning Brew (RSS) - A great list of articles put together by Chris Alcock every day without fail. A great service to the community.

Products

Keep up to date with product announcements and updates.

  • The Visual Studio Blog (RSS) - Various Visual Studio related official announcements.
  • Microsoft Azure Blog (RSS) - Official announcements about new Azure services. The cloud is changing fast, keep up here.
  • Windows Blog (RSS) - Find out about new features in Windows and Windows Phone.
  • SQL Server Blog (RSS) - Keep up to date with the latest SQL Server announcements.
  • ASP.NET Docs (RSS) - Read new articles in the ASP.NET Core docs.
  • Microsoft Power BI Blog (RSS) - If you haven't used Power BI yet, think of it like Excel for your SQL or No-SQL database. You can produce tables and nice visualizations from your raw data.
  • Bootstrap Blog (RSS) - I use Bootstrap to build web UI's. I've looked into Foundation and Material Design but so far I've not changed my preference.
  • Cake (RSS) - If you are not yet using Cake Build to build your code, you're missing out on the simplicity as compared to PowerShell or Bash, plus it runs cross platform.
  • Octopus Deploy (RSS) - This is a great tool to manage the complexity of deploying your application, including versioning, rollbacks, multi-tenancy, secrets, etc.
  • Docker Blog (RSS) - I've recently started to use Docker Swarm, it's still early days but it has a lot of potential.
  • Google Developers (RSS) - Get updates on the Google Chrome browser and various web standards.
  • Project Nami News (RSS) - This blog runs on WordPress using Project Nami, which allows me to use SQL Server.

Online Services

Find out how various online services are changing.

  • The GitHub Blog (RSS) - I use GitHub daily, contributing to open source myself or for looking at source code. Find out what they are doing to improve the site.
  • Cloudflare Blog (RSS) - You should probably be using Cloudflare as your CDN for speed and cost savings, I use it for this site. They post product updates but also about major security vulnerabilities and hacking attempts.
  • The NuGet Team Blog (RSS) - Find out how NuGet is changing, they don't post too often but useful when they do.
  • Schema.org Blog (RSS) - Schema.org defines a set of standard classes and their properties for objects and services in the real world. There are nearly 700 classes at the time of writing defined by schema.org. This machine readable format is a common standard used across the web for describing things. They produce blog posts when they add or change any schemas. I recently wrote about Schema.NET which turns the 700 schemas into .NET POCO classes.

Software Development Individuals

ASP.NET Core

Web Technologies

  • Marius Schulz (RSS) - Has written a great series of blog posts on TypeScript.
  • Addy Osmani (RSS) - Google developer, I think he's on the Chrome team as blog posts feature new web standards built into the Chrome browser.
  • Jake Archibald's Blog (RSS) - A Google developer posting about new web standards and browser differences.

Web Security

Windows

I used to do a lot more Windows development with WPF, Silverlight and UWP etc. I don't really follow anyone from those days anymore except Mike Taulty.

  • Mike Taulty (RSS) - Microsoft evangelist. Writes Windows and Hololens blog posts.

Other

These people don't really fit into any one category but are well worth reading:

Videos

Videos are a great way to learn but they do suck up a lot of time so be selective.

  • NDC Conferences (RSS) - The NDC conferences happen in London, Oslo and Sydney every year. They post way too much content for me to handle. You have to be very selective about what you watch. Talks also get repeated quite a bit at different conferences. I'm still watching videos from the last conference when the next one rolls up.
  • CSS Day (RSS) - CSS Day is an annual conference about CSS, posting a dozen videos a year that are well worth watching.
  • Azure Friday (RSS) - Find out about new features of Azure in these brief introductory videos.

A Very Generic .editorconfig File (Updated)

What is a .editorconfig File?

A .editorconfig file helps developers define and maintain consistent coding styles between different editors and IDEs for files with different file extensions. These configuration files are easily readable and they work nicely with version control systems. A .editorconfig file defines various settings per file extension, such as charsets and tabs vs spaces.

Scott Hanselman recently wrote a blog post about this file. You can also find out more from the official docs at editorconfig.org and the Visual Studio Docs which I recommend you read.

A Very Generic .editorconfig

I wrote a generic .editorconfig file supporting the following file types:

  • C# - .cs, .csx, .cake
  • Visual Basic - .vb
  • Script - .sh, .ps1, .psm1, .bat, .cmd
  • XML - .xml, .config, .props, .targets, .nuspec, .resx, .ruleset
  • JSON - .json, .json5
  • YAML - .yml, .yaml
  • HTML - .htm, .html
  • JavaScript - .js, .ts, .tsx, .vue
  • CSS - .css, .sass, .scss, .less
  • SVG - .svg
  • Markdown - .md
  • Visual Studio - .sln, .csproj, .vbproj, .vcxproj, .vcxproj.filters, .proj, .projitems, .shproj
  • Makefile

Extensive code style settings for C# and VB.NET have been defined that require the latest C# features to be used. In addition, it sets various more advanced C# style settings. All C# related code styles are consistent with StyleCop's default styles. You can find out more about the C# code style settings from the official docs and also in Kent Boogaart's blog post.

How do I use It?

All you have to do is drop it into the root of your project. Then any time you open a file in Visual Studio, the .editorconfig file settings will be used to help format the document and also raise warnings if your code style and formatting does not conform.

For Visual Studio Code, you can install the EditorConfig for VS Code extension to get support.

Exciting July 2018 Update

I noticed that Microsoft silently released several new C# code style settings. I'm not sure when they were released but they're available in the current Visual Studio 15.7 update. The majority of them are to enforce the use of newer C# 7.3 syntax. I updated my generic .editorconfig file to add these new settings with C# 7.3 as the default.

Microsoft also updated their documentation for .editorconfig settings pertaining to .NET, so I added links to the docs site, making it easy to see what each setting does and change it if it's not to your liking. I've also included a dozen undocumented settings. There is an open issue on GitHub to get them documented, so it's easy to see what they do.

In addition, while I was working on this, I added support for a few more file extensions, including yaml (yml was already there), json5 (if you haven't heard of json5, check it out), cmd and bat (if you haven't switched to PowerShell yet, what are you waiting for?).

Finally, Microsoft announced last week that the Visual Studio 15.8 update, which is currently being released as preview 3, will automatically fix errors when you format the document using the ||CTRL+K|| followed by ||CTRL+D|| shortcut. This is huge! It means that you can drop a .editorconfig file into an existing codebase and, with a few clicks or keyboard shortcuts (if that's how you roll), clean up your code base to use the latest C# 7.3 features and a code style that suits you.

ASP.NET Core Caching in Practice

Cache-Control HTTP Header

The Cache-Control HTTP header can be used to set how long your resource can be cached for. However, the problem with this HTTP header is that you need to be able to predict the future and know beforehand when the cache will become invalid. For some use cases, like writing an API where someone could change the resource at any time, that's just not feasible.

I recommend you read the response caching middleware documentation; it's not strictly necessary, as I give a quick overview next, but the knowledge below builds upon it. The simple way to set the Cache-Control header is directly on the action method like so:

[HttpGet, ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]
public IActionResult GetCats()

Adding the ResponseCache attribute just adds the Cache-Control HTTP header but does not actually cache the response on the server. To do that you also need to add the response caching middleware like so:

public void Configure(IApplicationBuilder application) =>
    application.UseResponseCaching().UseMvc();

Instead of hard coding all of your cache settings in the ResponseCache attribute, it's possible to store them in the appsettings.json configuration file. To do so, you need to use a feature called cache profiles which look like this:

[HttpGet, ResponseCache(CacheProfile="Cache1Hour")]
public IActionResult GetCats()

public class Startup
{    
    private readonly IConfiguration configuration;

    public Startup(IConfiguration configuration) => this.configuration = configuration;

    public void ConfigureServices(IServiceCollection services) =>
        services
            .Configure<Dictionary<string, CacheProfile>>(this.configuration.GetSection("CacheProfiles"))
            .AddMvc(options =>
            {
                // Read cache profiles from appsettings.json configuration file
                var cacheProfiles = this.configuration.GetSection("CacheProfiles").Get<Dictionary<string, CacheProfile>>();
                foreach (var keyValuePair in cacheProfiles)
                {
                    options.CacheProfiles.Add(keyValuePair);
                }
            });

    // Omitted
}
{
  "CacheProfiles": {
    "Cache1Hour": {
      "Duration": 3600,
     "Location": "Any"
    }
  },
  // Omitted...
}

Now all your caching can be configured from a single configuration file.

Cache-Control Immutable Directive

Cache-Control also has a new draft directive called immutable. When you add this to the HTTP header value, you are basically telling the client that this resource will never change, so there is no need to revalidate it while the cached copy is still fresh. You might be asking, why do we need this? Well, it turns out that when you refresh a page in a browser, it goes off to the server and revalidates each resource, even if the cached copy has not yet expired.

Cache-Control: max-age=365000000, immutable

It turns out that you get a massive reduction in requests to your server by implementing this directive. Read more about it in these links:

This directive has not yet been implemented in ASP.NET Core, but I've raised an issue on GitHub asking for it and there is another issue to add the immutable directive to the static files middleware. If you really wanted to, it's easy to add this directive today; you just need to append the word immutable onto the end of your Cache-Control HTTP header.
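For example, one way to do that today is via the static files middleware's OnPrepareResponse callback; this is just a sketch and the max-age value is arbitrary:

application.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = context =>
    {
        // Cache static assets for a year and mark them as immutable.
        context.Context.Response.Headers["Cache-Control"] = "max-age=31536000, immutable";
    }
});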

A word of warning! You need to make sure that your resource really never changes. You can do this in Razor by using the asp-append-version attribute on your script tags:

<script src="~/site.js" asp-append-version="true"></script>

This will append a query string to the link to site.js which will contain a hash of the contents of the file. Each time the file changes, the hash is changed and thus you can safely mark the resource as immutable.
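The rendered HTML then ends up looking something like this (the hash value here is made up):

<script src="/site.js?v=Kl_dqr9NVtnMdsM2MUgdeJDWk5eMI3nNq0uVvT8WPY"></script>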

E-Tags

E-tags are typically generated in three ways (Read the link to understand what they are):

  1. Hashing the HTTP response body - You'd want to use a very fast and collision resistant hash function like MD5 (MD5 is broken security wise and you should never use it for security purposes, but it's ok to use it for caching). Unfortunately, this method is slow because you have to load the entire response body into memory to hash it (which is not the default in ASP.NET Core, which streams it straight to the client for better performance). If you're still interested in implementing E-Tags using this method, Mads Kristensen wrote a nice blog post showing how it can be done.
  2. Last modification timestamp - The E-Tag can literally be the time the object was last modified, which you can store in your database (I usually store created and modified timestamps for anything I store in a database anyway). This solves the performance problem above, but then what is the difference between doing this and using the Last-Modified HTTP header?
  3. Revision Number - This could be some kind of integer stored in the database which gets incremented each time the data is modified. I don't see any advantage of doing this over using the last modification timestamp above, unless you have a naturally occurring revision number in your data that you could use.

One additional thing you need to be careful of is the Accept, Accept-Encoding and Accept-Language HTTP headers. Any time you send a different response based on these HTTP headers, your E-Tag needs to be different e.g. a JSON non-gzip'ed response in Mandarin needs to have a different E-Tag to an XML gzip'ed response in Urdu.

For option one, this can be achieved by calculating the hash after the response body has gone through GZIP compression. For the second and third options, you would need to append the value of the Accept HTTP headers to the last modified date or revision number and then hash all of that.
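As a rough sketch of options two and three combined with that advice, you could build the E-Tag from the last modified timestamp plus the relevant Accept headers and hash the result. The helper below is illustrative and not part of the original post's code:

private static string GetETag(DateTimeOffset lastModified, HttpRequest request)
{
    // Vary the E-Tag by anything that changes the representation of the resource.
    var value = string.Join(
        "|",
        lastModified.ToString("o"),
        request.Headers["Accept"].ToString(),
        request.Headers["Accept-Encoding"].ToString(),
        request.Headers["Accept-Language"].ToString());

    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        var hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(value));
        // E-Tag values must be quoted strings.
        return "\"" + Convert.ToBase64String(hash) + "\"";
    }
}

You would then compare the incoming If-None-Match header against this value and return a 304 Not Modified when they match.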

Last-Modified & If-Modified-Since

I'm assuming you already know about the Last-Modified and If-Modified-Since HTTP headers. If not, go ahead and read the links. Below is an example controller and action method that returns a list of cats.

[Route("[controller]")]
public class CatsController : ControllerBase
{
    private readonly ICatRepository catRepository;
    private readonly ICatMapper catMapper;

    public CatsController(
        ICatRepository catRepository,
        ICatMapper catMapper)
    {
        this.catRepository = catRepository;
        this.catMapper = catMapper;
    }

    [HttpGet("")]
    public async Task<IActionResult> GetCats(CancellationToken cancellationToken)
    {
        var cats = await this.catRepository.GetAll(cancellationToken);
        var lastModified = cats.Count == 0 ? 
            (DateTimeOffset?)null : 
            cats.Max(x => x.ModifiedTimestamp);

        this.Response.GetTypedHeaders().LastModified = lastModified;

        var requestHeaders = this.Request.GetTypedHeaders();
        if (requestHeaders.IfModifiedSince.HasValue &&
            requestHeaders.IfModifiedSince.Value >= lastModified)
        {
            return this.StatusCode(StatusCodes.Status304NotModified);
        }

        var catViewModels = this.catMapper.MapList(cats);
        return this.Ok(catViewModels);
    }
}

All of our cats have a ModifiedTimestamp, so we know when they were last changed. There are four scenarios that this action method handles:

  1. Our repository does not contain any cats, so just always return an empty list.
  2. No If-Modified-Since HTTP header exists in the request, so we just return all cats.
  3. An If-Modified-Since HTTP header exists and cats have been modified since that date, so we return all cats.
  4. An If-Modified-Since HTTP header exists but no cats have been modified since that date, so we return a 304 Not Modified response.

In all cases, except when we have no cats at all, we set the Last-Modified date to the latest date that any cat was modified.

Conclusions

Which caching HTTP headers you pick depends on your data, but at a minimum I would add E-Tags or Last-Modified. Add Cache-Control where possible, usually for static assets.


Docker Read-Only File Systems


For a little bit of added security you can make the file system of your container read-only, excluding any volumes you may have created. If anyone hacks into your container, they will be unable to change any files.

Docker Run

When starting a container with the docker run command from the CLI, you can simply use the following command:

docker run --read-only redis

Docker Compose/Swarm

To set a read-only file system, you simply need to set the read_only flag to true, like so:

version: '3.3'

services:
  redis:
    image: redis:4.0.1-alpine
    networks:
      - myoverlay
    read_only: true

networks:
  myoverlay:

So above, I have a Docker stack file for use with Docker Swarm showing how to start Redis with a read-only file system.

What is Supported?

Not all images support being started with a read-only file system. Some need to write temporary files and the like. You can usually get away with using a volume in this case, because volumes are still writeable even when the read-only file system is enabled. In my research, I found it hard to determine whether an image supported the feature, so I simply tried it out and found that most failed.
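For example, you can keep the root file system read-only while still giving the container somewhere to write by mounting a volume or a tmpfs (the paths and volume name here are just illustrative):

docker run --read-only --tmpfs /tmp --volume redis-data:/data redis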

I discovered that Redis was the only image I was running that had full support; several Elastic Stack containers failed to start and even my ASP.NET Core images failed to start. I have since raised a GitHub issue, trying to find out why the container fails to start and whether there is any workaround.

Docker Labels in Depth


Static Docker Labels

Docker image names are short and usually not very descriptive. You have the ability to label your Docker images to give them some extra metadata. You can add any information you like; labels are just key-value pairs. Here I've added author and company labels to my Dockerfile:

FROM microsoft/aspnetcore:2.0
LABEL "author"="Muhammad Rehan Saeed"
LABEL "company"="Acme Co."
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Bridge.Turtle.dll"]

Note that prior to Docker 1.10, it was recommended to combine all labels into a single LABEL instruction, to prevent extra layers from being created. This is no longer necessary, but combining labels is still supported.

Dynamic Docker Labels

This is great for static data like the author but not so great for dynamic data like an automated build number or git changeset number that you might want to use. That way you'll know exactly which build built the image and the source code it was built from. This can be valuable information when you're in a pickle with a production issue.

To add dynamic labels you can pass them from the command line when you run the docker build command like so:

docker image build --tag foo:1.0.0 --label "build"="123" --label "changeset"="0d9c7d3b77817caab3977b16d1d76bb3eb024837" .
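You can then check which labels ended up on the image with docker inspect:

docker image inspect --format '{{ json .Config.Labels }}' foo:1.0.0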

Open Containers Annotations Spec

The Open Containers Initiative (OCI) is a standards body defining open standards for container formats and runtimes. They've already defined a standard set of labels (they call them annotations) for you to use in your Docker images:

  • org.opencontainers.image.created - date and time on which the image was built (string, date-time as defined by RFC 3339).
  • org.opencontainers.image.authors - Contact details of the people or organization responsible for the image (free-form string).
  • org.opencontainers.image.url - URL to find more information on the image (string).
  • org.opencontainers.image.documentation - URL to get documentation on the image (string).
  • org.opencontainers.image.source - URL to get source code for building the image (string).
  • org.opencontainers.image.version - Version of the packaged software.
  • org.opencontainers.image.revision - Source control revision identifier for the packaged software.
  • org.opencontainers.image.vendor - Name of the distributing entity, organization or individual.
  • org.opencontainers.image.licenses - License(s) under which contained software is distributed as an SPDX License Expression.
  • org.opencontainers.image.ref.name - Name of the reference for a target (string).
  • org.opencontainers.image.title - Human-readable title of the image (string).
  • org.opencontainers.image.description - Human-readable description of the software packaged in the image (string).
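For illustration, a Dockerfile using a few of these annotations might look like this (the values, including the source URL, are placeholders):

FROM microsoft/aspnetcore:2.0
LABEL "org.opencontainers.image.title"="Bridge.Turtle" \
      "org.opencontainers.image.authors"="Muhammad Rehan Saeed" \
      "org.opencontainers.image.source"="https://github.com/example/repo" \
      "org.opencontainers.image.version"="1.0.0"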

Naming Conventions

The labels in the Open Containers Annotations specification, and a few others I've seen, use a kind of dot-separated namespace. The official Docker documentation suggests that this is only required if your image is a "third party tool", which I think means if the image will ever be used as a base for another image:

  • Authors of third-party tools should prefix each label key with the reverse DNS notation of a domain they own, such as com.example.some-label.
  • Do not use a domain in your label key without the domain owner's permission.
  • The com.docker.*, io.docker.*, and org.dockerproject.* namespaces are reserved by Docker for internal use.
  • Label keys should begin and end with a lower-case letter and should only contain lower-case alphanumeric characters, the period character (.), and the hyphen character (-). Consecutive periods or hyphens are not allowed.
  • The period character (.) separates namespace "fields". Label keys without namespaces are reserved for CLI use, allowing users of the CLI to interactively label Docker objects using shorter typing-friendly strings.

For any other images, you can just use simple single word labels or at least, that's what I'm doing.

Useful Docker Images - Part 1

  1. Useful Docker Images - Part 1 - Administering Docker
  2. Useful Docker Images - Part 2 - The ELK-B Stack

I have been running Docker Swarm in production for a few APIs and single-page applications for a couple of months now. Here are some Docker images I've found generally useful; most of them are not specific to Docker Swarm. For each image, I'm also going to show a docker-stack.yml file that you can use to deploy the image, along with the settings I use for it. To deploy a Docker stack file, just run the following commands:

# To enable Docker Swarm mode on your local machine if you haven't already.
docker swarm init
# To deploy a Docker stack file to your Swarm (the final argument is the name of the stack).
docker stack deploy --compose-file docker-stack.yml my-stack

Docker Swarm Visualizer

The Docker Swarm Visualizer image connects to the Docker socket and shows a really nice visualization of all of the nodes in your Docker cluster (or just the one on your development machine) and all of the containers running on them.

Docker Visualizer

A word of warning about using this image: it has full, unimpeded access to your Docker socket, which lets it do basically anything that Docker can do (and that's a lot). This image is useful for development and testing purposes. If you want to use it in production, don't expose it to the internet; only run it on your local network, and only if you trust the users on that network. You don't want your Docker Swarm turning into a Bitcoin mining farm. Here is a Docker stack file you can use to deploy this image:

version: '3.3'

services: 
  visualizer:
    image: dockersamples/visualizer
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints: [node.role == manager]
      resources:
        limits:
          cpus: '0.1'
          memory: 100M
    networks:
      - visualizeroverlay
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"

networks:
  visualizeroverlay:

The container has to run on a manager node, so I've added that constraint and also added access to the Docker socket using a volume mount. I've also limited the resources the container can consume. Finally, I've given the service its own dedicated overlay network, so it can't talk to my other containers.

Portainer

Portainer is a free and open source Docker image you can use to administer your Docker cluster. It has full support for standalone Docker and Docker Swarm. It lets you do everything from seeing what's running on your nodes and starting containers, to viewing logs and shelling into your running Docker containers. I find the last two particularly useful.

Portainer

Portainer also has a visualization similar to the Visualizer image I spoke about earlier, but it's not nearly as nice and is buried in a few sub-menus, which is why I prefer Visualizer. Portainer is basically competing with Docker Enterprise Edition (EE), which is a seriously expensive piece of kit, while Portainer is totally free!

Portainer has user and team management built into it, so it's not wide open to the internet if you expose a port. Interestingly, Portainer also exposes an API. It's a possibility I haven't explored yet, but you could use said API to deploy your Docker applications from your CI/CD process. Here is a Docker stack file you can use to deploy this image:

version: '3.3'

services: 
  portainer:
    image: portainer/portainer
    command: --host unix:///var/run/docker.sock
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "9000:9000"
    networks:
      - portaineroverlay
    volumes:
      - portainer:/data
      - "/var/run/docker.sock:/var/run/docker.sock"

networks:
  portaineroverlay:

Once again, we are binding the image to the Docker socket using a volume mount, but we also give Portainer another volume to store its data. We also set a constraint, so that the container runs on a manager node.

Sonatype Nexus

Sonatype Nexus (https://hub.docker.com/r/sonatype/nexus3/) is an open source repository manager that can be used as a private Docker registry to store your images. In fact, it can also be used as a repository for NuGet, Maven, Ruby and NPM packages. It's pretty powerful stuff and has user management built in too.

Sonatype Nexus

version: '3.3'

services:
  nexus:
    image: sonatype/nexus3:3.6.1
    deploy:
      resources:
        reservations:
          cpus: '2'
          memory: 4GB
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost/service/metrics/healthcheck"]
      interval: 60s
      timeout: 5s
      retries: 3
    ports:
      - "8081:8081"
      - "8082:8082"
      - "8083:8083"
    networks:
      - nexusoverlay
    volumes:
      - artefacts:/nexus-data

networks:
  nexusoverlay:

volumes:
  artefacts:

Sonatype Nexus has some pretty hefty minimum system requirements, so I've reserved the necessary CPU and memory. I've added three ports to support HTTP, HTTPS and a third port for my Docker registry, which you can configure in the admin menu when you add a Docker registry. Thankfully it's just a matter of a few clicks to set up, and here are my registry settings:

Sonatype Nexus Administration
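Once the registry is listening on one of those ports, pushing an image to it looks something like this (the hostname is hypothetical):

docker login nexus.example.com:8083
docker tag foo:1.0.0 nexus.example.com:8083/foo:1.0.0
docker push nexus.example.com:8083/foo:1.0.0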

I have also gone to the effort of setting up a health check. Health checks are a wonderful feature of Docker; the container will not start and join the network until the health check has succeeded. This has stopped failed production releases of my ASP.NET Core apps in the past. Use health checks, people!
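If you prefer, the equivalent check can also be baked into the image itself with the Dockerfile HEALTHCHECK instruction; a minimal sketch mirroring the compose example above:

HEALTHCHECK --interval=60s --timeout=5s --retries=3 \
  CMD curl --fail http://localhost/service/metrics/healthcheck || exit 1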

Conclusions

This blog post is getting a bit long, so I'll split it into two pieces. In the next part, expect to hear about how you can use the ELK-B stack, which is made up of a few bits of software: ElasticSearch, Kibana, Filebeat, Metricbeat and Heartbeat. Also, be sure to read Andrew Lock's piece on Rancher, which is a bit like Portainer. I'd never heard of Rancher before; it'll be interesting to do a comparison.

Useful Docker Images - Part 2

  1. Useful Docker Images - Part 1 - Administering Docker
  2. Useful Docker Images - Part 2 - The ELK-B Stack

Filebeat, Metricbeat & Heartbeat

Knowing what is happening in Docker and in your applications running on Docker is critical. To collect logs from my Swarm and monitor the health of it, I use the ELK-B stack which is made up of four pieces of software called ElasticSearch, LogStash (I recommend that you use Beats instead of LogStash), Kibana and various Beats.

ElasticSearch is basically a NoSQL database that is geared towards storing JSON documents and searching across them. Kibana is a visualization tool that gives you a nice UI to view all of your data and produce nice visualizations and dashboards. There are several Beats, which are used to ship data into ElasticSearch from various sources.

Kibana

While you could use Docker to host ElasticSearch and Kibana, I use Elastic Cloud at work; you could also use the instances hosted by AWS and Azure. Using a hosted version takes some of the pain out of maintaining ElasticSearch. I had a look at the ElasticSearch Docker container, and if you really want to go down the Docker route and create an ElasticSearch cluster, it looks fairly straightforward, if a bit unorthodox. There is a cost versus effort trade-off in this decision and it's up to you where you decide to go.

In terms of Beats, I use three of them which I'll talk about below:

Filebeat

Filebeat is a tool used to ship Docker log files to ElasticSearch. The latest version 6.0 queries Docker APIs and enriches these logs with the container name, image, labels, and so on which is a great feature, because you can then filter and search your logs by these properties. You can then view these logs in a fully customizable Kibana dashboard. Filebeat ships with a sample Kibana dashboard that looks like this:

Filebeat Kibana Dashboard

As well as shipping Docker logs, I write the logs from my ASP.NET Core applications to disk (The best way to make sure you never lose log information) and then use Filebeat to ship these log files to ElasticSearch.

The Dockerfile below is used to add Filebeat configuration files to the base Filebeat image and nothing more. The configuration files are pretty lengthy and heavily commented so I've omitted them:

FROM docker.elastic.co/beats/filebeat:6.0.0
COPY filebeat.yml filebeat.template.json /usr/share/filebeat/
USER root
RUN chown filebeat /usr/share/filebeat/filebeat.yml && \
    chown filebeat /usr/share/filebeat/filebeat.template.json && \
    chmod go-w /usr/share/filebeat/filebeat.yml && \
    chmod go-w /usr/share/filebeat/filebeat.template.json
USER filebeat

In the Docker stack file below, I set up a shared volume called 'logs' in which my website container stores all of its log files. My custom Filebeat image then picks up logs from the 'logs' volume and pushes them to ElasticSearch. Filebeat is also configured so that one instance of the container runs on every Docker node, so that it can pick up Docker logs from every node in my Swarm.

version: '3.3'

services:
  filebeat:
    image: my-custom-filebeat-image:latest
    deploy:
      mode: global # One docker container per node
    networks:
      - defaultoverlay
    volumes:
      - logs:/var/log/my-company-name

  website-name:
    image: website-name:latest
    ports:
      - "5000:80"
    networks:
      - defaultoverlay
    volumes:
      - logs:/var/log/my-company-name

networks:
  defaultoverlay:

volumes:
  logs:

Metricbeat

Metricbeat can be used to monitor the CPU, memory and disk usage on your Docker nodes and then ship those logs to your ElasticSearch cluster. Once again Metricbeat ships with a sample Kibana dashboard that looks like this:

Metricbeat Kibana Dashboard

Here is an example of a custom Metricbeat Dockerfile which I use to configure Metricbeat:

FROM docker.elastic.co/beats/metricbeat:6.0.0
COPY metricbeat.yml metricbeat.template.json /usr/share/metricbeat/
USER root
RUN chown metricbeat /usr/share/metricbeat/metricbeat.yml && \
    chown metricbeat /usr/share/metricbeat/metricbeat.template.json && \
    chmod go-w /usr/share/metricbeat/metricbeat.yml && \
    chmod go-w /usr/share/metricbeat/metricbeat.template.json
USER metricbeat

And here is the Docker stack file below. Once again it configures one instance of Metricbeat to run on each Docker node. It also needs access to the Docker socket and a bunch of other folders on the Docker node, so that explains all of the volume mounts.

version: '3.3'

services: 
  metricbeat:
    image: my-custom-metricbeat-image:latest
    command: metricbeat -e -system.hostfs=/hostfs
    deploy:
      mode: global # One docker container per node
    networks:
      - defaultoverlay
    volumes:
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
      - /var/run/docker.sock:/var/run/docker.sock

networks:
  defaultoverlay:

Heartbeat

Heartbeat is a ping monitor that can be pointed at any status endpoints in your API's or websites. Failures get logged in ElasticSearch which show up in a nice graph. You can also use Kibana to set up alerts, so you can be notified of any downtime. Here is an example of what a Kibana Dashboard containing Heartbeat data looks like:

Heartbeat Kibana Dashboard

The Dockerfile is similar to the other Beats:

FROM docker.elastic.co/beats/heartbeat:6.0.0
COPY heartbeat.yml heartbeat.template.json /usr/share/heartbeat/
USER root
RUN chown heartbeat /usr/share/heartbeat/heartbeat.yml && \
    chown heartbeat /usr/share/heartbeat/heartbeat.template.json && \
    chmod go-w /usr/share/heartbeat/heartbeat.yml && \
    chmod go-w /usr/share/heartbeat/heartbeat.template.json
USER heartbeat
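The heartbeat.yml itself is omitted like the other Beats configuration files, but a minimal sketch of what one might look like for Heartbeat 6.0, assuming a hosted ElasticSearch endpoint (the URLs, schedule and credentials are placeholders):

heartbeat.monitors:
- type: http
  urls: ["https://example.com/status"]
  schedule: "@every 30s"

output.elasticsearch:
  hosts: ["https://my-cluster.example.com:9243"]
  username: "elastic"
  password: "changeme"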

This Docker stack file is extremely simple. There is only one instance of the image required.

version: '3.3'

services: 
  heartbeat:
    image: my-custom-heartbeat-image:latest
    networks:
      - defaultoverlay

networks:
  defaultoverlay:

Ping monitors on the internet are super expensive for what they are, because they send pings from various locations around the Earth. Heartbeat will not do that, so be aware of this difference. That said, there is nothing I can do if the internet pipe to Australia goes down, so in my opinion, Heartbeat reduces a lot of false positives.

Conclusions

I have discovered that I have enough material for a third and final part to this series of blog posts. In the next part, you can expect to learn more about Redis and Metabase Docker images.

Writing your Webpack configuration in TypeScript


Webpack Configuration is a Mess

Before you get the wrong idea, let me say that Webpack is super powerful; it's what you probably should be using these days to deal with static assets and I like its power...

However, Webpack configuration files are write-once software; they are a mess, a complete and utter mess. There, I said it. Webpack has a steep learning curve and plenty of magic. What's worse is that Webpack makes it intentionally harder than it needs to be. If you look online at examples, even in the Webpack docs themselves, you'll see a dozen examples that look completely different. This is for two reasons:

  1. In its wisdom, Webpack decided it was a good idea to provide lots of different ways to configure it. There are four, yes four different ways to configure a rule and another four to configure a loader.
  2. When Webpack 2 came out, they significantly changed the configuration syntax. They didn't want to make breaking changes, so they supported Webpack 1 and 2 syntax together.

What you end up with are eight ways to configure a rule or loader, which is insane.
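To give a flavour of the problem, here are several ways of writing what is essentially the same rule; the first two are Webpack 1 style and the last two are Webpack 2 style (written from memory, so treat them as illustrative rather than exhaustive):

// Webpack 1: a 'loader' string with '!' chaining, under module.loaders.
{ test: /\.css$/, loader: "style-loader!css-loader" }
// Webpack 1: a 'loaders' array, also under module.loaders.
{ test: /\.css$/, loaders: ["style-loader", "css-loader"] }
// Webpack 2: a single 'loader' string with separate 'options', under module.rules.
{ test: /\.css$/, loader: "css-loader", options: { minimize: true } }
// Webpack 2: 'use' with an array of loader names or loader objects, under module.rules.
{ test: /\.css$/, use: ["style-loader", { loader: "css-loader", options: { minimize: true } }] }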

ParcelJS

I'm going off on a tangent here but there is a new module bundler in development called ParcelJS which has support for JavaScript, HTML, CSS, SASS and images built in from the start with zero configuration! Adding Babel, TypeScript or Autoprefixer is also much easier with no need to configure Parcel to work with them.

Unfortunately, it's not ready for prime time yet as it is lacking support for source maps, multiple entry points, code splitting and Vue components. I have high hopes for ParcelJS in the future!

Why TypeScript?

Happily, as I'll show below, TypeScript can help you to completely avoid Webpack 1 syntax. Secondly, if you're already writing your application using TypeScript, then often the only JavaScript files left in your project are the Webpack configuration files. Converting the Webpack configuration to TypeScript removes the need to switch context between languages.

How is it done?

It turns out that Webpack supports the use of TypeScript itself. However, the supported method requires you to add a couple of NPM packages as a dependency and you will not be able to use ES 2015 module syntax in your configuration file because it's not supported.

In my opinion, a much simpler and cleaner way is to use the TypeScript tsc command line tool to transpile TypeScript to JavaScript before running Webpack. You could add this command as a simple NPM script in your package.json file. Here are the commands you need to use:

tsc --lib es6 webpack.config.ts
webpack --config webpack.config.js
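For example, wired up as an NPM script in package.json (the script name is up to you):

"scripts": {
  "build": "tsc --lib es6 webpack.config.ts && webpack --config webpack.config.js"
}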

Webpack does not come with TypeScript typings, so you'll also need to install the @types/webpack NPM package. Finally, to exclude all Webpack 1 syntax, you can create some new types that extend the Webpack types but omit the Webpack 1 members. I stuck all of these typings in a webpack.common.ts file:

import * as webpack from "webpack";

// Remove the Old Webpack 1 types to ensure that we are only using Webpack 2 syntax.

export type INewLoader = string | webpack.NewLoader;

export interface INewLoaderRule extends webpack.NewLoaderRule {
  loader: INewLoader;
  oneOf?: INewRule[];
  rules?: INewRule[];
}

export interface INewUseRule extends webpack.NewUseRule {
  oneOf?: INewRule[];
  rules?: INewRule[];
  use: INewLoader | INewLoader[];
}

export interface INewRulesRule extends webpack.RulesRule {
  oneOf?: INewRule[];
  rules: INewRule[];
}

export interface INewOneOfRule extends webpack.OneOfRule {
  oneOf: INewRule[];
  rules?: INewRule[];
}

export type INewRule = INewLoaderRule | INewUseRule | INewRulesRule | INewOneOfRule;

export interface INewModule extends webpack.NewModule {
  rules: INewRule[];
}

export interface INewConfiguration extends webpack.Configuration {
  module?: INewModule;
}

export interface IArguments {
  prod: boolean;
}

export type INewConfigurationBuilder = (env: IArguments) => INewConfiguration;

You can then use these types in your Webpack configuration:

import * as path from "path";
import * as webpack from "webpack";
import { INewConfiguration } from "./webpack.common";

const configuration: INewConfiguration = {
  // ...
};
export default configuration;

Or you can also pass arguments to your webpack configuration file like so:

import * as path from "path";
import * as webpack from "webpack";
import { IArguments, INewConfiguration, INewConfigurationBuilder } from "./webpack.common";

const configurationBuilder: INewConfigurationBuilder = 
  (env: IArguments): INewConfiguration => {
    const isDevBuild = !(env && env.prod);
    const configuration: INewConfiguration = {
      // ...
    };
    return configuration;
  };
export default configurationBuilder;

In this case, you can pass arguments to the webpack configuration file like so:

> webpack --env.prod

Conclusion

I think most people have huge trouble getting to grips with Webpack. Once you understand that there are so many ways to supply the same config, and how to translate between them, the learning curve gets shallower. You will be able to translate all of the examples you see online that inevitably use a different syntax to yours, so you can 'borrow' their code (it's what us software developers do for much of the day) and get stuff done.
