
Azure SQL with AAD Authorization via App Service MSI

Introduction

This article contains all the information you might need to use an Azure SQL database through Entity Framework from within an App Service that has a managed service identity (MSI) configured, where the MSI is used to authenticate with the Azure SQL database, all set up using Azure Pipelines CI/CD. There is some information out there on all the different parts, but it turned out that a lot of investigation was needed to get everything working together smoothly.

The basics

Read the tutorial created by Microsoft to see what it takes to do most of what we want manually, here:

Tutorial: Secure Azure SQL Database connection from App Service using a managed identity

This is what we want to perform automatically in the deployment pipeline. There are also some other tweaks you might need in your situation, and these are described at the end. We are going to cover the following parts:

  • Configure Azure SQL via an ARM template.
  • Configure an App Service with a managed service identity (MSI).
  • Add the MSI as a user to the database.
  • Use the MSI to connect to the database.
  • Further tips.

We will assume you have a basic understanding of ARM templates and Azure DevOps YAML pipelines throughout this article.

Configure Azure SQL via an ARM template

You can find the ARM template reference for SQL Servers and SQL Databases here.

We will split the ARM template up into a few separate snippets: one for creating the server and two for creating inner resources (a database and an AAD administrator configuration).

SQL Server

The server snippet creates the server and configures a regular administrative login. A regular administrator login is still mandatory (and it is not an AAD login). It is probably best to generate your SQL admin password during your deployment and write it to a key vault for safekeeping; a small illustrative sketch of that follows after the snippet below. We will use this account later on when adding our AAD accounts as users to the database.

{
    "type": "Microsoft.Sql/servers",
    "kind": "v12.0",
    "name": "[parameters('SqlServerName')]",
    "apiVersion": "2019-06-01-preview",
    "location": "[resourceGroup().location]",
    "properties": {
        "administratorLogin": "[parameters('SqlServerAdminName')]",
        "administratorLoginPassword": "[parameters('SqlServerAdminPassword')]",
        "minimalTlsVersion": "1.2",
        "version": "12.0"
    },
    "resources": [
        // inner resources go here (also remove this comment).
    ]
}
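
In a pipeline you would typically generate the admin password and store it with a script task, but purely as an illustration, here is a hedged C# sketch of the same idea. The vault URL and secret name are placeholders, and it assumes the Azure.Identity and Azure.Security.KeyVault.Secrets packages:

using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SqlAdminPasswordBootstrap
{
    public static void Run()
    {
        // 32 random bytes; the Base64 form virtually always satisfies
        // SQL Server's complexity rules (upper case, lower case, digits).
        var bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(bytes);
        }
        var password = Convert.ToBase64String(bytes);

        // Write the generated password to the key vault for safekeeping.
        var client = new SecretClient(
            new Uri("https://your-key-vault.vault.azure.net/"), // placeholder
            new DefaultAzureCredential());
        client.SetSecret("SqlServerAdminPassword", password);
    }
}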

Database

The following creates a database using the old-style, DTU-based resource configuration.

{
    "type": "databases",
    "name": "[parameters('SqlDatabaseName')]",
    "location": "[resourceGroup().location]",
    "apiVersion": "2014-04-01",
    "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('SqlServerName'))]"
    ],
    "properties": {
        "collation": "[parameters('SqlDatabaseCollation')]",
        "edition": "[parameters('SqlDatabaseEdition')]",
        "requestedServiceObjectiveName": "[parameters('SqlDatabaseServiceObjective')]"
    }
}

Note: the API version is set to a relatively old one. The newest one has a different syntax and by default creates vCore-based databases.

ARM based AAD administrator

Though not strictly necessary for adding AAD accounts later on at the database level, you can add one AAD account as server administrator via the ARM template with the following resource:

{
    "type": "administrators",
    "name": "activeDirectory",
    "apiVersion": "2019-06-01-preview",
    "location": "[resourceGroup().location]",
    "properties": {
        "administratorType": "ActiveDirectory",
        "login": "[parameters('AADAdminLogin')]",
        "sid": "[parameters('AADAdminSid')]",
        "tenantId": "[parameters('AADAdminTenantId')]"
    },
    "dependsOn": [
        "[concat('Microsoft.Sql/servers/', parameters('SqlServerName'))]"
    ]
}

Configure an App Service with a managed service identity (MSI)

Now that our database is all in order it is time to configure our App Service with an MSI in its ARM template:

{
    "apiVersion": "2015-08-01",
    "name": "[parameters('WebAppName')]",
    "type": "Microsoft.Web/sites",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    }
    ...
}

We will need some information regarding the MSI created for our web application later on in our deployment pipeline, so we will also add the following ARM output parameters:

"outputs": {
    "ManagedServiceIdentityPrincipalId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webAppName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').principalId]"
    },
    "ManagedServiceIdentityClientId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webAppName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').clientId]"
    }
}

NOTE: Another way to reference this info is described here but I have not tried it myself.

With your ARM template updated, you might need to add some tasks to your pipeline if you didn’t have any output parameters before. We are using YAML, and the following steps take care of the ARM template deployment and the retrieval of the output parameters afterwards:

- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy ARM template'
  inputs:
    azureSubscription: 'Your service connection name here'
    resourceGroupName: '$(resourceGroupName)'
    location: '$(location)'
    csmFile: 'ARMTemplate.json'
    csmParametersFile: 'ARMTemplate.parameters.json'

- task: keesschollaart.arm-outputs.arm-outputs.ARM Outputs@5
  displayName: 'Retrieve ARM Outputs'
  inputs:
    ConnectedServiceNameARM: 'Your service connection name here'
    resourceGroupName: $(resourceGroupName)

NOTE: The keesschollaart.arm-outputs.arm-outputs.ARM Outputs@5 task is found in this extension.

Add the MSI as a user to the database

Now comes the tricky part: actually giving the MSI access to the database. According to the original tutorial, we would need to execute the following piece of SQL to add the user (where <identity-name> is the name of the web application, since system-assigned MSIs have the same name as their parent web application):

CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
GO

If you were to try to perform this in the pipeline with a SQLCMD operation, using the server admin username and password configured in the SQL Server creation step, you would discover a sneaky caveat: you can only add AAD users with this syntax if you are logged in to the database as an AAD user yourself. You could only accomplish that if you had made the Azure Pipelines service connection’s service principal the AAD administrator of the SQL Server in the first step, and then somehow retrieved its client ID and secret and supplied those to SQLCMD (if that would even work; I haven’t tried). We need another way, and I wasn’t the first to think so; someone already asked here:

Is there any way to add managed identity as db user from pipelines?

So our SQL script will now read:

CREATE USER [<identity-name>] WITH default_schema=[dbo], SID=<SID>, TYPE=E;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
GO

However, we still need to retrieve the SID for the AAD user somewhere. It is not something you will find in the MSI object created by the ARM template, nor in the service principal properties page in your AAD (the MSI is just a glorified service principal with an application registration). The link provided in the Microsoft Docs GitHub issue above comes to the rescue:

Can’t Create Azure SQL Database Users Mapped to Azure AD Identities using Service Principal

function ConvertTo-Sid([string] $objectId)
{
    [guid]$guid = [System.Guid]::Parse($objectId)
    $byteGuid = ""
    foreach ($byte in $guid.ToByteArray())
    {
        $byteGuid += [System.String]::Format("{0:X2}", $byte)
    }
    return "0x" + $byteGuid
}

The above PowerShell will create a correct SID from the object ID of an AAD user or group. However, if you run it on the object ID of an MSI (the ManagedServiceIdentityPrincipalId returned by the ARM template), your user will be created, but the MSI won’t actually have access. For service principals like the MSI, the SID needs to be created from the application ID instead (the ManagedServiceIdentityClientId returned by the ARM template).
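
For reference, the same conversion is tiny in C# as well (a sketch; pass in the client ID for service principals and the object ID for users and groups):

using System;

public static string ConvertToSid(string objectOrClientId)
{
    // SQL expects the GUID's byte representation as a hexadecimal literal.
    var bytes = Guid.Parse(objectOrClientId).ToByteArray();
    return "0x" + BitConverter.ToString(bytes).Replace("-", string.Empty);
}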

When putting this all together in a nice little YAML template for reuse:

parameters:
  armServiceConnection: ''  # The Azure Service Connection that has access to the specified database.
  serverName: ''            # The SQL Server name
  databaseName: ''          # The SQL Database name
  sqlAdminUsername: ''      # A SQL user with permissions to create new users.
  sqlAdminPassword: ''      # Password of the SQL user.
  identityName: ''          # The name of the user to create in the database.
  identityObjectId: ''      # The Object ID of the AAD user or group to add (or the application ID of a service principal).
  isGroup: false            # Indicates if the Object ID references a group instead of a user or service principal.

steps:
- task: PowerShell@2
  displayName: Convert ObjectID into SID
  inputs:
    targetType: 'inline'
    script: |
      [guid]$guid = [System.Guid]::Parse("${{ parameters.identityObjectId }}")
      $byteGuid = "0x"
      foreach ($byte in $guid.ToByteArray())
      {
          $byteGuid += [System.String]::Format("{0:X2}", $byte)
      }
      Write-Host "##vso[task.setvariable variable=identitySid]$byteGuid"

- task: PowerShell@2
  displayName: Create identity type
  inputs:
    targetType: 'inline'
    script: |
      if("${{ parameters.isGroup }}" -eq "true")
      {
        Write-Host "##vso[task.setvariable variable=identityType]X"
      } else {
        Write-Host "##vso[task.setvariable variable=identityType]E"
      }

- task: geeklearningio.gl-vsts-tasks-azure.execute-sql-task.ExecuteSql@1
  displayName: 'Add identity to SQL database users'
  inputs:
    ConnectedServiceName: ${{ parameters.armServiceConnection }}
    ScriptType: InlineScript
    Variables: |
      identityName=${{ parameters.identityName }}
      identitySid=$(identitySid)
      identityType=$(identityType)
    InlineScript: |
      IF NOT EXISTS (
          SELECT  [name]
          FROM    sys.database_principals
          WHERE   [name] = '$(identityName)'
      )
      BEGIN
        CREATE USER [$(identityName)] WITH default_schema=[dbo], SID=$(identitySid), TYPE=$(identityType);
        ALTER ROLE db_datareader ADD MEMBER [$(identityName)];
        ALTER ROLE db_datawriter ADD MEMBER [$(identityName)];
        ALTER ROLE db_ddladmin ADD MEMBER [$(identityName)];
      END
      GO
    ServerName: ${{ parameters.serverName }}
    DatabaseName: ${{ parameters.databaseName }}
    SqlUsername: ${{ parameters.sqlAdminUsername }}
    SqlPassword: ${{ parameters.sqlAdminPassword }}

NOTE: The geeklearningio.gl-vsts-tasks-azure.execute-sql-task.ExecuteSql@1 task is found in this extension. It can add a firewall rule for the build agent before performing any SQL operations, and remove this firewall rule again after the SQL operations are completed.

The template can now be called as follows:

- template: add-aad-user-to-sql.v1.yml@templates # this is what I called my template.
  parameters:
    armServiceConnection: 'Your service connection name here'
    serverName: your-sql-server-name.database.windows.net
    databaseName: YourDatabaseName
    sqlAdminUsername: $(SqlServerAdminUsername)
    sqlAdminPassword: $(SqlServerAdminPassword)
    identityName: your-web-app-name
    identityObjectId: $(ManagedServiceIdentityClientId)

Use the MSI to connect to the database

This is actually not that different from the original tutorial. I’ve been implementing all this in an ASP.NET Core 3.1 application, so the steps are basically these:

  • First install the Microsoft.Azure.Services.AppAuthentication package into the project containing your DBContext implementation (the latest version as of this writing is 1.5.0).
  • Now change your DBContext implementation’s constructor and set the database connection’s access token property to one retrieved for the MSI:
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.EntityFrameworkCore.SqlServer.Infrastructure.Internal;

namespace Your.Namespace.Here
{
    public class YourDBContext : DbContext
    {
        public YourDBContext(DbContextOptions<YourDBContext> options)
            : base(options)
        {
            // You might want to skip retrieving the access token in case you are running against a local DB or perhaps a sqlite DB for unit testing.
            if (Database.IsSqlServer() && !options.GetExtension<SqlServerOptionsExtension>().ConnectionString.Contains("(localdb)"))
            {
                var conn = (SqlConnection)Database.GetDbConnection();
                conn.AccessToken = (new AzureServiceTokenProvider())
                    .GetAccessTokenAsync("https://database.windows.net/").Result;
            }
        }
    }
}
  • Use the following connection string: Server=tcp:your-sql-server-name.database.windows.net,1433;Database=YourDatabaseName; (note that it contains no username or password; a sketch of wiring it up follows below).
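
For completeness, a minimal sketch of wiring that connection string up in an ASP.NET Core Startup class (the context and database names are placeholders):

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // No User ID or Password in the connection string; the DBContext
    // constructor shown above attaches an MSI access token instead.
    services.AddDbContext<YourDBContext>(options =>
        options.UseSqlServer(
            "Server=tcp:your-sql-server-name.database.windows.net,1433;Database=YourDatabaseName;"));
}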

You might have noticed that the code snippet above uses Microsoft.Data.SqlClient; it is the successor to the System.Data.SqlClient you might be more familiar with. For more information see this:

Introducing the new Microsoft.Data.SqlClient

Further tips

MSI for App Service slots

Managed service identities can also be enabled for deployment slots of your app service. In ARM templates you can add a similar identity property:

{
    "apiVersion": "2015-08-01",
    "name": "[parameters('webAppName')]",
    "type": "Microsoft.Web/sites",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    },
    ...
    "resources": [
        {
            "apiVersion": "2015-08-01",
            "name": "staging",
            "type": "slots",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/Sites', parameters('webAppName'))]"
            ],
            "identity": {
                "type": "SystemAssigned"
            }
            ...
        }
    ]
}

The actual MSI that is created for the slot is different from the one for the root app service resource, so don’t forget to add the corresponding output parameters as well:

"outputs": {
    "StagingMsiPrincipalId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites/slots', variables('webAppName'), 'staging'), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').principalId]"
    },
    "StagingMsiClientId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites/slots', variables('webAppName'), 'staging'), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').clientId]"
    }
}

Separate EF Migration script generation

If you use Entity Framework and don’t want your deployed application to upgrade your DB during start-up, you will have to generate upgrade scripts in your pipeline. I’ll discuss how to do this in another article. But since the pipeline isn’t running as an Active Directory user with access to the database, your migration script generation will most likely fail. You will need to change the way the DBContext is created at design time from the regular context creation. Basic info on this can be found here:

Design-time DbContext Creation

As an example that makes use of the if statement in the DBContext constructor implementation above, here we use a very specific connection string for design-time purposes (like generating migration scripts):

public class YourDBContextFactory : IDesignTimeDbContextFactory<YourDBContext>
{
    public YourDBContext CreateDbContext(string[] args)
    {
        var builder = new SqlConnectionStringBuilder();
        builder.DataSource = "(localdb)\\mssqllocaldb";
        builder.InitialCatalog = "TestDB";

        var optionsBuilder = new DbContextOptionsBuilder<YourDBContext>();
        optionsBuilder.UseSqlServer(builder.ConnectionString);

        return new YourDBContext(optionsBuilder.Options);
    }
}

NOTE: I actually don’t know why the database is contacted during migration script generation (scripts are generated for all available migrations, not just the ones needed by the configured database).

Application Insights for your App Service in ARM the correct* way

*Correct in my opinion

I will assume you know how to configure the Application Insights resource itself in ARM. If you do not, you might want to go here first. It is also good to know that, to get the most out of Application Insights, you used to need to install a site extension in your App Service. This had some drawbacks, most notably that deployments with site extensions sometimes took a lot longer.

When navigating to the Application Insights option of your App Service, you might often see a page prompting you to set up Application Insights, even though your App Service is already logging data to Application Insights. This happens when you have put the instrumentation key in an app setting you named yourself.

If you want a more useful page here, one that actually lets you navigate to the Application Insights resource, use the ‘magic’ setting key APPINSIGHTS_INSTRUMENTATIONKEY, with the value obviously being your Application Insights resource’s instrumentation key.

But there is more: you can also enable the agent-based integration (the replacement for the old site extension) by adding another ‘magic’ setting to the App Service configuration: ApplicationInsightsAgent_EXTENSION_VERSION, set to a value of ‘~2’. You can set configuration settings in your ARM template by using the following basic structure:

"resources": [
  {
    "name": "appsettings",
    "type": "config",
    "apiVersion": "2015-08-01",
    "dependsOn": [
      "[resourceId('Microsoft.Web/sites', variables('webSiteName'))]"
    ],
    "properties": {
      "APPINSIGHTS_INSTRUMENTATIONKEY": "<your instrumentation key here>",
      "ApplicationInsightsAgent_EXTENSION_VERSION": "~2"
    }
  }
]

There are even more settings that you can use to control the different switches that are now available on the Application Insights page of your App Service. They are described here.

Happy monitoring!

Configure Azure DevOps pipeline agent to auto reboot after each job.

Sometimes you might want a cleanly started machine (not cleanly installed, mind you) for your pipeline job, for instance if you are running UI tests. In some situations UI tests are very brittle and might be affected by a canceled or failed previous run. In these circumstances, restarting the agent automatically after each job can be beneficial. This is now possible with the introduction of the --once parameter of the agent (more info here).

Start off by installing your agent as usual, and be sure to make it an interactive agent. Don’t forget to configure it for autologon, since automatically rebooting without that feature would stop our agent in its tracks rather fast. After you have done this, you can add a custom cmd file (you could name it customrun.cmd, for instance) to the root directory of the agent with the following contents:

call "C:\agent\run.cmd" --startuptype autostartup --once
shutdown /r /t 0 /f

If you run this file, the agent will start and the --once parameter will force it to close after the first job is finished. The shutdown command will then immediately restart the machine.

To have this file run during autologon instead of the default generated run command, you need to edit the registry as well. Start your registry editor and search for the following key:

HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

Change the contents to something like the following (be sure to put in the full path to your own custom cmd file obviously):

C:\windows\system32\cmd.exe /D /S /C start "Agent with AutoLogon" "C:\agent\customrun.cmd"


Azure DevOps Graph API and continuation tokens

I recently found out that the Azure DevOps Graph API documentation is somewhat confusing regarding its description of when and where to expect continuation tokens when performing API calls.

As an example, let’s say you call the groups API:

https://vssps.dev.azure.com/{account name}/_apis/graph/groups?api-version=4.1-preview.1

The documentation mentions that if the data cannot be “returned in a single page”, the “result set” will contain a continuation token. It turns out that the definitions of a single page and a result set are both not entirely intuitive.

To start with the latter: a result set in this case is not only the resulting JSON document, as you might expect, but also the response headers of the API call. To be precise, the x-ms-continuationtoken response header will contain the continuation token if one is needed to retrieve the next page.

The definition of a page in this API is also somewhat strange. In our account I received 495 results in the first page and 66 in the second (and last) page for a call to the above API without any filtering. When I apply filtering, however (for instance, I want only the AAD groups), I receive 33 items in the first page and 5 in the second (and again last) page.

Lesson learned: look everywhere for that continuation token, even if the number of results doesn’t lead you to believe that it is a full page.
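
To make that concrete, here is a minimal, hedged C# paging loop. It assumes authentication with a personal access token, and the continuationToken query parameter is my assumption based on the documented API:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static async Task<List<string>> GetAllGroupPagesAsync(string accountName, string pat)
{
    var pages = new List<string>();
    using (var client = new HttpClient())
    {
        // Azure DevOps accepts a PAT as the password part of basic authentication.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

        string continuationToken = null;
        do
        {
            var url = "https://vssps.dev.azure.com/" + accountName +
                "/_apis/graph/groups?api-version=4.1-preview.1";
            if (continuationToken != null)
            {
                url += "&continuationToken=" + Uri.EscapeDataString(continuationToken);
            }

            var response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            pages.Add(await response.Content.ReadAsStringAsync());

            // The next-page token lives in a response header, not in the JSON body.
            IEnumerable<string> values;
            continuationToken = response.Headers.TryGetValues("x-ms-continuationtoken", out values)
                ? values.First()
                : null;
        } while (continuationToken != null);
    }
    return pages;
}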

WCF services on an Azure website returning 502 Bad Gateway

So the other day I moved a web role containing WCF services over to an Azure website, which seemed like a breeze: after deployment I called up the svc file in the browser and all seemed fine. However, when I tested with an actual client of the service, it received only 502 Bad Gateway responses.

Now, there are lots of reasons 502 responses happen, especially in cloud environments where load balancers and whatnot sit between you and the site/service. However, after some research a pattern started to emerge in which infrastructure problems seemed unlikely to be the cause, and a few seemingly random questions on Stack Overflow made me consider: might the problem be caused by my own code or configuration?

You see, a regular website or service should usually not respond with a 502 Bad Gateway; this is mostly something proxies and load balancers do (as far as I know). In this case too, the error is returned by some intermediate device and not the webserver itself. The intermediate device does this because the website severed the TCP connection abruptly, for instance because the application pool for the website was shut down unexpectedly. And in a .NET WCF service, what causes the application pool to shut down unexpectedly is usually something that brings the .NET application domain down: stuff like OutOfMemoryException, StackOverflowException and the like.

If you don’t catch these kinds of exceptions yourself (and indeed you usually should not, but that is another discussion entirely) and they bring down the application domain, no logging is done whatsoever (at least not as far as I could find, and I’ve searched for quite a while). So the best way to find out what is really going on is remote debugging the Azure website. A good tutorial on that can be found here. Be sure to deploy a debug build of your website for easiest debugging.

So now you have that connected, you hit the offending service with your client, and presto… you get a nice unhandled exception pop-up which will make you google some more, find a solution for the problem, and rid yourself of that pesky 502 error. Except… in my case no unhandled exception popped up. I double-checked my exception handling settings (twice) to make sure I had set them correctly. So this means… it’s not my code…

Back to the debugger. This time I turned off the ‘Just My Code’ feature in the debugger settings, hit the service again, and got presented with an actual unhandled exception. My particular problem was related to the one described in this Stack Overflow post.

I hope writing these steps down lets me (and maybe someone else) fix it considerably faster next time I hit this error. This was quite a long afternoon of headaches I’d love to get back.


“Windows 10 SMB Secure negotiation” or “Why will my network shares not work on Windows 10 anymore”

So, a couple of years ago I was the first person in the office to upgrade to Windows 8. I had the blessing of corporate IT, as long as I troubleshot my own problems if they were Windows 8 specific, and, of course, let them know what any errors were and how I fixed them.

One of the first problems I encountered was connecting to our $50k SAN. After some digging it turned out that it did not support a new SMB feature, turned on by default in Windows 8, called Secure Negotiate, which basically negotiates with the server about which encryption to use when transferring files. A solution was quickly found: turn off the feature.

This could be done by setting the following registry key:

HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecureNegotiate=0

Everything worked as expected until I upgraded to Windows 10 when that came out. Microsoft had a very valid reason to remove the above workaround and not allow you to bypass any security features unless the server indicated during negotiation that it would not support certain things.

However, the SAN still didn’t support any secure negotiate feature. So after some more research I found out that I could just tell the client to force secure transfer, without the need for negotiation. So if you can’t seem to access your SMB shares anymore since upgrading to Windows 10, open a PowerShell prompt as Administrator and run the following command:

Set-SmbClientConfiguration -RequireSecuritySignature $true

Please note that I am not an SMB protocol guru, so the above text may be a bit inaccurate in its details. If you want more info, however, someone at Microsoft who does know what he is talking about did a very detailed write-up about the feature. You can find it here:

https://blogs.msdn.microsoft.com/openspecification/2015/08/11/smb-3-1-1-pre-authentication-integrity-in-windows-10/

Azure ServiceBus Relay – 50200: Bad Gateway

<TL;DR;>This error message is not always caused by proxy issues. After last week’s updates, an old version of the Service Bus DLLs (2.2.3) on the relay server side caused this error on the client side when trying to call service operations.</TL;DR;>

Last week I arrived at the office and was greeted by a status screen that contained a lot more red lights than when I had left the day before. That in itself wasn’t too strange; we monitor customers’ servers as well, and who knows what kind of update/reboot schedule these guys have. However, the fact that the only servers experiencing problems were the ones we host ourselves made me a bit suspicious.

After some investigation I noticed the error message from the title in our logging. Apparently it can be found in two variations: 50200: Bad Gateway and, of course, 502: Bad Gateway. I had encountered this issue before at a customer using a proxy, and all Google results led me to believe that this was indeed a proxy issue on our side as well. However, we don’t have a proxy running in our network, and it was working fine before.

After some digging I noticed that only the servers that had received updates and were rebooted the night before were experiencing issues. Servers that had not been updated were fine. It turned out that one of the updates did not play well with the old (2.2.3) version of the Service Bus DLLs we were still using (the software had been running fine for 3 years, why update?). So after updating to the latest version that could still run on .NET 4 (2.8.0 if I remember correctly) and updating the software on the rebooted servers, we were back in business again.

MSBuild command line building a single project from a solution.

I recently needed to build just one project (and its dependencies) from a solution. I quickly found the following MSDN article on exactly how to do this:

https://msdn.microsoft.com/en-us/library/ms171486.aspx

However, I couldn’t get it to work for the life of me. The command always complained along the lines of:

MySolution.sln.metaproj : error MSB4057: The target "My.Project:Clean" does not exist in the project. [MySolution.sln]

Luckily, during a search on the internet about troubleshooting MSBuild issues, I came across a way to save the intermediate project file created by MSBuild from the solution. Because, as you might have noticed when looking at a .sln file, it’s not even close to a regular MSBuild project file. MSBuild interprets the solution file and generates one big MSBuild project file from it, then builds that file.

This can be done by setting an environment variable before calling MSBuild for a solution. In a command prompt, type the following:

Set MSBuildEmitSolution=1

When you then for instance build a solution with the following command:

msbuild MySolution.sln /t:Clean

This will perform a clean of the solution, but also save the entire MSBuild project file in a file called MySolution.sln.metaproj.

I mention this because the MSDN article above talks about targets, and usually targets in a project file are called Clean, Rebuild or something like that. Why would there be a target “MyProjectName:Clean”? Well, because MSBuild generates that target in the aforementioned .metaproj file.

It turns out, however, that target names may not contain the . character, and MSBuild nicely works around this by replacing it with the _ character. So to get my single project building I had to call:

msbuild MySolution.sln /t:My_Project:Rebuild

Hopefully this post saves someone else some time.

Microsoft Edge not starting after Windows 10 update (v1511)

I recently updated my work machine to the latest Windows 10 update (1511). After the update finished, I noticed that I couldn’t start Microsoft Edge anymore. I didn’t think much of it at the time, since it is not my main browser. However, it started to annoy me a bit when it turned out it was my main PDF reader.

Rather than setting another app as the default PDF reader, I decided to try and fix the cause of the problem. This turned out to be harder than expected, though. I don’t know why the problem reared its head after the latest update, but suffice to say that after a reinstall Edge worked, and then after configuring my PC it didn’t anymore.

Reinstalling again and then checking after each step revealed that things went wrong after connecting my work account to my PC. And by work account I don’t mean my domain account, but rather my Office 365 organizational account (which you can connect using the Accounts settings page in Windows 10).

Things, however, did not return to normal after I had severed the connection, and I had to remove my profile and recreate it to ensure Edge worked again. If you are using a roaming profile this might not work for you; also, do not take removing your profile lightly. It holds more of your settings and configuration than you might realize.

Generating and consuming JSON Web Tokens with .NET

Maybe you have read my previous blog post, in which I talked about token generation in OWIN. After the issues we had there with the machine key and OWIN versions, I decided to take a look at some alternatives.

After some research I decided JSON Web Tokens (or JWTs, which apparently should be pronounced as the English word ‘jot’) would fit the bill. They are small, the format is an open standard, and a token has a simple, URL-safe string representation. More info on the standard can be found in this draft.

After this research it should be easy to incorporate this into my solution, right? Well… not as easy as I thought. It turns out many samples just use an external STS to create and verify tokens, or some custom implementation that doesn’t support all of the options; complete samples of generating a token in a WCF service and using it in a client to pass on to another service are nowhere to be found. So after a lot of searching and researching I decided to make my own sample.

So here comes the first part: generating and consuming.

I will be using the “JSON Web Token Handler for the Microsoft .NET Framework 4.5” NuGet package, as it is called by its full name. It is also called System.IdentityModel.Tokens.Jwt. In this post I’ll just show you how to create a token from some claims and then how to turn the token back into claims again, in a console application, so we can more easily see what is going on.

I have just created a new console application in Visual Studio 2015 and added the aforementioned NuGet package. At the time of writing the latest stable version is 4.0.2.206221351. Don’t forget to add a reference to the System.IdentityModel assembly as well; it has been part of the .NET Framework since v4.5.

First we will add some using clauses we will need:

using System.IdentityModel.Tokens;
using System.Security.Claims;

Before we can sign a token we need a secret to sign it with. There are multiple options, like certificates and whatnot. The easiest to use in this example, however, is just a normal shared secret text, which we will need to turn into a byte array before we can make it a secret key. We will also have to put it in a SigningCredentials object, together with the algorithms we will use to sign the token:

var plainTextSecurityKey = "This is my shared, not so secret, secret!";
var signingKey = new InMemorySymmetricSecurityKey(
    Encoding.UTF8.GetBytes(plainTextSecurityKey));
var signingCredentials = new SigningCredentials(signingKey, 
    SecurityAlgorithms.HmacSha256Signature, SecurityAlgorithms.Sha256Digest);

You can use a couple of different security algorithms, but you should specify one that ends in Signature for the first parameter and one that ends in Digest for the second. Some combinations will throw a NotSupportedException (because: not supported); HmacSha256Signature and Sha256Digest seem to be the default in most examples I have seen.

After that we will need a few claims to put in the token; otherwise, why would we need a token at all:

var claimsIdentity = new ClaimsIdentity(new List<Claim>()
{
    new Claim(ClaimTypes.NameIdentifier, "myemail@myprovider.com"),
    new Claim(ClaimTypes.Role, "Administrator"),
}, "Custom");

Now we can create the security token descriptor:

var securityTokenDescriptor = new SecurityTokenDescriptor()
{
    AppliesToAddress = "http://my.website.com",
    TokenIssuerName = "http://my.tokenissuer.com",
    Subject = claimsIdentity,
    SigningCredentials = signingCredentials,
};

Please note that the AppliesToAddress and TokenIssuerName must be valid URIs. Not in the sense that they should be resolvable, but they must be in a valid URI format (if you have accidentally read the v3.5 WIF documentation this can be confusing; it says that any string will do). The AppliesToAddress should contain the token’s audience, which means the website or application that will receive the token. The TokenIssuerName is, obviously, the application issuing the token.

This token descriptor can now be used with any WIF (Windows Identity Foundation) token handler (see the SecurityTokenHandler class on MSDN). The JwtSecurityTokenHandler we are going to use is a descendant of that class (and implements the necessary abstract members).

Here is the code to create a token, then sign and encode it:

var tokenHandler = new JwtSecurityTokenHandler();
var plainToken = tokenHandler.CreateToken(securityTokenDescriptor);
var signedAndEncodedToken = tokenHandler.WriteToken(plainToken);

If you want you can print the stuff on the screen now to see what it generated:

Console.WriteLine(plainToken.ToString());
Console.WriteLine(signedAndEncodedToken);
Console.ReadLine();

Now that we have an encoded token that is easily transportable, we might want some other application to validate it (to see that it was not tampered with). To do this, we first need an instance of the TokenValidationParameters class:

var tokenValidationParameters = new TokenValidationParameters()
{
    ValidAudiences = new string[]
    {
        "http://my.website.com",
        "http://my.otherwebsite.com"
    },
    ValidIssuers = new string[]
    {
        "http://my.tokenissuer.com",
        "http://my.othertokenissuer.com"
    },
    IssuerSigningKey = signingKey
};

As you can see, the TokenValidationParameters class allows us to specify multiple valid issuers and audiences. You will also need to specify the same signing key as when you created the token (obviously). We can now simply validate the token the following way:

SecurityToken validatedToken;
tokenHandler.ValidateToken(signedAndEncodedToken,
    tokenValidationParameters, out validatedToken);

Console.WriteLine(validatedToken.ToString());
Console.ReadLine();
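
If validation fails, for instance because the signature doesn’t match the key, ValidateToken throws a SecurityTokenValidationException (or a subclass of it). A small sketch; the wrong key is of course only there to force the failure:

var wrongKey = new InMemorySymmetricSecurityKey(
    Encoding.UTF8.GetBytes("A different secret that was never used for signing!"));
var strictValidationParameters = new TokenValidationParameters()
{
    ValidAudiences = new string[] { "http://my.website.com" },
    ValidIssuers = new string[] { "http://my.tokenissuer.com" },
    IssuerSigningKey = wrongKey
};

try
{
    SecurityToken ignoredToken;
    tokenHandler.ValidateToken(signedAndEncodedToken,
        strictValidationParameters, out ignoredToken);
}
catch (SecurityTokenValidationException ex)
{
    // The token could not be verified; do not trust its claims.
    Console.WriteLine("Token rejected: " + ex.Message);
}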

You might be wondering how the token handler knows which signature and digest algorithms were used. If you look carefully, you will see that the algorithm name is encoded into the token itself (this encoding is simply Base64, not encryption).
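
You can see this for yourself by parsing the token again without validating it:

// ReadToken only decodes the Base64 parts; it performs no validation at all.
var parsedToken = tokenHandler.ReadToken(signedAndEncodedToken) as JwtSecurityToken;
Console.WriteLine(parsedToken.SignatureAlgorithm);   // e.g. HS256 for HmacSha256Signature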

The source code to this sample can be found here.