Azure SQL with AAD Authorization via App Service MSI

Introduction

This article contains all the information you might need to use an Azure SQL database through Entity Framework from within an App Service that has a managed service identity (MSI) configured (where the MSI is used to authenticate with the Azure SQL database), all set up using Azure Pipelines CI/CD. There is some information out there on the different parts, but it turned out that a lot of investigation was needed to get everything working together smoothly.

The basics

Read the tutorial created by Microsoft to see what it takes to do most of this manually:

Tutorial: Secure Azure SQL Database connection from App Service using a managed identity

This is what we want to perform automatically in the deployment pipeline. There are also some other tweaks you might need in your situation; these are described at the end. We are going to cover the following parts:

  • Configure Azure SQL via an ARM template.
  • Configure an App Service with a managed service identity (MSI).
  • Add the MSI as a user to the database.
  • Use the MSI to connect to the database.
  • Further tips.

We will assume you have a basic understanding of ARM templates and Azure DevOps YAML pipelines throughout this article.

Configure Azure SQL via an ARM template

You can find the ARM template reference for SQL Servers and SQL Databases here.

We will split the ARM snippet into a few separate pieces: one for creating the server and two for creating inner resources (a database and an AAD administrator configuration).

SQL Server

The server snippet creates the server and configures a regular administrator login. Such a login is still mandatory (and it is not an AAD login). It is probably best to generate your SQL admin password during your deployment and write it to a key vault for safekeeping (a sketch of this follows the snippet below). We will use this account later on when adding our AAD accounts as users to the database.

{
    "type": "Microsoft.Sql/servers",
    "kind": "v12.0",
    "name": "[parameters('SqlServerName')]",
    "apiVersion": "2019-06-01-preview",
    "location": "[resourceGroup().location]",
    "properties": {
        "administratorLogin": "[parameters('SqlServerAdminName')]",
        "administratorLoginPassword": "[parameters('SqlServerAdminPassword')]",
        "minimalTlsVersion": "1.2",
        "version": "12.0"
    },
    "resources": [
        // inner resources go here (also remove this comment).
    ]
}
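
As an aside on that password: a minimal PowerShell sketch of generating it and writing it to a key vault during deployment (assuming the Az PowerShell module is available; the vault name and secret name are placeholders):

# Generate 32 random bytes and turn them into a base64 password.
$bytes = New-Object byte[] 32
$rng = [System.Security.Cryptography.RNGCryptoServiceProvider]::new()
$rng.GetBytes($bytes)
$password = [Convert]::ToBase64String($bytes)

# Store the password in Key Vault for safekeeping ('my-key-vault' is a placeholder).
$secret = ConvertTo-SecureString -String $password -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'my-key-vault' -Name 'SqlServerAdminPassword' -SecretValue $secret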

Database

The following creates a database using the old-style, DTU-based resource configuration.

{
    "type": "databases",
    "name": "[parameters('SqlDatabaseName')]",
    "location": "[resourceGroup().location]",
    "apiVersion": "2014-04-01",
    "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('SqlServerName'))]"
    ],
    "properties": {
        "collation": "[parameters('SqlDatabaseCollation')]",
        "edition": "[parameters('SqlDatabaseEdition')]",
        "requestedServiceObjectiveName": "[parameters('SqlDatabaseServiceObjective')]"
    }
}

Note: the API version is set to a relatively old one. The newest version has a different syntax and creates vCore-based databases by default.

ARM-based AAD administrator

Though not strictly necessary for adding AAD accounts later on at the database level, you can add one AAD account as server administrator via the ARM template with the following resource:

{
    "type": "administrators",
    "name": "activeDirectory",
    "apiVersion": "2019-06-01-preview",
    "location": "[resourceGroup().location]",
    "properties": {
        "administratorType": "ActiveDirectory",
        "login": "[parameters('AADAdminLogin')]",
        "sid": "[parameters('AADAdminSid')]",
        "tenantId": "[parameters('AADAdminTenantId')]"
    },
    "dependsOn": [
        "[concat('Microsoft.Sql/servers/', parameters('SqlServerName'))]"
    ]
}

Configure an App Service with a managed service identity (MSI)

Now that our database is all in order it is time to configure our App Service with an MSI in its ARM template:

{
    "apiVersion": "2015-08-01",
    "name": "[parameters('WebAppName')]",
    "type": "Microsoft.Web/sites",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    }
    ...
}

We will need some information regarding the MSI created for our web application in our deployment pipeline so we will also add the following ARM output parameters:

"outputs": {
    "ManagedServiceIdentityPrincipalId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webAppName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').principalId]"
    },
    "ManagedServiceIdentityClientId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webAppName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').clientId]"
    }
}

NOTE: Another way to reference this info is described here but I have not tried it myself.

With your ARM template updated, you might need to add some tasks to your pipeline, especially if you didn't have any output parameters before. We are using YAML, and the following steps take care of the ARM template deployment and the retrieval of the output parameters afterwards:

- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy ARM template'
  inputs:
    azureSubscription: 'Your service connection name here'
    resourceGroupName: '$(resourceGroupName)'
    location: '$(location)'
    csmFile: 'ARMTemplate.json'
    csmParametersFile: 'ARMTemplate.parameters.json'

- task: keesschollaart.arm-outputs.arm-outputs.ARM Outputs@5
  displayName: 'Retrieve ARM Outputs'
  inputs:
    ConnectedServiceNameARM: 'Your service connection name here'
    resourceGroupName: $(resourceGroupName)

NOTE: The keesschollaart.arm-outputs.arm-outputs.ARM Outputs@5 task is found in this extension.

Add the MSI as a user to the database

Now comes the tricky part: actually giving the MSI access to the database. According to the original tutorial, we would need to execute the following piece of SQL to add the user (where <identity-name> is the name of the web application, since a system-assigned MSI has the same name as its parent web application):

CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
GO

If you were to try to perform this in the pipeline with a SQLCMD operation, using the server admin username and password configured in the SQL Server creation step, you would find out there is a sneaky caveat here: you can only add AAD users with this syntax if you are logged in to the database as an AAD user yourself. You could only accomplish that by making the Azure Pipelines service connection's service principal the AAD administrator of the SQL Server in the first step, and then somehow getting hold of its client ID and secret to supply to SQLCMD (if that would even work; I haven't tried). We need another way, and I wasn't the first to think so; someone already asked here:

Is there any way to add managed identity as db user from pipelines?

So our SQL script will now read:

CREATE USER [<identity-name>] WITH default_schema=[dbo], SID=<SID>, TYPE=E;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
GO

However, we still need to retrieve the SID for the AAD user somewhere. It is not something you will find in the MSI object created by the ARM template, nor in the service principal properties page in your AAD (the MSI is just a glorified service principal with an application registration). The link provided in the Microsoft Docs GitHub issue above comes to the rescue:

Can’t Create Azure SQL Database Users Mapped to Azure AD Identities using Service Principal

# Convert an AAD object ID into the binary SID format SQL expects ($objectId holds the ID).
[guid]$guid = [System.Guid]::Parse($objectId)
$byteGuid = ""
foreach ($byte in $guid.ToByteArray())
{
    $byteGuid += [System.String]::Format("{0:X2}", $byte)
}
return "0x" + $byteGuid

The above PowerShell script will create a correct SID from the object ID of an AAD user or AAD group. If you perform this on the object ID of an MSI (the ManagedServiceIdentityPrincipalId returned by the ARM template), however, your user will be created, but the MSI won't actually have access. For service principals like the MSI, the SID needs to be created from the application ID (the ManagedServiceIdentityClientId returned by the ARM template).

Putting this all together in a nice little YAML template for reuse:

parameters:
  armServiceConnection: ''  # The Azure Service Connection that has access to the specified database.
  serverName: ''            # The SQL Server name
  databaseName: ''          # The SQL Database name
  sqlAdminUsername: ''      # A SQL user with permissions to create new users.
  sqlAdminPassword: ''      # Password of the SQL user.
  identityName: ''          # The name of the user to create in the database.
  identityObjectId: ''      # The Object ID of the AAD user or group to add (or the application ID of a service principal).
  isGroup: false            # Indicates if the Object ID references a group instead of a user or service principal.

steps:
- task: PowerShell@2
  displayName: Convert ObjectID into SID
  inputs:
    targetType: 'inline'
    script: |
      [guid]$guid = [System.Guid]::Parse("${{ parameters.identityObjectId }}")
      $byteGuid = "0x"
      foreach ($byte in $guid.ToByteArray())
      {
          $byteGuid += [System.String]::Format("{0:X2}", $byte)
      }
      Write-Host "##vso[task.setvariable variable=identitySid]$byteGuid"

- task: PowerShell@2
  displayName: Create identity type
  inputs:
    targetType: 'inline'
    script: |
      if("${{ parameters.isGroup }}" -eq "true")
      {
        Write-Host "##vso[task.setvariable variable=identityType]X"
      } else {
        Write-Host "##vso[task.setvariable variable=identityType]E"
      }

- task: geeklearningio.gl-vsts-tasks-azure.execute-sql-task.ExecuteSql@1
  displayName: 'Add identity to SQL database users'
  inputs:
    ConnectedServiceName: ${{ parameters.armServiceConnection }}
    ScriptType: InlineScript
    Variables: |
      identityName=${{ parameters.identityName }}
      identitySid=$(identitySid)
      identityType=$(identityType)
    InlineScript: |
      IF NOT EXISTS (
          SELECT  [name]
          FROM    sys.database_principals
          WHERE   [name] = '$(identityName)'
      )
      BEGIN
        CREATE USER [$(identityName)] WITH default_schema=[dbo], SID=$(identitySid), TYPE=$(identityType);
        ALTER ROLE db_datareader ADD MEMBER [$(identityName)];
        ALTER ROLE db_datawriter ADD MEMBER [$(identityName)];
        ALTER ROLE db_ddladmin ADD MEMBER [$(identityName)];
      END
      GO
    ServerName: ${{ parameters.serverName }}
    DatabaseName: ${{ parameters.databaseName }}
    SqlUsername: ${{ parameters.sqlAdminUsername }}
    SqlPassword: ${{ parameters.sqlAdminPassword }}

NOTE: The geeklearningio.gl-vsts-tasks-azure.execute-sql-task.ExecuteSql@1 task is found in this extension. It can add a firewall rule for the build agent before performing any SQL operations, and remove that rule again after the SQL operations have completed.

The template can now be called as follows:

- template: add-aad-user-to-sql.v1.yml@templates # this is what I called my template.
  parameters:
    armServiceConnection: 'Your service connection name here'
    serverName: your-sql-server-name.database.windows.net
    databaseName: YourDatabaseName
    sqlAdminUsername: $(SqlServerAdminUsername)
    sqlAdminPassword: $(SqlServerAdminPassword)
    identityName: your-web-app-name
    identityObjectId: $(ManagedServiceIdentityClientId)

Use the MSI to connect to the database

This is actually not that different from the original tutorial. I've been implementing all this in an ASP.NET Core 3.1 application, so the steps are basically these:

  • First install the Microsoft.Azure.Services.AppAuthentication package into the project containing your DBContext implementation (the latest version as of this writing is 1.5.0).
  • Now change your DBContext implementation's constructor to set the database connection's access token to one retrieved for the MSI:
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.EntityFrameworkCore.SqlServer.Infrastructure.Internal;

namespace Your.Namespace.Here
{
    public class YourDBContext : DbContext
    {
        public YourDBContext(DbContextOptions<YourDBContext> options)
            : base(options)
        {
            // You might want to skip retrieving the access token in case you are running against a local DB or perhaps a sqlite DB for unit testing.
            if (Database.IsSqlServer() && !options.GetExtension<SqlServerOptionsExtension>().ConnectionString.Contains("(localdb)"))
            {
                var conn = (SqlConnection)Database.GetDbConnection();
                conn.AccessToken = (new AzureServiceTokenProvider())
                    .GetAccessTokenAsync("https://database.windows.net/").Result;
            }
        }
    }
}
  • Use the following connection string: Server=tcp:your-sql-server-name.database.windows.net,1433;Database=YourDatabaseName;

You might have noticed that the code snippet above uses Microsoft.Data.SqlClient; it is the successor to the System.Data.SqlClient you might be more familiar with. For more information, see:

Introducing the new Microsoft.Data.SqlClient

Further tips

MSI for App Service slots

Managed service identities can also be enabled for deployment slots of your app service. In ARM templates you can add a similar identity property:

{
    "apiVersion": "2015-08-01",
    "name": "[parameters('webAppName')]",
    "type": "Microsoft.Web/sites",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    },
    ...
    "resources": [
        {
            "apiVersion": "2015-08-01",
            "name": "staging",
            "type": "slots",
            "location": "[resourceGroup().location]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/Sites', parameters('webAppName'))]"
            ],
            "identity": {
                "type": "SystemAssigned"
            }
            ...
        }
    ]
}

The MSI created for the slot is different from the one on the root app service resource, so don't forget to add the corresponding output parameters as well:

"outputs": {
    "StagingMsiPrincipalId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites/slots', variables('webAppName'), 'staging'), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').principalId]"
    },
    "StagingMsiClientId": {
        "type": "string",
        "value": "[reference(concat(resourceId('Microsoft.Web/sites/slots', variables('webAppName'), 'staging'), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2018-11-30').clientId]"
    }
}

Separate EF Migration script generation

If you use Entity Framework and don't want your deployed application to upgrade your DB during start-up, you will have to generate upgrade scripts in your pipeline (how to do this I'll discuss in another article). But since the pipeline isn't running as an Active Directory user with access to the database, your migration script generation will most likely fail. You will need to make the design-time DbContext creation differ from the regular context creation. Basic info on this can be found here:

Design-time DbContext Creation
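
Generating such a script in the pipeline typically boils down to a single CLI call. A minimal sketch, assuming the dotnet-ef tool is available and using a hypothetical project path:

# Produce one idempotent upgrade script covering all migrations (paths are hypothetical).
dotnet tool install --global dotnet-ef
dotnet ef migrations script --idempotent --output migrations.sql --project src/Your.Project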

Here is an example that makes use of the if statement added to the DBContext constructor implementation above; it uses a very specific connection string for design-time purposes (like generating migration scripts):

public class YourDBContextFactory : IDesignTimeDbContextFactory<YourDBContext>
{
    public YourDBContext CreateDbContext(string[] args)
    {
        var builder = new SqlConnectionStringBuilder();
        builder.DataSource = "(localdb)\\mssqllocaldb";
        builder.InitialCatalog = "TestDB";

        var optionsBuilder = new DbContextOptionsBuilder<YourDBContext>();
        optionsBuilder.UseSqlServer(builder.ConnectionString);

        return new YourDBContext(optionsBuilder.Options);
    }
}

NOTE: I actually don't know why the database is contacted during migration script generation at all (scripts are generated for all available migrations, not just the ones the configured database needs).

Application Insights for your App Service in ARM the correct* way

*Correct in my opinion

I will assume you know how to configure the Application Insights resource itself in ARM. If you do not, you might want to go here first. It is also good to know that, to get the most out of Application Insights, you used to need to install a site extension in your App Service. This had some drawbacks, most notably that deployments with site extensions sometimes took a lot longer.

When navigating to the Application Insights option of your App Service, you might often see a page claiming Application Insights is not configured yet, even though your App Service is logging data to Application Insights just fine. This happens when you put the instrumentation key in an app setting you named yourself. If you want a more useful page here, one that actually lets you navigate to the Application Insights resource, use the ‘magic’ setting key APPINSIGHTS_INSTRUMENTATIONKEY, with the value being your Application Insights resource’s instrumentation key, obviously.

But wait, there is more: the agent-based integration (and the extra switches on that page) can be enabled by adding another ‘magic’ setting to the App Service configuration: ApplicationInsightsAgent_EXTENSION_VERSION, set to a value of ‘~2’. You can set configuration settings in your ARM template by using the following basic structure:

"resources": [
  {
    "name": "appsettings",
    "type": "config",
    "apiVersion": "2015-08-01",
    "dependsOn": [
      "[resourceId('Microsoft.Web/sites', variables('webSiteName'))]"
    ],
    "properties": {
      "APPINSIGHTS_INSTRUMENTATIONKEY": "<your instrumentation key here>",
      "ApplicationInsightsAgent_EXTENSION_VERSION": "~2"
    }
  }
]

There are even more settings you can use to control the different switches now available on the Application Insights page of your App Service. They are described here.

Happy monitoring!

Configure Azure DevOps pipeline agent to auto reboot after each job.

Sometimes you might want a cleanly started machine (not cleanly installed, mind you) for your pipeline job, for instance when you are running UI tests. In some situations UI tests are very brittle and might be affected by a canceled or failed previous run. In these circumstances, restarting the agent automatically after each job can be beneficial. This is now possible thanks to the introduction of the --once parameter of the agent (more info here).

Start off by installing your agent as usual, and be sure to make it an interactive agent. Don't forget to configure it for autologon, since automatically rebooting without that feature would stop our agent in its tracks rather fast. After you have done this, add a custom cmd file (you could name it customrun.cmd, for instance) to the root directory of the agent with the following contents:

call "C:\agent\run.cmd" --startuptype autostartup --once
shutdown /r /t 0 /f

If you run this file, the agent will start and the --once parameter will force it to close after the first job is finished. The shutdown command will then immediately restart the machine.

To have this file run during autologon instead of the default generated run command, you need to edit the registry as well. Start your registry editor and navigate to the following key:

HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

Change the contents to something like the following (be sure to put in the full path to your own custom cmd file obviously):

C:\windows\system32\cmd.exe /D /S /C start "Agent with AutoLogon" "C:\agent\customrun.cmd"
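
If you prefer to script this change, something like the following should do the same. Note that the value name 'VSTSAgent' is a guess on my part; check which value the agent's autologon configuration actually created under the Run key and use that name:

# Point the autologon Run entry at the custom cmd file (the value name is a placeholder).
Set-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run' `
    -Name 'VSTSAgent' `
    -Value 'C:\windows\system32\cmd.exe /D /S /C start "Agent with AutoLogon" "C:\agent\customrun.cmd"'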


Azure DevOps Graph API and continuation tokens

I recently found out that the Azure DevOps Graph API documentation is somewhat confusing regarding its description of when and where to expect continuation tokens when performing API calls.

As an example, let's say you call the groups API:

https://vssps.dev.azure.com/{account name}/_apis/graph/groups?api-version=4.1-preview.1

The documentation mentions that if the data cannot be “returned in a single page”, the “result set” will contain a continuation token. It turns out that the definitions of both a single page and a result set are not entirely intuitive.

To start with the latter: a result set in this case is not just the resulting JSON document, as you might expect, but also includes the response headers of the API call. To be precise, the x-ms-continuationtoken response header will contain the continuation token if one is needed to retrieve the next page.

The definition of a page in this API is also somewhat strange. In our account I received 495 results in the first page and 66 in the second (and last) page for a call to the above API without any filtering. With filtering applied, however (for instance, requesting only the AAD groups), I received 33 items in the first page and 5 in the second (and again last) page.

Lessons learned: look everywhere for that continuation token, even if the number of results doesn't lead you to believe you received a full page.
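
To make that concrete, here is a minimal PowerShell sketch of paging through the groups API. The organization name and the AZDO_PAT environment variable are placeholders, and I am assuming the token goes back in via the continuationToken query parameter, which is how the API behaved for me:

# Keep requesting pages until the x-ms-continuationtoken response header disappears.
$org = "your-account-name"  # placeholder
$pat = ":$($env:AZDO_PAT)"  # personal access token, placeholder
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pat)) }

$groups = @()
$token = $null
do {
    $url = "https://vssps.dev.azure.com/$org/_apis/graph/groups?api-version=4.1-preview.1"
    if ($token) { $url += "&continuationToken=$token" }
    $response = Invoke-WebRequest -Uri $url -Headers $headers -UseBasicParsing
    $groups += ($response.Content | ConvertFrom-Json).value
    # The continuation token lives in a response header, not in the JSON body.
    $token = $response.Headers['x-ms-continuationtoken']
} while ($token)
Write-Host "Retrieved $($groups.Count) groups"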

WCF services on an Azure website returning 502 Bad Gateway

So the other day I moved a web role containing WCF services over to an Azure website, which seemed like a breeze: after deployment I called up the svc file in the browser and all seemed fine. However, when I tested with an actual client of the service, it received only 502 Bad Gateway responses.

Now, there are lots of reasons 502 responses happen, especially in cloud environments where load balancers and whatnot sit between you and the site or service. However, after some research a pattern started to emerge in which infrastructure problems seemed an unlikely cause, and a few seemingly random questions on Stack Overflow made me consider that the problem might be caused by my own code or configuration.

You see, a regular website or service should usually not respond with a 502 Bad Gateway; this is mostly something proxies, load balancers, and the like do (as far as I know). In this case too, the error is returned by some intermediate device and not the web server itself. The intermediate device does this because the website severed the TCP connection abruptly, for instance because the application pool for the website was shut down unexpectedly. And in a .NET WCF service, what causes the application pool to shut down unexpectedly is usually something that brings the .NET application domain down: things like OutOfMemoryException, StackOverflowException, and the like.

If you don't catch these kinds of exceptions yourself (and indeed you usually should not, but that is another discussion entirely) and they bring down the application domain, no logging is done whatsoever (at least not as far as I could find, and I've searched for quite a while). So the best way to find out what is really going on is remote debugging the Azure website. A good tutorial on that can be found here. Be sure to deploy a debug build of your website for easiest debugging.

So now you have that connected, hit the offending service with your client, and presto… you get a nice unhandled exception pop-up, which makes you google some more, find a solution for the problem, and rid yourself of that pesky 502 error. Except… in my case no unhandled exception popped up. I double-checked my exception handling settings (twice) to make sure I had them set correctly. So this means… it's not my code…

Back to the debugger. This time I turned off the ‘Just My Code’ feature in the debugger settings, hit the service again, and got presented with an actual unhandled exception. My particular problem was related to the one described in this Stack Overflow post.

I hope writing these steps down lets me (and maybe someone else) fix this considerably faster the next time this error shows up. This was quite a long afternoon of headaches I'd love to get back.

“Windows 10 SMB Secure negotiation” or “Why will my network shares not work on Windows 10 anymore”

So, a couple of years ago I was the first person in the office to upgrade to Windows 8. I had the blessing of corporate IT, as long as I troubleshot any Windows 8-specific problems myself, and of course let them know about any errors I encountered and how I fixed them.

One of the first problems I encountered was connecting to our $50k SAN. After some digging it turned out that the SAN did not support a new SMB feature, turned on by default in Windows 8, called Secure Negotiate, which basically negotiates with the server about which encryption to use when transferring files. A solution was quickly found: turn off the feature.

This could be done by setting the following registry key:

HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecureNegotiate=0

Everything worked as expected until I upgraded to Windows 10 when that came out. Microsoft had a very valid reason to remove the above workaround and no longer let you bypass security features unless the server indicated during negotiation that it did not support them.

However, the SAN still didn't support the secure negotiate feature. So after some more research I found out that I could just tell the client to force secure transfer without the need for negotiation. If you can't seem to access your SMB shares anymore since upgrading to Windows 10, open a PowerShell prompt as Administrator and run the following command:

Set-SmbClientConfiguration -RequireSecuritySignature $true

Please note that I am not an SMB protocol guru, so the above may be a bit inaccurate in its details. If you want more info, someone at Microsoft who does know what he is talking about did a very detailed write-up about the feature. You can find it here:

https://blogs.msdn.microsoft.com/openspecification/2015/08/11/smb-3-1-1-pre-authentication-integrity-in-windows-10/

MSBuild command line building a single project from a solution.

I recently needed to build just one project (and its dependencies) from a solution. I quickly found the following MSDN article on exactly how to do this:

https://msdn.microsoft.com/en-us/library/ms171486.aspx

However, I couldn’t get it to work for the life of me. The command always complained along the lines of:

MySolution.sln.metaproj : error MSB4057: The target "My.Project:Clean" does not exist in the project. [MySolution.sln]

Luckily, during a search on the internet about troubleshooting MSBuild issues, I came across a way to save the intermediate project file MSBuild creates from the solution. Because, as you might have noticed when looking at a .sln file, it's not even close to a regular MSBuild project file. MSBuild interprets the solution file, generates one big MSBuild project file from it, and then builds that file.

This can be done by setting an environment variable before calling MSBuild for a solution. In a command prompt, type the following:

Set MSBuildEmitSolution=1

When you then, for instance, clean a solution with the following command:

msbuild MySolution.sln /t:Clean

This will perform a clean of the solution, but also save the entire MSBuild project file in a file called MySolution.sln.metaproj.

I thought this was a good idea because the MSDN article above talks about targets, and usually targets in a project file are called Clean, Rebuild, or something like that. Why would there be a target “MyProjectName:Clean”? Well, because MSBuild generates that target in the aforementioned .metaproj file.

It turns out, however, that target names may not contain the . character, and MSBuild nicely works around this by replacing it with an _ character. So to get my single project building I had to call:

msbuild MySolution.sln /t:My_Project:Rebuild

Hopefully this post saves someone else some time.

Microsoft Edge not starting after Windows 10 update (v1511)

I recently updated my work machine to the latest Windows 10 update (1511). After the update finished I noticed that I couldn't start Microsoft Edge anymore. I didn't think much of it at the time, since it is not my main browser. However, it started to annoy me a bit when it turned out it was my main PDF reader.

Rather than setting another app as the default PDF reader, I decided to try and fix the cause of the problem. This turned out to be harder than expected. I don't know why the problem reared its head after the latest update, but suffice it to say that after a reinstall Edge worked, and then after configuring my PC it didn't anymore.

Reinstalling again and checking after each step revealed that things went wrong after connecting my work account to my PC. And by work account I don't mean my domain account, but rather my Office 365 organizational account (the kind you can connect using the Accounts settings page in Windows 10).

Things, however, did not return to normal after I severed the connection, and I had to remove my profile and recreate it to get Edge working again. If you are using a roaming profile this might not work for you; also, do not take removing your profile lightly. It holds more of your settings and configuration than you might realize.

Problems with OAuth Access Token encryption and decryption using Microsoft.OWIN hosted in IIS.

If you want to secure access to your WebAPI calls, one mechanism you can use is OAuth2 bearer tokens. These tokens are generated via a login call, for instance, and the website or mobile app can hold on to the token to authenticate with the server. They can be generated using Microsoft's OWIN implementation (also known as Katana).

These tokens have an expiration date, and after that date you won't accept the token, obviously. However, there are also situations where the token can't even be decrypted.

First of all, the default way of encrypting the token when you host Owin/Katana in your own process (HttpListener or otherwise) is different from when it is hosted in IIS using the SystemWeb host (which is a separate NuGet package, by the way). The former uses the DPAPI to protect the tokens, while the latter uses ASP.NET's machine key data protection. There is also the option of providing your own provider/format.

I am currently only familiar with the SystemWeb host under IIS, and we recently ran into some problems after updating our software and moving it to another machine. See, we had these mobile devices that registered with our WebAPI service and stored a token which should not expire. After the update, however, we found the tokens would not decrypt anymore, and our users were presented with a security error, which meant they had to re-register the device with our software.

We quickly found out that we had forgotten to set the machine key in our web.config, so encryption on the new server was different from the old one. However, even after configuring our web.config to use the same machine key as the old server, tokens were still not being decrypted.
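
For completeness, pinning the machine key is a matter of a machineKey element in web.config; the key values below are placeholders, so generate your own (for instance via the IIS manager):

<system.web>
  <!-- Placeholder keys: generate real ones and keep them identical across servers. -->
  <machineKey validationKey="...your validation key..."
              decryptionKey="...your decryption key..."
              validation="HMACSHA256" decryption="AES" />
</system.web>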

After a lot of searching, it turned out that Microsoft.Owin 3.0.1 will not decrypt tokens created by Microsoft.Owin 3.0.0. As soon as we downgraded all our Microsoft.Owin packages back to version 3.0.0, it worked again.

To make a long story short:

Make sure both the machine key and the Microsoft.Owin versions stay the same if you want your tokens to keep working after an update of your software. Otherwise you find out the hard way why you should probably have used your own token encryption/decryption scheme in the first place. Our next order of business is finding a way to update our Microsoft.Owin version in the future without breaking our current users' device registrations.

NuGet package UI always indicates an update for some packages.

Or, why you don’t get a nice reassuring green checkmark after an update of a package.

I recently noticed in a rather large solution with around 70 installed NuGet packages (don't ask) that updating some packages did not result in a green checkmark. When you open the update screen again, it indicates once more that there is an update for that package, yet when you try to update it will not let you select any projects, since it thinks (knows) all your solution projects already have the update.

I recreated the situation in a simple solution with two packages, rigging the Json.NET package to display the above behaviour. When I updated both packages, one got the green checkmark and the other didn't.

This is usually caused by a rogue copy of an old version of the package still sitting in the packages folder under your solution folder.

In my case there was a Newtonsoft.Json.6.0.2 folder sitting next to the newer installation. There are probably many reasons why this can happen (for instance, the EntityFramework package asks for a restart of Visual Studio, and if you don't do that and just continue working on your solution, it might leave the old version behind). The solution is simple: just delete the old folder, as shown below.
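
In PowerShell terms the fix is nothing more than the following (the path comes from this example; substitute your own stale package folder):

# Delete the stale package folder left behind under the solution's packages directory.
Remove-Item -Recurse -Force '.\packages\Newtonsoft.Json.6.0.2'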

Afterwards, the package won't show up in the list of packages that need updates anymore.

Please be aware that if you have a project under the solution root folder that is not in your solution but does contain NuGet package references, it can still reference the folder you just deleted (and promptly restore it if you open the project).