.Net Core – Creating Docker Containers in Azure DevOps Using Private NuGet Feeds

I recently ran into an issue where some of our .Net Core apps have private NuGet feed dependencies on both external and internal libraries. When you build apps in Azure DevOps, access to private feeds stored in Azure DevOps is authorized by default: a bearer token is available throughout the life of the pipeline at run time. If you want to explore the bearer token further, it's discussed in depth here: Predefined variables in Azure DevOps

Once a docker image has been pulled, however, building an app happens within the image's own context, where that pipeline bearer token is not available. To fix the issue, Microsoft has developed the Azure Artifacts Credential Provider, which allows users to set the security context at run time via dotnet.exe or nuget.exe. The creds provider (shortened) is fully documented here.

Essentially, the steps involved are:

  1. Installing Credential Provider inside the docker container
  2. Setting the credentials at run time via an environment variable (VSS_NUGET_EXTERNAL_FEED_ENDPOINTS) within the docker container
  3. Passing a personal access token (PAT) created in Azure DevOps during docker build invocation

Prior to this, I was merely compiling the code outside of the container (since private feeds are already authorized in the pipeline context) and then simply copying the compiled bits over into the container. This works, but… it's not a good pattern.

Installing Credential Provider inside the docker container

For this example, we use a nuget.config file to specify all NuGet sources that host internal NuGet packages. The .Net CLI (dotnet.exe) supports passing source location endpoints during execution, so you don't necessarily need to have a nuget.config file:

dotnet build --source c:\packages\mypackages

However, for ease of development, I prefer using nuget.config to specify NuGet feed endpoints. This way, anyone who works on the same codebase doesn't have to keep passing NuGet source endpoints.
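
Here's a minimal nuget.config sketch; the internal feed name and URL are illustrative and should match the endpoint used in the credential provider examples that follow:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Internal feed; must match the endpoint in VSS_NUGET_EXTERNAL_FEED_ENDPOINTS -->
    <add key="InternalNugetFeed" value="https://MyADOInstance.pkgs.visualstudio.com/_packaging/InternalNugetFeed/nuget/v3/index.json" />
  </packageSources>
</configuration>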

Linux Containers: https://github.com/microsoft/artifacts-credprovider/blob/master/helpers/installcredprovider.sh

# Docker Build Arguments
# PAT is passed in via --build-arg at docker build time; without this
# declaration, the ${PAT} reference below would not resolve.
ARG PAT

RUN apt-get update && apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && dpkg-reconfigure --frontend=noninteractive locales && update-locale LANG=en_US.UTF-8

# Install the credential provider using the helper script linked above
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash

ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS {\"endpointCredentials\": [{\"endpoint\":\"https://MyADOInstance.pkgs.visualstudio.com/_packaging/InternalNugetFeed/nuget/v3/index.json\", \"username\":\"build\", \"password\":\"${PAT}\"}]}

Windows Containers: https://github.com/microsoft/artifacts-credprovider/blob/master/helpers/installcredprovider.ps1

# Docker Build Arguments
# PAT is passed in via --build-arg at docker build time
ARG PAT

# The credential provider layout is specified here: https://github.com/microsoft/artifacts-credprovider
RUN xcopy "Utilities\credsprovider" "%userprofile%\.nuget\plugins" /E /I

ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS {\"endpointCredentials\": [{\"endpoint\":\"https://MyADOInstance.pkgs.visualstudio.com/_packaging/InternalNugetFeed/nuget/v3/index.json\", \"username\":\"build\", \"password\":\"${PAT}\"}]}

I've intentionally written the docker file for Windows containers to xcopy bits from my source into the container's NuGet plugins directory. Why? All we're doing here is simply copying the bits over to the plugins directory of the user profile running the build instance. The end result is the same for Linux containers; the difference is that we can simply invoke the bash or PowerShell helper scripts within the container and let them execute at build time.
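
For example, instead of copying pre-downloaded bits, the Windows Dockerfile could run the PowerShell helper script (installcredprovider.ps1, linked above) directly; a sketch:

RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.ps1'))"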

Setting the credentials at run time via an environment variable within the docker container

Overview of environment variables used:

NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED: Controls whether or not the session token is saved to disk. If false, the Credential Provider will prompt for authentication every time.

VSS_NUGET_EXTERNAL_FEED_ENDPOINTS: JSON that contains an array of service endpoints, usernames, and access tokens used to authenticate the endpoints in nuget.config

${PAT}: an argument variable that is passed during docker build. This is your personal access token created in Azure DevOps.

{\"endpointCredentials\": [{\"endpoint\":\"https://MyADOInstance.pkgs.visualstudio.com/_packaging/InternalNugetFeed/nuget/v3/index.json\", \"username\":\"build\", \"password\":\"${PAT}\"}]}

The JSON above sets the authentication scheme for the NuGet feed endpoint specified in the nuget.config file. Ensure that the endpoint here matches the feed URL in nuget.config.
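
Unescaped and pretty-printed, the same value reads:

{
  "endpointCredentials": [
    {
      "endpoint": "https://MyADOInstance.pkgs.visualstudio.com/_packaging/InternalNugetFeed/nuget/v3/index.json",
      "username": "build",
      "password": "${PAT}"
    }
  ]
}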

Passing a personal access token (PAT) created in Azure DevOps during docker build invocation

For this post, I'm utilizing the newer Azure DevOps pipeline capabilities, in this case YAML pipelines. For more information, see my previous post: YAML Builds in Azure DevOps – A Continuous Integration Scenario.

There are lots of ways to implement secure tokens in Azure DevOps, and you really don't want to expose tokens in clear text as part of your YAML file 😊. The most simplistic approach is to declare a variable in your pipeline, encrypt it (mark it as secret), then use it within your pipeline. See the example below:

To pass the encrypted value during docker build:

docker build -f $(DockerFile) -t $(DockerImageEndpoint) $(DockerAppPath) --build-arg PAT=$(AzureDevOpsPAT)

$(AzureDevOpsPAT): the encrypted (secret) value declared at the pipeline level.
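
Put together in a YAML pipeline, the step might look like the sketch below; DockerFile, DockerImageEndpoint, and DockerAppPath are assumed to be pipeline variables, and AzureDevOpsPAT is declared as a secret variable in the pipeline:

steps:
- script: >
    docker build
    -f $(DockerFile)
    -t $(DockerImageEndpoint)
    $(DockerAppPath)
    --build-arg PAT=$(AzureDevOpsPAT)
  displayName: Build Docker image using the private feed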

A pipeline result would look something like this when implemented correctly:

Continuous Integration in VSTS using .Net Core (with Code Coverage), NUnit, SonarQube: Part 3: VSTS SonarQube Build Task

What is SonarQube? From SonarQube's website: "SonarQube provides the capability to not only show health of an application but also to highlight issues newly introduced. With a Quality Gate in place, you can fix the leak and therefore improve code quality systematically."

In short, it's a continuous inspection process, aimed at developers, that uses quality gates to set triggers and/or thresholds for maintaining code quality.

Here's a high-level view of what SonarQube has to offer (an actual screenshot of an application that went through SonarQube's analysis):


Note that the instance of SonarQube I've used here is their SaaS-based offering, SonarCloud. I didn't want to go through the hassle of hosting my own instance of SonarQube, so I used the SaaS-based offering as a guideline instead. In my opinion, SaaS-based offerings are better options for medium to enterprise-size companies for multiple reasons (cost, support, maintenance, etc.).

To see a detailed description of what SonarQube has to offer: https://www.sonarqube.org/features/clean-code/

Personally, I love everything SonarQube has to offer. Note that SonarQube can also be self-hosted. If you want to host SonarQube within your IT shop, you can find step-by-step directions here: https://www.sonarqube.org/downloads/

Let’s go through setting up SonarQube in VSTS:

Step 1: Prepare analysis on SonarQube

NOTE: Make sure that this task comes before any application build task; it should be the first task. In my example, this task comes after the NuGet restore step, which shouldn't affect how the analysis works, since NuGet restore simply restores NuGet packages for the given .Net solution/project(s).

This is the most crucial step of the process: it is what sets all the properties at build time. The fields you need to enter here are the Project Key and Project Name. These values can be obtained through SonarQube's administration page or the landing page of your project in SonarQube.

One important field missing here is the Organization, which is needed to publish to SonarQube. As of writing this post, version 4.x of this task will fail unless you specifically add an additional property to set the organization. You set this by expanding "Advanced" on the task and typing:

sonar.organization=<Org Value>

Both the Org and Project Keys are also shown on the project landing page on SonarQube's site.


Step 2: Run Code Analysis

This step should come after a successful test task in your build. It gathers the results from the unit tests (including code coverage), analyzes them, and preps the proper files for publishing to SonarQube.


Step 3: Publish Quality Gate Result

This is the final step. It should come right after the Code Analysis task. No settings are done here since all settings have been properly set in the first step (Prepare analysis on SonarQube).


A successful build with SonarQube integration looks like this:


Continuous Integration in VSTS using .Net Core (with Code Coverage), NUnit, SonarQube: Part 2: VSTS Build Definition Setup – .Net Core and NUnit

If you haven't set up your .Net Core project(s) for code coverage instrumentation, see my previous post: Part 1: .Net Core Project Setup – Code Coverage.

That said, let's go through the settings for enabling code coverage in VSTS builds. The basic structure of a CI build definition is:

  1. Build the application
  2. Run Tests (Unit Tests with Code Coverage)
  3. Publish Artifacts

In this post, I'll skip over using NUnit as a test framework for .Net Core. For that, see my previous post: Using the NUnit Test Framework to validate deployments in VSTS Release Management


Note from the image above that we've disabled the dotnet test task, because code coverage is currently not supported by the dotnet.exe CLI (as of the writing of this post).

However, vstest.console.exe does support code coverage. This is the task we've enabled to run our unit tests with code coverage instrumentation, including the tests written with NUnit. Vstest.console.exe automatically detects NUnit tests, since the NuGet package restore includes the NUnit test adapters.

The important note here is to ensure that you properly set up the vstest task in the build definition and its settings (an equivalent command-line invocation is sketched after this list):

  • Ensure that the code coverage option is enabled
  • Ensure that you are pointing to the .runsettings file for further code coverage settings
  • Install the NUnit adapters as part of your test project
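
For reference, the equivalent vstest.console.exe invocation would look something like this (a sketch; the assembly and .runsettings file names are placeholders):

vstest.console.exe MyApp.Tests.dll /EnableCodeCoverage /Settings:CodeCoverage.runsettings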

Running the build yields the following result.


At this point, you can download the code coverage file and open the result in Visual Studio for further inspection.

You may have noticed that in my build, I also have tasks for SonarQube. What is SonarQube and why use it? For this, see part 3 (final) post for this series: Part 3: VSTS SonarQube Build Task

Continuous Integration in VSTS using .Net Core (with Code Coverage), NUnit, SonarQube: Part 1: .Net Core Project Setup – Code Coverage

There are two ways to discover and execute unit tests using Microsoft-developed test harnesses:

  • Vstest.console.exe – the command-line runner used to execute tests, embedded within the Visual Studio IDE
  • Dotnet.exe – the command-line interface (CLI) specific to .Net Core projects

Vstest.console.exe is documented here: https://msdn.microsoft.com/en-us/library/jj155796.aspx

For .Net Core Projects: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test?tabs=netcore2x

The primary difference between the two is that vstest.console.exe can execute tests developed for both the .Net Framework and .Net Core, while dotnet.exe is specific to .Net Core.

An example of executing tests for the same assembly domain (test project) would be:


vstest.console.exe <testassembly>.dll (Pointer to the compiled Assembly)


dotnet test <testassemblyproject>.csproj (Pointer to the actual .Net Core Test Project)

The issue with dotnet.exe (CLI) is that code coverage doesn't work with it. In order for code coverage to work on .Net Core projects, you need to:

  1. Edit the .Net Core projects you want to instrument for code coverage
  2. Use vstest.console.exe and supply /EnableCodeCoverage switch

Edit the .Net Core project/s for code coverage instrumentation

When you run unit tests in Visual Studio and select the option to "Analyze Code Coverage for Selected Tests" (as seen below), by default, code coverage results will not be captured.


As of the writing of this post, the fix is to modify the project file and set DebugType to Full in the PropertyGroup section of the project file.
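
The relevant PropertyGroup looks like this (a sketch; the target framework shown is illustrative):

<PropertyGroup>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <!-- Full PDBs are required for code coverage instrumentation -->
  <DebugType>Full</DebugType>
</PropertyGroup>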


Save the project file and run the unit tests again by selecting the option "Analyze Code Coverage for Selected Tests", and you'll see results similar to those shown below.


Use vstest.console.exe and supply /EnableCodeCoverage switch

As you saw within Visual Studio, running tests with code coverage can be triggered via a simple click on the context menu. If you want to execute your unit tests with code coverage from a command line, you invoke the /EnableCodeCoverage switch:

vstest.console.exe <testassembly>.dll /EnableCodeCoverage

The result is an export of the code coverage results to a .coverage file. You can then open the file within Visual Studio to inspect the results. See the screenshot below:


Setting up your .Net Core projects appropriately using the preceding steps should give you proper code coverage numbers. More importantly, this allows you to seamlessly integrate with various build systems. Additionally, here are some tips and practices around code coverage:

Use a test .runsettings

Use a test .runsettings file to exclude assemblies you don't want to instrument. The .runsettings file controls how tests are executed by vstest.console.exe. For more information, see the following: Configure unit tests by using a .runsettings file

Here's an example of how you would exclude pieces of code from code coverage measurement:

<DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
  <Configuration>
    <CodeCoverage>
      <ModulePaths>
        <!-- Include all loaded .dll assemblies -->
        <Include>
          <ModulePath>.*\.dll$</ModulePath>
        </Include>
        <!-- Exclude all loaded .dll assemblies with the word moq (essentially regex) -->
        <Exclude>
          <ModulePath>.*moq.*</ModulePath>
        </Exclude>
      </ModulePaths>
      <!-- We recommend you do not change the following values: -->
      <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
      <AllowLowIntegrityProcesses>True</AllowLowIntegrityProcesses>
      <CollectFromChildProcesses>True</CollectFromChildProcesses>
      <CollectAspDotNet>False</CollectAspDotNet>
    </CodeCoverage>
  </Configuration>
</DataCollector>

To use the .runsettings file in Visual Studio, click Test, Test Settings, Select Test Settings File (see the image below).


[ExcludeFromCodeCoverage] attribute

Use the [ExcludeFromCodeCoverage] attribute wherever appropriate. When a section of code is decorated with this attribute, that section of code will be skipped for code coverage. Why? In certain cases, you don't want code to be measured for coverage. An example would be entity objects with default property accessors (get/set) that carry no functionality. If there is no logic in either the get and/or set accessor, why measure it?
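
For example (a sketch; the entity class is hypothetical):

using System.Diagnostics.CodeAnalysis;

// Plain entity: no logic in the accessors, so skip it for coverage.
[ExcludeFromCodeCoverage]
public class Employee
{
    public string EmployeeId { get; set; }
    public string Name { get; set; }
}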

This ends the first part of this series. In the next part (VSTS Build Definition Setup – .Net Core and NUnit), we will hook up the test tasks in VSTS to include code coverage reporting.

Validating and Unit Testing Web API (2) Route Attribute Parameters

Personally, I like to isolate business rules and/or validations outside of MVC controllers (in this case, API controllers). I use ActionFilterAttribute to define my checks on parameters being passed in my MVC Web API routes.

Here’s an example of a WebAPI route with parameter binding:

// GET: /1/employees/AA0000111
[ValidateEmployeeId]
public IHttpActionResult GetUser(string employeeid, int WebServiceVersion = 1)
{
    // Do something with the WebServiceVersion value, like logging.
    var user = _emprepository.GetUser(employeeid);
    return Content(HttpStatusCode.OK, user);
}

I want to isolate validating employeeid outside of my controller for a few reasons:

1) Isolation – You may have multiple validation cases for your parameters. In this case, employeeid can be permutated in different ways, especially because it is a string. Other developers can easily get lost on what the action controller is actually doing if it contains long code covering all the various validations.

2) Good development practice – I prefer to see nice, clean code and a separation between what my controllers do and the business rules.

3) Testing – I can test my controllers in isolation from the business rules. This is really the motivating factor for me.

That said, let's take a look at the ActionFilterAttribute further. For more information, see the documentation for each version.

(NOTE: There are 2 versions of ActionFilterAttribute: System.Web.Http.Filters for Web API, and System.Web.Mvc for MVC.)

When unit testing, make sure you’re writing the correct tests for your filter. In this case, I’m using the namespace: System.Web.Http.Filters

public class ValidateEmployeeIdAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var employeeid = actionContext.ActionArguments["employeeid"].ToString();
        if (string.IsNullOrEmpty(employeeid) || employeeid.ToLower() == "<somecheck>" ||
            employeeid.ToLower() == "<replace and use other validation such as regex>")
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest,
                $"Input parameter error, employeeId: {employeeid} - not specified, null or bad format");
        }
    }
}

Note in the preceding controller code that I decorated the Web API action method with [ValidateEmployeeId]. This instructs the controller to use the custom ActionFilterAttribute that I created above.

Testing your custom validation via unit test(s):

For simplicity, I used MSTest, which comes with Visual Studio.

[TestMethod, TestCategory("UserController")]
public void Validate_EmpId_ActionFilterAttribute()
{
    var mockactioncontext = new HttpActionContext
    {
        ControllerContext = new HttpControllerContext
        {
            Request = new HttpRequestMessage()
        },
        ActionArguments = { { "employeeid", "<somecheck>" } }
    };

    mockactioncontext.ControllerContext.Configuration = new HttpConfiguration();
    mockactioncontext.ControllerContext.Configuration.Formatters.Add(new JsonMediaTypeFormatter());

    // Execute the filter against the mocked action context.
    var filter = new ValidateEmployeeIdAttribute();
    filter.OnActionExecuting(mockactioncontext);

    Assert.IsTrue(mockactioncontext.Response.StatusCode == HttpStatusCode.BadRequest);
}

At this point, you have a clean separation between the code that tests your validations and the code that tests your controller.

Using Fiddler, I can see that whenever I submit a request with an invalid value for employeeid, I get the correct response:


Using XML Data Transform (XDT) to automatically configure app.config during Nuget Package Install

This should be fairly straightforward, as mentioned on nuget.org's site, right? Well, not quite. I've spent some time reading through the blog posts, and it's not quite straightforward. Hopefully this post is the simplified version. In my case, the scenario is simply to add entries in the appSettings node within the app.config file. Nuget.org's site has the following docs:

Configuration File and Source Code Transformations


How to use XDT in NuGet – Examples and Facts


The steps below will hopefully guide you through getting your app.config (or web.config) files modified during and after installing your NuGet packages. Afterwards, you can look at all the different XDT transformation options in the following doc:

Web.config Transformation Syntax for Web Project Deployment Using Visual Studio


Step 1: Create both app.config.install.xdt and app.config.uninstall.xdt

From Nuget site: “Starting with NuGet 2.6, XML-Document-Transform (XDT) is supported to transform XML files inside a project. The XDT syntax can be utilized in the .install.xdt and .uninstall.xdt file(s) under the package’s Content folder, which will be applied during package installation and uninstallation time, respectively.”

The location of these files doesn't quite matter, though if they are located in the same directory as the assemblies for your NuGet package, even better. You'll need to reference these 2 files as "content" targets in the .nuspec file. The .nuspec file is the blueprint for creating your NuGet package.


<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings xdt:Transform="InsertIfMissing">
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key1" value="Value1" xdt:Transform="Insert" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" value="Value2" xdt:Transform="Insert" />
  </appSettings>
</configuration>

Let's break this down. There are 2 kinds of transforms on the appSettings node in this XML file: one ensures the appSettings node exists (InsertIfMissing), and the second removes the key/value pair matching each key and then adds it again. Why this 2-step process? It ensures that you will only ever have one entry per key. (You could probably get away with using InsertIfMissing on the individual keys as well.)


<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings xdt:Transform="InsertIfMissing">
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>

The uninstall file is pretty straightforward: remove the app setting keys if they exist. Note that in this case I'm not deleting the appSettings node itself; leaving it in your config file will not cause any issues.

Step 2: Modify your nuspec file to include both the .install.xdt and .uninstall.xdt file(s) as content folders.

The .nuspec file is the core blueprint for generating your NuGet package. Here's an example of a .nuspec file; for more information, go here: http://docs.nuget.org/Create/Nuspec-Reference

In this example, you'll need to reference both the .install.xdt and .uninstall.xdt file(s) as content targets (the id and version values below are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Package1</id>
    <version>1.0.0</version>
    <title>Nuget Package 1</title>
    <authors>QE Dev</authors>
    <owners>Don Tan</owners>
    <description>Package 1 Testing</description>
    <summary>Application Config change</summary>
    <releaseNotes>
      - Support for Application Config change
    </releaseNotes>
    <copyright>Copy Right</copyright>
    <dependencies>
      <dependency id="Microsoft.ApplicationInsights" version="2.1.0" />
    </dependencies>
    <references>
      <reference file="Package1.dll" />
    </references>
  </metadata>
  <files>
    <file src="Package1.dll" target="lib\net45\Package1.dll" />
    <!--Add Section to Uninstall and Re-install Application.Config files-->
    <file src="app.config.install.xdt" target="content" />
    <file src="app.config.uninstall.xdt" target="content" />
  </files>
</package>

Step 3: Test the generated NuGet package and verify that your application config (app.config) settings have been modified
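
If everything is wired up correctly, the consuming project's app.config should contain the inserted keys after package install:

<configuration>
  <appSettings>
    <add key="Key1" value="Value1" />
    <add key="Key2" value="Value2" />
  </appSettings>
</configuration>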

Custom Build Triggers in VSTS

In my previous posts, I've shown how to use VSTS (formerly known as VSO) to trigger continuous testing using builds and release management. I was able to utilize the new reporting capabilities in build, particularly test reports. I created reports that show pass/fail trends for tests in my build definitions.


There are still limitations (or, in this case, features I wish Microsoft would consider), such as customizing test reports from builds and showing pass/fail trends beyond the last 10 builds. My biggest disappointment thus far is not being able to schedule builds (with tests) using a recurring pattern. As of writing this post, you can schedule builds in VSTS; however, you have to manually keep adding scheduled times.


Imagine a scenario where you need to run a build every hour (or half hour): you would have to manually add a new time for each run, in this case 24 of them. Very inconvenient.

Fortunately, VSTS has public APIs that allow us to queue and trigger builds. With the public APIs, I was able to write a very simple console app and use Windows' built-in Task Scheduler. One would say: why not create a Windows service? Yes, that's an option, but I would counter: why develop a Windows service, further complicating the process, when Windows has a Task Scheduler that's been tested and used far more broadly?

Below is the code:

NOTE: You need to reference the following NuGet packages:

  • Microsoft.TeamFoundationServer.ExtendedClient
  • Microsoft.TeamFoundationServer.Client
  • Microsoft.VisualStudio.Services.Client
  • Microsoft.VisualStudio.Services.InteractiveClient
using System;
using System.Configuration;
using System.Linq;
using Microsoft.TeamFoundation.Build.WebApi;
using Microsoft.TeamFoundation.Core.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;

static class Program
{
    static void Main(string[] args)
    {
        var buildoutputmodel = SetupBuildOutputModel();
        var vssconnection = new VssConnection(
            new Uri(buildoutputmodel.VsoUrl),
            new VssBasicCredential(buildoutputmodel.UserName, buildoutputmodel.Password));
        var buildHttpClient = vssconnection.GetClient<BuildHttpClient>();

        // Below is my implementation of triggering multiple builds. I simply used the
        // app.config to specify the build IDs, split each entry and validate.
        // (The "buildids" app setting key and the ValidateBuildId helper are
        // assumptions here; both live elsewhere in the project.)
        ConfigurationManager.AppSettings["buildids"].Split(',').ToList().ForEach(
            buildid =>
            {
                string stringoutput;
                try
                {
                    var id = buildid.ValidateBuildId();
                    DefinitionReference definitionReference = new DefinitionReference
                    {
                        Id = id,
                        Project = new TeamProjectReference
                        {
                            Name = buildoutputmodel.TeamProjectName
                        }
                    };
                    var build = new Build { Definition = definitionReference };
                    // This is where you trigger the build
                    var buildnumber = buildHttpClient.QueueBuildAsync(build).Result.BuildNumber;
                    stringoutput = $"Build Triggered... \nBuild Number: {buildnumber} \nBuild Definition ID: {definitionReference.Id} \nTeam Project: {definitionReference.Project.Name}\n";
                }
                catch (Exception ex)
                {
                    stringoutput = $"Exception Occurred: \n{ex.Message} \n{ex.InnerException}\n";
                }
                Console.WriteLine(stringoutput);
            });
    }

    private static BuildOutputModel SetupBuildOutputModel()
    {
        return new BuildOutputModel
        {
            UserName = ConfigurationManager.AppSettings["username"],
            Password = ConfigurationManager.AppSettings["password"],
            VsoUrl = ConfigurationManager.AppSettings["vsourl"],
            TeamProjectName = ConfigurationManager.AppSettings["teamproject"],
            BuilDefinitionName = ConfigurationManager.AppSettings["builddefinition"],
            GitRepo = ConfigurationManager.AppSettings["gitrepo"]
        };
    }
}

Once you compile the code (.exe), simply create a scheduled task using Windows’ Task Scheduler:
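
For example, an hourly trigger can be registered from an elevated command prompt (a sketch; the task name and executable path are placeholders):

schtasks /Create /SC HOURLY /TN "TriggerVSTSBuilds" /TR "C:\Tools\TriggerVSTSBuilds.exe"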


Then the execution: