Using XML Data Transform (XDT) to automatically configure app.config during NuGet Package Install

This should be fairly straightforward as described on NuGet's site, right? Well, not quite. I've spent some time reading through the blog posts and it's not quite straightforward. Hopefully this post is the simplified version. In my case, the scenario is simply to add entries under the appSettings node within the app.config file. NuGet's site has the following docs:

Configuration File and Source Code Transformations

How to use XDT in NuGet – Examples and Facts

The steps below will hopefully guide you through the initial steps to get your app.config (or web.config) files modified during and after installing your NuGet packages. Afterwards, you can look at all the different XDT transformation options in the following doc:

Web.config Transformation Syntax for Web Project Deployment Using Visual Studio

Step 1: Create both app.config.install.xdt and app.config.uninstall.xdt

From Nuget site: “Starting with NuGet 2.6, XML-Document-Transform (XDT) is supported to transform XML files inside a project. The XDT syntax can be utilized in the .install.xdt and .uninstall.xdt file(s) under the package’s Content folder, which will be applied during package installation and uninstallation time, respectively.”

The location of these files doesn't matter much. If these files are located in the same directory as the assemblies for your NuGet package, even better. You'll need to reference these two files as "content" folder locations in the .nuspec file. The .nuspec file is the blueprint for creating your NuGet package.


<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
        <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
        <add key="Key1" value="Value1" xdt:Transform="Insert" />
        <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
        <add key="Key2" value="Value2" xdt:Transform="Insert" />
    </appSettings>
</configuration>

Let's break this down. There are two kinds of entries under the appSettings node in this XML file. First, the appSettings node itself is created if it doesn't exist (InsertIfMissing); second, for each key, any existing key/value pair matching the key is removed and then inserted again. Why this two-step process? It ensures that you will only ever have one entry per key. However, you could probably get away with using InsertIfMissing on the individual keys as well.
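To make the effect concrete, after the install transform runs against a project whose app.config had an empty (or missing) appSettings node, the result would look something like this (Key1/Key2 and their values are the placeholder names from the transform above):

```xml
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <!-- Inserted by app.config.install.xdt during package install -->
    <add key="Key1" value="Value1" />
    <add key="Key2" value="Value2" />
  </appSettings>
</configuration>
```

If the keys already existed with stale values, the Remove/Insert pair guarantees they end up with exactly one entry each, holding the package's values.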


<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
        <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
        <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    </appSettings>
</configuration>
The uninstall file is pretty straightforward: remove the app setting keys if they exist. Note that in this case I'm not deleting the appSettings node itself; leaving the appSettings node in your config file will not cause any issues.

Step 2: Modify your nuspec file to include both the .install.xdt and .uninstall.xdt file(s) as content folders.

The .nuspec file is the core blueprint for generating your NuGet package. Here's an example of a .nuspec file. For more information, go here:

In this example, you'll need to reference both the .install.xdt and .uninstall.xdt file(s) as target content folders:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <title>Nuget Package 1</title>
    <authors>QE Dev</authors>
    <owners>Don Tan</owners>
    <description>Package 1 Testing</description>
    <summary>Application Config change</summary>
    <releaseNotes>
      - Support for Application Config change
    </releaseNotes>
    <copyright>Copy Right</copyright>
    <dependencies>
      <dependency id="Microsoft.ApplicationInsights" version="2.1.0" />
    </dependencies>
    <references>
      <reference file="Package1.dll" />
    </references>
  </metadata>
  <files>
    <file src="Package1.dll" target="lib\net45\Package1.dll" />
    <!--Add Section to Uninstall and Re-install Application.Config files-->
    <file src="app.config.install.xdt" target="content" />
    <file src="app.config.uninstall.xdt" target="content" />
  </files>
</package>

Step 3: Test the generated nuget package and verify if your application config (app.config) settings have been modified


Custom Build Triggers in VSTS

In my previous posts, I've shown how to use VSTS (formerly known as VSO) to trigger continuous testing using builds and release management. I was able to utilize the new reporting capabilities in Build, particularly test reports. I created reports that show pass/fail trends for tests in my build definitions.


There are still limitations (or in this case, features I wish Microsoft would consider, such as customizing test reports from builds and showing pass/fail trends past 10 builds). My biggest disappointment thus far is not being able to schedule builds (with tests) using a recurring pattern. As of writing this post, you can schedule builds in VSTS; however, you have to manually keep adding scheduled times.


Imagine a scenario where you need to run a build every hour (or half hour): you have to manually add a new time for every hour, in this case 24 times. Very inconvenient.

Fortunately, VSTS has public APIs that allow us to access build execution and triggering. With the public APIs, I was able to write a very simple console app and use Windows' built-in Task Scheduler. One would say, why not create a Windows service? Yes, that's an option, but I would counter: "Why develop a Windows service, further complicating the process, when Windows has a Task Scheduler that's been tested and used more broadly?"

Below is the code:

NOTE: You need to refer to the following Nuget Packages:

  • Microsoft.TeamFoundationServer.ExtendedClient
  • Microsoft.TeamFoundationServer.Client
  • Microsoft.VisualStudio.Services.Client
  • Microsoft.VisualStudio.Services.InteractiveClient
static class Program
{
    static void Main(string[] args)
    {
        var buildoutputmodel = SetupBuildOutputModel();
        var vssconnection = new VssConnection(
            new Uri(buildoutputmodel.VsoUrl),
            new VssBasicCredential(buildoutputmodel.UserName, buildoutputmodel.Password));
        var buildHttpClient = vssconnection.GetClient<BuildHttpClient>();
        //Below is my implementation of triggering multiple builds. I simply used the app.config
        //to specify the build IDs, split each entry and validate.
        buildoutputmodel.BuilDefinitionName.Split(',').ToList().ForEach(
            buildid =>
            {
                string stringoutput;
                try
                {
                    var id = buildid.ValidateBuildId();
                    DefinitionReference definitionReference = new DefinitionReference
                    {
                        Id = id,
                        Project = new TeamProjectReference
                        {
                            Name = buildoutputmodel.TeamProjectName
                        }
                    };
                    var build = new Build { Definition = definitionReference };
                    //This is where you trigger the build
                    var buildnumber = buildHttpClient.QueueBuildAsync(build,
                        buildoutputmodel.TeamProjectName).Result.BuildNumber;
                    stringoutput = $"Build Triggered... \nBuild Number: {buildnumber} \nBuild Definition ID: {definitionReference.Id} \nTeam Project: {definitionReference.Project.Name}\n";
                }
                catch (Exception ex)
                {
                    stringoutput = $"Exception Occurred: \n{ex.Message} \n{ex.InnerException}\n";
                }
                Console.WriteLine(stringoutput);
            });
    }

    private static BuildOutputModel SetupBuildOutputModel()
    {
        return new BuildOutputModel
        {
            UserName = ConfigurationManager.AppSettings["username"],
            Password = ConfigurationManager.AppSettings["password"],
            VsoUrl = ConfigurationManager.AppSettings["vsourl"],
            TeamProjectName = ConfigurationManager.AppSettings["teamproject"],
            BuilDefinitionName = ConfigurationManager.AppSettings["builddefinition"],
            GitRepo = ConfigurationManager.AppSettings["gitrepo"]
        };
    }
}
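For reference, the app.config the console app reads from would carry these settings. The key names come from the code above; the values here are placeholders you would replace with your own:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- VSTS account credentials (consider a personal access token over a raw password) -->
    <add key="username" value="someuser@example.com" />
    <add key="password" value="your-password-or-PAT" />
    <!-- Your VSTS account URL and team project -->
    <add key="vsourl" value="https://youraccount.visualstudio.com" />
    <add key="teamproject" value="YourTeamProject" />
    <!-- Comma-separated build definition IDs to trigger -->
    <add key="builddefinition" value="12,34" />
    <add key="gitrepo" value="YourRepoName" />
  </appSettings>
</configuration>
```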

Once you compile the code (.exe), simply create a scheduled task using Windows’ Task Scheduler:


Then the execution:


Accelerate Release Cycle: Continuous Delivery Automation…

How can you apply automation to accelerate release cycles and improve quality, safety and governance?

Last Tuesday I participated in an online panel on the subject of CD Automation, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps. Watch a recording of the panel:

Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.

Below are a few insights from my contribution to the panel:

Automation != Orchestration

Automation. There’s really a big opportunity, if I want to start talking about automation, people think about many things that are in automation, but in this context, for Continuous Delivery, it is the process of taking your application, getting it deployed faster, with the right set of quality gates, with the right set of people who will help you automate that process seamlessly. So in turn, it’s more about making sure your application is being deployed more frequently, with the right quality gates in it.

Fundamentally, the process, what you look at is this: you take your application and you build it, then you test it, make sure that the build is good. When you build it, then you deploy to a set of environments – what you think your test or your dev or your staging, your environments, then you test it again. So it’s that balance of how you build your application and test it.

And being a test architect, I’m obviously an advocate of quality, and I think there’s three things people have to figure out when they start thinking about Continuous Delivery: one, the velocity of things – how fast you deploy, two, the quality side of things – what set of quality gates you need to put in place when you build and deploy an application, and the last thing you have to figure out is after you have velocity, and you have your quality, what is the cost of doing things? What set of tool sets you have to use to make all of these things work together?

We have many teams here, and there could be many things and many people, but in terms of our orchestration, what you think about is, what is the current process that prevents us from deploying and what are those manual things that we can avoid. It’s a blend of how long does it take for us to build and deploy a current application, what are the manual processes involved in doing that? Now let’s try to dissect each one of them, and make things easier, because let’s be honest, if it takes you a long time to build your application, that’s going to be a big bottleneck for you to have this Continuous Delivery pipeline.

And on top of that, is, being in the airline industry, one other thing we have to think about when we start thinking about this CD process is the idea of having the right set of guidelines for it, or standards behind it. So if it takes your application, let’s say, minutes to build, hopefully not an hour, just a couple of minutes. Are there any current tool sets, or are there any current technologies we can utilize to make that process much faster, avoid the manual effort of doing it? I’ll give you a perfect example:

Before, I think it was pretty typical for us to deploy application to utilize scripts, some PowerShell scripts, and some batch files, ” x-copy this, copy that…” And certainly now in the world of technologies you really don’t have to do that because we have tools, there’s software that allows you to build and take those bits and automatically deploy them. You just need to utilize the right task. So those are the small things in terms of orchestration that you need to start thinking about. Then on top of that, OK, then what would I need for us to make sure that this current application when we deploy it has the right set of quality gates behind it?

So, looking at those different processes, orchestration, flow, helps a lot. I would strongly suggest taking a look at what your application does and how long it takes to build, and if there's any way you can minimize the amount of time spent building and doing it manually – automate it. There are many tools out there that can help you do it. (Too many…)

We’re using the same thing. You actually have the same orchestration processes, the only thing you have to think about differently is – don’t create a process where it’s dependent on each environment, think about your application when you deploy it, it targets any environment. You can be deploying, you can be using the same processes on a test environment, or a QA environment, but you shouldn’t be developing a process that is different from one environment to the other. Your process orchestration should target any environment, so anyone can take that same process, spin up a new environment, and follow through the same processes over again, utilizing the same quality gates in between.

Also, when you start talking about testing, there's the traditional way of testing applications where you do a lot of the manual stuff, and sometimes you have a hundred, two hundred, three hundred test cases. Sometimes you have to think smartly when you start doing CD processes. For example, which tests make sense for deploying this build, and do they cover the right set of test families? A good way to tell is this: if you have analytics in your organization, think about which of those scenarios are being utilized more by your customers.

Sometimes you create test cases and automate them just for the sake of automating, but what value do you get out from those automated test cases, if some of those test cases are not even used a lot? So optimize your test, take a look at the data, what you have for your app, and try to dissect that, and think about ways to make your tests smarter, and make that part of your process.

Challenges and Checkpoints of CD Pipelines

On our main website team – it's a great group of people, by the way – the way we approach software is: what is the primary reason we're doing CD? It's because we're doing agile development, taking one step at a time, and based on the experience we have in continuing to observe and evaluate, we have a process where you make things easier for people to integrate with each other.

For example, the tools – pick a tool that can easily be used by people, and it works well for the development, the testing and the PM team. Because, one of the challenges is, for example, (and before you even talk about the tools, I want to go one step back), it’s understanding the culture. I think that is the big thing there, to understand the culture change, what people need to do and embrace that, changes are inevitable.

I'll give you a good example: we're all used to doing waterfall. Back in the old days, where you hand it off to someone, and hand it off to someone… Now when you start something – everyone's engaged at that point of view, at that point of time. When I use the set of tools, there are things you have to sacrifice in between, that you're not accustomed to. For example, for testing, traditionally (you probably still see it from time to time), some test teams will still create test plans and the usual views; when you try to do CD, you try to shave some of that time away. So, for doing web automation, tackle it at a good point, try to do less – there is a great article out there that talks about the testing pyramid, where at the very bottom of the base you have the coverage of unit testing, in the middle layer you have services, and the top layer would be UX automation. Less emphasis on UX automation, more thorough tests at the unit test level – that way you get more coverage.

So my point is there are certain teams that use different tool sets for testing, while dev and PM use a different set of tools for managing work, so when the time comes to integrate and do this agile CD approach, what happens is no one has time to learn these tools. A developer will not have time to understand what tool that tester is using to automate stuff, and likewise for PM and other folks, and DevOps, and infrastructure. So think about a tool that will work well for the team, as that is one of the things that made it challenging for us, and even now we're still evaluating some of these tools. There are many tools, and given that we're in a world with a lot of open source, you can certainly take advantage of that, and take a look at what people are using out there: tools, culture, the process. Also think about, when you're doing Continuous Delivery, sometimes you think about moving fast – velocity, orchestration – but think about the small things that really help improve your app: security, performance testing. So my suggestion is, one of the key challenges there is: make sure you have a good monitoring system in place when you start doing your CD process. Continue to monitor what your application is doing, even though you deploy it to a test environment, because I guarantee you there are certain things that will get exposed when you actually have the right monitoring system in place.

Take a startup company, where you've got the walk, crawl, run phases – the walk phase is like this: "Let's do it!" Let's get the right tool sets, let's get it working, let's go with it. Yes, you're delivering faster, you're learning caveats, you're learning some of these points where, "Oh, we need to go back…" but the issue is, as you move forward, you start seeing more deficiencies, and a lot more people not standardizing things. People do things differently; in fact, since we're talking about Continuous Delivery, I should say people are delivering software differently from one team to the other. It becomes very challenging to standardize some things, not only in the tool sets but simply in the process as well.

Because one person can do the flow differently, then you have a different set of quality gates, then you have a different set of tools. We’re still learning, but at the end of the day – let’s be honest, you will not satisfy everybody. You’re not going to satisfy every team out there, but at least try to get a consensus of small things, standard things that worked well with everybody.

We still have teams that use different tool sets, but it's not like one team is different from the other – now we have a big organization using the same tools, and that's a good start, and we have another organization using the same tools, that's a good start, then we have another organization using the same tool sets – now we have a lot of people talking with each other, just take it to the next level. Because the other thing to consider is, while you're standardizing things, the big benefit of doing that as well is cost: you're able to save the company money by standardizing certain things that worked well for you. So now you're hitting two birds with one stone – obviously the next thing is, people will get more enticed to collaborate, share best practices, and provide governance around certain things that worked well for us.

Being in quality, quality is my forte, I can talk about web automation the whole day – unit testing, that kind of stuff, but certain people need to realize that when you do test automation there are certain things that you use to automate things that may not work with people. But at least try to establish a standard that will work well in terms of governance of the process, what you do about it, so making things easier for tracking and auditing is the other thing you have to consider.

Tools: Tips and Tricks

I'll just make this straight – if I had one tip when talking about Continuous Delivery: we know we want to be Agile, we want to release faster with quality. So have everybody be accountable for quality. If you think about quality from the get-go, when you start doing Continuous Delivery, you'll start expanding all these tool sets, you'll be able to define a process, you'll be able to define your orchestration. In fact, it will even help determine what tools you will need to use, if you start thinking about quality at the get-go, as part of your process – it's mandatory.

I can't stress this enough: when you talk about Continuous Delivery it's all about release, release, release… features, features, features… and if quality is an afterthought, at the end you'll spend more time fixing defects and bugs.

And even tools – sometimes tools don't necessarily have the right set of features to help you deal with quality. So keep everybody accountable for quality, because this is very important when you start thinking about Continuous Delivery out to your production servers. You'll be able to write down the list of things when you start thinking about quality: monitoring, automation, tool sets, orchestration, security, performance. If I had one big tip, it's to have everybody be accountable for quality; it's a practice shared by everybody, not just the test team.

Enabling Targeted Environment Testing during Continuous Delivery (Release Management) in VSTS

In my previous post, "Continuous Delivery using VSO Release Manager with Selenium Automated Tests on Azure Web Apps (PaaS)", I walked through the steps on how to enable continuous delivery by releasing builds to multiple environments. One caveat that I didn't focus on is taking the same tests and running them against those target environments.

In this post, I’ll walk you through the steps on enabling or running the same tests targeting those environments that you have included as part of your continuous delivery pipeline. You’re basically taking the same tests however passing in different parameters such as the target URL and/or browser to run.

At a high level, these are the changes. The solution only requires:

· Create runsettings file and add your parameters.

· Modify your code to call the parameters from the .runsettings file

· Changes to the “Test Steps” in VSTS release or build

Create runsettings file

In Visual Studio, you have the ability to generate a "testsettings" file, but not a "runsettings" file. What's the difference between the two formats? Both can be used as a reference for executing tests. However, the "runsettings" file is more optimized for running unit tests. In this case, our tests are unit tests.

For more information see: Configure unit tests by using a .runsettings file


1) Create an xml file with the extension .runsettings (e.g. ContinuousDeliveryTests.runsettings)

2) Replace the contents of the file with the following:

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="EnvURL" value="" />
    <Parameter name="Browser" value="headless" />
  </TestRunParameters>
  <!--<RunConfiguration> We will use this later so we can run multiple threads for tests
  </RunConfiguration>-->
</RunSettings>

3) Save the file to your source control repository. In my case GIT in VSTS.

Modify your code to call the params from the .runsettings file

In your unit tests where you specify the URL, apply the following changes:

var envUrl = TestContext.Properties["EnvURL"].ToString();

At this point, you can reference the envUrl variable in your code for navigating through your tests.
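To illustrate, here is a minimal sketch of a test consuming the parameter. The class and method names are hypothetical, and it assumes an MSTest test project with Selenium's ChromeDriver available; only the TestContext lookup mirrors the line above:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestClass]
public class SmokeTests
{
    // Populated by the test framework; exposes the .runsettings TestRunParameters.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void HomePage_Loads()
    {
        // "EnvURL" must match the Parameter name in the .runsettings file.
        var envUrl = TestContext.Properties["EnvURL"].ToString();

        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl(envUrl);
            Assert.IsFalse(string.IsNullOrEmpty(driver.Title));
        }
    }
}
```

Because the URL now comes from the runsettings file, the same compiled test assembly can target any environment simply by overriding the parameter at run time.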

Changes to the “Test Steps” in VSTS release or build

In VSTS Release Manager, modify the test steps to:

NOTE: As part of your build definition, ensure that you upload your runsettings file as part of your artifacts directory. Here’s a snippet in my build definition step:


Visual Studio Test step in Release Manager:

For each environment that you deploy your application, modify the Visual Studio Test Step to:

– Specify the runsettings file. You can browse through your build artifacts provided you’ve uploaded your runsettings file as part of your build artifacts

– Override the parameters with the correct value for your environment URL


Once you make these changes, your tests will execute targeting the environments specified in your continuous delivery pipeline.

Continuous Testing in VSO using Selenium WebDriver and VSO Test Agents (On-Premise)

This post walks you through on how to implement continuous testing using the following technologies:

  • Using Config Transform to configure your tests to run multiple browsers during runtime
  • Selenium WebDriver – Automation UX framework to drive UX Testing
  • VSO Build – ALM (Application Lifecycle Management) tool suite for storing code (via GIT), creating the build definition (Build) and configuring On-Premise machines as Test Agents

What is the Config transform? It allows you to change the configuration settings in your app or web configuration files. When you use Selenium WebDriver, you have the option to run your tests using Chrome, Internet Explorer, Firefox, or even PhantomJS (headless testing). See the sample C# code below, which generates the correct WebDriver instance based on configuration settings. The method takes a string value with a default browser of Internet Explorer. Note that the sample code uses try/catch blocks to log exceptions via Log4Net; you can ignore that part and just re-use the code for creating the proper Selenium WebDriver.

NOTE: If you're not familiar with using Selenium to run automated UX testing, see these samples below:

public static T OpenBrowser<T>(string browser = "iexplore", bool useExistingBrowser = false, bool usebrowserstack = false)
{
    try
    {
        if (!string.IsNullOrEmpty(browser))
        {
            browser = browser.ToLower();
            switch (browser)
            {
                case "firefox":
                    IsFirefox = true;
                    return (T)Convert.ChangeType(ReturnDriver<FirefoxDriver, FirefoxProfile>(useExistingBrowser, ref _firFoxDriver,
                        Firefoxprofile), typeof(FirefoxDriver));
                case "chrome":
                    IsChrome = true;
                    return (T)Convert.ChangeType(ReturnDriver<ChromeDriver, ChromeOptions>(useExistingBrowser, ref _chromeDriver,
                        Chromeprofile), typeof(ChromeDriver));
                case "headless":
                    return (T)Convert.ChangeType(ReturnDriver<PhantomJSDriver, PhantomJSOptions>(useExistingBrowser, ref _phantomjsdriver,
                        phantomjsoptions), typeof(PhantomJSDriver));
            }
        }

        // ie (internet explorer) is the default
        IEprofile.IgnoreZoomLevel = true;
        IEprofile.EnsureCleanSession = true;
        IsIE = true;
        return (T)Convert.ChangeType(ReturnDriver<InternetExplorerDriver, InternetExplorerOptions>(useExistingBrowser, ref _internetDriver, IEprofile), typeof(InternetExplorerDriver));
    }
    catch (Exception ex)
    {
        if (string.IsNullOrWhiteSpace(ex.Message))
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = Success");
        else
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = " + ex.Message);
        throw new ArgumentException(ex.Message, ex.InnerException);
    }
}

The Generic Type to return the driver is:

NOTE: I’ve hardcoded the value of T as IWebDriver instance to maximize the browser. You don’t have to do this but since we’re using Selenium WebDriver, I’ll just embed it on this method. You can also change the T type to any UX automation.

private static T ReturnDriver<T, TT>(bool existing, ref T driver, TT profile)
{
    try
    {
        if (driver == null || existing == false)
            driver = (T)Activator.CreateInstance(typeof(T), profile);
    }
    catch (Exception ex)
    {
        if (string.IsNullOrWhiteSpace(ex.Message))
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = Success");
        else
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = " + ex.Message);
        throw new ArgumentException(ex.Message, ex.InnerException);
    }
    return driver;
}

The OpenBrowser method takes in a string parameter. You can pass the value from the config file, and if you use config transforms, you can change the run behavior of the browser just by passing in the correct app config value.
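Putting that together, a short sketch of the calling side might look like this (it assumes the OpenBrowser&lt;T&gt; helper above, the "Browser" app setting shown later in this post, and a hypothetical target URL):

```csharp
using System.Configuration;
using OpenQA.Selenium;

// "Browser" is the app.config key whose value the config transform swaps
// per build flavor (chrome, firefox, iexplore, headless).
var browserName = ConfigurationManager.AppSettings["Browser"];

// OpenBrowser<T> resolves the right WebDriver from the configured name.
var driver = OpenBrowser<IWebDriver>(browserName);
driver.Navigate().GoToUrl("http://www.example.com");
```

The test code itself never changes; switching the active config transform is all it takes to run the same suite against a different browser.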

Almost forgot! Download the config transform here:  Configuration Transform

Here’s a look at what the config transform would look like in your project:


The particular key that I used and configured looks very similar to this (this is from App.Chrome.config):

<!-- valid options for Browsers are: chrome , firefox, iexplore, headless -->
    <add key="Browser" value="chrome"  xdt:Transform="Replace" xdt:Locator="Match(key)" />
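For context, a complete App.Chrome.config transform file built around that key might look like the following sketch (only the Browser key comes from the snippet above; the rest is the standard XDT boilerplate):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- valid options for Browsers are: chrome, firefox, iexplore, headless -->
    <add key="Browser" value="chrome" xdt:Transform="Replace" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
```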

Source Control is GIT in VSO

Given that we're using VSO as our ALM suite, we've opted to use Git as our backend source control system. This makes configuring your build definition easier, since all source code is stored in VSO.

Configuring On-Premise VSO Test Agents

The next step is to ensure that you have On-Premise test agents that VSO can talk to and execute the tests on. For this, follow the steps in this article to configure your VSO test agents. Note that in VSO, build and test agents now run on the same machine. Also note that the article talks about On-Premise TFS; HOWEVER, the same applies in VSO. You have to go to your VSO instance and configure your agent pools. The rest is shown below.

Let’s do a quick check!

  • Automated UX tests have been developed and use configuration settings to drive the browser driver – Check!
  • Installed Configuration Transform and configured my test project with all appropriate config settings – Check!
  • Test Code stored in GIT (VSO) – Check!
  • On-Premise VSO Test Agents Configured – Check!

Once all of these have been verified then the final step would be stitching and putting all of these together via VSO Build.

Configuring VSO Build for Continuous Testing:

Step 1: Configure Build Variables:




Step 2: Create Visual Studio Build step:

The first step is to build the test solution/project. This is pretty straightforward. In the Solution textbox, browse to the .sln file that you've checked into source control (in this case, Git).


Step 3: Deploy the Test Agent on the On-Premise Test Machines:

NOTE: Before completing this step, ensure that you've properly configured your test machines. Follow the article below, which walks you through creating machine groups for your team project:

The key here is ensuring that:

· You have 1 test machine group that has all the test agents configured correctly

· The $(TestUserName) must have admin privileges on the test agents


Step 4: Copy The Compiled Test Assemblies To The On-Premise Test Agents:

The key to this step is to ensure that you copy the compiled test assemblies to the right location. $(Build.Repository.LocalPath) is the directory on the build server from which "Destination Folder" will copy the test assemblies to the target test agent machine.


Step 5: Execute the Tests:

Nothing special here. Just make sure that you reference the correct Test Drop Location – copy the Destination Folder value from the previous step:


If you configured it correctly, you should get a successful build! Now the result of the build depends on the pass/fail results of the tests. In this case, I intentionally failed one automated test to trigger a build failure that coincides with the failing test. Once that test passes in the future, the build will complete successfully.



On your VSO Home or Team (Dashboard) pages, you can pin the build trend charts to see pass/fail patterns for your UX automation.