YAML Builds in Azure DevOps – A Continuous Integration Scenario

Azure DevOps has released YAML builds. Truthfully, I’m very excited about this. YAML builds greatly change the landscape of DevOps practices on both the CI and CD fronts. As of writing this post, Microsoft has full support for YAML builds.

YAML Based Builds through Azure DevOps

Azure DevOps, particularly the build portion of the service, really encourages using YAML. You can tell by creating a new build definition: the first option under templates is YAML. However, if you’re like me and are used to the UI, or you’re completely new to Azure DevOps, it can be a bit confusing. In a nutshell, here’s a simple comparison of why I encourage using YAML builds:

NON YAML Based Builds (UI Generated and Managed):

  • JSON Based
  • Not Focused on “Build as Code”
  • No source control versioning
  • Shared steps require more management in different projects
  • Very UI driven; once created, the challenge is to validate changes made (without proper versioning)

YAML Based Builds (Code Based Syntax):

  • Modern way of managing builds that’s common in open source community
  • Focused on “Build as Code” since it’s part of the Application Git Branch
  • Shared Steps and Templates across different repos. It’s easier to centralize common steps such as Quality, Security and other utility jobs.
  • Version Control!!! If you have a big team, you don’t want to keep creating builds for your branching strategy. The build itself is also branched.
  • Keeps the developer in the same experience.
  • A step towards “Documentation As Code”. Yes, we can use Comments!!!

For more info on Azure DevOps YAML builds, see: Azure DevOps YAML Schema

For YAML specific information, see the following:
https://yaml.org/

A better read on YAML’s relation to JSON:
https://yaml.org/spec/1.2/spec.html#id2759572

YAML’s indentation-based scoping makes it ideal for programmers (comments, references, etc.).

In this post, I’ll provide some sample build YAML files for a CI pipeline from a developer’s perspective. Platforms used:

  • Application Development Framework – .Net Core 2.2
  • Hosting Environment – Docker Container

The web application is a simple Web API that listens for Azure DevOps service hook events. There are associated unit tests that validate changes to the API, so a typical process would comprise:

  1. Build the application
  2. Run quality checks against the application (Unit tests, Code Coverage thresholds, etc…)
  3. If successful, publish the appropriate artifacts to be used in the next phase (CD – Continuous Deployment)

Taking the above context, we’ll be:

  1. Building the .Net Core Web API (DotNetCoreBuildAndPublish.yml)
  2. Run Quality Checks against the Web Api (DotNetCoreQualitySteps.yml)
  3. Create a Docker Container for the Web Api (DockerBuildAndPublish.yml)

Job 1: Building the .Net Core Web API

parameters:
  Name: ''
  BuildConfiguration: ''
  ProjectFile: ''  

steps:
- task: DotNetCoreCLI@2
  displayName: 'Restore DotNet Core Project'
  inputs:
    command: restore
    projects: ${{ parameters.ProjectFile}}

- task: DotNetCoreCLI@2
  displayName: 'Build DotNet Core Project'
  inputs:
    projects: ${{ parameters.ProjectFile}}
    arguments: '--configuration ${{ parameters.BuildConfiguration }}'
    
- task: DotNetCoreCLI@2
  displayName: 'Publish DotNet Core Artifacts'
  inputs:
    command: publish
    publishWebProjects: false
    projects: ${{ parameters.ProjectFile}}
    arguments: '--configuration ${{ parameters.BuildConfiguration }} --output $(build.artifactstagingdirectory)'
    zipAfterPublish: True

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: ${{ parameters.Name }}_Package
  condition: succeededOrFailed()

This YAML is straightforward: it uses Azure DevOps tasks to call the .Net Core CLI and passes CLI commands such as restore and publish. This is the most basic YAML for a .Net Core app. Also, notice the parameters section? These are the parameters that need to be passed in by the calling pipeline (the upstream pipeline).

CHEAT!!! If you’re also new to YAML builds, Microsoft has made it easier to transition from JSON to YAML. Navigate to an existing build definition, click on the job-level node (not the steps), then click on “View As YAML”. This literally takes all your build steps and translates them into YAML format. Moving forward, use this feature and set parameters for your shared YAML steps.

Job 2: Run Quality Checks against the Web Api

This job essentially executes any quality checks for the application, in this case both unit tests and code coverage thresholds. Again, it calls existing pre-built tasks available in Azure DevOps.

parameters:
  Name: ''
  BuildConfiguration: ''
  TestProjectFile: ''
  CoverageThreshold: ''

steps:
- task: DotNetCoreCLI@2
  displayName: 'Restore DotNet Test Project Files'
  inputs:
    command: restore
    projects: ${{ parameters.TestProjectFile}}

- task: DotNetCoreCLI@2
  displayName: 'Test DotNet Core Project'
  inputs:
    command: test
    projects: ${{ parameters.TestProjectFile}}
    arguments: '--configuration ${{ parameters.BuildConfiguration }} --collect "Code coverage"'

- task: mspremier.BuildQualityChecks.QualityChecks-task.BuildQualityChecks@5
  displayName: 'Check Code Coverage'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed
    coverageThreshold: ${{ parameters.CoverageThreshold }}

Job 3: Create a Docker Container for the Web Api

parameters:
  Name: ''
  dockerimagename: ''
  dockeridacr: '' #ACR Admin User
  dockerpasswordacr: '' #ACR Admin Password
  dockeracr: ''
  dockerapppath: ''
  dockerfile: ''
  

steps:    
- powershell: |
    # Get Build Date Variable if need be
    $date=$(Get-Date -Format "yyyyMMdd");
    Write-Host "##vso[task.setvariable variable=builddate]$date"

    # Set branchname to lower case because of docker repo standards or it will error out
    $branchname= $env:sourcebranchname.ToLower();
    Write-Host "##vso[task.setvariable variable=sourcebranch]$branchname"

    # Set docker tag from build definition name: $(Date:yyyyMMdd)$(Rev:.r)
    $buildnamesplit = $env:buildname.Split("_")
    $dateandrevid = $buildnamesplit[2]
    Write-Host "##vso[task.setvariable variable=DockerTag]$dateandrevid"
  displayName: 'Powershell Set Environment Variables for Docker Tag and Branch Repo Name'
  env:
    sourcebranchname: '$(Build.SourceBranchName)' # Used to specify Docker Image Repo
    buildname: '$(Build.BuildNumber)' # The name of the completed build, which is defined in the upstream YAML file (the main YAML file calling the templates)

- script: |
      docker build -f ${{ parameters.dockerfile }} -t ${{ parameters.dockeracr }}.azurecr.io/${{ parameters.dockerimagename }}$(sourcebranch):$(DockerTag) ${{ parameters.dockerapppath }}
      docker login -u ${{ parameters.dockeridacr }} -p ${{ parameters.dockerpasswordacr }} ${{ parameters.dockeracr }}.azurecr.io 
      docker push ${{ parameters.dockeracr }}.azurecr.io/${{ parameters.dockerimagename }}$(sourcebranch):$(DockerTag)
  displayName: 'Builds Docker App - Login - Then Pushes to ACR'

This is the last job for our demo. Once the quality check passes, we essentially build a Docker image and upload it to a container registry, in this case an Azure Container Registry.

This is an interesting YAML. I’ve intentionally not used pre-built tasks from Azure DevOps, to illustrate YAML’s capabilities with external command sets such as PowerShell (which works across platforms) and inline script commands such as docker.

First things first: Docker is very case sensitive when creating images and tags. Docker has strict naming conventions, and one of them is that all images and tags must be lower case. Let’s dissect these steps:

Powershell Step: I’ve added some logic here to get built-in variables from Azure DevOps build definitions. Notice that I’ve bound sourcebranchname and buildname as environment variables from the Azure DevOps built-in variables:

'$(Build.SourceBranchName)' # Used to specify Docker Image Repo
'$(Build.BuildNumber)' # The name of the completed build which is defined above the upstream YAML file (main yaml file calling templates)

What’s next is straightforward for you “DevOps practitioners” 🙂

$branchname= $env:sourcebranchname.ToLower();

The above line is the step where I use PowerShell to set the branch name to all lowercase. I will use it later when calling docker commands to create and publish Docker images.

$buildnamesplit = $env:buildname.Split("_")
$dateandrevid = $buildnamesplit[2]

The above lines depend on what you define as your build definition name. I use the last part of the build definition name as the Docker tag.

name: $(Build.DefinitionName)_$(Build.SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)
e.g.: #Webhooks-BuildEvents-YAML_FeatureB_20190417.4

Webhooks-BuildEvents-YAML – BuildName
FeatureB – BranchName
20190417.4 – Date/Rev (Used as the Docker Tag) 

You will see this build definition name defined in our upstream pipeline. Meaning, the main build YAML file that calls all these templates.

Script Step: Pretty straightforward as well. We invoke inline docker commands to: Build, Login and Push a docker image to a registry (ACR in this case). Notice this line though:

docker build -f ${{ parameters.dockerfile }} -t ${{ parameters.dockeracr }}.azurecr.io/${{ parameters.dockerimagename }}$(sourcebranch):$(DockerTag) ${{ parameters.dockerapppath }}

I’m setting the image name with a combination of the passed parameter and the source branch. This guarantees that new images will always be created for whichever source branch you’re working on.
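For example, with the parameter values used later in this post (dockeracr: azuredevopssandbox, dockerimagename: webhooksbuildeventslinux), a build from the FeatureB branch with tag 20190417.4 would produce an image reference that looks something like this:

azuredevopssandbox.azurecr.io/webhooksbuildeventslinuxfeatureb:20190417.4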

The Complete YAML:

name: $(Build.DefinitionName)_$(Build.SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r)

trigger:
# branch triggers. Comment these out to trigger builds on all branches
  branches:
    include:
    - master
    - develop
    - feature*
  paths:
    include:
    - AzureDevOpsBuildEvents/*
    - AzureDevOpsBuildEvents.Tests/*
    - azure-pipelines-buildevents.yml

variables: 
  - group: DockerInfo

resources:
  repositories:
  - repository: templates  # identifier (A-Z, a-z, 0-9, and underscore)
    type: git  # see below git - azure devops
    name: SoftwareTransformation/DevOps  # Teamproject/repositoryname (format depends on `type`)
    ref: refs/heads/master # ref name to use, defaults to 'refs/heads/master'

jobs:
- job: AppBuild
  pool:
      name: 'Hosted VS2017' # Valid Values: 'OnPremAgents' - Hosted:'Hosted VS2017',  'Hosted macOS', 'Hosted Ubuntu 1604'
  steps:
  - template: YAML/Builds/DotNetCoreBuildAndPublish.yml@templates  # Template reference
    parameters:
      Name: 'WebHooksBuildEventsWindowsBuild' # 'Ubuntu 16.04' NOTE: Code Coverage doesn't work on Linux Hosted Agents. Bummer. 
      BuildConfiguration: 'Debug'
      ProjectFile: ' ./AzureDevOpsBuildEvents/AzureDevOpsBuildEvents.csproj'  

- job: QualityCheck
  pool:
      name: 'Hosted VS2017' # Valid Values: 'OnPremAgents' - Hosted:'Hosted VS2017',  'Hosted macOS', 'Hosted Ubuntu 1604'
  steps:
  - template: YAML/Builds/DotNetCoreQualitySteps.yml@templates  # Template reference
    parameters:
      Name: 'WebHooksQualityChecks' # 'Ubuntu 16.04' NOTE: Code Coverage doesn't work on Linux Hosted Agents. Bummer. 
      BuildConfiguration: 'Debug'
      TestProjectFile: ' ./AzureDevOpsBuildEvents.Tests/AzureDevOpsBuildEvents.Tests.csproj'
      CoverageThreshold: '10'
  
- job: DockerBuild
  pool:
      vmImage: 'Ubuntu 16.04' # other options: 'macOS-10.13', 'vs2017-win2016'. 'Ubuntu 16.04' 
  dependsOn: QualityCheck
  condition: succeeded('QualityCheck')
  steps:
  - template: YAML/Builds/DockerBuildAndPublish.yml@templates  # Template reference
    parameters:
      Name: "WebHooksBuildEventsLinux"
      dockerimagename: 'webhooksbuildeventslinux'
      dockeridacr: $(DockerAdmin) #ACR Admin User
      dockerpasswordacr: $(DockerACRPassword) #ACR Admin Password
      dockeracr: 'azuredevopssandbox'
      dockerapppath: ' ./AzureDevOpsBuildEvents'
      dockerfile: './AzureDevOpsBuildEvents/DockerFile'

The above YAML is the entire build pipeline, comprising all the jobs that call each YAML template. There are two sections that I do want to point out:

Resources: This is the part where I refer to the YAML templates stored in a different Git repo within Azure DevOps.

resources:
  repositories:
  - repository: templates  # identifier (A-Z, a-z, 0-9, and underscore)
    type: git  # see below git - azure devops
    name: SoftwareTransformation/DevOps  # Teamproject/repositoryname (format depends on `type`)
    ref: refs/heads/master # ref name to use, defaults to 'refs/heads/master'

Variables: This is the section where I use Azure DevOps variable groups to store the Docker login information securely. For more information on this, see: Variable groups

variables: 
  - group: DockerInfo

The end result: a working pipeline that triggers builds from code and works with your branching strategy of choice. This greatly speeds up the development process without the worry of maintaining manually created build definitions.

Continuous Integration in VSTS using .Net Core (with Code Coverage), NUnit, SonarQube: Part 1: .Net Core Project Setup – Code Coverage

There are two ways to discover and execute unit tests using Microsoft-developed test harnesses:

  • Vstest.console.exe = This is the command-line tool used to execute tests; it is embedded within the Visual Studio IDE
  • Dotnet.exe = This is the command-line interface (CLI) specific to .Net Core projects

Vstest.console.exe is documented here: https://msdn.microsoft.com/en-us/library/jj155796.aspx

For .Net Core Projects: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test?tabs=netcore2x

The primary difference between the two is that vstest.console.exe can execute tests developed in both .Net Framework and .Net Core, while dotnet.exe is specific to .Net Core.

An example of executing tests for the same assembly domain (test project) would be:

VSTest.Console.exe:

vstest.console.exe <testassembly>.dll (Pointer to the compiled Assembly)

Dotnet.exe:

dotnet test <testassemblyproject>.csproj (Pointer to the actual .Net Core Test Project)

The issue with dotnet.exe (CLI) is that Code Coverage doesn’t work. In order for code coverage to work on .Net Core projects, you need to:

  1. Edit the .Net Core projects you want to instrument for code coverage
  2. Use vstest.console.exe and supply /EnableCodeCoverage switch

Edit the .Net Core project/s for code coverage instrumentation

When you run unit tests in Visual Studio and select the option to “Analyze Code Coverage for Selected Tests” (as seen below), code coverage results will not be captured by default.

image

As of writing this post, the fix is to modify the project file and set DebugType to Full in the initial PropertyGroup section of the project file (<DebugType>Full</DebugType>).

image

Save the project file and run the unit tests again by selecting the option “Analyze Code Coverage for Selected Tests”, and you’ll see results similar to those shown below.

image

Use vstest.console.exe and supply /EnableCodeCoverage switch

As you saw within Visual Studio, running tests with code coverage can be triggered via a simple click on the context menu. If you want to execute your unit tests with code coverage from the command line, you supply the /EnableCodeCoverage switch.

vstest.console.exe <testassembly>.dll /EnableCodeCoverage

The result would be an export of the code coverage results to a .coverage file. You can then open the file within Visual Studio to inspect the results. See screenshot below:

image

Setting up your .Net Core projects appropriately using the preceding steps should give you the proper code coverage numbers. More importantly, this allows you to seamlessly integrate with various build systems. Additionally, here are some tips and practices around code coverage:

Use a test .runsettings

Use a test .runsettings file to exclude assemblies you don’t want to instrument. The .runsettings file controls how tests are executed from vstest.console.exe. For more information, see the following: Configure unit tests by using a .runsettings file
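From the command line, you can point vstest.console.exe at the file with the /Settings switch (the file name below is just an example):

vstest.console.exe <testassembly>.dll /EnableCodeCoverage /Settings:CodeCoverage.runsettings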

Here’s an example of how you would exclude pieces of code from being measured for code coverage:

<DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0" assemblyQualifiedName="Microsoft.VisualStudio.Coverage.DynamicCoverageDataCollector, Microsoft.VisualStudio.TraceCollector, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Include>  
                <!-- Include all loaded .dll assemblies -->  
              </Include> 
              <Exclude>
                <!-- Exclude all loaded .dll assemblies with the words moq, essentially regex -->
                <ModulePath>.*\\[^\\]*moq[^\\]*\.dll</ModulePath>
                <ModulePath>.*\\[^\\]*Moq[^\\]*\.dll</ModulePath>
              </Exclude>
            </ModulePaths>
            <!-- We recommend you do not change the following values: -->
            <UseVerifiableInstrumentation>True</UseVerifiableInstrumentation>
            <AllowLowIntegrityProcesses>True</AllowLowIntegrityProcesses>
            <CollectFromChildProcesses>True</CollectFromChildProcesses>
            <CollectAspDotNet>False</CollectAspDotNet>
            <Attributes>
              <Exclude>
                <Attribute>^System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute$</Attribute>
              </Exclude>
            </Attributes>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>

To use the .runsettings file, in Visual Studio, click on Test, Test Settings, Select Test Settings File (see below image)

SNAGHTML2d4683

[ExcludeFromCodeCoverage] attribute

Use the [ExcludeFromCodeCoverage] attribute wherever appropriate. When a section of code is decorated with this attribute, that section will be skipped for code coverage. Why? In certain cases, you don’t want code to be measured for code coverage. An example would be entity objects that have default property setters (get / set) with no functionality. If there is “NO” logic developed in either the get and/or set property, why measure it?
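For example, a plain entity class like this hypothetical one can be skipped entirely from coverage measurement:

using System.Diagnostics.CodeAnalysis;

[ExcludeFromCodeCoverage]
public class CustomerDto
{
    // Plain auto-properties with no logic – nothing worth measuring here
    public int CustomerId { get; set; }
    public string Name { get; set; }
}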

This ends the first part of this series. In the next part (VSTS Build Definition Setup – .Net Core and NUnit), we will hook up the test tasks in VSTS to include code coverage reporting.

Working With Stand-Alone Entity Framework Core 2.0 in .Net Framework 4.6 (and above) with SQL Server

When you work with EF Core, the initial project creation requires that you select a .Net Core project. This is all good if you’re working entirely with a .Net Core app. But what about .Net Framework 4.6 and above? Or starting an app without EF Core at all? There are multiple guides out there, scattered in multiple places, for getting EF Core installed and working. This post is intended to walk you through a minimal EF Core installation and through the setup and unit testing scenarios.

As of this post, we’re utilizing Entity Framework Core 2.0 within .Net Framework 4.7 projects. Entity Framework Core 2.0 is compatible with .Net Framework 4.6.x and above. Why use EF Core on .Net Framework projects? Simply put, compatibility reasons. Certain services in Azure (Azure Functions, for example) currently support .Net Framework projects and not .Net Core. To circumvent the problem, the EF team has done a great job with EF Core so it can work with various .Net versions. While the intention of this post is adopting EF Core 2.0 within .Net 4.7, the goal is to show the features built into EF Core 2.0. In particular, I really love the capability to unit test databases through the in-memory provider. We’ll talk about this later.

If this is your first time working through Entity Framework, I strongly suggest going through the “Get Started Guide”, see the following article from Microsoft: https://docs.microsoft.com/en-us/ef/core/get-started/

Let’s get started: Create a .Net 4.7 Project

In Visual Studio, create a new project:

EF1

Install the following Nuget Packages for EF Core:

Microsoft.EntityFrameworkCore – Core EF libraries

Microsoft.EntityFrameworkCore.Design – The .NET Core CLI tools for EF Core

Microsoft.EntityFrameworkCore.SqlServer – EF Core Database Provider for SQL. In this case, we’ll be using SQL EF Core Provider

Microsoft.EntityFrameworkCore.Relational – EF Core libraries that allow EF to be used to access many different databases. Some concepts are common to most databases and are included in the primary EF Core components. Such concepts include expressing queries in LINQ, transactions, and tracking changes to objects once they are loaded from the database. NOTE: If you install Microsoft.EntityFrameworkCore.SqlServer, this will automatically install the Relational assemblies

Microsoft.EntityFrameworkCore.Tools – EF Core tools to create a model from the database

Microsoft.EntityFrameworkCore.SqlServer.Design – EF Core tools for SQL Server

Microsoft.EntityFrameworkCore.InMemory – EF Core in-memory database provider for Entity Framework Core (to be used for testing purposes). This is, or will be, your best friend! It’s one of the reasons why I switched to EF Core (besides its other cool features). You only need to install this package if you’re doing unit testing (which you should!)
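If you prefer the Package Manager Console over the NuGet UI, the same packages can be pulled in with Install-Package, for example:

Install-Package Microsoft.EntityFrameworkCore
Install-Package Microsoft.EntityFrameworkCore.SqlServer
Install-Package Microsoft.EntityFrameworkCore.InMemory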

Edit the Project Files:

Edit the project file and make sure the following entry appears in the initial property group.

<PropertyGroup> 
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
</PropertyGroup>

For test projects, make sure the following entry is also present:

<GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>

Implementing EF Core

I’m not going to go through the basics of EF Core. I’ll skip the entire overview and just dive into implementing EF entities and using the toolsets.

The Model and DBContext:
We’ll be using a simple model and context class here.

public class Employee
    {
        [Key]
        public int EmployeeId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string DisplayName => FirstName + " " + LastName;
        public EmployeeType EmployeeType { get; set; }
    }

    public class EmployeeType
    {
        [Key]
        public int EmployeeTypeId { get; set; }
        public string EmployeeTypeRole { get; set; }
    }

public class DbContextEfCore : DbContext
    {
        public DbContextEfCore(DbContextOptions<DbContextEfCore> options) : base(options) { }

        public virtual DbSet<Employee> Employees { get; set; }
        public virtual DbSet<EmployeeType> EmployeeTypes { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            //When creating an instance of the DbContext, this check ensures that if no options are passed in, SQL Server is always used.
            if (!optionsBuilder.IsConfigured)
            {
                var connectingstring = ConfigurationManager.ConnectionStrings["SqlConnectionString"].ConnectionString;
                optionsBuilder.UseSqlServer(connectingstring);
            }
        }
    }

 

EF Tools

Database Migrations – This is by far one of the best database upgrade tools you can use for SQL Server. Since we have our model and DbContext, let’s create the scripts to eventually create the database on any target server and update the database when necessary.
Before you run any database migration commands, you’ll need to ensure that you have a class file that implements IDesignTimeDbContextFactory. This class will be recognized and used by Entity Framework to provide command-line tooling such as code generation and database migrations. Before proceeding, make sure your app.config or web.config file has the connection string values set for your SQL Server (see example below)

  <connectionStrings>
    <add name="SqlConnectionString" connectionString="Server=XXXX;Database=XXX;User ID=XXX;Password=XXX;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;" providerName="System.Data.SqlClient"/>
  </connectionStrings>

NOTE: Unfortunately, you cannot use the InMemory database when working with the database migration tools. The InMemory database doesn’t use a relational provider.

Here’s an example class file that you can use in your project:

public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<DbContextEfCore>
    {
        public DbContextEfCore CreateDbContext(string[] args)
        {
            var builder = new DbContextOptionsBuilder<DbContextEfCore>();
            //Database Migrations must use a relational provider. Microsoft.EntityFrameworkCore.InMemory is not a relational provider and therefore cannot be used with Migrations.
            //builder.UseInMemoryDatabase("EmployeeDB");
            //You can point to use SQLExpress on your local machine
            var connectionstring = ConfigurationManager.ConnectionStrings["SqlConnectionString"].ConnectionString;
            builder.UseSqlServer(connectionstring);
            return new DbContextEfCore(builder.Options);
        }
    }

 

When running the command-line tools, make sure the default project selected is the .Net project where you’re working on Entity Framework. At the same time, ensure that the start-up project in Solution Explorer is set to the EF project.

In Visual Studio. Go to Tools > Nuget Package Manager > Package Manager Console

In the PMC (Package Manager Console) type: Add-Migration InitialCreate

After running this command, notice that it creates the class files needed to build the initial database schema.

EF2
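The exact content depends on your model, but for the Employee/EmployeeType model above, the generated migration looks roughly like this (a trimmed sketch, not the verbatim generated code):

public partial class InitialCreate : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Creates the Employees table; the EmployeeTypes table, foreign key and index are omitted for brevity
        migrationBuilder.CreateTable(
            name: "Employees",
            columns: table => new
            {
                EmployeeId = table.Column<int>(nullable: false)
                    .Annotation("SqlServer:ValueGenerationStrategy", SqlServerValueGenerationStrategy.IdentityColumn),
                FirstName = table.Column<string>(nullable: true),
                LastName = table.Column<string>(nullable: true),
                EmployeeTypeId = table.Column<int>(nullable: true)
            },
            constraints: table =>
            {
                table.PrimaryKey("PK_Employees", x => x.EmployeeId);
            });
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Reverts everything created in Up()
        migrationBuilder.DropTable(name: "Employees");
    }
}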

Let’s create our database on the target server. For this, run: Update-Database. When done, you should see the following:

EF3

Your database has been created as well:

EF4

As we all know, requirements change more often than ever (especially in agile environments). Say a request was made to add the city and zip code to the employee data. This is as easy as:

  • Modifying the entity (model)
  • Running “Add-Migration <ChangeSet>”
  • Running “Update-Database”

The Entity Change:

public class Employee
    {
        [Key]
        public int EmployeeId { get; set; }

        public string FirstName { get; set; }

        public string LastName { get; set; }

        public string DisplayName => FirstName + " " + LastName;

        public string City { get; set; }

        public int ZipCode { get; set; }

        public EmployeeType EmployeeType { get; set; }
    }

In the PMC (Package Manager Console) type: Add-Migration AddCityAndZipCodeToEmployee

New file has been created:

EF5

Schema has been added as well:

EF6

In the PMC (Package Manager Console) type: Update-Database

EF7

Changes have been applied to the database directly.

EF8

I’ll leave database migrations here. You can get more information on all the nice and nifty features of database migrations here: https://msdn.microsoft.com/en-us/library/jj554735(v=vs.113).aspx

For all command line options for database migrations: https://docs.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell

Finally, database migrations use the __EFMigrationsHistory table to track which migrations have been applied to the database.

Testing EF Core

This is my favorite topic! I can’t stress enough how easy it is to unit test databases using EF Core. The InMemory database that’s part of EF Core sets up the runtime and everything else you need to unit test databases. Before EF Core, it really took some time to set up mock objects, fakes, dependencies, etc. With the EF Core InMemory database, I was able to focus more on the design of the database rather than spend time on the test harness. The short story is this: the InMemory database is another provider in EF that stores data in memory at runtime. Meaning, you get all the same benefits, features and functions as EF to SQL (or another provider), with the beauty of not connecting to an actual provider endpoint (SQL in this case).

Start off by adding a new .Net Framework 4.7 test project

EF9

Add a reference to the previously created project with EF Core Enabled.

Add the following nuget packages:

  • Microsoft.EntityFrameworkCore
  • Microsoft.EntityFrameworkCore.InMemory
  • Microsoft.EntityFrameworkCore.SqlServer

Edit the project file and make sure the following entry appears in the initial property group.

<PropertyGroup>
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
<GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>
</PropertyGroup>

Test Class:

[TestClass]
    public class DbContextTests
    {
        protected DbContextOptions<DbContextEfCore> Dboptionscontext;

        [TestInitialize]
        public void TestInitialize()
        {
            //This is where you use the InMemory provider for working with unit tests
            //Unlike before, where you'd need to work with mocks such as MOQ or RhinoMocks, you can use the InMemory provider
            Dboptionscontext = new DbContextOptionsBuilder<DbContextEfCore>().UseInMemoryDatabase("EmployeeDatabaseInMemory").Options;

            //Lets switch to use Sql Server
            //var connectionstring = ConfigurationManager.ConnectionStrings["SqlConnectionString"].ConnectionString;
            //Dboptionscontext = new DbContextOptionsBuilder<DbContextEfCore>().UseSqlServer(connectionstring).Options;
        }

        [TestMethod]
        public void ValidateInsertNewEmployeeRecord()
        {
            //Employee Entity Setup
            var employee = new Employee
            {
                FirstName = "Don",
                LastName = "Tan",
                City = "Seattle",
                ZipCode = 98023,
                EmployeeType = new EmployeeType
                {
                    EmployeeTypeRole = "HR"

                }
            };

            using (var dbcontext = new DbContextEfCore(Dboptionscontext))
            {
                dbcontext.Employees.Add(employee);
                dbcontext.SaveChanges();
                Assert.IsTrue(dbcontext.Employees.Any());
            }
        }
    }

The TestInitialize method is where I set the DbContext options to use either the InMemory or the SQL provider. Whichever of the two DbContext options you switch to, the tests run successfully.

Switching to the SQL EF Core provider stores the data in the database directly:

EF10

Using the InMemory provider for EF Core lets you focus on unit testing your database more efficiently. This is definitely useful when working with multiple tables where each table has multiple relationships (keys and other constraints). More importantly, it lets you focus on designing the appropriate DB strategies, such as the repository or unit-of-work patterns.

Referencing MSTest And MSTestv2 Unit Testing Framework Through Namespace Aliasing

Let me start-off by explaining what MSTest and MSTestV2 are.

MSTest (Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll) – This is the unit testing framework that comes pre-installed when you install Visual Studio IDE (Available through the .Net Framework – GAC)

MSTestV2 (Microsoft.VisualStudio.TestPlatform.TestFramework.dll) – This is now the open-source version of MSTest. As with any open-source library, there are lots of good contributions, but features also change more frequently and are sometimes removed (or enhanced, in this case). You install this version of MSTest through NuGet.

With that brief description of MSTest and MSTestV2, now comes the question: why would I reference both MSTest and MSTestV2 in the same test project? Well, there are two reasons: backwards compatibility, and issues exposed in MSTestV2 that are still being worked on.

In terms of backwards compatibility, I work with many developers on utilizing the data-driven features in MSTest. The good part back then was that we could data-drive tests using many data source providers (e.g. Excel, SQL, etc.). The bad part is that the open-source framework (MSTestV2) only supports XML and CSV as data source providers (though it does support DataRow as a data source, which is good).
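For example, a data-driven test using DataRow in MSTestV2 looks something like this (the Calculator class here is just a placeholder):

[DataTestMethod]
[DataRow(2, 3, 5)]
[DataRow(10, -4, 6)]
public void Add_ReturnsExpectedSum(int a, int b, int expected)
{
    // Each DataRow attribute produces a separate test case
    Assert.AreEqual(expected, new Calculator().Add(a, b));
}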

Ideally, I would ask the developers to migrate directly to MSTestV2 but in this case, I’d like for them to regress any issues they find in MSTest and see what else could break in MSTestV2.

The issue: referencing both DLLs causes collisions and/or conflicts simply because most of the attributes (or all of them – [TestClass], [TestMethod], etc.) use the exact same namespace:

Microsoft.VisualStudio.TestTools.UnitTesting

The Solution! Welcome back namespace aliasing. The last time I used namespace aliasing, oh, I can’t remember exactly but probably late 2006 (C# 2.0)

With namespace aliases, you can reference multiple assemblies even if those assemblies have the exact same namespace

Step 1: Provide an alias name at the assembly level.

Go to the properties of each assembly and provide an alias.

MSTest1

Step 2: In code, refer to the assembly alias using the C# reserved keyword extern

extern alias FrameworkV1;
extern alias FrameworkV2;

using System;
using TestFrameworkV1 = FrameworkV1.Microsoft.VisualStudio.TestTools.UnitTesting;
using TestFrameworkV2 = FrameworkV2.Microsoft.VisualStudio.TestTools.UnitTesting;

Step 3: Refer to the appropriate assembly classes and/or attributes through the namespace alias (the alias you created through the “using” statement)

MSTest2
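In code form, the result is something like the following sketch (the class and method names are illustrative):

[TestFrameworkV1.TestClass]
public class MsTestV1Tests
{
    [TestFrameworkV1.TestMethod]
    public void RunsUnderMsTestV1()
    {
        TestFrameworkV1.Assert.IsTrue(true);
    }
}

[TestFrameworkV2.TestClass]
public class MsTestV2Tests
{
    [TestFrameworkV2.TestMethod]
    public void RunsUnderMsTestV2()
    {
        TestFrameworkV2.Assert.IsTrue(true);
    }
}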

And the result

MSTest3

Unit Tests (TDD) + Code Coverage = “Happy Couple”

We all rave and talk about TDD (Test Driven Development) all the time. Have you asked yourself these questions?

  • “Do my unit tests truly cover the blocks (or lines) of code that I’ve implemented?”
  • “How do I ensure that a specific feature (implementation) is doing what it’s supposed to do?”
  • “Is there a possibility that a block or line of code that I’ve written is not being touched by my unit tests?”

In this post, however, I’m not going through the practices and understanding of how unit testing works. There are many resources and much literature available for you to look at (just google TDD 🙂). Most of you already know how to do this, but I do want to share my experience and practices around unit testing “WITH” code coverage. Have you used code coverage before? If not, let’s start with that.

So, what is code coverage? Simply put: “it is a measure (%) used to describe the degree to which the source code of a program is executed when a particular test suite runs”

Source: https://en.wikipedia.org/wiki/Code_coverage

We also say that a program/application with a high degree of code coverage has a lower chance of containing undetected software defects compared to a program/application with low code coverage, again depending on the test suite. It’s easy to produce tons of tests that should cover the code, but we normally measure it. Covering code just means you need quality tests that verify the functionality of the blocks or lines of code you wrote (the quantity isn’t that important). Which boils down to this: you are not writing “regression tests” (validating edge cases and/or test case families) when writing unit tests to measure code coverage. However, you may have requirements to implement certain rules that touch edge-case scenarios. In that case, you write unit tests for those because it’s now logic/functionality that you will implement.

Finding the sweet spot! When is “enough” enough? It’s when you can make changes to your code with confidence that you’re not breaking anything, and to me that means you have tested every block of code that has logic and/or implementation in place. As a .Net developer, I’ve come to wonder: how about properties? Especially auto-generated properties? Do we account for code coverage numbers for those? My answer is no. Auto-generated properties (and properties in general) by default don’t have logic in place. So why write unit tests for something that doesn’t have logic? And why measure it through code coverage?

Here, I’ll start off with a project that has a unit test available and some level of functionality. Consider the following unit test code block:

[TestMethod]
        public void ValidateGetRequest()
        {
            var uri = new Uri("https://api.github.com/users/mikelo/repos");
            var jsonresponse = new HttpConnectionService().GetResponse(uri);
            Assert.IsTrue(jsonresponse.Contains("38358544"));
        }

This unit test validates getting a JSON response from a web API, in this case a valid public API from GitHub. For simplicity, I simply want to do an HTTP GET against this web API to get the repos (Git repositories) of a GitHub contributor.

Below is the implementation

public string GetResponse(Uri url)
        {
            string jsonresponse;
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                client.DefaultRequestHeaders.Add("User-Agent", "client agent");
                var response = client.GetAsync(url).Result;
                jsonresponse = response.Content.ReadAsStringAsync().Result;

            }
            return jsonresponse;
        }

Running the tests yields the following results:

TDDCC1

So far so good: we’ve written a unit test to validate a response from a public web API and ensured that the test covers 100% of the code implementation.

A problem arises from this. It’s apparent that we have a major dependency in our unit test. Nowadays we use and rely on build systems (such as Jenkins and VSTS) to compile, test and publish artifacts. The practice behind unit testing is to ensure that all unit tests executed are decoupled and don’t have any dependencies in place.

Besides dependencies, I’m sure by now you’ve realized that there’s no guarantee that every single call you make will be successful. So now comes the next cycle of our work: let’s expand the implementation to include a try-catch block so we can customize what we want to return or throw back to the user in the event a problem occurs.

We start by creating a new test to validate that any exceptions thrown are caught in the catch block. Again, the TDD cycle of:

TDDCC2

We wrote the unit test, failed it, and now need to pass it. How do we pass a unit test where we expect an exception? In MSTestV2, we can apply an attribute to a test method that expects a certain exception type. Once used, this tells the test runner that the test “SHOULD” pass when an exception of that type is caught. Here’s the newly created unit test:

        [TestMethod]
        [ExpectedException(typeof(HttpRequestException))]
        public void ValidateGetRequestCatchesException()
        {
            var uri = new Uri("https://apiXXXX.github.com/users/mikelo/repos");
            var jsonresponse = new HttpConnectionService().GetResponse(uri);
        }

Notice that the URL was changed to point to a non-existent endpoint. We also refactored the implementation by adding a try-catch block along with this line: response.EnsureSuccessStatusCode();

public string GetResponse(Uri url)
        {
            try
            {
                string jsonresponse;
                using (var client = new HttpClient())
                {
                    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    client.DefaultRequestHeaders.Add("User-Agent", "client agent");
                    var response = client.GetAsync(url).Result;
                    response.EnsureSuccessStatusCode();
                    jsonresponse = response.Content.ReadAsStringAsync().Result;
                }
                return jsonresponse;
            }
            catch (Exception e)
            {
                Debug.WriteLine(e);
                throw;
            }
        }

Using the EnsureSuccessStatusCode() method ensures that an exception is thrown if the IsSuccessStatusCode property of the HTTP response is false. This is my own implementation choice, and I’m sure there are many ways to catch and throw errors/exceptions back to users.
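Conceptually, EnsureSuccessStatusCode() behaves roughly like the check below (a simplification, not the actual framework implementation):

if (!response.IsSuccessStatusCode)
{
    // This is the HttpRequestException the catch block (and the unit test) will see
    throw new HttpRequestException("Response status code does not indicate success.");
}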

So far so good, I have 2 Unit Tests that:

  • Pass
  • Verify that the implementation of the code I wrote is covered through Code Coverage – 100%

(Note: The blue highlighted section of the screenshot)

TDDCC3

While we’ve satisfied basic principles of TDD + Code Coverage thus far, we’re still left with the point of isolation. Current TDD practice suggests that any unit test when executed should be isolated and no real dependencies should be called upon.

Our unit tests still rely on a working API endpoint. This is problematic because unit tests should be autonomous.

Solution? Mocking/Faking endpoints! The best part I love about TDD is that inevitably your code leads to better design through Dependency Injection and/or Inversion of Control.

Before I go further, it might be worthwhile to talk about Mocking (Faking) and Dependency Injection. I’ll briefly show you how dependency injection works later.

What is mocking? Mocking is primarily used in unit testing. It is used to isolate the behavior of the object or function you want to test by simulating the behavior of the actual object and/or function.

There are many articles, guides and practices around mocking. There are two well-known mocking frameworks used by many developers:

1) MOQ – My favorite. Easy to use, open source with many contributors. MOQ is hosted in GitHub and works well in .Net. https://github.com/Moq/moq4/wiki/Quickstart

2) RhinoMocks – Same concept as MOQ. https://hibernatingrhinos.com/oss/rhino-mocks

What is Dependency Injection? If you’ve been practicing TDD, I’m quite certain your code will eventually lead to dependency injection.

Snippet from Wikipedia:

https://en.wikipedia.org/wiki/Dependency_injection

“Dependency injection is a technique whereby one object supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client’s state, passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.”

For .Net developers: dependency injection usually means defining interfaces and then taking them as parameters in class constructors.

Let’s go back to our code; we’ll consider dependency injection later.

Unit tests – CHECK. Code coverage – CHECK. As I write this post, I’m en route to San Francisco. What a great way to work on TDD and to start mocking at this point. Given that I don’t have any persistent connection to the public API, my tests will fail.

TDDCC4

Luckily, the implementation I decided to use for the API request is HttpClient (built into .Net), which allows me to pass in an “HttpMessageHandler” through which I can mock the data.

With a little bit of refactoring (again, the Red -> Green -> Refactor cycle), here’s the modified version of the HttpConnectionService class:

public string GetResponse(Uri url, HttpMessageHandler handler = null)
        {
            try
            {
                string jsonresponse;
                using (var client = handler == null ? new HttpClient() : new HttpClient(handler))
                {
                    client.Timeout = TimeSpan.FromSeconds(3);
                    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    client.DefaultRequestHeaders.Add("User-Agent", "client agent");
                    var requestmessage = new HttpRequestMessage
                    {
                        Method = HttpMethod.Get,
                        RequestUri = url
                    };
                    var response = client.SendAsync(requestmessage).Result;
                    response.EnsureSuccessStatusCode();
                    jsonresponse = response.Content.ReadAsStringAsync().Result;
                }
                return jsonresponse;
            }
            catch (Exception e)
            {
                Debug.WriteLine(e);
                throw;
            }
        }

Here’s a modified version of the Unit Tests:

[TestMethod]
        public void ValidateGetRequest()
        {
            //No Need to specify a valid URI. I'm mocking the "state" or behavior at this point.
            //var uri = new Uri("https://api.github.com/users/mikelo/repos");
            var mockhandler = new Mock<HttpMessageHandler>();
            mockhandler.Protected()
                .Setup<Task<HttpResponseMessage>>("SendAsync", ItExpr.IsAny<HttpRequestMessage>(), ItExpr.IsAny<CancellationToken>())
                .Returns(Task<HttpResponseMessage>.Factory.StartNew(() => new HttpResponseMessage
                {
                    StatusCode = HttpStatusCode.OK,
                    Content = new StringContent("Mock Response. OK", Encoding.UTF8, "application/json")
                }));
            var jsonresponse = new HttpConnectionService().GetResponse(new Uri("http://someuri"), mockhandler.Object);
            Assert.IsTrue(jsonresponse.Contains("Mock Response. OK"));
        }
Passing the first test yields the following code coverage result:

TDDCC5

I passed the first unit test because I was able to mock the request; however, this test doesn’t cover all the code blocks. Code coverage is at 89.6%. Great progress so far! Note that the code coverage % is relative to the total code blocks developed. Importantly, in this picture the specific unit test didn’t go through the exception block. Let’s fix the other test to validate that exceptions are caught. Below is the modified version of the unit test that validates exception handling.

[TestMethod]
        [ExpectedException(typeof(HttpRequestException))]
        public void ValidateGetRequestCatchesException()
        {
            var mockhandler = new Mock<HttpMessageHandler>();
            mockhandler.Protected()
                .Setup<Task<HttpResponseMessage>>("SendAsync", ItExpr.IsAny<HttpRequestMessage>(),
                    ItExpr.IsAny<CancellationToken>())
                .Throws<HttpRequestException>();
            var jsonresponse = new HttpConnectionService().GetResponse(new Uri("http://someuri"), mockhandler.Object);
        }

Both tests pass! Code coverage went up to 96.55%! Lastly, both tests reached the relevant code blocks in the implementation.

TDDCC6

Why wasn’t I able to achieve 100% code coverage even though the unit tests covered the paths to all code blocks? It seems that if you use certain built-in .Net features (highlighted in yellow), the coverage tool treats them as an ambiguous state. I can only assume at this point that this is how code coverage works in Visual Studio and/or the code coverage libraries, in this case MSTest.

At this point, it’s debatable whether 100% code coverage should be met all the time. In the last round of refactoring, we didn’t meet 100% code coverage, but it’s acceptable in this case. I would bet that as you continue to practice TDD (now with code coverage in mind 🙂), you will settle on an acceptable range of code coverage %. Meaning, you will not attain 100% code coverage all the time.

The real point here is that we’ve met the criteria for TDD by ensuring the unit tests we wrote validate any change or refactoring work. This is what TDD + code coverage does. With code coverage, you ensure that any change or refactoring work you do has valid unit tests, which is shown through the code coverage numbers.

 

A modified version of TDD circle would show:

TDDCC7

 

 

Now let’s expand the code a bit by introducing DI (Dependency Injection). I’ll add an abstraction layer (a façade) so that the user doesn’t call the connection service directly but rather calls this façade class for working with data. This is a common practice so the façade layer can work with any other business logic or requirement. Here’s the new class implementation:

public class GithubApiService
    {
        private readonly IConnectionService _connectionService;
        public GithubApiService(IConnectionService connectionService)
        {
            _connectionService = connectionService;
        }

        public string GetReposFromGitHub(Uri uri, HttpMessageHandler handler = null)
        {
            var response = _connectionService.GetResponse(uri, handler);
            return $"From GitHubApiService Class. Response Value: {response}";
        }
    }
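The IConnectionService interface itself isn’t shown above; matching the way HttpConnectionService is used, a minimal version would look like this:

public interface IConnectionService
{
    // HttpConnectionService implements this; GithubApiService only depends on the abstraction
    string GetResponse(Uri url, HttpMessageHandler handler = null);
}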

I used DI to pass the IConnectionService in the constructor. When I do this:

1) I tell the object that during instantiation, an implementation of the IConnectionService interface is passed in as a constructor parameter.

2) I can then mock the data directly in the façade layer instead of the ConnectionService layer.

As you might guess, here’s the Unit Test for validating the new façade class.

[TestMethod]
        public void ValidateGithubApiService_GetReposFromGitHub()
        {
            var mockhandler = new Mock<IConnectionService>();
            mockhandler.Setup(service => service.GetResponse(It.IsAny<Uri>(),null)).Returns("Mock Response. OK");
            var githubapiservice = new GithubApiService(mockhandler.Object);
            var response = githubapiservice.GetReposFromGitHub(new Uri("http://someuri"));
            Assert.IsTrue(response.Contains("From GitHubApiService Class. Response Value: Mock Response. OK"));
        }

Final Screenshot!

TDDCC8

Here’s an actual usage of the façade layer without mocking data:

[TestMethod]
        public void ValidateGithubApiService_GetReposFromGitHubWIthConnectionService()
        {

            var githubapiservice = new GithubApiService(new HttpConnectionService());
            var response = githubapiservice.GetReposFromGitHub(new Uri("https://api.github.com/users/mikelo/repos"));
            Assert.IsTrue(response.Contains("38358544"));
        }

There’s a lot more information available on dependency injection. This post is not intended to be a deep dive into DI or mocking, but rather to show that when we practice TDD, we should also account for code coverage as a measure of quality.

IBM Data Server Provider (DB2) .Net Tips – Series 1

I’ve been working with IBM Informix lately, and in my previous post I mentioned a couple of ways to work with Informix. Now that I’ve settled on strictly using DB2 (see my previous post on this), my colleagues and I have been developing abstractions to work with Informix via Entity Framework and/or the DB2 SDK (IBM’s version).

So, you’ve just been asked to tackle some data access tasks against Informix via .Net? My colleague Jeff Crose (https://www.linkedin.com/in/jeff-crose-419797/) came up with some really cool tips worth blogging about. Jeff is a distinguished software developer on our team, working on backend data development across database platforms such as SQL Server and Informix. Shout out to Jeff for coming up with these tips!

TIP: Informix Overloaded Stored Procedures

Unlike other relational database management systems, Informix supports overloaded stored procedures. That may not seem like a big deal, but it can lead you astray if you’re not careful. Consider the following sample:

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "George Washington";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "WA";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The first President of the United States";

            var customerid = cmd.ExecuteScalar();
            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

You’ve created the connection, created the command, and created parameters of the correct types to match the signature of the stored procedure but you receive the following error message: ERROR[IX000][IBM][IDS / UNIX64] Routine(customer_insert_overload) cannot be resolved. As it turns out, the issue is not with the overload but with the fact that one of the parameters is a TEXT type. To get past the error, you need to explicitly cast the parameter to TEXT in the CommandText property.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "George Washington";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "WA";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The first President of the United States";

            var customerid = cmd.ExecuteScalar();
            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

TIP: ExecuteScalar VS ExecuteNonQuery

Now that the code works, you may be wondering why the ExecuteScalar method was chosen over ExecuteNonQuery for an insert. In this case, the stored procedure returns the identity for the newly inserted row. Even though the stored procedure returns an integer, it does not behave like its SQL Server counterpart. The following code inserts the data successfully but does not return a value.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Andrew Jackson";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "MS";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The seventh President of the United States";
            cmd.Parameters.Add(new DB2Parameter("customerid", DB2Type.Integer));
            cmd.Parameters["customerid"].Direction = ParameterDirection.ReturnValue;

            cmd.ExecuteNonQuery();
            var customerid = cmd.Parameters["customerid"].Value;

            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

To get the “customerid”, you need to use the ExecuteScalar method.
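In other words, keep the parameters from the first sample and read the identity straight from the return value:

var customerid = cmd.ExecuteScalar();
Console.WriteLine("customerid: {0}", customerid);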

TIP: ExecuteRow for multiple return values

What if the stored procedure returns multiple values? The ExecuteRow method can be used to get all the values.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Abraham Lincoln";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "NE";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The sixteenth President of the United States";

            var row = cmd.ExecuteRow();
            Console.WriteLine("customerid: {0}, result: {1}", row[0], row[1]);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

Notice that the “customerinfo” parameter was once again explicitly cast to a TEXT type in the CommandText property. The following example will once again cause the “ERROR[IX000][IBM][IDS / UNIX64] Routine(customer_insert) can not be resolved.” exception.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert(?, ?, ?)"; 

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Abraham Lincoln";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "NE";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The sixteenth President of the United States";

            var row = cmd.ExecuteRow();
            Console.WriteLine("customerid: {0}, result: {1}", row[0], row[1]);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

 

TIP: And Finally, the “ExecuteReader” …

With all the data inserted successfully, it’s time to find out how to return all the rows. Luckily, that part is straightforward. Just call the ExecuteReader method and iterate through the results.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_select()";

            var dr = cmd.ExecuteReader();
            while (dr.Read())
            {
                Console.WriteLine("customer_id: {0}, customer_name: {1}, customer_info: {2}", dr[0], dr[1], dr[2]);
            }
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

Getting Started with Azure Data Catalog REST API

What is Azure Data Catalog? Simply put, Azure Data Catalog is a SaaS application hosted within Azure’s Cloud Stack. With Azure Data Catalog, enterprise customers can store information about their enterprise data source assets. There’s the concept of catalogs, assets and annotations and for more information, go to: https://azure.microsoft.com/en-us/services/data-catalog/

We use Azure Data Catalog to organize, discover and understand all of our backend data sources. With that in mind, I needed a solution to automate data source registration (databases, tables, etc…) in Azure Data Catalog rather than spending time creating/registering assets manually. Doing that by hand is a tedious process, especially if you have to deal with lots of databases and stored procedures.

Microsoft exposes an API for you to use and work with Azure Data Catalog. There is plenty of documentation out there, but it took me a while to get everything set up and working correctly, at least for searching existing assets and registering new ones.

Most of Microsoft’s documentation around the Azure Data Catalog API is located here:

https://docs.microsoft.com/en-us/rest/api/datacatalog/

This guide will walk you through the steps for registering a catalog asset, with additional information on how to properly authenticate against Azure AD and a modified schema that includes annotations when registering or updating a catalog asset. Note that the sample below uses Native Client Authentication against Azure Active Directory.

Part 1

The first section covers creating an Azure Active Directory client app registration. We will use this to authenticate, either using OAuth2 or federation.

Note: As of writing this blog post, the screenshots below were taken from the current UI of the Azure portal.

Register a client app in Azure Active Directory. When you register a client app in Azure Active Directory, you give your app access to the Data Catalog APIs. To register a client app:

1. Go to http://portal.azure.com

2. Click on “Azure Active Directory”

ADC1

3. Click on “App Registrations”

ADC2

4. Click on “Add” and provide a “Name”, “Application Type” and “Redirect URI”. NOTE: The redirect URI is a unique identifier for the client to send the access token back. It doesn’t have to be a valid URI; however, you need to keep track of it. You will need it later to authenticate against the Data Catalog API.

ADC3

GRANT the client app access to the Azure Data Catalog API. To do this:

1. Click on “Settings” on the newly created app registration.

2. Click on “Required Permissions” then “Add”

3. On “Select an API”, pick “Microsoft Azure Data Catalog”

4. Take the defaults

5. IMPORTANT: Make sure you click on “GRANT PERMISSIONS” once you select “Microsoft Azure Data Catalog” as seen below. If you don’t do this, then your native client will not be able to authenticate properly on the Azure Data Catalog API.

ADC4

ADC5

Part 2

The second section covers authenticating against the Azure REST API, specifically the Azure Data Catalog API. The article below guides you through the steps for calling the Azure Data Catalog API using the ADAL libraries for authentication. The information presented below from Microsoft’s site is accurate as of this writing.

Authenticate a client app

https://docs.microsoft.com/en-us/rest/api/datacatalog/authenticate-a-client-app

A couple of notes on the steps mentioned above:

• “Register a Client App”. You just did this in the preceding steps. Make sure to write down the Client ID (or App ID) of the newly created app in Azure Active Directory.

• Don’t use HttpWebRequest; use HttpClient to authenticate instead. HttpClient has far more features than HttpWebRequest. That said, refer to this Microsoft article for examples using HttpClient (a small search example follows the link below).

Calling a Web API From a .NET Client (C#)

https://docs.microsoft.com/en-us/aspnet/web-api/overview/advanced/calling-a-web-api-from-a-net-client
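
To put those two notes together, here’s a minimal sketch of calling the Data Catalog search endpoint with HttpClient. It assumes you already have an ADAL access token (the Part 4 sample below shows how to acquire one); the catalog name and search term are placeholders, and you should double-check the exact URL format against the REST reference above.

// Minimal search sketch (illustrative). Assumes 'authResult' came from ADAL as shown in Part 4.
using (var httpClient = new HttpClient())
{
    // <yourcatalog> and the searchTerms value are placeholders.
    var searchUrl = "https://api.azuredatacatalog.com/catalogs/<yourcatalog>/search/search?searchTerms=customer&count=10&api-version=2016-03-30";

    httpClient.DefaultRequestHeaders.Add("Authorization", authResult.CreateAuthorizationHeader());
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = httpClient.GetAsync(searchUrl).Result;
    var json = response.Content.ReadAsStringAsync().Result;
    Console.WriteLine(json);
}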

Part 3

Changes to the request body when registering data assets. This is the part where I’ve spent most of my research: modifying the schema used when registering or updating assets, in this case to add annotations during the registration process. Microsoft provides basic schema definitions for registering assets but doesn’t provide enough detail on other schema values such as annotation experts, tags and descriptions. Here’s a modified version of the schema for registering an asset that includes annotations.

{
  "properties": {
    "fromSourceSystem": false,
    "name": "table name",
    "dataSource": {
      "sourceType": "Db2",
      "objectType": "Table"
    },
    "dsl": {
      "protocol": "db2",
      "authentication": "windows",
      "address": {
        "server": "ServerName",
        "database": "DatabaseName",
        "object": "NameOfTable",
        "schema": "dbo"
      }
    },
    "lastRegisteredBy": {
      "upn": "smtp@address.com",
      "firstName": "Don",
      "lastName": "Tan"
    },
    "containerId": "containers/<SomeGuid>"
  },
  "annotations": {
    "schema": {
      "properties": {
        "fromSourceSystem": true,
        "columns": [
          {
            "name": "identity",
            "isNullable": false,
            "type": "Int32",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "Other Column",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "short_desc",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          }
        ]
      }
    },
    //Add Other Annotation Details
    "experts": [
      {
        "properties": {
          "expert": {
            "upn": "smtp@address.com",
            "objectId": "<SomeGuid>"
          },
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ],
    "descriptions": [
      {
        "properties": {
          "key": "<SomeGuid>",
          "fromSourceSystem": false,
          "description": "Some Descrption"
        }
      }
    ],
    "tags": [
      {
        "properties": {
          "tag": "Dtan",
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ]
  }
}

Part 4:

Putting it all together: here’s a complete sample showing how to invoke the Azure Data Catalog API using HttpClient in C#.

// The ResourceURI is used by the application to uniquely identify itself to Azure AD.
// The ClientId is used by the application to uniquely identify itself to Azure AD.
// The AAD Instance is the instance of Azure, for example public Azure or Azure China.
// The Authority is the sign-in URL (either the tenant or OAuth2 provider)
// The RedirectUri gives AAD more details about the specific application that it will authenticate.
// NOTE: Make sure that the ClientID has sufficient permissions against the resourceURI. In this case, Azure Data Catalog
//See article: https://docs.microsoft.com/en-us/rest/api/datacatalog/Register-a-client-app?redirectedfrom=MSDN#client

var ClientId = ConfigurationManager.AppSettings["ClientId"];
var ResourceUri = ConfigurationManager.AppSettings["ResourceUri"];
var RedirectUri = new Uri(ConfigurationManager.AppSettings["RedirectUri"]);
var Tenant = ConfigurationManager.AppSettings["Tenant"];
var AadInstance = ConfigurationManager.AppSettings["AADInstance"];
//OAuth2 provider
//private static readonly string Authority = String.Format(CultureInfo.InvariantCulture, "https://login.windows.net/common/oauth2/authorize");
//Tenant Authority
var Authority = String.Format(CultureInfo.InvariantCulture, AadInstance, Tenant);
var authContext = new AuthenticationContext(Authority);
var authResult =
    authContext.AcquireTokenAsync(ResourceUri, ClientId, RedirectUri,
        new PlatformParameters(PromptBehavior.RefreshSession)).Result;

using (var httpClient = new HttpClient())
{
    var requestbody = "{\"properties\":{\"fromSourceSystem\":false,\"name\":\"air_allowed\",\"dataSource\":{\"sourceType\":\"Db2\",\"objectType\":\"Table\"},\"dsl\":{\"protocol\":\"db2\",\"authentication\":\"windows\",\"address\":{\"server\":\"YourServerName\",\"database\":\"YourDatabase\",\"object\":\"YourTable\",\"schema\":\"dbo\"}},\"lastRegisteredBy\":{\"upn\":\"smtp@address.com \",\"firstName\":\"Don\",\"lastName\":\"Tan\"},\"containerId\":\"containers/42070252-e318-4a0a-8c73-a33c0dc8fd65\"},\"annotations\":{\"schema\":{\"properties\":{\"fromSourceSystem\":true,\"columns\":[{\"name\":\"Column1\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column2\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column3\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0}]}},\"experts\":[{\"properties\":{\"expert\":{\"upn\":\"smtp@address.com\",\"objectId\":\"fb7d1a8a-4ae6-4ee2-aaaa-9de5b4c598df\"},\"key\":\"52c4543b-ee75-42d7-95e7-3a01437fee58\",\"fromSourceSystem\":false}}],\"descriptions\":[{\"properties\":{\"key\":\"791bab95-428a-4941-b633-7d2d0cd9c75e\",\"fromSourceSystem\":false,\"description\":\"SomeDescription\"}}],\"tags\":[{\"properties\":{\"tag\":\"Dtan\",\"key\":\"a2a3f272-14a3-4a03-b85d-65af33022dc4\",\"fromSourceSystem\":false}}]}}";
    var url = "https://api.azuredatacatalog.com/catalogs/<yourcatalog>/views/tables?api-version=2016-03-30";
    httpClient.DefaultRequestHeaders.Add("Authorization", authResult.CreateAuthorizationHeader());
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var stringContent = new StringContent(requestbody);
    stringContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = httpClient.PostAsync(url, stringContent).Result;
}
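
The hand-escaped JSON string above works, but it is easier to maintain if you build the payload as objects and serialize it. Here’s a minimal sketch using the Newtonsoft.Json (Json.NET) package; it mirrors the Part 3 schema but only includes a couple of the annotation sections, and all values are placeholders.

// Sketch: build the registration payload as objects and serialize it (requires the Newtonsoft.Json package).
var payload = new
{
    properties = new
    {
        fromSourceSystem = false,
        name = "air_allowed",
        dataSource = new { sourceType = "Db2", objectType = "Table" },
        dsl = new
        {
            protocol = "db2",
            authentication = "windows",
            address = new { server = "YourServerName", database = "YourDatabase", @object = "YourTable", schema = "dbo" }
        },
        lastRegisteredBy = new { upn = "smtp@address.com", firstName = "Don", lastName = "Tan" },
        containerId = "containers/<SomeGuid>"
    },
    annotations = new
    {
        descriptions = new[]
        {
            new { properties = new { key = "<SomeGuid>", fromSourceSystem = false, description = "Some Description" } }
        },
        tags = new[]
        {
            new { properties = new { tag = "Dtan", key = "<SomeGuid>", fromSourceSystem = false } }
        }
    }
};

// The C# keyword escape '@object' serializes as the JSON property name "object".
var requestbody = Newtonsoft.Json.JsonConvert.SerializeObject(payload);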

ADC6

Working with Entity Framework 6.0 on IBM Informix V11.10+ in Visual Studio 2015

We know EF (Entity Framework) has many benefits when working with databases, particularly from a development and performance standpoint. There are two versions of client connectivity SDKs for working with Informix databases:

IBM.Data.Informix.dll— Also referred to as the Common IDS .NET Provider. This assembly has been specifically created to help existing applications that were developed using the CSDK .NET Provider (SQLI protocol) to use the latest DRDA protocol support. It has additional support for some of the earlier Informix client features and is targeted only for .NET application development for Informix.

IBM.Data.DB2.dll— Also referred to as the DB2 .NET Provider. Although the name of the provider indicates DB2, it is in fact the single .NET provider for IBM database servers including DB2 and Informix. It is the recommended and preferred .NET provider for all clients targeting DB2 and new application development targeting Informix (Version 11.10 or later).

These are referenced from IBM’s website:

https://www.ibm.com/developerworks/data/library/techarticle/dm-1007dsnetids/

IBM.Data.DB2.dll is the preferred approach and uses Entity Framework. More importantly, this is the version that IBM will support for new enhancements in conjunction with Entity Framework.

Note: In order to use the .Net Data Provider (DB2) for Entity Framework 6.0, you need to ensure that DRDA protocol has been enabled on the Informix Server. For more information on DRDA overview and troubleshooting, see the following articles:

Overview of DRDA

https://www.ibm.com/support/knowledgecenter/SSGU8G_11.50.0/com.ibm.admin.doc/ids_admin_0206.htm

TCPIP communication errors with DRDA

http://www-01.ibm.com/support/docview.wss?uid=swg21164785

To get started with the .Net Data Provider for IBM Informix V11.10+ in Visual Studio 2015

1) Download and install the latest updates for Visual Studio 2015 (as of writing this blog post, the current update is Update 3)

2) Download and Install the DSDriver Package (Data Server Driver Package) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35279_DSClients-ntx64-dsdriver-10.5.600.232-FP006%3A898521251824283008&includeSupersedes=0

Specifically, this version: “Special Build 35279 for IBM Data Server Driver Package (Windows/x86-64 64 bit) V10.5 Fix Pack 6” (special_35279_ntx64_dsdriver_EN.exe)

NOTE: As of writing this blog post, there may be newer fix pack versions of the DS Driver Package; however, the fix pack version above works well in VS 2015

3) Download and Install VSAI (IBM Database Add-Ins for Visual Studio) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35192_DSClients-nt32-vsai-10.5.600.232-FP006%3A295467480640129088&includeSupersedes=0

Specifically, this version: “Special Build 35192 for IBM Database Add-Ins for Visual Studio (Windows/x86-32 32 bit) V10.5 Fix Pack 6” (special_35192_nt32_vsai.zip)

NOTE: As of writing this blog post, none of the Add-Ins for Visual Studio work on machines running Windows 10, and IBM hasn’t provided a solution for this problem. Also, do not use DS Driver Package V11; use DS Driver Package version 10.5+, which is specifically compiled for EF 6.0

4) Install IBM Entity Framework 6.0 in your projects. Right-click on the project, select “Manage NuGet Packages” and install the latest EntityFramework.IBM.DB2


Sample Project to verify that you can use EF 6.0 connecting to IBM Informix Database (V11.10+):

  • Start off by creating a sample project in Visual Studio 2015. Any project type would be fine; however, for testing purposes, create a test project. This will allow you to write unit tests to verify EF against an Informix database (V11.10+)
  • Right click on the project and Select “Add” then “New Item”.
  • From the list of items, select “ADO.NET Entity Data Model”, then select “IBM DB2 and IDS Servers”


  • Follow the wizard, but on the first step select “EF Designer from database”
  • Click on “New Connection” and provide the proper server settings for the Informix DB server (a sample connection string is shown after this list). Note that the DRDA protocol needs to be enabled on the target server; refer back to the top section of this post.


  • Click “OK”, then click “Next”
  • Select the appropriate tables, then click “Finish”
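
For reference, a DRDA connection string for the DB2 .NET provider typically looks something like the sketch below. The host, port, database and credentials are placeholders, and the port must match the DRDA listener configured on your Informix instance.

// Illustrative only; requires a reference to IBM.Data.DB2.dll (namespace IBM.Data.DB2).
var connectionString = "Server=myInformixHost:9089;Database=mydb;UID=informixuser;PWD=secret;";
using (var conn = new DB2Connection(connectionString))
{
    conn.Open();
    Console.WriteLine("Connected via DRDA, server version: {0}", conn.ServerVersion);
}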

Once completed, your test project should have generated EF files which you can use to connect and work with the Informix server. Here’s an example of an auto-generated file, which is the actual context file that inherits DbContext from EF:

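The sketch below is illustrative only; the actual file is produced by the designer from the .EDMX model, and the context, entity and property names will come from your own database.

// Illustrative sketch of a designer-generated EF 6 context (names are placeholders).
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public partial class InformixTestEntities : DbContext
{
    public InformixTestEntities()
        : base("name=InformixTestEntities")   // connection string name stored in app.config
    {
    }

    // One DbSet per table selected in the wizard.
    public virtual DbSet<customer> customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // With "EF Designer from database", the model is loaded from the .EDMX at runtime.
        throw new UnintentionalCodeFirstException();
    }
}

// Illustrative generated entity class.
public partial class customer
{
    public int customer_id { get; set; }
    public string customer_name { get; set; }
    public string customer_info { get; set; }
}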

The tools in VS also generate the entities for you. Since we’ve used the “EF Designer from database” template, all of the table-to-entity mappings are actually stored in the .EDMX file. You can explore this file visually or, to see the raw data, open it in a text editor such as Notepad.


As an example, I’ve written some unit tests to verify some data from a table in Informix V11.10+.

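A test along those lines might look like the sketch below; the context and entity names are the placeholders from the earlier sketch, and the assertion depends entirely on your own data.

// Illustrative MSTest unit test against the generated context (names and data are placeholders).
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InformixEfTests
{
    [TestMethod, TestCategory("Informix EF")]
    public void Customer_Table_Returns_Rows()
    {
        using (var context = new InformixTestEntities())
        {
            var customers = context.customers.Take(10).ToList();

            Assert.IsTrue(customers.Count > 0, "Expected at least one customer row");
        }
    }
}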

With EF, we can greatly improve how our applications integrate with Informix. We can apply better design principles and patterns using EF. A common design pattern for working with databases is called “Unit of Work”. Here’s a great article on how to implement the Unit of Work design pattern with ASP.Net MVC (a minimal sketch of the idea follows the link):

https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
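
As a rough illustration of the idea (not the full repository implementation from the article), the pattern wraps a shared context so that related repositories commit their changes through a single SaveChanges call:

// Minimal Unit of Work sketch over the EF context (illustrative; see the linked article for a complete version).
using System;

public interface IUnitOfWork : IDisposable
{
    void Save();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly InformixTestEntities _context = new InformixTestEntities();

    // Repositories would be exposed here and share _context, for example:
    // public CustomerRepository Customers { get; }

    public void Save()
    {
        // Commits all pending changes made through the shared context.
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}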

Validating and Unit Testing Web API (2) Route Attribute Parameters

Personally, I like to isolate business rules and/or validations outside of MVC controllers (in this case, API controllers). I use an ActionFilterAttribute to define my checks on parameters passed into my MVC Web API routes.

Here’s an example of a WebAPI route with parameter binding:

// GET: /1/employees/AA0000111
[Route("{WebServiceVersion}/employees/{employeeId}")]
[ValidateEmployeeId]
public IHttpActionResult GetUser(string employeeid, int WebServiceVersion = 1)
{
    // Do something with the WebServiceVersion value, like logging.
    var user = _emprepository.GetUser(employeeid);
    return Content(HttpStatusCode.OK, user);
}

I want to isolate validating employeeid outside of my controller for a couple of reasons:

1) Isolation – You may have multiple cases for validating your parameters. In this case, employeeId can be permutated in many different ways, especially because it is a string. Other developers can easily get lost in what the action controller is actually doing if it contains long code covering all the various validations.

2) Good development practice – I prefer to see nice clean code and a clear separation between what my controllers do and the business rules.

3) Testing – I can test my controllers in isolation from the business rules. This is really the motivating factor for me.

That said, let’s take a look at the ActionFilterAttribute further. For more information on this, see:

(NOTE: There are 2 versions of ActionFilterAttribute)

System.Web.Http.Filters

System.Web.Mvc

When unit testing, make sure you’re writing the correct tests for your filter. In this case, I’m using the namespace: System.Web.Http.Filters

public class ValidateEmployeeIdAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var employeeid = actionContext.ActionArguments["employeeid"].ToString();
        if (string.IsNullOrEmpty(employeeid) || employeeid.ToLower() == "<somecheck>" ||
            employeeid.ToLower() == "<replace and use other validation such as regex>")
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest,
                $"Input parameter error, employeeId: {employeeid} -  not specified, null or bad format",
                actionContext.ControllerContext.Configuration.Formatters.JsonFormatter);
        }
        base.OnActionExecuting(actionContext);
    }
}

Note in the preceding controller code that I decorated the Web API action method with [ValidateEmployeeId]. This instructs the controller to use the custom ActionFilterAttribute created above.

Testing your custom validation attribute via unit test(s):

For simplicity, I used MSTest, which comes with Visual Studio.

[TestMethod, TestCategory("UserController")]
public void Validate_EmpId_ActionFilterAttribute()
{
    var mockactioncontext = new HttpActionContext
    {
        ControllerContext = new HttpControllerContext
        {
            Request = new HttpRequestMessage()
        },
        ActionArguments = { { "employeeid", "<somecheck>" } }
    };

    mockactioncontext.ControllerContext.Configuration = new HttpConfiguration();
    mockactioncontext.ControllerContext.Configuration.Formatters.Add(new JsonMediaTypeFormatter());

    var filter = new ValidateEmployeeIdAttribute();
    filter.OnActionExecuting(mockactioncontext);
    Assert.IsTrue(mockactioncontext.Response.StatusCode == HttpStatusCode.BadRequest);
}

At this point, you should have a clean separation between the code that performs your validations and your controller.

Using Fiddler, I can see that whenever I submit a request with an invalid value for employeeid, I get the correct response:
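
For example, a request such as GET /1/employees/<somecheck> comes back roughly as follows (the exact headers depend on your JSON formatter settings):

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8

"Input parameter error, employeeId: <somecheck> -  not specified, null or bad format"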

fiddlertrace

Using XML Data Transform (XDT) to automatically configure app.config during Nuget Package Install

This should be fairly straightforward as mentioned on nuget.org’s site, right? Well, not quite. I’ve spent some time reading through the blog posts, and it’s not quite that simple, so hopefully this post is the simplified version. In my case, the scenario is simply to add entries under the appSettings node in the app.config file. Nuget.org’s site has the following docs:

Configuration File and Source Code Transformations

https://docs.nuget.org/create/configuration-file-and-source-code-transformations

How to use XDT in NuGet – Examples and Facts

http://blog.nuget.org/20130920/how-to-use-nugets-xdt-feature-examples-and-facts.html

The steps below will hopefully guide you through getting your app.config (or web.config) files modified during package installation and uninstallation. After that, you can look at all the different XDT transformation options in the following doc:

Web.config Transformation Syntax for Web Project Deployment Using Visual Studio

https://msdn.microsoft.com/en-us/library/dd465326(v=vs.110).aspx

Step 1: Create both app.config.install.xdt and app.config.uninstall.xdt

From Nuget site: “Starting with NuGet 2.6, XML-Document-Transform (XDT) is supported to transform XML files inside a project. The XDT syntax can be utilized in the .install.xdt and .uninstall.xdt file(s) under the package’s Content folder, which will be applied during package installation and uninstallation time, respectively.”

The location of these files doesn’t quite matter, although it’s even better if they sit in the same directory as the assemblies for your NuGet package. You’ll need to reference these two files as “content” folder locations in the .nuspec file. The .nuspec file is the blueprint for creating your NuGet package.

app.config.install.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
    </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key1" value="Value1" xdt:Transform="Insert"/>
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)"/>
    <add key="Key2" value="Value2" xdt:Transform="Insert" />
  </appSettings>
</configuration>

Let’s break this down. There are two appSettings nodes in this XML file: the first ensures the appSettings node exists (InsertIfMissing), and the second removes any key/value pair matching the key and then adds it again. Why the two-step process? It ensures you end up with only one entry per key. However, you could probably get away with using InsertIfMissing as well. The result after installation is shown below.
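
Assuming the project started with an empty (or missing) appSettings section, the app.config after installing the package would contain roughly the following:

<configuration>
  <appSettings>
    <add key="Key1" value="Value1" />
    <add key="Key2" value="Value2" />
  </appSettings>
</configuration>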

app.config.uninstall.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
    </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)"/>
  </appSettings>
</configuration>

The uninstall file is pretty straightforward: remove the appSettings keys if they exist. In this case, I’m not deleting the appSettings node itself; leaving the appSettings node in your config file will not cause any issues.

Step 2: Modify your .nuspec file to include both the .install.xdt and .uninstall.xdt file(s) as content files.

The .nuspec file is the core blueprint for generating your NuGet package. Here’s an example of a .nuspec file; for more information, go here: http://docs.nuget.org/Create/Nuspec-Reference

In this example, you’ll need to reference both the .install.xdt and .uninstall.xdt file(s) with the content folder as their target:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Package1</id>
    <version>1.1</version>
    <title>Nuget Package 1</title>
    <authors>QE Dev</authors>
    <owners>Don Tan</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package 1 Testing</description>
    <summary>Application Config change</summary>

    <releaseNotes>
      - Support for Application Config change
    </releaseNotes>
    <copyright>Copy Right</copyright>
    <language>en-US</language>
    <dependencies>
      <dependency id="Microsoft.ApplicationInsights" version="2.1.0" />
    </dependencies>
    <references>
      <reference file="Package1.dll" />
    </references>
  </metadata>
  <files>
    <file src="Package1.dll" target="lib\net45\Package1.dll.dll" />
    <!--Add Section to Uninstall and Re-install Application.Config files-->
    <file src="app.config.install.xdt" target="content" />
    <file src="app.config.uninstall.xdt" target="content" />
  </files>
</package>

Step 3: Test the generated NuGet package and verify that your application config (app.config) settings have been modified.
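
A quick way to verify: run nuget pack against the .nuspec file to produce the .nupkg, add the output folder as a package source, and install the package into a test project (for example, Install-Package Package1 from the Package Manager Console). After installation, the Key1/Key2 entries should appear under appSettings in app.config, and uninstalling the package should remove them again.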