IBM Data Server Provider (DB2) .Net Tips – Series 1

I’ve been working with IBM Informix lately, and in my previous post I mentioned a couple of ways to work with it. Now that I’ve settled on strictly using the DB2 provider (see that previous post for the reasoning), my colleagues and I have been developing abstractions for working with Informix via Entity Framework and/or IBM’s DB2 SDK.

So, you’ve just been asked to tackle some data access tasks against Informix via .Net? My colleague Jeff Crose (https://www.linkedin.com/in/jeff-crose-419797/) came up with some really cool tips worth blogging about. Jeff is a distinguished software developer on our team who works on backend data development across database platforms such as SQL Server and Informix. Shout out to Jeff for coming up with these tips!

TIP: Informix Overloaded Stored Procedures

Unlike some other relational database management systems, Informix supports overloaded stored procedures. That may not seem like a big deal, but it can lead you astray if you’re not careful. Consider the following sample:

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "George Washington";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "WA";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The first President of the United States";

            var customerid = cmd.ExecuteScalar();
            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

You’ve created the connection, created the command, and created parameters of the correct types to match the signature of the stored procedure, but you receive the following error message: “ERROR[IX000][IBM][IDS / UNIX64] Routine(customer_insert_overload) cannot be resolved.” As it turns out, the issue is not with the overload itself but with the fact that one of the parameters is a TEXT type. To get past the error, you need to explicitly cast that parameter to TEXT in the CommandText property.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "George Washington";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "WA";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The first President of the United States";

            var customerid = cmd.ExecuteScalar();
            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}
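For context, here’s what a pair of overloaded signatures might look like on the server. This SPL sketch is hypothetical (the actual procedure definitions aren’t shown in this post), but it illustrates the ambiguity the cast resolves: with an untyped placeholder for the third argument, the server can’t decide which routine to call.

-- Hypothetical overloads; only the third parameter's type differs,
-- so an untyped ? placeholder is ambiguous between them.
CREATE PROCEDURE customer_insert_overload(
    name VARCHAR(30), state CHAR(2), info LVARCHAR) RETURNING INT;
    -- insert using the LVARCHAR variant ...
END PROCEDURE;

CREATE PROCEDURE customer_insert_overload(
    name VARCHAR(30), state CHAR(2), info TEXT) RETURNING INT;
    -- insert using the TEXT variant ...
END PROCEDURE;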

TIP: ExecuteScalar vs. ExecuteNonQuery

Now that the code works, you may be wondering why the ExecuteScalar method was chosen over ExecuteNonQuery for an insert. In this case, the stored procedure returns the identity for the newly inserted row. Even though the stored procedure returns an integer, it does not behave like a SQL Server return value: a parameter marked ParameterDirection.ReturnValue will not capture it. The following code inserts the data successfully but does not return a value.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert_overload(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Andrew Jackson";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "MS";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The seventh President of the United States";
            cmd.Parameters.Add(new DB2Parameter("customerid", DB2Type.Integer));
            cmd.Parameters["customerid"].Direction = ParameterDirection.ReturnValue;

            cmd.ExecuteNonQuery();
            var customerid = cmd.Parameters["customerid"].Value;

            Console.WriteLine("customerid: {0}", customerid);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

To get the “customerid”, you need to use the ExecuteScalar method, as in the first sample above.

TIP: ExecuteRow for multiple return values

What if the stored procedure returns multiple values? The ExecuteRow method can be used to get all the values.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert(?, ?, ?::TEXT)";

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Abraham Lincoln";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "NE";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The sixteenth President of the United States";

            var row = cmd.ExecuteRow();
            Console.WriteLine("customerid: {0}, result: {1}", row[0], row[1]);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

Notice that the “customerinfo” parameter was once again explicitly cast to a TEXT type in the CommandText property. The following example, which omits the cast, will once again raise the “ERROR[IX000][IBM][IDS / UNIX64] Routine(customer_insert) can not be resolved.” exception.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_insert(?, ?, ?)"; 

            cmd.Parameters.Add(new DB2Parameter("customername", DB2Type.VarChar, 30)).Value = "Abraham Lincoln";
            cmd.Parameters.Add(new DB2Parameter("customerstate", DB2Type.Char, 2)).Value = "NE";
            cmd.Parameters.Add(new DB2Parameter("customerinfo", DB2Type.Text)).Value = "The sixteenth President of the United States";

            var row = cmd.ExecuteRow();
            Console.WriteLine("customerid: {0}, result: {1}", row[0], row[1]);
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

 

TIP: And Finally, the “ExecuteReader” …

With all the data inserted successfully, it’s time to find out how to return all the rows. Luckily, that part is straightforward: just call the ExecuteReader method and iterate through the results.

try
{
    using (DB2Connection conn = new DB2Connection(connectionString))
    {
        conn.Open();

        using (DB2Command cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.Text;
            cmd.CommandText = "EXECUTE PROCEDURE customer_select()";

            using (var dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                    Console.WriteLine("customer_id: {0}, customer_name: {1}, customer_info: {2}", dr[0], dr[1], dr[2]);
                }
            }
        }
    }
}
catch (DB2Exception exception)
{
    Console.WriteLine("Error Message: {0}", exception.Message);
}

Getting Started with Azure Data Catalog REST API

What is Azure Data Catalog? Simply put, Azure Data Catalog is a SaaS application hosted in Azure’s cloud stack. With Azure Data Catalog, enterprise customers can store information about their enterprise data source assets. It is built around the concepts of catalogs, assets, and annotations; for more information, go to: https://azure.microsoft.com/en-us/services/data-catalog/

We use Azure Data Catalog to organize, discover, and understand all of our backend data sources. With that in mind, I needed a way to automate the creation of data source assets (databases, tables, etc.) in Azure Data Catalog, because I didn’t want to spend the time creating and registering assets manually. Doing it by hand is tedious, especially if you have to deal with lots of databases and stored procedures.

Microsoft exposes a REST API for working with Azure Data Catalog. There is plenty of documentation out there, but it took me a while to get everything set up and working correctly, at least for searching existing assets and registering new ones.

Most of Microsoft’s documentation around the Azure Data Catalog API is located here:

https://docs.microsoft.com/en-us/rest/api/datacatalog/

This guide walks you through the steps of registering a catalog asset, with additional information on properly authenticating against Azure AD and a modified schema that includes annotations when registering or updating a catalog asset. Note that the samples below use native client authentication against Azure Active Directory.

Part 1

The first section covers creating an Azure Active Directory client app registration. We will use this to authenticate using either OAuth2 or federation.

Note: As of writing this blog post, the screenshots below are taken from the current UI of the Azure portal.

Register a client app in Azure Active Directory. When you register a client app in Azure Active Directory, you give your app access to the Data Catalog APIs. To register a client app:

1. Go to http://portal.azure.com

2. Click on “Azure Active Directory”

[Screenshot: ADC1]

3. Click on “App Registrations”

[Screenshot: ADC2]

4. Click on “Add” and provide a “Name”, “Application Type”, and “Redirect URI”. NOTE: The redirect URI is a unique identifier for the client to send the access token back to. It doesn’t have to be a valid URI; however, you need to keep track of it, because you will need it later to authenticate against the catalog API.

[Screenshot: ADC3]

Grant the client app access to the Azure Data Catalog API. To do this:

1. Click on “Settings” on the newly created app registration.

2. Click on “Required Permissions” then “Add”

3. On “Select an API”, pick “Microsoft Azure Data Catalog”

4. Take the defaults

5. IMPORTANT: Make sure you click on “GRANT PERMISSIONS” once you select “Microsoft Azure Data Catalog” as seen below. If you don’t do this, then your native client will not be able to authenticate properly on the Azure Data Catalog API.

[Screenshot: ADC4]

[Screenshot: ADC5]

Part 2

The second section covers authenticating against the Azure REST APIs, specifically the Azure Data Catalog API. The article below walks through calling the Azure Data Catalog API using the ADAL libraries for authentication. The information presented on Microsoft’s site is accurate as of this writing.

Authenticate a client app

https://docs.microsoft.com/en-us/rest/api/datacatalog/authenticate-a-client-app

A couple of notes on the steps mentioned above:

· “Register a Client App”: you just did this in the preceding steps. Make sure to write down the Client ID (the App ID of the newly created app in Azure Active Directory).

· Don’t use HttpWebRequest; rather, use HttpClient to authenticate. HttpClient has far more features than HttpWebRequest. That said, refer to this Microsoft article for examples of using HttpClient:

Calling a Web API From a .NET Client (C#)

https://docs.microsoft.com/en-us/aspnet/web-api/overview/advanced/calling-a-web-api-from-a-net-client

Part 3

Changes to the request body when registering data assets. This is where I spent most of my research: modifying the schema used when registering or updating assets, in this case to add annotations during the registration process. Microsoft provides basic schema definitions for registering assets but doesn’t provide enough detail on other schema values such as annotation experts, tags, and descriptions. Here’s a modified version of the schema that includes annotations when registering an asset.

{
  "properties": {
    "fromSourceSystem": false,
    "name": "table name",
    "dataSource": {
      "sourceType": "Db2",
      "objectType": "Table"
    },
    "dsl": {
      "protocol": "db2",
      "authentication": "windows",
      "address": {
        "server": "ServerName",
        "database": "DatabaseName",
        "object": "NameOfTable",
        "schema": "dbo"
      }
    },
    "lastRegisteredBy": {
      "upn": "smtp@address.com",
      "firstName": "Don",
      "lastName": "Tan"
    },
    "containerId": "containers/<SomeGuid>"
  },
  "annotations": {
    "schema": {
      "properties": {
        "fromSourceSystem": true,
        "columns": [
          {
            "name": "identity",
            "isNullable": false,
            "type": "Int32",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "Other Column",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "short_desc",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          }
        ]
      }
    },
    //Add Other Annotation Details
    "experts": [
      {
        "properties": {
          "expert": {
            "upn": "smtp@address.com",
            "objectId": "<SomeGuid>"
          },
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ],
    "descriptions": [
      {
        "properties": {
          "key": "<SomeGuid>",
          "fromSourceSystem": false,
          "description": "Some Description"
        }
      }
    ],
    "tags": [
      {
        "properties": {
          "tag": "Dtan",
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ]
  }
}

Part 4

Putting it all together: here’s a complete sample of how to invoke the Azure Data Catalog API using HttpClient in C#.

// The ResourceURI is used by the application to uniquely identify itself to Azure AD.
// The ClientId is used by the application to uniquely identify itself to Azure AD.
// The AAD Instance is the instance of Azure, for example public Azure or Azure China.
// The Authority is the sign-in URL (either the tenant or OAuth2 provider)
// The RedirectUri gives AAD more details about the specific application that it will authenticate.
// NOTE: Make sure that the ClientID has sufficient permissions against the resourceURI. In this case, Azure Data Catalog
//See article: https://docs.microsoft.com/en-us/rest/api/datacatalog/Register-a-client-app?redirectedfrom=MSDN#client

var ClientId = ConfigurationManager.AppSettings["ClientId"];
var ResourceUri = ConfigurationManager.AppSettings["ResourceUri"];
var RedirectUri = new Uri(ConfigurationManager.AppSettings["RedirectUri"]);
var Tenant = ConfigurationManager.AppSettings["Tenant"];
var AadInstance = ConfigurationManager.AppSettings["AADInstance"];
//OAuth2 provider
//private static readonly string Authority = String.Format(CultureInfo.InvariantCulture, "https://login.windows.net/common/oauth2/authorize");
//Tenant Authority
var Authority = String.Format(CultureInfo.InvariantCulture, AadInstance, Tenant);
var authContext = new AuthenticationContext(Authority);
var authResult =
    authContext.AcquireTokenAsync(ResourceUri, ClientId, RedirectUri,
        new PlatformParameters(PromptBehavior.RefreshSession)).Result;

using (var httpClient = new HttpClient())
{
    var requestbody = "{\"properties\":{\"fromSourceSystem\":false,\"name\":\"air_allowed\",\"dataSource\":{\"sourceType\":\"Db2\",\"objectType\":\"Table\"},\"dsl\":{\"protocol\":\"db2\",\"authentication\":\"windows\",\"address\":{\"server\":\"YourServerName\",\"database\":\"YourDatabase\",\"object\":\"YourTable\",\"schema\":\"dbo\"}},\"lastRegisteredBy\":{\"upn\":\"smtp@address.com \",\"firstName\":\"Don\",\"lastName\":\"Tan\"},\"containerId\":\"containers/42070252-e318-4a0a-8c73-a33c0dc8fd65\"},\"annotations\":{\"schema\":{\"properties\":{\"fromSourceSystem\":true,\"columns\":[{\"name\":\"Column1\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column2\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column3\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0}]}},\"experts\":[{\"properties\":{\"expert\":{\"upn\":\"smtp@address.com\",\"objectId\":\"fb7d1a8a-4ae6-4ee2-aaaa-9de5b4c598df\"},\"key\":\"52c4543b-ee75-42d7-95e7-3a01437fee58\",\"fromSourceSystem\":false}}],\"descriptions\":[{\"properties\":{\"key\":\"791bab95-428a-4941-b633-7d2d0cd9c75e\",\"fromSourceSystem\":false,\"description\":\"SomeDescription\"}}],\"tags\":[{\"properties\":{\"tag\":\"Dtan\",\"key\":\"a2a3f272-14a3-4a03-b85d-65af33022dc4\",\"fromSourceSystem\":false}}]}}";
    var url = "https://api.azuredatacatalog.com/catalogs/<yourcatalog>/views/tables?api-version=2016-03-30";
    httpClient.DefaultRequestHeaders.Add("Authorization", authResult.CreateAuthorizationHeader());
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var stringContent = new StringContent(requestbody);
    stringContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = httpClient.PostAsync(url, stringContent).Result;
}
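One addition worth making inside the using block, right after the PostAsync call: check the response before assuming the asset registered. A minimal sketch, assuming a successful registration returns the new asset’s URI in the Location header (verify against the API docs):

// Minimal response check (assumption: success returns 200/201 with
// the asset's URI in the Location header).
if (response.IsSuccessStatusCode)
{
    Console.WriteLine("Asset registered: {0}", response.Headers.Location);
}
else
{
    Console.WriteLine("Registration failed: {0} - {1}",
        (int)response.StatusCode,
        response.Content.ReadAsStringAsync().Result);
}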

[Screenshot: ADC6]

Working with Entity Framework 6.0 on IBM Informix V11.10+ in Visual Studio 2015

We know EF (Entity Framework) has many benefits when working with databases, particularly from a development and performance standpoint. There are two client connectivity SDKs for working with Informix databases:

IBM.Data.Informix.dll — Also referred to as the Common IDS .NET Provider. This assembly has been specifically created to help existing applications that were developed using the CSDK .NET Provider (SQLI protocol) to use the latest DRDA protocol support. It has additional support for some of the earlier Informix client features and is targeted only at .NET application development for Informix.

IBM.Data.DB2.dll — Also referred to as the DB2 .NET Provider. Although the name of the provider indicates DB2, it is in fact the single .NET provider for IBM database servers including DB2 and Informix. It is the recommended and preferred .NET provider for all clients targeting DB2 and for new application development targeting Informix (Version 11.10 or later).

These are referenced from IBM’s website:

https://www.ibm.com/developerworks/data/library/techarticle/dm-1007dsnetids/

IBM.Data.DB2.dll is the preferred approach and supports Entity Framework. More importantly, this is the version that IBM will support with new enhancements in conjunction with Entity Framework.

Note: In order to use the .Net Data Provider (DB2) for Entity Framework 6.0, you need to ensure that the DRDA protocol has been enabled on the Informix server. For more information on DRDA, including an overview and troubleshooting, see the following articles:

Overview of DRDA

https://www.ibm.com/support/knowledgecenter/SSGU8G_11.50.0/com.ibm.admin.doc/ids_admin_0206.htm

TCPIP communication errors with DRDA

http://www-01.ibm.com/support/docview.wss?uid=swg21164785

To get started with the .Net Data Provider for IBM Informix V11.10+ in Visual Studio 2015:

1) Download and install the latest updates for Visual Studio 2015 (as of writing this blog post, the current update is Update 3).

2) Download and Install the DSDriver Package (Data Server Driver Package) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35279_DSClients-ntx64-dsdriver-10.5.600.232-FP006%3A898521251824283008&includeSupersedes=0

Specifically, this version: “Special Build 35279 for IBM Data Server Driver Package (Windows/x86-64 64 bit) V10.5 Fix Pack 6” (special_35279_ntx64_dsdriver_EN.exe).

NOTE: As of writing this blog post, there may be newer fix pack versions of the DS Driver Package; however, the fix pack version above works well in VS 2015.

3) Download and Install VSAI (IBM Database Add-Ins for Visual Studio) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35192_DSClients-nt32-vsai-10.5.600.232-FP006%3A295467480640129088&includeSupersedes=0

Specifically, this version: “Special Build 35192 for IBM Database Add-Ins for Visual Studio (Windows/x86-32 32 bit) V10.5 Fix Pack 6” (special_35192_nt32_vsai.zip).

NOTE: As of writing this blog post, none of the Add-Ins for Visual Studio work on machines running Windows 10, and IBM hasn’t provided a solution for this problem. Also, do not use DS Driver Package V11; use DS Driver Package version 10.5+, which is specifically compiled for EF 6.0.

4) Install IBM Entity Framework 6.0 in your projects. Right-click on the project, select “Manage Nuget Packages”, and install the latest EntityFramework.IBM.DB2.

[Screenshot]

[Screenshot]

Sample Project to verify that you can use EF 6.0 connecting to IBM Informix Database (V11.10+):

  • Start off by creating a sample project in Visual Studio 2015. Any project type would be fine; however, for testing purposes, create a test project. This will let you write unit tests to verify EF against an Informix database (V11.10+).
  • Right-click on the project and select “Add”, then “New Item”.
  • From the list of items, select “ADO.NET Entity Data Model”, then select “IBM DB2 and IDS Servers”.

[Screenshot]

[Screenshot]

  • Follow the wizard, but on the first step, select “EF Designer from database”.
  • Click on “New Connection” and provide the proper server settings for the Informix DB server. Note that the DRDA protocol needs to be enabled on the target server; refer back to the top section of this post.

[Screenshot]

  • Click “OK”, then click “Next”.
  • Select the appropriate tables, then click “Finish”.

Once completed, your test project should contain the generated EF files, which you can use to connect to and work with the Informix server. Here’s an example of an auto-generated file: the context class that inherits DbContext from EF.

[Screenshot]

The tools in VS also generate the entities for you. Since we used the “EF Designer from database” template, all of the table-to-entity mappings are stored in the .EDMX file. You can explore this file visually or, to see the raw data, open it in a text editor such as Notepad.

[Screenshot]

As an example, I’ve written some unit tests to verify some data from a table in Informix V11.10+.

[Screenshot]
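In case the screenshot is hard to read, a test along these lines looks roughly like the following. This is a sketch: “InformixEntities” and the “customer” entity are placeholders for the classes the designer generated from your .edmx.

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InformixEfTests
{
    [TestMethod, TestCategory("Informix")]
    public void CustomerTable_ReturnsRows()
    {
        // Query through the generated context; names are hypothetical.
        using (var context = new InformixEntities())
        {
            var customers = context.customer.Take(10).ToList();
            Assert.IsTrue(customers.Any(), "Expected at least one customer row.");
        }
    }
}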

With EF, we can greatly improve how our applications integrate with Informix, and we can apply better design principles and patterns. A common design pattern for data access is the “Unit of Work” pattern; a minimal sketch follows the link below, and here’s a great article on implementing it with ASP.Net MVC:

https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
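As a quick taste of the pattern, here is a minimal sketch over the generated context. Assumptions: “InformixEntities” again stands in for whatever DbContext class the EF designer generated; the article above shows a fuller repository-based implementation.

using System;
using System.Data.Entity;
using System.Linq;

// Minimal Unit of Work sketch. All reads and writes share one context,
// so Commit() persists them together as a single unit.
public class UnitOfWork : IDisposable
{
    private readonly DbContext _context;

    public UnitOfWork(DbContext context)
    {
        _context = context;
    }

    public IQueryable<T> Query<T>() where T : class => _context.Set<T>();

    public void Add<T>(T entity) where T : class => _context.Set<T>().Add(entity);

    public void Commit() => _context.SaveChanges();

    public void Dispose() => _context.Dispose();
}

// Example usage (entity name hypothetical):
// using (var uow = new UnitOfWork(new InformixEntities()))
// {
//     uow.Add(new customer { customer_name = "Test" });
//     uow.Commit();
// }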

Validating and Unit Testing Web API 2 Route Attribute Parameters

Personally, I like to isolate business rules and/or validations outside of MVC controllers (in this case, API controllers). I use an ActionFilterAttribute to define checks on the parameters being passed in my MVC Web API routes.

Here’s an example of a WebAPI route with parameter binding:

// GET: /1/employees/AA0000111
[Route("{WebServiceVersion}/employees/{employeeId}")]
[ValidateEmployeeId]
public IHttpActionResult GetUser(string employeeid, int WebServiceVersion = 1)
{
    // GET: Do something with webServiceVersion value like logging.
    var user = _emprepository.GetUser(employeeid);
    return Content(HttpStatusCode.OK, user);
}

I want to isolate validating employeeid outside of my controller for a couple of reasons:

1) Isolation – You may have multiple cases to validate against your parameters. In this case, employeeId can be permutated in different ways, especially because it is a string. Other developers can easily get lost about what the action method is actually doing if it contains long code covering all the various validations.

2) Good development practice – I prefer to see nice clean code, with separation between what my controllers do and the business rules.

3) Testing – I can isolate testing on my controllers vs business rules. This is really the motivating factor for me.

That said, let’s take a look at the ActionFilterAttribute further. For more information, see the documentation for these namespaces (NOTE: there are two versions of ActionFilterAttribute, one in each):

System.Web.Http.Filters

System.Web.Mvc

When unit testing, make sure you’re writing the correct tests for your filter. In this case, I’m using the namespace: System.Web.Http.Filters

public class ValidateEmployeeIdAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var employeeid = actionContext.ActionArguments["employeeid"].ToString();
        if (string.IsNullOrEmpty(employeeid) || employeeid.ToLower() == "<somecheck>" ||
            employeeid.ToLower() == "<replace and use other validation such as regex>")
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest,
                $"Input parameter error, employeeId: {employeeid} - not specified, null or bad format",
                actionContext.ControllerContext.Configuration.Formatters.JsonFormatter);
        }
        base.OnActionExecuting(actionContext);
    }
}

Note in the preceding controller code that I decorated the Web API action method with [ValidateEmployeeId]. This instructs the controller to use the custom ActionFilterAttribute created above.
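As a side note, if every route in your API carries an employeeId, you could also register the filter globally instead of decorating each method. A sketch, assuming the standard WebApiConfig class (keep in mind this particular filter indexes ActionArguments["employeeid"], so global registration only makes sense when every action has that parameter):

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Runs the validation on every action instead of requiring
        // [ValidateEmployeeId] on each method.
        config.Filters.Add(new ValidateEmployeeIdAttribute());

        config.MapHttpAttributeRoutes();
    }
}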

Testing your custom validation via unit tests:

For simplicity, I used MSTest, which comes with Visual Studio.

[TestMethod, TestCategory("UserController")]
public void Validate_EmpId_ActionFilterAttribute()
{
    var mockactioncontext = new HttpActionContext
    {
        ControllerContext = new HttpControllerContext
        {
            Request = new HttpRequestMessage()
        },
        ActionArguments = { { "employeeid", "<somecheck>" } }
    };

    mockactioncontext.ControllerContext.Configuration = new HttpConfiguration();
    mockactioncontext.ControllerContext.Configuration.Formatters.Add(new JsonMediaTypeFormatter());

    var filter = new ValidateEmployeeIdAttribute();
    filter.OnActionExecuting(mockactioncontext);
    Assert.IsTrue(mockactioncontext.Response.StatusCode == HttpStatusCode.BadRequest);
}

At this point, you have a clean separation between testing your validations and testing your controllers.

Using Fiddler, I can see that whenever I submit a request with an invalid value for employeeid, I get the correct response:

[Screenshot: Fiddler trace]

Using XML Data Transform (XDT) to automatically configure app.config during Nuget Package Install

This should be fairly straightforward as mentioned on nuget.org’s site, right? Well, not quite. I’ve spent some time reading through the blog posts, and it’s not quite that simple. Hopefully this post is the simplified version. In my case, the scenario is simply to add entries to the appSettings node in the app.config file. Nuget.org’s site has the following docs:

Configuration File and Source Code Transformations

https://docs.nuget.org/create/configuration-file-and-source-code-transformations

How to use XDT in NuGet – Examples and Facts

http://blog.nuget.org/20130920/how-to-use-nugets-xdt-feature-examples-and-facts.html

The steps below will hopefully guide you through the initial steps to get your app.config (or web.config) files to be modified during and after installing your nuget packages. After which you can look at all different XDT transformation processes in the following doc:

Web.config Transformation Syntax for Web Project Deployment Using Visual Studio

https://msdn.microsoft.com/en-us/library/dd465326(v=vs.110).aspx

Step 1: Create both app.config.install.xdt and app.config.uninstall.xdt

From Nuget site: “Starting with NuGet 2.6, XML-Document-Transform (XDT) is supported to transform XML files inside a project. The XDT syntax can be utilized in the .install.xdt and .uninstall.xdt file(s) under the package’s Content folder, which will be applied during package installation and uninstallation time, respectively.”

The location of these files doesn’t quite matter, though if they’re located in the same directory as the assemblies for your nuget package, even better. You’ll need to reference these two files as “content” folder locations in the .nuspec file; the .nuspec file is the blueprint for creating your nuget package.

app.config.install.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings xdt:Transform="InsertIfMissing">
  </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key1" value="Value1" xdt:Transform="Insert" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" value="Value2" xdt:Transform="Insert" />
  </appSettings>
</configuration>

Let’s break this down. There are two appSettings nodes in this XML file: the first creates the appSettings node if it doesn’t exist (InsertIfMissing), and the second removes any key/value pair matching each key and then adds it again. Why this two-step process? It ensures that you only ever end up with one entry per key. (You could probably get away with using InsertIfMissing on the individual keys as well.)
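For example, on a project whose app.config previously had no appSettings section at all, the install transform above would leave the file looking like this:

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="Key1" value="Value1" />
    <add key="Key2" value="Value2" />
  </appSettings>
</configuration>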

app.config.uninstall.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings xdt:Transform="InsertIfMissing">
  </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>

The uninstall file is pretty straightforward: remove the app setting keys if they exist. Note that I’m not deleting the appSettings node itself; leaving an empty appSettings node in your config file will not cause any issues.

Step 2: Modify your nuspec file to include both the .install.xdt and .uninstall.xdt file(s) as content folders.

The .nuspec file is the core blueprint for generating your nuget package. Here’s an example; for more information, go here: http://docs.nuget.org/Create/Nuspec-Reference

In this example, you reference both the .install.xdt and .uninstall.xdt file(s) as target content folders:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Package1</id>
    <version>1.1</version>
    <title>Nuget Package 1</title>
    <authors>QE Dev</authors>
    <owners>Don Tan</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package 1 Testing</description>
    <summary>Application Config change</summary>
    <releaseNotes>
      - Support for Application Config change
    </releaseNotes>
    <copyright>Copy Right</copyright>
    <language>en-US</language>
    <dependencies>
      <dependency id="Microsoft.ApplicationInsights" version="2.1.0" />
    </dependencies>
    <references>
      <reference file="Package1.dll" />
    </references>
  </metadata>
  <files>
    <file src="Package1.dll" target="lib\net45\Package1.dll" />
    <!--Add Section to Uninstall and Re-install Application.Config files-->
    <file src="app.config.install.xdt" target="content" />
    <file src="app.config.uninstall.xdt" target="content" />
  </files>
</package>

Step 3: Test the generated nuget package and verify that your application config (app.config) settings are modified.
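A quick way to test locally is with the standard NuGet client commands; the package file name below follows the id/version from the sample .nuspec, and the local feed path is just an example:

nuget pack Package1.nuspec
nuget add Package1.1.1.nupkg -Source C:\LocalFeed

Point Visual Studio at the local feed, install the package into a scratch project, and inspect the project’s app.config for the inserted keys.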

Selecting VSTS Nuget Endpoints for Nuget Clients…

While I love utilizing private VSTS Nuget endpoints, it’s been painful for folks (the people I work with day in and day out) to use the endpoints correctly. In particular, clients that use older versions of the Nuget client (< 2.9) will most likely either get an endless authentication prompt, or will authenticate but not receive any data (package details, etc.).

How to fix it? It’s as simple as changing the URL from your clients.

Clients that have a 3.x version of Nuget should use this URL:

https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v3/index.json

Supported clients:

  • Visual Studio 2015 +
  • Nuget.exe 3.x

Clients that have a Nuget version below 2.9 should use this URL:

https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v2

Supported clients:

  • < Visual Studio 2013 (including updates)
  • < Nuget.exe 2.9
  • < Xamarin Studio
  • Nuget Package Explorer

Outside of pointing to the correct endpoints, make sure you have the proper authentication set up in your development environment, which involves:

  1. Getting a Personal Access Token (PAT)
  2. Selecting the correct method for storing your credentials. I personally like the bootstrap method of storing Nuget credentials in your development environment. (A sample nuget.config for a v3 feed is shown below.)
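For reference, a minimal nuget.config pointing your client at a v3 feed might look like this (the feed name is an example, and the URL placeholders follow the pattern above):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="MyVstsFeed" value="https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v3/index.json" />
  </packageSources>
</configuration>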

For more information on Nuget and the different versions, go to: http://www.nuget.org/

From Nuget’s website:

NuGet Feed Locations

NuGet delivers packages to your project from a feed URL that provides interactions with the repository for your NuGet client. For NuGet.org, configure your NuGet clients to use one of the following repository URLs:

Custom Build Triggers in VSTS

In my previous posts, I’ve shown how to use VSTS (formerly known as VSO) to trigger continuous testing using builds and release management. I was able to utilize the new reporting capabilities in Build, particularly test reports. I created reports that show pass/fail trends for tests in my build definitions.

[Screenshot: Pass/fail trend report]

There are still limitations (or, in this case, features I wish Microsoft would consider, such as customizing test reports from builds and showing pass/fail trends past 10 builds). My biggest disappointment thus far is not being able to schedule builds (with tests) using a recurring pattern. As of writing this post, you can schedule builds in VSTS; however, you have to manually keep adding scheduled times.

[Screenshot: Build schedule settings]

Imagine a scenario where you need to run a build every hour (or half hour): you would have to manually add a new time for each run, in this case 24 of them. Very inconvenient.

Fortunately, VSTS has public APIs that allow us to trigger and queue builds. With the public APIs, I was able to write a very simple console app and use Windows’ built-in Task Scheduler. One could ask: why not create a Windows service? That’s an option, but why develop a Windows service, further complicating the process, when Windows has a Task Scheduler that’s been tested and used far more broadly?

Below is the code:

NOTE: You need to refer to the following Nuget Packages:

  • Microsoft.TeamFoundationServer.ExtendedClient
  • Microsoft.TeamFoundationServer.Client
  • Microsoft.VisualStudio.Services.Client
  • Microsoft.VisualStudio.Services.InteractiveClient
static class Program
{
    static void Main(string[] args)
    {
        var buildoutputmodel = SetupBuildOutputModel();
        var vssconnection = new VssConnection(
            new Uri(buildoutputmodel.VsoUrl),
            new VssBasicCredential(buildoutputmodel.UserName, buildoutputmodel.Password)
            );
        var buildHttpClient = vssconnection.GetClient<BuildHttpClient>();
        //Below is my implementation of triggering multiple builds. I simply used the app.config to specify the build's ID, split each entry and validate.
        ConfigurationManager.AppSettings["builddefinitionids"].Split(',').ToList().ForEach(
            buildid =>
            {
                string stringoutput;
                try
                {
                    var id = buildid.ValidateBuildId();
                    DefinitionReference definitionReference = new DefinitionReference
                    {
                        Id = id,
                        Project = new TeamProjectReference
                        {
                            Name = buildoutputmodel.TeamProjectName
                        }
                    };
                    var build = new Build { Definition = definitionReference };
                    //This is where you trigger the build
                    var buildnumber = buildHttpClient.QueueBuildAsync(build,
                        buildoutputmodel.TeamProjectName).Result;
                    stringoutput = $"Build Triggered... \nBuild Number: {buildnumber} \nBuild Definition ID: {definitionReference.Id} \nTeam Project: {definitionReference.Project.Name}\n";
                    Console.WriteLine(stringoutput);
                    AsLogger.Info(stringoutput);
                }
                catch (Exception ex)
                {
                    stringoutput = $"Exception Occurred: \n{ex.Message} \n{ex.InnerException}\n";
                    Console.WriteLine(stringoutput);
                    AsLogger.Error(stringoutput);
                }
            });
    }

    private static BuildOutputModel SetupBuildOutputModel()
    {
        return new BuildOutputModel
        {
            UserName = ConfigurationManager.AppSettings["username"],
            Password = ConfigurationManager.AppSettings["password"],
            VsoUrl = ConfigurationManager.AppSettings["vsourl"],
            TeamProjectName = ConfigurationManager.AppSettings["teamproject"],
            BuilDefinitionName = ConfigurationManager.AppSettings["builddefinition"],
            GitRepo = ConfigurationManager.AppSettings["gitrepo"]
        };
    }
}

Once you compile the code (.exe), simply create a scheduled task using Windows’ Task Scheduler:

[Screenshot: Task Scheduler]
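Equivalently, you can create the task from the command line with schtasks; the task name, executable path, and schedule below are examples:

schtasks /create /tn "VSTS Hourly Build Trigger" /tr "C:\Tools\BuildTrigger.exe" /sc hourly /mo 1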

Then the execution:

[Screenshot: Queued build in VSTS]