Getting Started with Azure Data Catalog REST API

What is Azure Data Catalog? Simply put, Azure Data Catalog is a SaaS application hosted within Azure's cloud stack. With Azure Data Catalog, enterprise customers can store information about their enterprise data source assets. It is built around the concepts of catalogs, assets and annotations; for more information, go to: https://azure.microsoft.com/en-us/services/data-catalog/

We use Azure Data Catalog to organize, discover and understand all of our backend data sources. With that in mind, I needed a solution to automate registering data source assets (databases, tables, etc.) in Azure Data Catalog, since I didn't want to spend the time creating/registering assets manually. It is a tedious process to do manually, especially if you have to deal with lots of databases and stored procedures.

Microsoft exposes an API for working with Azure Data Catalog. There is plenty of documentation out there, but it really took me a while to get everything set up and working correctly, at least for searching existing assets and registering a new one.

Most of Microsoft’s documentation around the Azure Data Catalog API is located here:

https://docs.microsoft.com/en-us/rest/api/datacatalog/

This guide walks you through the steps of registering a catalog asset, with additional information on properly authenticating against Azure AD and a modified schema that includes annotations when registering or updating a catalog asset. Note that the sample below uses native client authentication to Azure Active Directory.

Part 1

The first section talks about creating an Azure Active Directory client app registration. We will use this to authenticate using either OAuth2 or federation.

Note: As of writing this blog post, the screenshots below were taken from the current UI of the Azure portal.

Register a client app in Azure Active Directory. When you register a client app in Azure Active Directory, you give your app access to the Data Catalog APIs. To register a client app:

1. Go to https://portal.azure.com

2. Click on "Azure Active Directory"

ADC1

3. Click on "App Registrations"

ADC2

4. Click on "Add" and provide a "Name", "Application Type" and "Redirect URI". NOTE: The redirect URI is a unique identifier for the client to send the access token back to. It doesn't have to be a valid URI; however, you need to keep track of it, as you will need it later to authenticate against the catalog API.

ADC3

GRANT the client app access to the Azure Data Catalog API. To do this:

1. Click on “Settings” on the newly created app registration.

2. Click on “Required Permissions” then “Add”

3. On “Select an API”, pick “Microsoft Azure Data Catalog”

4. Take the defaults

5. IMPORTANT: Make sure you click on "GRANT PERMISSIONS" once you select "Microsoft Azure Data Catalog", as seen below. If you don't do this, your native client will not be able to authenticate properly against the Azure Data Catalog API.

ADC4

ADC5

Part 2

The second section talks about authenticating against the Azure REST API, particularly the Azure Data Catalog API. The article below guides you through the steps of calling the Azure Data Catalog API using the ADAL libraries for authentication. The information presented below from Microsoft's site is accurate as of this writing.

Authenticate a client app

https://docs.microsoft.com/en-us/rest/api/datacatalog/authenticate-a-client-app

A couple of notes on the steps mentioned above:

· "Register a Client App": you just did this in the preceding steps. Make sure to write down the Client ID (or App ID) of the newly created app in Azure Active Directory.

· Don't use HttpWebRequest; rather, use HttpClient to authenticate. HttpClient has far more features than HttpWebRequest. That said, refer to this Microsoft article for examples using HttpClient.

Calling a Web API From a .NET Client (C#)

https://docs.microsoft.com/en-us/aspnet/web-api/overview/advanced/calling-a-web-api-from-a-net-client
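
As a quick taste of what Part 4 covers in full, here's a minimal sketch of searching the catalog for existing assets with HttpClient, assuming you've already acquired an ADAL token (authResult) as shown later; the catalog name and search term are placeholders, and the exact query parameters are covered in Microsoft's search documentation:

using (var httpClient = new HttpClient())
{
    // Attach the bearer token acquired via ADAL and ask for JSON back.
    httpClient.DefaultRequestHeaders.Add("Authorization", authResult.CreateAuthorizationHeader());
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    // Search endpoint; see Microsoft's docs for the full set of query parameters.
    var searchUrl = "https://api.azuredatacatalog.com/catalogs/<yourcatalog>/search/search" +
                    "?searchTerms=YourTableName&count=10&api-version=2016-03-30";

    var response = httpClient.GetAsync(searchUrl).Result;
    var json = response.Content.ReadAsStringAsync().Result; // raw JSON search results
    Console.WriteLine(json);
}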

Part 3

Changes to the request body when registering data assets. This is the part where I spent most of my research: modifying the schema for registering or updating assets, in this case to add annotations during the registration process. Microsoft provides basic schema definitions for registering assets but doesn't provide enough detail on other schema values such as annotation experts, tags and descriptions. Here's a modified version of the schema for registering an asset that includes annotations.

{
  "properties": {
    "fromSourceSystem": false,
    "name": "table name",
    "dataSource": {
      "sourceType": "Db2",
      "objectType": "Table"
    },
    "dsl": {
      "protocol": "db2",
      "authentication": "windows",
      "address": {
        "server": "ServerName",
        "database": "DatabaseName",
        "object": "NameOfTable",
        "schema": "dbo"
      }
    },
    "lastRegisteredBy": {
      "upn": "smtp@address.com",
      "firstName": "Don",
      "lastName": "Tan"
    },
    "containerId": "containers/<SomeGuid>"
  },
  "annotations": {
    "schema": {
      "properties": {
        "fromSourceSystem": true,
        "columns": [
          {
            "name": "identity",
            "isNullable": false,
            "type": "Int32",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "Other Column",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          },
          {
            "name": "short_desc",
            "isNullable": false,
            "type": "String",
            "maxLength": 0,
            "precision": 0
          }
        ]
      }
    },
    "experts": [
      {
        "properties": {
          "expert": {
            "upn": "smtp@address.com",
            "objectId": "<SomeGuid>"
          },
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ],
    "descriptions": [
      {
        "properties": {
          "key": "<SomeGuid>",
          "fromSourceSystem": false,
          "description": "Some Descrption"
        }
      }
    ],
    "tags": [
      {
        "properties": {
          "tag": "Dtan",
          "key": "<SomeGuid>",
          "fromSourceSystem": false
        }
      }
    ]
  }
}

Part 4

Putting it all together, here's a complete sample of how to invoke the Azure Data Catalog API using HttpClient in C#.

using System;
using System.Configuration;
using System.Globalization;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// The ResourceUri identifies the resource (API) the app requests access to - in this case, Azure Data Catalog.
// The ClientId is used by the application to uniquely identify itself to Azure AD.
// The AAD Instance is the instance of Azure, for example public Azure or Azure China.
// The Authority is the sign-in URL (either the tenant or OAuth2 provider)
// The RedirectUri gives AAD more details about the specific application that it will authenticate.
// NOTE: Make sure that the ClientID has sufficient permissions against the resourceURI. In this case, Azure Data Catalog
//See article: https://docs.microsoft.com/en-us/rest/api/datacatalog/Register-a-client-app?redirectedfrom=MSDN#client

var ClientId = ConfigurationManager.AppSettings["ClientId"];
var ResourceUri = ConfigurationManager.AppSettings["ResourceUri"];
var RedirectUri = new Uri(ConfigurationManager.AppSettings["RedirectUri"]);
var Tenant = ConfigurationManager.AppSettings["Tenant"];
var AadInstance = ConfigurationManager.AppSettings["AADInstance"];
//OAuth2 provider
//private static readonly string Authority = String.Format(CultureInfo.InvariantCulture, "https://login.windows.net/common/oauth2/authorize");
//Tenant Authority
var Authority = String.Format(CultureInfo.InvariantCulture, AadInstance, Tenant);
var authContext = new AuthenticationContext(Authority);
var authResult =
    authContext.AcquireTokenAsync(ResourceUri, ClientId, RedirectUri,
        new PlatformParameters(PromptBehavior.RefreshSession)).Result;

using (var httpClient = new HttpClient())
{
    var requestbody = "{\"properties\":{\"fromSourceSystem\":false,\"name\":\"air_allowed\",\"dataSource\":{\"sourceType\":\"Db2\",\"objectType\":\"Table\"},\"dsl\":{\"protocol\":\"db2\",\"authentication\":\"windows\",\"address\":{\"server\":\"YourServerName\",\"database\":\"YourDatabase\",\"object\":\"YourTable\",\"schema\":\"dbo\"}},\"lastRegisteredBy\":{\"upn\":\"smtp@address.com \",\"firstName\":\"Don\",\"lastName\":\"Tan\"},\"containerId\":\"containers/42070252-e318-4a0a-8c73-a33c0dc8fd65\"},\"annotations\":{\"schema\":{\"properties\":{\"fromSourceSystem\":true,\"columns\":[{\"name\":\"Column1\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column2\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0},{\"name\":\"Column3\",\"isNullable\":false,\"type\":\"String\",\"maxLength\":0,\"precision\":0}]}},\"experts\":[{\"properties\":{\"expert\":{\"upn\":\"smtp@address.com\",\"objectId\":\"fb7d1a8a-4ae6-4ee2-aaaa-9de5b4c598df\"},\"key\":\"52c4543b-ee75-42d7-95e7-3a01437fee58\",\"fromSourceSystem\":false}}],\"descriptions\":[{\"properties\":{\"key\":\"791bab95-428a-4941-b633-7d2d0cd9c75e\",\"fromSourceSystem\":false,\"description\":\"SomeDescription\"}}],\"tags\":[{\"properties\":{\"tag\":\"Dtan\",\"key\":\"a2a3f272-14a3-4a03-b85d-65af33022dc4\",\"fromSourceSystem\":false}}]}}";
    var url = "https://api.azuredatacatalog.com/catalogs/<yourcatalog> /views/tables?api-version=2016-03-30";
    httpClient.DefaultRequestHeaders.Add("Authorization", authResult.CreateAuthorizationHeader());
    httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var stringContent = new StringContent(requestbody);
    stringContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = httpClient.PostAsync(url, stringContent).Result;
}
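
One thing the sample doesn't show is checking the outcome. A quick check (a sketch, placed just after the PostAsync call inside the using block; per the Data Catalog docs, the response to a successful registration includes the asset's URI) might look like this:

// Sketch: verify the registration succeeded before moving on.
if (response.IsSuccessStatusCode)
{
    var assetUri = response.Content.ReadAsStringAsync().Result;
    Console.WriteLine($"Asset registered: {assetUri}");
}
else
{
    Console.WriteLine($"Registration failed with status: {response.StatusCode}");
}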

ADC6


Working with Entity Framework 6.0 on IBM Informix V11.10+ in Visual Studio 2015

We know EF (Entity Framework) has many benefits for working with databases, particularly from a development and performance standpoint. There are two versions of client connectivity SDKs for working with Informix databases:

IBM.Data.Informix.dll – Also referred to as the Common IDS .NET Provider. This assembly has been specifically created to help existing applications that were developed using the CSDK .NET Provider (SQLI protocol) to use the latest DRDA protocol support. It has additional support for some of the earlier Informix client features and is targeted only for .NET application development for Informix.

IBM.Data.DB2.dll – Also referred to as the DB2 .NET Provider. Although the name of the provider indicates DB2, it is in fact the single .NET provider for IBM database servers including DB2 and Informix. It is the recommended and preferred .NET provider for all clients targeting DB2 and new application development targeting Informix (Version 11.10 or later).

These are referenced from IBM’s website:

https://www.ibm.com/developerworks/data/library/techarticle/dm-1007dsnetids/

IBM.Data.DB2.dll is the preferred approach and uses Entity Framework. More importantly, this is the version that IBM will support for new enhancements in conjunction with Entity Framework.

Note: In order to use the .NET Data Provider (DB2) for Entity Framework 6.0, you need to ensure that the DRDA protocol has been enabled on the Informix server. For a DRDA overview and troubleshooting information, see the following articles:

Overview of DRDA

https://www.ibm.com/support/knowledgecenter/SSGU8G_11.50.0/com.ibm.admin.doc/ids_admin_0206.htm

TCPIP communication errors with DRDA

http://www-01.ibm.com/support/docview.wss?uid=swg21164785

To get started with the .NET Data Provider for IBM Informix V11.10+ in Visual Studio 2015:

1) Download and install the latest updates for Visual Studio 2015 (as of writing this blog post, the current update is Update 3)

2) Download and Install the DSDriver Package (Data Server Driver Package) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35279_DSClients-ntx64-dsdriver-10.5.600.232-FP006%3A898521251824283008&includeSupersedes=0

Specifically, this version: "Special Build 35279 for IBM Data Server Driver Package (Windows/x86-64 64 bit) V10.5 Fix Pack 6" (special_35279_ntx64_dsdriver_EN.exe)

NOTE: As of writing this blog post, there may be newer fix pack versions of the DS Driver Package; however, the fix pack version above works well in VS 2015.

3) Download and Install VSAI (IBM Database Add-Ins for Visual Studio) from IBM’s site:

https://www-945.ibm.com/support/fixcentral/swg/selectFixes?parent=ibm~Information%2BManagement&product=ibm/Information+Management/IBM+Data+Server+Client+Packages&release=All&platform=All&function=fixId&fixids=special_35192_DSClients-nt32-vsai-10.5.600.232-FP006%3A295467480640129088&includeSupersedes=0

Specifically, this version: "Special Build 35192 for IBM Database Add-Ins for Visual Studio (Windows/x86-32 32 bit) V10.5 Fix Pack 6" (special_35192_nt32_vsai.zip)

NOTE: As of writing this blog post, none of the Add-Ins for Visual Studio work on machines running Windows 10. IBM hasn't provided a solution for this problem. Also, do not use DS Driver Package V11; use DS Driver Package version 10.5+, which is specifically compiled for EF 6.0.

4) Install IBM Entity Framework 6.0 in your projects. Right-click on the project, select "Manage NuGet Packages" and install the latest EntityFramework.IBM.DB2.

image

image

Sample Project to verify that you can use EF 6.0 connecting to IBM Informix Database (V11.10+):

  • Start off by creating a sample project in Visual Studio 2015. Any project type would be fine; however, for testing purposes, create a test project. This will allow you to write unit tests to verify EF against an Informix database (V11.10+)
  • Right click on the project and Select “Add” then “New Item”.
  • From the list of items, select "ADO.NET Entity Data Model", then select "IBM DB2 and IDS Servers"

image

image

  • Follow the wizard, but on the first step select "EF Designer from database"
  • Click on “New Connection” and provide the proper server settings for the Informix DB server. Note that DRDA protocol needs to be enabled on the target server. Refer back to the top section of this post.

image

  • Click "OK", then click on "Next"
  • Select the appropriate tables, then click "Finish"

Once completed, your test project should have generated EF files which you can use to connect to and work with the Informix server. Here's an example of an auto-generated file: the actual context class that inherits from EF's DbContext (a hand-written sketch follows the screenshot).

image
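
For illustration, here's a sketch of what that generated context typically looks like; the InformixEntities and Customer names are hypothetical, and your designer output will use your own model and table names:

using System.Data.Entity;
using System.Data.Entity.Infrastructure;

namespace InformixTest
{
    // Database-first contexts generated by the designer inherit DbContext
    // and expose a DbSet per selected table.
    public partial class InformixEntities : DbContext
    {
        public InformixEntities()
            : base("name=InformixEntities") // connection string name in app.config
        {
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Database-first models load their mappings from the .edmx,
            // so code-first model creation is intentionally blocked.
            throw new UnintentionalCodeFirstException();
        }

        public virtual DbSet<Customer> Customers { get; set; }
    }
}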

The tools in VS also generate the entities for you. Since we used the "EF Designer from database" template, all of the table-to-entity mappings are actually stored in the .edmx file. You can explore this file visually or, to see the raw data, open it in a text editor such as Notepad.

image

As an example, I’ve written some unit tests to verify some data from a table in Informix V11.10+.

image

With EF, we can greatly improve how our applications integrate with Informix, and we can apply better design principles and patterns. A common design pattern for working with databases is the "Unit of Work". Here's a great article on how to implement the Unit of Work design pattern with ASP.NET MVC (a rough sketch follows the link below):

https://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
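
As a rough sketch of the idea (greatly simplified from the article above, and reusing the hypothetical InformixEntities context from earlier):

using System;
using System.Data.Entity;

// Unit of Work sketch: one DbContext shared across the operations in a
// business transaction so related changes commit atomically in one SaveChanges.
public class UnitOfWork : IDisposable
{
    private readonly InformixEntities _context = new InformixEntities();

    public DbSet<Customer> Customers => _context.Customers;

    public void Save() => _context.SaveChanges();

    public void Dispose() => _context.Dispose();
}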

Validating and Unit Testing Web API (2) Route Attribute Parameters

Personally, I like to isolate business rules and/or validations outside of MVC controllers, in this case API controllers. I use ActionFilterAttribute to define checks on parameters passed in my MVC Web API routes.

Here’s an example of a WebAPI route with parameter binding:

// GET: /1/employees/AA0000111
[Route("{WebServiceVersion}/employees/{employeeId}")]
[ValidateEmployeeId]
public IHttpActionResult GetUser(string employeeid, int WebServiceVersion = 1)
{
    // Do something with the WebServiceVersion value, like logging.
    var user = _emprepository.GetUser(employeeid);
    return Content(HttpStatusCode.OK, user);
}

I want to isolate validating employeeid outside of my controller for a couple of reasons:

1) Isolation – You may have multiple cases for validating your parameters. In this case, employeeId can be permuted in different ways, especially because it is a string. Other developers can easily get lost in what the controller action is actually doing if you have long code that includes all the various validations.

2) Good development practice – I prefer to see nice clean code and separation between what my controllers do and business rules.

3) Testing – I can test my controllers and business rules in isolation. This is really the motivating factor for me.

That said, let’s take a look at the ActionFilterAttribute further. For more information on this, see:

(NOTE: There are 2 versions of ActionFilterAttribute)

System.Web.Http.Filters

System.Web.Mvc

When unit testing, make sure you’re writing the correct tests for your filter. In this case, I’m using the namespace: System.Web.Http.Filters

public class ValidateEmployeeIdAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var employeeid = actionContext.ActionArguments["employeeid"].ToString();
        if (string.IsNullOrEmpty(employeeid) || employeeid.ToLower() == "<somecheck>" ||
            employeeid.ToLower() == "<replace and use other validation such as regex>")
        {
            // Short-circuit the request with a 400 before the action runs.
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.BadRequest,
                $"Input parameter error, employeeId: {employeeid} - not specified, null or bad format",
                actionContext.ControllerContext.Configuration.Formatters.JsonFormatter);
        }
        base.OnActionExecuting(actionContext);
    }
}

Note in the preceding controller code that I decorated the Web API action method with [ValidateEmployeeId].

This instructs the controller to use the custom ActionFilterAttribute created above.

Testing your custom validation via unit tests:

For simplicity, I used MSTest, which comes with Visual Studio.

[TestMethod, TestCategory("UserController")]
public void Validate_EmpId_ActionFilterAttribute()
{
    // Build a minimal HttpActionContext carrying the argument the filter inspects.
    var mockactioncontext = new HttpActionContext
    {
        ControllerContext = new HttpControllerContext
        {
            Request = new HttpRequestMessage()
        },
        ActionArguments = { { "employeeid", "<somecheck>" } }
    };

    mockactioncontext.ControllerContext.Configuration = new HttpConfiguration();
    mockactioncontext.ControllerContext.Configuration.Formatters.Add(new JsonMediaTypeFormatter());

    var filter = new ValidateEmployeeIdAttribute();
    filter.OnActionExecuting(mockactioncontext);
    Assert.IsTrue(mockactioncontext.Response.StatusCode == HttpStatusCode.BadRequest);
}

At this point, you have separated your validation code from your controller code.

Using Fiddler, I can see that whenever I submit a request with an invalid value for employeeid, I get the correct response:

fiddlertrace

Using XML Data Transform (XDT) to automatically configure app.config during Nuget Package Install

This should be fairly straightforward as described on nuget.org's site, right? Well, not quite. I've spent some time reading through the blog posts, and it's not quite straightforward. Hopefully this post is the simplified version. In my case, the scenario is simply to add entries to the appSettings node within the app.config file. Nuget.org's site has the following docs:

Configuration File and Source Code Transformations

https://docs.nuget.org/create/configuration-file-and-source-code-transformations

How to use XDT in NuGet – Examples and Facts

http://blog.nuget.org/20130920/how-to-use-nugets-xdt-feature-examples-and-facts.html

The steps below will hopefully guide you through the initial steps to get your app.config (or web.config) files to be modified during and after installing your nuget packages. After which you can look at all different XDT transformation processes in the following doc:

Web.config Transformation Syntax for Web Project Deployment Using Visual Studio

https://msdn.microsoft.com/en-us/library/dd465326(v=vs.110).aspx

Step 1: Create both app.config.install.xdt and app.config.uninstall.xdt

From Nuget site: “Starting with NuGet 2.6, XML-Document-Transform (XDT) is supported to transform XML files inside a project. The XDT syntax can be utilized in the .install.xdt and .uninstall.xdt file(s) under the package’s Content folder, which will be applied during package installation and uninstallation time, respectively.”

The location of these files doesn't quite matter. If they are located in the same directory as the assemblies for your NuGet package, even better. You'll need to reference these two files as "content" folder locations in the .nuspec file; the .nuspec file is the blueprint for creating your NuGet package.

app.config.install.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
    </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key1" value="Value1" xdt:Transform="Insert"/>
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)"/>
    <add key="Key2" value="Value2" xdt:Transform="Insert" />
  </appSettings>
</configuration>

Let's break this down. There are two appSettings nodes in this XML file: one to create the appSettings node if it doesn't exist (InsertIfMissing), and a second that removes the key/value pair matching the key and then adds it again. Why this two-step process? It ensures that you will only have one entry per key. However, you could probably get away with using InsertIfMissing as well.

app.config.uninstall.xdt

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings xdt:Transform="InsertIfMissing">
    </appSettings>
  <appSettings>
    <add key="Key1" xdt:Transform="Remove" xdt:Locator="Match(key)" />
    <add key="Key2" xdt:Transform="Remove" xdt:Locator="Match(key)"/>
 </appSettings>
</configuration>

The uninstall file is pretty straightforward: remove the appSettings keys if they exist. Note that in this case I'm not deleting the appSettings node itself; leaving it in your config file will not cause any issues.

Step 2: Modify your nuspec file to include both the .install.xdt and .uninstall.xdt file(s) as content folders.

The .nuspec file is the core blueprint for generating your NuGet package. Here's an example of a .nuspec file; for more information, go here: http://docs.nuget.org/Create/Nuspec-Reference

In this example, you'll need to reference both the .install.xdt and .uninstall.xdt files with the content folder as their target:

<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2011/08/nuspec.xsd">
  <metadata>
    <id>Package1</id>
    <version>1.1</version>
    <title>Nuget Package 1</title>
    <authors>QE Dev</authors>
    <owners>Don Tan</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Package 1 Testing</description>
    <summary>Application Config change</summary>
    <releaseNotes>
      - Support for Application Config change
    </releaseNotes>
    <copyright>Copy Right</copyright>
    <language>en-US</language>
    <dependencies>
      <dependency id="Microsoft.ApplicationInsights" version="2.1.0" />
    </dependencies>
    <references>
      <reference file="Package1.dll" />
    </references>
  </metadata>
  <files>
    <file src="Package1.dll" target="lib\net45\Package1.dll.dll" />
    <!--Add Section to Uninstall and Re-install Application.Config files-->
    <file src="app.config.install.xdt" target="content" />
    <file src="app.config.uninstall.xdt" target="content" />
  </files>
</package>

Step 3: Test the generated NuGet package and verify that your application config (app.config) settings have been modified.
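
If you haven't generated the package yet, you can pack it from the .nuspec with the NuGet CLI, drop the .nupkg into a local folder feed, and install it from the Package Manager Console to watch the transform run (the feed path below is a placeholder):

nuget pack Package1.nuspec
Install-Package Package1 -Source C:\LocalFeed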

Selecting VSTS NuGet Endpoints for NuGet Clients…

While I love utilizing private VSTS NuGet endpoints, it's been painful for folks (people I work with day in and day out) to use the endpoints correctly. In particular, if you have clients that use older versions of the NuGet client (< 2.9), those clients will most likely be shown an endless authentication prompt, or will authenticate but not receive any data (package details, etc.).

How to fix it? It's as simple as changing the URL on your clients.

For clients on NuGet 3.x, use this URL:

https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v3/index.json

Supported clients:

  • Visual Studio 2015 +
  • Nuget.exe 3.x

For clients on NuGet versions below 2.9, use this URL:

https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v2

Supported clients:

  • < Visual Studio 2013 (including updates)
  • < Nuget.exe 2.9
  • < Xamarin Studio
  • Nuget Package Explorer
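
For reference, here's a sketch of a NuGet.config pointing a 3.x client at a VSTS feed; the endpoint and feed name placeholders are the same ones used above:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="MyVstsFeed" value="https://<yourvstsendpoint>.pkgs.visualstudio.com/DefaultCollection/_packaging/<nameofyournugetfeed>/nuget/v3/index.json" />
  </packageSources>
</configuration>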

Outside of pointing to the correct endpoints, make sure you have the proper authentication setup in your development environment, which involves:

  1. Getting a Personal Access Token (PAT)
  2. Selecting the correct method for storing your credentials. I personally like the bootstrap method of storing NuGet credentials in your development environment

For more information on Nuget and the different versions, go to: http://www.nuget.org/

From Nuget’s website:

NuGet Feed Locations

NuGet delivers packages to your project from a feed URL that provides interactions with the repository for your NuGet client. For NuGet.org, configure your NuGet clients to use one of the following repository URLs:

Custom Build Triggers in VSTS

In my previous posts, I've shown people how to use VSTS (formerly known as VSO) to trigger continuous testing using builds and release management. I was able to utilize new reporting capabilities in build, particularly test reports. I created reports that show pass/fail trends for tests in my build definitions.

PassFailTrend

There are still limitations (or, in this case, features I wish Microsoft would consider, such as customizing test reports from builds and showing pass/fail trends past 10 builds). My biggest disappointment thus far is not being able to schedule builds (with tests) on a recurring pattern. As of writing this post, you can schedule builds in VSTS; however, you have to manually keep adding scheduled times.

Scheduled

Imagine a scenario where you need to run a build every hour (or half hour): you have to manually add each new time, in this case 24 of them. Very inconvenient.

Fortunately, VSTS has public APIs that allow us to access and trigger build execution. With the public APIs, I was able to write a very simple console app and use Windows' built-in Task Scheduler functionality. One might say: why not create a Windows service? Yes, that's an option, but I would counter: "Why develop a Windows service, further complicating the process, when Windows has a Task Scheduler that's been tested and used far more broadly?"

Below is the code:

NOTE: You need to reference the following NuGet packages:

  • Microsoft.TeamFoundationServer.ExtendedClient
  • Microsoft.TeamFoundationServer.Client
  • Microsoft.VisualStudio.Services.Client
  • Microsoft.VisualStudio.Services.InteractiveClient

using System;
using System.Configuration;
using System.Linq;
using Microsoft.TeamFoundation.Build.WebApi;
using Microsoft.TeamFoundation.Core.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;

static class Program
{
    static void Main(string[] args)
    {
        var buildoutputmodel = SetupBuildOutputModel();
        var vssconnection = new VssConnection(
            new Uri(buildoutputmodel.VsoUrl),
            new VssBasicCredential(buildoutputmodel.UserName, buildoutputmodel.Password)
            );
        var buildHttpClient = vssconnection.GetClient<BuildHttpClient>();
        // Below is my implementation of triggering multiple builds. I simply use the
        // app.config to specify the build definition IDs, split each entry and validate.
        // Note: BuildOutputModel, ValidateBuildId and AsLogger are custom types/helpers
        // from my solution (not shown here).
        ConfigurationManager.AppSettings["builddefinitionids"].Split(',').ToList().ForEach(
            buildid =>
            {
                string stringoutput;
                try
                {
                    var id = buildid.ValidateBuildId();
                    DefinitionReference definitionReference = new DefinitionReference
                    {
                        Id = id,
                        Project = new TeamProjectReference
                        {
                            Name = buildoutputmodel.TeamProjectName
                        }
                    };
                    var build = new Build { Definition = definitionReference };
                    // This is where you trigger (queue) the build.
                    var queuedbuild = buildHttpClient.QueueBuildAsync(build,
                        buildoutputmodel.TeamProjectName).Result;
                    stringoutput = $"Build Triggered... \nBuild Number: {queuedbuild.BuildNumber} \nBuild Definition ID: {definitionReference.Id} \nTeam Project: {definitionReference.Project.Name}\n";
                    Console.WriteLine(stringoutput);
                    AsLogger.Info(stringoutput);
                }
                catch (Exception ex)
                {
                    stringoutput = $"Exception Occurred: \n{ex.Message} \n{ex.InnerException}\n";
                    Console.WriteLine(stringoutput);
                    AsLogger.Error(stringoutput);
                }
            });
    }

    private static BuildOutputModel SetupBuildOutputModel()
    {
        return new BuildOutputModel
        {
            UserName = ConfigurationManager.AppSettings["username"],
            Password = ConfigurationManager.AppSettings["password"],
            VsoUrl = ConfigurationManager.AppSettings["vsourl"],
            TeamProjectName = ConfigurationManager.AppSettings["teamproject"],
            BuilDefinitionName = ConfigurationManager.AppSettings["builddefinition"],
            GitRepo = ConfigurationManager.AppSettings["gitrepo"]
        };
    }
}
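
For reference, the app.config the console app reads might look like this (all values are placeholders; builddefinitionids holds the comma-separated build definition IDs to trigger):

<appSettings>
  <add key="username" value="user@domain.com" />
  <add key="password" value="<yourpassword>" />
  <add key="vsourl" value="https://<youraccount>.visualstudio.com" />
  <add key="teamproject" value="<yourteamproject>" />
  <add key="builddefinition" value="<yourbuilddefinition>" />
  <add key="gitrepo" value="<yourgitrepo>" />
  <add key="builddefinitionids" value="101,102,103" />
</appSettings>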

Once you compile the code (.exe), simply create a scheduled task using Windows’ Task Scheduler:

TaskScheduler
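
If you prefer the command line over the Task Scheduler UI, schtasks.exe can create the recurring trigger as well (the task name, path and schedule below are placeholders):

schtasks /Create /TN "TriggerVSTSBuilds" /TR "C:\Tools\CustomBuildTrigger.exe" /SC HOURLY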

Then the execution:

VSTSQueue

Accelerate Release Cycle: Continuous Delivery Automation…

How can you apply automation to accelerate release cycles and improve quality, safety and governance?

Last Tuesday I participated in an online panel on the subject of CD Automation, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps. Watch a recording of the panel:

https://www.youtube.com/watch?v=2-_bb9Q-rtw

Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.

Below are a few insights from my contribution to the panel:

Automation != Orchestration

Automation. There’s really a big opportunity, if I want to start talking about automation, people think about many things that are in automation, but in this context, for Continuous Delivery, it is the process of taking your application, getting it deployed faster, with the right set of quality gates, with the right set of people who will help you automate that process seamlessly. So in turn, it’s more about making sure your application is being deployed more frequently, with the right quality gates in it.

Fundamentally, the process, what you look at is this: you take your application and you build it, then you test it, make sure that the build is good. When you build it, then you deploy to a set of environments – what you think your test or your dev or your staging, your environments, then you test it again. So it’s that balance of how you build your application and test it.

And being a test architect, I’m obviously an advocate of quality, and I think there’s three things people have to figure out when they start thinking about Continuous Delivery: one, the velocity of things – how fast you deploy, two, the quality side of things – what set of quality gates you need to put in place when you build and deploy an application, and the last thing you have to figure out is after you have velocity, and you have your quality, what is the cost of doing things? What set of tool sets you have to use to make all of these things work together?

We have many teams here, and there could be many things and many people, but in terms of our orchestration, what you think about is, what is the current process that prevents us from deploying and what are those manual things that we can avoid. It’s a blend of how long does it take for us to build and deploy a current application, what are the manual processes involved in doing that? Now let’s try to dissect each one of them, and make things easier, because let’s be honest, if it takes you a long time to build your application, that’s going to be a big bottleneck for you to have this Continuous Delivery pipeline.

And on top of that, is, being in the airline industry, one other thing we have to think about when we start thinking about this CD process is the idea of having the right set of guidelines for it, or standards behind it. So if it takes your application, let’s say, minutes to build, hopefully not an hour, just a couple of minutes. Are there any current tool sets, or are there any current technologies we can utilize to make that process much faster, avoid the manual effort of doing it? I’ll give you a perfect example:

Before, I think it was pretty typical for us to deploy applications utilizing scripts, some PowerShell scripts, and some batch files: "x-copy this, copy that…" And certainly now, in the world of technologies, you really don't have to do that, because we have tools, there's software that allows you to build and take those bits and automatically deploy them. You just need to utilize the right task. So those are the small things in terms of orchestration that you need to start thinking about. Then on top of that, OK, then what would I need for us to make sure that this current application, when we deploy it, has the right set of quality gates behind it?

So, looking at those different processes, orchestration, flow, helps a lot. I would strongly suggest taking a look at what your application does and how long it takes to build, and if there's any way you can minimize the amount of time spent building and doing it manually – automate it. There are many tools out there that can help you do it. (Too many…)

We’re using the same thing. You actually have the same orchestration processes, the only thing you have to think about differently is – don’t create a process where it’s dependent on each environment, think about your application when you deploy it, it targets any environment. You can be deploying, you can be using the same processes on a test environment, or a QA environment, but you shouldn’t be developing a process that is different from one environment to the other. Your process orchestration should target any environment, so anyone can take that same process, spin up a new environment, and follow through the same processes over again, utilizing the same quality gates in between.

Also, when you start talking about testing, there's the traditional way of testing applications where you do a lot of the manual stuff, and sometimes you have a hundred, two hundred, three hundred test cases. Sometimes you have to think smartly when you start doing CD processes. For example, which tests make sense for deploying this build, and do they cover the right set of test families? A good way to tell is this: if you have analytics in your organization, think about which of those scenarios are being utilized most by your customers.

Sometimes you create test cases and automate them just for the sake of automating, but what value do you get out of those automated test cases if some of them are not even used a lot? So optimize your tests, take a look at the data you have for your app, try to dissect that, and think about ways to make your tests smarter, and make that part of your process.

Challenges and Checkpoints of CD Pipelines

At as.com, our main website (it's a great group of people, by the way), the way we approach software is: what is the primary reason we're doing CD? It's because we're doing agile development, taking one step at a time, and based on the experience we have, continuing to observe and evaluate, we have a process that makes it easier for people to integrate with each other.

For example, the tools – pick a tool that can easily be used by people and that works well for the development, testing and PM teams. Because one of the challenges (and before you even talk about the tools, I want to go one step back) is understanding the culture. I think that is the big thing there: to understand the culture change, what people need to do, and to embrace that changes are inevitable.

I'll give you a good example: we're all used to doing waterfall, back in the old days, where you hand it off to someone, and they hand it off to someone… Now when you start something, everyone's engaged at that point in time. When you use the set of tools, there are things you have to sacrifice in between that you're not accustomed to. For example, for testing, traditionally (you probably still see it from time to time) some test teams will still create test plans and reviews; when you try to do CD, you try to whittle some of that time away. So, for doing web automation, tackle it at a good point, try to do less – there is a great article out there that talks about the testing pyramid, where at the very bottom of the base you talk about the coverage of unit testing, in the middle layer you talk about services, then the top layer would be UX automation. Less emphasis on UX automation, more thorough tests at the unit level; that way you get more coverage.

So my point is there are certain teams that use different tool sets for testing, while dev and PM use a different set for managing work, so when the time comes to integrate and do this agile CD approach, what happens is no one has time to learn these tools. A developer will not have time to understand what tool that tester is using to automate stuff, and likewise for PM and other folks, and DevOps, and infrastructure. So think about a tool that will work well for the team, as that is one of the things that made it challenging for us, and even now we're still evaluating some of these tools. There are many tools, and given that we're in a world with a lot of open source, you can certainly take advantage of that, and take a look at what people are using out there: tools, culture, the process. Also think about, when you're doing Continuous Delivery, sometimes you think about moving so fast, velocity, orchestration – but think about the small things that really help improve your app: security, performance testing. So my suggestion is: one of the key challenges there is to make sure you have a good monitoring system in place when you start doing your CD process. Continue to monitor what your application is doing, even though you deploy it to a test environment, because I guarantee you there are certain things that will get exposed when you actually have the right monitoring system in place.

Take a startup company, where you've got the walk, crawl, run phases – the walk phase is like: "Let's do it!" Let's get the right tool sets, let's get it working, let's go with it. Yes, you're delivering faster, you're learning caveats, you're learning some of these points where, "Oh, we need to go back…", but the issue is, as you move forward, you start seeing more deficiencies, and a lot more people not standardizing things. People do things differently; in fact, since we're talking about Continuous Delivery, I should say people are delivering software differently from one team to the other. It becomes very challenging to standardize some things, not only in the tool sets but in the process as well.

Because one person can do the flow differently, then you have a different set of quality gates, then you have a different set of tools. We’re still learning, but at the end of the day – let’s be honest, you will not satisfy everybody. You’re not going to satisfy every team out there, but at least try to get a consensus of small things, standard things that worked well with everybody.

We still have teams that use different tool sets, but it's not like one team is different from the other – now we have a big organization using the same tools, and that's a good start; we have another organization using the same tools, that's a good start; then we have another organization using the same tool sets – now we have a lot of people talking with each other, and we just take it to the next level. The other thing to consider is that while you're standardizing things, the big benefit of doing that as well is cost: you're able to save the company money by standardizing certain things that worked well for you. So now you're hitting two birds with one stone – obviously the next thing is, people will get more enticed to collaborate, share best practices, and provide governance around certain things that worked well for us.

Being in quality – quality is my forte – I can talk about web automation the whole day, unit testing, that kind of stuff. But people need to realize that when you do test automation, the things you use to automate may not work for everyone. At least try to establish a standard that works well in terms of governance of the process; making things easier for tracking and auditing is the other thing you have to consider.

Tools: Tips and Tricks

I'll just make this straight – if I had one tip, when talking about Continuous Delivery: we know we want to be Agile, we want to release faster with quality. So have everybody be accountable for quality. If you think about quality from the get-go, when you start doing Continuous Delivery, you'll start examining all these tool sets, you'll be able to define a process, you'll be able to define your orchestration. In fact, it will actually even help determine what tools you will need to use. If you start thinking about quality from the get-go, as part of your process – it's mandatory.

I can't stress this enough: when you talk about Continuous Delivery, it's all about release, release, release… features, features, features… and in the end, you'll spend more time thinking about fixing defects and bugs.

And even tools – sometimes tools don't necessarily have the right set of features to help you deal with quality. So keep everybody accountable for quality, because this is very important when you start thinking about Continuous Delivery out to your production servers. You'll be able to write down the list of things when you start thinking about quality: monitoring, automation, tool sets, orchestration, security, performance. If I had one big tip, it's to have everybody be accountable for quality; it's a practice shared by everybody, not just the test team.

Working with VSTS Rest APIs

I've been working with VSTS for quite some time now and wanted to share some of the sample code I've written to work with VSTS data. As I work with many teams, there have been requests such as getting specific metadata during and/or after builds. Examples would be people wanting to get specific data from associated work items during builds, or collection-level licensing information for your users. Here I'll tap into specific areas:

VSTS Builds, VSTS Work-Items, VSTS GIT (Commits), VSTS User License Information

You'll need the following NuGet packages:

  • Microsoft.TeamFoundationServer.ExtendedClient
  • Microsoft.TeamFoundationServer.Client
  • Microsoft.VisualStudio.Services.Client
  • Microsoft.VisualStudio.Services.InteractiveClient

Sample Code to retrieve all Builds from a given VSTS Team Project Build Definition containing associated commits and work-items:

using System;
using System.Configuration;
using System.Linq;
using System.Text;
using AAG.Test.Core.Logger;
using Microsoft.TeamFoundation.Build.WebApi;
using Microsoft.TeamFoundation.SourceControl.WebApi;
using Microsoft.VisualStudio.Services.Client;
using VSTSApi.Entities;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;

namespace VSTSApi
{
    class Program
    {
        private static BuildOutputModel _buildoutputmodel;

        private static string VssAccountUrl { get; set; } = ConfigurationManager.AppSettings["VssAccountUrl"];

        static void Main(string[] args)
        {
            try
            {
                StringBuilder outputStringBuilder = new StringBuilder();
                var buildoutputmodel = SetBuildOutputModel(args);
                // Alternative: interactive credentials that prompt if needed
                // (unused below, since basic credentials are passed to VssConnection).
                var creds = new VssClientCredentials(false);
                creds.PromptType = CredentialPromptType.PromptIfNeeded;
                var vssConnection = new VssConnection(new Uri(buildoutputmodel.VSOUrl + "/defaultcollection"), new VssBasicCredential(buildoutputmodel.UserName, buildoutputmodel.Password));

                var buildserver = vssConnection.GetClient<BuildHttpClient>();
                var workitems = vssConnection.GetClient<WorkItemTrackingHttpClient>();
                var commititems = vssConnection.GetClient<GitHttpClient>();
                var builds = buildserver.GetBuildsAsync(_buildoutputmodel.TeamProjectName).Result;
                var targetbuilds = builds.Where(definition => definition.Definition.Name.Contains(buildoutputmodel.BuilDefinitionName));
                foreach (var build in targetbuilds)
                {
                    outputStringBuilder.AppendLine($"Name: {build.Definition.Name} : BuildID: {build.Id}");
                    var associatedcommits = buildserver.GetBuildCommitsAsync(build.Definition.Project.Name,
                        build.Id).Result;
                    if (associatedcommits.Any())
                        outputStringBuilder.AppendLine($"All Commits Made for this Build:  {Environment.NewLine} ========= {Environment.NewLine} ");
                    associatedcommits.ForEach(commit =>
                    {
                        var user = commititems.GetCommitAsync(buildoutputmodel.TeamProjectName, commit.Id, buildoutputmodel.GitRepo).Result.Author;
                        outputStringBuilder.AppendLine($"ID: {commit.Id} Committed By: {user.Name}  E-mail: {user.Email} {Environment.NewLine} Description: {commit.Message} {Environment.NewLine}");
                    });
                    var commits = associatedcommits.Select(change => change.Id);
                    var associatedworkitems = buildserver.GetBuildWorkItemsRefsAsync(commits,
                        build.Definition.Project.Name, build.Id).Result;
                    if (associatedworkitems.Any())
                        outputStringBuilder.AppendLine($"All Associated Workitems for this Build:  {Environment.NewLine} ========= {Environment.NewLine} ");
                    foreach (var wi in associatedworkitems)
                    {
                        outputStringBuilder.AppendLine($"ID : {wi.Id} : URL: {wi.Url}");
                        var workitem = workitems.GetWorkItemAsync(int.Parse(wi.Id)).Result;
                        outputStringBuilder.AppendLine($"Title : {workitem.Fields["Title"]} : Description: {workitem.Fields["Description"]}");
                    }
                    outputStringBuilder.AppendLine($"{Environment.NewLine} ========= {Environment.NewLine} ");
                }

                DumpData(outputStringBuilder.ToString(), Console.WriteLine);
                DumpData(outputStringBuilder.ToString(), print => AsLogger.Info(print));
                Console.WriteLine("Press Any Key to Continue...");
                Console.ReadKey();
            }
            catch (Exception exception)
            {
                throw new Exception($"Error with Application: {exception.Message}", exception.InnerException);
            }
        }

        static void DumpData(string stringoutput, Action<string> print)
        {
            print(stringoutput);
        }

        static BuildOutputModel SetBuildOutputModel(string[] args)
        {
            _buildoutputmodel = new BuildOutputModel
            {
                UserName = ConfigurationManager.AppSettings["username"],
                Password = ConfigurationManager.AppSettings["password"],
                VSOUrl = ConfigurationManager.AppSettings["vsourl"],
                TeamProjectName = ConfigurationManager.AppSettings["teamproject"],
                BuilDefinitionName = ConfigurationManager.AppSettings["builddefinition"],
                GitRepo = ConfigurationManager.AppSettings["gitrepo"]
            };
            return _buildoutputmodel;
        }
    }
}

Entity

namespace VSTSApi.Entities
{
    public class BuildOutputModel
    {
        public string UserName { get; set; }

        public string Password { get; set; }

        public string BuilDefinitionName { get; set; }

        public string TeamProjectName { get; set; }

        public string VSOUrl { get; set; }

        public string GitRepo { get; set; }
    }
}

Sample Code for Getting User Information/Licenses:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using VSTSAccountAdmin.Model;
using Microsoft.VisualStudio.Services.Client;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;
using Microsoft.VisualStudio.Services.Identity.Client;
using Microsoft.VisualStudio.Services.Licensing;
using Microsoft.VisualStudio.Services.Licensing.Client;

namespace VSTSAccountAdmin
{
    public class Program
    {
        private static string VssAccountUrl { get; set; } = ConfigurationManager.AppSettings["VssAccountUrl"];
        private static string VssAccountName { get; set; }
        private static License VssLicense { get; set; }

        private static List<VSOUserInfo> _vsousers;



        public static void Main(string[] args)
        {
            try
            {
                _vsousers = new List<VSOUserInfo>();

                // Create a connection to the specified account.
                // If you change the false to true, your credentials will be saved.
                var creds = new VssClientCredentials(false);
                creds.PromptType = CredentialPromptType.PromptIfNeeded;
                var vssConnection = new VssConnection(new Uri(VssAccountUrl), creds);

                // We need the clients for two services: Licensing and Identity
                var licensingClient = vssConnection.GetClient<LicensingHttpClient>();
                var identityClient = vssConnection.GetClient<IdentityHttpClient>();

                var entitlements = licensingClient.GetAccountEntitlementsAsync().Result;
                IEnumerable<AccountEntitlement> accountEntitlements = entitlements as IList<AccountEntitlement> ??
                                                                      entitlements.ToList();
                var userIds = accountEntitlements.Select(entitlement => entitlement.UserId).ToList();
                var users = identityClient.ReadIdentitiesAsync(userIds).Result.ToDictionary(item => item.Id);
                foreach (var entitlement in accountEntitlements)
                {
                    var user = users[entitlement.UserId];
                    _vsousers.Add(new VSOUserInfo()
                    {
                        DisplayName = user.DisplayName,
                        LastAccessDate = entitlement.LastAccessedDate,
                        License = entitlement.License.ToString().ToLowerInvariant(),
                        UserID = entitlement.UserId
                    });
                    var stringoutput =
                        $"{Environment.NewLine}Name: {user.DisplayName}, UserId: {entitlement.UserId}, License: {entitlement.License}.";
                    Console.WriteLine(stringoutput);
                }
            }
            catch (Exception ex)
            {
                throw new ArgumentException(ex.Message, ex.InnerException);
            }

        }
    }
}

Entity:

using System;

namespace VSTSAccountAdmin.Model
{
    public class VSOUserInfo
    {
        public string DisplayName { get; set; }

        public Guid UserID { get; set; }

        public string License { get; set; }

        public DateTimeOffset LastAccessDate { get; set; }

    }
}

MSTEST TIP: Extension Methods for TestContext while working on Data-Driven Tests.

A very common scenario that developers/testers encounter when writing data-driven tests is printing input parameters from data sources. Data-driven tests can encompass many data sources (SQL, .csv, Excel, .xml, etc.), and yes, I was even able to extend MSTest's functionality by passing my own custom collection:

MSTEST: EXTENDING DATA DRIVEN TESTS TO USE IENUMERABLE<OBJECT> AS THE DATA SOURCE

https://dondeetan.com/2016/02/18/mstest-extending-data-driven-tests-to-use-ienumerableobject-as-the-data-source/

While there are many implementations to solve this, I wanted to provide (in my opinion) the easiest and most efficient way possible for developers and testers to print out input parameters without leaving the context during test development. Thus, extension methods. What are extension methods? At a high level, extension methods allow you to "extend" or "add" methods to existing types (even types built into the .NET Framework) without creating a new class derived from the type. See Extension Methods (C# Programming Guide) for more information.

Now that we have a good understanding of extension methods, why not extend the TestContext class, which contains all the information during test execution? TestContext also contains information about the data row and data columns passed during data-driven testing. It's as simple as accessing the data row at runtime:

TestContext.DataRow["Description"].ToString();

TestContext.DataRow has a property, "Table", from which you can get all the columns for that test execution instance. At this point, all you need to do is extend TestContext to print the input parameters.

public static class Helper
{
    public static void PrintInput(this TestContext testContext)
    {
        Console.WriteLine($"Total Input Fields: {testContext.DataRow.Table.Columns.Count}");
        var columns = testContext.DataRow.Table.Columns;
        for (var i = 0; i < columns.Count; i++)
        {
            Console.WriteLine($"Field: {columns[i].ColumnName} Value: {testContext.DataRow[i]}");
        }
    }
}

In your Test Method, all you have to do is just call one line of code:

TestContext.PrintInput();
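
In context, a data-driven test using the extension might look like this sketch (the CSV data source and the "Description" column are hypothetical, and the test class is assumed to expose the standard public TestContext property):

[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "|DataDirectory|\\TestData.csv", "TestData#csv", DataAccessMethod.Sequential)]
public void Validate_Employee_Record()
{
    // Log every input field for this row before exercising the code under test.
    TestContext.PrintInput();

    var description = TestContext.DataRow["Description"].ToString();
    Assert.IsFalse(string.IsNullOrEmpty(description));
}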

And the results from that test execution instance:

image

A great way to show input parameters for debugging purposes!

MSTEST: Extending Data Driven Tests to use IENUMERABLE<Object> as the Data Source

One great feature that I like in NUnit is the capability to use collection types for data-driven tests, meaning you don't have to open an external data source connection, pull data and use it to drive parameters for your tests. With a simple attribute in NUnit, you can data-drive tests as indicated here:

TestCaseSourceAttribute

http://www.nunit.org/index.php?p=testCaseSource&r=2.5.3

NUnit's implementation allows you to enumerate a collection to data-drive your tests. MSTest has the same extensibility, as outlined in the following blog:

Extending the Visual Studio Unit Test Type

http://blogs.msdn.com/b/vstsqualitytools/archive/2009/09/04/extending-the-visual-studio-unit-test-type-part-1.aspx

From this blog, I was able to go through the steps and understand how MSTest invokes and passes objects to a test method. When MSTest executes, the flow goes through:

  1. TestClassExtensionAttribute calls: GetExecution()
  2. TestExtensionExecution calls: CreateTestMethodInvoker(TestMethodInvokerContext context)
  3. ITestMethodInvoker calls: Invoke(params object[] parameters)

[Image: MSTest extension execution flow diagram]

Throughout this process, you can use custom attributes and read their properties to pass in test data. It's best to consume the custom attributes in ITestMethodInvoker.Invoke().

In my solution, I want a fast way of using IEnumerable<object> as my test data. In this case, I'll consume custom attributes that provide a class name and method name, then use reflection to get the test data back. I'll use that in ITestMethodInvoker.Invoke() to enumerate objects for my tests. The attribute carries two values (see the usage sketch after this list):

  • ClassName: class holding the method to generate test data
  • DataSourceName: method within the class that generates any test data
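In other words, a test method ends up decorated like this (the class and data method names match the samples later in this post):

[TestMethod]
[EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get5Employees")]
public void Verify5Employees(Employee employee)
{
    // The invoker calls this method once per object returned by Get5Employees().
}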

Project Setup:

Make sure that you have the following references in your project:

  • Microsoft.VisualStudio.QualityTools.Common.dll
  • Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
  • Microsoft.VisualStudio.QualityTools.Vsip.dll

These assemblies are installed with Visual Studio. Simply browse for them in the References section of your project.

The Custom Attribute:

// Identifies the class and method that supply IEnumerable<object> test data via reflection.
[global::System.AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = false)]
public class EnumurableDataSourceAttribute : Attribute
{
    public string DataSourceName { get; set; }
    public string ClassName { get; set; }

    public EnumurableDataSourceAttribute(string className, string dataSourceName)
    {
        this.DataSourceName = dataSourceName;
        this.ClassName = className;
    }
}

Test Class Implementation:

[Serializable]
public class TestClassCollectionAttribute : TestClassExtensionAttribute
{
    public override Uri ExtensionId => new Uri("urn:TestClassAttribute");

    public override object GetClientSide()
    {
        return base.GetClientSide();
    }

    // Hands MSTest the execution object that creates our custom invoker.
    public override TestExtensionExecution GetExecution()
    {
        return new TestExtension();
    }
}

Test Extension:

public class TestExtension : TestExtensionExecution
{
    public override void Initialize(TestExecution execution)
    {
        // No per-run initialization is needed for this extension.
    }

    public override ITestMethodInvoker CreateTestMethodInvoker(TestMethodInvokerContext context)
    {
        return new TestInvokerMethodCollection(context);
    }

    public override void Dispose()
    {
        // Nothing to clean up.
    }
}

Test Method Invoker:

public class TestInvokerMethodCollection : ITestMethodInvoker
{
    private readonly TestMethodInvokerContext _context;

    public TestInvokerMethodCollection(TestMethodInvokerContext context)
    {
        Debug.Assert(context != null);
        _context = context;
    }

    public TestMethodInvokerResult Invoke(params object[] parameters)
    {
        Trace.WriteLine($"Begin Invoke:Test Method Name: {_context.TestMethodInfo.Name}");
        Assembly testMethodAssembly = _context.TestMethodInfo.DeclaringType.Assembly;
        object[] datasourceattributes = _context.TestMethodInfo.GetCustomAttributes(typeof(EnumurableDataSourceAttribute), false);
        Type getclasstype = testMethodAssembly.GetType(((EnumurableDataSourceAttribute)datasourceattributes[0]).ClassName);
        MethodInfo getmethodforobjects = getclasstype.GetMethod(((EnumurableDataSourceAttribute)datasourceattributes[0]).DataSourceName);
        /*
        Use the lines below if there are parameters that need to be passed to the method:
        ParameterInfo[] methodparameters = getmethodforobjects.GetParameters();
        To instantiate a new concrete class:
        object classInstance = Activator.CreateInstance(getclasstype, null);
        In Invoke(instance, args), the first parameter is the target instance; leave it null for a static method:
        IEnumerable<object> enmeruableobjects = getmethodforobjects.Invoke(classInstance, null) as IEnumerable<object>;
        */
        IEnumerable<object> enmeruableobjects = getmethodforobjects.Invoke(null, null) as IEnumerable<object>;
        var testresults = new TestResults();
        // Each object from the data source is enumerated and passed to the inner (real) test method.
        foreach (var obj in enmeruableobjects)
        {
            testresults.AddTestResult(_context.InnerInvoker.Invoke(obj), new object[1] { obj });
        }
        var output = testresults.GetAllResults();
        _context.TestContext.WriteLine(output.ExtensionResult.ToString());
        return output;
    }
}
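One note: TestResults above is a small aggregator helper of my own; it is not part of the MSTest framework, and its implementation isn't shown in this post. A minimal sketch of what it needs to do (collect each inner invocation's result and merge everything into the single TestMethodInvokerResult that Invoke() must return) could look like this:

public class TestResults
{
    private readonly List<string> _outcomes = new List<string>();

    // Record the result of one inner invocation along with the input that produced it.
    public void AddTestResult(TestMethodInvokerResult result, object[] inputs)
    {
        _outcomes.Add($"Input: {inputs[0]} Result: {result.ExtensionResult ?? "passed"}");
    }

    // Merge everything into the single result MSTest expects back from Invoke().
    public TestMethodInvokerResult GetAllResults()
    {
        return new TestMethodInvokerResult
        {
            ExtensionResult = string.Join(Environment.NewLine, _outcomes)
        };
    }
}

A production version should also inspect each inner result and surface failures or exceptions rather than just concatenating text.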

Test Project Setup:

Once you've successfully built your assembly project (the custom TestClass attribute), you need to register the custom extension class on your local machine. This is a custom test assembly/adapter, so we'll need to:

  • Make changes to the registry
  • Add the compiled assembly to the install directory for your VS version. In my case, I'm using Visual Studio 2015, so the custom assemblies will be copied to: C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\PrivateAssemblies

Luckily, there's a batch script that does all of these steps. The only things you need to do are:

  • Change the version of VS to your working VS edition
  • Change the assembly namespace and class reference

The deployment script (you can also download it from the blog post below; scroll to the bottom of that post):

http://blogs.msdn.com/b/qingsongyao/archive/2012/03/28/examples-of-mstest-extension.aspx?CommentPosted=true

@echo off
::------------------------------------------------
:: Install an MSTest unit test type extension
:: which defines a new test class attribute
:: and how to execute its test methods and
:: interpret results.
::
:: NOTE: Only VS needs this registration; the xcopyable
:: MSTest uses the TestTools.xml virtualized registry file instead.
::------------------------------------------------

setlocal

:: All the files we need to copy or register are relative to this script folder
set extdir=%~dp0

:: Get 32 or 64-bit OS
set win64=0
if not "%ProgramFiles(x86)%" == "" set win64=1
if %win64% == 1 (
    set vs14Key=HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\14.0
) else (
    set vs14Key=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\14.0
)

:: Get the VS installation path from the Registry
for /f "tokens=2*" %%i in ('reg.exe query %vs14Key% /v InstallDir') do set vsinstalldir=%%j


:: Display some info
echo.
echo =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-
echo Please ensure that you are running with administrator privileges
echo to copy into the Visual Studio installation folder and add keys to the Registry.
echo Any access denied messages probably mean you are not.
echo =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-
echo.
echo 64-bit OS: %win64%
echo Visual Studio 14.0 regkey:   %vs14Key%
echo Visual Studio 14.0 IDE dir:  %vsinstalldir%

::
:: Copy the test type extension assembly to the VS private assemblies folder
::

set extdll=AAG.Test.Core.CustomTestExtenstions.dll
set vsprivate=%vsinstalldir%PrivateAssemblies
echo Copying to VS PrivateAssemblies: %vsprivate%\%extdll%
copy /Y %extdir%%extdll% "%vsprivate%\%extdll%"

::
:: Register the extension with MSTest as a known test type
:: (this script registers a single attribute provider)
::

echo Registering the unit test type extensions for use in VS' MSTest

:: Keys Only for 64-bit
if %win64% == 1 (
    set vs14ExtKey64=HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\14.0\EnterpriseTools\QualityTools\TestTypes\{13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b}\TestTypeExtensions
    set vs14_configExtKey64=HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\14.0_Config\EnterpriseTools\QualityTools\TestTypes\{13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b}\TestTypeExtensions
)

:: Keys for both 32 and 64-bit
set vs14ExtKey=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\14.0\EnterpriseTools\QualityTools\TestTypes\{13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b}\TestTypeExtensions
set vs14_configExtKey=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\14.0_Config\EnterpriseTools\QualityTools\TestTypes\{13cdc9d9-ddb5-4fa4-a97d-d965ccfc6d4b}\TestTypeExtensions

:: Register the TestClassCollectionAttribute
set regAttrName=TestClassCollectionAttribute
set regProvider="AAG.Test.Core.CustomTestExtenstions.TestClassCollectionAttribute, AAG.Test.Core.CustomTestExtenstions"
if  %win64% == 1 (
    reg add %vs14ExtKey64%\%regAttrName%        /f /v AttributeProvider /d %regProvider%
    reg add %vs14_ConfigExtKey64%\%regAttrName% /f /v AttributeProvider /d %regProvider%
)
reg add %vs14ExtKey%\%regAttrName%        /f /v AttributeProvider /d %regProvider%
reg add %vs14_ConfigExtKey%\%regAttrName% /f /v AttributeProvider /d %regProvider%

:eof
endlocal
exit /b %errorlevel%

Creating the tests in VS:

In your test project, add a reference to the custom MSTest assemblies:

[Image: test project references, including the custom extension assembly]

In your tests, make sure to use the custom test class and enumerator attribute that we previously defined. Here's a sample of test methods that use different IEnumerable objects.

[TestClassCollection]
public class MethodCollectionTests
{
    public TestContext TestContext { get; set; }

    [TestInitialize()]
    public void TestInit()
    {
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get5Employees")]
    public void Verify5Employees(Employee employee)
    {
        Assert.IsFalse(String.IsNullOrEmpty(employee.Displayname));
        Console.WriteLine($"Employee FirstName: {employee.Displayname}");
        TestContext.WriteLine($"Test Case Passed for {TestContext.TestName} with Data: {employee.Displayname}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get20Employees")]
    public void Verify20Employees(Employee employee)
    {
        Assert.IsFalse(String.IsNullOrEmpty(employee.Displayname));
        Console.WriteLine($"Employee FirstName: {employee.Displayname}");
        TestContext.WriteLine($"Test Case Passed for {TestContext.TestName} with Data: {employee.Displayname}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get5Cars")]
    public void Verify5Cars(Car car)
    {
        Assert.IsNotNull(car);
        Assert.IsFalse(String.IsNullOrEmpty(car.Description));
        Console.WriteLine($"Car Info: Type: {car.CarType} Cost: {car.Cost.ToString("C")}");
        TestContext.WriteLine($"Test Case Passed for {TestContext.TestName} with Data: {car.CarType} with Id: {car.Id}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get13Cars")]
    public void Verify13Cars(Car car)
    {
        Assert.IsNotNull(car);
        Assert.IsFalse(String.IsNullOrEmpty(car.Description));
        Console.WriteLine($"Car Info: Type: {car.CarType} Cost: {car.Cost.ToString("C")}");
        TestContext.WriteLine($"Test Case Passed for {TestContext.TestName} with Data: {car.CarType} with Id: {car.Id}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get5EmployeesWithCars")]
    public void Verify5EmployeesWithCars(EmployeeWithCar employeeWithCar)
    {
        Assert.IsNotNull(employeeWithCar);
        Assert.IsFalse(String.IsNullOrEmpty(employeeWithCar.Id));
        Assert.IsNotNull(employeeWithCar.Car);
        Assert.IsNotNull(employeeWithCar.Employee);
        TestContext.WriteLine($"Test Case Passed for {TestContext.TestName} with Data: Name: {employeeWithCar.Employee.Displayname} Car: {employeeWithCar.Car.CarType} with Id: {employeeWithCar.Employee.Id}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get10SequentialInts")]
    public void Verify10SequentialInts(int intcurrent)
    {
        Assert.IsInstanceOfType(intcurrent, typeof(int));
        TestContext.WriteLine($"Current int Value: {intcurrent}");
    }

    [TestMethod]
    [EnumurableDataSourceAttribute("CustomTestExtenstions.Tests.Helper.Helper", "Get5StringObjects")]
    public void VerifyGet5StringObjects(string stringcurrent)
    {
        Assert.IsInstanceOfType(stringcurrent, typeof(string));
        TestContext.WriteLine($"Current string Value: {stringcurrent}");
    }
}

The Helper class defines the helper methods to generate test data:

public static class Helper
{
    public static IEnumerable<object> Get5Employees()
    {
        var employees = GenerateData.GetEmployees(5);
        return (IEnumerable<object>)employees;
    }

    public static IEnumerable<object> Get20Employees()
    {
        var employees = GenerateData.GetEmployees(20);
        return (IEnumerable<object>)employees;
    }

    public static IEnumerable<object> Get5Cars()
    {
        var cars = GenerateData.GetCars(5);
        return (IEnumerable<object>)cars;
    }

    public static IEnumerable<object> Get13Cars()
    {
        var cars = GenerateData.GetCars(13);
        return (IEnumerable<object>)cars;
    }

    public static IEnumerable<object> Get5EmployeesWithCars()
    {
        var cars = GenerateData.GetCars(5);
        var employees = GenerateData.GetEmployees(5);
        var employeeswithcars = new List<EmployeeWithCar>();
        for (int i = 0; i < cars.Count; i++)
        {
            var employeewithcar = new EmployeeWithCar
            {
                Car = cars[i],
                Employee = employees[i]
            };
            employeeswithcars.Add(employeewithcar);
        }
        return (IEnumerable<object>)employeeswithcars;
    }

    public static IEnumerable<object> Get10SequentialInts()
    {
        var intobjects = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        return intobjects.Cast<object>();
    }

    public static IEnumerable<object> Get5StringObjects()
    {
        var stringobjects = new[] { "String1", "String2", "String3", "String4", "String5" };
        return stringobjects.Cast<object>();
    }
}
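The GenerateData class (and the Employee, Car, and EmployeeWithCar POCOs) isn't shown in this post. Here's a minimal sketch, assuming simple types consistent with how the tests above use them:

public static class GenerateData
{
    public static List<Employee> GetEmployees(int count)
    {
        var employees = new List<Employee>();
        for (var i = 0; i < count; i++)
        {
            employees.Add(new Employee { Id = i.ToString(), Displayname = $"Employee {i}" });
        }
        return employees;
    }

    public static List<Car> GetCars(int count)
    {
        var cars = new List<Car>();
        for (var i = 0; i < count; i++)
        {
            cars.Add(new Car { Id = i, CarType = $"CarType {i}", Description = $"Car {i}", Cost = 10000m + i });
        }
        return cars;
    }
}

If your real types differ, adjust the property names accordingly; the invoker only cares that the data method returns IEnumerable<object>.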

The Execution results!

[Image: test execution results in Visual Studio]

And the output for each result!

[Image: output for each test result]