Don’t be a stranger. I’ll be attending VSLive 2016 in Las Vegas!
Author: Dondee
Enabling Targeted Environment Testing during Continuous Delivery (Release Management) in VSTS
In my previous post “Continuous Delivery using VSO Release Manager with Selenium Automated Tests on Azure Web Apps (PaaS)”, I walked through the steps to enable continuous delivery by releasing builds to multiple environments. One caveat that I didn’t focus on is taking the same tests and running them against those target environments.
In this post, I’ll walk you through the steps for running the same tests against the environments you have included in your continuous delivery pipeline. You’re basically taking the same tests and passing in different parameters, such as the target URL and/or the browser to run.
At a high level, the solution only requires three changes:
· Create a runsettings file and add your parameters.
· Modify your code to read the parameters from the .runsettings file.
· Change the “Test Steps” in your VSTS release or build.
Create runsettings file
In Visual Studio, you can generate a “testsettings” file, but not a “runsettings” file. What’s the difference between the two formats? Both can be used to configure test execution; however, the “runsettings” file is optimized for running unit tests, and in this case our tests are unit tests.
For more information see: Configure unit tests by using a .runsettings file
https://msdn.microsoft.com/en-us/library/jj635153.aspx
Essentially:
1) Create an xml file with the extension of: .runsettings (e.g. ContinuousDeliveryTests.runsettings)
2) Replace the contents of the file with the following:
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <TestRunParameters>
    <Parameter name="EnvURL" value="http://XXXXX.azurewebsites.net" />
    <Parameter name="Browser" value="headless" />
  </TestRunParameters>
  <!--<RunConfiguration> We will use this later so we can run multiple threads for tests
    <MaxCpuCount>4</MaxCpuCount>
  </RunConfiguration>-->
</RunSettings>
3) Save the file to your source control repository. In my case GIT in VSTS.
Modify your code to call the params from the .runsettings file
In your unit tests where you specify the URL, apply the following changes:
var envUrl = TestContext.Properties["EnvURL"].ToString();
At this point, you can refer to the envUrl variable in your code for navigating through your tests.
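To put that in context, here’s a minimal sketch of what a test might look like after the change. The class name and assertion are hypothetical, and HttpWebHelper is the driver factory from my earlier Selenium posts:
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;

[TestClass]
public class HomePageTests
{
    // MSTest injects the test context, which carries the .runsettings parameters.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void ValidateHomePage()
    {
        // Values defined under <TestRunParameters> in the .runsettings file.
        var envUrl = TestContext.Properties["EnvURL"].ToString();
        var browser = TestContext.Properties["Browser"].ToString();

        var driver = HttpWebHelper.OpenBrowser<IWebDriver>(browser);
        driver.Navigate().GoToUrl(new Uri(envUrl));
        Assert.IsTrue(driver.PageSource.Contains("Quality Engineering"));
    }
}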
Changes to the “Test Steps” in VSTS release or build
In VSTS Release Manager, modify the test steps to:
NOTE: In your build definition, ensure that you upload your runsettings file to your artifacts directory. Here’s a snippet of my build definition step:
Visual Studio Test step in Release Manager:
For each environment where you deploy your application, modify the Visual Studio Test step to:
– Specify the runsettings file. You can browse to it in your build artifacts, provided you uploaded it as part of the build.
– Override the parameters with the correct value for your environment URL.
Once you make these changes, your tests will execute against the environments specified in your continuous delivery pipeline.
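In case you’re wondering about the override syntax: in the Visual Studio Test task, the override field takes semicolon-separated name=value pairs, for example EnvURL=http://myapp-test.azurewebsites.net;Browser=chrome. The exact field label varies between task versions, so treat this as a guide rather than gospel.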
Continuous Delivery using VSO Release Manager with Selenium Automated Tests on Azure Web Apps (PaaS)
In my previous post (Continuous Testing In VSO using Selenium), I showed how to create a build definition in VSO for continuous and simultaneous UX automation testing. I also showed how to use various configuration files to drive browser-specific testing without other tools setting config values at runtime.
With more and more companies, organizations and IT departments speeding up their development process, an important goal is for frequent changes to be deployed immediately to intermediate environments (Dev/Test) and then finally to production. This is what we call Continuous Delivery/Deployment: a process wherein changes get deployed to intermediate environments, passing certain quality gates (automated tests), and eventually end up in production.
VSO offers “Release Manager”, which targets continuous deployment. I’ll focus on the basic principles of continuous deployment; I then strongly suggest taking these steps to the next level by innovating around quality gates (further automated tests).
Automated Tests:
In this post, I’ll simply create a test project that uses the Selenium UX framework to run tests. In the same test project, I’ll also introduce data-driven tests to run all our browsers. I suggest referencing my previous post around continuous testing; I’ve provided sample code in that post that allows you to create multiple browser WebDriver instances by passing in a parameter (OpenBrowser<T>). That’s helpful here because I use data-driven tests to decide which browsers to run.
NOTE: If you’re not familiar with using Selenium to run automated UX testing, see these samples: http://docs.seleniumhq.org/docs/03_webdriver.jsp
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\Data\\Browser.csv", "Browser#csv", DataAccessMethod.Sequential), DeploymentItem(@"Data\Browser.csv", "Data"), DeploymentItem("phantomjs.exe"), DeploymentItem("chromedriver.exe"), DeploymentItem("IEDriverserver.exe"), TestCategory("BVTs"), TestMethod] public void ValidateHomePage() { var browserstring = TestContext.DataRow["Browser"].ToString(); Trace.TraceInformation($"Test Ran on {browserstring}"); BrowserType browsertype; if (!Enum.TryParse<BrowserType>(browserstring, out browsertype)) throw new Exception($"{TestContext.DataRow["Browser"].ToString()} is not valid member of enumeration MyEnum"); var driver = HttpWebHelper.OpenBrowser<IWebDriver>(browsertype); driver.Navigate().GoToUrl(new Uri("http://<some URL/")); Assert.IsTrue(driver.PageSource.Contains("Quality Engineering")); }
Note this test method focuses on the following areas:
Data Driven Tests – In my sample, I use a CSV file to host the browsers that I’m going to execute. In this case, the CSV file has one column (Browser) with four rows: Chrome, IExplore, FireFox, Headless (which is PhantomJS). For more information on data-driven testing in C#, see the following:
How To: Create a Data-Driven Unit Test
https://msdn.microsoft.com/en-us/library/ms182527.aspx
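To make that concrete, the entire Browser.csv is just a header row plus one browser per line:
Browser
Chrome
IExplore
FireFox
Headless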
The test itself launches the page; one way to verify that the deployment succeeded is to check that the home page loads with certain text in its page source.
Selenium Dependencies – Make sure that your deployment files include all the appropriate Selenium drivers for your browser tests. Without the drivers, tests will fail to execute on remote machines. Also take a look at the article below to make sure that you properly deploy your dependencies as part of the execution process.
DeploymentItemAttribute Class
When executed and successfully passing, you see the following results:
Build Definition:
While you may have your app building appropriately with associated tests (in my case a Web App hosted on Azure), it’s important that you properly define your build definition (tasks) with the proper deployment and test files.
For an overview of how VSO Build works, see this documentation:
https://msdn.microsoft.com/Library/vs/alm/Build/overview
There are 4 essential tasks that you need to define when creating your web app build definition:
1) Build your web application with the deployment files. This step is important when configuring Release Manager.
The key here is passing the MSBuild Arguments:
/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.stagingDirectory)\QESite.zip"
The arguments supplied here generate the deployment files for your web application. /p:PackageLocation is where the deployment files are saved (in the form of a zip file).
2) Publishing your deployment files for later use in Release Manager
The artifact name is the location where Release Manager will grab the deployment files from.
3) Publishing your test files for Release Manager to use
Take note of the contents structure.
**\QualityEngineeringSite.Tests\bin\**\*.dll – takes all DLLs in the QualityEngineeringSite.Tests project.
**\bin\**\*.exe – takes anything that’s an executable (all the Selenium webdrivers).
**\bin\**\Data\*.csv – takes anything that’s a .csv (data files for testing).
4) Trigger the build for Continuous Integration. Anytime a commit/push has been executed on your branch (we’re using GIT for our source control) a build will automatically be triggered.
Once you have a successful build, you should see the following output:
Release Manager (putting both Test and Build together to form continuous deployment)
There is a lot of documentation on MSDN that walks you through Release Manager. I’m not going to focus much on that; rather, I’ll plug in the pieces to get continuous delivery working for my web application.
For more information on release manager, see:
Understanding Release Management
https://msdn.microsoft.com/Library/vs/alm/Release/getting-started/understand-rm
Essential Tasks in Release Manager:
1) Create your Environments:
Depending on your needs, you control which environments are part of Continuous Deployment. For the purpose of this post, I used a Dev -> Test -> Prod work-flow model utilizing PaaS on Azure.
First Step:
Azure Web App Deployment:
Azure Subscription
It’s important to manage your Azure subscriptions correctly. You can use different Azure subscriptions per environment; however, each deployment step for an environment requires exactly one Azure subscription, the one that hosts the web application you’re deploying to.
For example, your Dev and Test environments might share one Azure subscription while your Production environment uses a different one.
For more information on managing azure subscriptions in VSO, see the following article:
Create an Azure Service Endpoint
https://msdn.microsoft.com/Library/vs/alm/Release/getting-started/deploy-to-azure#createazurecon
Web App Name
This is essentially your http://webappname.azurewebsites.net
Web Deploy Package
This is where you point to your deployment files. It’s important to point specifically to the artifact name, in this case QESite.zip.
Second Step:
Visual Studio Test
This is a pretty straightforward step. You basically ensure that the test project DLLs get executed; in this case, they come from the test artifacts that you published in the build definition.
2) Artifacts
Once you select the build definition field, Release Manager will automatically detect the artifacts associated with that build.
3) Triggers
It’s important that you check the box for Continuous Deployment. After all, this is what we want to achieve. Any new build from your build definition will now trigger a deployment to the environments specified.
Trigger a Continuous Deployment.
Kicking off a Release. Normally, I would assign an “Approver” before deploying to Test and Prod but for the purpose of this post, I’ve skipped this capability.
I just made a change from 3.0.0.0 to 3.0.0.1. Once I committed the code to the server, a build was triggered (from my CI in build definition). Upon successful build, it kicked off a release deployment.
Tests also passed on Dev, and the release now continues to deploy to both the Test and Prod environments:
Deployment Succeeded.
Deployment Failure
In the event that a deployment fails in one of your environments (likely due to a failing test), the deployment process stops at the point of failure; in my case, the Dev environment. Both Test and Prod are left untouched.
The logs provide you further information about the failure:
APPROVING A RELEASE.
This is by far one of the key capabilities why we chose Release Manager. Imagine a scenario where you want a specific set of people managing your deployments, or key people approving them. For each release step or phase, you can assign an approver before or after it proceeds to the next step.
Once you’ve appropriately assigned an approver, that person would need to approve the request as shown below:
Once approved, the next step will proceed.
Continuous Testing in VSO using Selenium WebDriver and VSO Test Agents (On-Premise)
This post walks you through on how to implement continuous testing using the following technologies:
- Using Config Transform to configure your tests to run multiple browsers during runtime
- Selenium WebDriver – Automation UX framework to drive UX Testing
- VSO Build – ALM (Application Lifecycle Management) tool suite for storing code (via GIT), creating the build definition (Build) and configuring On-Premise machines as Test Agents
What is the Config Transform? It allows you to change the configuration settings in your app or web configuration files. When you use Selenium WebDriver, you have the option to run your tests using Chrome, Internet Explorer, FireFox or even PhantomJS (headless testing). See the sample C# code below, which generates the correct WebDriver instance from configuration settings. The method takes in a string value, with Internet Explorer as the default browser. Note that the sample code uses try/catch to log exceptions using Log4Net. You can ignore that part and just re-use the code for creating the proper Selenium WebDriver.
NOTE: If you’re not familiar with using Selenium to run automated UX testing, see these samples: http://docs.seleniumhq.org/docs/03_webdriver.jsp
public static T OpenBrowser<T>(string browser = "iexplore", bool useExistingBrowser = false, bool usebrowserstack = false)
{
    string error = string.Empty;
    try
    {
        if (!string.IsNullOrEmpty(browser))
        {
            browser = browser.ToLower();
            switch (browser)
            {
                case "firefox":
                    IsFirefox = true;
                    return (T)Convert.ChangeType(ReturnDriver<FirefoxDriver, FirefoxProfile>(useExistingBrowser, ref _firFoxDriver, Firefoxprofile), typeof(FirefoxDriver));
                case "chrome":
                    IsChrome = true;
                    return (T)Convert.ChangeType(ReturnDriver<ChromeDriver, ChromeOptions>(useExistingBrowser, ref _chromeDriver, Chromeprofile), typeof(ChromeDriver));
                case "headless":
                    return (T)Convert.ChangeType(ReturnDriver<PhantomJSDriver, PhantomJSOptions>(useExistingBrowser, ref _phantomjsdriver, phantomjsoptions), typeof(PhantomJSDriver));
            }
        }

        // Default: ie (internet explorer)
        IEprofile.IgnoreZoomLevel = true;
        IEprofile.EnsureCleanSession = true;
        IsIE = true;
        return (T)Convert.ChangeType(ReturnDriver<InternetExplorerDriver, InternetExplorerOptions>(useExistingBrowser, ref _internetDriver, IEprofile), typeof(InternetExplorerDriver));
    }
    catch (Exception ex)
    {
        if (string.IsNullOrWhiteSpace(ex.Message))
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = Success");
        else
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = " + ex.Message);
        throw new ArgumentException(ex.Message, ex.InnerException);
    }
    return default(T);
}
The generic method that returns the driver is shown below.
NOTE: I’ve hardcoded the value of T as an IWebDriver instance so I can maximize the browser window. You don’t have to do this, but since we’re using Selenium WebDriver, I’ll just embed it in this method. You can also change the T type for any other UX automation framework.
private static T ReturnDriver<T, TT>(bool existing, ref T driver, TT profile)
{
    string error = string.Empty;
    try
    {
        if (driver == null || existing == false)
        {
            driver = (T)Activator.CreateInstance(typeof(T), profile);
        }
        ((IWebDriver)driver).Manage().Window.Maximize();
    }
    catch (Exception ex)
    {
        if (string.IsNullOrWhiteSpace(ex.Message))
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = Success");
        else
            _Testlog.Info(MethodBase.GetCurrentMethod().Name + " Results = " + ex.Message);
        throw new ArgumentException(ex.Message, ex.InnerException);
    }
    return driver;
}
This method takes in a string parameter. You can pass the value from the config file, and if you use config transforms, you can change which browser runs just by supplying the correct app config value.
Almost forgot! Download the config transform here: Configuration Transform
https://visualstudiogallery.msdn.microsoft.com/579d3a78-3bdd-497c-bc21-aa6e6abbc859
Here’s a look at what the config transform would look like in your project:
The particular key that I used and configured would look very similar to this (this is from App.Chrome.config):
<!-- valid options for Browsers are: chrome, firefox, iexplore, headless -->
<add key="Browser" value="chrome" xdt:Transform="Replace" xdt:Locator="Match(key)" />
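With the transform in place, wiring the setting to the driver factory is a one-liner. A sketch, assuming the HttpWebHelper class from the earlier sample:
using System.Configuration;
using OpenQA.Selenium;

// Read whichever value the active transform (App.Chrome.config, etc.) baked in.
var browser = ConfigurationManager.AppSettings["Browser"];
var driver = HttpWebHelper.OpenBrowser<IWebDriver>(browser);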
Source Control is GIT in VSO
Given that we’re using VSO as our ALM suite, we’ve opted to use GIT as our backend source control system. This makes it easier to configure your build definition, since all source code is stored in VSO.
Configuring On-Premise VSO Test Agents
The next step is to ensure that you have On-Premise test agents that VSO can talk to in order to execute the tests. For this, follow the steps in the article below to configure your VSO test agents. Note that in VSO, build and test agents are now on the same machine. Also note that the article talks about On-Premise TFS; HOWEVER, the same applies to VSO. You have to go to your VSO instance (https://xxxx.visualstudio.com) and configure your agent pools. The rest is shown below: https://msdn.microsoft.com/Library/vs/alm/Build/agents/windows
Let’s do a quick check!
- Automated UX tests have been developed and use configuration settings to drive the browser driver – Check!
- Installed Configuration Transform and configured my test project with all appropriate config settings – Check!
- Test Code stored in GIT (VSO) – Check!
- On-Premise VSO Test Agents Configured – Check!
Once all of these have been verified, the final step is stitching all of these together via VSO Build.
Configuring VSO Build for Continuous Testing:
Step 1: Configure Build Variables:
Step 2: Create Visual Studio Build step:
The first step is to build the test solution/project. This is pretty straightforward: in the Solution textbox, browse to the .sln file that you’ve checked into source control (in this case, GIT).
Step 3: Deploy the Test Agent on the On-Premise Test Machines:
NOTE: Before completing this step, ensure that you’ve properly configured your test machines. Follow the article below, which walks you through creating machine groups for your team project:
The key here is ensuring that:
· You have 1 test machine group that has all the test agents configured correctly
· The $(TestUserName) must have admin privileges on the test agents
Step 4: Copy The Compiled Test Assemblies To The On-Premise Test Agents:
The key to this step is to ensure that you copy the compiled test assemblies to the right location. $(Build.Repository.LocalPath) is the directory on the build server from which the assemblies are copied; “Destination Folder” is where they land on the target test agent machine.
Step 5: Execute the Tests:
Nothing special here. Just make sure that you reference the correct Test Drop Location; simply copy the Destination Folder from the previous step:
If you configured it correctly, you should get a successful build! Now the result of the build depends on the pass/fail results of the tests. In this case, I intentionally failed one automated test to trigger a build failure that coincides with the failing test. Fixing that test later will result in a passing build.
In your VSO Home or Team (Dashboard) Pages, you can pin the build trending charts to see PASS/FAIL Patterns for your UX Automation
Creating MSI Packages using VSO Build Services – A simple approach
Visual Studio 2012 and above removed the built-in capability to create deployment projects, so at the moment we use 3rd-party tools to build setup/MSI packages. This works very well for us in our on-prem environments. With the integration of cloud services, we took the approach of consolidating most of the build process and output onto our cloud instance. That means relying on hosted (cloud) build controllers, and since we cannot install additional apps on hosted build controllers, 3rd-party MSI tools are not an option there. Another approach is a hybrid solution: build services run in the cloud while the actual build controller resides on our on-prem infrastructure. I was willing to take that approach, but I’d like to take it further and really push all the mechanisms to the cloud. On further research, VS 2012 and above can still create setup projects. The only downsides are that it’s a separate extension in Visual Studio (which is really not a big concern) and that MSBUILD doesn’t support building deployment project files (.VDPROJ).
Luckily, devenv.exe (when used correctly) allows you to build solutions with deployment project files. More importantly, you can run it on a command line and with the right switches, you can build deployment projects files which outputs MSI packages.
First and foremost, use these extension package to install deployment project templates for Visual Studio (2012) and above:
Microsoft Visual Studio 2013 Installer Projects
https://visualstudiogallery.msdn.microsoft.com/9abe329c-9bba-44a1-be59-0fbf6151054d
For further context: I’m not going to elaborate on the steps to create deployment projects using the templates in Visual Studio. However, here’s a very good article to get you started:
Visual Studio Create Setup project to deploy web application in IIS
http://www.aspdotnet-suresh.com/2012/04/visual-studio-create-setup-project-to.html
Note that some of our MSIs are used for deploying web applications (yes, let’s have the discussion later about why we use MSIs for web deployments), so the article above is suitable for this context. The end result is generating MSIs, regardless of whether it’s a Web or Windows setup project.
Steps:
Create a powershell script that calls the devenv process and passes in parameters for the location of your solution file, your project file and the configuration (release, debug, etc.).
The exact syntax would be:
Param(
    [string]$SolutionPath,
    [string]$ProjectPath,
    [string]$ConfigurationMode
)
# -Wait keeps the build step blocked until devenv finishes, so the MSI exists before the publish step runs.
Start-Process -FilePath "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv" -ArgumentList "$SolutionPath /build $ConfigurationMode /project $ProjectPath /projectconfig $ConfigurationMode" -Wait
Save the powershell script in source control
In our case, we use VSO GIT. The script file doesn’t need to go on any folder structure as you will see later.
Create a build definition in VSO Build
Start off by creating a build definition in VSO. Go to “Builds” in your team project and click on the “+”.
Select Visual Studio. (Note that this UI may change, given that features and changes occur frequently in VSO.)
“Deleting” and “Adding” the correct build steps
Given that we’re using the devenv process to build the solution and generate the MSI packages, we don’t need the initial steps that build the code containing the deployment project. All we need for this example are 2 steps:
Run a powershell script:
Publish build artifacts:
Powershell Build Step
The key for this step is to ensure that you provide the correct set of variables for the location of your solution and project files.
Script filename: Basically click on the ellipses and browse through your repository to select the powershell script that runs devenv.exe
Arguments: Make sure that you use the correct configuration variables. Simply copy and paste this line:
.\<yourpowershellscript>.ps1 -SolutionPath "$(Build.Repository.LocalPath)\<locationofthesln>\XXX.sln" -ProjectPath "$(Build.Repository.LocalPath)\<locationofthevdproj>\XXX.VDPROJ" -ConfigurationMode $(BuildConfiguration)
The variables that I used here are:
$(Build.Repository.LocalPath) = this is where the build controller will hold the temporary build files
$(BuildConfiguration) = configuration setting that’s under the “Variables” section for your build:
NOTE: Here’s a complete list of build variables for VSO:
https://github.com/Microsoft/vso-agent-tasks/blob/master/docs/authoring/variables.md
Publish Build Artifacts Step
Contents: select the location where your MSI is located. I used the following syntax since my MSI packages are under setup\release\**
**\Release\**
Artifact Name: This is just the name of your drop location.
Complete Build definition
Here’s an image of what it should look like, including a successful build:
Using OpenID to authenticate in MVC via Azure AD (Manual Steps)
The title says it all: we have some MVC apps using Azure AD via WSFed and want to convert them to OpenID authentication. While WSFED works well, we wanted to take the simpler approach of using OpenID through Azure AD. These are the steps to either convert from WSFED or add OpenID for authentication in existing MVC apps.
I assume that you already have an application registered in Azure Active Directory for your website to use for authenticating AD users. If not, the first step is to create one. To do this:
- Sign in to the Azure Management Portal (http://azure.microsoft.com).
- Click on the Active Directory icon on the left menu, and then click on the desired directory.
- On the top menu, click Applications. If no apps have been added to your directory, this page will only show the Add an App link. Click on the link, or alternatively you can click on the Add button on the command bar.
- On the What do you want to do page, click on the link to Add an application my organization is developing.
- On the Tell us about your application page, you must specify a name for your application and indicate the type of application you are registering with Azure AD. You can choose between a web application and/or web API (the default) and a native client application, which represents an application installed on a device such as a phone or computer. For this guide, make sure to select Web Application and/or Web API.
- Once finished, click the arrow icon on the bottom-right corner of the page.
- On the App properties page, provide the Sign-on URL (the URL for your web application) and the App ID URI (a unique URI for your application, usually a combination of your AD domain and application, for example: http://www.domain.com/mywebsite.somedomain.com) for your web application, then click the checkbox in the bottom-right hand corner of the page.
- Your application has been added, and you will be taken to the Quick Start page for your application.
- Click on the “Configure” Tab. Generate a Key for your client access and write down the following information:
- CLIENT ID:
- KEY (You generate a Key by clicking on the Save Button on the configure tab)
- APP ID URI
- Federation Metadata Document (You can get this information by clicking on “VIEW ENDPOINTS” at the bottom section of the Configure tab)
Enable SSL on your Dev Machines
With OpenID, your MVC app needs to have SSL enabled. In your development environment, you can set this by going to the properties of the MVC app, selecting “Web” in the left navigation, and typing “https” in the project URL box:
Add OpenID and OWIN nuget packages to your MVC Application:
- Microsoft.IdentityModel.Protocol.Extensions
- System.IdentityModel.Tokens.Jwt
- Microsoft.Owin.Security.OpenIdConnect
- Microsoft.Owin.Security.Cookies
- Microsoft.Owin.Host.SystemWeb
- Active Directory Authentication Library
Create a class Startup.Auth.cs in the App_Start folder
Replace the code with the snippet below. Be sure to take the whole class definition!
Namespace references:
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using Owin;
public partial class Startup
{
    //
    // The Client ID is used by the application to uniquely identify itself to Azure AD.
    // The App Key is a credential used to authenticate the application to Azure AD. Azure AD supports password and certificate credentials.
    // The Metadata Address is used by the application to retrieve the signing keys used by Azure AD.
    // The AAD Instance is the instance of Azure, for example public Azure or Azure China.
    // The Authority is the sign-in URL of the tenant.
    // The Post Logout Redirect Uri is the URL where the user will be redirected after they sign out.
    //
    private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
    private static string appKey = ConfigurationManager.AppSettings["ida:AppKey"];
    private static string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
    private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
    private static string postLogoutRedirectUri = ConfigurationManager.AppSettings["ida:PostLogoutRedirectUri"];

    public static readonly string Authority = String.Format(CultureInfo.InvariantCulture, aadInstance, tenant);

    // This is the resource ID of the AAD Graph API. We'll need this to request a token to call the Graph API.
    string graphResourceId = ConfigurationManager.AppSettings["ida:GraphUrl"];

    public void ConfigureAuth(IAppBuilder app)
    {
        app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
        app.UseCookieAuthentication(new CookieAuthenticationOptions());
        app.UseOpenIdConnectAuthentication(
            new OpenIdConnectAuthenticationOptions
            {
                ClientId = clientId,
                Authority = Authority,
                PostLogoutRedirectUri = postLogoutRedirectUri,
                Notifications = new OpenIdConnectAuthenticationNotifications()
                {
                    //
                    // If there is a code in the OpenID Connect response, redeem it for an access token and refresh token, and store those away.
                    //
                    AuthorizationCodeReceived = (context) =>
                    {
                        var code = context.Code;
                        ClientCredential credential = new ClientCredential(clientId, appKey);
                        string userObjectID = context.AuthenticationTicket.Identity.FindFirst(
                            "http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
                        AuthenticationContext authContext = new AuthenticationContext(Authority, new NaiveSessionCache(userObjectID));
                        AuthenticationResult result = authContext.AcquireTokenByAuthorizationCode(
                            code,
                            new Uri(HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Path)),
                            credential,
                            graphResourceId);
                        AuthenticationHelper.token = result.AccessToken;
                        return Task.FromResult(0);
                    }
                }
            });
    }
}
Create Utility classes
In the project, create a new folder called Utils and, inside it, a class AuthenticationHelper.cs. Replace the code with the snippet below. Be sure to take the whole class definition!
References
using Microsoft.Azure.ActiveDirectory.GraphClient;
internal class AuthenticationHelper
{
    public static string token;

    /// <summary>
    /// Async task to acquire token for Application.
    /// </summary>
    /// <returns>Async Token for application.</returns>
    public static async Task<string> AcquireTokenAsync()
    {
        if (token == null || token.IsEmpty())
        {
            throw new Exception("Authorization Required.");
        }
        return token;
    }

    /// <summary>
    /// Get Active Directory Client for Application.
    /// </summary>
    /// <returns>ActiveDirectoryClient for Application.</returns>
    public static ActiveDirectoryClient GetActiveDirectoryClient()
    {
        Uri baseServiceUri = new Uri(Constants.ResourceUrl);
        ActiveDirectoryClient activeDirectoryClient = new ActiveDirectoryClient(
            new Uri(baseServiceUri, Constants.TenantId),
            async () => await AcquireTokenAsync());
        return activeDirectoryClient;
    }
}
In the Utils folder, create a class Constants.cs. Replace the code with the snippet below. Be sure to take the whole class definition!
internal class Constants
{
    public static string ResourceUrl = ConfigurationManager.AppSettings["ida:GraphUrl"];
    public static string ClientId = ConfigurationManager.AppSettings["ida:ClientId"];
    public static string AppKey = ConfigurationManager.AppSettings["ida:AppKey"];
    public static string TenantId = ConfigurationManager.AppSettings["ida:TenantId"];
    public static string AuthString = ConfigurationManager.AppSettings["ida:Auth"] + ConfigurationManager.AppSettings["ida:Tenant"];
    public static string ClientSecret = ConfigurationManager.AppSettings["ida:ClientSecret"];
}
In the Utils folder, create a new class called NaiveSessionCache.cs. Replace the code with the snippet below. Be sure to take the whole class definition!
References:
using Microsoft.IdentityModel.Clients.ActiveDirectory;
public class NaiveSessionCache : TokenCache
{
    private static readonly object FileLock = new object();
    private readonly string CacheId = string.Empty;
    private string UserObjectId = string.Empty;

    public NaiveSessionCache(string userId)
    {
        UserObjectId = userId;
        CacheId = UserObjectId + "_TokenCache";
        AfterAccess = AfterAccessNotification;
        BeforeAccess = BeforeAccessNotification;
        Load();
    }

    public void Load()
    {
        lock (FileLock)
        {
            if (HttpContext.Current != null)
            {
                Deserialize((byte[])HttpContext.Current.Session[CacheId]);
            }
        }
    }

    public void Persist()
    {
        lock (FileLock)
        {
            // reflect changes in the persistent store
            HttpContext.Current.Session[CacheId] = Serialize();
            // once the write operation took place, restore the HasStateChanged bit to false
            HasStateChanged = false;
        }
    }

    // Empties the persistent store.
    public override void Clear()
    {
        base.Clear();
        HttpContext.Current.Session.Remove(CacheId);
    }

    public override void DeleteItem(TokenCacheItem item)
    {
        base.DeleteItem(item);
        Persist();
    }

    // Triggered right before ADAL needs to access the cache.
    // Reload the cache from the persistent store in case it changed since the last access.
    private void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        Load();
    }

    // Triggered right after ADAL accessed the cache.
    private void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        // if the access operation resulted in a cache update
        if (HasStateChanged)
        {
            Persist();
        }
    }
}
Add OWIN Startup class
Right-click on the project, select Add, select “OWIN Startup class”, and name the class “Startup”. If “OWIN Startup Class” doesn’t appear in the menu, instead select “Class”, and in the search box enter “OWIN”. “OWIN Startup class” will appear as a selection; select it, and name the class Startup.cs .
In Startup.cs, replace the code with the snippet below. Again, note the definition changes from public class Startup to public partial class Startup.
using System;
using System.Threading.Tasks;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MVCProject.Startup))]

namespace MVCProject
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app);
        }
    }
}
Create UserProfile model
In the Models folder add a new class called UserProfile.cs . Copy the implementation of UserProfile from below:
public class UserProfile
{
    public string DisplayName { get; set; }
    public string GivenName { get; set; }
    public string Surname { get; set; }
}
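For reference, the relevant fragment of the Graph API response that this model binds to looks roughly like the following (trimmed; Json.NET matches the property names case-insensitively). The values are placeholders:
{
  "displayName": "Jane Doe",
  "givenName": "Jane",
  "surname": "Doe"
}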
Create new UserProfileController
Add a new empty MVC5 controller UserProfileController to the project. Copy the implementation from below. Remember to include the [Authorize] attribute on the class definition.
References:
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Claims;
using System.Threading.Tasks;
using System.Web;
using System.Web.Mvc;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Owin.Security.OpenIdConnect;
using Newtonsoft.Json;
[Authorize]
public class UserProfileController : Controller
{
    private const string TenantIdClaimType = "http://schemas.microsoft.com/identity/claims/tenantid";
    private static readonly string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
    private static readonly string appKey = ConfigurationManager.AppSettings["ida:AppKey"];
    private readonly string graphResourceId = ConfigurationManager.AppSettings["ida:GraphUrl"];
    private readonly string graphUserUrl = "https://graph.windows.net/{0}/me?api-version=" + ConfigurationManager.AppSettings["ida:GraphApiVersion"];

    //
    // GET: /UserProfile/
    public async Task<ActionResult> Index()
    {
        //
        // Retrieve the user's name, tenantID, and access token since they are parameters used to query the Graph API.
        //
        UserProfile profile;
        string tenantId = ClaimsPrincipal.Current.FindFirst(TenantIdClaimType).Value;
        AuthenticationResult result = null;

        try
        {
            // Get the access token from the cache
            string userObjectID = ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
            AuthenticationContext authContext = new AuthenticationContext(Startup.Authority, new NaiveSessionCache(userObjectID));
            ClientCredential credential = new ClientCredential(clientId, appKey);
            result = authContext.AcquireTokenSilent(graphResourceId, credential, new UserIdentifier(userObjectID, UserIdentifierType.UniqueId));

            // Call the Graph API manually and retrieve the user's profile.
            string requestUrl = String.Format(CultureInfo.InvariantCulture, graphUserUrl, HttpUtility.UrlEncode(tenantId));
            HttpClient client = new HttpClient();
            HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, requestUrl);
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken);
            HttpResponseMessage response = await client.SendAsync(request);

            // Return the user's profile in the view.
            if (response.IsSuccessStatusCode)
            {
                string responseString = await response.Content.ReadAsStringAsync();
                profile = JsonConvert.DeserializeObject<UserProfile>(responseString);
            }
            else
            {
                // If the call failed, then drop the current access token and show the user an error indicating they might need to sign-in again.
                authContext.TokenCache.Clear();
                profile = new UserProfile();
                profile.DisplayName = " ";
                profile.GivenName = " ";
                profile.Surname = " ";
                ViewBag.ErrorMessage = "UnexpectedError";
            }
        }
        catch (Exception e)
        {
            if (Request.QueryString["reauth"] == "True")
            {
                //
                // Send an OpenID Connect sign-in request to get a new set of tokens.
                // If the user still has a valid session with Azure AD, they will not be prompted for their credentials.
                // The OpenID Connect middleware will return to this controller after the sign-in response has been handled.
                //
                HttpContext.GetOwinContext().Authentication.Challenge(OpenIdConnectAuthenticationDefaults.AuthenticationType);
            }

            //
            // The user needs to re-authorize. Show them a message to that effect.
            //
            profile = new UserProfile();
            profile.DisplayName = " ";
            profile.GivenName = " ";
            profile.Surname = " ";
            ViewBag.ErrorMessage = "AuthorizationRequired";
        }

        return View(profile);
    }
}
Create new AccountController
Add a new empty MVC5 controller AccountController to the project. Copy the implementation from below.
References:
using System.Security.Claims;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.OpenIdConnect;
using QualityEngineeringSite.Utils;
public class AccountController : Controller
{
    public void SignIn()
    {
        // Send an OpenID Connect sign-in request.
        if (!Request.IsAuthenticated)
        {
            HttpContext.GetOwinContext().Authentication.Challenge(
                new AuthenticationProperties { RedirectUri = "/" },
                OpenIdConnectAuthenticationDefaults.AuthenticationType);
        }
    }

    public void SignOut()
    {
        // Remove all cache entries for this user and send an OpenID Connect sign-out request.
        string userObjectID = ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
        AuthenticationContext authContext = new AuthenticationContext(Startup.Authority, new NaiveSessionCache(userObjectID));
        authContext.TokenCache.Clear();
        AuthenticationHelper.token = null;
        HttpContext.GetOwinContext().Authentication.SignOut(
            OpenIdConnectAuthenticationDefaults.AuthenticationType,
            CookieAuthenticationDefaults.AuthenticationType);
    }
}
Create a new partial view _LoginPartial.cshtml
In the Views –> Shared folder, create a new partial view _LoginPartial.cshtml. Replace the contents of the file with the markup below:
@using System
@{
    var user = "Null User";
    if (!String.IsNullOrEmpty(User.Identity.Name))
    {
        user = User.Identity.Name;
    }
}
@if (Request.IsAuthenticated)
{
    <text>
        <ul class="nav navbar-nav navbar-right">
            <li>@Html.ActionLink(user, "Index", "UserProfile", routeValues: null, htmlAttributes: null)</li>
            <li>@Html.ActionLink("Sign out", "SignOut", "Account")</li>
        </ul>
    </text>
}
else
{
    <ul class="nav navbar-nav navbar-right">
        <li>@Html.ActionLink("Sign in", "Index", "UserProfile", routeValues: null, htmlAttributes: new { id = "loginLink" })</li>
    </ul>
}
Modify existing _Layout.cshtml
In the Views –> Shared folder, add a single line, @Html.Partial("_LoginPartial"), that lights up the previously added _LoginPartial view. See the screenshot below.
Authenticate Users
If you want the user to be required to sign-in before they can see any page of the app, then in the HomeController, decorate the HomeController class with the [Authorize] attribute. If you leave this out, the user will be able to see the home page of the app without having to sign-in first, and can click the sign-in link on that page to get signed in.
For more information around the AuthorizeAttribute, refer to:
AuthorizeAttribute Class
https://msdn.microsoft.com/en-us/library/system.web.mvc.authorizeattribute(v=vs.118).aspx
Web.Config Settings
In web.config, in <appSettings>, create keys for ida:ClientId, ida:AppKey, ida:AADInstance, ida:Tenant and ida:PostLogoutRedirectUri and set the values accordingly. For the public Azure AD, the value of ida:AADInstance is https://login.windows.net/{0}. See the sample below:
<!-- Values for OpenID and Graph API -->
<!-- ClientId is the application ID from your own Azure AD tenant -->
<add key="ida:ClientId" value="XXXXXXX" />
<add key="ida:AppKey" value="XXXXX" />
<add key="ida:AADInstance" value="https://login.windows.net/{0}" />
<!-- Tenant is the Tenant ID from your own Azure AD tenant, in the form of a GUID. This is the value from your Federation Metadata Document URL -->
<add key="ida:Tenant" value="XXXXXXXXX" />
<!-- TenantId is also the Tenant ID (GUID) from your own Azure AD tenant -->
<add key="ida:TenantId" value="XXXXXXXX" />
<!-- PostLogoutRedirectUri is your application endpoint -->
<add key="ida:PostLogoutRedirectUri" value="http://xxxx.azurewebsites.net/" />
<add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
In web.config, add this line in the <system.web> section: <sessionState timeout="525600" />. This increases the ASP.NET session state timeout to its maximum value so that access tokens and refresh tokens cached in session state aren't cleared after the default timeout of 20 minutes.
Associating Automated Tests doesn’t work for Test Cases in Visual Studio
As a developer focused on quality, I write code to test code, but from time to time you run into issues where you just don't understand what is going on, much less how to fix it. In this case, all I wanted to do was associate a VSO Test Case with an automated test (unit test). I've done this many times in the past with success, but this one just stumped me. All of the steps are documented in this article:
How to: Associate an Automated Test with a Test Case
https://msdn.microsoft.com/en-us/library/dd380741(v=vs.110).aspx
However, the steps were not working for me, and all I got when I clicked on the ellipsis was a blank window.
I tried:
- Cleaning my solution and rebuilding
- Ensuring that my tests have the proper attributes bound, e.g. [TestMethod]
- Ensuring that it's a managed test project
All of which checked out. As a last resort, I opened another test project, and surprisingly enough I was able to select an automated test. I compared the project files (via a text editor) and noticed that my actual test project was missing the ProjectTypeGuids XML entry. Then I remembered that when I created my test project, I started off with a class library. I then manually referenced all the VS unit testing frameworks and created my unit tests, and everything worked fine EXCEPT for associating an automated test with a test case.
The FIX:
Open up your project file in a text editor (Notepad)
Add the following under the <PropertyGroup> node
<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
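For context, the top PropertyGroup of the test project file would then look something like this (other elements elided):
<PropertyGroup>
  <OutputType>Library</OutputType>
  ...
  <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
</PropertyGroup>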
Save the project file, reload your project and rebuild.
Now you should be able to Associate test cases:
Uploading Windows VMs on Azure without using HYPER-V
Given that Windows 10 was soon to release in summer 2015, I needed to ensure that teams in our group had at least run some sanity checks against the new Spartan browser that comes as the default on Windows 10. Unfortunately, when you upgrade Windows 10 images on Azure, you lose all your settings and your machine resets to first-time setup. While some people have had success remoting into Azure images after Windows 10 build upgrades, others (the majority) have not. If you've managed to update an existing Windows 10 image to a newer or later build, I strongly encourage you to look at this article first:
Enable RDP or Reset Password with the VM Agent
http://blogs.msdn.com/b/mast/archive/2014/03/06/enable-rdp-or-reset-password-with-the-vm-agent.aspx
If the methods in the above article fail, you do have another alternative, and that is to upload your own VHD and create an image out of it.
I assume that most of you already have Azure accounts. If not, you need to create an Azure account and have at least one Azure storage service created and available. For this, see:
About Azure Storage Accounts
http://azure.microsoft.com/en-us/documentation/articles/storage-create-storage-account/
You need the following tools:
DISK2VHD – Great tool for generating .VHD files. Disk2vhd is a utility that creates VHD (Virtual Hard Disk – Microsoft’s Virtual Machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs).
https://technet.microsoft.com/en-us/library/ee656415.aspx
Azure Powershell SDK – You need this to upload .VHDs to azure service
http://azure.microsoft.com/en-us/downloads/ (Windows PowerShell)
The PROCESS:
– Install Windows 10 on any machine as normal. I also suggest getting the latest updates once it's installed.
– Use DISK2VHD tool to generate the VHD (https://technet.microsoft.com/en-us/library/ee656415.aspx)
– Follow the exact steps indicated in the article below, HOWEVER, pause at Step 4 (Upload the .vhd file).
Create and upload a Windows Server VHD to Azure
Before you upload the .VHD, you need to ensure that you resize it so the disk size is a whole number (in MBs). Otherwise, you will get the following exception when you proceed to create the image from the VHD:
“The VHD https://xxxxx.blob.core.windows.net/vhds/xxxx.vhd has an unsupported virtual size of xxxxxx bytes. The size must be a whole number (in MBs).”
Luckily, PowerShell comes with a cmdlet that allows you to resize VHDs, and it only takes seconds to do it.
RESIZE-VHD
https://technet.microsoft.com/en-us/library/hh848535.aspx
The specific command is:
PS C:\> Resize-VHD -Path c:\BaseVHDX.vhdx -SizeBytes 1TB
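You don't have to go all the way to 1 TB; any whole number of megabytes works. For example, 127 GB is exactly 130,048 MB, which is 130,048 x 1,048,576 = 136,365,211,648 bytes (the path below is hypothetical):
PS C:\> Resize-VHD -Path c:\Win10.vhd -SizeBytes 136365211648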
Once you've resized your VHD to a fixed whole number, proceed to Step 4.
The key in this process is to ensure that you’ve:
- Run SYSPREP on the VHD (Windows 10 in this case)
- Resize the VHD to avoid the exception when creating an image
Hope this helps!
Adding Application Insights to MVC Web API
I was helping a peer of mine design, plan and execute some performance testing on MVC Web APIs, and as I went through some of the documentation in VSO (Visual Studio Online), I ran into questions around gathering counters, custom events and "real time monitoring" of performance runs. Given that our servers are on Azure, I don't have direct access to the machines where I could get specific information. More importantly, while running performance/load testing in Visual Studio, it doesn't provide specific counters that we want to monitor, such as exceptions/sec. You do get this information "after" the execution happens, which produces detailed results. But what if you want to monitor your site (in this case, Web APIs hosted in Azure) during performance execution, or at a higher level just monitor activities, events, requests, etc. on the Azure site hosting your Web APIs? Application Insights immediately came to mind, but for Web APIs it's not as simple as plugging in some JavaScript code the way it is for an MVC web app.
Application Insights works by adding an SDK into your app, which sends telemetry to the Azure portal. This allows you to detect issues, solve problems, continuously improve your applications, quickly diagnose any problems in your live application, and understand what your users do with it. (You will see this below.)
For Web APIs, you need to instrument code (telemetry) in the Web API itself (the controller). As I went through the documentation "Custom events and metrics with the Application Insights API", it was apparent that not all of the supported Application Insights API methods are needed. We just needed the following:
TrackException – Log exceptions for diagnosis. Trace where they occur in relation to other events and examine stack traces.
TrackRequest (Start and Stop) – Log the frequency and duration of server requests for performance analysis.
TrackTrace – Diagnostic log messages. You can also capture 3rd-party logs.
That said, let’s get started!
Let’s start by adding assembly references to your MVC Web API. Using Nuget Manager, install the following Nuget Packages:
– Application Insights API
– Application Insights API for Web Applications
– Application Insights Telemetry SDK for Services
Once these packages are installed in your API project, they make some modifications to your web.config file and add another config file, "ApplicationInsights.config". This file contains most of the settings and information you need to modify which components you want to track and instrument. The most important section of this config is the instrumentation key:
<InstrumentationKey>XXXXXXX-1ed4-484c-a5a0-3df1aba5XXXX</InstrumentationKey>
This key determines which Application Insights resource you will use to track usage for your Web API. For more information, see this article:
http://azure.microsoft.com/en-us/documentation/articles/app-insights-create-new-resource/
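If you prefer, the key can also be set programmatically at startup instead of via ApplicationInsights.config. A sketch, with a placeholder key:
using Microsoft.ApplicationInsights.Extensibility;

// Overrides whatever ApplicationInsights.config specifies for this process.
TelemetryConfiguration.Active.InstrumentationKey = "XXXXXXX-0000-0000-0000-XXXXXXXXXXXX";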
The code: given that we just wanted to get started with Application Insights, we track requests and exceptions using the most basic telemetry. As I started to write the instrumentation code, I approached my peer dev to see if she wanted to pair on this. Luckily, she already had telemetry code stashed (in GIT terms) or shelved (in TFS terms). I have to give kudos to one of our devs who did most of the work. Thank you Nemo Hajiyusuf!
The INTERFACE: The interface is simple and supports the following App Insights API methods:
public interface ITelemetryTracker
{
    void TrackException(Exception ex);
    void TrackTrace(string message, SeverityLevel severity);
    void TrackEvent(string eventName, Dictionary<string, string> properties);
    Stopwatch StartTrackRequest(string requestName);
    void StopTrackRequest(string requestName, Stopwatch stopwatch);
}
The IMPLEMENTATION:
public class TelemetryTracker : ITelemetryTracker
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public void TrackException(Exception ex)
    {
        _telemetry.TrackException(ex);
    }

    public void TrackTrace(string message, SeverityLevel severity)
    {
        _telemetry.TrackTrace(message, severity);
    }

    public void TrackEvent(string eventName, Dictionary<string, string> properties)
    {
        _telemetry.TrackEvent(eventName, properties);
    }

    public Stopwatch StartTrackRequest(string requestName)
    {
        // Operation Id is attached to all telemetry and helps you identify
        // telemetry associated with one request:
        _telemetry.Context.Operation.Id = Guid.NewGuid().ToString();
        return Stopwatch.StartNew();
    }

    public void StopTrackRequest(string requestName, Stopwatch stopwatch)
    {
        stopwatch.Stop();
        _telemetry.TrackRequest(requestName, DateTime.Now, stopwatch.Elapsed, "200", true); // Response code, success
    }
}
The USAGE: Now it's up to you which parts of your Web API code to instrument (in this case, our MVC Web API controllers). Here's a sample snippet for one of our supported web methods:
From the class level:
private readonly ITelemetryTracker _telemetryTracker;
We invoke the constructor (Yes, we use dependency injection):
public SomeController(IHelper Helper, ITelemetryTracker telemetryTracker)
{
    _helper = Helper;
    _telemetryTracker = telemetryTracker;
}
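The registration itself depends on your container. With Unity, for example, it's a one-liner (hypothetical, since our container setup isn't shown here):
// Any resolve of ITelemetryTracker now yields a TelemetryTracker.
container.RegisterType<ITelemetryTracker, TelemetryTracker>();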
The Web Method
[System.Web.Http.HttpGet]
[System.Web.Http.Route("xxxxx")]
public HttpResponseMessage Get([FromUri] string Id)
{
    const string requestName = "api/1/Gets";
    try
    {
        // Track the request in App Insights
        var stopwatch = _telemetryTracker.StartTrackRequest(requestName);
        _telemetryTracker.TrackTrace(string.Format("Get: processing for Id: {0}", Id), SeverityLevel.Information);

        if (string.IsNullOrWhiteSpace(Id))
        {
            return new HttpResponseMessage(HttpStatusCode.BadRequest)
            {
                Content = new StringContent("Missing Id in the request")
            };
        }

        var preferences = _helper.Gets(Id);
        Trace.TraceInformation("Get: Done processing for Id: {0}", Id);
        _telemetryTracker.TrackTrace(string.Format("Get: Done processing for Id: {0}", Id), SeverityLevel.Information);
        _telemetryTracker.StopTrackRequest(requestName, stopwatch);
        return Request.CreateResponse(HttpStatusCode.OK, preferences);
    }
    catch (Exception ex)
    {
        Trace.TraceError("Get Exception: {0}", ex.ToString());
        // Track the exception in App Insights
        _telemetryTracker.TrackException(ex);
        return new HttpResponseMessage(HttpStatusCode.InternalServerError)
        {
            Content = new StringContent("Unexpected error happened while processing Get. Please try again.")
        };
    }
}
At this point, we're instrumenting data to one of our Application Insights resources. We then started running our perf tests and were able to see the results dynamically, in real time, on the server. It even provides stack trace information based on the TrackException call.
With Application Insights, we're able to quickly monitor our tests and, more importantly, the exceptions happening during that session. This is also real time, so any other requests sent to our API show up as well. There are more metrics that Application Insights offers; for now, we're happy that we were able to collect information (even custom data) from our Web API.
The RESULTS:
How far do you want to go with Generics on test automation?
I've been working with certain developers and testers on providing generics to test certain Web APIs or web services. With that in mind, questions come up such as: what if I want to test objects (models) dynamically, so I can write a test method that takes a model name and then validates certain properties? Yes, you can do this through reflection, but how far do you want to go? In the testing world, we want good documentation, particularly on failure scenarios. We also have to remember that our code is itself a source of test case documentation. It's a fine line between extending generics with reflection and calling APIs directly. Take the following examples:
Example 1:
// Using Reflection to invoke methods
var objModel = GetTheType(namespaceName + "." + objModelName);
var helpergenericclass = GetTheType("XXX.XXX.XXX.HelperClassWebApi");
Task<AuthTokenResponse> userAuthToken = HelperClassWebApi.GetUserToken(userId, userPsw);
MethodInfo mi = helpergenericclass.GetMethod("GetObject");
MethodInfo miConstructed = mi.MakeGenericMethod(objModel);
var arguments = new object[] { webSrvcUrl, resource, method, urlParams, true, userAuthToken.Result };
var response = miConstructed.Invoke(null, arguments);
var profile = (UserProfile)response;
Assert.IsNotNull(profile, "Test Case Failed: User Profile is Null");
Assert.IsNotNull(profile.Advisories, "Test Case Failed: Advisories Object is Null");
Assert.IsFalse(String.IsNullOrEmpty(profile.LastName), "Test Case Failed: Last Name is Empty");
Example 2:
// Calling methods without Reflection
var profileGet = HelperClassWebApi.GetObject<UserProfile>(webSrvcUrl, resource, method,
    urlParams, true, userAuthToken.Result);
Assert.IsNotNull(profileGet, "Test Case Failed: User Profile is Null");
Assert.IsNotNull(profileGet.Advisories, "Test Case Failed: Advisories Object is Null");
Assert.IsFalse(String.IsNullOrEmpty(profileGet.LastName), "Test Case Failed: Last Name is Empty");
Both examples achieve the same results. However, as a developer or tester, strongly typed (type-safe) names come in handy when writing test automation. For Example 1, with reflection, you have to:
- Ensure that the arguments you pass exactly match the parameters specified in the called method
- Verify whether the called method returns data or not (void or T)
This practice, while a good exercise in generics, adds overhead when writing automated tests for users who share the same codebase. More importantly, if you bring in other developers to write automated tests, they end up spending more time understanding the code. Even worse, if an issue does occur, you need to make sure the failure comes from a genuinely failing test rather than from the test code itself.
In this case, Example 2 is sufficient: you get back a strongly typed object, compile-time checking during development, IntelliSense when invoking the method, and certainty about whether the method returns an object or not.
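For reference, here's a rough sketch of what the strongly typed helper behind Example 2 might look like. The real HelperClassWebApi isn't shown in this post, so the body, the useAuth parameter name and the AuthTokenResponse.Token property are all my assumptions:
using System.Net.Http;
using Newtonsoft.Json;

public static class HelperClassWebApi
{
    // Call the service and deserialize the JSON response into T.
    public static T GetObject<T>(string webSrvcUrl, string resource, string method,
        string urlParams, bool useAuth, AuthTokenResponse authToken)
    {
        using (var client = new HttpClient())
        {
            if (useAuth)
                client.DefaultRequestHeaders.Add("Authorization", "Bearer " + authToken.Token); // property name assumed
            var response = client.GetAsync(webSrvcUrl + "/" + resource + "/" + method + "?" + urlParams).Result;
            response.EnsureSuccessStatusCode();
            return JsonConvert.DeserializeObject<T>(response.Content.ReadAsStringAsync().Result);
        }
    }
}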
In summary, make sure you use generics the right way, and the right way is to look at adoption and ease of use. There are many practices out in the real world; in my experience, develop generics only where they genuinely help. Good article to read up on: