Implementing Active Directory in a .NET application with Multiple Roles

The need for a user directory arises when various devices are used on the same network. It is crucial to keep the directory in one central source, known as Active Directory. It helps validate and authenticate multiple users accessing resources on the domain with a single sign-on.

In this blog, we will demonstrate how to implement Active Directory in a .NET application.


What is Active Directory?

Active Directory is a directory service developed by Microsoft to manage multiple devices on a single network. It can also be defined as a set of services that connect users with the network resources they need to accomplish projects. To obtain access to those resources, network users must be validated against the directory.

Let us consider a C# .NET application that validates users against Active Directory on the login page. Before implementing Active Directory, we used ASP.NET Membership to validate a user on the login page. The code should look similar to the code below.
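As a sketch, validation with the standard ASP.NET Membership provider typically looks like this (the page, control, and label names are illustrative, not from the original project):

```csharp
using System;
using System.Web.Security;
using System.Web.UI;

public partial class Login : Page
{
    // Login button handler using ASP.NET Membership (illustrative names)
    protected void btnLogin_Click(object sender, EventArgs e)
    {
        // Membership.ValidateUser checks the credentials against the membership database
        if (Membership.ValidateUser(txtUsername.Text, txtPassword.Text))
        {
            // Issue the forms-authentication cookie and redirect to the requested page
            FormsAuthentication.RedirectFromLoginPage(txtUsername.Text, false);
        }
        else
        {
            lblMessage.Text = "Invalid username or password.";
        }
    }
}
```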


Now, to create our own function to validate the user through Active Directory, we use the code below.


VerifyUserAD accepts three parameters, namely Username, Password and ReturnMsg. ReturnMsg returns the error message if validation of the user against Active Directory fails. Refer to the code below.
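A sketch of what such a function might look like, using the System.DirectoryServices.AccountManagement API; the domain name and message strings are illustrative assumptions:

```csharp
using System;
using System.DirectoryServices.AccountManagement;

public static class AdValidator
{
    public static bool VerifyUserAD(string userName, string password, ref string returnMsg)
    {
        try
        {
            // Connect to the domain; "MYDOMAIN" is an illustrative domain name
            using (var context = new PrincipalContext(ContextType.Domain, "MYDOMAIN"))
            {
                // ValidateCredentials checks the username/password against Active Directory
                if (context.ValidateCredentials(userName, password))
                {
                    returnMsg = "authenticated";
                    return true;
                }
                returnMsg = "Invalid username or password.";
                return false;
            }
        }
        catch (Exception ex)
        {
            // Surface connection or configuration failures through ReturnMsg
            returnMsg = ex.Message;
            return false;
        }
    }
}
```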


Once the user is validated, the function VerifyUserAD returns an ‘authenticated’ message, based on which the application can take the relevant actions.

To bypass ASP.NET Membership entirely, use Active Directory to validate the user. For an existing application, you can still maintain and access a copy of the users in the database, meaning there is no need to modify the whole application end-to-end.

Role-based implementation

For role-based implementation, we use the DirectorySearcher class to fetch the property ‘memberOf’ for that user in Active Directory, as shown in the code below.
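A sketch of such a lookup; the LDAP path and account name are illustrative:

```csharp
using System;
using System.DirectoryServices;

// Fetch the "memberOf" property for a user; the LDAP path and account name are illustrative
using (var entry = new DirectoryEntry("LDAP://MYDOMAIN"))
using (var searcher = new DirectorySearcher(entry))
{
    searcher.Filter = "(&(objectClass=user)(sAMAccountName=jsmith))";
    searcher.PropertiesToLoad.Add("memberOf");

    SearchResult result = searcher.FindOne();
    if (result != null)
    {
        // Each value of "memberOf" is the distinguished name of a group the user belongs to
        foreach (object group in result.Properties["memberOf"])
        {
            Console.WriteLine(group);
        }
    }
}
```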


This enables us to find the list of groups to which the user is assigned using the SearchResult.


Use the command below to install Directory Services using the package manager console.

Install-Package System.DirectoryServices

We hope this blog helps you implement Active Directory in a .NET application. For more information on .NET services, please visit

What, Why, and How of Microservices?

What are Microservices?

Historically, applications were monolithic: the architecture was a unified, tightly coupled, integrated unit. Microservices, on the contrary, are smaller, independent business modules. Each module in a Microservices architecture performs its own unique business function, at times with a dedicated database.

Monolith and Microservices

As shown in the above image, the architecture of Microservices consists of independent smaller units, which are interconnected and managed with the help of API Gateway.

Why opt for Microservices instead of Monolithic Applications?

A Monolithic application is a big container in which different smaller parts are combined and coupled tightly together, which creates several inherent disadvantages.

Here are a few disadvantages of Monolithic services.

• Inflexible – Monolithic applications cannot be built using different technologies.
• Unreliable – One bug or issue in the application may result in the shutdown of the entire system.
• Not scalable – The tightly coupled nature of a Monolithic application does not scale easily, as workloads cannot be distributed across multiple nodes or hardware.
• Hinders continuous deployment – Continuous delivery and deployment in short cycles is difficult due to the monolithic nature of the application.
• Longer development timelines – The development of Monolithic applications requires lengthy timelines, since every feature demands rebuilding the entire application.
• Complex applications – Incorporating changes in complex Monolithic applications becomes expensive and a maintenance nightmare.

As mentioned earlier, a microservices application is a collection of small independent services designed for different business purposes. In Microservices, each individual service is self-contained. Communication with each self-contained unit is managed by an API Gateway. There are various API Gateways available, and the client can communicate with different business functions of Microservices via the API Gateway.

Features of Microservices

• Decoupled components – Decoupled services in a Microservices architecture enable the entire application to be built, modified, and scaled up quickly and with ease.
• Componentization – As each service is an independent component, services can easily be replaced and upgraded individually.
• Undivided business capability – Each Microservice is simple and focuses on a single business capability.
• Autonomy – Developers and teams can work with minimal dependencies, thus increasing development speed and turnaround time.
• Continuous delivery – Allows frequent releases of features by systematic automation of application creation, testing, and approval.
• Responsibility – Microservices treat applications as products and not projects, ensuring that responsibility is built in.
• Decentralized governance – With no fixed or standardized tool or technology patterns, developers have the freedom to choose tools based on the requirements to accomplish the job within stipulated timelines.
• Agility – New features can be developed and added easily and quickly. A Microservices architecture supports agile development.

Advantages of Microservices

• Independent development – All services are independent, according to their business purpose and usage.
• Independent deployment – The Microservices architecture allows services to be deployed individually.
• Fault isolation – The system continues to function even if one service or a part of the application ceases to work.
• Mixed technology stack – It is not mandatory to use only one platform for development. We can use multiple platforms and build the Microservices architecture as per the needs of the application.
• Individual scaling – Scale and deploy individual components without affecting other components.

Best Practices to Design Microservices

• Separate data store for each Microservice
• Maintain a similar level of code maturity across services
• Separate build for each Microservice
• Deploy services into containers
• Treat servers as stateless

Disadvantages of Microservices

• A huge number of services makes the application tough to manage and track
• Developers need to solve issues pertaining to network latency and load balancing

How to create Microservices and API Gateway interface?

Note: This is for those who are familiar with ASP.Net project concepts.

In this demo, we’ll cover the following points:

1. Create two Microservices
2. Create an API Gateway
3. To create the demo project, VS2019 or VS Code and the .Net Core 3.1 SDK need to be installed on the machine

Steps to Create a Microservices Demo Project

Step 1
• Create two .Net Core web API template projects for different purposes
• First, a UserService project for user data
• Second, a ProductService project for product data
• Create a UserController in the UserService project and a ProductController in the ProductService project
• Add a simple action into each controller that returns a string for testing purposes
• If required, connect the API projects to the database
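The steps above can be sketched with a minimal test action for, say, the ProductController (the namespace, route, and return string are illustrative):

```csharp
using Microsoft.AspNetCore.Mvc;

namespace ProductService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProductController : ControllerBase
    {
        // Simple test action that returns a string, so the service can be verified in isolation
        [HttpGet]
        public ActionResult<string> Get()
        {
            return "Hello from ProductService";
        }
    }
}
```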


Step 2 – Test the above web API projects individually with the help of Postman

1.) Product Service Output

Product service output

2.) User Service Output

User service output


Step 3 – Create an empty .Net Core web template project for the API Gateway with the desired name. In this instance, we chose ‘APIGateway’

Step 4 – Include the Ocelot API Gateway dependency from the NuGet Package Manager


Step 5 –
• Create a JSON file to configure API Gateway for web API and assign a name. In this instance, it is ‘ocelot.json’
• Include the following configuration in the JSON file to set up the API Gateway. In this demo project, the API Gateway is used for routing. An API Gateway can serve several purposes, such as:
o Routing
o Caching
o Logging
o Authentication
o Authorization
o Load balancing
o Service Discovery
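A minimal ocelot.json for routing to the two services might look like this; the ports and paths are illustrative, and note that newer Ocelot versions use "Routes" where older ones use "ReRoutes":

```json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/product",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "UpstreamPathTemplate": "/gateway/product",
      "UpstreamHttpMethod": [ "GET" ]
    },
    {
      "DownstreamPathTemplate": "/api/user",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5002 } ],
      "UpstreamPathTemplate": "/gateway/user",
      "UpstreamHttpMethod": [ "GET" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://localhost:5000"
  }
}
```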


Note: In the above image, details of ocelot.json are in the comments

Some details of ocelot.json to consider while configuring the API Gateway with the Ocelot package:

• The request is forwarded to the URL set by DownstreamPathTemplate, DownstreamHostAndPorts and DownstreamScheme.
• Ocelot uses the UpstreamPathTemplate URL to identify which DownstreamPathTemplate to use for a request.
• Ocelot uses UpstreamHttpMethod to distinguish between requests with the same URL but different HTTP verbs. We can set a specific list of HTTP methods, or leave it blank to allow any of them.

Step 6 – Configure the JSON file for application configuration, as shown below, in the Program.cs file

Program.cs file
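As a sketch, assuming the generic-host template of .Net Core 3.1, the ocelot.json file can be added to the application configuration in Program.cs like this:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((hostingContext, config) =>
            {
                // Make the Ocelot route configuration available to the app
                config.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
            })
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```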

Step 7 – Set up the Ocelot middleware in the ASP.Net project, as shown below, in the Startup.cs file

Startup.cs file
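A minimal Startup.cs for the gateway might look like this, assuming the Ocelot NuGet package added in Step 4:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration) => _configuration = configuration;

    public void ConfigureServices(IServiceCollection services)
    {
        // Registers Ocelot using the routes loaded from ocelot.json
        services.AddOcelot(_configuration);
    }

    public void Configure(IApplicationBuilder app)
    {
        // UseOcelot is asynchronous; Wait() blocks until the gateway pipeline is built
        app.UseOcelot().Wait();
    }
}
```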

Step 8 – Run the application to test that the API Gateway is working. Before running, make sure all projects are marked as startup projects. To handle requests from the API Gateway application, UserService and ProductService must be running.

Note: If you are testing the project through VS2019, the following steps will help you mark all projects as startup projects.

Step 9 – Right-click on the main solution, click on Properties and alter the settings as shown below

Step 10 – The screen below shows the final output. When you run the project, all three startup projects will be running, possibly on different ports. We need to check that the Product and User services received a call from the API Gateway project, as shown below.

API Gateway project


We hope this article has helped you understand the basics of Microservices. If you have any questions, please feel free to drop them in our comments section. Happy to help!
Happy Coding!

Device and Browser Testing Strategies

Testing without proper planning can cause major problems for an app release, as it can result in compromised software quality and an increase in total cost. Defining and following a suitable, thorough testing procedure is a very important part of the development process and should be considered from the very beginning. Time should be specifically allocated to manual testing on devices and browsers, as this is a low-cost strategy to significantly improve the quality of the app release. In this article, I will share some of the strategies we follow at MetaSys for real device and browser testing.

There are four points that we consider when defining our testing strategy.

  1. The first point is determining which devices and browsers will be used for testing. This is entirely dependent on the project requirements, and the development team analyses the application use cases to make the selection based on the following principles:
  • For web applications, we usually test on the three most commonly used browsers (Chrome, Firefox and Safari). If time allows for more extensive testing, we will also test on other browsers like Internet Explorer and Microsoft Edge.
  • For Device testing of web applications, we choose the devices based on the functional requirements and priorities of the applications. In other words, if a web application is supposed to run especially well on any particular device we focus the testing on the corresponding commonly used browsers with the appropriate resolution. For instance, for the Android platform we focus on Chrome and Firefox, whereas for the iOS platform we focus on Safari and Chrome.
  • For Native applications we directly test the application on the devices themselves, rather than using an emulator. This provides the most accurate feedback in terms of functionality and application performance.
  2. There are instances where the project timeline and/or budget limits the amount of testing that we can do. It is very important to identify these situations, and to develop strategies in order to still deliver high-quality software to the client. At MetaSys we handle these cases by focusing on high-level general testing, which covers most of the UI and the functional parts of the applications.
  3. For functional testing of web applications, we utilise automation as much as possible. For repetitive testing of browsers, we usually design automated test cases. Using automation not only helps save the testers' time, it is also very useful for retesting resolved issues. We use the Selenium WebDriver tool for automation testing, and Microsoft Team Foundation Server 2019 and the Microsoft Test Management tools for bug reporting and test case management.
  4. For web applications, we put a strong emphasis on performance, in addition to the ‘look and feel’. The speed of the app is one of the most important factors that determines the user experience. For performance testing we use the Apache JMeter and New Relic tools, which give very accurate results regarding application performance. The New Relic tool also provides an analysis of database-query-level problems, and gives many more reports and real-time graphs. This helps significantly with troubleshooting and improving performance.

At MetaSys, we have a team of experienced .Net developers who build solutions using Microsoft technologies. We have done web application development using ASP.Net Core, .Net & ASP.Net Framework, Visual Studio, Microsoft SQL Server, MVC, Team Foundation Server, Javascript and JQuery. For more info.

What is EWS?

Exchange Web Services is an Application Program Interface (API) by Microsoft that allows programmers to fetch Microsoft Exchange items including calendars, contacts and emails. It can be used to read the email box and retrieve emails along with all the metadata such as headers, body and attachments. This is useful when the same information needs to be extracted from Exchange items repeatedly. The example I’m using in this article is retrieval of order details from emails, and is based on a recent assignment for a client at MetaSys.

How does it help?

The client wanted the ability to read the email inbox, import the order to the system and then send the email with the order details generated to the sales representatives.

With regular mail-reading APIs, the process would have been as follows. First the email is read, secondly the order is imported, and finally a new function is called that sends a new email with the order details to the sales representative. In this case the details of the requester are saved and reused for sending the confirmation email.

Using the EWS managed API, a more efficient solution was developed. The email was directly forwarded from the email inbox, without the need to create a separate email for the order confirmation. The confirmation email is created directly from the received email object, as is the forwarded email to the sales representative. The following code sample shows how the forwarded email is created and sent:

ResponseMessage responseMessage = message.CreateForward();
responseMessage.BodyPrefix = messageBodyPrefix;

In the code snippet above, “message” is the object which contains all the details of the order email and we use it to create the new forward email without saving any details to the local system or variables.

Similarly, we can use reply functionality of the API to maintain the email conversation chain by using the following code:

bool replyToAll = true;
ResponseMessage responseMessage = message.CreateReply(replyToAll);

string emailbody = "Please find the attachment below.";
responseMessage.BodyPrefix = emailbody;

Setting the “replyToAll” variable to true specifies that the mail will be sent to all the recipients who were present in the original conversation. The text contained in the variable “emailbody” will appear at the top of the email body of the conversation.

Additional features of EWS managed API

EWS provides useful features for dealing with emails with invalid delivery email addresses. The postmaster bot may send a mail delivery failure email to the same inbox, which can cause issues with the importing of other orders. These issues can be resolved in EWS by checking the subject lines, and automatically deleting delivery failure emails, or moving them to a separate folder. These orders can then be separately corrected and resent without interfering with the remaining orders in the inbox.

The following code sample can be used to move all the email items from the inbox to the “DidNotDelivered” folder:

Folder rootfolder = Folder.Bind(service, WellKnownFolderName.MsgFolderRoot);
FindItemsResults<Item> findResults =
    service.FindItems(WellKnownFolderName.Inbox, new ItemView(10));
foreach (Folder folder in rootfolder.FindFolders(new FolderView(100)))
{
    // Check whether the "DidNotDelivered" folder is present in the email box
    if (folder.DisplayName == "DidNotDelivered")
    {
        var fid = folder.Id;
        foreach (Item item in findResults.Items)
        {
            item.Move(fid);
        }
    }
}

This is how EWS helped us simplify the processing of emails with minimal lines of code.

We hope that this article gives you some useful ideas for dealing with Microsoft Exchange using EWS. If you are having issues, feel free to get in touch with us at

Using the NReco pdf writing tool

These days financial, marketing and e-commerce websites allow us to download reports and receipts in pdf form. The pdf file format is a convenient way of sharing information, as there is a high level of confidence that the user can open the document with the intended look and feel. This is even true for documents containing charts, images and text based on dynamic data. There are many pdf writing tools available online, two commonly used ones being wkhtmltopdf and NReco. This blog article details the recent switch we made from wkhtmltopdf to NReco, and the numerous benefits of the switch.

Our experience with wkhtmltopdf

In the past, we generally used wkhtmltopdf to implement pdf functionality in our web applications. It was a practical choice, as it is an open-source tool with which we already had extensive development experience. The converter tool is given a destination file path and a URL of the report web page. Since the download button is contained within the generated report web page, the pdf conversion adds an unnecessary report generation step. To avoid this inefficiency, we wanted to explore different pdf converter options.

Our experience with NReco

We came across a library in a NuGet package called .Net Reusable Components (NReco), which contains a collection of reusable components for the .NET platform including a pdf conversion tool. The only input the tool requires is either a URL to the web page or the report contents as an HTML string. NReco is easier to implement, requiring only two to three lines of code. Even reports containing charts and images created using a third-party tool can be rendered to a pdf without additional coding. All CSS, fonts and images in HTML are supported by the NReco conversion tool.
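For illustration, converting an HTML string to a pdf with the NReco.PdfGenerator package boils down to a few lines; the HTML content and output file name here are placeholders:

```csharp
using System.IO;
using NReco.PdfGenerator;

class PdfDemo
{
    static void Main()
    {
        var converter = new HtmlToPdfConverter();

        // Convert an HTML string (e.g. the report contents) directly to pdf bytes
        byte[] pdfBytes = converter.GeneratePdf("<html><body><h1>Report</h1></body></html>");
        File.WriteAllBytes("report.pdf", pdfBytes);
    }
}
```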

The NReco tool is easy to install, and performs efficiently, taking much less time than wkhtmltopdf to generate a pdf. Although we currently only use NReco for pdf conversion, many other tools are available.

A major advantage of NReco is that it supports both the .Net Framework and .Net Core. Since we are looking to upgrade a number of our applications to .Net Core, it saves us considerable development time if we can use the existing code for pdf conversion.

To conclude, using NReco instead of wkhtmltopdf for pdf conversion has many benefits including easy implementation, performance, and compatibility with .Net Core.

About us

Our team of .Net developers have successfully delivered applications using ASP.Net Core, .Net & ASP.Net framework, Visual Studio, Microsoft SQL Server, Team Foundation Server, Javascript and JQuery. For more info –

Converting an MVC Web App to a .Net Core Web App


Like many others, we have been working on MVC 5 based web applications since 2013. With Microsoft planning significant investment in the open-source development platform .Net Core, we saw the advantage of migrating our current applications to the new platform sooner rather than later.
The first version, .Net Core 1.0, was released by Microsoft in 2016, followed by several versions, most recently .Net Core 3.1.1 in January 2020. At the time we started the migration in 2019, we found .Net Core 2.2 to be a stable version with a well-developed community advanced enough to answer our queries. The web application that we decided to convert to .Net Core was developed in 2017 on the .Net 4.5.1 MVC platform.

Initial considerations
Evaluating the conversion risk is an essential first step before convincing the client to invest in the new technology. Several factors need to be considered, including the project timeline, the scale of the project and the available resources. Using a team that has worked with the technology for at least a year or two is the best option for reducing risk in such a conversion project. A great option is using interns as an additional resource, as the project provides them with the excitement of learning something new.

How to start?
The first step is to check the old application with the .NET Portability Analyzer tool. This tool analyzes assemblies and provides a detailed report on the .Net APIs that are missing for the applications or libraries to be portable to .Net Core. It is not a tool that will automatically convert the .NET MVC app to .NET Core, but it is a useful initial guide for identifying the portable and non-portable items.
The tool details are available on the Microsoft website:
The tool can be downloaded using the link:

The screenshots below show some of the tool outputs:
Portability Summary




Missing Assemblies


Creating the new project
It is not useful to immediately open the entire MVC project as a .Net Core project in Visual Studio (VS) 2017, as it will result in a huge list of errors that are difficult to address one by one. A better approach is to create an empty project and copy a few models, controllers, views or corresponding files at a time into the newly created .Net Core project in the VS 2017 environment. After each addition, build the project, then analyze and fix the errors.
What were my next steps? Let me give you some technical bullets here.
One of the important steps is to move the connection string settings from Web.Config to JSON settings in a file named AppSettings.json.
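For example, a connection string that used to live in the connectionStrings section of Web.Config takes this shape in AppSettings.json (the server and database names are illustrative):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=myServer;Database=myDb;Trusted_Connection=True;"
  }
}
```

It can then be read in code with Configuration.GetConnectionString("DefaultConnection").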
It is necessary to add a middleware file for the session and call it in the StartUp.cs file, so that all the session objects set in the Global.asax file (which does not exist in the .Net Core project) go into the middleware file and are registered as a service in StartUp.cs. The session dependency is included by adding AddSession into ConfigureServices of StartUp.cs.
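A sketch of that registration, assuming the default in-memory session store:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Session state requires a distributed cache backing store; the
        // in-memory cache is the simplest choice for a single server
        services.AddDistributedMemoryCache();
        services.AddSession();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Enable the session middleware before MVC so controllers can use it
        app.UseSession();
    }
}
```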
Convert all of your separately created class libraries to .Net Standard class libraries wherever required by creating a .NET Standard project, and add the references wherever required in the new .NET Core web app project you have created.
All static files such as images, icons, CSS, JS and email templates need to be copied into wwwroot. The file locations have to be changed across the project wherever they are referenced.
The Route.config file should be replaced by adding the MapRoute in the StartUp.cs file.
We can create Set and Get extension functions, like SetObject and GetObject, for handling session operations as shown below

set object and get object
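A common implementation of such extensions, assuming the Newtonsoft.Json package for serialization, is:

```csharp
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

public static class SessionExtensions
{
    // Serialize any object to JSON and store it in the session under the given key
    public static void SetObject(this ISession session, string key, object value)
    {
        session.SetString(key, JsonConvert.SerializeObject(value));
    }

    // Read the JSON back and deserialize it, returning default(T) when the key is absent
    public static T GetObject<T>(this ISession session, string key)
    {
        var value = session.GetString(key);
        return value == null ? default(T) : JsonConvert.DeserializeObject<T>(value);
    }
}
```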

Our project has two parts, a web app and a web API, so we have to add DI (dependency injection) for calling WebAPIClient and HostingEnvironment (IWebAPIClient webapiclient, IHostingEnvironment env).

What can be done about SSL redirection?
We have to add the following setting in the AppSettings.json file


We also have to add the following code in Startup.cs

startup cs
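A sketch of the corresponding Startup.cs code; the "HttpsPort" configuration key is an illustrative assumption matching an entry in AppSettings.json:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    private readonly IConfiguration _config;

    public Startup(IConfiguration config) => _config = config;

    public void ConfigureServices(IServiceCollection services)
    {
        // Read the HTTPS port from configuration (e.g. an "HttpsPort" key in AppSettings.json)
        services.AddHttpsRedirection(options =>
        {
            options.HttpsPort = _config.GetValue<int>("HttpsPort");
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Redirect all HTTP requests to HTTPS
        app.UseHttpsRedirection();
    }
}
```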

Third-party DLLs
Every project uses some third-party DLLs for specific purposes. For our application, third-party DLLs like the EPPlus and ICSharpCode.SharpZipLib libraries worked on the .NET Core project without any issues. However, it is possible that certain third-party toolkits are not compatible with .NET Core. Some can be downloaded from NuGet or obtained by contacting the third-party vendor.
There may be instances where third-party assemblies used in the project do not work and cannot be bought from third-party vendors. In this case, I would recommend finding a solution that omits the tool altogether. It pays to think of this early whilst updating any web app that might be migrated in the future. This way, incompatible third-party DLLs can be avoided in favor of compatible tools, in order to save work at the migration stage. One such example is the NReco PDF-to-image renderer, which has a .Net Core compatible version available from a third-party vendor.

The technical points in this article refer to architectural changes. I will cover the common conversion issues and deployment in the next article, so stay tuned…

For more details regarding the kind of ASP web application projects which we handle