The need for a directory of users arises when multiple devices share the same network. It is crucial to keep this directory in one central place, and in Windows networks that place is Active Directory. It validates and authenticates the users accessing resources on the domain, enabling single sign-on.
In this blog, we will demonstrate how to implement Active Directory in a .NET application.
What is Active Directory?
Active Directory is a directory service developed by Microsoft to manage users and devices on a network. It can also be described as a set of services that connect users with the network resources they need to accomplish their work. Before gaining access to these resources, network users must be validated.
Let us consider a C# .NET application that validates users against Active Directory on the login page. Before implementing Active Directory, we would use ASP.NET membership to validate a user on the login page, with code similar to the snippet below.
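The original membership snippet is not reproduced here; a typical check with the ASP.NET membership provider looks something like the sketch below (the control and label names are assumptions):

```csharp
// Login button handler using the classic ASP.NET membership provider
// (requires System.Web.Security; txtUsername, txtPassword and
// lblMessage are assumed page controls).
protected void btnLogin_Click(object sender, EventArgs e)
{
    if (Membership.ValidateUser(txtUsername.Text, txtPassword.Text))
    {
        // Issue the forms-authentication cookie and redirect
        FormsAuthentication.RedirectFromLoginPage(txtUsername.Text, false);
    }
    else
    {
        lblMessage.Text = "Invalid username or password.";
    }
}
```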
Now, to create our own function to validate the user through Active Directory, we use the code below.
VerifyUserAD accepts three parameters: Username, Password and ReturnMsg. ReturnMsg carries the error message if validation of the user against Active Directory fails. Refer to the code below.
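The original function body is not shown here; a minimal sketch along these lines, using the System.DirectoryServices.AccountManagement namespace, is one way to implement it (the domain name "MYDOMAIN" and the message strings are assumptions):

```csharp
using System.DirectoryServices.AccountManagement;

public static class AdValidator
{
    // Validates the credentials against the domain and reports
    // the outcome through returnMsg, as described in the text.
    public static bool VerifyUserAD(string username, string password, out string returnMsg)
    {
        returnMsg = string.Empty;
        try
        {
            using (var context = new PrincipalContext(ContextType.Domain, "MYDOMAIN"))
            {
                if (context.ValidateCredentials(username, password))
                {
                    returnMsg = "authenticated";
                    return true;
                }
                returnMsg = "Invalid username or password.";
                return false;
            }
        }
        catch (PrincipalServerDownException)
        {
            returnMsg = "The Active Directory server could not be reached.";
            return false;
        }
    }
}
```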
Once the user is validated, the function VerifyUserAD returns an ‘authenticated’ message, based on which the user can take the relevant follow-up actions.
To bypass ASP.NET membership entirely, use Active Directory to validate the user. For an existing application, you can still maintain and access a copy of the Users table in the database, meaning there is no need to modify the whole application end-to-end.
For role-based implementation, we use the DirectorySearcher class to fetch the property ‘memberOf’ for that user in Active Directory, as shown in the code below.
The SearchResult then gives us the list of groups to which the user is assigned.
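A sketch of fetching the ‘memberOf’ property with DirectorySearcher might look like this (the LDAP path and the sAMAccountName filter value are assumptions):

```csharp
using System;
using System.DirectoryServices;

public static class AdGroups
{
    public static void PrintGroups(string username)
    {
        using (var entry = new DirectoryEntry("LDAP://MYDOMAIN"))
        using (var searcher = new DirectorySearcher(entry))
        {
            // Find the user entry and request only the memberOf property
            searcher.Filter = $"(&(objectClass=user)(sAMAccountName={username}))";
            searcher.PropertiesToLoad.Add("memberOf");

            SearchResult result = searcher.FindOne();
            if (result != null)
            {
                // Each value is a group distinguished name,
                // e.g. CN=Developers,OU=Groups,DC=mydomain,DC=local
                foreach (object group in result.Properties["memberOf"])
                {
                    Console.WriteLine(group);
                }
            }
        }
    }
}
```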
Use the command below to install Directory Services using the package manager console.
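Assuming the Directory Services packages are pulled from NuGet, the Package Manager Console commands would look something like this:

```powershell
Install-Package System.DirectoryServices
Install-Package System.DirectoryServices.AccountManagement
```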
Hope this blog helps you implement Active Directory in your .NET application. For more information on .NET services, please visit https://www.metasyssoftware.com/dot-net
What are Microservices?
Historically, applications were Monolithic: the architecture was a single, tightly coupled, integrated unit. Microservices, on the contrary, are smaller, independent business modules. Each module in a Microservices architecture performs its own unique business functionality, at times with a dedicated database.
As shown in the above image, the architecture of Microservices consists of independent smaller units, which are interconnected and managed with the help of API Gateway.
Why opt for Microservices instead of Monolithic Applications?
A Monolithic application is a big container in which different smaller parts are combined and tightly coupled together, which creates several inseparable disadvantages.
Here are a few disadvantages of Monolithic services.
• Inflexible – Monolithic applications cannot be built using different technologies.
• Unreliable – One bug or issue in the application may result in the shutdown of the entire system.
• Not Scalable – The tightly coupled nature of a Monolithic application does not scale easily, as workloads cannot be easily distributed across multiple nodes or hardware.
• Hinders continuous deployment – Continuous delivery and deployment in short cycles is difficult due to the monolithic nature of the application.
• Longer development timelines – Developing Monolithic applications requires lengthy timelines, since every feature change demands rebuilding the entire application.
• Complex applications – Incorporating changes into a complex monolithic application becomes expensive and a maintenance nightmare.
As mentioned earlier, a microservices application is a collection of small independent services designed for different business purposes. In Microservices, each individual service is self-contained. Communication with each self-contained unit is managed by an API Gateway. There are various API Gateways available, and the client can communicate with different business functions of Microservices via the API Gateway.
Features of Microservices
• Decoupled components – Decoupled services in a Microservices architecture enable the entire application to be built, modified, and scaled up quickly and with ease.
• Componentization – As each service is an independent component, it can easily be replaced and upgraded individually.
• Undivided business capability – Each Microservice is lightweight and focuses on a single business capability.
• Autonomy – Developers and teams can work with minimal dependencies, thus increasing development speed and turnaround time.
• Continuous delivery – Allows frequent releases of features by systematic automation of application creation, testing, and approval.
• Responsibility – Microservices treat applications as products and not projects, ensuring the responsibility is in-built.
• Decentralized governance – With no fixed or standardized tool or any technology patterns, developers have the freedom to choose tools based on the requirements to accomplish the job within stipulated timelines.
• Agility – New features can be added easily and quickly developed. A Microservices architecture supports agile development.
Advantages of Microservices
• Independent development – all services are developed independently, according to their business purpose and usage.
• Independent deployment – the architecture of Microservices allows services to be deployed individually.
• Isolation of faults – the system continues to function even if one service or a part of the application ceases to work.
• Mixed technology stack – it is not mandatory to use a single platform for development. We can use multiple platforms and build the Microservices architecture as per the needs of the application.
• Individual scaling – scale different individual components and deploy them individually without affecting other components.
Best Practices to Design Microservices
• A separate data store for each Microservice
• Maintain a consistent level of code maturity across services
• A separate build for each Microservice
• Deploy services into containers
• Treat servers as stateless
Disadvantages of Microservices
• A huge number of services makes application management and tracking tough
• Developers need to solve issues pertaining to network latency and load balancing
How to create Microservices and API Gateway interface?
Note: This is for those who are familiar with ASP.Net project concepts.
In this demo, we’ll cover the following points:
1. Create two Microservices
2. Create an API Gateway
To create the demo project, VS2019 or VS Code and the .Net Core 3.1 SDK need to be installed on the machine.
Steps to Create a Microservices Demo Project
Step 1 – Create two .Net Core web API template projects for different purposes:
• First, a UserService project for user data
• Second, a ProductService project for product data
• Create a UserController in the UserService project and a ProductController in the ProductService project
• Add a simple action to each controller that returns a string for testing purposes
• If required, connect the API projects to a database
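A minimal sketch of the ProductController described above (the namespace and the test string are assumptions; the UserController would follow the same shape):

```csharp
using Microsoft.AspNetCore.Mvc;

namespace ProductService.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProductController : ControllerBase
    {
        // GET api/product – simple action returning a string for testing
        [HttpGet]
        public ActionResult<string> Get()
        {
            return "Response from ProductService";
        }
    }
}
```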
Step 2 – Test each of the above web API projects individually with the help of Postman
1.) Product Service Output
2.) User Service Output
Step 3 – Create an empty .Net Core web template project for the API Gateway, with a name of your choice. In this instance, we chose ‘APIGateway’
Step 4 – Include the Ocelot API Gateway dependency from the NuGet package manager
Step 5 –
• Create a JSON file to configure the API Gateway for the web APIs and assign it a name. In this instance, it is ‘ocelot.json’
• Include the following code in the JSON file to configure the API Gateway. In this demo project, the API Gateway is used for routing. An API Gateway can also serve other purposes, such as:
o Load balancing
o Service Discovery
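A minimal ocelot.json along these lines routes gateway calls to the two services (the ports and paths are assumptions; note that Ocelot versions before 16.0 use the key "ReRoutes" instead of "Routes"):

```json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/user",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5001 } ],
      "UpstreamPathTemplate": "/user",
      "UpstreamHttpMethod": [ "GET" ]
    },
    {
      "DownstreamPathTemplate": "/api/product",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5002 } ],
      "UpstreamPathTemplate": "/product",
      "UpstreamHttpMethod": [ "GET" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "http://localhost:5000"
  }
}
```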
Note: In the above image, the details of ocelot.json are given in the comments
Some details of ocelot.json to consider while configuring the API Gateway with the Ocelot package:
• The request is forwarded to the URL set by DownstreamPathTemplate, DownstreamHostAndPorts and DownstreamScheme.
• Ocelot uses the UpstreamPathTemplate URL to identify which DownstreamPathTemplate a request should be matched to.
• Ocelot uses UpstreamHttpMethod to distinguish between requests with the same URL but different HTTP verbs. We can set a specific list of HTTP methods, or leave it blank to allow any of them.
Step 6 – Add the JSON file to the application configuration, as shown below, in the Program.cs file
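A sketch of Program.cs for .Net Core 3.1, loading ocelot.json into the host configuration alongside appsettings.json:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Load the Ocelot route configuration created in Step 5
            .ConfigureAppConfiguration((hostingContext, config) =>
                config.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true))
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```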
Step 7 – Set up the Ocelot middleware in the ASP.Net project, as shown below, in the Startup.cs file
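A minimal Startup.cs sketch wiring up Ocelot in the gateway project:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

public class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration) => _configuration = configuration;

    public void ConfigureServices(IServiceCollection services)
    {
        // Register Ocelot with the configuration loaded from ocelot.json
        services.AddOcelot(_configuration);
    }

    public void Configure(IApplicationBuilder app)
    {
        // Hand requests to the Ocelot middleware pipeline
        app.UseOcelot().Wait();
    }
}
```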
Step 8 – Run the application to test that the API Gateway works. Before running, make sure all projects are marked as startup projects: to handle requests from the API Gateway application, UserService and ProductService must be running.
Note: If you are testing the project through VS2019, the following steps will help you mark all projects as startup projects.
Step 9 – Right-click on the main solution, click on Properties, and alter the settings as shown below
Step 10 – The screen below shows the final output. When you run the project, all three startup projects will be running, each potentially on a different port. We need to check that the Product and User services received a call from the API Gateway project, as shown below.
Hope this article has helped you understand the know-how of Microservices. If you have any questions, please feel free to drop them in our comments section. Happy to Help!
Exchange Web Services is an Application Program Interface (API) by Microsoft that allows programmers to fetch Microsoft Exchange items including calendars, contacts and emails. It can be used to read the email box and retrieve emails along with all the metadata such as headers, body and attachments. This is useful when the same information needs to be extracted from Exchange items repeatedly. The example I’m using in this article is retrieval of order details from emails, and is based on a recent assignment for a client at MetaSys.
How does it help?
The client wanted the ability to read the email inbox, import the order to the system and then send the email with the order details generated to the sales representatives.
With regular mail-reading APIs, the process would have been as follows: first the email is read, secondly the order is imported, and finally a new function is called that sends a new email with the order details to the sales representative. In this case the details of the requester are saved and reused for sending the confirmation email.
Using the EWS managed API, a more efficient solution was developed. The email was directly forwarded from the email inbox, without the need to create a separate email for the order confirmation. The confirmation email is created directly from the received email object, as is the forwarded email to the sales representative. The following code sample shows how the forwarded email is created and sent:
ResponseMessage responseMessage = message.CreateForward();
responseMessage.BodyPrefix = messageBodyPrefix;
In the code snippet above, “message” is the object which contains all the details of the order email, and we use it to create the new forwarded email without saving any details to the local system or to variables.
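To complete the forward, the response message still needs a recipient and a send call; a sketch along these lines (the recipient address is an assumption):

```csharp
// Continuing from the CreateForward() snippet above: add the sales
// representative as recipient, then send and keep a copy in Sent Items.
responseMessage.ToRecipients.Add("sales.rep@example.com");
responseMessage.SendAndSaveCopy();
```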
Similarly, we can use reply functionality of the API to maintain the email conversation chain by using the following code:
bool replyToAll = true;
ResponseMessage responseMessage = message.CreateReply(replyToAll);
string emailbody = "Please find the attachment below.";
responseMessage.BodyPrefix = emailbody;
Setting the “replyToAll” variable to true specifies that the mail will be sent to all the recipients who were present in the original conversation. The text contained in the variable “emailbody” will appear at the top of the email body of the conversation.
Additional features of EWS managed API
EWS provides useful features for dealing with emails with invalid delivery email addresses. The postmaster bot may send a mail delivery failure email to the same inbox, which can cause issues with the importing of other orders. These issues can be resolved in EWS by checking the subject lines, and automatically deleting delivery failure emails, or moving them to a separate folder. These orders can then be separately corrected and resent without interfering with the remaining orders in the inbox.
The following code sample can be used to move all the email items from the inbox to the “DidNotDelivered” folder:
Folder rootfolder = Folder.Bind(service, WellKnownFolderName.MsgFolderRoot);
foreach (Folder folder in rootfolder.FindFolders(new FolderView(100)))
{
    // The check below verifies that the "DidNotDelivered" folder is present in the email box
    if (folder.DisplayName == "DidNotDelivered")
    {
        FindItemsResults<Item> findResults =
            service.FindItems(WellKnownFolderName.Inbox, new ItemView(10));
        foreach (Item item in findResults.Items)
        {
            item.Move(folder.Id);
        }
    }
}
This is how EWS helped us simplify the processing of emails with minimal lines of code.
We hope that this article gives you some useful ideas for dealing with Microsoft Exchange using EWS. If you are having issues, feel free to get in touch with us at https://www.metasyssoftware.com/dot-net
In building web applications for clients, two important factors we at MetaSys focus on are performance, and speed of development. Good performance is crucial for the success of any web application, as users expect pages and screens to load instantly. Users will quickly stop using slow programs in favour of other web or mobile applications. Development speed is also important, as clients currently expect rapid application development.
We have experienced difficulties in both these areas using the Entity Framework, and in this article, I will be describing the cause of the issues, and the solutions that we at MetaSys came up with. If you are having performance issues with Entity Framework, this article might provide some useful insight and suggestions.
Our issues with the Entity Framework:
Let me quickly brief you on how we started using Entity Framework. We began development of a .NET web application roughly 10 years ago. At the time, Entity Framework was a new concept introduced by Microsoft. Its purpose was to let the developer more easily write SQL queries, including calculations; it also simplified CRUD operations and mapped query results into objects. We used Entity Framework for our web application, and during initial testing everything worked very well.
The performance issues arose after the client started using the application, particularly as the amount of data in the database grew. We used the ANTS Profiler tool to identify the root cause of the poor performance. It showed us that the stored procedures executed quickly, without any significant delay, but the Entity Framework code was taking a long time to render data on a page.
Another issue was that the SQL database for the application had more than 300 tables. Updating the Entity model with a change in any of the tables would take a very long time. It was also difficult to merge changes of only one developer, or only one module, as it would update the entire Entity model. This made it a challenge to release the application module-wise.
MetaSys Approach :
To overcome the performance issues, we first tried changing some of the settings of the EDMX, and then updated Entity Framework to the latest version. Neither made much difference to the performance. In the meanwhile, the application's database size and complexity kept growing as the application grew.
Eventually, we replaced Entity Framework with ADO.NET, and we immediately saw a significant improvement in performance. The difficulty we faced with the conversion was how to map the ADO.NET results into objects. We resolved this using the open-source Dapper ORM, a lightweight framework for mapping relational query results to objects. Like Entity Framework, it eases the handling of data in objects and supports multiple query results. This solution not only improved page loading times, but since there was no longer an entity model to update, developer time was also reduced significantly.
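The ADO.NET + Dapper pattern described above can be sketched as follows (the connection string, the Customers table and its columns are assumptions):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class CustomerRepository
{
    public static IEnumerable<Customer> GetActiveCustomers(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Dapper runs the query over plain ADO.NET and maps
            // each row to a Customer object, replacing the manual
            // DataReader-to-object handling.
            return connection.Query<Customer>(
                "SELECT Id, Name FROM Customers WHERE IsActive = @IsActive",
                new { IsActive = true });
        }
    }
}
```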
So far we have found that using ADO.Net with Dapper ORM solved all the problems we experienced with the Entity Framework.
Our team of .NET developers has more than two decades of experience building solutions with Microsoft technologies, from VB to the latest .Net Core applications. For more info: https://www.metasyssoftware.com/dot-net
These days, financial, marketing and e-commerce websites allow us to download reports and receipts in pdf form. The pdf file format is a convenient way of sharing information, as there is a high level of confidence that the user can open the document with the intended look and feel. This is true even for documents containing charts, images and text based on dynamic data. There are many pdf writing tools available online, of which two commonly used ones are wkhtmltopdf and NReco. This blog article details the recent switch we made from wkhtmltopdf to NReco, and the numerous benefits of the switch.
Our experience with wkhtmltopdf
In the past, we generally used wkhtmltopdf to implement pdf functionality in our web applications. It was a practical choice, as it is an open-source tool with which we have extensive development experience already. The converter tool is given a destination file path and a URL of the report web page. Since the download button is contained within the generated report in web page form, the pdf conversion adds an unnecessary report generation step. To avoid this inefficiency, we wanted to explore different pdf converter options.
Our experience with NReco
We came across a library in a NuGet package called .Net Reusable Components (NReco), which contains a collection of reusable components for the .NET platform including a pdf conversion tool. The only input the tool requires is either a URL to the web page or the report contents as an HTML string. NReco is easier to implement, requiring only two to three lines of code. Even reports containing charts and images created using a third-party tool can be rendered to a pdf without additional coding. All CSS, fonts and images in HTML are supported by the NReco conversion tool.
The NReco tool is easy to install, and performs efficiently, taking much less time than wkhtmltopdf to generate a pdf. Although we currently only use NReco for pdf conversion, many other tools are available.
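The two-to-three-line usage mentioned above might look like this sketch, based on the NReco.PdfGenerator package (the HTML string stands in for real report output):

```csharp
using System.IO;
using NReco.PdfGenerator;

public static class PdfReport
{
    public static void SaveReport(string reportHtml, string outputPath)
    {
        var converter = new HtmlToPdfConverter();
        // Convert the HTML report contents directly to pdf bytes
        byte[] pdfBytes = converter.GeneratePdf(reportHtml);
        File.WriteAllBytes(outputPath, pdfBytes);
    }
}
```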
A major advantage of NReco, is that it supports both the .Net framework and .Net Core. Since we are looking to upgrade a number of our applications to .Net Core, it saves us considerable development time if we can use the existing code for pdf conversion.
To conclude, using NReco instead of wkhtmltopdf for pdf conversion has many benefits including easy implementation, performance, and compatibility with .Net Core.
Like many others, we have been working on MVC 5 based web applications since 2013. With Microsoft planning significant investment into the open-source development platform .Net core, we saw the advantage of migrating our current applications to the new platform sooner rather than later.
The first version of .Net Core (1.0) was released by Microsoft in 2016, followed by several versions, most recently .Net Core 3.1.1 in January 2020. At the time we started the migration in 2019, we found .Net Core 2.2 to be a stable version, with a community advanced enough to answer our queries. The web application that we decided to convert to .Net Core was developed in 2017 on the .Net 4.5.1 MVC platform.
Evaluating the conversion risk is an essential first step before convincing the client to invest in the new technology. Several factors need to be considered, including the project timeline, the scale of the project and the available resources. Using a team that has worked with the technology for at least a year or two is the best option for reducing risk in such a conversion project. A great option is using interns as an additional resource, as the project provides them with the excitement of learning something new.
HOW to start?
The first step is to check the old application with the .NET Portability Analyzer tool. This tool analyzes assemblies and provides a detailed report on the .Net APIs that are missing for the applications or libraries to be portable to .Net Core. It is not a tool which will automatically convert a .NET MVC app to .NET Core, but it is a useful initial guide towards identifying the portable and non-portable items.
The tool details are available on the Microsoft website:
The tool can be downloaded using the link: https://marketplace.visualstudio.com/items?itemName=ConnieYau.NETPortabilityAnalyzer
The screenshots below show some of the tool outputs:
Creating the new project
It is not useful to open the entire MVC project as a .Net Core project immediately in Visual Studio (VS) 2017, as it will result in a huge list of errors that are difficult to address one by one. A better approach is to create an empty project and copy a few models, controllers, views or corresponding files at a time into the newly created .Net Core project in the VS 2017 environment. After each addition, build the project, analyze and fix the errors.
What were my next steps? Let me give you some technical bullets here.
One of the important steps is to move the connection string settings from Web.config to JSON settings in the file named appsettings.json.
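The appsettings.json entry might look like this (the server and database names are placeholders):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=myServer;Database=myDb;Trusted_Connection=True;"
  }
}
```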
It is necessary to add a middleware file for the session and call it in the StartUp.cs file, so that all the session objects that were set in Global.asax (which does not exist in a .Net Core project) go into the middleware file and are registered as a service in StartUp.cs. The session dependency is included by adding AddSession in ConfigureServices of StartUp.cs.
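Registering and enabling session state in StartUp.cs can be sketched as follows (the timeout value is an assumption):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDistributedMemoryCache(); // in-memory backing store for session
    services.AddSession(options => options.IdleTimeout = TimeSpan.FromMinutes(30));
}

public void Configure(IApplicationBuilder app)
{
    app.UseSession(); // enable the session middleware in the request pipeline
}
```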
Convert all of your separately created class libraries to .Net Standard class libraries wherever required, by creating a .NET Standard project and adding references to it wherever needed in the new .NET Core web app project you have created.
All static files such as images, icons, CSS, JS and email templates need to be copied into wwwroot. The file references have to be changed across the project wherever they occur.
The RouteConfig file should be replaced by adding MapRoute in the StartUp.cs file.
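The MVC 5 RouteConfig.cs default route translates into a MapRoute call in StartUp.cs; a sketch in the .Net Core 2.2 style the article targets:

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseMvc(routes =>
    {
        // Equivalent of routes.MapRoute in the old RouteConfig.cs
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
```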
We can create Set and Get extension functions, like SetObject and GetObject, for handling session operations as shown below.
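A sketch of such SetObject/GetObject extensions, serializing objects to JSON in session (Newtonsoft.Json is assumed as the serializer):

```csharp
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;

public static class SessionExtensions
{
    // Store any object in session as a JSON string
    public static void SetObject(this ISession session, string key, object value)
    {
        session.SetString(key, JsonConvert.SerializeObject(value));
    }

    // Read it back, or default(T) when the key is absent
    public static T GetObject<T>(this ISession session, string key)
    {
        var value = session.GetString(key);
        return value == null ? default : JsonConvert.DeserializeObject<T>(value);
    }
}
```

Usage is then simply HttpContext.Session.SetObject("cart", cart) and HttpContext.Session.GetObject&lt;Cart&gt;("cart").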
We have two parts in our project, a Web App and a Web API, so we have to add DI (Dependency Injection) for calling WebAPIClient and HostingEnvironment (IWebAPIClient webapiclient, IHostingEnvironment env).
What can be done on SSL redirection?
We have to add the following setting in the appsettings.json file.
We also have to add the following code in Startup.cs.
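The Startup.cs part of the SSL redirection can be sketched like this (the "HttpsPort" configuration key is an assumption and would be the setting added to appsettings.json):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Port read from appsettings.json, e.g. "HttpsPort": 443
    services.AddHttpsRedirection(options =>
        options.HttpsPort = Configuration.GetValue<int>("HttpsPort"));
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (!env.IsDevelopment())
    {
        app.UseHsts(); // send Strict-Transport-Security headers outside dev
    }
    app.UseHttpsRedirection(); // redirect HTTP requests to HTTPS
}
```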
Third party Dlls
Every project uses some third party Dlls for specific purposes. For our application, third party Dlls like EPPlus and the ICSharpCode.SharpZipLib library worked on the .NET Core project without any issues. However, it is possible that certain third party toolkits are not compatible with .NET Core. Some can be downloaded from NuGet or obtained by contacting the third party vendor.
There may be instances where third party assemblies used in the project do not work and cannot be bought from third party vendors. In this case, I would recommend finding a solution that omits the tool altogether. It pays to think of this early whilst updating any web app that might be migrated in the future. This way incompatible third party Dlls can be avoided in favor of compatible tools, in order to save work at the migration stage. One such example is Nreco PDF to Image renderer, which has a version that is compatible with .Net Core available from a third party vendor.
The technical points in this article refer to architectural changes. I will cover the common conversion issues and deployment in the next article, so stay tuned…
For more details regarding the kind of ASP web application projects which we handle https://www.metasyssoftware.com/case-study-dotnet