Migrating from ASP.NET to ASP.NET Core

iteo
14 min read · Nov 20, 2020


ASP.NET to ASP.NET Core — Motivation

If you are still developing or maintaining a .NET Framework based solution (which probably should be considered legacy at this point, especially for ASP.NET), then by now you must have considered migrating it to .NET Core. Before we jump into nitty-gritty aspects of it, let us hold on for a second and ask ourselves these questions:

  • Why would you want to do that?
  • What is to be gained here?

Those are the kinds of questions that “business people” or your customers might ask before they put out money for that goal. Some of the reasons why you would want to do that are:

  1. Because that’s where all the cool kids are, right? 😉
  2. Developing and maintaining .NET Core based solutions is just easier. Many of the existing mechanics were improved, simplified and unified. Simply put — made better. Managing configuration sources, referencing assemblies, hosting — all of it has been streamlined. That means fewer bugs and less time spent on “maintenance” and boilerplate code, so more time can be spent on more valuable stuff — new functionality, tests, etc.
  3. It’s often the case that new useful libraries or newer versions of existing libraries are released exclusively for .NET Core. Swashbuckle.WebApi and its ASP.NET Core counterpart are just one example. You may think “Oh well, I’ll just keep using the old stuff in this case”, but that will only get you so far. OpenAPI support, an improved UI and being able to use Authorization Code + PKCE are just a few of the many perks that come with the latest release of this one single library. Now think about all the other NuGet packages your solution uses. There’s also a good chance that some areas of your code could simply be replaced with calls to external libraries (aka “there’s a library for that”). By moving to .NET Core you maximize your chances of being able to leverage that.
  4. Being able to host your .NET Core app on Linux brings new possibilities to the table. Docker? Kubernetes? Yes, please. True, you can (to some extent) use Docker and Kubernetes with Windows underneath as well, but that’s not a common scenario as Linux is just the default / native OS for doing this kind of stuff.
  5. Another fact that shouldn’t be disregarded is that job candidates are more likely to respond to your job offer if you do .NET Core; “.NET Framework 4.6.1” doesn’t look good on a job offer anymore. Seeing an offer like that begs the questions: “Where have you been for the last few years?” and “What kind of legacy project / company am I getting myself into?”. Don’t get me wrong, I do understand that there is still a time and place for the full/old .NET Framework, it just shouldn’t be your default choice anymore.
  6. Performance. The ultimate reason. I’ve seen the “performance” card being thrown way too many times in response to questions about why some things are done “suboptimally”. It’s a one word excuse / conversation stopper. But if you have to convince one of these people who put performance on top, I’ve got good news for you — .NET Core is blazing fast. Depending on the particular scenario, you will get improvements ranging from 20% to 6x compared to .NET Framework 4.8, with .NET 5.0 raising the bar even higher. This can directly translate into better user experience and lower bills for cloud hosting services. Apart from that, it can open up new possibilities, like running your app on a potato-powered IoT device.

ASP.NET to ASP.NET Core — Preparation

Now that you have at least some ammo for pushing forward this idea of migrating to .NET Core (I’m sure you can come up with much more), let’s take a look at how it could be achieved. We’re going to focus on web applications (REST APIs specifically), as those are perfect candidates for migration and that’s where most of my exposure was. You can find a great deal of guidelines / articles on porting to .NET Core, either on Microsoft’s websites or from other people who have already walked this path. Being late to the game has some advantages after all: you’re more likely to find a solution to a problem, unlike early adopters, who often had to figure it out on their own. I strongly advise you to read those articles first as they often lay out “the basics”. I will reiterate some of those points, but I will mainly focus on “what’s not there”. The whole process should start with thorough preparation. Proper planning and prerequisite activities will minimize the time your solution stays uncompilable. But make no mistake — there will be times where you will be forced to make a jump and hope for the best — figure things out as you go — but we want to keep the number of those situations to a minimum.

Refactor first

So the absolute first thing you should do before you even say “.net core” is to get your existing solution “straight”. There’s nothing worse than trying to fix dozens of problems at once while porting to .NET Core. Refactor first. It may take a week, a month or longer, but it will take even longer if you try to do it while porting to .NET Core. Not to mention your boss or customer nagging you about why this “.NET Core porting” takes so long. Depending on the quality of your existing solution, this may be the most time consuming step. Focus on having a clear separation between abstractions and implementations. Make sure outermost-layer concepts (usually any class with “http” in its name) don’t seep into internal layers. Ensure you do things consistently (in one particular way) throughout the solution, so that you can isolate those pieces and hopefully hide them behind abstractions. It will save you headaches later down the road.
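As a trivial illustration of hiding an outermost-layer concept behind an abstraction (all names below are made up for the example, and the implementation still uses the full-framework HttpContext since this refactoring happens before the migration):

using System.Web;

// Inner layers depend only on this interface and never mention HTTP.
public interface ICurrentUserAccessor
{
    string GetUserId();
}

// The only class that knows about System.Web lives in the outermost layer
// and can later be swapped for an ASP.NET Core (IHttpContextAccessor) based one.
public class HttpContextCurrentUserAccessor : ICurrentUserAccessor
{
    public string GetUserId()
    {
        return HttpContext.Current?.User?.Identity?.Name;
    }
}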

Change .csproj files to SDK-style format

Also known as “Visual Studio 2017 project format” as it was introduced at that time. .NET Core projects require this style of .csproj. But why is this still preparation? Isn’t that “migration to .NET Core” already? No. We’re keeping the full framework for now. While you are able to find some tools that will try to perform this conversion, I found it just easier to do it manually starting with the following “almost empty project” template:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <Version>1.0.0</Version>
  </PropertyGroup>

  <ItemGroup> <!-- nuget packages -->
    <PackageReference Include="Newtonsoft.Json" Version="12.0.2" />
  </ItemGroup>

  <ItemGroup> <!-- project references -->
    <ProjectReference Include="..\..\OtherProject\OtherProject.csproj" />
  </ItemGroup>
</Project>

Use this template for regular class library projects. Valid TargetFramework values can be found here. Valid Sdk values can be found here. Also delete the AssemblyInfo.cs and packages.config files. Important note here — there is no support for this project style for the “executable” ASP.NET projects, so the conversion of those will have to be done when we switch to ASP.NET Core. There are many advantages to this style of .csproj:

  • Short and concise
  • You can edit .csproj file without first unloading it from solution
  • Binding redirects are handled automatically (at compile time)
  • You don’t need to explicitly reference a NuGet package if one of the projects you already reference uses it (transitive references)

One aspect of this style of project is that it includes all the files in the project folder by default. Remember those files that you removed from the project but didn’t actually delete? Yep, those will be resurrected into your project/solution and will need to be deleted.
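Alternatively, if you want to keep some of those files on disk but out of the build, the SDK-style project lets you exclude them explicitly (the folder name below is just an example):

<ItemGroup>
  <!-- keep these files on disk, but out of compilation and packaging -->
  <Compile Remove="Legacy\**\*.cs" />
  <Content Remove="Legacy\**\*.json" />
</ItemGroup>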

Migrate from Entity Framework (6.*) to Entity Framework Core

That’s another huge step you should take before you make the jump to .NET Core — of course only if you use Entity Framework in your project. If you find it confusing, then let me explain: Entity Framework Core is a different ORM than the original Entity Framework you use in your project. EF Core does not provide the same features as EF 6. For example, there is no support for many-to-many relationships (as of EF Core 3.1), so you will have to model them yourself and the “joining entity” will be a first class citizen in your model. Wiring the configuration will differ, attributes will be different, your existing migrations will need to be deleted and a new (initial) migration will have to be created. Since EF Core targets .NET Standard 2.0, it can be used by both .NET Framework and .NET Core applications (.NET Standard is the “common denominator” of those two frameworks). So by switching from EF to EF Core now, we take care of one more step that would otherwise have to happen during the conversion to .NET Core itself. We found this step time consuming and prone to failures at execution time (due to incorrect configuration), so every bit of integration tests you have will definitely help.

With the new ORM come new providers for your DBMS. If you happen to be using MySql / MariaDb then I strongly suggest you go with Pomelo. There is an official provider from Oracle but we haven’t had much luck with it. In either case, make sure to update your database server to an up-to-date version first.
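To illustrate the many-to-many point above, here is a minimal sketch of an explicit join entity in EF Core; the Post / Tag / PostTag names are made up for the example:

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

public class Post
{
    public int Id { get; set; }
    public List<PostTag> PostTags { get; set; }
}

public class Tag
{
    public int Id { get; set; }
    public List<PostTag> PostTags { get; set; }
}

// The "joining entity" is a first class citizen in the model.
public class PostTag
{
    public int PostId { get; set; }
    public Post Post { get; set; }
    public int TagId { get; set; }
    public Tag Tag { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
    public DbSet<Tag> Tags { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // the composite key plays the role of the implicit join table EF 6 created for you
        modelBuilder.Entity<PostTag>().HasKey(pt => new { pt.PostId, pt.TagId });
    }
}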

Libraries

Just like with Entity Framework Core, you can benefit from converting your libraries to .NET Standard 2.0. This way you’ll be able to use them in both the old and the new solution. Start with a list of all external libraries that you use in your existing solution and try to find a substitute for each one. Take note of the dependencies a particular library has (either .NET Core or .NET Standard), as this will dictate what your own library will have to target. Well, for the most part at least, since you may find yourself in need of something “built in” from a .NET Core namespace. While certainly a nice place to be in, you may find that limiting yourself to the .NET Standard API subset is just too much of a constraint. So give it a try, but don’t hesitate to change the target framework to .NET Core 3.1 if the limited functionality of .NET Standard starts to get in your way. In that case, trying out new external libraries might be something you will find yourself doing at later stages of the conversion process (when your solution compiles and runs relatively crash free).
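During the transition period an SDK-style library project can even target both frameworks at once, which is one way to postpone the decision; a minimal sketch:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- note the plural: the library builds for both targets -->
    <TargetFrameworks>netstandard2.0;netcoreapp3.1</TargetFrameworks>
  </PropertyGroup>
</Project>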

ASP.NET to ASP.NET Core — Execution

So far we’ve been closing the gap that we will have to jump over, thus giving ourselves a better chance of succeeding while also doing it in a reasonable time. Now it’s time to actually make the jump.

There are certain ways of doing things in .NET Core. Having someone in your team with first hand experience writing .NET Core apps will surely help. But if you haven’t had any previous experience with .NET Core, that’s not an end of the world either. Taking an online course like this one will quickly get you up to speed. So without further ado, let me share with you some of the challenges we faced and hopefully give you some recipes that worked for us.

IoC container and dependency registration

.NET Core comes with its own IoC container and a set of abstractions related to dependency resolution. While it is possible to use 3rd party IoC containers like Autofac, so far we haven’t found reasons to use them over what’s provided out of the box. With the full .NET Framework things were quite different in that regard, as it didn’t come with any IoC container. While Microsoft’s (later taken over by the community) approach to IoC — Unity Container — could definitely get the job done, it still left something to be desired. That’s where all the other IoC containers could shine. With .NET Core, however, their relevance diminished.

With ASP.NET Core, dependency registration takes place in the Startup.ConfigureServices method. This method is then called (by convention) at runtime. The Startup class is also a host for other activities, so it tends to get messy pretty quickly. To keep things organized and promote code reuse, let me show you a pattern I picked up the other day from Nick Chapsas (I recommend his YouTube channel, btw). Start with an interface:

public interface IInstaller
{
    void InstallServices(IHostEnvironment hostEnvironment, IServiceCollection services, IConfiguration configuration);
}

And its accompanying extension method:

public static class InstallerExtensions
{
    public static void InstallServicesFromAssembly<T>(this IServiceCollection services, IHostEnvironment hostEnvironment, IConfiguration configuration)
    {
        // find all non-abstract IInstaller implementations in the assembly containing T
        var installers = typeof(T).Assembly.ExportedTypes
            .Where(x => typeof(IInstaller).IsAssignableFrom(x) && !x.IsInterface && !x.IsAbstract)
            .Select(Activator.CreateInstance)
            .Cast<IInstaller>()
            .ToList();

        installers.ForEach(installer => installer.InstallServices(hostEnvironment, services, configuration));
    }
}

I tend to keep them both in a “Common.IoC” library project that I later reference from the “executable” projects. Now, instead of cluttering the Startup class, we can create dedicated classes (installers that implement the IInstaller interface) that add particular “features” to our application. SwaggerInstaller, ConfigurationInstaller, HangfireInstaller, MassTransitInstaller and ServicesInstaller are just a few examples. Now, inside the ConfigureServices method we can simply call:

services.InstallServicesFromAssembly<Startup>(HostingEnvironment, Configuration);

With HostingEnvironment and Configuration being properties initialized in the constructor.
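As an illustration, a SwaggerInstaller could look like the sketch below; it assumes the Swashbuckle.AspNetCore package, and the API title is made up:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.OpenApi.Models;

public class SwaggerInstaller : IInstaller
{
    public void InstallServices(IHostEnvironment hostEnvironment, IServiceCollection services, IConfiguration configuration)
    {
        // Swashbuckle.AspNetCore registration; adjust to your own conventions
        services.AddSwaggerGen(options =>
        {
            options.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1" });
        });
    }
}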

You can even move some of those installers to “base” projects and reuse them across your solution. I find this approach a perfect blend between fully automated dependency registration (discovery) and doing everything manually. One thing to note here is what Nick wrote in a comment on his video — you lose control over the order the installers will be called in, and thus the order particular services will be registered in. I rarely found this to be an issue, as you shouldn’t resolve anything while the registration is taking place. For other oddball scenarios where registration order is crucial, you can always fall back to making those particular registrations inside the Startup.ConfigureServices method (with the call to InstallServicesFromAssembly preceding or trailing those registrations).

OWIN and Authorization Server Middleware

With the original framework we were using OWIN and a number of libraries that relied on it, particularly Microsoft.Owin.Security.OAuth — an authorization server middleware for issuing access and refresh tokens. With ASP.NET Core having its own middleware (you plug them in inside the Startup.Configure method, usually with a “Use…” extension method call), we started looking for a replacement. We were surprised to find out that Microsoft did not deliver any libraries for issuing OAuth bearer tokens; out of the box there is support only for consuming / validating them. We found this post by Jeffrey Fritz / Mike Rousos that gave us some alternatives. Eventually we went with OpenIddict, which aims to be a more lightweight alternative to IdentityServer4 and seemed more like a direct replacement for the original library from Microsoft. You can also consider external providers like Okta or Auth0 to do the heavy lifting for you; for us, however, that was never an option.
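Whichever token issuer you end up with, the consuming / validating side is what ASP.NET Core gives you out of the box. A minimal sketch, assuming the Microsoft.AspNetCore.Authentication.JwtBearer package; the authority and audience values are placeholders:

// in Startup.ConfigureServices
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // points at whatever issues your tokens: OpenIddict, IdentityServer4, Auth0, Okta...
        options.Authority = "https://auth.example.com";
        options.Audience = "my-api";
    });

// in Startup.Configure, between UseRouting and UseEndpoints
app.UseAuthentication();
app.UseAuthorization();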

WebApiCompatShim

While looking for any potential time savers for the ASP.NET Web API 2 to ASP.NET Core MVC conversion, you may come across this library from Microsoft. It’s not even available for .NET Core 3.1 (not to mention 5.0), but at the time we were converting to .NET Core 2.1, so it was still a viable option. We used it initially and it decreased the number of compilation errors by an order of magnitude. But as our solution kept evolving toward “Core”, we found this library just steering us away from the way things are supposed to be done. Today I wouldn’t consider using it even if it were available for .NET Core 3.1, as it would only slow you down in embracing ASP.NET Core’s way of doing things.

Configuration

.NET Core comes with a powerful configuration scheme that allows for pulling configuration values from multiple sources, as well as having dedicated configurations depending on the environment the program is run in. Web.config XML files are replaced with appsettings.json JSON files. Probably the only time you will need to reintroduce a web.config file to your project is when you want to host your project in IIS. But this will merely be an add-on, as your main configuration will still be kept in appsettings.json. You can use .xml or even .ini files if you really want to, but JSON is the default choice in .NET Core. Speaking of IIS, if you plan on hosting your ASP.NET Core app in IIS, then prepare for some tinkering. Definitely check out this blog post before you start.
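For reference, the web.config used for IIS hosting is just a thin shim that hands requests over to the ASP.NET Core Module; dotnet publish normally generates it for you, and the dll name below is a placeholder:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <!-- in-process hosting inside the IIS worker process; out-of-process is also possible -->
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" hostingModel="inprocess" stdoutLogEnabled="false" />
  </system.webServer>
</configuration>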

Misconfiguration problems often don’t manifest themselves right away when the program starts, which makes them harder to spot.

Having an organized configuration approach will greatly improve transparency of your configuration and will help you configure things correctly “the first time”.

Let me show you the configuration scheme that worked for us. Basically it’s a standard approach with a slight modification. As many examples suggest, we started with a set of appsettings.json files — one general file plus one dedicated to each specific environment. Later we read those files in a specific order, so that values from the environment-specific file override values from appsettings.json. In Visual Studio this shows up as one appsettings.json with the environment-specific files nested underneath it.

The non-standard thing here is that the environment-specific file conveys two pieces of information:

  • Environment purpose (Dev, Stage, Prod, Test)
  • Environment kind (Local developer machine, Azure, IIS, AWS)

This way we are able to make decisions in code based separately on the environment purpose and its kind. So instead of using the built-in extension methods from the Microsoft.Extensions.Hosting namespace (.IsDevelopment(), .IsProduction(), .IsStaging()), we check if the environment name contains a particular string (a small sketch follows after the configuration-loading snippet below). For example, if the environment name contains “Azure” we will configure our service to use Azure Service Bus and App Insights, as opposed to Local where we’ll be using RabbitMQ along with file logging and user secrets. Regarding the environment purpose — when the environment name contains “Dev” we can safely disclose exception details to the client, which is not something you want to do for the “Prod” environment. Loading the configuration is later done with extension methods; .NET Core will merge all sources in the order specified:

config.SetBasePath(AppDomain.CurrentDomain.BaseDirectory)
    .AddJsonFile("appsettings.json", optional: false)
    .AddJsonFile($"appsettings.{hostContext.HostingEnvironment.EnvironmentName}.json", optional: true)
    .AddEnvironmentVariables()
    .AddCommandLine(args);
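The environment purpose / kind checks mentioned above boil down to simple Contains calls on the environment name; a small sketch, with extension method names of my own invention:

using Microsoft.Extensions.Hosting;

public static class EnvironmentExtensions
{
    // environment names look like "DevLocal", "DevAzure", "ProdIIS", etc.
    public static bool IsAzure(this IHostEnvironment env) => env.EnvironmentName.Contains("Azure");
    public static bool IsLocal(this IHostEnvironment env) => env.EnvironmentName.Contains("Local");
    public static bool IsDev(this IHostEnvironment env) => env.EnvironmentName.Contains("Dev");
}

Installers and Startup.Configure can then branch on these, for example only enabling the developer exception page when IsDev() returns true.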

Regarding using these configuration values later inside your app, you will want to convert them to objects. The officially promoted approach is the options pattern. So far I haven’t found much use for it, as I don’t need all of the bells and whistles that come with it. We went with plain ol’ POCOs that we inject into services. We simply register them as singletons inside a dedicated installer, where we take advantage of the fact that the name of the settings class corresponds to the name of the section in appsettings.json.

Installer:

public class ConfigBindingInstaller : IInstaller
{
    public void InstallServices(IHostEnvironment hostEnvironment, IServiceCollection services, IConfiguration configuration)
    {
        RegisterConfigSection<DataQuerySettings>(services, configuration);
    }

    private void RegisterConfigSection<T>(IServiceCollection services, IConfiguration configuration) where T : class, new()
    {
        var section = new T();
        configuration.Bind(typeof(T).Name, section);
        services.AddSingleton(section);
    }
}

appsettings.json:

{
  "DataQuerySettings": {
    "MaxResultsCount": 100
  }
}
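The settings class itself is a plain POCO whose name and properties mirror the JSON section, and any service can take it through its constructor. DataQuerySettings matches the example above; DataQueryService is made up for the illustration:

using System;

public class DataQuerySettings
{
    public int MaxResultsCount { get; set; }
}

public class DataQueryService
{
    private readonly DataQuerySettings _settings;

    // the singleton registered by ConfigBindingInstaller gets injected here
    public DataQueryService(DataQuerySettings settings)
    {
        _settings = settings;
    }

    public int GetPageSize(int? requested) =>
        Math.Min(requested ?? _settings.MaxResultsCount, _settings.MaxResultsCount);
}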

SignalR

That’s a similar story to Entity Framework vs Entity Framework Core. What classic ASP.NET SignalR and ASP.NET Core SignalR have most in common is their name. Just like with EF and EF Core, the new SignalR is not backward compatible with the old SignalR. You can read all about the differences here.

Microsoft offers Azure SignalR Service, whose main goal is to offload SignalR connections from your HTTP server. With it, your HTTP server maintains only one connection to Azure, while the Azure service maintains the connections to all the clients. Microsoft also claims that Azure SignalR Service strives to serve as a compatibility layer / adapter between old and new SignalR. So if you want to ensure that the transition to Core SignalR goes smoothly for your clients, Azure SignalR Service might be the key component in ensuring that. But the key word here is “strives”. It’s their best effort, not a guarantee. For us, unfortunately, the compatibility never happened. We suspect that it might have been caused by outdated client libraries, so I’m going to give this “adapter” feature the benefit of the doubt.

But regardless of your outcome (and especially with a negative outcome like ours) you might reconsider using SignalR in the first place. There may be better alternatives, particularly if your clients are also mobile devices. An emerging real-time communication technology these days is gRPC; I found this blog post by Fiodar Sazanavets that neatly compares gRPC to SignalR. And if you happen to be using SignalR for chat specifically, I can almost guarantee that Firebase Realtime Database will be a much better option. Realtime Database combines persistence and live notifications with SDKs available for major platforms. For platforms / languages where an SDK is not provided by Google (which unfortunately is the case with C#), there are 3rd party NuGet libraries wrapping the REST API that Google provides, so it feels almost like native support.
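If you do stick with SignalR and Azure SignalR Service, wiring it into an ASP.NET Core 3.1 app is a thin layer on top of the regular registration. A sketch, assuming the Microsoft.Azure.SignalR package and a ChatHub class of your own:

// Startup.ConfigureServices; the connection string is read from
// configuration under "Azure:SignalR:ConnectionString" by default
services.AddSignalR().AddAzureSignalR();

// Startup.Configure
app.UseRouting();
app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<ChatHub>("/chat");
});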

Final words

It’s impossible to write about all the problems and challenges we faced while porting to .NET Core. Those are some of the most memorable ones. I’m sure most of the other questions you may have can be answered by Googling “[name of the component from .NET Framework] .NET Core” and surely some Stack Overflow threads will pop up.


iteo

iteo is an international digital product studio founded in Poland, that helps businesses benefit from technology better. Visit us on www.iteo.com