The surprising ease of getting started with xUnit in .Net Core

As all developers know, learning a new programming language or library is something we do pretty much all the time. Sometimes it is more challenging than others, and in some circumstances we get frustrated and think "actually, I'm going to do this a different way".

xUnit, for me, was the complete opposite. I was surprised by how easy the library was to pick up and run with. xUnit's own documentation is somewhat lacking, but that is forgivable given how simple getting started is; the assumption is presumably that anyone writing xUnit tests already knows a .Net language, in my case C#.

What’s good about it?

For a start, it's very well integrated into Visual Studio 2017. Yes, most, if not all, .Net Unit Test frameworks are supported by Visual Studio, so I won't give a +1 to xUnit specifically for this, but the Test Explorer in Visual Studio has made developing, debugging and routinely running Unit Tests an absolute breeze.

Another benefit is its natural adoption of ordinary C# constructors and functions. There's no need to create special initialisation methods, so you are essentially writing normal .Net Core code, making your tests more maintainable and understandable for the developers around you.
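
For instance, where some frameworks want a dedicated set-up method, xUnit just uses the test class's constructor, and IDisposable for tear-down. A minimal sketch (the Calculator class is hypothetical, standing in for whatever you are testing):

using System;
using Xunit;

// Hypothetical class under test, purely for illustration.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests : IDisposable
{
    private readonly Calculator _calculator;

    // A plain C# constructor acts as per-test set-up; xUnit creates a
    // fresh instance of the test class for every test it runs.
    public CalculatorTests()
    {
        _calculator = new Calculator();
    }

    [Fact]
    public void Add_ReturnsSum()
    {
        Assert.Equal(4, _calculator.Add(2, 2));
    }

    // Dispose acts as tear-down, using the standard .Net idiom.
    public void Dispose()
    {
        // Nothing to clean up in this trivial example.
    }
}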

[Fact] and [Theory] are two attributes whose naming scheme I really enjoy. A small thing, but they just make sense:
A Fact is literally "it takes this, so should return this".
A Theory is "it takes something like this and should output this".

The difference between the two is subtle, but I have found that the more I use xUnit with applications that consume external data, the more I end up using [Theory], because of the ability to throw all sorts of data-sets at the function.
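
As a quick sketch of how the two look side by side (using [InlineData] to supply the Theory's data-sets):

using Xunit;

public class EvenNumberTests
{
    // [Fact]: one fixed scenario, no parameters.
    [Fact]
    public void Four_IsEven()
    {
        Assert.True(4 % 2 == 0);
    }

    // [Theory]: the same idea run against several data-sets,
    // each supplied here via [InlineData].
    [Theory]
    [InlineData(2)]
    [InlineData(10)]
    [InlineData(-6)]
    public void EvenNumbers_AreEven(int value)
    {
        Assert.True(value % 2 == 0);
    }
}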

The challenges

With all new things come challenges, and xUnit has them too.

Entity Framework Contexts

The problem

One of the hardest things for me was getting Entity Framework Contexts to work correctly, with as little code as possible. Most of the code already exists in the projects I want to test, so minimising what I have to duplicate or wrap in interfaces is a necessity. There are some solutions online based on Moq; for my situation I didn't feel it was needed when using Entity Framework, although as I progress and learn more of xUnit, I may yet find my approach lacking something Moq could provide.

The solution

The solution I now use allows multiple in-memory database contexts to be constructed with around five lines of code (excluding test data):

using Microsoft.EntityFrameworkCore;

public class MockDbContext
{
    public DbContextOptions<MyDbContext> Options;

    // Builds a fresh in-memory context; the caller passes a unique
    // database name (typically the test method's name) for isolation.
    public MyDbContext BuildContext(string functionName)
    {
        Options = new DbContextOptionsBuilder<MyDbContext>()
            .UseInMemoryDatabase(databaseName: functionName)
            .Options;
        return new MyDbContext(Options);
    }
}

I wouldn't normally prefix database contexts with "My", but for this example it keeps the description generic.

This provides a generic function that lets any test spin up a specific database context with a unique database name. In my tests this is essential: each method gets tested against fresh data, rather than against a single shared database where transactions from other tests could unexpectedly change the data and corrupt my results.

How would this be called?

As an example, one of the tests could look like the following:

using System.Linq;
using Xunit;

public class MyTests
{
    private MockDbContext Mock = new MockDbContext();

    [Theory]
    [ClassData(typeof(DbCreateEntityRow))]
    public void EntityRowShouldAdd(NewRow newRow)
    {
        // Arrange.
        Mock.BuildContext(nameof(EntityRowShouldAdd));
        using (MyDbContext temp = new MyDbContext(Mock.Options))
        {
            // Act.
            new MyDatabaseFunction(temp).AddRow(newRow);
            temp.SaveChanges();

            // Assert.
            Assert.True(temp.EntityRow.Any(a => a.RowId == 1));
        }
    }
}

A simple example, but this concept can be applied to most, if not all database context testing situations.

  1. Make a reference to your MockDbContext class.
  2. Pull in the options that are publicly exposed.
  3. Spin up a temporary Database context to perform actions on.
  4. Check the action was successful.

There is a potential weak point with this method of creating a context: note the nameof() call that passes the function's name in as the database name. With [Theory] on the function, more than one test data instance may be passed in, so reusing the same database name across those runs could cause data conflicts in your Unit Tests. Assess how unique your database name needs to be as you write each test.
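
If you do hit that situation, one simple option (a hypothetical tweak, not part of the class above) is to make the name unique per run, for example by appending a Guid:

// Inside the test: appending a Guid gives every [Theory] data-set its own
// isolated in-memory database, instead of sharing one by method name.
Mock.BuildContext($"{nameof(EntityRowShouldAdd)}_{Guid.NewGuid()}");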

Rewriting your brain for Unit Testing

The hardest challenge I found was rewriting the way I thought about code. When I write ASP.Net Core webpages, I tend to think "what should the server do here?", "what do I need to do to make this better?", etc. Unit Testing, on the flip side, had me thinking more along the lines of "what can I do to make the server do, or fail to do, what it should?". A subtle difference, but it's a lot harder to get your head around than it looks at first glance. I therefore offer a few tips:

  • Don't get bogged down in how the application works as a whole; focus on the one function you are testing. Think of that function as its own program and write your tests around that single purpose.
  • Understand where Unit Testing sits in the application life-cycle. Rather than living deep inside the function being validated, it brushes the entry point of that function: it is a single function call that has no true understanding of what goes on inside, it just wants a black-box response.
  • You don't need to test every function in your project! Test the most critical functions first, then work backwards until you reach the point where you're about to write a Unit Test and think "this isn't going to give me any benefit". At that point you've tested your application to what is, in my opinion, the optimal balance between maintainability and application reliability and longevity.

If you are anything like me, you think of code visually, so I offer this basic diagram to help show where a Unit Test call can sit in a virtual run-time situation.

[Image: UnitTestingGuide]
Diagram explaining where Unit Testing calls could sit within an application's life-time.

Conclusion

Although this is a very quick run-down of my enjoyments of, and challenges with, xUnit, I hope it has helped smooth the pathway into Unit Testing and xUnit. As I learn more and find more challenges or things I like, I'll try to keep this post updated, or add another post on top of it to further help people down the line.

The feasibility of GPU acceleration in ASP.Net Core MVC

Throughout the past few weeks, in my spare time, I have been researching and planning an attempt at improving the efficiency and load potential of ASP.Net sites through GPU acceleration. The task has been quite a challenge: although there are a number of powerful NuGet packages that utilise the NVIDIA CUDA SDK, they aren't strictly designed for ASP.Net, so getting them correctly referenced was not easily possible.

After a wide range of attempts, it has proven not to be possible at the current time using pre-existing packages; or rather, it is possible only to an extent, and in a way that doesn't bring as many benefits as I had hoped.

The theory behind GPU-accelerating ASP.Net MVC is to change how an asynchronous controller function handles resourcing: rather than dividing a CPU's relatively small thread count between the users accessing an application, it would be ideal to divide up a GPU's (in this case CUDA) core count, allowing a much larger number of users to access a site at once, although this remains a purely theoretical result.

To put this into theoretical perspective, take a high-end Intel Xeon processor with 24 cores and 48 threads and compare it to a high-end NVIDIA card, which by contrast has 3584 parallel-processing cores. At face value the potential advantage is easy to see: more available cores means more functions running asynchronously, potentially allowing more users onto a site at once and hopefully decreasing page response latency.

Obviously, there is a huge range of arguments going on around this topic and I don't intend to join those heated debates, but the curiosity around, and rise of, deep learning via GPU parallel computing shows that GPUs are paving the way for a more efficient approach to computing.

The difficulty I encountered in getting ASP.Net to work with GPUs is mainly that there is no real support for parallelising ASP in that manner: all existing asynchronous computation is done on the CPU, largely because a huge range of servers have no dedicated GPU inside them, so the idea of Microsoft or the open-source .NET Core community providing support for GPU parallelism is somewhat unnecessary at this time. There are some packages available, notably Hybridizer by Altimesh, and AleaGPU. Both allow development in C# rather than forcing a move to C++, providing an easier transition to GPU utilisation. However, these packages are really designed around desktop and console applications, not server-side applications, which causes issues when it comes to implementation.
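
To give a flavour of what these packages offer, the following is a rough sketch of a data-parallel loop in the style of AleaGPU's published samples; treat the exact API as an assumption rather than something verified against the current package:

using Alea;
using Alea.Parallel;

public static class VectorMath
{
    // Element-wise add executed on the default GPU. [GpuManaged] asks
    // AleaGPU to handle host/device memory transfers automatically.
    [GpuManaged]
    public static float[] Add(float[] a, float[] b)
    {
        var result = new float[a.Length];
        Gpu.Default.For(0, result.Length, i => result[i] = a[i] + b[i]);
        return result;
    }
}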

The only way I could work around this was to create class libraries that use these packages and allow for GPU usage, but this brings an annoying issue of its own: maintainability. Class libraries are compiled into separate .dll files, which means debugging code inside the .dll when something goes wrong is a pain in itself.
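
Concretely, the shape of that workaround looks something like this (all names here are hypothetical): the web project depends only on an interface, and the separate class library references the GPU package to implement it.

// In the ASP.Net Core MVC project: controllers only ever see this interface.
public interface IVectorMathService
{
    float[] Add(float[] a, float[] b);
}

// In a separate class library referencing the GPU package (compiled to its
// own .dll), a GpuVectorMathService would implement IVectorMathService and
// be registered in Startup.ConfigureServices:
//     services.AddSingleton<IVectorMathService, GpuVectorMathService>();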

Taking all of this into consideration, I will continue, on an enterprise and business level, to view GPU acceleration of ASP.Net asynchronous functions as an infeasible method of development. On a personal level, I plan to explore the possibility of rewriting or modifying the existing .NET Core packages that provide asynchronous functionality, to try to achieve the goal I have in mind. From there, it may be possible to determine the overall feasibility of GPU acceleration.