The surprising ease of getting started with xUnit in .Net Core

As all developers are aware, learning a new programming language or library is something we do pretty much all the time. Sometimes it is more challenging than others, and in some circumstances we get frustrated and think "actually, I'm going to do this a different way".

xUnit, for me, was the complete opposite. I was surprised by how easy the library was to pick up and get going with. xUnit's own documentation is somewhat lacking, but that is forgivable given how simple getting started is, most likely because the assumption is that anyone writing xUnit tests already knows a .Net language, in my case C#.

What’s good about it?

For a start, it's very well integrated into Visual Studio 2017. Yes, most, if not all, .Net unit test frameworks are supported by Visual Studio, so I won't give xUnit a +1 specifically for this, but the Test Explorer in Visual Studio has made developing, debugging and repeatedly running unit tests an absolute breeze.

Another benefit is its natural use of ordinary C# constructors and functions. There's no need to create dedicated initialisation methods, so you are essentially writing normal .Net Core code, which makes the tests more maintainable and understandable for the developers around you.
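As a minimal sketch (the basket example and names here are mine, not from a real project), per-test setup lives in the constructor and tear-down in Dispose, because xUnit creates a new instance of the test class for every test:

using System;
using System.Collections.Generic;
using Xunit;

public class BasketTests : IDisposable
{
    private readonly List<string> _basket;

    public BasketTests()
    {
        // Runs before every test; xUnit news up the class per test, so this is the setup.
        _basket = new List<string> { "apple" };
    }

    public void Dispose()
    {
        // Runs after every test; any tear-down goes here.
        _basket.Clear();
    }

    [Fact]
    public void BasketStartsWithOneItem()
    {
        Assert.Single(_basket);
    }
}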

[Fact] and [Theory] are two attributes whose naming scheme I really enjoy. A small thing, but they just make sense…
A Fact is literally "it takes this, so it should return this".
A Theory is "it takes something like this and should output this".

The difference between the two is subtle, but I have found that the more I use xUnit with applications that rely on external data, the more I end up using [Theory], because of the ability to throw all sorts of data sets at the same function.
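As a rough illustration (the Add helper below is a stand-in, not code from a real project), a [Fact] checks one fixed case, while a [Theory] runs once per data row, here supplied with [InlineData] rather than the [ClassData] approach used later in this post:

using Xunit;

public class AdditionTests
{
    private static int Add(int a, int b) => a + b;

    [Fact]
    public void TwoPlusTwoIsFour()
    {
        // Fact: it takes this, so it should return this.
        Assert.Equal(4, Add(2, 2));
    }

    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(-5, 5, 0)]
    [InlineData(0, 0, 0)]
    public void AddReturnsTheSum(int a, int b, int expected)
    {
        // Theory: the same body runs once for every InlineData row.
        Assert.Equal(expected, Add(a, b));
    }
}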

The challenges

With all new things come challenges, and xUnit has its share.

Entity Framework Contexts

The problem

One of the hardest things for me was getting Entity Framework contexts to work correctly with as little code as possible. Most of the code already exists in the projects I want to test, so keeping the amount I have to duplicate or wrap in interfaces to a minimum is a necessity. There were some solutions online based on Moq, but for my situation I didn't feel it was needed when using Entity Framework; as I progress and learn xUnit further, I may well find my approach lacking things Moq could provide.

The solution

The solution I now use allows multiple in-memory database contexts to be constructed with around five lines of code (excluding test data):

using Microsoft.EntityFrameworkCore;

public class MockDbContext
{
    // Exposed so tests can open further contexts against the same database.
    public DbContextOptions<MyDbContext> Options;

    public MyDbContext BuildContext(string functionName)
    {
        // The database name (here the calling test's name) keeps each test's data isolated.
        Options = new DbContextOptionsBuilder<MyDbContext>()
            .UseInMemoryDatabase(databaseName: functionName)
            .Options;
        return new MyDbContext(Options);
    }
}

I wouldn't normally prefix a database context with "My", but in this example it's just a generic placeholder.

This provides a generic function that lets any test spin up the database context with a unique database name. In my tests this is essential: each method is tested individually with fresh data, rather than sharing a single database where multiple transactions moving about could cause unexpected changes and corrupt the tests.

How would this be called?

As an example, one of the tests could look like the following:

public class MyTests
{
    private readonly MockDbContext Mock = new MockDbContext();

    [Theory]
    [ClassData(typeof(DbCreateEntityRow))]
    public void EntityRowShouldAdd(EntityRow newRow) // the parameter type matches whatever DbCreateEntityRow supplies
    {
        // Arrange.
        Mock.BuildContext(nameof(EntityRowShouldAdd));
        using (MyDbContext temp = new MyDbContext(Mock.Options))
        {
            // Act.
            new MyDatabaseFunction(temp).AddRow(newRow);
            temp.SaveChanges();

            // Assert.
            Assert.True(temp.EntityRow.Any(a => a.RowId == 1));
        }
    }
}

A simple example, but the concept can be applied to most, if not all, database context testing situations.

  1. Make a reference to your MockDbContext class.
  2. Pull in the options that are publicly exposed.
  3. Spin up a temporary Database context to perform actions on.
  4. Check the action was successful.

There is a potential weak point with this method of creating a context: note the nameof() call, which uses the function name as the database name. With [Theory] at the top of the function, more than one test data instance can be passed to that function, so every case ends up sharing a database of the same name, which could cause data conflicts in your unit tests. Assess how you name your databases as and when you write each test; one option is sketched below.
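One way to guard against that, sketched here as an assumption rather than something from the original post, is to make the name unique per case, for example by appending a Guid (or a value from the test data) to the function name before calling BuildContext:

// Hypothetical tweak inside the Arrange step: every [Theory] case gets its own
// in-memory database instead of all cases sharing the function name.
string databaseName = $"{nameof(EntityRowShouldAdd)}_{Guid.NewGuid():N}";
Mock.BuildContext(databaseName);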

Rewriting your brain for Unit Testing

The hardest challenge I found was rewiring the way I thought about code. When I write ASP.Net Core web pages, I tend to think "what should the server do here?", "what do I need to do to make this better?", etc. Unit testing, on the other hand, had me thinking more along the lines of "what can I do to make the server do, or fail to do, what it should?". It's a subtle difference, but it's a lot harder to get your head around than it looks at first glance. I therefore offer a few tips:

  • Don't get bogged down in how the application works as a whole; focus on the one function you are testing. Think of that function as its own program and write your tests around that single purpose.
  • Understand where unit testing sits in the application life-cycle. Rather than sitting deep inside the function being validated, it only brushes the entry point of that function; think of it as a plain function call that has no real knowledge of what happens inside and just wants a black-box response.
  • You don't need to test every function in your project! Test the most critical functions and then work backwards until you reach the point where you are about to write a unit test and think, "this isn't going to give me any benefit". At that point you have tested your application to what is, in my opinion, the optimal balance between maintainability, reliability and longevity.

If you are anything like me, you think of code visually, so I offer this basic diagram to help show where a unit test call can sit in a virtual run-time situation.

[Diagram: where unit testing calls could sit within an application's lifetime]

Conclusion

Although this has been a very quick run-down of what I enjoy and what I find challenging about xUnit, I hope it has helped smooth the pathway into unit testing and xUnit. As I learn more and find further challenges or things I like, I'll try to keep this post updated, or add another post on top of it to help people further down the line.

Vehicle Morality – The VR endeavour

The first study on the vehiclemorality site came to a close at the beginning of September. This marked a very important milestone for the Master's degree and has paved the way for future areas of research, including the subject of this post:

Virtual Reality: an area that is hot with research, and one this Master's degree is now delving into.

What for?

To compare the results of the previous study when a new factor, immersion, is added.

Is there a significant difference in decision making when adding immersion as a factor into the mix?

The first study showed little significance between the two core factor sets being evaluated:

  • Time Pressure
  • Actor Influence

The hypothesis from the previous study is thus not supported; however, there are arguments as to why this was the case.

The way the study was presented showed actors only as a line of text, without clearly indicating their position within the environment in the images. On top of that, a two-dimensional image inevitably keeps the participant abstracted from the situation, so it is questionable whether participants would make the same decisions if they were actually in it.

From the study, two areas of interest surfaced which showed potential for further exploration.

Participants showed a difference between intervening and continuing down the road, and also changed their decision between self-preservation and self-sacrifice depending on time pressure.

These, along with immersion, are the core areas for the second study.

Participants will be asked to complete two VR scenarios and then take part in an interview so their decisions can be understood qualitatively. They will be randomly assigned to one of two groups: time pressure and no time pressure. Actor evaluation will not continue, simply because the VR study has to be carefully designed and timed to reduce the risk of motion sickness, and because managing the time of the required quantity of participants becomes far easier without it.

Through the immersive factor and the focus on the areas mentioned above, could it be possible to show that immersion has a substantial effect on automated ethical decisions? If so, it would be an important additional consideration for vehicle manufacturers when developing the next generation of autonomous vehicles.

The VR environment is being developed as this post goes out, carefully designed and implemented to represent the study design as faithfully as possible. The study is expected to start around March and run through to May, though timing is flexible depending on when ethics approval is complete and the environment is ready to go.

Vehiclemorality.com

After a number of months of planning, development and testing, the first study is now available for people to access.

It would be fantastic if you could participate and allow this research to gain insight into people's viewpoints across different collision scenarios.

The aim is to have this study running for around a month, by which point the data will be collated and analysed.

It is recommended, if you are on your phone, that you complete the study in landscape due to the way the study is set out.

To access, please go to https://vehiclemorality.com.

Many thanks in advance.

My Adoption of jMeter

Introduction

As part of my work within the University, it has become necessary to build up load tests of business-critical systems, so that when an upgrade to one of those systems is released, we can identify our maximum possible load whilst also spotting areas for improvement.

There were a number of possible load testing tools available; however, the two most notable are:

  • Visual Studio Web Performance Testing (VS-WPT) Tools
  • jMeter

These two attracted the most attention for a number of reasons.

  • Visual Studio is a hugely adopted IDE across enterprises, and within the University specifically it is the main development tool for web applications.
  • jMeter was already in use by colleagues before my promotion to my current post, although it never really got its chance to shine because configuration changes left the existing load test development redundant.
  • Both tools can be incorporated within Visual Studio Team Services (VSTS) and run via Azure.

Initially, VS-WPT was used as it is the easiest tool to pick up and start using. The difficulty became apparent soon after trying to develop tests past login screens: documentation and user adoption were lacking at the time of writing, which made it extremely difficult to progress.

Once the frustration set in, I turned my attention to jMeter, as I had been told of prior success with the tool. After a few hours of patience and YouTube videos, an understanding of jMeter's power began to set in, and its simplicity started to come to light, minus a few frustrations.

File and Folder Structure

During development of the load tests, an ever-growing need became apparent for some structure that would allow load test functions to be reused across all load tests, most notably the login function.

I chose to draw on my understanding of the MVC framework and structured my folders in a manner that made reusable functions much easier to store and manage. It is worth mentioning that the adoption of MVC here is closer to the API flavour of MVC, where the View (V) section isn't really required, essentially forming an MC structure.

Each load test project would use two main folders:

  • Modules (M)
  • Load Tests (C)

The Modules folder holds the reusable files, where they can all be stored and called as necessary. In future this folder may need sub-folders of its own to keep files clear, but at the time of writing the structure fits its requirements perfectly.
The Load Tests folder houses the load tests that have been developed, providing a one-stop shop for the load tests related to a specific service. An illustrative layout is shown below.
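As a purely illustrative example of the layout (the .jmx file names are invented), a load test project for one service might end up looking like this:

MyServiceLoadTests/
    Modules/
        Login.jmx
        CommonHeaders.jmx
    Load Tests/
        BrowseCatalogue.jmx
        SubmitOrder.jmx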

Ease of Environment Variables

jMeter surprised me from the first time I used its "Include Modules" functionality, where I was able to call jMeter files from anywhere on my PC. When calling these files at runtime, jMeter automatically adds them to the scope of the parent file, allowing the child files to use variables defined in the parent layer without needing to pass anything via the command line or a function call. I'm sure a number of people would find that worrying; at an implementation level it is also possible to define variables locally should the need suit you.

My main issue with jMeter

Why can't I create dynamic file paths in the "Include Modules" function? It would make scaling a lot better: when the folder structure of the reusable functions has to be renamed, changing potentially one variable that spans all the "Include Modules" would be far easier than editing each path by hand.

Conclusion

jMeter is a fantastic tool for very in-depth load testing, whilst at the same time offering a level of simplicity that stops you grinding your teeth at the sight of a bug in your tests. I will continue to use the tool and hope to develop a suite of resources within my department to make adopting load testing easy.

The feasibility of GPU acceleration in ASP.Net Core MVC

Throughout the past few weeks, in my spare time, I have been researching and planning an attempt at improving the efficiency and load potential of ASP.Net sites via GPU acceleration. The task has been quite a challenge: although there are a number of powerful NuGet packages that utilise the NVIDIA CUDA SDK, they aren't strictly designed for ASP.Net, so getting them referenced correctly was far from straightforward.

After a large range of attempts, it has proven not to be possible at the current time using pre-existing packages; it is still possible to an extent, but not in a way that brings as many benefits as I had hoped.

The theory behind GPU-accelerating ASP.Net MVC is to change how an asynchronous controller function handles resourcing: rather than dividing a CPU's thread count among the users accessing an application, you would divide a GPU's (in this case CUDA) core count, allowing a much larger number of users onto a site at once, although this remains a theoretical result.

To put this into perspective, take a high-end Intel Xeon processor with 24 cores and 48 threads and compare it to a high-end NVIDIA card, which by contrast has 3,584 parallel-processing cores. On face value the potential advantage is easy to see: more available cores means more functions running asynchronously, potentially allowing more users onto a site at once and hopefully decreasing page response latency.

Obviously there is a huge range of arguments around this topic, and I don't intend to join those heated debates, but the curiosity around, and rise of, deep learning via GPU parallel computing shows that GPUs are paving the way for a more efficient approach to computing.

The difficulty I encountered in trying to get ASP.Net to work with GPUs is mainly that there is no real support for parallelising ASP in that manner. All existing asynchronous computation is done on the CPU, and the reason is that a huge range of servers have no dedicated GPU inside them, which makes support for GPU parallelism from Microsoft or the open-source .NET Core developers somewhat unnecessary at this time. Some packages are available, notably Hybridizer by Altimesh and AleaGPU. Both allow development in C# rather than forcing a move to C++, providing an easier transition to GPU utilisation. However, they are really designed around desktop and console applications rather than server-side applications, which causes issues when it comes to implementation.

The only way I could work around this is to create class libraries that use these packages and expose the GPU functionality, but this brings an annoying maintainability issue with it: the class libraries are consumed as compiled .dll files, so debugging the code inside a .dll when something goes wrong is a pain in itself.
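To make the shape of that workaround a little more concrete, here is a rough sketch with entirely hypothetical names; the GPU call itself is stubbed out with a CPU loop, since this is an outline of the structure rather than real Hybridizer or AleaGPU code:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Class library project: wraps whatever GPU package is chosen behind a plain async API.
public static class GpuVectorOps
{
    public static Task<double[]> MultiplyElementwiseAsync(double[] left, double[] right)
    {
        // In the real class library this is where the Hybridizer/AleaGPU kernel launch
        // would live; here it just runs the element-wise multiply on a worker thread.
        return Task.Run(() =>
        {
            var result = new double[left.Length];
            for (int i = 0; i < left.Length; i++)
            {
                result[i] = left[i] * right[i];
            }
            return result;
        });
    }
}

// ASP.Net Core project: the controller only ever sees an awaitable call.
public class MultiplyRequest
{
    public double[] Left { get; set; }
    public double[] Right { get; set; }
}

public class ComputeController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Multiply([FromBody] MultiplyRequest request)
    {
        double[] product = await GpuVectorOps.MultiplyElementwiseAsync(request.Left, request.Right);
        return Ok(product);
    }
}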

Taking all of this into consideration, on an enterprise and business level I will continue to view GPU acceleration of ASP.Net asynchronous functions as an infeasible method of development. On a personal level, I plan to explore the possibility of rewriting or modifying the existing .NET Core packages that provide asynchronous functionality to try to achieve the goal I have in mind. From there, it may be possible to determine the overall feasibility of GPU acceleration.

Another potential topic comes to light.

While researching more of the existing work in the field of automated vehicles and trust, I came across a topic that itself has a dramatic influence on the social acceptance and trust of automated vehicles: the Moral Machine.

This social experiment has been around for a couple of years and gauges people's viewpoints on how an automated vehicle should react in a severe, split-choice situation. As an example: an automated vehicle is travelling at speed and a group of pedestrians is crossing the road in front of it. On the wrong side of the road for the vehicle is one person; on the correct side there are two. Stopping safely is no longer an option, so the vehicle has to make one of three decisions: a) drive into the two people, b) drive into the one person, or c) drive itself into whatever environmental object it can find, potentially killing its occupants.

This has been a huge talking point even before the realistic emergence of an automated vehicle, mainly within automated systems used across the industrial sector.

This whole area of understanding and social experiment opens an important door for analysis, albeit with a major caveat which I shall explain shortly. The current experiment asks people what the vehicle should do based on an abstract image identifying the issue and the outcome. The concern is that those completing it may feel no risk in choosing either option because they are abstracted from the entire situation; it is purely theoretical. On the flip side, I could use the existing research from this study but attempt to validate or counteract its results, depending on the outcome of the research I conduct. That research could involve understanding humans' decisions in a virtual, real-time environment, an interesting alternative that could improve how we understand social norms under critical conditions.

There are a number of arguments against this potential area of research. Firstly, there is the ethical question: should we really be putting people inside a virtual reality space and asking them to decide who should be safe and who should not, and does the abstractness of the Moral Machine justify where that ethical line sits? The counter-argument is that because it takes place in a virtual reality space, it retains a level of safety and abstractness that doesn't actually harm anyone involved (theoretically).

That point brings me to the argument against it: if the abstraction still leaves participants feeling safe, what is the real point of the research? My main answer is the idea of moving the situation into a real-time environment, which should determine what people really judge to be socially justified under rapid conditions.

This is not yet confirmed as a research topic I will undertake, but it's another one on the cards that needs evaluating and justifying down the line.

My area of Research on Automated Vehicles

Introduction

With the growing commercial viability of Automated Vehicles, there is a huge array of Research that is being undertaken to determine the public’s viewpoint on the concept of owning/using an Automated Vehicle.

Research from the likes of Gold, Körber, Hohenberger, Lechner, and Bengler (2015) has already indicated that one of the precursors to the acceptance of a new technology is trust, or, as described by Schaefer, Chen, Szalma, and Hancock (2016), "no trust, no use". Understanding how a consumer trusts a vehicle is paramount to the adoption of this newfound technology, and that trust must be nurtured for the technology to have any major impact on consumers' day-to-day lives and to repay the investment companies have made in it.

A meta-analysis by Schaefer, Chen, Szalma, and Hancock (2016) indicates an array of factors that affect trust in automation:

Firstly, there is the concept of the consumer's state, an example being stress level: consumers feel different levels of stress throughout their lives, and these will affect, both positively and negatively, their level of trust towards an automated vehicle. The idea of states also applies to a human's concentration, or attention, on automated machinery; several studies have indicated that operators with lower attentional control rely more heavily on automated systems than those with higher attentional control (Schaefer et al., 2016, 381).

Secondly, and closely related to states, is cognition. A user's trust is influenced by their learning experience and how easily they interact with a piece of automated machinery. Further to this, a user's prior knowledge of other automated systems drastically improves their trust in an automated system (Schaefer et al., 2016, 382). The significance of this is striking, as it highlights the difficulty manufacturers are going to have with acceptance of the newfound technology. Over time it is highly likely, as this research indicates, that trust of automated systems will come to be adopted naturally thanks to humans' previous experience of automated systems.
Another cognitive factor is the mental workload the automated system imposes on the human. Previous research using combat-based tasks shows a degradation of trust in automation when mental workload is high whilst interacting with the automated system (Schaefer et al., 2016, 382).

These two factors are but a small snippet of what has been identified; covering it all would essentially form its own book, which is not the role of this blog.

Areas of research that need exploring

From all the literature, there are a few areas which clearly need to be properly understood and which may provide an insight into the challenge of trusting an automated vehicle.

One area is understanding the difference across age ranges, specifically why the usual status quo of technological adoption seems to be questioned by automated vehicles. The existing status quo is that younger people are more likely to adopt a new technology, far quicker and more easily than an elderly person. Research on automated vehicles shows a slightly different conclusion: young people have less trust in an automated vehicle than elderly people. One study demonstrated this via eye tracking, which indicated higher horizontal gaze deviation in young people compared to elderly people; however, the improvement in trust after the study was far more significant in young people than in elderly people. The research question to consider here is not what change in trust there is after a participant is exposed to an automated vehicle, but why there is a change in trust, specifically in younger people.

Another area is understanding whether there is a significant difference in trust between males and females in relation to automated vehicles. The studies include a fairly equal proportion of males and females, but none measures any difference between genders. If there were a significant difference, it would point to an interesting investigation into whether a solution exists that could balance the viewpoint of whichever gender trusts the automated vehicle less.

Finally, an area which hasn't been investigated is the difference in trust before and after a critical failure of an automated vehicle, in this case a cyber attack. There is already an understanding of how trust differs between steady lane driving and a take-over scenario, but there is no understanding of the effect on trust when an unexpected failure or cyber attack occurs on a consumer's vehicle. This is an interesting area, although ethically questionable.

Conclusion

Out of the three areas of research that could be selected, there are still questions to answer and approvals to obtain before a decision can be made and an ethical application submitted for the chosen topic. Once a research topic has been decided on, it will be documented here.

Bibliography

Gold, C., Körber, M., Hohenberger, C., Lechner, D., and Bengler, K. (2015) Trust in automation – Before and after the experience of take-over scenarios in a highly automated vehicle. In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, Las Vegas, United States, 26-30 July. [Unknown]: Elsevier B. V., 3025-3032. Available from https://www.sciencedirect.com/science/article/pii/S2351978915008483 [accessed 2 February 2018].

Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., and Hancock, P. A. (2016) A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(3), 377-400. Available from http://journals.sagepub.com/doi/full/10.1177/0018720816634228 [accessed 2 February 2018].