Intro

We have been testing for as long as we have been programming. In the beginning we did quite thorough reviews of the punch cards (and we actually sometimes measured the space between the holes). Then, when the higher level languages came along, we began to do basic assertions. In the 90s and 00s, together with the internet, we began to see adoption of several testing frameworks: JUnit, NUnit, xUnit, Jasper (a Python BDD testing library) (I could write a post with just the names). But we see a pattern in just those frameworks mentioned: unit, with the exception of one: Jasper, which is a BDD framework. I can really recommend checking it out: https://github.com/tylerlaberge/Jasper/.

It is "easy" to write a unit test. Not easy, but a) we have the framework, b) 90% of programmers are taught how to write one, and c) the vocabulary built into the unit testing frameworks is quite clean (you can feel that xUnit is built on top of the experience of other frameworks). But it kind of lacks certain aspects when it comes to integration testing and functional testing. Already we have some confusion: there is both integration testing and functional testing? YES, and the funny thing is: when people say integration testing, they usually mean functional testing. This is also the case when we say system testing: we mean functional testing. That is: when the context is a developer. If we overhear a conversation between quality assurance people, functional testing and integration testing mean something completely different. Our focus here is the developer's point of view, and our goal is to see what it takes to write a functional test.

Why a functional test, and not just an integration test?

Good question. Before doing this PoC I tended to call everything that had the slightest relation to saving something to a disk an integration test: after all, that is what we are doing. But the more I come to think about it, the more I want to divide it into:

  1. functional testing
  2. integration testing
  3. unit testing

and it is actually rather simple: if a test fails, and it is a functional test, well then I know something fundamental has changed, and I need to look into it right away, asking POs, QAs etc. If an integration test or a unit test is failing, I can handle it differently. The smart bit might well be that for an integration test I can be rather loose: can it INSERT or UPDATE something in the database (no matter the value)? But for a functional test I can be a bit pickier, because I can test specific requirements. For example, per the requirement "the address method needs to be able to save an address of at least 40 characters", I can test in and around that. So when my integration test fails, I know something is wrong with the INSERT or UPDATE logic (especially if the functional test also fails), but if the integration test runs smoothly and the functional test fails, I know something has changed with respect to the requirement (another team member has lowered the nvarchar(40) to nvarchar(30) in the database) … Believe me, this has happened more than once in the teams I have been a part of. A sketch of the split follows below.
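
To make the split concrete, here is a hedged sketch; the repository, the client and the Address type are hypothetical stand-ins, not code from the PoC:

        // Integration test: loose - we only care that the write happens at all
        [Fact]
        public void Upsert_WritesARow()
        {
            var id = _repository.Upsert(new Address("any value will do"));

            Assert.True(id > 0);
        }

        // Functional test: pinned to the requirement "an address of at least
        // 40 characters can be saved" - this is the one that catches a teammate
        // lowering nvarchar(40) to nvarchar(30)
        [Fact]
        public async Task SaveAddress_Accepts40Characters()
        {
            var forty = new string('x', 40);

            var saved = await _client.SaveAddressAsync(forty);

            Assert.Equal(forty, saved.Street);
        }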

I have begun to think more about this, and to plan accordingly.

Integration test

As an example, an integration test could be me testing a ProductRepository.Get(int productId) method against a real database that I have seeded with some test data. Does it get the information I have requested? If it spans multiple tables, does it JOIN those tables correctly? All in all a fine little test (a sketch follows below), but it doesn't actually give us any real value other than telling us the obvious: it retrieves the correct product. And the documentation aspect of it, well … As a developer I can read the method signature and understand it quite nicely (reading tests is a good way of understanding how a framework is used; it is actually quite often 1000x better than documentation describing how to use the code). So we don't get any real value from it, and I always think that we could get more value from the time spent by doing this as part of a functional test: the functional test tells us if a slice of the system works from the end user's perspective (the end user could be another system and not a person). If the functional test breaks, the other system will most certainly break also, and we don't want that. If the functional test breaks, we know that some update to a JOIN doesn't work, or a mapping error has been introduced somewhere. It gives us more value, and it tests the interaction between the end user and the system, more than it tests implementation details: implementation details are usually tested when doing integration tests. WHICH can be what you want. If it is a highly critical system component that you are working on, you would want to test those details.
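
A hedged sketch of that little integration test; SeededDatabaseFixture, its ConnectionString property and the seeded product are hypothetical stand-ins for however you seed the database:

    public class ProductRepositoryTests : IClassFixture<SeededDatabaseFixture>
    {
        private readonly SeededDatabaseFixture _fixture;

        public ProductRepositoryTests(SeededDatabaseFixture fixture)
        {
            _fixture = fixture;
        }

        [Fact]
        public void Get_ReturnsTheSeededProduct()
        {
            var repository = new ProductRepository(_fixture.ConnectionString);

            var product = repository.Get(42); // 42 was seeded by the fixture

            // if the query spans multiple tables, a broken JOIN or mapping shows up here
            Assert.Equal("Test product", product.Name);
        }
    }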

In theory: an integration test is when you take the different units that were unit tested, and instead of mocks, you test how they interact. This may or may not have some sort of external dependency attached to it: like integrating with a real UserRepository that connects to a real database. You don't know all the paths, so you are doing some kind of black box testing, i.e. you look at input and output. In practice, you don't always do black box integration testing, but white box testing (sort of), and I think that is somewhat fine if it is some critical integration you are testing. But that is just my opinion.

Functional test

As an example, a functional test could be me testing ProductController.GetLatestProduct(int productId) against a real database with an HTTP request. I could swap out HTTP with AMQP and thereby use a message bus as my data carrier, but we all understand HTTP. For me to do that, I need to spin up the HTTP server and create an HTTP client to make the HTTP request. By doing it this way, I also test the IoC and all the middleware, and we really want to test this as well: is the request routed correctly, is the JWT token validated, is the psycho middleware component you had to write to satisfy an obscure criterion actually not messing up the request? Please note: a functional test is not you new'ing up the ProductController. That gets you some of the way, but it is closer to an integration test than a functional test. A functional test is you testing a slice of the system as seen through the lens of the caller: spinning up the server, activating the IoC, letting the request flow through the middleware until it hits something (a database, storage etc.) and possibly returns something. Please note: I said slice, meaning that it is PERFECTLY OK to mock certain external calls out, so the focus of your test is 100% on the GetLatestProduct bit. When it fails, you then know that it has something to do with retrieving the latest product.

For the readers that have gotten this far: yes, in my thought experiment I am using ProductRepository.Get(int productId) when calling ProductController.GetLatestProduct(int productId) :).

In theory, a functional test is a black box test, where you focus on input vs output and test the functional requirements of the application. In practice, this isn't always as black box as we might think, but I have an example of that later on.

How?

To do a good functional test in .NET you really need to familiarize yourself with WebApplicationFactory. That is the core component of a good functional test. With it you can:

  1. spin up an in-memory HTTP server
  2. mock out things in the IoC
  3. create an HttpClient to use against your server

What this doesn't give you is a way to test something that isn't HTTP, but I have created some small PoCs on how to do that, which I will link to in the last part of this post. BUT the flow is the same: you spin up the system under test, you mock out the things you don't want, and you call it. A minimal example follows below.
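
A minimal sketch of those three steps, assuming a Program entry point and a /timestamp endpoint (both assumptions; substitute your own):

    public class TimestampTests : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly WebApplicationFactory<Program> _factory;

        public TimestampTests(WebApplicationFactory<Program> factory)
        {
            _factory = factory;
        }

        [Fact]
        public async Task Get_Timestamp_ReturnsSuccess()
        {
            // CreateClient boots the in-memory server and hands us an HttpClient
            var client = _factory.CreateClient();

            var response = await client.GetAsync("/timestamp");

            response.EnsureSuccessStatusCode();
        }
    }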

Before we begin the journey (intro part 2)

I am using XUnit, so I am myself a victim of the shortened vocabulary, but I really wanted to see how it turned out, so I knew what to look for if I wanted to a) create my own library for proper functional testing, or b) find one that already does it (SpecFlow could be a candidate, although I haven't looked at it yet). Please also note that e.g. SpecFlow will only get me part of the way, because it is geared much more towards a QA and/or a product owner specifying the test scenarios. That is not my intent, because I am looking for something that I can use to

  1. specify what I want to test (this could easily be specified with e.g. Gherkin as in SpecFlow), and
  2. specify what part of the system I want to test against (I want to test against this database, but not this endpoint) - with Gherkin, you can specify what scenario to test, but not what infrastructure to test against

So SpecFlow will only get me part of the way, and I really want to take a closer look at what it takes to spin up an environment without too much hassle.

My focus from now on will not be on a complete example (you can find that in my repo, which I am linking to in just a moment), but more on the interesting stuff I have learned when trying to spin up my environment when I run dotnet test. In outline, my focus will be on:

  1. A class I call EnvironmentController - it does the heavy lifting
  2. Injecting connection strings
  3. Parallelization of the test run

EnvironmentController

Before we begin: The PoC can be found here.

Where are we?

Whenever a test is created, we start out by defining it, following the same format as we always see:

namespace Tests
{
    // An IClassFixture is created before and disposed after each test class (not each test method)
    public class IntegrationTest
        : IClassFixture<CustomWebApplicationFactoryBase<Program>>, IDisposable
    {
        private readonly CustomWebApplicationFactoryBase<Program> _factory;

        public IntegrationTest(CustomWebApplicationFactoryBase<Program> factory)
        {
            _factory = factory;
        }

        public void Dispose()
        {
            _factory.Dispose();
        }

        [InlineData("/timestamp")]
        [Theory]
        public async Task ATest(string url)
        {
        }
    }
}

This in turn uses a custom factory that I have made just for this, wrapped in an IClassFixture. In XUnit an IClassFixture is created once before the test class is started and disposed once after the class has finished, not before each test method inside the class. So what does our custom base class do? I will show you:

    public class CustomWebApplicationFactoryBase<T>
        : WebApplicationFactory<T> where T : class
    {
        private EnvironmentController _environmentController;
        public IEnumerable<ConnectionString> ConnectionStrings { get { return _environmentController.ConnectionStrings; } }

        public CustomWebApplicationFactoryBase()
        {
            _environmentController = new EnvironmentController();
            _environmentController.Setup();
        }

        protected override IHostBuilder CreateHostBuilder()
        {
            var builder = base.CreateHostBuilder();

            builder.ConfigureServices(services =>
            {
                services.Configure<HostOptions>(hostOptions =>
                        {
                            hostOptions.BackgroundServiceExceptionBehavior = BackgroundServiceExceptionBehavior.Ignore;
                        });
            });

            return builder;
        }

        protected override void Dispose(bool disposing)
        {
            base.Dispose(disposing);
            _environmentController.Dispose();
        }
    }

The whole shebang can be found here (there isn't much more to it other than some comments and a namespace).

What it does is simply spin up our environment controller and dispose of it when it is done. Because this happens in an IClassFixture, the controller is created before the test class is started, and disposed after it is done.

There isn't actually anything non-standard to this, and we like that :)

To recap:

  1. The EnvironmentController is called by an IClassFixture
  2. The IClassFixture is attached to a test class
  3. An IClassFixture is called once per test class

So what does it do? Magic? MAGIC! Naaaah.

The fashionable and boring inner workings of the magic class

The class can be boiled down to this:

    public class EnvironmentController : IDisposable
    {
        private IContainerService _azureiteContainer;
        private IContainerService _rabbitmqContainer;
        private IContainerService _mssqlContainer;
        private ICompositeService _compose;
        private bool disposedValue;

        public List<ConnectionString> ConnectionStrings { get; } = new List<ConnectionString>();

        public void Setup()
        {
            var prefix = Guid.NewGuid().ToString().Replace("-", String.Empty);

            Console.WriteLine(prefix);
            _compose = new Builder()
                              .UseContainer()
                              .UseCompose()
                              .ServiceName(prefix) // we want a random container name and network name
                              .FromFile("test-env.yaml")
                              .RemoveOrphans()
                              .Build().Start();

            string saPassword = "YourStrong(!)Password^2";

            _mssqlContainer = _compose.Containers.First(c => c.Name.Equals("mssql-database"));

            var mssqlPort = _mssqlContainer.ToHostExposedEndpoint("1433/tcp").Port;
            var mssqlHost = GetHostFromEnvironment();

            _rabbitmqContainer = _compose.Containers.First(c => c.Name.Equals("rabbitmq"));
            var rabbitmqPort = _rabbitmqContainer.ToHostExposedEndpoint("5672/tcp").Port;
            var rabbitmqManagementPort = _rabbitmqContainer.ToHostExposedEndpoint("15672/tcp").Port;
            var rabbitmqHostString = _rabbitmqContainer.ToHostExposedEndpoint("5672/tcp").Address.ToString();
            var rabbitmqHost = GetHostFromEnvironment();

            _azureiteContainer = _compose.Containers.First(c => c.Name.Equals("azurecontainer"));
            var azureStorageBlobPort = _azureiteContainer.ToHostExposedEndpoint("10000/tcp").Port;
            var azureStorageQueuePort = _azureiteContainer.ToHostExposedEndpoint("10001/tcp").Port;
            var azureStorageHost = GetHostFromEnvironment();

            var mssqlConnectionString = BuildMssqlConnectionString(mssqlHost, mssqlPort, saPassword);
            var rabbitmqConnectionString = BuildRabbitmqConnectionString(rabbitmqHost, rabbitmqPort);
            var rabbitmqManagementConnectionString = BuildRabbitmqManagementConnectionString(rabbitmqHost, rabbitmqManagementPort);
            var azureStorageConnectionString = BuildAzureStorageConnectionString(azureStorageHost, azureStorageBlobPort, azureStorageQueuePort);

            ConnectionStrings.Add(mssqlConnectionString);
            ConnectionStrings.Add(rabbitmqConnectionString);
            ConnectionStrings.Add(azureStorageConnectionString);
            ConnectionStrings.Add(rabbitmqManagementConnectionString);

            WaitForRabbitmqToBeReady(rabbitmqConnectionString);
            WaitForMsSqlToBeReady(mssqlConnectionString);
            WaitForAzureStorageToBeReady(azureStorageConnectionString);
            RunMigrations(mssqlConnectionString);
        }

        // Basic cleaning - it deletes everything and recreates it
        // Not everything here is tested - only MSSQL. In my current setup I don't need to reuse storage accounts and rabbitmq
        public void Clean()
        {
            var tableNames = new List<string>();
            CleanAzureStorage(tableNames);
            CleanMssql();
            CleanRabbitmqQueues();
        }

    }

Or put another way:

  1. Run docker-compose, which has some predefined services. For my test those are MSSQL, Azure Storage and RabbitMQ. Why do it through a docker-compose file? Well, we can actually reuse it if we manually want to spin up the containers (think: if you want to run some API that uses these services). Then we can call docker-compose -f the_file.yml up -d, et voilà, we have the exact same environment that we use to do the functional test. A sketch of such a file follows after this list
  2. The above containers are spun up without mapping a host port, so docker gets to decide what port to use - a good way of randomizing port assignment. We need this because if we want to spin this up for a lot of tests, and run them in parallel, then the port assignment has to be randomized
  3. After these services are started (actually: starting), we grab the randomized ports from each service
  4. When we have the ports, we build all the connection strings
  5. When we have built the connection strings, we wait for the services to actually be started
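
A hedged sketch of what such a test-env.yaml could look like - the service names match what the EnvironmentController looks for, but the images and environment variables are my assumptions, not the PoC's actual file. Note that only container ports are listed, so docker publishes them on random host ports:

    version: "3"
    services:
      mssql-database:
        image: mcr.microsoft.com/mssql/server:2019-latest
        environment:
          ACCEPT_EULA: "Y"
          SA_PASSWORD: "YourStrong(!)Password^2"
        ports:
          - "1433"     # container port only -> random host port
      rabbitmq:
        image: rabbitmq:3-management
        ports:
          - "5672"
          - "15672"
      azurecontainer:
        image: mcr.microsoft.com/azure-storage/azurite
        ports:
          - "10000"
          - "10001"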

There is also a Clean method we can call that will clean all the services (empty all the tables in the database etc.).

That's the magic of it: boring stuff that needs to be programmed. But extremely useful in the long run.

Injecting connection strings

This is the ugly bit of my PoC. It could easily be put into some kind of utility class (same as the EnvironmentController is), but I didn't want to do that. I wanted to copy the code (and make updates to it), so I would be more aware of what to change if I should end up rewriting this PoC into something useful (a nuget library, a github template of some sort). I am also a believer in the idea that it is good to copy code and let it settle before deciding what to "pull out" into some sort of utility concept: let the architecture grow and show itself, instead of forcing it through.

So: in the EnvironmentController we built the connection strings, and we added them to a public List property of the EnvironmentController so we can fetch those strings, WITH the randomized ports attached. But we still need to inject them into our API, which is spun up through WebApplicationFactory. It is actually pretty straightforward to inject these, because we can make use of the normal IoC stuff in ASP.NET. Let's look at some code:

        [InlineData("/timestamp")]
        [Theory]
        public async Task Post_TimestampWithNoDateTime(string url)
        {
            var messageQueueConnectionString = _factory.ConnectionStrings.First(c => c.Identifier.Equals("rabbitmq"));

            // Arrange
            var customFactory = _factory.WithWebHostBuilder(builder =>
            {
                builder.ConfigureTestServices(services =>
                {
                })
                .ConfigureAppConfiguration((context, configuration) =>
                {
                    // we inject rabbitmq into the configuration
                    // this needs to be done this way, because MassTransit is set up in Startup, and therefore doesn't get injected into the IoC
                    configuration.AddInMemoryCollection(new Dictionary<string, string>
                    {
                        {"MessageQueueOptions:Username",messageQueueConnectionString.Username},
                        {"MessageQueueOptions:Password",messageQueueConnectionString.Password},
                        {"MessageQueueOptions:Protocol",messageQueueConnectionString.Protocol},
                        {"MessageQueueOptions:Host",messageQueueConnectionString.Host},
                        {"MessageQueueOptions:Path",messageQueueConnectionString.Path},
                        {"MessageQueueOptions:Port",messageQueueConnectionString.FirstPort.ToString()}
                    });
                });
            });

            var client = customFactory.CreateClient();

            // ... More
        }

Where are we? We are at the start of a test, making use of the ability to overwrite the configuration, adding the host, path etc. to the configuration of the API under test. The API can be found here. In the API we bind a MessageQueueOptions object to the MessageQueueOptions section of the Configuration:

        public void ConfigureServices(IServiceCollection services)
        {
            // we do it this way because we don't want to be dependent on IOptions
            var messageQueueOptions = new MessageQueueOptions();
            var messageQueueOptionsSection = Configuration.GetSection(nameof(MessageQueueOptions));

            messageQueueOptionsSection.Bind(messageQueueOptions);

            // More below, removed
        }

We can therefore make use of AddInMemoryCollection on the Configuration and add the username, password, host, random port etc. In turn this will override whatever is passed to the API when it is started up with the WebApplicationFactory. TLDR: we can customize the API under test just before it is started, and feed it the connection string that was built when we started the services in the EnvironmentController. MARVELOUS!!

Run, code RUN

We now have

  1. The services are online, run with docker-compose
  2. We have built the connection strings
  3. We have customized the API

Because this is run during the setup of the XUnit test, we can just call dotnet test to spin up the services, run the tests, and afterwards, through disposing, tear all the services down again. Easy! This helps the other developers of the project tremendously, because to run the functional tests they don't need to spin up docker-compose locally and potentially get errors because the chosen ports are already taken. A developer needs to run some tests, and the developer therefore runs dotnet test. Isn't that beautiful? I think it is.

Parallelization of tests

This isn't the first time in history someone has tried to run tests that depend on external stuff, but such tests tend to be run one after the other, because, well, the ports are either fixed, or it was hard (time consuming) to spin up the databases. It was also error prone, because the external stuff was spun up once and needed to be cleaned before the next set of tests was run (we don't want to contaminate the database with wrongful data that doesn't match the specific test). But this is all gone if we do it with the EnvironmentController, because:

  1. all services have random ports, so they don't clash
  2. we use docker to set up and tear down the services that are needed for the tests

Because of this we can now run the functional tests in parallel :O YES! In parallel, and XUnit actually makes that pretty easy, because the parallelization of the tests depends on the logical processors in your PC. Sweet. It is also easy to override if you have 100 logical processors and don't want to hit 100% on all of them. You just overwrite maxParallelThreads in a file called xunit.runner.json (read more about it here). I sometimes set it to 4, because … Well. That is a fine number for me :)
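
For reference, the xunit.runner.json for that is as small as this (remember to have it copied to the output directory):

    {
      "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
      "maxParallelThreads": 4
    }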

Because each test class in the same assembly in XUnit is in its own test collection by default, every test class is run in parallel. Read more here. So each functional test, or IClassFixture, is run in parallel.

CI/CD - it is dynamite (bad AC/DC pun)

It is all fine and dandy that everyone can run these tests locally, but what about in the CI/CD environment? Well, because we don't do any magic and just use docker, we can use DinD, docker-in-docker. It is actually quite easy, and all major CI/CD providers offer it. I use gitlab, and to do this we use an image called docker:20.10.12-dind-alpine3.14 and add a service to it named docker:stable-dind. We can then install our dotnet version (6.0) and run dotnet test:

variables:
  DOCKER_TLS_CERTDIR: ""
  FF_NETWORK_PER_BUILD: 1
stages:
  - test
tests:
  stage: test
  services:
    - docker:stable-dind
  image: docker:20.10.12-dind-alpine3.14
  script:
    - apk update && apk add wget bash icu-dev docker-compose icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib
    - apk add libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/
    - wget https://dot.net/v1/dotnet-install.sh
    - chmod +x dotnet-install.sh
    - ./dotnet-install.sh -c 6.0 --install-dir /usr/bin
    - dotnet test Api1.IntegrationTest
    - dotnet test Worker1.IntegrationTest

I use alpine linux because it is a small image, and it installs fast. I then add dotnet and docker-compose. These two take a while to install, and if I was going to use this at a large scale I would:

  1. Have one or more dedicated CI/CD agents to use
  2. Have some prebuilt images for the agents to use

But: this is a PoC, and I actually spent quite some time getting this to work properly. I like the outcome and the base of it (being alpine). If I were to prebuild the image, I would have a very small image to load for each test.

The pipeline can be found here.

Non-HTTP tests

This is all fine when we talk HTTP workloads, but what about testing non-HTTP workloads, like a HostedService or a console program? Well, then we need to reinvent the wheel a bit, and I have tried that by doing my own raw and hammered implementation of a WebApplicationFactory. The shebangs can be found here:

  1. GenericHostApplicationFactory A base for testing console applications that is bundled with the EnvironmentController here and used in the tests specified here
  2. GenericHostedServiceFactory A base for testing hosted services that is bundled with the EnvironmentController here and used in this test

The pattern is the same (minus the WebApplicationFactory part … well, because Microsoft gives us that out of the box): the Generic* bases are my equivalent of the WebApplicationFactory, and the Custom* implementations I have made play the same role as the CustomWebApplicationFactoryBase mentioned above.

The reason why I have made two almost identical implementations for testing hosted services and console programs is … Well, we need one if we need to spin up a console program (which is in turn a hosted service host), and I wanted to create one that kind of looks like what the WebApplicationFactory exposes: some way of starting the host, but overriding some part of the configuration/IoC before starting. For example, for the console one I have made it so that I can (see the sketch after this list):

  1. CreateHostBuilder (or override it)
  2. StartHost (or override it) and apply my own configuration to it
  3. StopHost (or override it)
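
A minimal sketch of what such a console-host base could look like - this is not the actual GenericHostApplicationFactory from the PoC, and the reflection lookup of CreateHostBuilder is an assumption about the conventional Program shape:

    // requires System.Reflection and Microsoft.Extensions.Hosting
    public class ConsoleHostFactory<TEntryPoint> : IDisposable where TEntryPoint : class
    {
        private IHost _host;

        public IServiceProvider Services { get; private set; }

        public virtual async Task StartHost(Action<IHostBuilder> configure)
        {
            // resolve the program's own CreateHostBuilder so the real wiring is reused
            var method = typeof(TEntryPoint)
                .GetMethod("CreateHostBuilder", BindingFlags.Public | BindingFlags.Static);
            var builder = (IHostBuilder)method.Invoke(null, new object[] { Array.Empty<string>() });

            configure(builder); // let the test override configuration/IoC before starting

            _host = await builder.StartAsync();
            Services = _host.Services;
        }

        public virtual async Task StopHost()
        {
            await _host.StopAsync();
        }

        public void Dispose()
        {
            _host?.Dispose();
        }
    }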

For a simple test I can add the CustomGenericHostApplicationFactory (which inherits from my PoC GenericHostApplicationFactory) as an IClassFixture, add my connection strings with AddInMemoryCollection, and add/remove the things needed:

_factory.StartHost((builder) =>
            {
                builder.UseEnvironment("Integration");
                builder.ConfigureAppConfiguration((context, configuration) =>
                {
                    // we inject rabbitmq into the configuration
                    // this needs to be done this way, because MassTransit is set up in Startup, and therefore doesn't get injected into the IoC
                    configuration.AddInMemoryCollection(new Dictionary<string, string>
                    {
                        {"MessageQueueOptions:Username",messageQueueConnectionString.Username},
                        {"MessageQueueOptions:Password",messageQueueConnectionString.Password},
                        {"MessageQueueOptions:Protocol",messageQueueConnectionString.Protocol},
                        {"MessageQueueOptions:Host",messageQueueConnectionString.Host},
                        {"MessageQueueOptions:Path",messageQueueConnectionString.Path},
                        {"MessageQueueOptions:Port",messageQueueConnectionString.FirstPort.ToString()}
                    });
                });
                builder.ConfigureServices((services) =>
                {
                    // we need to remove the database options that have been set up by the API's Startup
                    var databaseOptionsServiceDescriptor = services.FirstOrDefault(descriptor => descriptor.ServiceType == typeof(DatabaseOptions));
                    services.Remove(databaseOptionsServiceDescriptor);

                    // we add our own database options with the connection string handed to us by the environment controller
                    // we REMEMBER to set the correct initial catalog - in environment-controller land we were on master, because it needed to create the database
                    var connectionStringBuilder = new SqlConnectionStringBuilder();
                    connectionStringBuilder.InitialCatalog = "TimeStampDatabase";
                    connectionStringBuilder.Password = databaseConnectionString.Password;
                    connectionStringBuilder.UserID = databaseConnectionString.Username;
                    connectionStringBuilder.DataSource = $"{databaseConnectionString.Host},{databaseConnectionString.FirstPort}";
                    connectionStringBuilder.TrustServerCertificate = true;

                    services.AddSingleton(new DatabaseOptions { ConnectionString = connectionStringBuilder.ConnectionString });
                });

            });

This is somewhat like what we saw when we used the WebApplicationFactory, but now, instead of calling WithWebHostBuilder, we call StartHost. The vocabulary is different, but the outcome is the same. Here CustomGenericHostApplicationFactory acts as our WebApplicationFactoryBase, but for a console program.

I have also made the same, but for hosted services. A hosted service is a tricky thing to test: in theory, the console application we made is a hosted service, but a hosted service is also a thing you can add to a web application. The hosted service is actually spun up when you call .CreateClient and get an HttpClient, but it just sounds bad, vocabulary-wise (is that even a word?), that you have to call .CreateClient to start your hosted service, which 99% of the time doesn't need an HttpClient to be tested. It feels off. Because there is so much in common between the console app test and the HostedService test, I have created an almost identical copy of my GenericHostApplicationFactory and called it GenericHostedServiceFactory. These two resolve the host the same way, var builder = ResolveFactory<IHostBuilder>(assembly, "CreateHostBuilder");, but in the GenericHostedServiceFactory I have a different StartHost implementation:

    public virtual async Task StartHost(Action<IHostBuilder> configuration)
    {
        var hostBuilder = new HostBuilder()
            .ConfigureWebHost(builder =>
            {
                builder.UseTestServer();
                builder.UseStartup<T>();
            });

        configuration(hostBuilder);

        _host = await hostBuilder.StartAsync();
        Services = _host.Services;
    }

This actually boots up the entire web application, along with all the hosted services. If you want to test just one hosted service, you can remove the others from the ServiceCollection (a sketch follows below). Here I only have one hosted service to test. I actually like this approach: minimal coding from my side, and reusing all the parts of the hosted service injection in the webapp. Another approach was to somehow get all the HostedServices by doing some reflection and inject them into my own console host. The downside is that I would also need the entire ServiceCollection and have to inject that as well … plus maybe some infrastructure … yeah: we want to do functional testing, so we really need to start up all of the internal infrastructure of the hosted services in order to properly test them, else we are just doing some obscure halfway cut of an integration test that isn't really testing, because … well, we are taking the animal out of its natural habitat - the host, that is (we can't even account for what obscure things happen at Startup that are needed by the hosted service under test). SO: we start up the web app, but we have again redefined the vocabulary to suit our testing needs! YES, that is right: you can call StartHost instead of .CreateClient :)
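
A hedged sketch of that trimming; MyWorker is a hypothetical name for the one hosted service you want to keep:

    builder.ConfigureServices(services =>
    {
        // remove every registered hosted service except the one under test
        // (AddHostedService<T> registers them with ServiceType == IHostedService)
        var others = services
            .Where(d => d.ServiceType == typeof(IHostedService)
                     && d.ImplementationType != typeof(MyWorker))
            .ToList();

        foreach (var descriptor in others)
        {
            services.Remove(descriptor);
        }
    });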

Here, in the GenericHostedServiceFactory, I boot up the UseTestServer myself. I can now do a simple test with an IClassFixture and a CustomHostedServiceFactory. It is a messy test, because … well, the hosted service wakes up once in a while, takes something off a queue, and saves it to the database (I have created this hosted service to be pretty annoying to test). So I really have to start the service, add some lines to the queue, and wait for the underlying database to receive the rows. I will not list the entire code here, but you can check it out via the link earlier (the simple test). This actually breaks with what a proper functional test is, because it is no longer black box: we know the underlying details of the program. In a real world scenario we might have had some logic that fetched the rows, which we could use to make it a proper black box test, but in real life … that is not always the case. I would still argue that this can fit inside a functional test, because we want to test the functional needs of the application: our functional requirement for the application could have sounded something like: "we need to store some datetimes in the database". You get the point. The ugly truth of testing: it doesn't always fit the theory.

What about … integration testing?

We can do that as well. I have two tests, here and here. It is more like what we know: it uses a MSSQLDateTimeRepositoryFixture, which you can see here. Nothing fancy. We fetch the connection string from the EnvironmentController and pass it to the Repository in the constructor of the fixture:

        public MSSQLDataTimeRepositoryFixture()
        {
            _environmentController.Setup();
            var builder = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.Integration.json", optional: true, reloadOnChange: true);

            var databaseConnectionString = _environmentController.ConnectionStrings.First(c => c.Identifier.Equals("mssql"));
            var configuration = builder.Build();

            var connectionStringBuilder = new SqlConnectionStringBuilder();
            connectionStringBuilder.InitialCatalog = "TimeStampDatabase";
            connectionStringBuilder.Password = databaseConnectionString.Password;
            connectionStringBuilder.UserID = databaseConnectionString.Username;
            connectionStringBuilder.DataSource = $"{databaseConnectionString.Host},{databaseConnectionString.FirstPort}";
            connectionStringBuilder.TrustServerCertificate = true;

            var databaseOptions = new DatabaseOptions { ConnectionString = connectionStringBuilder.ConnectionString };
            Repository = new MSSQLDateTimeRepository(databaseOptions);
        }

I can then save the timestamp and compare the returned value, using a method on the repository that returns all rows:

        public void SaveTimeStamp()
        {
            DateTimeOffset now = DateTimeOffset.Now;
            var returned = _fixture.Repository.Upsert(now);
            var all = _fixture.Repository.GetAll();

            Assert.Equal(now, returned);
            Assert.Equal(1, all.Count());
        }

NICE. But if you read carefully, I use ordering (OMG). I actually think that is OK when doing integration testing (and functional testing) - but not unit testing. Why? Let me tell you.

Ordering tests

If we take a closer look at the test from before, it actually has ordering attached to it:

        [Fact, Order(1)]
        public void SaveTimeStamp()
        {
            DateTimeOffset now = DateTimeOffset.Now;
            var returned = _fixture.Repository.Upsert(now);
            var all = _fixture.Repository.GetAll();

            Assert.Equal(now, returned);
            Assert.Equal(1, all.Count());
        }

        [Fact, Order(2)]
        public void SaveTimeStampAgain()
        {
            DateTimeOffset now = DateTimeOffset.Now;
            var returned = _fixture.Repository.Upsert(now);
            var all = _fixture.Repository.GetAll();

            Assert.Equal(now, returned);
            Assert.Equal(2, all.Count());
        }

And in the world of XUnit (and unit testing as a whole), that is a no-go, and I agree: we don't want ordering there, because then we are either not testing a single unit anymore, or we have two units that actually need to be one. But we are not doing unit testing; we are doing integration testing and functional testing, which is rooted more in actual data than in code paths: we look at input and output, and for that we sometimes need ordering, and tests can sometimes (not always) depend on each other, and that is fine. Maybe we want to create a flow of small tests, each of which is a valid test independently, but knitted together they create a flow where the last test (maybe a GET to a calculation method) has to have a certain amount of data attached to it. I am well aware that you can create a single test method where the whole flow is written out, but I actually sometimes like to create a small test class that has many valid tests, each contributing data that the last test is responsible for validating. When I run that test class, I can see that the first five tests are green but the last two are red, so I know that for that particular test case, something is wrong from that step forward. I tend to call these UseCase tests, and I have them both as integration tests and functional tests. I do try to avoid these kinds of tests, but sometimes I can see that they fit the purpose of properly testing a given requirement. And for that I need ordering. I always use XUnit.Extensions.Ordering. You have to patch the test framework, but it is fairly easy to do. Then you just add , Order(your_order) to your tests and you get ordering. Clean and simple.

In the above test I don't clear the database after each test, so I just keep adding data to it to suit my UseCase test, but in the other test I have a Dispose method to clean up after each Fact is run:

        public void Dispose()
        {
            _fixture.Clean(); //Cleans up after each test
        }

It clears the database after each test, so we only get single rows in the database:

        [Fact, Order(1)]
        public void SaveTimeStamp()
        {
            DateTimeOffset now = DateTimeOffset.Now;
            var returned = _fixture.Repository.Upsert(now);
            var all = _fixture.Repository.GetAll();

            Assert.Equal(now, returned);
            Assert.Equal(1, all.Count());
        }

        [Fact, Order(2)]
        public void SaveTimeStampAgain()
        {
            DateTimeOffset now = DateTimeOffset.Now;
            var returned = _fixture.Repository.Upsert(now);
            var all = _fixture.Repository.GetAll();

            Assert.Equal(now, returned);
            Assert.Equal(1, all.Count());
        }

As we see with each Assert. Sometimes you do this. Actually, more often than not you will want to do SEA (annotated on a test below):

  1. Setup, prepare the data to the given test
  2. Execute, the logic under test
  3. Assert that everything is as you expected
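
Annotated on one of the tests from above, the three phases look like this:

        [Fact]
        public void SaveTimeStamp()
        {
            // Setup: prepare the data for this test
            DateTimeOffset now = DateTimeOffset.Now;

            // Execute: the logic under test
            var returned = _fixture.Repository.Upsert(now);

            // Assert: everything is as we expected
            Assert.Equal(now, returned);
        }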

Why not just tear down and set up a whole new database, as we do with each Fixture? Well, that would be time consuming and expensive (you need big test agents when doing this in a CI/CD pipeline): we spin up the infrastructure for each class of tests, and then we use that infrastructure for the tests. The tests should be logically grouped: BasketTests, CalculationTests etc. We can then clear the infrastructure after each test, and XUnit does that if we add a Clean call in the Dispose method of the test class. This is how I do most of my testing: clean the database, run a test, etc. Sometimes we also need ordering in this, and we can have it.

(I avoid ordering for integration testing and functional testing as much as I can - but I need to have the tool at hand when I need it.) If I am going to use it, I always ask myself if I am doing something wrong, and mostly: I am :) Then I go back to the drawing board and figure out a way to test it without ordering. But as I said before: I sometimes like to have UseCase tests, where each step in the functional requirement is a Fact itself, and the last Fact is a validation that the requirement is fulfilled:


[Fact, Order(1)]
public void AddProductOneToBasket() {  }

[Fact, Order(2)]
public void AddProductTwoToBasket() {  }

[Fact, Order(3)]
public void AddProductThreeThatTriggersDiscount() {  }

[Fact, Order(4)]
public void HasDiscountBeenSetOnBasket() {  }

I will make these kinds of tests for critical parts of my functional testing (or when the QA can't easily test it when doing the final testing), but I tend to hold my horses a bit. I like this flow better than having a single test:


[Fact]
public void AddProductOneTwoThreeToBasketAndValidateThatDiscountHasBeenSet() {  }

The function name is 1) too long, the body contains WAY TOO MUCH CODE, and it quickly gets muddy. I like that my three add tests from above are tests that need to be valid when looking at them in isolation. So I get much more out of it: 1) clean names that explain what they are doing, 2) isolated tests that always need to be valid, even if the use case fails (due to a change in requirements), and 3) a single test that describes a given use case, which can be used as the basis of documentation: when we have added three products to the basket, it has to trigger a discount.

We need order for that :)

But you call all of your functional tests integration tests?

Good question, easy answer: when I started this PoC nine months ago, I had no idea what I was heading into. I wasn't a novice in testing. I have written a lot of integration tests and unit tests, but I had never really put any thought into the theory behind them, other than what I had learned in school (and I haven't really needed that knowledge until now). I haven't changed the naming in my PoC, and I haven't changed the file structure, but if I were going to, this is what I would change:

  • Api.IntegrationTest should be renamed to Api.FunctionalTest (including all references to Integration in that folder)
  • Worker1.IntegrationTest should be renamed to Worker1.FunctionalTest. The SingleIntegrationTest.cs and SingleIntegrationTestCleared.cs should be moved to a folder named Worker1.IntegrationTest
  • All functional tests should be in a project of their own, named FunctionalTests
  • All integration tests should be in a project of their own, named IntegrationTests
  • If there were any unit tests, a project should be created named UnitTests

With the above structure you can easily run all the tests. I would also create a Test.sln that references all the tests, which I could run when doing CI/CD.

Why haven't I done this for the PoC sandbox? Well, I have made so many changes that I kind of stopped "trying" to follow the standard. I am experimenting with spinning up the infrastructure around the tests, and I can easily do that without following the best practice folder structure or the better naming (which I have learned while doing this).

Should it be called functional testing instead of integration testing?

In my opinion, almost all the integration tests in the Microsoft docs about WebApplicationFactory should be renamed to functional tests, because when we start to test on input and output at the API level, we are doing functional testing. I know that functional requirements aren't mentioned in the docs: after all, they are only examples, but I resist a bit when reading that it is integration testing. I can see that they test the different components when spinning the API up, and by that do integration testing, i.e. are the components working when put together. But when you start to make HTTP requests to the API, you are moving towards functional testing, in my opinion. I can see that you also want to do integration testing and see that everything works together when doing an HTTP request … Well. I think the difference between functional testing and integration testing really comes down to whether we are testing the functional requirements or not.

In my PoC I am doing functional testing more than integration testing. I haven't written any requirements down on paper, but I had some requirements that should be fulfilled. I know that in a functional test we are testing a slice of the system, and that includes the frontend, but we don't always have a frontend and buttons to press. So the slice of the system really runs from the input until the output is saved, or handled, and sometimes, well, that starts with an HTTP request to an API and ends when the data has been saved.

Interesting indeed! There are grey areas when doing black and white box testing :) Feel free to comment below if you have any opinions or just want to blurt out your view on things!

Tips

I have done a lot of digging and found some stuff that I would like to outline. I have actually been at this PoC for so long that I first upgraded to .NET 5, then .NET 6. I had some trouble doing those upgrades, especially with the CI/CD pipeline, but I did manage to get it to run. I will not spell that journey out for you. What I do want to spell out a bit is working with docker, and the library I use, Ductus (also known as FluentDocker).

Docker-compose network

  • Remember to call Dispose properly when working with Ductus. Not because there is something wrong with it, but if you don't, your docker networks are going to fill up, and in the end docker will throw up (and it takes a long time to throw up). Because I use docker-compose, it is enough for me to call Dispose on the _compose instance I have:

        protected virtual void Dispose(bool disposing)
        {
            if (!disposedValue)
            {
                if (disposing)
                {
                    _compose.Dispose();
                }

                disposedValue = true;
            }
        }

Docker-compose projects

When spinning up multiple docker-compose environments in parallel, you will want to name them uniquely. On the command line, when doing docker-compose up, if you don't specify a unique name with -p, docker-compose will prefix the instances with the name of the folder it is started in. Because of this, when running multiple docker-compose environments from the same folder like we do, you need to rename them. Again, this takes a bit to discover if you only run a single test at a time (I did this for a long time). With Ductus you can do it by chaining in .ServiceName(...) when building from the compose file:

            var prefix = Guid.NewGuid().ToString().Replace("-", String.Empty);

            Console.WriteLine(prefix);
            _compose = new Builder()
                              .UseContainer()
                              .UseCompose()
                              .ServiceName(prefix) // we want a random container name and network name
                              .FromFile("test-env.yaml")
                              .RemoveOrphans()
                              .Build().Start();

Here I just take a random Guid, strip the dashes from it, and send it in. It took me a long time to find this, because there isn't a 1:1 mapping between -p (project) and .ServiceName. Now you know.

I really like Ductus, and I have had a lot of fun spinning my test infrastructure up using it. I can recommend it!

Good testing tools

I don't use all of these in my PoC, but I would like to mention them anyway, since you are reading a post about testing:

I will keep adding to this list when something pops up, but these are the essentials that I keep using again and again.

The layer on top (a thought experiment)

Now, all this is pretty cool, and very technical. It adds some features to the testing experience for the developers writing tests, but wouldn't it be cool if we could somehow build a layer on top of it, e.g. with SpecFlow or StoryTeller? Let me give an example with SpecFlow. SpecFlow supports the Gherkin syntax: Given, When, Then:

Given two numbers, 1 and 2
When they are added
Then it should be 3

We can then map this to a test (see for example this example). Imagine that we expand the Gherkin syntax, so it reads:

With 1 mssql database, and 1 rabbitmq bus
Given two numbers, 1 and 2
When they are added
Then it should be 3

This will bring the infrastructure part into the test: the QA and the devs can both write the Given, When, Then part, which can be mapped properly, and the devs can add the With part (either as a last step when writing the tests, or in an automated process). This would effectively bring the BDD aspect into tests where we have to spin up some infrastructure. The infrastructure in the above would be picked from the same docker-compose file that we defined at the start (so we can't just make up infrastructure on the run). The With would be mapped, and the EnvironmentController would be spun up, kicking in and mapping the random ports into the system under test. A sketch of such a mapping follows below.
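
Gherkin has no With keyword, so in stock SpecFlow the closest approximation would be a Given step bound to the EnvironmentController. This is purely a sketch of the thought experiment, not working PoC code:

    [Binding]
    public class InfrastructureSteps
    {
        private readonly EnvironmentController _environmentController = new EnvironmentController();

        // would match: "Given 1 mssql database, and 1 rabbitmq bus"
        [Given(@"(\d+) mssql database(?:s)?, and (\d+) rabbitmq bus(?:es)?")]
        public void GivenInfrastructure(int mssqlCount, int rabbitmqCount)
        {
            // the counts would be validated against the predefined docker-compose file -
            // we can't just make up infrastructure on the run
            _environmentController.Setup();
        }

        [AfterScenario]
        public void TearDown()
        {
            _environmentController.Dispose();
        }
    }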

I kind of like the idea of it, but before you can use this extra layer, you have to educate people in the process, and they have to have some kind of understanding that the Gherkin syntax is mapped to code, and they have to follow a strict set of rules (which can be expanded). This is often cumbersome and has a steep learning curve, but it pays off in the end (I have been told so).

Using this, you are moving away from using a unit testing framework to do integration testing and functional testing. Also: by using the EnvironmentController you hide the nasty bits that spin up the test infrastructure, and the developer doesn't need to spin extra things up before running the unit testing framework that also does the integration testing and functional testing. This is something I am going to take a deeper look at sometime in the future. For now, I like the EnvironmentController part, and it is something I will try to introduce at my work in some form. We use XUnit to do functional and integration testing, and by adding this to the stack you remove a lot of confusion for other developers when running tests locally, and you also have some logic that can be reused when we move our functional and integration testing to the next layer (if we want to do it).

What is missing?

I think we are missing a framework like XUnit, but for doing integration and functional testing. We could name it XEnv (lame name) or something of the like, where we get some tools and hooks to spin up the environment, run ordered tests, make use cases etc. Right now it feels forced with XUnit, although for now I am quite happy with the current setup, albeit it takes some plumbing and has a wrecked vocabulary. I know we have frameworks like SpecFlow and StoryTeller, but they don't quite fill out the parts I want.

I will keep an eye out for the framework-to-be, and if it is named XEnv, well … then that is fine by me :)

Final thoughts

Phew, what a post. If you have gotten this far: wow, respect! I have learned a lot making a deep dive into testing, and I really need to look more into SpecFlow and StoryTeller. I started out this journey trying to see how many of the Azure workloads could be tested properly (you can find the broken code under the examples area - I don't think it will compile anymore). I did this because at my former work we used Azure for all the things, and that meant we used Azure Storage and Azure Functions (and webjobs as well), but when I changed work, I decided not to focus on that anymore. What I found out was that it isn't easy to test Azure Functions, because you need to spin up the Azure Storage in docker. This can NOW be done with azurite, but when I started that wasn't the case. It is also not possible to spin up the Azure Functions host, as described here, and we need that to properly do integration and functional testing: we need to be able to spin up the functions the same way as with webapps (imagine having a FunctionsApplicationFactory). I have changed job, to a place where we don't use Azure, so I will not be digging more into it for now, but I think it is kind of a fail that the two big sellers, storage and functions, can't be spun up easily (I know that storage can be now, but that is ONLY because some open source dude took his time to implement the Storage parts and make sure they could be spun up in docker. Microsoft took it in after a looooong time, and mentioned it). That is a fail in my eyes, because Microsoft has coined the cloud to be "the new, come use it, it is easy and not onprem", but … It is kind of onprem, when you can't spin it up and run tests easily.

Well: that was all from me. I will try not to write anything for a while. Oh: I am a bad speller, and this is my second post in English, so please bear with me on all the misspellings. I will correct them when I reread this. Also, drop a comment if you find anything completely way out there (in the ropes).

Cheers!