Blog

Useful tidbits related to software development that I think might be of use or interest to everyone else (or to me, when I forget what I did!)

Creating additional monitors in Immersed for Linux

July 19, 2022

Immersed is an application that allows you to connect to your computer from within virtual reality and use the full power of your Windows/Mac/Linux machine to run apps, but on virtual screens inside a VR world. This can be useful if you want to get away from the home office environment and into a world where you can fully concentrate on the task at hand, or if you're travelling and can't take your usual multi-screen physical setup with you.

The Windows and Mac versions of Immersed allow you to create additional "virtual" monitors, so that when you enter VR you can see not only your physical display(s) but one or more virtual displays too. Unfortunately, the Linux version of the desktop agent does not yet support this feature, so if you have a 2-screen setup in the real world, you'll see exactly 2 screens in VR. However, there is a way of adding these screens in Linux using a tool called "xrandr" (works with X11).

I originally tried the methods documented on the virtual-display-linux GitHub page; however, this doesn't work when using the Cinnamon desktop on Linux Mint, as the additional virtual screen causes Cinnamon to crash on startup. It actually turns out that you don't need to create a virtual screen at all, as long as you have some unused graphics ports on your machine. For example, on my machine running xrandr -q shows:
Screen 0: minimum 320 x 200, current 3200 x 1200, maximum 16384 x 16384
eDP-1 connected (normal left inverted right x axis y axis)
   1920x1080     60.02 +  60.01    59.97    59.96    59.93    48.00  
   ...
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
HDMI-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
HDMI-3 disconnected (normal left inverted right x axis y axis)
DP-3-1 disconnected (normal left inverted right x axis y axis)
DP-3-2 connected primary 1600x1200+0+0 (normal left inverted right x axis y axis) 367mm x 275mm
   1600x1200     60.00*+
   ...
DP-3-3 connected 1600x1200+1600+0 (normal left inverted right x axis y axis) 367mm x 275mm
   1600x1200     60.00*+
   ...
The "eDP-1" adapter is my physical laptop screen (which is connected but with the lid closed is not active). Then my two physical monitors plugged into my docking station are both running under the "DP-3" display port, as "DP3-2" and "DP3-3".. This means I have "HDMI-1", "HDMI-2", "DP3-1" all available to "plug something in". You don't actually need to physically plug something in to use these in VR though, so I can just activate one (or more) of them at my desired resolution and position it wherever I'd like it to appear when I enter VR. In my case I like to split my two physical monitors apart with a 3rd low res, wide monitor (1400x900) that makes working in VR easier. For example:
Real world:
/--------\ /--------\
| DP-3-2 | | DP-3-3 |
\--------/ \--------/

Virtual world:
/--------\ /--------\ /--------\
| DP-3-2 | | HDMI-1 | | DP-3-3 |
\--------/ \--------/ \--------/
To achieve this, I've written a shell script which will add the new display settings before starting up the Immersed Agent, and will then reset the settings when the process finishes:
#!/bin/sh
# If "1400x900" isn't already a mode known to xrandr, create it first using the
# modeline reported by `cvt 1400 900` (xrandr --newmode ..., then --addmode)
xrandr --addmode HDMI-1 1400x900
# Lay out the screens: the virtual HDMI-1 sits between the two physical monitors
xrandr --output DP-3-2 --pos 0x0 --output HDMI-1 --mode 1400x900 --right-of DP-3-2 --output DP-3-3 --right-of HDMI-1
# Run the Immersed agent (blocks until it exits)
~/.local/bin/Immersed/Immersed-x86_64.AppImage
# On exit, turn off the virtual monitor and restore the physical layout
xrandr --output HDMI-1 --off --output DP-3-3 --right-of DP-3-2

Easily Wrapping Standalone Components in Redux Provider

January 28, 2022

In order to set up a context when using React hooks related to a Redux store (e.g. useDispatch, useSelector) you need to have your component nested inside a "Provider" component (from the react-redux package). This isn't always possible, as not all applications are built as a single "app" with components nested under a single root. In my case I am using ReactJs.Net together with a CMS to allow the end user to define any combination of a number of pre-defined "components" on a page.

It turns out that you don't need all components to be nested inside the same "Provider" instance; as long as the "store" itself is a singleton, you can have many "Provider" instances on the page all sharing the same "store". I wanted an easy way to start wrapping my existing components in a "Provider" component without having to change too much about my non-Redux application structure. What I came up with was a simple higher-order component (HOC) as a function, so that I can simply wrap my exported component with a call to the HOC whenever I want it to have the Redux provider, e.g.
import React from 'react';
import { Provider } from 'react-redux';
import { store } from './store';

// higher-order component which wraps the given component in a react-redux
// Provider backed by the shared singleton store
export const withReduxStore = Component => ({ ...props }) =>
    (<Provider store={store}><Component {...props} /></Provider>);
This assumes you have a singleton store, for example:
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './ducks/rootReducer'

const store = createStore(rootReducer, applyMiddleware(thunk));

export { store };
Now, updating a component that previously didn't have access to the store context to give it that access looks like this:
import React from 'react';
import { useDispatch } from 'react-redux';
import { withReduxStore } from './state/withReduxStore.jsx';

const MyExampleComponent = (props) => {
    const dispatch = useDispatch();

    return <>
        <button onClick={() => dispatch({ type: "hello/world" })} type="button">Dispatch Something</button>
    </>
}

export default withReduxStore(MyExampleComponent); // <-- simply wrap it with a call to "withReduxStore"

Combining Two Objects in C# Similar to JavaScript Spread Operator

September 16, 2021

I wouldn't necessarily recommend doing something like this in production code; however, I find it useful when writing SpecFlow tests - I want the Gherkin to call out a few key properties of a class, then I want to generate a "valid" instance (one that will pass any validation) but using the test data supplied. Imagine the following scenario:
public class Customer
{
   public string Firstname { get; set; }
   public string Surname { get; set; }
   public string EmailAddress { get; set; }       // validation states that this must be a valid email address
}

// imagine some kind of "valid instance builder" used in testing
public static class TestCustomerBuilder
{
	private static readonly Fixture Fixture = new();

	public static Customer AnyValidInstance()
	{
		return Fixture.Build<Customer>()
			.With(c => c.EmailAddress, Fixture.Create<MailAddress>().Address) // make sure this passes validation by default
			.Create();
	}
}
Now imagine you're writing some Gherkin that doesn't care about email - you're just testing something to do with Firstname and Surname, so you might write:
Given the create customer request contains the below details
| Firstname | Surname |
| Hello     | World   |
When the create customer endpoint is called
Then a new customer is created with the following details
| Firstname | Surname |
| Hello     | World   |
It's a contrived example, but you should see the point. When it comes to implementing the step definitions, I like to use the built-in helpers from SpecFlow rather than "magic strings" as much as possible (as it makes the steps more re-usable), so how about the below:
[Given("the create customer request contains the below details")]
public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
{
   _testContext.CustomerRequest = table.CreateInstance<Customer>();
}
The problem with the above is that the created instance won't be valid, on account of it having no email address. You could code around this by manually setting only certain properties, but that reintroduces the re-usability problem. Enter the "model combiner", which is designed to copy all non-null properties from a source instance to a destination instance, e.g.:
[Given("the create customer request contains the below details")]
public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
{
   var testDataInstance = table.CreateInstance<Customer>();
   var validInstance = TestCustomerBuilder.AnyValidInstance();

   ModelCombiner.Combine(testDataInstance, validInstance);

   _testContext.CustomerRequest = validInstance;
}
Now the request contains a "valid" customer which also has our specific data taken from the Gherkin. The model combiner class (built with AutoMapper) looks as below, based on an idea seen here: https://stackoverflow.com/questions/8702603/merging-two-objects-in-c-sharp
public static class ModelCombiner
{
	private static readonly HashSet<Type> SupportedTypes = new();

	private static Mapper Mapper { get; } = new(new MapperConfiguration(expression =>
	{
		Setup<Customer>(expression);
		Setup<SomeOtherType>(expression);
	}));

	public static T Combine<T>(T source, T destination)
	{
		if (!SupportedTypes.Contains(typeof(T)))
			throw new InvalidOperationException(
				$"Cannot combine unsupported type {typeof(T).FullName}. Please add it to the setup in {nameof(ModelCombiner)}");

		return Mapper.Map(source, destination);
	}

	private static void Setup<T>(IProfileExpression expression)
	{
		SupportedTypes.Add(typeof(T));

		expression.CreateMap<T, T>()
			.ForAllMembers(opts => opts
				.Condition((_, _, srcMember) => srcMember != null));
	}
}
Another option I found online that looks worth exploring: https://github.com/kfinley/TypeMerger

Using Fluent Assertions Inside of a Moq Verify

September 06, 2021

FluentAssertions adds many helpful ways of comparing data in order to check for "equality" beyond a simple direct comparison (for example, checking for equivalence across types or collections, automatically converting types, ignoring elements of types, using fuzzy matching for dates, and more). Making a "fluent assertion" on something will automatically integrate with your test framework, registering a failed test if something doesn't quite match, e.g. to compare an object excluding the DateCreated element:
actual.Should()
	.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated));
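A couple of other illustrative checks (the variable names here are placeholders, and both assume a reasonably recent FluentAssertions version):
// collections: equivalent items, regardless of order
actualCustomers.Should().BeEquivalentTo(expectedCustomers);

// fuzzy date matching: "close enough" counts as a match
actualDate.Should().BeCloseTo(expectedDate, TimeSpan.FromSeconds(1));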
However, sometimes the "actual" value you want to make the assertion on is only available as part of a Moq Verify statement, which only supports matching based on a boolean return type, e.g.
myMock.Verify(m => 
	m.Method(It.Is<MyData>(actual => 
		actual == expected)));
As you can see above, replacing "==" with a "Fluent" assertion is not possible out of the box. However, there is a trick you can use, by setting up the below helper method:
public static class FluentVerifier
{
	public static bool VerifyFluentAssertion(Action assertion)
	{
		using (var assertionScope = new AssertionScope())
		{
			assertion();

			// Discard() returns any failure messages instead of throwing,
			// so an empty result means the assertion passed
			return !assertionScope.Discard().Any();
		}
	}
}
Now you can nest the Fluent Assertion inside of the Verify statement as follows:
myMock.Verify(m => 
	m.Method(It.Is<MyData>(actual => 
		FluentVerifier.VerifyFluentAssertion(() => 
			actual.Should()
			.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated), "")))));
Note, however, that since lambda expressions can't contain calls to methods with optional parameters, you must explicitly specify the "becauseArgs" parameter of the "BeEquivalentTo" method (the empty string in the example above).

Test Data Definition Framework for .NET

August 20, 2021

I have just released the initial version of my new open source project, which is designed to allow one set of integration tests to run against in-memory fakes and against a "real" repository using only pre-compiler directives. This is useful when you have a suite of SpecFlow tests that you want to quickly run locally whilst developing (e.g. with NCrunch) and on a cloud build server (e.g. GitHub) where you don't always want to hit "real" data resources, but want the flexibility of occasionally switching to "real data mode" and running the same set of tests against a real MongoDB or SQL Server (for example).

The initial version has one backing store implementation, for MongoDB, but I'm hoping that by making this open source other backing stores can be added over time. You can read more about it on the GitHub page here: https://github.com/craigwardman/TestDataDefinitionFramework
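To illustrate the general idea of the pre-compiler directive switch (this is just a sketch of the pattern - the names below are hypothetical, not the framework's actual API):
// hypothetical types for illustration only
public interface ITestDataStore
{
	void Insert<T>(string collection, T item);
}

public class InMemoryTestDataStore : ITestDataStore
{
	private readonly Dictionary<string, List<object>> _collections = new();

	public void Insert<T>(string collection, T item)
	{
		if (!_collections.TryGetValue(collection, out var items))
			_collections[collection] = items = new List<object>();

		items.Add(item);
	}
}

public static class TestDataStoreFactory
{
	public static ITestDataStore Create()
	{
#if REAL_DATA
		// only compiled in when the REAL_DATA symbol is defined, e.g. via
		// <DefineConstants>REAL_DATA</DefineConstants> in the test csproj
		return new MongoTestDataStore("mongodb://localhost:27017"); // hypothetical Mongo-backed implementation
#else
		// default: a fast in-memory fake, ideal for NCrunch/build-server runs
		return new InMemoryTestDataStore();
#endif
	}
}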

Run Windows Command from WSL Bash

August 18, 2021

Since the advent of WSL, I spend most of my time using Bash to perform my CLI tasks in Windows. Sometimes, however, I'd like to run a command as though I were running it in CMD (I'm not talking about wslview though). The example that springs to mind is starting a dotnet web application, where I'd like it to bind on the Windows IP/port, not on the WSL one. So although I could run "dotnet run" from Bash, I actually want to run "dotnet run" from Windows (with minimal effort, of course). For this I've created a Bash alias called "winrun": it uses cmd.exe to open a new CMD window ("start cmd") which runs the given command and stays open afterwards ("/k"). It looks as follows:
alias winrun='cmd.exe /c start cmd /k'
So now if I'm in Bash and want to run a dotnet project I just type:
winrun dotnet run

Cloning a Visual Studio Project With A New Name

July 21, 2021

Often when I'm working on solutions, I have a lot of projects that all start from pretty much the same place as another. For example, a "unit test project targeting .NET 5, with FluentAssertions, Moq and AutoFixture" - but also sometimes projects with a similar class structure, like a "repository that talks to MongoDB, with an ISomethingConfig", or a "Lambda with DI container". Of course I could create Visual Studio templates for these, but templates often fall out of date, or I'd still end up needing to copy in some specific classes from a recently used project that I want to replicate. In a lot of cases I found myself copying and pasting an entire project folder, renaming the files accordingly, then going through the csproj/cs files to replace the namespace and/or class names to target my "new" project. To automate this I wrote a Bash script, which might not be the most elegant or robust script in the world, but it does the job, so I thought I'd share it here:
# Script to create a new Visual Studio project folder from an existing one
# Usage (source it from your solution directory): . ./csprojclone.sh Your.Namespaced.Source Your.Namespaced.Destination
# (the script uses "return", so it is intended to be sourced rather than executed)

if [ $# -lt 2 ]
	then
		echo "Please pass source and destination namespace arguments"
		return 1
fi
source=${1%%/}
dest=${2%%/}

if [[ ! -d $source ]]
	then
		echo "Source directory does not exist or isn't a folder - make sure you are in the correct working directory"
		return 1
fi

if [[ -e $dest ]]
	then
		echo "Destination folder already exists in the working directory"
		return 1
fi

oldproject=${source%%.UnitTests}
oldproject=${oldproject##*.}
newproject=${dest%%.UnitTests}
newproject=${newproject##*.}

# copy the project folder, then remove any build output
cp -r $source $dest
rm -r $dest/bin/
rm -r $dest/obj/
# rename files/folders that contain the old namespace or project name
find $dest -iname "*$source*" -exec rename -v "s/$source/$dest/" {} \;
find $dest -iname "*$oldproject*" -exec rename -v "s/$oldproject/$newproject/" {} \;
# search and replace the old namespace/project name inside all files
find $dest -type f -print0 | xargs -0 sed -i "s/$source/$dest/g"
find $dest -type f -print0 | xargs -0 sed -i "s/$oldproject/$newproject/g"
As per the script, the usage is to call it from within your solution directory, passing in the name of the existing project folder and then a destination one. The script will clone the source, rename the files, search and replace the namespace change, and update any classes that had the specific project name in them. If you're using it on a unit tests project, it will strip off ".UnitTests" from the path - so if that's not your naming convention, feel free to edit those bits. Here's an example of how it might work (with PWD = /path/to/SolutionFolder):
    My.Namespace.UserRepository 
    My.Namespace.UserRepository/My.Namespace.UserRepository.csproj
    My.Namespace.UserRepository/MongoStuff/...
    My.Namespace.UserRepository/IUserRepositoryConfig.cs
    etc..
. /path/to/script/csprojclone.sh My.Namespace.UserRepository My.Namespace.OrderRepository
Will create:
    My.Namespace.OrderRepository
    My.Namespace.OrderRepository/My.Namespace.OrderRepository.csproj
    My.Namespace.OrderRepository/MongoStuff/...
    My.Namespace.OrderRepository/IOrderRepositoryConfig.cs
With all namespaces also updated.

Easy Way To Test All Constructor Null Argument Checks

February 09, 2021

Often C# classes will have several dependencies passed into the constructor, and some (maybe all) of these will be mandatory for the class to function. In this case, ordinarily you'd add a null argument check in the ctor and throw an "ArgumentNullException" if something is null. This is pretty boilerplate and is usually auto-generated code, but it still needs to be covered in your unit tests, to assert that indeed all mandatory dependencies have been checked (and conversely, that all optional dependencies can indeed be null). This can be quite tedious and repetitive, and later changing the signature of the ctor can result in many tests that require fixing up. To make these tests smaller and more concise I've come up with a new strategy using test cases and nullable mocks, as follows:
// SomeClass.cs
public class SomeClass
{
	private readonly IDependency1 _dependency1;
	private readonly IDependency2 _dependency2;
	private readonly IDependency3 _dependency3;

	public SomeClass(IDependency1 dependency1, IDependency2 dependency2, IDependency3 dependency3)
	{
		_dependency1 = dependency1 ?? throw new ArgumentNullException(nameof(dependency1));
		_dependency2 = dependency2 ?? throw new ArgumentNullException(nameof(dependency2));
		_dependency3 = dependency3 ?? throw new ArgumentNullException(nameof(dependency3));
	}
}

// SomeClass.tests.cs
[TestFixture]
public class SomeClassTests
{
	private Mock<IDependency1> _dependency1Mock;
	private Mock<IDependency2> _dependency2Mock;
	private Mock<IDependency3> _dependency3Mock;

	[SetUp]
	public void SetUp()
	{
		_dependency1Mock = new Mock<IDependency1>();
		_dependency2Mock = new Mock<IDependency2>();
		_dependency3Mock = new Mock<IDependency3>();
	}

	[TestCase("dependency1")]
	[TestCase("dependency2")]
	[TestCase("dependency3")]
	public void Ctor_RequiredDependencyNull_ThrowsException(string dependency)
	{
		var setup = new Dictionary<string, Action>
		{
			{"dependency1", () => _dependency1Mock = null },
			{"dependency2", () => _dependency2Mock = null },
			{"dependency3", () => _dependency3Mock = null }
		};
		setup[dependency]();

		Func<SomeClass> act = GetDefaultSut;

		act.Should().Throw<ArgumentNullException>().And.ParamName.Should().Be(dependency);
	}

	private SomeClass GetDefaultSut()
	{
		return new SomeClass(_dependency1Mock?.Object, _dependency2Mock?.Object, _dependency3Mock?.Object);
	}
}
My above example is for NUnit, using FluentAssertions and Moq, but it can be converted to your testing tools of choice. An even easier option, when all your dependencies are required, is to use the "GuardClauseAssertion" from "AutoFixture.Idioms":
private readonly Fixture _fixture;

public SomeClassTests()
{
	_fixture = new Fixture();
	_fixture.Customize(new AutoMoqCustomization());
}

[Test]
public void Ctor_NullDependency_Throws()
{
	new GuardClauseAssertion(_fixture)
		.Verify(typeof(SomeClass).GetConstructors());
}

Recommendations For Integration Testing ASP.NET Core APIs Using SpecFlow

December 10, 2020

Before I begin with my recommendations, it's probably worth defining what I mean by "integration tests", as there is some ambiguity in this term. Given that I'm interested in running these tests as part of the development/build process, and the purpose is to prove that "this particular microservice behaves correctly against these external dependency contracts", I'm testing the integration of the "units" of code within this system, not testing the integration of this system with the external dependencies.

I find that writing these types of "integration tests" using SpecFlow is a great way to de-couple your behaviour from your code structure, making TDD more realistic and also meaning that after a re-factor (which often results in re-factoring the unit tests) you can confirm there are no breaking changes. It also allows you to involve your QA/BA in the process, by quantifying in plain English what scenarios you are catering for and how the system behaves for each.

It's worth noting that SpecFlow can also be used to automate your "full integration tests"; however, that's a little more complex to set up, as it usually involves spinning up SQL servers, Kafka instances, mocked external APIs etc., and those tests are too slow to run on build, whereas the integration tests I will demonstrate below can run on build as quickly as any other unit test.

To create a SpecFlow project for testing an API, add an NUnit test project, install the SpecFlow.NUnit.Runners and Microsoft.AspNetCore.Mvc.Testing NuGet packages into that test project, add a reference to the Api project and then begin creating your tests. My recommendations to consider are below:
  • Create a "WebTestFixture" that inherits from "WebApplicationFactory<Startup>"
    • Where "Startup" is your API Statup class
    • Take constructor params to capture shared class instances from BoDi (the SpecFlow DI container) - e.g. your mocks
    • Override the "ConfigureWebHost" method and use "builder.ConfigureTestServices" to replace any "real" dependencies with mocks defined in the test project
    • Also register any other class instances that you want to share between BoDi and the .NET DI container
  • Create a folder structure that allows you to consider the following genres of classes:
    • Infrastructure - e.g. SpecFlow hooks, Value Retrievers, Transformations etc. (basically the custom SpecFlow pipework)
    • TestDataProviders - with a subfolder for each high level dependency you are mocking (e.g. what would be a class library in the real implementation)
      • EachDataProvider - containing:
        • Mock Factory - create a class that will build your default mock for each interface (I prefer to use Moq)
        • StepDefinitions - All the SpecFlow step definitions for interacting with these mocks
      • Mocks Root Class - for easy injection of all your mocks into WebTestFixture and your step definitions
    • FolderPerController - the "tests" live in here, so assuming your controllers align with a sensible functional grouping, it makes sense to mirror that structure
      • Interactions - create a class which interacts with this controller via the "WebTestFixture.CreateClient()" HttpClient
      • Features - create a SpecFlow feature file per endpoint of the controller - in here, create the scenarios this endpoint supports
      • Context - any classes that represent the data context of the controller itself (such as the data you will post, or the response from the API)
      • StepDefinitions - All the SpecFlow step definitions for interacting with this API controller and the assertions of the features
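
As a rough sketch of the "WebTestFixture" described above (names like "TestMocks" and "SomeRepositoryMock" are placeholders for your own mock root class and dependencies; "WebApplicationFactory" comes from Microsoft.AspNetCore.Mvc.Testing and "ConfigureTestServices" from Microsoft.AspNetCore.TestHost):
public class WebTestFixture : WebApplicationFactory<Startup>
{
	private readonly TestMocks _mocks;

	// BoDi injects the shared mocks instance that the step definitions also use
	public WebTestFixture(TestMocks mocks)
	{
		_mocks = mocks ?? throw new ArgumentNullException(nameof(mocks));
	}

	protected override void ConfigureWebHost(IWebHostBuilder builder)
	{
		builder.ConfigureTestServices(services =>
		{
			// swap the "real" registrations for the test project's mocks
			services.AddSingleton(_mocks.SomeRepositoryMock.Object);
		});
	}
}
Your "Interactions" classes can then call the API via "WebTestFixture.CreateClient()", which returns an HttpClient wired to the in-memory test server.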

    This structure works well for me, as it keeps code specific to a controller or endpoint separate (making it easier to see what is involved with each moving part), while still allowing re-use of the steps which contrive data in your mocked repositories, with a clear separation that matches the structure of your project's class libraries. And of course, once you have defined the features/steps/data required to interact with all mocks and all controllers/endpoints, you can create a high-level folder of features that cut across several of them, if you have such scenarios to assert. For a strategy of preparing your test data that can be used both in-memory and against a "real" datastore, see http://www.craigwardman.com/Blogging/BlogEntry/test-data-definition-framework-for-net

Global Error Logging For HttpClient Calls In ASP.NET

September 22, 2020

When you have multiple downstream dependencies that are accessed via HttpClient in .NET, you want a simple way of logging all the error responses that are received from those calls. Often your application will react to a non-successful response by logging its own error, but this can sometimes miss the detail of what actually went wrong downstream. An easy way to capture all the non-successful outbound calls your application makes is to inject a custom delegating handler into all instances of HttpClient (via DI), which can inspect the return code and call out to your logger if necessary:

LoggingMessageHandler.cs:
public class LoggingMessageHandler : DelegatingHandler
{
	private readonly IExceptionLogger _exceptionLogger;

	public LoggingMessageHandler(IExceptionLogger exceptionLogger)
	{
		_exceptionLogger = exceptionLogger ?? throw new ArgumentNullException(nameof(exceptionLogger));
	}

	protected override async Task<HttpResponseMessage> SendAsync(
		HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
	{
		var response = await base.SendAsync(request, cancellationToken);
		if (!response.IsSuccessStatusCode)
		{
			var responseContent = await response.Content.ReadAsStringAsync();
			await _exceptionLogger.LogExceptionAsync(new HttpRequestException(
				$"Api call to {request.RequestUri} failed with status code {(int)response.StatusCode}. Response content: {responseContent}"));
		}
		return response;
	}
}
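For reference, a minimal shape for the "IExceptionLogger" dependency used above could be as simple as the below (an assumption for illustration; your own logging abstraction may differ):
public interface IExceptionLogger
{
	Task LogExceptionAsync(Exception exception);
}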
Your implementation of "IExceptionLogger" may vary, but this is your opportunity to write something to your logs/DB etc. To set this delegating handler on your HttpClient, register the handler with DI (delegating handlers are resolved from the container) and then attach it when configuring the client, as follows:
services.AddTransient<LoggingMessageHandler>();
services.AddHttpClient<ISomeClient, SomeClient>().AddHttpMessageHandler<LoggingMessageHandler>();
Since this class and its custom logic will typically live in your composition root, I'd recommend passing the builder delegate down to any class libraries you are building (where you choose to define the ServiceCollectionExtensions inside the class library). For example:

Startup.cs:
services.AddMyCustomLibrary(builder => builder.AddHttpMessageHandler<LoggingMessageHandler>());
CustomLibrary/ServiceCollectionExtensions.cs:
public static void AddMyCustomLibrary(this IServiceCollection services, Action<IHttpClientBuilder> clientBuilderDelegate = null)
{
	var someInnerClientBuilder = services.AddHttpClient<ISomeInnerClient, SomeInnerClient>();
	clientBuilderDelegate?.Invoke(someInnerClientBuilder);
}