Blog

Useful tidbits related to software development that I think might be of use or interest to everyone else (or to me when I forget what I did!)

Deploying to Vercel from the CLI

May 26, 2023

In order to make deployments to Vercel fully repeatable and automatable, I wanted the entire process to be encapsulated in the build process and driven from the CLI, rather than requiring some of the work to be done manually through the Vercel UI (e.g. setting up projects, adding sensible defaults for environment variables etc.). Some of the steps that I wanted to be able to automate are:
  1. Create the "Project" in Vercel
  2. Deploy an application (from within a monorepo)
  3. Configure the project as part of the deployment, to avoid having to configure it through the UI
  4. Configure the deployed application environment variables with sensible defaults from .env
  5. Layer on environment specific environment variables (similar to how Helm ".values" files work)
To accomplish this, I created a makefile which can be used in conjunction with some config files to perform the transformations and CLI operations that automate the deployment. The initial folder structure looks like this:
/                            <-- monorepo root
/apps/exampleApp/*           <-- Next.js application
/apps/exampleApp/.env        <-- Next.js application default variables
/build/deploy/makefile       <-- deployment commands
/build/deploy/dev.env.json   <-- development environment specific variables
/build/deploy/vercel.json    <-- Vercel project configuration
/build/deploy/token          <-- Vercel CLI token, this should be swapped in from secrets
/packages/*                  <-- other npm projects in the monorepo
The makefile contents look as follows:
.PHONY: create
.PHONY: deploy-dev

MAKEFILE_PATH:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
ROOT_PATH:=$(shell realpath "${MAKEFILE_PATH}/../..")

create:
	npx vercel project add YOUR_PROJECT_NAME --token "$(shell cat token)"

../../.vercel/project.json: 
	npx vercel link --cwd ${ROOT_PATH} -p YOUR_PROJECT_NAME --yes --token "$(shell cat token)"

../../.vercel/base-env.json: ../../apps/exampleApp/.env
	cat ../../apps/exampleApp/.env | \
	jq -Rn '{ "env": [inputs | select(length > 0 and (startswith("#") | not)) | capture("(?<key>.+?)=(?<value>.+)") | {(.key): .value }] | add }' \
	> ../../.vercel/base-env.json

../../.vercel/dev-local-config.json: vercel.json dev.env.json ../../.vercel/base-env.json
	jq -s '.[0] * .[1] * .[2]' vercel.json ../../.vercel/base-env.json dev.env.json \
	> ../../.vercel/dev-local-config.json

deploy-dev: ../../.vercel/project.json ../../.vercel/dev-local-config.json
	npx vercel deploy --cwd ${ROOT_PATH} --token "$(shell cat token)" --local-config ../../.vercel/dev-local-config.json

clean:
	rm -rf ../../.vercel
Essentially what you have is one command, "create", to create the remote project in Vercel, and one command, "deploy-dev", to deploy the application using the development variables. All the other files are used to generate a custom configuration for the deploy step. The other significant files are:
vercel.json - this is where you can configure the project settings of Vercel.
{
	"framework": "nextjs",
	"outputDirectory": "apps/exampleApp/.next",
	"env": {
		"EXAMPLE_SETTING": "some_value"
	}
}
dev.env.json - just the environment section for "dev" deployments, e.g.
{
    "env": {
        "EXAMPLE_SETTING_A": "dev.specific.value",
    }
}
The contents of your typical .env file might look like this:
EXAMPLE_SETTING_A="default.value"
EXAMPLE_SETTING_B="another one"
You will notice that the makefile also makes reference to several files in the .vercel folder. This folder is transient and is created by "vercel link" -- it isn't checked in to Git, but here's a description of what the files do:
/.vercel                        <-- created by "vercel link", this is not committed in Git
/.vercel/project.json           <-- created by "vercel link"
/.vercel/base-env.json          <-- sensible defaults, created from .env by the makefile, replicating whatever is in .env for the app
/.vercel/dev-local-config.json  <-- the combined configuration (project settings + defaults + dev variables) created by the makefile, to be used on the CLI
In the above example, the base-env.json would look like this:
{
    "env": {
        "EXAMPLE_SETTING_A": "default.value",
        "EXAMPLE_SETTING_B": "another one",
    }
}
The dev-local-config.json would look like:
{
	"framework": "nextjs",
	"outputDirectory": "apps/exampleApp/.next",
	"env": {
		"EXAMPLE_SETTING": "some_value",
                "EXAMPLE_SETTING_A": "dev.specific.value",
                "EXAMPLE_SETTING_B": "another one"
	}
}
So you can see that the final configuration sent to Vercel for the "deploy-dev" step configures the project as Next.js, sets the location of the build output and has a three-way combined "env" section from "vercel.json" + ".env" + "dev.env.json". With this starting point you could now add more environments simply by adding further "*.env.json" files and replicating the makefile steps to generate and use that config.
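For example, a hypothetical "prod" environment could be added with a prod.env.json file and a couple of extra makefile rules along these lines (a sketch only, not part of the setup above; "--prod" asks the Vercel CLI for a production deployment):
.PHONY: deploy-prod

../../.vercel/prod-local-config.json: vercel.json prod.env.json ../../.vercel/base-env.json
	jq -s '.[0] * .[1] * .[2]' vercel.json ../../.vercel/base-env.json prod.env.json \
	> ../../.vercel/prod-local-config.json

deploy-prod: ../../.vercel/project.json ../../.vercel/prod-local-config.json
	npx vercel deploy --prod --cwd ${ROOT_PATH} --token "$(shell cat token)" --local-config ../../.vercel/prod-local-config.json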

SSG, ISR and Environment Variables in Next.js and Docker

March 02, 2023

Containerising a Next.js application that uses SSG (where the content is different per environment) isn't currently a great experience. The Vercel docs suggest that you would simply build a container image targeting a certain environment, but that doesn't sit well with the usual approach to containerisation and the route-to-live process of many software teams. In this blog post I'll dig into the problem, with some suggestions on how you can solve it.

What is SSG?

SSG (static site generation) is a feature of Next.js that allows you to pre-compute the content of pages, based on content from a headless CMS, at BUILD TIME, so that your website doesn't need to communicate with the CMS on a per-request basis. This improves website performance, because pre-computed responses are ready to go, and reduces load on the CMS server. SSG can be broken down into two categories, based on whether the route is fixed or dynamic, e.g.
/page.tsx ← fixed route, can contain SSG generated content
/pages/[...page].tsx ← dynamic route (includes slug), build time SSG pages defined by getStaticPaths
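As a minimal sketch (the page, CMS URL and field names are hypothetical), a fixed-route SSG page looks something like this:
// pages/index.tsx
import type { GetStaticProps } from 'next';

type Props = { title: string };

// runs once at BUILD TIME: the CMS is called during "next build" and the
// result is baked into the pre-rendered page
export const getStaticProps: GetStaticProps<Props> = async () => {
  const res = await fetch('https://cms.example.com/api/home');
  const content = await res.json();
  return { props: { title: content.title } };
};

export default function Home({ title }: Props) {
  return <h1>{title}</h1>;
}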

What is ISR?

ISR (incremental static regeneration) is a feature of Next.js that, given an existing SSG cached page, will at RUN TIME, go and rebuild/refresh that cache with the latest content from the CMS, so that SSG pages do not become stale. This gives you the benefits of a “static” website, but still means the underlying content can be editable without rebuilding/redeploying the site.

What about environment variables?

Typically when you're building an app in JavaScript you use "process.env.XYZ" in the code so that the value is substituted with an environment variable. For code that runs on the server, the value is read by the Node process in real time. For code that runs on the client, the reference is replaced with a literal value at BUILD TIME by the compiler.
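To make that concrete, here is a small sketch (the variable names are made up; in Next.js specifically, only variables prefixed with NEXT_PUBLIC_ are exposed to client code):
// pages/api/info.ts - server side: the Node process reads the variable at
// request time, so it can change whenever the container is restarted
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(_req: NextApiRequest, res: NextApiResponse) {
  res.status(200).json({ cmsHost: process.env.CMS_HOST });
}

// components/ApiBanner.tsx - client side: the compiler replaces the
// expression below with a string literal when "next build" runs
export const ApiBanner = () => <p>API: {process.env.NEXT_PUBLIC_API_HOST}</p>;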

Sounds great, what’s the problem?

The problem stems from SSG and client-side environment variables being a “build time” computation. In order to build a Docker image that is “ready to start”, you’d need to:
  • Be happy that you’re building an image targeting a specific environment
    • A container image would not be able to be deployed to any other environment than the one it was created for
  • Be happy that you’re baking in a certain set of client-side environment variables
    • Changing environment variables and restarting the container image has no effect on client-side code, you’d need to rebuild from source.
  • Be happy that you’re baking in a “point-in-time” cache of the content
    • This would get more and more stale as time goes by (ISR kind of solves this issue but only when it kicks in (e.g. after 5 secs) and the delta would keep increasing)
  • Have connectivity to the target environment CMS API to get content during the build
    • In order to get the content at build time, you’d need network connectivity between your build agent and whatever private network is hosting your content/CMS server.
None of the above makes for a good 12-factor app, and it is not consistent with the usual approach to containerisation (where the same image can be configured and deployed many times in different ways).

What is the solution?

Environment Variables

For environment variables, luckily there is an easy solution known as "Runtime Configuration" - this essentially keeps the "process.env" parts of the code on the server, and the client side gets access to the config by calling a React hook and using the configuration object it returns.

SSG/ISR

ISR we get just by including "revalidate" in the getStaticProps return value. This means content is refreshed on the next request after every n seconds (configured per page); however, the "initial content" is still whatever was included at the SSG/build stage. You can also forcefully update the content cache using "on demand revalidation", by creating an API endpoint that calls the revalidate method (see "Data Fetching: Incremental Static Regeneration" in the Next.js docs).

For SSG pages that don't use dynamic routes there is no simple solution: having build time connectivity to the CMS and building images specific to a single environment is out of the question, so there is an alternative approach.
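As an illustration of the runtime configuration approach mentioned above, the sketch below uses Next.js's publicRuntimeConfig and getConfig (the apiHost setting is just an example):
// next.config.js - values are resolved on the server when the app starts,
// not inlined into the client bundle at build time
module.exports = {
  publicRuntimeConfig: {
    apiHost: process.env.API_HOST,
  },
};

// useApiHost.ts - a small hook-style wrapper that client code can call
import getConfig from 'next/config';

export function useApiHost(): string {
  const { publicRuntimeConfig } = getConfig();
  return publicRuntimeConfig.apiHost;
}
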
I just want SSG, even if it means deferring the "next build"

During the Dockerfile "build" phase, we could simply omit the "next build" step; a better option is to spin up a mocked "CMS" to allow us to perform the "next build", which at least ensures the code compiles and warms up the ".next" cache of transpiled output (even though the content cache is still empty). We can then set the image entrypoint to "npm run build && npm run start", so that wherever the container finds itself being spun up, it will rebuild the baked-in code files with the environment variables provided and will connect to the configured CMS to generate its cache. If you're using "readiness" checks in Kubernetes, the pod won't come online until the cache has been generated.

The pros of this approach: it's very simple to reason about, it follows the usual Docker paradigms with regards to configurability and deployability, and it takes care of client side environment variables without the need for runtime configuration (simpler DX). The cons: since the production image needs to build the source on startup, you need to include the source code files and the package "devDependencies" so that this can occur. It also means the container startup is slower than if it were pre-built.
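A rough sketch of what such an image could look like (the base image, paths and script names are assumptions rather than a definitive implementation):
# source and devDependencies are shipped in the image so that "next build"
# can run inside the container at startup
FROM node:18-alpine
WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .

EXPOSE 3000
# rebuild against the real environment variables and CMS, then serve
CMD ["sh", "-c", "npm run build && npm run start"]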

Alternative considerations:

A) Scrap SSG - use SSR + CDN caching

Since environment variables are easily solved through runtime configuration, we could solve that problem that way and replace SSG with SSR plus CDN caching.

Pros: removes the requirement for the "build on startup", which makes the container images smaller and faster to boot.
Cons: relies on external CDN tooling, which would require a different approach for warming up and refreshing caches, and it's not ideal that we must forfeit SSG.

B) Vercel could offer "runtime only SSG" (basically ISR but with an empty starting point)

If Vercel ever create a feature whereby SSG can become a "run time" rather than "build time" operation, then we should switch to using that, in combination with runtime configuration for environment variables.

Pros: we could use a multistage Dockerfile to split "build time" from "run time" dependencies, so the container image would be smaller, and it would be more 12FA compliant as the build artifact is compiled code only.
Cons: still has a slower "fully ready and cached" startup time when compared to build time SSG, due to SSG/ISR only kicking in after the container has started (although with file level caching on a network share this would only be applicable to the first container instance).

Footnote on option B: this is already possible for "dynamic routes" (i.e. routes that use a slug and would export getStaticPaths). In this case you can return an empty array of paths and fallback mode "blocking", so that during the build there are no SSG pages found, but at runtime any URL requested acts like SSR on the first request and is then cached. You can populate the cache by triggering "on demand revalidation" at runtime (using the actual paths you'd like to generate), by calling an API endpoint you have created for this purpose, as sketched below. This is hinted at in the Next.js docs ("Data Fetching: getStaticPaths").

NB. With all of the above, when using SSG/ISR, if multiple instances of the site are running and it's absolutely vital that the content is the same across all instances, then you should use a network share with file caching, as noted in the Next.js docs ("Data Fetching: Incremental Static Regeneration").
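A minimal sketch of the dynamic-route trick plus an on-demand revalidation endpoint might look like this (the route names, secret handling and CMS call are illustrative only; res.revalidate requires Next.js 12.2+):
// pages/[...page].tsx - no paths are generated at build time; each URL is
// rendered like SSR on first request and then cached
import type { GetStaticPaths, GetStaticProps } from 'next';

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],
  fallback: 'blocking',
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const slug = (params?.page as string[] | undefined)?.join('/') ?? '';
  const res = await fetch(`${process.env.CMS_HOST}/api/pages/${slug}`);
  if (!res.ok) return { notFound: true, revalidate: 5 };
  return { props: { content: await res.json() }, revalidate: 5 };
};

export default function Page({ content }: { content: { title: string } }) {
  return <h1>{content.title}</h1>;
}

// pages/api/revalidate.ts - call this after startup (or from the CMS) to
// populate/refresh the cache for known paths
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.query.secret !== process.env.REVALIDATE_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }
  await res.revalidate(`/${req.query.path}`);
  return res.json({ revalidated: true });
}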

Conclusion

In conclusion, if you want to use Next.js with Docker:
  • No SSG? Use "runtime configuration" for environment variables and follow the usual multi-stage build approach in the Dockerfile
  • SSG for dynamic paths only? Use "runtime configuration" for environment variables, use "empty paths, fallback blocking" trick to skip SSG on build, then use "on demand revalidation" after the container starts to populate the cache
  • SSG for fixed routes? Embed a "build" step into the container startup
As mentioned, if Vercel release a feature to enable "run time only SSG" this would become the best option for all.

Writing an NPM startup script in TypeScript to support complex scenarios

December 19, 2022

NPM scripts are often used to abbreviate longer commands with many arguments or to sequence multiple commands that are required to compile or run a Node based system. Sometimes they can get complex: if you have multiple moving parts to consider, such as conditionally setting environment variables, conditional compilation steps or multiple sequential and parallel stages of the build. This is especially true if you're trying to write a startup script for automation tests, which probably need to spin up mocked APIs, build and start the system under test, wait for startup to complete, then execute tests. It is equally useful when you have multiple configurations of the application and its supporting local development services (such as running a mock, changing an environment flag etc.).

This can be achieved using daisy-chained NPM scripts and helper packages such as "npm-run-all", "concurrently" and "start-server-and-test", but I found that this doesn't scale well if you have multiple options for developers to choose from when running the app. A contrived example of some options you might want to give developers:
  • run in "development" or "production" mode (usually determined by running either "npm run dev" or "npm run build/start")
  • setting environment variables based on the mode (e.g. you might have used "cross-env-shell" in your npm script)
  • start a mocked api, or don't (maybe you'd spin this up asynchronously using "npm-run-all" or "concurrently")
  • build for "flavour 1" or "flavour 2" of the code (say, for example, you can choose whether to compile for REST or GraphQL at build time)
You might also have some automation tests that need to:
  • start a mocked api (and wait for it to load)
  • build and run the system under test in production mode (and wait for it to load)
  • run the tests (in local or browserstack mode)
  • kill all the child processes
Aside from the complexities of describing all of the above in NPM scripts, it gets very repetitive with each variation. Even with the limited choices described above you get the Cartesian product of all combinations, represented as scripts, e.g.:
  • "development mode", "with mock", "GraphQL"
  • "development mode", "with mock", "REST"
  • "development mode", "no mock", "GraphQL"
  • "development mode", "no mock", "REST"
  • "production mode", "with mock", "GraphQL"
  • "production mode", "with mock", "REST"
  • "production mode", "no mock", "GraphQL"
  • "production mode", "no mock", "REST"
You can remove one level of complexity by using NPM configuration settings to switch between modes, which live in your .npmrc file, such as:
mock-api='true'
api-mode='REST'
use-browserstack='false'
And then only having the usual 3 scripts in your package.json (for example using Next.js) that take the config settings into account: "dev", "build" and "start".

By using settings in the .npmrc file we get all the power of NPM configuration, which means by default it will use the values defined in .npmrc, but these can be overridden with environment variables (so maybe you'd set different defaults in your Dockerfile or build pipeline than what local devs might use), or with CLI arguments (so maybe you'd do this when spinning up a system from an automation test suite).

The next complexity to solve is how to interpret the NPM configuration settings, such that the right build steps are executed, sequentially or in parallel accordingly. This is where I decided that NPM scripts still weren't the best choice and it would be easier to write that logic as TypeScript (and have npm simply execute the script). The below example shows how this can work for the automation test scenario, making use of a "test-runner" script and re-using the "build" and "start" scripts from the system under test.

package.json snippet:
"test:automation": "cross-env-shell NODE_ENV='production' \"ts-node --project ./tsconfig.commonjs.json test-runner.ts\"",
tsconfig snippet:
"compilerOptions": {
	"module": "commonjs"
}
test-runner.ts:
#!/usr/bin/env ts-node
import { subProcess, subProcessSync } from 'subspawn';

// example of reading NPM config from within a script
let serverHost = 'localhost';
if (process.env.npm_config_use_browserstack === 'true') {
  serverHost = 'bs-local.com';
}

// example of overriding NPM config from within a script (to disable the system's built-in mock api when its scripts run)
process.env.npm_config_mock_api = 'false';

// example of setting general environment variables used by your application (override api to point at the mock)
process.env.NEXT_PUBLIC_API_HOST = `http://${serverHost}:5038`;

// example of spinning up background services and waiting for them to load
subProcess('automation-tests', 'npm run start-wiremock', true);
subProcessSync('npx wait-on tcp:5038', false);

// example of re-using scripts that exists for spinning up the system
process.chdir('../../src');

if (process.env.npm_config_skip_build !== 'true') {
  process.env.PUBLIC_URL = `http://${serverHost}:3000`;

  require('../../src/build'); // pull in the build script for the SUT
}
// start the SUT
require('../../src/start'); // pull in the start script for the SUT
process.chdir('../tests/integration');

// begin the test execution
subProcessSync('npm run execute-tests', true);

// exiting the process will also kill all the background processes
process.exit(0);

export {};

You'll notice the use of "npx wait-on" which is a handy package for testing when a dependency has become available. You'll also notice the use of "subspawn" which is an NPM package I created specifically for this use case to address the complexities of spawning, killing and integrating the stdout of child processes in Node in a cross-platform way.

Proxying clientside API traffic from deployed website to local machine

November 08, 2022

Recently I ran into a situation where I wanted to proxy the calls made by a client application (i.e. JavaScript AJAX calls) for a web application that was hosted on another machine. So the configuration looked something like below:
https://www.mywebsite.com -> user machine (javascript) -> https://api.mywebsite.com/api-1/endpoint
I didn't want to run the "www.mywebsite.com" code on my machine; I wanted to run the deployed website in my browser, but with all calls to "api.mywebsite.com/api-1" routed to my local development environment so I could either debug or mock the API responses as I wanted. The solution consisted of 3 basic elements:
  1. An Nginx reverse proxy running in Docker on my machine
  2. Self signed SSL certificates for TLS termination in Nginx
  3. Running Chrome with custom host resolver rules (this could also be done in /etc/hosts but I only wanted a temporary solution)
If your API calls don't use HTTPS then you don't need the TLS termination, but in my case I do need it so I created some self-signed certificates that I will later trust within Chrome:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout nginx-selfsigned.key -out nginx-selfsigned.crt
Now create a reverse-proxy Nginx configuration file, like below:
server {
    listen 0.0.0.0:80;
    listen 0.0.0.0:443 ssl;
    server_name host.docker.internal;
    ssl_certificate       /etc/ssl/certs/server.crt;
    ssl_certificate_key   /etc/ssl/certs/server.key;
    location /api-1/ {
        proxy_pass http://host.docker.internal:5001/;
    }
}
This essentially routes the traffic from "https://localhost/api-1" to "http://host-machine:5001/", which is where I can run the development mode API. With those things in place, whenever I want to run the deployed website against my local machine APIs, I can use the below commands:
docker run --rm \
-p 80:80 -p 443:443 \
--name nginx-reverse-proxy \
--add-host=host.docker.internal:host-gateway \
-v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf \
-v $(pwd)/nginx-selfsigned.crt:/etc/ssl/certs/server.crt \
-v $(pwd)/nginx-selfsigned.key:/etc/ssl/certs/server.key \
nginx &
google-chrome https://www.mywebsite.com/ --host-resolver-rules="MAP api.mywebsite.com 127.0.0.1"
The first time you run it, Chrome won't trust your SSL certs, but if you open an API URL in a new tab and manually trust it, then the clientside calls will work :)

Creating additional monitors in Immersed for Linux

July 19, 2022

Immersed is an application that allows you to connect to your computer from within Virtual Reality and use the full power of your Windows/Mac/Linux machine to run apps etc., but using virtual screens that are in a VR world. This can be useful if you want to get away from the home office environment and into a world where you can fully concentrate on the task at hand, or if you're travelling and can't take your usual multi-screen physical setup with you.

The Windows and Mac versions of Immersed allow you to create additional "virtual" monitors, so that when you enter VR you can see not only your physical displays but one or more "virtual" displays too. Unfortunately the Linux version of the desktop agent does not yet support this feature, so if you have a 2 screen setup in the real world, then you'll see 2 screens in VR. However, there is a way of adding these screens in Linux using a tool called "xrandr" (works with X11).

I originally tried the methods documented on the virtual-display-linux GitHub page, however this doesn't work when using the Cinnamon desktop on Linux Mint, as the additional virtual screen causes Cinnamon to crash on startup. It actually turns out that you don't need to create a virtual screen as long as you have some unused graphics ports on your machine. For example, on my machine running xrandr -q shows:
Screen 0: minimum 320 x 200, current 3200 x 1200, maximum 16384 x 16384
eDP-1 connected (normal left inverted right x axis y axis)
   1920x1080     60.02 +  60.01    59.97    59.96    59.93    48.00  
   ...
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
HDMI-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
HDMI-3 disconnected (normal left inverted right x axis y axis)
DP-3-1 disconnected (normal left inverted right x axis y axis)
DP-3-2 connected primary 1600x1200+0+0 (normal left inverted right x axis y axis) 367mm x 275mm
   1600x1200     60.00*+
   ...
DP-3-3 connected 1600x1200+1600+0 (normal left inverted right x axis y axis) 367mm x 275mm
   1600x1200     60.00*+
   ...
The "eDP-1" adapter is my physical laptop screen (which is connected but with the lid closed is not active). Then my two physical monitors plugged into my docking station are both running under the "DP-3" display port, as "DP3-2" and "DP3-3".. This means I have "HDMI-1", "HDMI-2", "DP3-1" all available to "plug something in". You don't actually need to physically plug something in to use these in VR though, so I can just activate one (or more) of them at my desired resolution and position it wherever I'd like it to appear when I enter VR. In my case I like to split my two physical monitors apart with a 3rd low res, wide monitor (1400x900) that makes working in VR easier. For example:
Real world:
/--------\ /--------\
| DP-3-2 | | DP-3-3 |
\--------/ \--------/

Virtual world:
/--------\ /--------\ /--------\
| DP-3-2 | | HDMI-1 | | DP-3-3 |
\--------/ \--------/ \--------/
To achieve this, I've written a shell script which will add the new display settings before starting up the Immersed Agent, and will then reset the settings when the process finishes:
#!/bin/sh
xrandr --addmode HDMI-1 1400x900
xrandr --output DP-3-2 --pos 0x0 --output HDMI-1 --mode 1400x900 --right-of DP-3-2 --output DP-3-3 --right-of HDMI-1
~/.local/bin/Immersed/Immersed-x86_64.AppImage
xrandr --output HDMI-1 --off --output DP-3-3 --right-of DP-3-2

Easily Wrapping Standalone Components in Redux Provider

January 28, 2022

In order to set up a context when using React hooks related to a Redux store (e.g. useDispatch, useSelector) you need to have your component nested inside of a "Provider" component (from the react-redux package). This isn't always possible, as not all applications are built as a single "app" with components nested under a single root. In my case I am using ReactJs.Net together with a CMS to allow the end user to define any combination of a number of pre-defined "components" on a page.

It turns out that you don't need all components to be nested inside the same "Provider" instance; as long as the "store" itself is a singleton, you can have many "Provider" instances on the page all sharing the same "store". I wanted an easy way to start wrapping my existing components in a "Provider" component without having to change too much about my non-Redux application structure. What I came up with was a simple higher-order component, written as a function, which I use to wrap an exported component whenever I want it to have access to the Redux provider, e.g.
import React from 'react';
import { Provider } from 'react-redux';
import { store } from './store';

export const withReduxStore = Component => ({ ...props }) =>
    (<Provider store={store}><Component {...props} /></Provider>)
This assumes you have a singleton store, for example:
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './ducks/rootReducer'

const store = createStore(rootReducer, applyMiddleware(thunk));

export { store };
And now to update a component that previously didn't have access to the store context and give it access:
import React from 'react';
import { useDispatch } from 'react-redux';
import { withReduxStore } from './state/withReduxStore.jsx';

const MyExampleComponent = (props) => {
    const dispatch = useDispatch();

    return <>
        <button onClick={() => dispatch({hello: "world"})} type="button">Dispatch Something</button>
    </>
}

export default withReduxStore(MyExampleComponent); // <-- simply wrap it with a call to "withReduxStore"

Combining Two Objects in C# Similar to JavaScript Spread Operator

September 16, 2021

I wouldn't necessarily recommend doing something like this in production code, however I find it useful when writing SpecFlow tests - I want the Gherkin to call out a few key properties of a class, then I want to generate a "valid" instance (to pass any validation) but using the test data supplied. Imagine the following scenario:
public class Customer
{
   public string Firstname { get; set; }
   public string Surname { get; set; }
   public string EmailAddress { get; set; }       // validation states that this must be a valid email address
}

// imagine some kind of "valid instance builder" used in testing
public static class TestCustomerBuilder
{
    private static readonly Fixture Fixture = new();

    public static Customer AnyValidInstance()
    {
        return Fixture.Build<Customer>()
            .With(c => c.EmailAddress, Fixture.Create<MailAddress>().Address) // make sure this passes validation by default
            .Create();
    }
}
Now imagine you're writing some Gherkin that doesn't care about email - you're just testing something to do with Firstname and Surname, so you might write:
Given the create customer request contains the below details
| Firstname | Surname |
| Hello     | World   |
When the create customer endpoint is called
Then a new customer is created with the following details
| Firstname | Surname |
| Hello     | World   |
It's a contrived example, but you should see the point. When it comes to implementing the step definitions, I like to use the built-in helpers from SpecFlow rather than "magic strings" as much as possible (as it makes the steps more re-usable), so how about the below:
[Given("the create customer request contains the below details")]
public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
{
   _testContext.CustomerRequest = table.CreateInstance<Customer>();
}
The problem with the above is the created instance won't be valid, on account of it having no email address. You could code around this by manually only setting certain properties but that introduces the re-usability problem again. Enter the "model combiner" which is designed to copy all non-null properties from a source instance to a destination instance, e.g.:
[Given("the create customer request contains the below details")]
public void GivenTheCreateCustomerRequestContainsTheBelowDetails(Table table)
{
   var testDataInstance  = table.CreateInstance<Customer>();
   var validInstance = TestCustomerBuilder.AnyValidInstance();

   ModelCombiner.Combine(testDataInstance, validInstance);

   _testContext.CustomerRequest = validInstance;
}
Now the request contains a "valid" customer but also has our specific data taken from the Gherkin. The model combiner class looks as below (based on an idea I saw here: https://stackoverflow.com/questions/8702603/merging-two-objects-in-c-sharp)
public static class ModelCombiner
{
	private static readonly HashSet<Type> SupportedTypes = new();

	private static Mapper Mapper { get; } = new(new MapperConfiguration(expression =>
	{
		Setup<Customer>(expression);
		Setup<SomeOtherType>(expression);
	}));

	public static T Combine<T>(T source, T destination)
	{
		if (!SupportedTypes.Contains(typeof(T)))
			throw new InvalidOperationException(
				$"Cannot combined unsupported type {typeof(T).FullName}. Please add it to the setup in {nameof(ModelCombiner)}");

		return Mapper.Map(source, destination);
	}

	private static void Setup<T>(IProfileExpression expression)
	{
		SupportedTypes.Add(typeof(T));

		expression.CreateMap<T, T>()
			.ForAllMembers(opts => opts
				.Condition((_, _, srcMember) => srcMember != null));
	}
}
Another option I found online that looks worth checking out: https://github.com/kfinley/TypeMerger

Using Fluent Assertions Inside of a Moq Verify

September 06, 2021

FluentAssertions adds many helpful ways of comparing data in order to check for "equality" beyond a simple direct comparison (for example checking for equivalence across types, across collections, automatically converting types, ignoring members, using fuzzy matching for dates and more). Making a "fluent assertion" on something will automatically integrate with your test framework, registering a failed test if something doesn't quite match. For example, to compare an object excluding the DateCreated member:
actual.Should()
	.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated));
However, sometimes the "actual" value you want to make the assertion on is only available as part of a Moq Verify statement, which only supports matching based on a boolean return type. e.g.
myMock.Verify(m => 
	m.Method(It.Is<MyData>(actual => 
		actual == expected)));
As you can see above, replacing "==" with a "Fluent" assertion is not possible out of the box. However there is a trick you can use by setting up the below helper method:
public static class FluentVerifier
{
	public static bool VerifyFluentAssertion(Action assertion)
	{
		using (var assertionScope = new AssertionScope())
		{
			assertion();

			return !assertionScope.Discard().Any();
		}
	}
}
Now you can nest the Fluent Assertion inside of the Verify statement as follows:
myMock.Verify(m => 
	m.Method(It.Is<MyData>(actual => 
		FluentVerifier.VerifyFluentAssertion(() => 
			actual.Should()
			.BeEquivalentTo(expected, cfg => cfg.Excluding(p => p.DateCreated), "")))));
Note however that since lambda expressions (converted to expression trees) can't contain calls that omit optional arguments, you must explicitly pass the optional "because" argument of the "BeEquivalentTo" method, hence the empty string in the example above.

Test Data Definition Framework for .NET

August 20, 2021

I have just released the initial version of my new open source project which is designed to allow one set of integration tests to run against in-memory fakes and against a "real" repository using only pre-compiler directives. This is useful when you have a suite of SpecFlow tests that you want to quickly run locally whilst developing (e.g. with NCrunch) and on a cloud build server (e.g. GitHub) where you don't always want to hit "real" data resources, but want the flexibility of occasionally switching to "real data mode" and running the same set of tests against a real MongoDB or SQL server (for example). The initial version has one backing store implementation, for MongoDB, but I'm hoping by making this open source other backing stores can be added over time. You can read more about it on the GitHub page here: https://github.com/craigwardman/TestDataDefinitionFramework

Run Windows Command from WSL Bash

August 18, 2021

Since the advent of WSL I spend most of my time using Bash to perform my CLI tasks in Windows. Sometimes, however, I'd like to run a command as though I were running it in CMD (I'm not talking about wslview though). The example that springs to mind is starting a dotnet web application, where I'd like it to bind to the Windows IP/port rather than the WSL one. So although I could run "dotnet run" from Bash, I actually want to run "dotnet run" from Windows (with minimal effort of course). For this I've created a bash alias called "winrun" which looks as follows:
alias winrun='cmd.exe /c start cmd /k'
So now if I'm in Bash and want to run a dotnet project I just type:
winrun dotnet run