Containerising a Next.js application that uses SSG (where the content differs per environment) isn't currently a great experience. The Vercel docs
suggest that you simply build a container image targeting a certain environment, but that doesn't sit well with the usual approach to containerisation and the route-to-live process of many software teams.
In this blog post I'll dig into the problem with some suggestions on how you can solve it.
What is SSG?
SSG (static site generation) is a feature of Next.js that allows you to pre-compute the content of pages, based on content from a headless CMS, at BUILD TIME so that your website doesn’t need to communicate with the CMS on a per-request basis.
This improves website performance due to having pre-computed responses ready to go and reduces load on the CMS server.
SSG can be broken down into two categories, based on whether the route is fixed or dynamic:
- Fixed route (e.g. an "about" page) — can contain SSG generated content
- Dynamic route (includes a slug, e.g. a blog post) — the set of build-time SSG pages is defined by `getStaticPaths`
What is ISR?
ISR (incremental static regeneration) is a feature of Next.js that, given an existing SSG cached page, will at RUN TIME, go and rebuild/refresh that cache with the latest content from the CMS, so that SSG pages do not become stale.
This gives you the benefits of a “static” website, but still means the underlying content can be editable without rebuilding/redeploying the site.
What about environment variables?
For code that runs on the server, the value is read from the environment by the Node process at run time. For code that runs on the client, the value (which must be prefixed `NEXT_PUBLIC_`) is swapped in for a literal at BUILD TIME by the compiler.
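A minimal sketch of the difference (the variable names and the baked-in value are assumptions, not from the Vercel docs):

```typescript
// Server-side code reads process.env live on every call, so the value tracks
// whatever environment the container was started with:
export function getCmsBaseUrl(): string {
  return process.env.CMS_BASE_URL ?? "http://localhost:1337";
}

// Client-side code like:
//   const url = process.env.NEXT_PUBLIC_CMS_BASE_URL;
// is rewritten by the compiler during `next build` into a literal, roughly:
//   const url = "https://cms.example.com";
// so changing the variable later has no effect without a rebuild.
```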
Sounds great, what’s the problem?
The problem stems from SSG and client-side environment variables being a “build time” computation.
In order to build a Docker image that is “ready to start”, you’d need to:
- Be happy that you’re building an image targeting a specific environment
- A container image would not be able to be deployed to any other environment than the one it was created for
- Be happy that you’re baking in a certain set of client-side environment variables
- Changing environment variables and restarting the container image has no effect on client-side code, you’d need to rebuild from source.
- Be happy that you’re baking in a “point-in-time” cache of the content
- This would get more and more stale as time goes by (ISR kind of solves this issue but only when it kicks in (e.g. after 5 secs) and the delta would keep increasing)
- Have connectivity to the target environment CMS API to get content during the build
- In order to get the content at build time, you’d need network connectivity between your build agent and whatever private network is hosting your content/CMS server.
None of the above makes for a good 12 factor app, and it is not consistent with the usual approach to containerisation (namely that the same image can be configured and deployed many times in different ways).
What is the solution?
For environment variables, luckily there is an easy solution known as "Runtime Configuration" - this essentially keeps the `process.env` parts of the code on the server, and client-side code gets access to the config by calling the `getConfig` helper from `next/config` (or a hook wrapping it) and reading the configuration object.
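A minimal sketch of runtime configuration in `next.config.js` (the key and environment variable names are assumptions):

```javascript
// next.config.js
module.exports = {
  serverRuntimeConfig: {
    // only ever available server-side
    cmsApiToken: process.env.CMS_API_TOKEN,
  },
  publicRuntimeConfig: {
    // available on both server and client, resolved at run time
    cmsBaseUrl: process.env.CMS_BASE_URL,
  },
};

// In a component or page:
//   import getConfig from "next/config";
//   const { publicRuntimeConfig } = getConfig();
//   fetch(`${publicRuntimeConfig.cmsBaseUrl}/api/pages`);
```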
We get ISR just by including "revalidate" in the `getStaticProps` return value. This means content is refreshed on the next request after every n seconds (configured per page). However, the "initial content" is still whatever was included at the SSG/build stage.
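A hedged sketch of a page's data function with ISR enabled; the CMS fetch is stubbed out here, and in a real app it would call your headless CMS:

```typescript
type PageContent = { title: string };

async function fetchContentFromCms(): Promise<PageContent> {
  // stand-in for a real CMS request (an assumption, not from the post)
  return { title: "Hello from the CMS" };
}

export async function getStaticProps() {
  const content = await fetchContentFromCms();
  return {
    props: { content },
    revalidate: 5, // at most one background regeneration every 5 seconds
  };
}
```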
You can also forcefully update the content cache using "on demand revalidation" by creating an API endpoint that calls the `res.revalidate()` method (see "Data Fetching: Incremental Static Regeneration" in the Next.js docs).
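A minimal sketch of such an endpoint (the secret name and query shape are assumptions; the request/response types are simplified stand-ins for the real Next.js ones so the example is self-contained):

```typescript
type ApiRequest = { query: Record<string, string | undefined> };
type ApiResponse = {
  status(code: number): { json(body: unknown): unknown };
  revalidate(path: string): Promise<void>; // re-runs getStaticProps for a path
};

// pages/api/revalidate.ts (hypothetical location)
export async function handler(req: ApiRequest, res: ApiResponse) {
  // protect the endpoint so only trusted callers can purge the cache
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: "Invalid token" });
  }
  const path = req.query.path ?? "/";
  await res.revalidate(path); // rebuild that page's static cache now
  return res.status(200).json({ revalidated: true, path });
}
```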
For SSG pages that don’t use dynamic routes there is no simple solution. Having build-time connectivity to the CMS and building images specific to a single environment is out of the question, so an alternative approach is needed.
I just want SSG, even if it means deferring the "next build"
During the Dockerfile “build” phase, we could omit the "next build" step entirely; a better option is to spin up a mocked “CMS" so we can still perform the “next build”, which at least ensures the code compiles and warms up the “.next” cache of transpiled output (even though the content cache is still empty). We can then set the image entrypoint to “npm run build && npm run start”, so that wherever the container finds itself being spun up, it will re-build the baked-in code files with the environment variables provided and will connect to the configured CMS to generate its cache. If you're using "readiness" checks in Kubernetes, the pod won't come online until the cache has been generated.
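A hedged sketch of such an image; the base image, script names, and mock CMS URL are assumptions:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# devDependencies must be kept, because the image builds from source on boot
COPY package*.json ./
RUN npm ci

COPY . .

# Optional: compile once against a mocked CMS to catch errors early and warm
# the .next cache of transpiled output (the content cache stays empty)
# RUN CMS_BASE_URL=http://mock-cms:1337 npm run build

# Rebuild against the real environment on startup, then serve; a Kubernetes
# readiness probe should gate traffic until this finishes
ENTRYPOINT ["sh", "-c", "npm run build && npm run start"]
```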
The pros of this approach: it's very simple to reason about, it follows the usual Docker paradigms with regards to configurability and deployability, and it takes care of client-side environment variables without the need for runtime configuration (simpler DX).
The cons are: since the production image needs to build the source on startup, you must include the source code files and the package “devDependencies” so that this can occur. It also means container startup is slower than if the image were pre-built.
Scrap SSG - Use SSR+CDN caching
Since environment variables are easily solved through runtime configuration, we could handle them that way and replace SSG with SSR plus CDN caching.
Pro: removes the requirement for the “build on startup”, which makes the container images smaller and faster to boot.
Con: relies on external CDN tooling, which would require a different approach for warming up and refreshing caches. It's also not ideal that we must forfeit SSG.
Vercel could offer “runtime only SSG” (basically ISR but with an empty starting point)
If Vercel ever create a feature whereby SSG can become a “run time” rather than “build time” operation, then we should switch to using that, in combination with runtime configuration for environment variables.
Pro: can use a multi-stage Dockerfile to split “build time” from “run time” dependencies, so the container image will be smaller. It's also more 12FA compliant, as the build artifact is compiled code only.
Con: still has a slower "fully ready and cached" startup time compared to build-time SSG, because SSG/ISR only kicks in after the container has started (although with file-level caching on a network share this would only apply to the first container instance).
Footnote on the “runtime only SSG” option
This is already possible for “dynamic routes” (i.e. routes that use a slug and would export `getStaticPaths`). In this case you can return an empty array of paths and fallback mode “blocking”, so that during the build no SSG pages are found, but at runtime any URL requested acts like SSR on the first request and is then cached. You can populate the cache by triggering “on demand revalidation” at runtime (using the actual paths you’d like to generate), by calling an API endpoint you have created for this purpose. This is hinted at in the “Data Fetching: getStaticPaths” page of the Next.js docs.
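The trick above can be sketched as follows (the file location, e.g. `pages/blog/[slug].tsx`, is an assumption):

```typescript
// Skip build-time SSG for a dynamic route: no paths are pre-rendered during
// `next build`, so no CMS connectivity is needed on the build agent.
export async function getStaticPaths() {
  return {
    paths: [],            // nothing generated at build time
    fallback: "blocking", // unseen paths render like SSR on first hit, then cache
  };
}
```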
With all of the above, when using SSG/ISR, if multiple instances of the site are running and it’s absolutely vital that the content is the same across all instances, then you should use a network share with file caching, as noted in the Next.js docs (“Data Fetching: Incremental Static Regeneration”).
In conclusion, if you want to use Next.js with Docker:
- No SSG? Use "runtime configuration" for environment variables and follow the usual multi-stage build approach in the Dockerfile
- SSG for dynamic paths only? Use "runtime configuration" for environment variables, use "empty paths, fallback blocking" trick to skip SSG on build, then use "on demand revalidation" after the container starts to populate the cache
- SSG for fixed routes? Embed a "build" step into the container startup
As mentioned, if Vercel release a feature to enable "run time only SSG" this would become the best option for all.