This page lists best practices for writing your own Dockerfiles and collects common build scenarios encountered during Docker-based Sitecore development.
When writing Dockerfiles, consider the impact on both the Docker build process and the resulting image. A poorly structured Dockerfile can easily cause long build times or a large image size. Luckily, there are numerous ways to optimize.
The best guides come directly from Docker and Microsoft; both are worth a thorough read.
- Use multi-stage builds to remove build dependencies and reduce the size of your final image.
- Include a .dockerignore file to reduce the build context (and image size).
- Understand image layers and leverage the build cache.
- Order your steps from least to most frequently changing to optimize caching.
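As an illustration of the second point, a minimal `.dockerignore` for a .NET solution might look like the following. The patterns are only an example; adjust them to your repository layout.

```
# Illustrative .dockerignore - tailor to your solution
**/bin
**/obj
**/node_modules
.git
*.md
```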
## NuGet restore optimizations
Performing a NuGet restore is a common step when building your solution in a Dockerfile; however, it can eat up build time if not optimized.
Remember, each build step caches its result if all previous steps are cached and, for `COPY` commands, if the hash of the source files hasn't changed. With that in mind, you can be more selective about which files are copied in for the NuGet restore, to minimize cache busting.
Here's a simple example:
```dockerfile
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build

# Copy NuGet essentials and restore as distinct layers
COPY *.sln nuget.config .
COPY src\*.csproj .\src\
RUN nuget restore

# Copy everything else, build, etc
COPY src\. .\src\
RUN msbuild /p:Configuration=Release
[...]
```
We copy over only the essential NuGet files first, run `nuget restore`, and then pull in everything else. This keeps the restore step cached more often, so packages don't have to be re-downloaded on every build.
Be aware that if you use floating versions (*) or version ranges for package references (only available with the PackageReference format), the cached restore layer may contain older package versions. This is not a concern if you use exact versions.
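For reference, this caveat applies to `PackageReference` entries like the one below (the package name is illustrative): the floating version is resolved at restore time, so a cached restore layer may hold an older resolution.

```xml
<ItemGroup>
  <!-- Floating version: resolved when restore actually runs, so a
       cached restore layer can pin an older 13.x release -->
  <PackageReference Include="Newtonsoft.Json" Version="13.*" />
</ItemGroup>
```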
This works great for basic solutions with a simple folder structure. However, wildcards in the `COPY` command flatten the source folder structure, so this approach quickly becomes unwieldy for most solutions (e.g., Sitecore Helix).
There are a few workarounds for this, most of which require making assumptions about folder structure and project naming. The method you'll see in most Sitecore examples uses a separate "prep" build stage along with robocopy, which removes those assumptions.
```dockerfile
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS prep

# Gather only artifacts necessary for NuGet restore, retaining directory structure
COPY *.sln nuget.config \nuget\
COPY src\ \temp\
RUN Invoke-Expression 'robocopy C:\temp C:\nuget\src /s /ndl /njh /njs *.csproj *.scproj packages.config'
[...]

# New build stage, independent cache
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build

# Copy prepped NuGet artifacts, and restore as distinct layer
COPY --from=prep .\nuget .\
RUN nuget restore

# Copy everything else, build, etc
COPY src\ .\src\
RUN msbuild /p:Configuration=Release
[...]
```
## Using private NuGet feeds
At times your build will need to retrieve NuGet packages from a private feed. Special care must be taken to protect credentials when building in a Docker context.
Please refer to the following article for details:
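To illustrate why special handling is needed, a naive sketch that passes the credential as a build argument is shown below. The feed URL and variable name are hypothetical, and this approach should not be used as-is: `ARG` values are recorded in intermediate image metadata and can be recovered with `docker history`.

```dockerfile
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build

# HYPOTHETICAL example - do NOT do this in practice.
# The build arg is baked into image metadata and recoverable
# via 'docker history', exposing the credential.
ARG FEED_PASSWORD
RUN nuget sources add -Name private -Source https://example.com/nuget/v3/index.json `
    -Username build -Password $env:FEED_PASSWORD
RUN nuget restore
```

The article referenced above covers safer alternatives for supplying these credentials.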
## Building with Team Development for Sitecore
Docker solution builds with Team Development for Sitecore (TDS) require the `HedgehogDevelopment.TDS` NuGet package as well as TDS license environment variables, as described here:
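A minimal sketch of how the license values might be supplied as build arguments is below. The variable names `TDS_Owner` and `TDS_Key` are assumptions based on common TDS build setups; verify them against the TDS documentation for your version, and keep in mind that build args persist in image metadata, so prefer the credential-handling guidance referenced above.

```dockerfile
FROM mcr.microsoft.com/dotnet/framework/sdk:4.8 AS build

# Assumed TDS license variable names - confirm for your TDS version
ARG TDS_Owner
ARG TDS_Key
ENV TDS_Owner=${TDS_Owner} TDS_Key=${TDS_Key}

COPY src\ .\src\
RUN msbuild /p:Configuration=Release
```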
You can see an example of this in the Helix.Examples repository on GitHub.