Docker for NodeJS - a production setup by example
Setting up Docker to run a NodeJS application is easy and simple, and plenty of examples can be found in the wild. If you want to run it in production, however, a lot of changes need to be applied. I will guide you through the must-haves to harden your setup.
The Docker build applies a couple of best practices I found on various security blog posts and relies on layering, so I can keep the image at a reasonable size.
Base image
As a starting point we use a Node Alpine image. Make sure you install every non-npm dependency you need to run your service later in the final image.
ARG NODE_VERSION=20.11.1
ARG GITHUB_NPM_TOKEN
FROM node:${NODE_VERSION}-alpine AS base
ENV npm_config_unsafe_perm true
ARG GITHUB_NPM_TOKEN
RUN apk add --update --no-cache curl-dev git tini
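With the two build args declared above, building the image could look along these lines; the image tag my-service is a placeholder, not part of the original setup:

```shell
# Hypothetical build invocation: my-service and the token's source are
# placeholders. The token is passed in from the environment at build time.
docker build \
  --build-arg NODE_VERSION=20.11.1 \
  --build-arg GITHUB_NPM_TOKEN="$GITHUB_NPM_TOKEN" \
  -t my-service:latest .
```

Keep in mind that build-arg values can show up in the image history, which is one more reason the token must not be written to disk in a committed layer.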
Build Image
The build image builds on the base image. Here you install all dependencies additionally needed to run the npm build command. Once this is done, we build the project and immediately remove the dev dependencies from the node_modules directory to save a huge amount of disk space.
Tokens for private packages
In case you have private packages in your dependency list, you need to provide a token for installation as a .npmrc file. This token is a secret and has to be treated as such: any file still present in the Docker file system at the end of a step is committed to that layer and can be read out of the final image! Therefore, we run one combined command that creates the .npmrc file, installs all dependencies, and deletes the file again, so the sensitive token is not part of the final result.
FROM base AS build
ENV npm_config_unsafe_perm true
ARG GITHUB_NPM_TOKEN
# Create app directory
WORKDIR /app/
# Copy app files
COPY . .
# Install all dependencies (dev + prod)
RUN echo '@konzentrik:registry=https://npm.pkg.github.com' > "$HOME/.npmrc" && \
    echo '//npm.pkg.github.com/:_authToken=${GITHUB_NPM_TOKEN}' >> "$HOME/.npmrc" && \
    echo 'always-auth=true' >> "$HOME/.npmrc" && \
    npm ci && \
    rm -f "$HOME/.npmrc"
# Build stuff and remove dev dependencies afterward so we copy less to the final image
RUN npm run build && npm prune --omit=dev
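The create-use-delete pattern from the RUN instruction can be followed outside Docker as well. This is a minimal sketch, assuming a throwaway HOME directory and a dummy token, with the npm ci step replaced by a placeholder comment:

```shell
# Sketch of the token-handling pattern; HOME is a temp dir and the token
# is a dummy value standing in for the real build arg.
export HOME="$(mktemp -d)"
GITHUB_NPM_TOKEN="dummy-token"

echo '@konzentrik:registry=https://npm.pkg.github.com' > "$HOME/.npmrc"
# Single quotes keep ${GITHUB_NPM_TOKEN} literal: npm expands environment
# variables inside .npmrc at install time, so the raw token value is never
# written to the file itself.
echo '//npm.pkg.github.com/:_authToken=${GITHUB_NPM_TOKEN}' >> "$HOME/.npmrc"
echo 'always-auth=true' >> "$HOME/.npmrc"

NPMRC_CONTENT="$(cat "$HOME/.npmrc")"   # captured only for inspection below
# ... npm ci would run here, authenticated via the expanded token ...
rm -f "$HOME/.npmrc"
```

Because all three steps run inside a single RUN instruction, the file exists only while that step executes and never survives into a committed layer.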
Production image
In case you have no additional dependencies installed in the first step, you can create a new image from the Node bullseye-slim image. This will reduce your image size even more.
# Create the final image
FROM node:${NODE_VERSION}-bullseye-slim AS final
RUN apt-get update && apt-get install -y --no-install-recommends tini
If there are dependencies, you continue from the base image.
# Create the final image
FROM base AS final
Non-root user
We want to run the application as a non-root user. Therefore, we copy the relevant files from the build image with ownership set to this user and start the process as that user.
ENV npm_config_unsafe_perm true
ENV NODE_ENV production
# Create app directory
WORKDIR /app/
# Copy build result
COPY --chown=node:node --from=build /app/package.json .
COPY --chown=node:node --from=build /app/node_modules/ ./node_modules
COPY --chown=node:node --from=build /app/dist/ ./dist
USER node
Kubernetes-friendly execution
If you run the Docker image in a Kubernetes cluster, you are highly interested in graceful shutdowns. In the context of RapidStream, I wrote about it in detail; feel free to take a look at the article.
EXPOSE 8080
ENTRYPOINT ["tini", "-v", "-e", "143", "--", "node", "dist/index.js"]
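The `-e 143` flag is what makes this Kubernetes-friendly: a process terminated by SIGTERM exits with status 128 + 15 = 143, and tini remaps exactly that exit code to 0, so a routine pod shutdown is not reported as a crash. The status itself can be observed with a plain shell:

```shell
# A process killed by SIGTERM (signal 15) exits with status 128 + 15 = 143.
# The "|| ..." captures the status without aborting under "set -e".
SIGTERM_STATUS=0
sh -c 'kill -TERM $$' || SIGTERM_STATUS=$?
echo "exit status after SIGTERM: $SIGTERM_STATUS"
```

Without the remapping, Kubernetes would record every normal SIGTERM-driven shutdown as a non-zero exit.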
Final thoughts
As we continue to refine our cluster's efficiency and security, it's essential to revisit a crucial aspect of containerization: Docker images. By implementing best practices, we can significantly reduce spin-up times, enhance security, and minimize storage requirements.
To reap the benefits of faster deployment cycles, ensure that your Docker images are as lightweight as possible. This means:
- Keeping only relevant data within each container
- Avoiding unnecessary dependencies and libraries
Also remember when optimizing your Docker images: even if you no longer reference an old layer, it may still contain sensitive information such as tokens or credentials.
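One way to double-check an image for leftovers of this kind is to inspect its layer history; the image name below is a placeholder:

```shell
# Hypothetical inspection command; my-service:latest is a placeholder.
# Shows the full, untruncated command behind each layer, including any
# build-arg values that were baked into a RUN instruction.
docker history --no-trunc my-service:latest
```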
Keep optimizing, keep securing!