The controller is a central, coordinating process which stores configuration, loads plugins, and renders the various user interfaces for Jenkins.

An agent is typically a machine, or container, which connects to a Jenkins controller and executes tasks when directed by the controller.

A node is a machine which is part of the Jenkins environment and capable of executing Pipelines or jobs. Both the Controller and Agents are considered to be Nodes.

An executor is a slot for execution of work defined by a Pipeline or job on a Node. A Node may have zero or more Executors configured, which corresponds to the number of concurrent Jobs or Pipelines that can execute on that Node.

A workspace is a disposable directory on the file system of a Node where work can be done by a Pipeline or job. Workspaces are typically left in place after a Build or Pipeline run completes, unless specific Workspace cleanup policies have been put in place on the Jenkins Controller. [1]

1. Distributed Builds Architecture

A Jenkins controller can operate by itself, both managing the build environment and executing the builds with its own executors and resources. If you stick with this "standalone" configuration, you will most likely run out of resources when the number or the load of your projects increases.

An agent, to which the workload of building projects is delegated, is a machine set up to offload projects from the controller. The method with which builds are scheduled depends on the configuration given to each project. For example, some projects may be configured to "restrict where this project is run", which ties the project to a specific agent or set of labeled agents. Other projects which omit this configuration select an agent from the available pool in Jenkins.

In a distributed builds environment, the Jenkins controller will use its resources to only handle HTTP requests and manage the build environment. Actual execution of builds will be delegated to the agents. With this configuration it is possible to horizontally scale an architecture, which allows a single Jenkins installation to host a large number of projects and build environments. [2]

In order for a machine to be recognized as an agent, it needs to run a specific agent program to establish bi-directional communication with the controller.

There are different ways to establish a connection between controller and agent:

  • The SSH connector: Configuring an agent to use the SSH connector is the preferred and the most stable way to establish controller-agent communication.

  • The Inbound connector: In this case the agent initiates the connection: an agent program running on the agent machine connects out to the controller.

  • The Inbound-HTTP connector: This approach is quite similar to the Inbound-TCP Java Web Start approach, with the difference being that in this case the agent runs headless and the connection can be tunneled via HTTP(S).

  • Custom-script: It is also possible to create a custom script to initialize the communication between controller and agent if the other solutions do not provide enough flexibility for a specific use-case.
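For the inbound connectors, the agent program is typically the agent.jar client downloaded from the controller. A sketch of launching it from the agent machine follows; the controller URL, agent name, secret file, and work directory are placeholders, not values from this document.

```shell
# Sketch: download the agent client from the controller and connect inbound.
# All values below (URL, name, secret file, work dir) are placeholders.
curl -sO http://jenkins.example.com:8080/jnlpJars/agent.jar
java -jar agent.jar \
    -url http://jenkins.example.com:8080/ \
    -name agent1 \
    -secret @secret-file \
    -workDir /home/jenkins/agent
```

The secret is shown on the node's page in the Jenkins UI after the node is created; `-secret @secret-file` reads it from a file instead of exposing it on the command line.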

2. Nodes and Components

Builds in a distributed builds architecture use nodes, agents, and executors, which are distinct from the Jenkins controller itself. Understanding what each of these components is will be useful when managing nodes: [3]

2.1. Controllers

The Jenkins controller is the Jenkins service itself and is where Jenkins is installed. It is a web server that also acts as the "brain" for deciding how, when, and where to run tasks. Management tasks such as configuration, authorization, and authentication are executed on the controller, which serves HTTP requests. Files written when a Pipeline executes are written to the filesystem on the controller, unless they are off-loaded to an artifact repository such as Nexus or Artifactory.

2.2. Agents

Agents manage the task execution on behalf of the Jenkins controller by using executors. An agent is a small (170KB single jar) Java client process that connects to a Jenkins controller and is assumed to be unreliable. An agent can use any operating system that supports Java. Any tools required for building and testing get installed on the node where the agent runs. Because these tools are a part of the node, they can be installed directly or in a container, such as Docker or Kubernetes. Each agent is effectively a process with its own Process Identifier (PID) on the host machine. In practice, nodes and agents are essentially the same but it is good to remember that they are conceptually distinct.

2.3. Nodes

Nodes are the "machines" on which build agents run. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync, and response time. A node is taken offline if any of these values go outside the configured threshold. Jenkins supports two types of nodes:

  • agents (described above)

  • built-in node

    The built-in node is a node that exists within the controller process. It is possible to use agents and the built-in node to run tasks. However, running tasks on the built-in node is discouraged for security, performance, and scalability reasons. The number of executors configured for the node determines the node’s ability to run tasks. Set the number of executors to 0 to disable running tasks on the built-in node.

2.4. Executors

An executor is a slot for the execution of tasks. Effectively, it is a thread in the agent. The number of executors on a node defines the number of concurrent tasks that can run. In other words, this determines the number of concurrent Pipeline stages that can execute at the same time. The correct number of executors per build node must be determined based on the resources available on the node and the resources required for the workload. When determining how many executors to run on a node, consider CPU and memory requirements, as well as the amount of I/O and network activity:

  • One executor per node is the safest configuration.

  • One executor per CPU core can work well, if the tasks running are small.

  • Monitor I/O performance, CPU load, memory usage, and I/O throughput carefully when running multiple executors on a node.
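As a sketch, the executor count of an existing node can also be adjusted from the Script Console (Manage Jenkins > Script Console); the node name agent1 below is an assumption, and setNumExecutors applies to permanent agents:

```groovy
// Script Console sketch (assumes a permanent agent named "agent1" exists):
import jenkins.model.Jenkins

def node = Jenkins.instance.getNode('agent1')
node.setNumExecutors(2)     // allow two concurrent tasks on this node
Jenkins.instance.save()     // persist the configuration change
```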

3. Installing Jenkins with Docker

Due to Docker’s fundamental platform and container design, a Docker image for a given application, such as Jenkins, can be run on any supported operating system or cloud service also running Docker. [4]

3.1. Configuring Controller

  1. Open up a terminal window, and create a directory named controller.

    mkdir controller
    cd controller
  3. Create an environment file named .env and set the project name to jenkins.

    echo -n COMPOSE_PROJECT_NAME=jenkins > .env
  3. Create a groovy file named executors.groovy with the following content.

    import jenkins.model.*
    Jenkins.instance.setNumExecutors(0) // Recommended to not run builds on the built-in node
  4. Create a bridge network for the controller.

    docker network create -d bridge jenkins-controller
  5. Create a compose file named compose.yml with the following content.

    version: "2.4"
    services:
      controller:
        container_name: jenkins-controller
        build:
          context: .
          dockerfile_inline: |
            ARG JENKINS_TAG=2.426.3-jdk21
            FROM jenkins/jenkins:$${JENKINS_TAG} (1)
            COPY --chown=jenkins:jenkins executors.groovy /usr/share/jenkins/ref/init.groovy.d/executors.groovy (2)
        restart: on-failure
        ports:
          - "8080:8080"
          - "50000:50000" (3)
        volumes:
          - jenkins-home:/var/jenkins_home:rw (4)
        networks:
          controller:
    volumes:
      jenkins-home:
        name: jenkins-home
    networks:
      controller:
        external: true (5)
        name: jenkins-controller
    1 Use the recommended official jenkins/jenkins image from the Docker Hub repository. [4]
    2 Extend the image and change it to your desired number of executors (recommended 0 executors on the built-in node). [5]
    3 In order to connect agents through an inbound TCP connection, map the port: -p 50000:50000. That port will be used when you connect agents to the controller.

    If you are only using SSH (outbound) build agents, this port is not required, as connections are established from the controller. If you connect agents using web sockets (since Jenkins 2.217), the TCP agent port is not used either. [5]

    4 NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission issues (the user used inside the container might not have rights to the folder on the host machine). If you really need to bind mount jenkins_home, ensure that the directory on the host is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run. [5]
    5 external specifies that this network’s lifecycle is maintained outside of that of the application.
  6. (Optional) Create a compose file named compose.override.yml with the following content.

    Docker Compose lets you merge and override a set of Compose files together to create a composite Compose file.

    By default, Compose reads two files, a compose.yml and an optional compose.override.yml file. By convention, the compose.yml contains your base configuration. The override file can contain configuration overrides for existing services or entirely new services. [8]

    version: "2.4"
    services:
      controller:
        build:
          args:
            - JENKINS_TAG=2.426.3-jdk21
        environment:
          - TZ=Asia/Shanghai
  7. Starting the controller container:

    docker compose up -d
  8. Post-installation setup wizard.

    Follow the post-installation setup wizard to finish the last steps.

    Print the initial admin password to the console:

    $ sudo docker inspect jenkins-home
    ...
            "Mountpoint": "/var/lib/docker/volumes/jenkins-home/_data",
            "Name": "jenkins-home",
    ...
    $ sudo cat /var/lib/docker/volumes/jenkins-home/_data/secrets/initialAdminPassword
    80df7355be5c4b15933742f7024dd739
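Alternatively, assuming the container name jenkins-controller from the compose file above, the password can be read with docker exec without inspecting the volume mountpoint:

```shell
# Read the initial admin password directly from the running container
# (container name taken from the compose file above).
docker exec jenkins-controller cat /var/jenkins_home/secrets/initialAdminPassword
```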

3.2. Configuring Jenkins SSH Credential

  1. Generating an SSH key pair.

    To generate the SSH key pair, execute a command line tool named ssh-keygen on a machine you have access to. [6]
    ssh-keygen -t ed25519 -f ~/.ssh/jenkins_agent_key
  2. Create a Jenkins SSH credential.

    1. Go to your Jenkins dashboard.

    2. Go to Manage Jenkins option in left main menu and click on the Credentials button under the Security.

    3. Select Add Credentials from the dropdown of the (global) item under the Stores scoped to Jenkins.

    4. Fill in the form.

      • Kind: SSH Username with private key

      • ID: jenkins

      • Description: Jenkins SSH private key

      • Username: jenkins

      • Private Key: Select Enter directly and press the Add button to insert the content of your private key file at ~/.ssh/jenkins_agent_key.

      • Passphrase: Fill your passphrase used to generate the SSH key pair (leave empty if you didn’t use one at the previous step) and then press the Create button.
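As a self-contained sketch of what the credential form expects, the following generates a throwaway ed25519 pair non-interactively in a temporary directory with an empty passphrase; in practice, use the key created in step 1 at ~/.ssh/jenkins_agent_key.

```shell
# Illustration only: throwaway key pair with an empty passphrase.
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "jenkins" -f "$dir/jenkins_agent_key"
echo "--- paste into the Private Key field ---"
cat "$dir/jenkins_agent_key"
echo "--- used later as JENKINS_AGENT_SSH_PUBKEY ---"
cat "$dir/jenkins_agent_key.pub"
```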

3.3. Configuring Agents using SSH Connector in Docker

  1. Open up a terminal window, and create a directory named agents.

    mkdir agents
    cd agents
  2. Create an environment file named .env and set the project name to jenkins-agents.

    echo -n COMPOSE_PROJECT_NAME=jenkins-agents > .env
  3. Create a bridge network for the agent.

    docker network create -d bridge jenkins-agents
  4. Create a compose file named compose.yml with the following content.

    version: "2.4"
    services:
      agent:
        container_name: jenkins-agent
        image: jenkins/ssh-agent:alpine-jdk21
        restart: on-failure
        ports:
          - "2200:22"
        environment:
          - "JENKINS_AGENT_SSH_PUBKEY=[your-public-key]" (1)
          # e.g. - "JENKINS_AGENT_SSH_PUBKEY=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBBHLJ+8RuLPO8dO1tm3RAt5kc3HqYwJUYMmRPjhtI3" (1)
        volumes:
          - agent-home:/home/jenkins/agent:rw (2)
        networks:
          agents:
    volumes:
      agent-home:
        name: jenkins-agent-home
    networks:
      agents:
        external: true
        name: jenkins-agents
    1 The value of JENKINS_AGENT_SSH_PUBKEY MUST include the full contents of your .pub file created above (i.e. ~/.ssh/jenkins_agent_key.pub), including the ssh-XXXX prefix. [6]
    2 When using the Linux image, you have to set the value of the Remote root directory to /home/jenkins/agent in the agent configuration UI.

    When using the Windows image, you have to set the value of the Remote root directory to C:/Users/jenkins/Work in the agent configuration UI. [7]

  5. Starting the agent container.

    docker compose up -d
  6. Set up the jenkins-agent in Jenkins.

    1. Go to your Jenkins dashboard.

    2. Go to Manage Jenkins option in left main menu.

    3. Go to Nodes item under the System Configuration.

    4. Go to New Node option in top right menu.

    5. Fill in the Node name and select the type (e.g. Name: agent1, Type: Permanent Agent), and then press the Create button.

    6. Now fill in the fields.

      • Remote root directory; (e.g. /home/jenkins/agent)

      • Labels; (e.g. agent1 )

      • Usage; (e.g. Use this node as much as possible)

      • Launch method; (e.g. Launch agents by SSH)

      • Host; (e.g. localhost or your IP address)

      • Credentials; (e.g. jenkins)

      • Host Key Verification Strategy; (e.g. Non verifying Verification Strategy, for testing only, NOT recommended)

      • Expand the Advanced tab, and set the Port to be 2200

    7. Press the Save button and the agent1 will be registered, and be launched by the Controller.

  7. Delegating the first job to agent1.

    1. Go to your Jenkins dashboard

    2. Select New Item on side menu

    3. Enter an item name. (e.g.: First Job to Agent1)

    4. Select the Freestyle project and press OK.

    5. Now select the option Execute shell at Build Steps section.

    6. Add the command: echo $NODE_NAME in the Command field of the Execute shell step and the name of the agent will be printed inside the log when this job is run.

    7. Press the Save button and then select the option Build Now.

    8. Wait some seconds and then go to Console Output page.

      Started by user admin
      Running as SYSTEM
      Building remotely on agent1 in workspace /home/jenkins/agent/workspace/test
      [test] $ /bin/sh -xe /tmp/jenkins5590136104445527177.sh
      + echo agent1
      agent1
      Finished: SUCCESS
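The behavior of the Execute shell step can be mimicked locally: NODE_NAME is one of the environment variables Jenkins injects into each build, so the step above reduces to the following (with the variable set by hand here instead of by Jenkins).

```shell
# Locally mimic the Execute shell step: Jenkins exports NODE_NAME
# into the build environment before running the script.
NODE_NAME=agent1 sh -c 'echo "$NODE_NAME"'
```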

3.4. Configuring Agents running Docker in Docker

  1. Open up a terminal window, and create a directory named agents/dind:

    mkdir -p agents/dind
    cd agents/dind
  2. Create an environment file named .env and set the project name to jenkins-agents-dind:

    echo -n COMPOSE_PROJECT_NAME=jenkins-agents-dind > .env
  3. Create a bridge network for the agent:

    docker network create -d bridge jenkins-agents-dind
  4. Create a compose file named compose.yml with the following content:

    version: "2.4"
    services:
      agent:
        container_name: jenkins-agent
        build:
          context: .
          dockerfile_inline: |
            ARG SSH_AGENT_TAG=jdk21
            FROM jenkins/ssh-agent:$${SSH_AGENT_TAG}
            ARG DOCKER_CE_CLI_VERSION=5:25.0.1-1~debian.12~bookworm
            RUN apt-get update \
                && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
                    ca-certificates \
                    curl \
                    lsb-release \
                && rm -rf /var/lib/apt/lists/*
            RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc https://download.docker.com/linux/debian/gpg
            RUN echo "deb [arch=$(dpkg --print-architecture) \
                      signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
                      https://download.docker.com/linux/debian \
                      $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
            RUN apt-get update \
                && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
                    docker-ce-cli=$${DOCKER_CE_CLI_VERSION} \ (1)
                && rm -rf /var/lib/apt/lists/*
        restart: on-failure
        ports:
          - "2200:22" (2)
        environment:
          - "JENKINS_AGENT_SSH_PUBKEY=[your-public-key]" (3)
          # e.g. - "JENKINS_AGENT_SSH_PUBKEY=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBBHLJ+8RuLPO8dO1tm3RAt5kc3HqYwJUYMmRPjhtI3"
          - DOCKER_HOST=tcp://docker:2376
          - DOCKER_CERT_PATH=/certs/client
          - DOCKER_TLS_VERIFY=1
        volumes:
          - agent-home:/home/jenkins/agent:rw
          - docker-certs:/certs/client:ro
        networks:
          agents:
        depends_on:
          - docker
      docker:
        container_name: jenkins-docker
        image: docker:25
        restart: on-failure
        ports:
          - "2376"
        privileged: true
        environment:
           - DOCKER_TLS_CERTDIR=/certs
        volumes:
          - agent-home:/home/jenkins/agent:rw (4)
          - docker-certs:/certs/client:rw
          - docker-root:/var/lib/docker:rw
        networks:
          agents:
            aliases:
              - docker
    volumes:
      agent-home:
        name: jenkins-agent-home-dind
      docker-certs:
        name: jenkins-agent-docker-certs
      docker-root:
        name: jenkins-agent-docker-root
    networks:
      agents:
        external: true
        name: jenkins-agents-dind
    1 Extend the jenkins/ssh-agent image to install Docker CLI.
    2 If your machine already has an SSH server running on port 22, use another port to publish the agent container’s port 22 (SSH), such as 2200:22.
    3 The value of JENKINS_AGENT_SSH_PUBKEY MUST include the full contents of your .pub file created above (i.e. ~/.ssh/jenkins_agent_key.pub), including the ssh-XXXX prefix. [6]
    4 Share the agent home volume (i.e. agent-home) with the Docker container; otherwise the pipeline will get stuck with an error like:
    . . .
    process apparently never started in /home/jenkins/agent/workspace/jenkins-getting-started_main@tmp/durable-7a43d858
    (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    $ docker stop --time=1 383e1c4132052f8e461d87bf75108d3e627623cafe3de5f7f5ca80f843c324ae
    $ docker rm -f --volumes 383e1c4132052f8e461d87bf75108d3e627623cafe3de5f7f5ca80f843c324ae
    [Pipeline] // withDockerContainer
    [Pipeline] }
    [Pipeline] // withEnv
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    ERROR: script returned exit code -2
    Finished: FAILURE
  5. (Optional) Create a compose file named compose.override.yml with the following content:

    version: "2.4"
    services:
      agent:
        build:
          args:
            - SSH_AGENT_TAG=jdk21
            - DOCKER_CE_CLI_VERSION=5:25.0.1-1~debian.12~bookworm
      docker:
        image: docker:25
        # If an insecure registry isn’t marked as insecure,
        # docker pull, docker push, and docker search result
        # in error messages, prompting the user to either
        # secure or pass the --insecure-registry flag to the
        # Docker daemon.
        # command: ["--insecure-registry=192.168.56.0/24"]
  6. Starting the agent and docker container:

    docker compose up -d
  7. Refer to Configuring Agents using SSH Connector in Docker to set up the agent in Jenkins, then create a Freestyle project using an Execute shell step with the docker version command, select the option Build Now, and go to the Console Output page.

    Started by user admin
    Running as SYSTEM
    Building remotely on agent1 in workspace /home/jenkins/agent/workspace/test
    [test] $ /bin/sh -xe /tmp/jenkins2069680891022148280.sh
    + docker version
    Client: Docker Engine - Community
     Version:           25.0.1
     API version:       1.44
     Go version:        go1.21.6
     Git commit:        29cf629
     Built:             Tue Jan 23 23:09:46 2024
     OS/Arch:           linux/amd64
     Context:           default
    
    Server: Docker Engine - Community
     Engine:
      Version:          25.0.1
      API version:      1.44 (minimum version 1.24)
      Go version:       go1.21.6
      Git commit:       71fa3ab
      Built:            Tue Jan 23 23:09:59 2024
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          v1.7.12
      GitCommit:        71909c1814c544ac47ab91d2e8b84718e517bb99
     runc:
      Version:          1.1.11
      GitCommit:        v1.1.11-0-g4bccb38
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
    Finished: SUCCESS

4. Blue Ocean

Blue Ocean as it stands provides easy-to-use Pipeline visualization. It was intended to be a rethink of the Jenkins user experience, designed from the ground up for Jenkins Pipeline. Blue Ocean was intended to reduce clutter and increase clarity for all users. [9]

  • Sophisticated visualization of continuous delivery (CD) Pipelines, allowing for fast and intuitive comprehension of your Pipeline’s status.

  • Pipeline editor makes the creation of Pipelines more approachable, by guiding the user through a visual process to create a Pipeline.

  • Personalization to suit the role-based needs of each member of the team.

  • Pinpoint precision when intervention is needed or issues arise. Blue Ocean shows where attention is needed, facilitating exception handling and increasing productivity.

  • Native integration for branches and pull requests, which enables maximum developer productivity when collaborating on code in GitHub and Bitbucket.

When Jenkins is installed on most platforms, the Blue Ocean plugin and all necessary dependent plugins, which comprise the Blue Ocean suite of plugins, are not installed by default.

To install the Blue Ocean suite of plugins on an existing Jenkins instance: [10]

  1. Ensure you are logged in to Jenkins as a user with the Administer permission.

  2. From the Jenkins home page, select Manage Jenkins on the left and then Plugins under the System Configuration.

  3. Select the Available plugins tab and enter blue ocean in the Filter text box. This filters the list of plugins based on the name and description.

  4. Select the box to the left of Blue Ocean, and then select either the Install after restart option (recommended) or the Install without restart option at the top right of the page.

    It is not necessary to select other plugins in this list. The main Blue Ocean plugin automatically selects and installs all the dependent plugins that make up the Blue Ocean suite of plugins.

    If you select the Install without restart option, you must restart Jenkins to gain full Blue Ocean functionality.
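If the instance is managed as in the Docker setup above, the suite can also be installed non-interactively with jenkins-plugin-cli (the tool shipped in the jenkins/jenkins image; the plugin ID is blueocean). The container name below assumes the compose file from earlier; a restart is needed to load the plugins.

```shell
# Sketch: install the Blue Ocean suite via the plugin CLI inside the
# controller container, then restart the container to load the plugins.
docker exec jenkins-controller jenkins-plugin-cli --plugins blueocean
docker restart jenkins-controller
```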

Once a Jenkins environment has Blue Ocean installed, log in to the Jenkins classic UI; the Blue Ocean UI can then be accessed by selecting Open Blue Ocean on the left side of the screen.

Alternatively, access Blue Ocean directly by appending /blue to the end of the Jenkins server’s URL. For example https://jenkins-server-url/blue.

If you need to access features of the Jenkins classic UI, select the Go to classic icon at the top of a common section of Blue Ocean’s navigation bar.

5. Pipeline

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project’s source control repository, which is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code. [9]

5.1. Pipeline Concepts

The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax.

  • Pipeline

    A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it.

    Also, a pipeline block is a key part of Declarative Pipeline syntax.

  • Node

    A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline. Also, a node block is a key part of Scripted Pipeline syntax.

  • Stage

    A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress.

  • Step

    A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, that typically means the plugin has implemented a new step.

    For an overview of available steps, please refer to the Pipeline Steps reference which contains a comprehensive list of steps built into Pipeline as well as steps provided by plugins. [12]
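Tying these concepts together, a minimal Scripted Pipeline sketch (the make targets are placeholders):

```groovy
// Minimal Scripted Pipeline: the node block allocates an executor and
// workspace, stage blocks group the work, and each sh is a single step.
node {
    stage('Build') {
        sh 'make'          // step: run the shell command "make"
    }
    stage('Test') {
        sh 'make check'    // step: placeholder test target
    }
}
```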

5.2. Creating Pipelines

A Pipeline can be created in one of the following ways:

  • Through Blue Ocean - after setting up a Pipeline project in Blue Ocean, the Blue Ocean UI helps you write your Pipeline’s Jenkinsfile and commit it to source control.

    Blue Ocean automatically generates an SSH public/private key pair or provides you with an existing pair for the current Jenkins user. This credential is automatically registered in Jenkins with the following details for this Jenkins user:

    • Domain: blueocean-private-key-domain

    • ID: jenkins-generated-ssh-key

    • Name: <jenkins-username> (jenkins-generated-ssh-key)

  • Through the classic UI - you can enter a basic Pipeline directly in Jenkins through the classic UI.

  • In SCM - you can write a Jenkinsfile manually, which you can commit to your project’s source control repository.

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.

5.3. Jenkinsfile

Using a text editor, ideally one which supports Groovy syntax highlighting, create a new Jenkinsfile in the root directory of the project. [11]

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}

The Declarative Pipeline example above contains the minimum necessary structure to implement a continuous delivery pipeline. The agent directive, which is required, instructs Jenkins to allocate an executor and workspace for the Pipeline. Without an agent directive, not only is the Declarative Pipeline not valid, it would not be capable of doing any work! By default the agent directive ensures that the source repository is checked out and made available for steps in the subsequent stages.

The stages and steps directives are also required for a valid Declarative Pipeline, as they instruct Jenkins what to execute and in which stage it should be executed.

5.4. Using Docker

Many organizations use Docker to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications.

To use Docker with Pipeline, install the Docker Pipeline plugin:

  • Using the GUI: From your Jenkins dashboard navigate to Manage Jenkins > Plugins and select the Available plugins tab. Locate this plugin by searching for docker-workflow.

  • Using the CLI tool:

    jenkins-plugin-cli --plugins docker-workflow:572.v950f58993843
  • Using direct upload. Download one of the releases and upload it to your Jenkins instance.

Pipeline is designed to easily use Docker images as the execution environment for a single Stage or the entire Pipeline. This means that a user can define the tools required for their Pipeline without having to manually configure agents. Any tool that can be packaged in a Docker container can be used with ease, by making only minor edits to a Jenkinsfile. [13]

pipeline {
    agent {
        docker { image 'node:20.11.0-alpine3.19' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'id'
                sh 'node --version'
            }
        }
    }
}

When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within:

. . .
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ id
uid=1000(node) gid=1000(node) groups=1000(node)
[Pipeline] sh
+ node --version
v20.11.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
. . .

5.4.1. Workspace Synchronization

If it is important to keep the workspace synchronized with other stages, use reuseNode true. Otherwise, a dockerized stage can be run on the same agent or any other agent, but in a temporary workspace.

By default, for a containerized stage, Jenkins:

  1. Picks an agent.

  2. Creates a new empty workspace.

  3. Clones pipeline code into it.

  4. Mounts this new workspace into the container.

If you have multiple Jenkins agents, your containerized stage can be started on any of them.

When reuseNode is set to true, no new workspace will be created, and the current workspace from the current agent will be mounted into the container. After this, the container will be started on the same node, so all of the data will be synchronized.

pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'gradle:8.2.0-jdk17-alpine'
                    // Run the container on the node specified at the
                    // top-level of the Pipeline, in the same workspace,
                    // rather than on a new node entirely:
                    reuseNode true
                }
            }
            steps {
                sh 'gradle --version'
            }
        }
    }
}

5.4.2. Caching Data for Containers

Many build tools will download external dependencies and cache them locally for future re-use. Since containers are initially created with "clean" file systems, this can result in slower Pipelines, as they may not take advantage of on-disk caches between subsequent Pipeline runs.

Pipeline supports adding custom arguments that are passed to Docker, allowing users to specify custom Docker Volumes to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container, avoiding the need to re-download dependencies for subsequent Pipeline runs.

pipeline {
    agent {
        docker {
            image 'maven:3.9.3-eclipse-temurin-17'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}

5.4.3. Using Multiple Containers

It has become increasingly common for code bases to rely on multiple different technologies. For example, a repository might have both a Java-based back-end API implementation and a JavaScript-based front-end implementation. Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies, by combining the agent {} directive with different stages.

pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3.9.6-eclipse-temurin-17-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:20.11.0-alpine3.19' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}

Appendix A: GitLab Jenkins Integration

GitLab is a fully featured software development platform that includes, among other powerful features, built-in GitLab CI/CD to leverage the ability to build, test, and deploy your apps without requiring you to integrate with CI/CD external tools. [14]

However, many organizations have been using Jenkins for their deployment processes, and need an integration with Jenkins to be able to onboard to GitLab before switching to GitLab CI/CD. Others have to use Jenkins to build and deploy their applications because of the inability to change the established infrastructure for current projects, but they want to use GitLab for all the other capabilities.

With GitLab’s Jenkins integration, you can effortlessly set up your project to build with Jenkins, and GitLab will output the results for you right from GitLab’s UI.

After you configure a Jenkins integration, a build is triggered in Jenkins when you push code to your repository or create a merge request in GitLab. The Jenkins pipeline status displays on merge request widgets and on the GitLab project’s home page. [21]

To configure a Jenkins integration with GitLab:

  • Grant Jenkins access to the GitLab project.

  • Configure the Jenkins server.

  • Configure the Jenkins project.

  • Configure the GitLab project.

A.1. Install GitLab using Docker

  1. Open a terminal, and create a bridge network named gitlab-ce.

    docker network create gitlab-ce
  2. Create a compose.yml file.

    version: "2.4"
    services:
      gitlab-ce:
        container_name: gitlab-ce
        image: gitlab/gitlab-ce:16.5.8-ce.0 # Pin GitLab to a specific Community Edition version
        restart: "on-failure:3"
        volumes:
          - data:/var/opt/gitlab:rw # For storing application data.
          - logs:/var/log/gitlab:rw # For storing logs.
          - config:/etc/gitlab:rw   # For storing the GitLab configuration files.
        networks:
          gitlab-ce:
    volumes:
      data:
        name: gitlab-ce-data
      logs:
        name: gitlab-ce-logs
      config:
        name: gitlab-ce-config
    networks:
      gitlab-ce:
        external: true
        name: gitlab-ce
  3. Create a compose.override.yml file.

    version: "2.4"
    services:
      gitlab-ce:
        # Pin GitLab to a specific Community Edition version
        image: gitlab/gitlab-ce:16.5.8-ce.0
        # Use a valid externally-accessible hostname or IP address. Do not use `localhost`.
        hostname: 'node-0'
        environment:
          # If you want to use a different host port than 80 (HTTP), 443 (HTTPS), or 22 (SSH), you
          # need to add a separate --publish directive to the docker run command.
          GITLAB_OMNIBUS_CONFIG: |
            # Add any other gitlab.rb configuration here, each on its own line
            external_url 'http://node-0:8929'
            gitlab_rails['gitlab_shell_ssh_port'] = 2424
        ports:
          - '8929:8929'
          - '2424:22'
        extra_hosts:
          - "node-0:192.168.56.130"
  4. Start the gitlab-ce container.

    docker compose up -d

    The initialization process may take a long time. You can track this process with: [20]

    docker logs -f gitlab-ce

    After starting the container, you can visit the GitLab URL (for example, http://node-0:8929, as set by external_url above). It might take a while before the Docker container starts to respond to queries.

    Visit the GitLab URL, and sign in with the username root and the password from the following command:

    sudo cat $(docker inspect gitlab-ce-config -f "{{.Mountpoint}}")/initial_root_password
    The password file is automatically deleted at the first container restart after 24 hours.
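Instead of watching the logs, you can also wait for GitLab to become ready in a script by polling the container health status (the gitlab-ce image defines a HEALTHCHECK; this is a sketch, and the 10-second interval is arbitrary):

```shell
# Poll the container health status until Docker reports it as "healthy".
until [ "$(docker inspect gitlab-ce -f '{{.State.Health.Status}}')" = "healthy" ]; do
  echo "Waiting for GitLab to become healthy..."
  sleep 10
done
```

This requires a running Docker daemon and the gitlab-ce container from the steps above.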

Appendix B: Sonatype Nexus Repository OSS

Sonatype Nexus Repository Manager provides a central platform for storing build artifacts. [15]

B.1. Installing Nexus Repository with Docker

  1. Open a terminal, create a .env file, and set the Compose project name to sonatype-nexus.

    echo -n COMPOSE_PROJECT_NAME=sonatype-nexus > .env
  2. Create a bridge network named sonatype-nexus.

    docker network create -d bridge sonatype-nexus
  3. Create a compose.yml file.

    version: "2.4"
    services:
      nexus:
        container_name: sonatype-nexus
        user: nexus:nexus
        image: sonatype/nexus3:3.64.0
        restart: "on-failure:3"
        volumes:
          - data:/nexus-data:rw
        networks:
          nexus:
    volumes:
      data:
        name: nexus-data
    networks:
      nexus:
        external: true
        name: sonatype-nexus
  4. Create a compose.override.yml file.

    version: "2.4"
    services:
      nexus:
        ports:
          - "8081:8081"
          - "8082:8082" # Using for Docker Registry
        # environment:
        #   NEXUS_CONTEXT: nexus (1)
        #   # INSTALL4J_ADD_VM_PARAMS is passed to the Install4J startup script. It defaults to
        #   # "-Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs".
    1 The NEXUS_CONTEXT environment variable can be used to control the Nexus context path; it defaults to /. [16] [17]
  5. Start the sonatype-nexus container.

    docker compose up -d
  6. Go to a browser at http://localhost:8081, click the Sign in button at the top right, fill in the login fields, and then complete the required setup tasks.

    Your admin user password is located in /nexus-data/admin.password on the server.

    1. Inspect the Docker volume (i.e. nexus-data).

      $ docker inspect nexus-data
      ...
              "Mountpoint": "/var/lib/docker/volumes/nexus-data/_data",
      ...
    2. Print the user password.

      sudo cat /var/lib/docker/volumes/nexus-data/_data/admin.password
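The two steps above can also be combined into a single command, following the same pattern used for GitLab earlier (this assumes the volume is named nexus-data, as in the compose.yml above):

```shell
# Resolve the volume mountpoint and print the generated admin password in one step.
sudo cat $(docker inspect nexus-data -f "{{.Mountpoint}}")/admin.password
```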

B.2. Docker Hosted Repositories

A hosted repository using the Docker repository format is typically called a private Docker registry. It can be used to upload your own container images as well as third-party images. It is common practice to create two separate hosted repositories for these purposes. [18]

  1. Go to the Nexus dashboard, and select the gear icon in the top bar, or enter http://localhost:8081/#admin/repository.

  2. Select Repositories on the left menu to open the Manage repositories panel, or enter http://localhost:8081/#admin/repository/repositories.

  3. Click the Create repository button, and select the docker (hosted) recipe, then fill in the form.

    • Name: docker-registry

    • HTTP: 8082 (the repository connector port)

  4. Click the Create repository button at the bottom.

  5. Log in with Docker, and push/pull images to/from Nexus.

    docker login -u admin -p [YOUR ADMIN PASSWORD OF NEXUS] http://localhost:8082
    $ docker pull busybox
    Using default tag: latest
    latest: Pulling from library/busybox
    9ad63333ebc9: Pull complete
    Digest: sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74
    Status: Downloaded newer image for busybox:latest
    docker.io/library/busybox:latest
    $ docker tag busybox:latest localhost:8082/busybox
    $ docker push localhost:8082/busybox
    Using default tag: latest
    The push refers to repository [localhost:8082/busybox]
    2e112031b4b9: Pushed
    latest: digest: sha256:d319b0e3e1745e504544e931cde012fc5470eba649acc8a7b3607402942e5db7 size: 527
    $ docker pull localhost:8082/busybox
    Using default tag: latest
    latest: Pulling from busybox
    Digest: sha256:d319b0e3e1745e504544e931cde012fc5470eba649acc8a7b3607402942e5db7
    Status: Image is up to date for localhost:8082/busybox:latest
    localhost:8082/busybox:latest
  6. Go back to the browse view (e.g. http://localhost:8081/#browse/browse:docker-registry) in Nexus to check the repository status.

By default, Docker assumes all registries to be secure, except for local registries. Communicating with an insecure registry isn’t possible if Docker assumes that registry is secure. In order to communicate with an insecure registry, the Docker daemon requires --insecure-registry in one of the following two forms:

  • --insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should be considered insecure.

  • --insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain resolves to an IP address in the subnet described by the CIDR syntax should be considered insecure.

The flag can be used multiple times to allow multiple registries to be marked as insecure.

If an insecure registry isn’t marked as insecure, docker pull, docker push, and docker search result in error messages, prompting the user to either secure or pass the --insecure-registry flag to the Docker daemon as described above.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure as of Docker 1.3.2. It isn’t recommended to rely on this, as it may change in the future.

$ docker info
  . . .
 Insecure Registries:
  127.0.0.0/8

B.3. NuGet Hosted Repositories

A hosted repository for NuGet can be used to upload your own packages as well as third-party packages. The repository manager includes a hosted NuGet repository named nuget-hosted by default. [19]

  1. Go to the Nexus dashboard, sign in, and click the user name at the top right, or enter http://localhost:8081/#user/account.

  2. On the left panel, select the NuGet API Key.

  3. Click the Access API Key, authenticate with your credentials, and then click Copy to Clipboard.

  4. Click the gear icon in the top panel, and select Realms on the left panel under Security.

  5. Select the NuGet API-Key Realm in the left Available panel, and transfer it to the right Active panel.

  6. Click the Save button at the bottom right.

  7. Push a NuGet package to Nexus.

    $ dotnet new classlib -o HelloLib
    The template "Class Library" was created successfully.
    . . .
    $ dotnet pack HelloLib/
    $ dotnet nuget push HelloLib/bin/Release/HelloLib.1.0.0.nupkg -k [REPLACE WITH YOUR API KEY] -s http://localhost:8081/repository/nuget-hosted/index.json
    warn : You are running the 'push' operation with an 'HTTP' source, 'http://localhost:8081/repository/nuget-hosted/index.json'. Non-HTTPS access will be removed in a future version. Consider migrating to an 'HTTPS' source.
    Pushing HelloLib.1.0.0.nupkg to 'http://localhost:8081/repository/nuget-hosted'...
    warn : You are running the 'push' operation with an 'HTTP' source, 'http://localhost:8081/repository/nuget-hosted/'. Non-HTTPS access will be removed in a future version. Consider migrating to an 'HTTPS' source.
      PUT http://localhost:8081/repository/nuget-hosted/
      Created http://localhost:8081/repository/nuget-hosted/ 40ms
    Your package was pushed.

    You can also create a nuget.config and add the NuGet source to the project.

    dotnet new console -o HelloApp
    cd HelloApp/
    dotnet new nugetconfig
    dotnet nuget add source -n nexus http://localhost:8081/repository/nuget-hosted/index.json
    dotnet add package HelloLib --version 1.0.0
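After running the commands above, the resulting nuget.config should look roughly like the following (the exact file dotnet generates may differ slightly; the nexus key and URL come from the add source command):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="nexus" value="http://localhost:8081/repository/nuget-hosted/index.json" />
  </packageSources>
</configuration>
```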

Appendix C: Jenkins for a .NET application using Docker

  1. Open a terminal, create a working folder if you haven’t already, and enter it.

    In the working folder, run the following commands to create a demo ASP.NET Core Web project:

    dotnet new gitignore
    dotnet new globaljson --sdk-version=8.0.101 --roll-forward=latestFeature
    dotnet new sln -n jenkins-getting-started
    dotnet new web -o src/HelloWorld
    dotnet sln add -s src src/HelloWorld/
  2. Create a Dockerfile used to build the Docker image.

    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /source
    
    # Copy everything
    COPY . ./
    # Restore as distinct layers
    RUN dotnet restore
    # Build and publish a release
    RUN dotnet publish -c release -o /app --no-restore
    
    # Build runtime image
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /app ./
    ENTRYPOINT ["dotnet", "HelloWorld.dll"]
  3. Create a Jenkinsfile.

    pipeline {
    
        environment {
            // Explicitly specify the DOTNET_CLI_HOME environment variable to a writable directory, like /tmp:
            // See also: https://github.com/dotnet/cli/pull/9327
            //           https://github.com/dotnet/sdk/blob/main/src/Common/CliFolderPathCalculatorCore.cs#L14
            // System.UnauthorizedAccessException: Access to the path '/.dotnet' is denied.
            DOTNET_CLI_HOME = '/tmp'
        }
    
        agent any
    
        stages {
            stage('Build') {
                agent {
                    docker {
                        image 'mcr.microsoft.com/dotnet/sdk:8.0'
                        // Run the container on the node specified at the
                        // top-level of the Pipeline, in the same workspace,
                        // rather than on a new node entirely:
                        reuseNode true
                    }
                }
                steps {
                    sh 'dotnet build'
                }
            }
            stage('Test') {
                agent {
                    docker {
                        image 'mcr.microsoft.com/dotnet/sdk:8.0'
                        // Run the container on the node specified at the
                        // top-level of the Pipeline, in the same workspace,
                        // rather than on a new node entirely:
                        reuseNode true
                    }
                }
                steps {
                    sh 'dotnet test'
                }
            }
            stage('Deploy') {
                agent {
                    docker {
                        image 'mcr.microsoft.com/dotnet/sdk:8.0'
                        // Run the container on the node specified at the
                        // top-level of the Pipeline, in the same workspace,
                        // rather than on a new node entirely:
                        reuseNode true
                    }
                }
                steps {
                    sh 'dotnet publish'
                }
            }
            stage('Docker') {
                // Execute the stage on a node pre-configured to accept Docker-based Pipelines
                environment {
                    // Create the Docker Registry credential with ID as `jenkins-docker-registry-creds` on Jenkins.
                    DOCKER_REGISTRY_CREDS = credentials('jenkins-docker-registry-creds')
                    // Replace the following variables with your registry.
                    REGISTRY_SCHEME = 'http'
                    REGISTRY_HOSTNAME = '192.168.56.130'
                    REGISTRY_PORT = '8082'
                }
                steps {
                    sh 'docker build . -t $REGISTRY_HOSTNAME:$REGISTRY_PORT/hello-world:$BRANCH_NAME'
                    sh 'docker login -u $DOCKER_REGISTRY_CREDS_USR -p $DOCKER_REGISTRY_CREDS_PSW $REGISTRY_SCHEME://$REGISTRY_HOSTNAME:$REGISTRY_PORT'
                    sh 'docker push $REGISTRY_HOSTNAME:$REGISTRY_PORT/hello-world:$BRANCH_NAME'
                    sh 'docker logout $REGISTRY_SCHEME://$REGISTRY_HOSTNAME:$REGISTRY_PORT'
                }
            }
        }
    }
  4. The final project structure should look like the following.

    $ tree
    .
    ├── Dockerfile
    ├── global.json
    ├── Jenkinsfile
    ├── jenkins-getting-started.sln
    └── src
        └── HelloWorld
            ├── appsettings.Development.json
            ├── appsettings.json
            ├── HelloWorld.csproj
            ├── Program.cs
            └── Properties
                └── launchSettings.json
    
    4 directories, 9 files
  5. Build and test the project.

    Run the Web application.

    $ dotnet run --project src/HelloWorld/
    Building...
    info: Microsoft.Hosting.Lifetime[14]
          Now listening on: http://localhost:5062
    info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Development
    ...

    Open another terminal, and test the above endpoint.

    $ curl -i http://localhost:5062
    HTTP/1.1 200 OK
    Content-Type: text/plain; charset=utf-8
    Date: Tue, 30 Jan 2024 03:25:20 GMT
    Server: Kestrel
    Transfer-Encoding: chunked
    
    Hello World!
  6. The following is a sample output on Jenkins.

    . . .
    + dotnet build
    MSBuild version 17.8.3+195e7f5a3 for .NET
      Determining projects to restore...
    . . .
    
    + docker build . -t 192.168.56.130:8082/hello-world:main
    DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
                Install the buildx component to build images with BuildKit:
                https://docs.docker.com/go/buildx/
    
    Sending build context to Docker daemon  1.535MB
    . . .
    
    + docker login -u **** -p **** http://192.168.56.130:8082
    WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    [Pipeline] sh
    + docker push 192.168.56.130:8082/hello-world:main
    The push refers to repository [192.168.56.130:8082/hello-world]
    . . .
    
    + docker logout http://192.168.56.130:8082
    Removing login credentials for 192.168.56.130:8082
    . . .
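Before wiring the Dockerfile into Jenkins, you can also build and run the image locally to verify it (the tag hello-world:dev and host port 8080 are arbitrary; ASP.NET Core 8 container images listen on port 8080 by default):

```shell
# Build the image from the working folder, then run it, mapping the app port.
docker build -t hello-world:dev .
docker run --rm -p 8080:8080 hello-world:dev
# In another terminal: curl -i http://localhost:8080
```

This requires a local Docker daemon; stop the container with Ctrl+C when done.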