The controller is a central, coordinating process which stores configuration, loads plugins, and renders the various user interfaces for Jenkins.

An agent is typically a machine, or container, which connects to a Jenkins controller and executes tasks when directed by the controller.

A node is a machine which is part of the Jenkins environment and capable of executing Pipelines or jobs. Both the Controller and Agents are considered to be Nodes.

An executor is a slot for execution of work defined by a Pipeline or job on a Node. A Node may have zero or more Executors configured which corresponds to how many concurrent Jobs or Pipelines are able to execute on that Node.

A workspace is a disposable directory on the file system of a Node where work can be done by a Pipeline or job. Workspaces are typically left in place after a Build or Pipeline run completes, unless specific Workspace cleanup policies have been put in place on the Jenkins Controller. [1]
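Where such a cleanup policy is wanted per job, a Pipeline can also delete its workspace explicitly. This is a minimal sketch assuming the Workspace Cleanup plugin is installed, which provides the cleanWs step:

```groovy
// Sketch: clean up the workspace after every run.
// Assumes the Workspace Cleanup plugin (cleanWs step) is installed.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
    post {
        always {
            cleanWs() // delete the workspace when the run finishes
        }
    }
}
```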

1. Distributed Builds Architecture

A Jenkins controller can operate by itself, both managing the build environment and executing builds with its own executors and resources. If you stick with this "standalone" configuration, you will most likely run out of resources as the number of projects or their load increases.

An agent is a machine set up to offload build workloads from the controller. How builds are scheduled depends on the configuration given to each project. For example, some projects may be configured to "restrict where this project is run", which ties the project to a specific agent or set of labeled agents. Other projects which omit this configuration will select an agent from the available pool in Jenkins.

In a distributed builds environment, the Jenkins controller will use its resources to only handle HTTP requests and manage the build environment. Actual execution of builds will be delegated to the agents. With this configuration it is possible to horizontally scale an architecture, which allows a single Jenkins installation to host a large number of projects and build environments. [2]

In order for a machine to be recognized as an agent, it needs to run a specific agent program to establish bi-directional communication with the controller.

There are different ways to establish a connection between controller and agent:

  • The SSH connector: Configuring an agent to use the SSH connector is the preferred and the most stable way to establish controller-agent communication.

  • The Inbound connector: In this case the connection is initiated from the agent side: an agent program running on the agent machine connects back to the controller.

  • The Inbound-HTTP connector: This approach is quite similar to the Inbound-TCP Java Web Start approach, with the difference that the agent runs headless and the connection can be tunneled over HTTP(S).

  • Custom-script: It is also possible to create a custom script to initialize the communication between controller and agent if the other solutions do not provide enough flexibility for a specific use-case.
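For example, an inbound agent is typically launched on the agent machine with the agent.jar program downloaded from the controller. A hedged sketch; the host, secret, and work directory are placeholders to adapt:

```shell
# Sketch of starting an inbound agent (all values are placeholders).
# Download the agent program from the controller...
curl -sO http://your-jenkins-host:8080/jnlpJars/agent.jar
# ...then connect back to it with the secret shown on the node's page.
java -jar agent.jar \
    -url http://your-jenkins-host:8080/ \
    -name agent1 \
    -secret <secret-from-node-page> \
    -workDir /home/jenkins/agent
```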

2. Nodes and Components

Builds in a distributed builds architecture use nodes, agents, and executors, which are distinct from the Jenkins controller itself. Understanding what each of these components is and does is useful when managing nodes: [3]

2.1. Controllers

The Jenkins controller is the Jenkins service itself and where Jenkins is installed. It is a web server that also acts as a "brain" for deciding how, when, and where to run tasks. Management tasks such as configuration, authorization, and authentication are executed on the controller, which serves HTTP requests. Files written when a Pipeline executes are written to the filesystem on the controller, unless they are off-loaded to an artifact repository such as Nexus or Artifactory.

2.2. Agents

Agents manage the task execution on behalf of the Jenkins controller by using executors. An agent is a small (170KB single jar) Java client process that connects to a Jenkins controller and is assumed to be unreliable. An agent can use any operating system that supports Java. Any tools required for building and testing get installed on the node where the agent runs. Because these tools are a part of the node, they can be installed directly or in a container, such as Docker or Kubernetes. Each agent is effectively a process with its own Process Identifier (PID) on the host machine. In practice, nodes and agents are essentially the same but it is good to remember that they are conceptually distinct.

2.3. Nodes

Nodes are the "machines" on which build agents run. Jenkins monitors each attached node for disk space, free temp space, free swap, clock time/sync, and response time. A node is taken offline if any of these values go outside the configured threshold. Jenkins supports two types of nodes:

  • agents (described above)

  • built-in node

    The built-in node is a node that exists within the controller process. It is possible to use agents and the built-in node to run tasks. However, running tasks on the built-in node is discouraged for security, performance, and scalability reasons. The number of executors configured for the node determines the node’s ability to run tasks. Set the number of executors to 0 to disable running tasks on the built-in node.

2.4. Executors

An executor is a slot for the execution of tasks. Effectively, it is a thread in the agent. The number of executors on a node defines the number of concurrent tasks that can run. In other words, this determines the number of concurrent Pipeline stages that can execute at the same time. The correct number of executors per build node must be determined based on the resources available on the node and the resources required by the workload. When determining how many executors to run on a node, consider CPU and memory requirements, as well as the amount of I/O and network activity:

  • One executor per node is the safest configuration.

  • One executor per CPU core can work well, if the tasks running are small.

  • Monitor I/O performance, CPU load, memory usage, and I/O throughput carefully when running multiple executors on a node.
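These counts can be inspected at runtime from the Jenkins Script Console; a small sketch using the standard Jenkins model API:

```groovy
// Script Console sketch: print the executor count of every node,
// including the built-in node (represented by Jenkins.instance itself).
import jenkins.model.Jenkins

println "built-in: ${Jenkins.instance.numExecutors} executor(s)"
Jenkins.instance.nodes.each { node ->
    println "${node.nodeName}: ${node.numExecutors} executor(s)"
}
```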

3. Installing Jenkins with Docker

Due to Docker’s fundamental platform and container design, a Docker image for a given application, such as Jenkins, can be run on any supported operating system or cloud service also running Docker. [4]

3.1. Configuring Controller

  1. Open up a terminal window, and create a directory named controller.

    mkdir controller
    cd controller
  2. Create an environment file named .env and set the project name to jenkins.

    echo -n COMPOSE_PROJECT_NAME=jenkins > .env
  3. Create a Groovy file named executors.groovy with the following content.

    import jenkins.model.*
    Jenkins.instance.setNumExecutors(0) // Recommended to not run builds on the built-in node
  4. Create a bridge network for the controller.

    docker network create -d bridge jenkins-controller
  5. Create a compose file named compose.yml with the following content.

    version: "2.4"
    services:
      controller:
        container_name: jenkins-controller
        build:
          context: .
          dockerfile_inline: |
            ARG JENKINS_TAG=2.426.3-jdk21
            FROM jenkins/jenkins:$${JENKINS_TAG} (1)
            COPY --chown=jenkins:jenkins executors.groovy /usr/share/jenkins/ref/init.groovy.d/executors.groovy (2)
        restart: always (3)
        ports:
          - "8080:8080"
          - "50000:50000" (4)
        volumes:
          - jenkins-home:/var/jenkins_home:rw (5)
        networks:
          controller:
    volumes:
      jenkins-home:
        name: jenkins-home
    networks:
      controller:
        external: true (6)
        name: jenkins-controller
    1 Use the recommended official jenkins/jenkins image from the Docker Hub repository. [4]
    2 Extend the image to set your desired number of executors (0 recommended on the built-in node). [5]
    3 Always restart the container if it stops. If it’s manually stopped, it’s restarted only when Docker daemon restarts or the container itself is manually restarted. (See the bullet listed in restart policy details)
    4 In order to connect agents through an inbound TCP connection, map the port: -p 50000:50000. That port will be used when you connect agents to the controller.

    If you are only using SSH (outbound) build agents, this port is not required, as connections are established from the controller. If you connect agents using web sockets (since Jenkins 2.217), the TCP agent port is not used either. [5]

    5 NOTE: Avoid using a bind mount from a folder on the host machine into /var/jenkins_home, as this might result in file permission issues (the user used inside the container might not have rights to the folder on the host machine). If you really need to bind mount jenkins_home, ensure that the directory on the host is accessible by the jenkins user inside the container (jenkins user - uid 1000) or use -u some_other_user parameter with docker run. [5]
    6 external specifies that this network’s lifecycle is maintained outside of that of the application.
  6. (Optional) Create a compose file named compose.override.yml with the following content.

    Docker Compose lets you merge and override a set of Compose files together to create a composite Compose file.

    By default, Compose reads two files, a compose.yml and an optional compose.override.yml file. By convention, the compose.yml contains your base configuration. The override file can contain configuration overrides for existing services or entirely new services. [8]

    version: "2.4"
    services:
      controller:
        build:
          args:
            - JENKINS_TAG=2.426.3-jdk21
        environment:
          - TZ=Asia/Shanghai
  7. Starting the controller container:

    docker compose up -d
  8. Post-installation setup wizard.

    Follow the Post-installation setup wizard to finish the remaining steps.

    Print the initial admin password to the console:

    $ sudo docker inspect jenkins-home
    ...
            "Mountpoint": "/var/lib/docker/volumes/jenkins-home/_data",
            "Name": "jenkins-home",
    ...
    $ sudo cat /var/lib/docker/volumes/jenkins-home/_data/secrets/initialAdminPassword
    80df7355be5c4b15933742f7024dd739
  9. (Optional) Expose Jenkins with a Kubernetes service.

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: jenkins
      name: jenkins
    spec:
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 8080
        name: ''
      type: ClusterIP
    ---
    apiVersion: discovery.k8s.io/v1
    kind: EndpointSlice
    metadata:
      name: jenkins-1
      labels:
        kubernetes.io/service-name: jenkins
    addressType: IPv4
    ports:
      - name: ''
        appProtocol: http
        protocol: TCP
        port: 8080
    endpoints:
      - addresses:
          - "192.168.56.130" (1)
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: jenkins.dev.test
      labels:
        app: jenkins
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
    spec:
      ingressClassName: "nginx"
      tls: (2)
        - hosts:
          -  "*.dev.test"
          secretName: "dev.test"
      rules:
        - host: jenkins.dev.test (2)
          http:
            paths:
              - path: /
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: jenkins
                    port:
                      number: 8080
    1 Replace the IP address with the address of the server hosting the Jenkins controller, e.g., 192.168.56.130.
    2 Replace the TLS and hosts of the Ingress with your settings.
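Once the stack is up, a few commands can confirm that the controller is healthy. This sketch assumes the container and volume names used in the compose.yml above:

```shell
# Sanity checks for the controller (run from the controller directory).
docker compose ps        # the jenkins-controller container should be "Up"
curl -fsS http://localhost:8080/login > /dev/null && echo "controller reachable"
# Read the initial admin password directly, without inspecting the volume path:
docker exec jenkins-controller cat /var/jenkins_home/secrets/initialAdminPassword
```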

3.2. Configuring Jenkins SSH Credential

  1. Generating an SSH key pair.

    To generate the SSH key pair, execute a command line tool named ssh-keygen on a machine you have access to. [6]

    ssh-keygen -t ed25519 -f ~/.ssh/jenkins_agent_key
  2. Create a Jenkins SSH credential.

    1. Go to your Jenkins dashboard.

    2. Go to the Manage Jenkins option in the left main menu and click the Credentials button under the Security section.

    3. Select the Add Credentials option from the drop-down on the (global) item under the Stores scoped to Jenkins.

    4. Fill in the form.

      • Kind: SSH Username with private key

      • ID: jenkins

      • Description: Jenkins SSH private key

      • Username: jenkins

      • Private Key: Select Enter directly and press the Add button to insert the content of your private key file at ~/.ssh/jenkins_agent_key.

      • Passphrase: Fill your passphrase used to generate the SSH key pair (leave empty if you didn’t use one at the previous step) and then press the Create button.
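If the instance is managed with the Configuration as Code plugin, the same credential can be declared in YAML instead of through the UI. A sketch assuming that plugin is installed; the ${jenkins-agent-private-key} secret reference is a placeholder for your own secret source:

```yaml
# JCasC sketch of the SSH credential created above.
credentials:
  system:
    domainCredentials:
      - credentials:
          - basicSSHUserPrivateKey:
              scope: GLOBAL
              id: "jenkins"
              username: "jenkins"
              description: "Jenkins SSH private key"
              privateKeySource:
                directEntry:
                  privateKey: "${jenkins-agent-private-key}"
```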

3.3. Configuring Agents using SSH Connector in Docker

  1. Open up a terminal window, and create a directory named agents.

    mkdir agents
    cd agents
  2. Create an environment file named .env and set the project name to jenkins-agents.

    echo -n COMPOSE_PROJECT_NAME=jenkins-agents > .env
  3. Create a bridge network for the agent.

    docker network create -d bridge jenkins-agents
  4. Create a compose file named compose.yml with the following content.

    version: "2.4"
    services:
      agent:
        container_name: jenkins-agent
        image: jenkins/ssh-agent:alpine-jdk21
        restart: always
        ports:
          - "2200:22"
        environment:
          - "JENKINS_AGENT_SSH_PUBKEY=[your-public-key]" (1)
          # e.g. - "JENKINS_AGENT_SSH_PUBKEY=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBBHLJ+8RuLPO8dO1tm3RAt5kc3HqYwJUYMmRPjhtI3" (1)
        volumes:
          - agent-home:/home/jenkins/agent:rw (2)
        networks:
          agents:
    volumes:
      agent-home:
        name: jenkins-agent-home
    networks:
      agents:
        external: true
        name: jenkins-agents
    1 The value of JENKINS_AGENT_SSH_PUBKEY MUST include the full contents of your .pub file created above (i.e. ~/.ssh/jenkins_agent_key.pub), including the ssh-XXXX prefix. [6]
    2 When using the Linux image, you have to set the value of the Remote root directory to /home/jenkins/agent in the agent configuration UI.

    When using the Windows image, you have to set the value of the Remote root directory to C:/Users/jenkins/Work in the agent configuration UI. [7]

  5. Starting the agent container.

    docker compose up -d
  6. Set up the jenkins-agent on Jenkins.

    1. Go to your Jenkins dashboard.

    2. Go to Manage Jenkins option in left main menu.

    3. Go to Nodes item under the System Configuration.

    4. Go to New Node option in top right menu.

    5. Fill in the Node name and select the type (e.g. Name: agent1, Type: Permanent Agent), and then press the Create button.

    6. Now fill in the fields.

      • Remote root directory; (e.g. /home/jenkins/agent)

      • Labels; (e.g. agent1 )

      • Usage; (e.g. Use this node as much as possible)

      • Launch method; (e.g. Launch agents via SSH)

      • Host; (e.g. localhost or your IP address)

      • Credentials; (e.g. jenkins)

      • Host Key Verification Strategy; (e.g. Non verifying Verification Strategy, for testing only, NOT recommended)

        It’s recommended to use Manually trusted key Verification Strategy, then enter the agent configure page to trust the host key manually.
      • Expand the Advanced tab, and set the Port to 2200.

    7. Press the Save button; agent1 will be registered and launched by the controller.

  7. Delegating the first job to agent1.

    1. Go to your Jenkins dashboard

    2. Select New Item on side menu

    3. Enter an item name. (e.g.: First Job to Agent1)

    4. Select the Freestyle project and press OK.

    5. Now select the option Execute shell at Build Steps section.

    6. Add the command: echo $NODE_NAME in the Command field of the Execute shell step and the name of the agent will be printed inside the log when this job is run.

    7. Press the Save button and then select the option Build Now.

    8. Wait some seconds and then go to Console Output page.

      Started by user admin
      Running as SYSTEM
      Building remotely on agent1 in workspace /home/jenkins/agent/workspace/test
      [test] $ /bin/sh -xe /tmp/jenkins5590136104445527177.sh
      + echo agent1
      agent1
      Finished: SUCCESS
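The same check can also be written as a Declarative Pipeline pinned to the agent's label; a minimal sketch assuming the agent1 label configured above:

```groovy
pipeline {
    // Run on the node labeled agent1, as configured above.
    agent { label 'agent1' }
    stages {
        stage('Where am I') {
            steps {
                // NODE_NAME is a built-in environment variable.
                sh 'echo $NODE_NAME'
            }
        }
    }
}
```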

3.4. Configuring Agents running Docker in Docker

  1. Open up a terminal window, and create a directory named agents/dind:

    mkdir -p agents/dind
    cd agents/dind
  2. Create an environment file named .env and set the project name to jenkins-agents-dind:

    echo -n COMPOSE_PROJECT_NAME=jenkins-agents-dind > .env
  3. Create a bridge network for the agent:

    docker network create -d bridge jenkins-agents-dind
  4. Create a compose file named compose.yml with the following content:

    version: "2.4"
    services:
      agent:
        container_name: jenkins-agent-dind
        # image: qqbuby/jenkins-ssh-dind-agent:5.25.0-jdk21
        build:
          context: .
          dockerfile_inline: |
            ARG SSH_AGENT_TAG=jdk21
            FROM jenkins/ssh-agent:$${SSH_AGENT_TAG}
            ARG DOCKER_CE_CLI_VERSION=5:25.0.1-1~debian.12~bookworm
            RUN apt-get update \
                && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
                    ca-certificates \
                    curl \
                    lsb-release \
                && rm -rf /var/lib/apt/lists/*
            RUN curl -fsSLo /usr/share/keyrings/docker-archive-keyring.asc https://download.docker.com/linux/debian/gpg
            RUN echo "deb [arch=$(dpkg --print-architecture) \
                      signed-by=/usr/share/keyrings/docker-archive-keyring.asc] \
                      https://download.docker.com/linux/debian \
                      $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
            RUN apt-get update \
                && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
                    docker-ce-cli=$${DOCKER_CE_CLI_VERSION} \ (1)
                && rm -rf /var/lib/apt/lists/*
        restart: always
        ports:
          - "2210:22" (2)
        environment:
          - "JENKINS_AGENT_SSH_PUBKEY=[your-public-key]" (3)
          # e.g. - "JENKINS_AGENT_SSH_PUBKEY=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBBHLJ+8RuLPO8dO1tm3RAt5kc3HqYwJUYMmRPjhtI3"
          - DOCKER_HOST=tcp://docker:2376
          - DOCKER_CERT_PATH=/certs/client
          - DOCKER_TLS_VERIFY=1
        volumes:
          - agent-home:/home/jenkins/agent:rw
          - docker-certs:/certs/client:ro
        networks:
          agents:
        depends_on:
          - docker
      docker:
        container_name: jenkins-docker
        image: docker:25
        restart: always
        ports:
          - "2376"
        privileged: true
        environment:
           - DOCKER_TLS_CERTDIR=/certs
        volumes:
          - agent-home:/home/jenkins/agent:rw (4)
          - docker-certs:/certs/client:rw
          - docker-root:/var/lib/docker:rw
        networks:
          agents:
            aliases:
              - docker
    volumes:
      agent-home:
        name: jenkins-agent-home-dind
      docker-certs:
        name: jenkins-agent-docker-certs
      docker-root:
        name: jenkins-agent-docker-root
    networks:
      agents:
        external: true
        name: jenkins-agents-dind
    1 Extend the jenkins/ssh-agent image to install Docker CLI.
    2 If your machine already has an SSH server running on port 22, use another port to publish the agent container's port 22 (SSH), such as 2210:22.
    3 The value of JENKINS_AGENT_SSH_PUBKEY MUST include the full contents of your .pub file created above (i.e. ~/.ssh/jenkins_agent_key.pub), including the ssh-XXXX prefix. [6]
    4 Share the agent home volume (i.e. agent-home) with the Docker container, otherwise the pipeline will hang with an error like the following:
    . . .
    process apparently never started in /home/jenkins/agent/workspace/jenkins-getting-started_main@tmp/durable-7a43d858
    (running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
    [Pipeline] }
    [Pipeline] // stage
    [Pipeline] }
    $ docker stop --time=1 383e1c4132052f8e461d87bf75108d3e627623cafe3de5f7f5ca80f843c324ae
    $ docker rm -f --volumes 383e1c4132052f8e461d87bf75108d3e627623cafe3de5f7f5ca80f843c324ae
    [Pipeline] // withDockerContainer
    [Pipeline] }
    [Pipeline] // withEnv
    [Pipeline] }
    [Pipeline] // node
    [Pipeline] End of Pipeline
    ERROR: script returned exit code -2
    Finished: FAILURE
  5. (Optional) Create a compose file named compose.override.yml with the following content:

    version: "2.4"
    services:
      agent:
        build:
          args:
            - SSH_AGENT_TAG=jdk21
            - DOCKER_CE_CLI_VERSION=5:25.0.1-1~debian.12~bookworm
      docker:
        image: docker:25
        # If an insecure registry isn’t marked as insecure,
        # docker pull, docker push, and docker search result
        # in error messages, prompting the user to either
        # secure or pass the --insecure-registry flag to the
        # Docker daemon.
        # command: ["--insecure-registry=192.168.56.0/24"]
  6. Starting the agent and docker container:

    docker compose up -d
  7. Refer to Configuring agents using the SSH connector in Docker (replacing SSH port 2200 with 2210) to set up the agent on Jenkins, then create a Freestyle project using Execute shell with the docker version command, select the option Build Now, and go to the Console Output page.

    Started by user admin
    Running as SYSTEM
    Building remotely on agent1 in workspace /home/jenkins/agent/workspace/test
    [test] $ /bin/sh -xe /tmp/jenkins2069680891022148280.sh
    + docker version
    Client: Docker Engine - Community
     Version:           25.0.1
     API version:       1.44
     Go version:        go1.21.6
     Git commit:        29cf629
     Built:             Tue Jan 23 23:09:46 2024
     OS/Arch:           linux/amd64
     Context:           default
    
    Server: Docker Engine - Community
     Engine:
      Version:          25.0.1
      API version:      1.44 (minimum version 1.24)
      Go version:       go1.21.6
      Git commit:       71fa3ab
      Built:            Tue Jan 23 23:09:59 2024
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          v1.7.12
      GitCommit:        71909c1814c544ac47ab91d2e8b84718e517bb99
     runc:
      Version:          1.1.11
      GitCommit:        v1.1.11-0-g4bccb38
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
    Finished: SUCCESS
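With the Docker CLI on the agent and DOCKER_HOST pointing at the docker service, Pipelines scheduled on this node can drive Docker directly. A sketch; the agent-dind label is a hypothetical name for however you labeled this agent:

```groovy
pipeline {
    // 'agent-dind' is a hypothetical label for the agent configured above.
    agent { label 'agent-dind' }
    stages {
        stage('Docker check') {
            steps {
                // These commands reach the docker:dind sidecar via DOCKER_HOST.
                sh 'docker version'
                sh 'docker run --rm hello-world'
            }
        }
    }
}
```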

4. Blue Ocean

Blue Ocean as it stands provides easy-to-use Pipeline visualization. It was intended to be a rethink of the Jenkins user experience, designed from the ground up for Jenkins Pipeline. Blue Ocean was intended to reduce clutter and increase clarity for all users. [9]

  • Sophisticated visualization of continuous delivery (CD) Pipelines, allowing for fast and intuitive comprehension of your Pipeline’s status.

  • Pipeline editor makes the creation of Pipelines more approachable, by guiding the user through a visual process to create a Pipeline.

  • Personalization to suit the role-based needs of each member of the team.

  • Pinpoint precision when intervention is needed or issues arise. Blue Ocean shows where attention is needed, facilitating exception handling and increasing productivity.

  • Native integration for branches and pull requests, which enables maximum developer productivity when collaborating on code in GitHub and Bitbucket.

When Jenkins is installed on most platforms, the Blue Ocean plugin and all necessary dependent plugins, which comprise the Blue Ocean suite of plugins, are not installed by default.

To install the Blue Ocean suite of plugins on an existing Jenkins instance: [10]

  1. Ensure you are logged in to Jenkins as a user with the Administer permission.

  2. From the Jenkins home page, select Manage Jenkins on the left and then Plugins under the System Configuration.

  3. Select the Available plugins tab and enter blueocean in the Filter text box. This filters the list of plugins based on the name and description.

  4. Select the box to the left of Blue Ocean, and then select either the Install after restart option (recommended) or the Install without restart option at the top right of the page.

    It is not necessary to select other plugins in this list. The main Blue Ocean plugin automatically selects and installs all dependent plugins, composing the Blue Ocean suite of plugins.

    If you select the Install without restart option, you must restart Jenkins to gain full Blue Ocean functionality.

Once Blue Ocean is installed and you are logged in to the Jenkins classic UI, the Blue Ocean UI can be accessed by selecting Open Blue Ocean on the left side of the screen.

Alternatively, access Blue Ocean directly by appending /blue to the end of the Jenkins server’s URL. For example https://jenkins-server-url/blue.

If you need features that Blue Ocean does not provide, select the Go to classic icon at the top of a common section of Blue Ocean’s navigation bar to return to the classic UI.

5. Pipeline

Jenkins Pipeline (or simply "Pipeline" with a capital "P") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.

The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project’s source control repository, which is the foundation of "Pipeline-as-code"; treating the CD pipeline as a part of the application to be versioned and reviewed like any other code. [9]

5.1. Pipeline Concepts

The following concepts are key aspects of Jenkins Pipeline, which tie in closely to Pipeline syntax.

  • Pipeline

    A Pipeline is a user-defined model of a CD pipeline. A Pipeline’s code defines your entire build process, which typically includes stages for building an application, testing it and then delivering it.

    Also, a pipeline block is a key part of Declarative Pipeline syntax.

  • Node

    A node is a machine which is part of the Jenkins environment and is capable of executing a Pipeline.

    Also, a node block is a key part of Scripted Pipeline syntax.

  • Stage

    A stage block defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages), which is used by many plugins to visualize or present Jenkins Pipeline status/progress.

  • Step

    A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process). For example, to execute the shell command make, use the sh step: sh 'make'. When a plugin extends the Pipeline DSL, that typically means the plugin has implemented a new step.

    For an overview of available steps, please refer to the Pipeline Steps reference which contains a comprehensive list of steps built into Pipeline as well as steps provided by plugins. [12]
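These concepts map directly onto Scripted Pipeline syntax; a minimal sketch in which a node block allocates an executor and workspace, a stage labels a subset of the work, and echo and sh are individual steps:

```groovy
// Scripted Pipeline equivalent of the concepts above.
node {
    stage('Build') {
        echo 'Building..'  // a step
        sh 'make'          // another step, running a shell command
    }
}
```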

5.2. Creating Pipelines

A Pipeline can be created in one of the following ways:

  • Through Blue Ocean - after setting up a Pipeline project in Blue Ocean, the Blue Ocean UI helps you write your Pipeline’s Jenkinsfile and commit it to source control.

    Blue Ocean automatically generates an SSH public/private key pair or provides you with an existing pair for the current Jenkins user. This credential is automatically registered in Jenkins with the following details for this Jenkins user:

    • Domain: blueocean-private-key-domain

    • ID: jenkins-generated-ssh-key

    • Name: <jenkins-username> (jenkins-generated-ssh-key)

  • Through the classic UI - you can enter a basic Pipeline directly in Jenkins through the classic UI.

  • In SCM - you can write a Jenkinsfile manually, which you can commit to your project’s source control repository.

The Multibranch Pipeline project type enables you to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages and executes Pipelines for branches which contain a Jenkinsfile in source control.

5.3. Jenkinsfile

Using a text editor, ideally one which supports Groovy syntax highlighting, create a new Jenkinsfile in the root directory of the project. [11]

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}

The Declarative Pipeline example above contains the minimum necessary structure to implement a continuous delivery pipeline. The agent directive, which is required, instructs Jenkins to allocate an executor and workspace for the Pipeline. Without an agent directive, not only is the Declarative Pipeline not valid, it would not be capable of doing any work! By default the agent directive ensures that the source repository is checked out and made available for steps in the subsequent stages.

The stages directive, and steps directives are also required for a valid Declarative Pipeline as they instruct Jenkins what to execute and in which stage it should be executed.

5.4. Using Docker

Many organizations use Docker to unify their build and test environments across machines, and to provide an efficient mechanism for deploying applications.

To use Docker with Pipeline, install the Docker Pipeline plugin:

  • Using the GUI: From your Jenkins dashboard navigate to Manage Jenkins > Plugins and select the Available plugins tab. Locate this plugin by searching for docker-workflow.

  • Using the CLI tool:

    jenkins-plugin-cli --plugins docker-workflow:572.v950f58993843
  • Using direct upload. Download one of the releases and upload it to your Jenkins instance.

Pipeline is designed to easily use Docker images as the execution environment for a single Stage or the entire Pipeline. This means that a user can define the tools required for their Pipeline, without having to manually configure agents. Any tool that can be packaged in a Docker container can be used with ease, by making only minor edits to a Jenkinsfile. [13]

pipeline {
    agent {
        docker { image 'node:20.11.0-alpine3.19' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'id'
                sh 'node --version'
            }
        }
    }
}

When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within:

. . .
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
+ id
uid=1000(node) gid=1000(node) groups=1000(node)
[Pipeline] sh
+ node --version
v20.11.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
. . .

5.4.1. Workspace Synchronization

If it is important to keep the workspace synchronized with other stages, use reuseNode true. Otherwise, a dockerized stage can be run on the same agent or any other agent, but in a temporary workspace.

By default, for a containerized stage, Jenkins:

  1. Picks an agent.

  2. Creates a new empty workspace.

  3. Clones pipeline code into it.

  4. Mounts this new workspace into the container.

If you have multiple Jenkins agents, your containerized stage can be started on any of them.

When reuseNode is set to true, no new workspace will be created, and the current workspace from the current agent will be mounted into the container. After this, the container will be started on the same node, so all of the data will be synchronized.

pipeline {
    agent any
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'gradle:8.2.0-jdk17-alpine'
                    // Run the container on the node specified at the
                    // top-level of the Pipeline, in the same workspace,
                    // rather than on a new node entirely:
                    reuseNode true
                }
            }
            steps {
                sh 'gradle --version'
            }
        }
    }
}

5.4.2. Caching Data for Containers

Many build tools will download external dependencies and cache them locally for future re-use. Since containers are initially created with "clean" file systems, this can result in slower Pipelines, as they may not take advantage of on-disk caches between subsequent Pipeline runs.

Pipeline supports adding custom arguments that are passed to Docker, allowing users to specify custom Docker Volumes to mount, which can be used for caching data on the agent between Pipeline runs. The following example will cache ~/.m2 between Pipeline runs utilizing the maven container, avoiding the need to re-download dependencies for subsequent Pipeline runs.

pipeline {
    agent {
        docker {
            image 'maven:3.9.3-eclipse-temurin-17'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'
            }
        }
    }
}

5.4.3. Using Multiple Containers

It has become increasingly common for code bases to rely on multiple different technologies. For example, a repository might have both a Java-based back-end API implementation and a JavaScript-based front-end implementation. Combining Docker and Pipeline allows a Jenkinsfile to use multiple types of technologies, by combining the agent {} directive with different stages.

pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3.9.6-eclipse-temurin-17-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:20.11.0-alpine3.19' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}

5.5. Deploying on Kubernetes

  1. Install Kubernetes CLI plugin.

    1. Using the GUI: From the Jenkins dashboard navigate to Manage Jenkins > Plugins and select the Available tab. Locate this plugin by searching for kubernetes-cli.

    2. Using the CLI tool:

      jenkins-plugin-cli --plugins kubernetes-cli:1.12.1
  2. Configure Credentials

    Several types of credentials are supported and can be used to authenticate against Kubernetes clusters, including plain KubeConfig files (Secret file), tokens (Secret text), certificates, and username/password pairs.

    If the Jenkins Agent is running within a Pod (e.g. by using the Kubernetes plugin), you can fall back to the Pod’s ServiceAccount by not setting any credentials.

    Now, let’s create a KubeConfig credential using the Secret file kind. On the Jenkins dashboard, go to Manage Jenkins > Credentials, hover over (global), and select Add credentials. Fill in the fields as below:

    • Kind: Secret file.

    • Scope: Global (Jenkins, nodes, items, all child items, etc)

    • File: Upload your cluster kubeconfig file.

    • ID: kubernetes-admin.

    • Description: (optional)
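    The Secret file expected here is an ordinary kubeconfig. As a rough sketch (every name, address, and data value below is a placeholder, not taken from a real cluster), such a file has this shape:

```shell
# Hypothetical kubeconfig layout; the *-data fields hold base64-encoded
# certificate material in practice.
cat > kubeconfig-example.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://192.168.56.130:6443
    certificate-authority-data: PLACEHOLDER_BASE64_CA
users:
- name: kubernetes-admin
  user:
    client-certificate-data: PLACEHOLDER_BASE64_CERT
    client-key-data: PLACEHOLDER_BASE64_KEY
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes
EOF
grep 'current-context' kubeconfig-example.yaml
```

    On a cluster set up with kubeadm, this is typically the content of /etc/kubernetes/admin.conf.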

  3. Create a Freestyle project for testing:

    • Scroll down to the Build Environment section.

      1. Select Configure Kubernetes CLI (kubectl) with multiple credentials.

      2. In the Credential dropdown, select the credentials (e.g., kubernetes-admin) to authenticate on the cluster or the kubeconfig stored in Jenkins.

    • Under Build Steps, add an Execute shell step with the kubectl cluster-info command.

    • Click Save, and then select Build Now.

  4. Wait a few seconds, then go to the Console Output page.

    Started by user admin
    Running as SYSTEM
    Building remotely on agent-dind-2 in workspace /home/jenkins/agent/workspace/First Job to K8s
    [First Job to K8s] $ /bin/sh -xe /tmp/jenkins17537654207595799867.sh
    + kubectl cluster-info
    /tmp/jenkins17537654207595799867.sh: 2: kubectl: not found (1)
    Build step 'Execute shell' marked build as failure
    [kubernetes-cli] kubectl configuration cleaned up
    Finished: FAILURE
    1 To solve the kubectl: not found problem, install the kubectl command-line tool on the agent node.

    You can also try to use the docker cp to copy the kubectl into the specific agent container.

    $ docker cp $(which kubectl) jenkins-agent-dind:/usr/local/bin
    Successfully copied 49.7MB to jenkins-agent-dind:/usr/local/bin
  5. Click Build Now again, and check the log on the Console Output page.

    Started by user admin
    Running as SYSTEM
    Building remotely on agent-dind-2 in workspace /home/jenkins/agent/workspace/First Job to K8s
    [First Job to K8s] $ /bin/sh -xe /tmp/jenkins9182137363539535938.sh
    + kubectl cluster-info
    Kubernetes control plane is running at https://192.168.56.130:6443
    CoreDNS is running at https://192.168.56.130:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    [kubernetes-cli] kubectl configuration cleaned up
    Finished: SUCCESS

5.6. Using SSH on a remote machine

  1. Create an SSH key pair with ssh-keygen

    ssh-keygen -t ed25519 -f .ssh/id_ed25519
    Regenerate the public key using ssh-keygen -y -f .ssh/id_ed25519 if you lose it.
  2. Copy the public key to the destination host

    ssh-copy-id -i .ssh/id_ed25519.pub [user@]hostname # e.g., jenkins@node-3
  3. Create an SSH Username with private key credential with the ID jenkins-ssh-key-for-node-3

  4. The following snippet is used to execute a command (e.g., date) on a remote host (e.g., 192.168.211.133).

    environment {
        LOGIN_NAME="jenkins"
        DESTINATION_HOST="192.168.211.133"
    }
    steps {
        // Create a SSH Username with private key credential with ID as `jenkins-ssh-key-for-node-3` on Jenkins.
        withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-node-3', \
                                                     keyFileVariable: 'JENKINS_SSH_KEY_FOR_NODE_3')]) {
            sh 'ssh -T -o StrictHostKeyChecking=no -i $JENKINS_SSH_KEY_FOR_NODE_3 -l $LOGIN_NAME $DESTINATION_HOST date'
        }
    }
    By convention, variable names for environment variables are typically specified in capital case, with individual words separated by underscores.

6. Installing a Jenkins agent on Windows

Here, we use OpenSSH to establish an SSH connection between the Jenkins Controller and the Windows Agent.

  1. It’s also required to install the necessary build tools, such as Git, on the Windows agent, and to make sure Git is in the Path environment variable, like . . .;C:\Program Files\Git\cmd;. . ..

    > $env:Path # `path` on batch shell
    . . .;C:\Program Files\Git\cmd;. . .
  2. It’s suggested to use the powershell (Windows PowerShell Script) or bat (Windows Batch Script) step on Windows instead of sh, like:

    pipeline {
    
        agent { label 'dotnet && windows' }
    
        stages {
            stage('Publish') {
                steps {
     //             bat 'dotnet publish .\\src\\Example.WebApp'
     //             bat '''dotnet build ^
     //    .\\src\\Example.WebApp.Installer\\ ^
     //    -r win-x64 ^
     //    -c Release ^
     //    -p:InstallerPlatform=x64 ^
     //    -p:SuppressValidation=true'''
                    powershell 'dotnet publish .\\src\\Example.WebApp'
                    // PowerShell uses the backtick, not ^, for line continuation:
                    powershell '''dotnet build `
        .\\src\\Example.WebApp.Installer\\ `
        -r win-x64 `
        -c Release `
        -p:InstallerPlatform=x64 `
        -p:SuppressValidation=true'''
                }
            }
        }
    }

6.1. Install OpenSSH for Windows

To install OpenSSH using PowerShell, run PowerShell as an Administrator. To make sure that OpenSSH is available, run the following cmdlet:

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

Then, install the server or client components as needed:

# Install the OpenSSH Client
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

# Install the OpenSSH Server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

To start and configure OpenSSH Server for initial use:

# Set the sshd service to be started automatically
Get-Service -Name sshd | Set-Service -StartupType Automatic

# Now start the sshd service
Start-Service sshd

6.2. Create a Local User for Jenkins on Windows Agent

Run PowerShell as an Administrator with the following cmdlet:

$Password = Read-Host -AsSecureString
New-LocalUser -Name jenkins -Password $Password
The home directory (i.e., $env:USERPROFILE) will be initialized at the first sign-in.

6.3. Copy the public key to the Windows Agent

$JENKINS_AGENT_SSH_PUBKEY = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKBBHLJ+8RuLPO8dO1tm3RAt5kc3HqYwJUYMmRPjhtI3"
$USERPROFILE = "C:\Users\jenkins"
$REMOTE_POWER_SHELL = "powershell New-Item -Force -ItemType Directory -Path $USERPROFILE\.ssh; Add-Content -Force -Path $USERPROFILE\.ssh\authorized_keys -Value '$JENKINS_AGENT_SSH_PUBKEY'"
ssh jenkins@servername $REMOTE_POWER_SHELL

6.4. Install JDK on Windows Agent

It’s recommended to install the same JDK version as the Jenkins Controller, which you can check with java --version.

For example,

> ssh jenkins@node-4 java -version
openjdk version "21.0.2" 2024-01-16 LTS
OpenJDK Runtime Environment Temurin-21.0.2+13 (build 21.0.2+13-LTS)
OpenJDK 64-Bit Server VM Temurin-21.0.2+13 (build 21.0.2+13-LTS, mixed mode, sharing)

6.5. Configure Jenkins Agent in Jenkins

  • Go to Manage Jenkins > Manage Nodes > New Node

  • Enter the Node Name, select Permanent Agent, then click OK.

  • Remote Directory: the directory where Jenkins will perform builds on the agent machine, e.g. C:\Users\jenkins\agent

  • Launch method: Choose "Launch agent via SSH"

    • Enter the Host address (e.g., 192.168.211.134)

    • Select the Credentials created in the previous steps, e.g. jenkins (Jenkins SSH private key)

    • Select "Manually trusted key Verification Strategy" for Host key Verification Strategy

    • Keep the defaults for the other options. (Click the help icon next to each field for details.)

  • Click Save to save the new node

Appendix A: GitLab Jenkins Integration

GitLab is a fully featured software development platform that includes, among other powerful features, built-in GitLab CI/CD to leverage the ability to build, test, and deploy your apps without requiring you to integrate with CI/CD external tools. [14]

However, many organizations have been using Jenkins for their deployment processes, and need an integration with Jenkins to be able to onboard to GitLab before switching to GitLab CI/CD. Others have to use Jenkins to build and deploy their applications because of the inability to change the established infrastructure for current projects, but they want to use GitLab for all the other capabilities.

With GitLab’s Jenkins integration, you can effortlessly set up your project to build with Jenkins, and GitLab will output the results for you right from GitLab’s UI.

After a Jenkins integration is configured, a build is triggered in Jenkins when you push code to your repository or create a merge request in GitLab. The Jenkins pipeline status displays on merge request widgets and on the GitLab project’s home page. [21]

To configure a Jenkins integration with GitLab:

  • Grant Jenkins access to the GitLab project.

  • Configure the Jenkins server.

  • Configure the Jenkins project.

  • Configure the GitLab project.

A.1. Install GitLab using Docker

  1. Open a terminal, and create a bridge network named gitlab-ce.

    docker network create gitlab-ce
  2. Create a compose.yml file.

    version: "2.4"
    services:
      gitlab-ce:
        container_name: gitlab-ce
        image: gitlab/gitlab-ce:16.5.8-ce.0 # Pin GitLab to a specific Community Edition version
        restart: always
        volumes:
          - data:/var/opt/gitlab:rw # For storing application data.
          - logs:/var/log/gitlab:rw # For storing logs.
          - config:/etc/gitlab:rw   # For storing the GitLab configuration files.
        networks:
          gitlab-ce:
    volumes:
      data:
        name: gitlab-ce-data
      logs:
        name: gitlab-ce-logs
      config:
        name: gitlab-ce-config
    networks:
      gitlab-ce:
        external: true
        name: gitlab-ce
  3. Create a compose.override.yml file.

    version: "2.4"
    services:
      gitlab-ce:
        # Pin GitLab to a specific Community Edition version
        image: gitlab/gitlab-ce:16.5.8-ce.0
        # Use a valid externally-accessible hostname or IP address. Do not use `localhost`.
        hostname: 'node-0'
        environment:
          # If you want to use a different host port than 80 (HTTP), 443 (HTTPS), or 22 (SSH), you
          # need to add a separate --publish directive to the docker run command.
          GITLAB_OMNIBUS_CONFIG: |
            # Add any other gitlab.rb configuration here, each on its own line
            gitlab_rails['gitlab_shell_ssh_port'] = 2424 (1)
            external_url 'http://node-0:8929' (2)
        ports:
          - '8929:8929'
          - '2424:22'
        extra_hosts:
          - "node-0:192.168.56.130"
    1 If you don’t want to change the server’s default SSH port, you can configure a different SSH port that GitLab uses for Git over SSH pushes. In that case, the SSH clone URLs look like ssh://git@gitlab.example.com:<portNumber>/user/project.git. [20]
    2 To display the correct repository clone links to your users, you must provide GitLab with the URL your users use to reach the repository. You can use the IP of your server, but a Fully Qualified Domain Name (FQDN) is preferred. [22]
  4. Start the gitlab-ce container.

    docker compose up -d

    The initialization process may take a long time. You can track this process with: [20]

    docker logs -f gitlab-ce

    After starting the container, you can visit http://node-0:8929. It might take a while before the Docker container starts to respond to queries.

    Visit the GitLab URL, and sign in with the username root and the password from the following command:

    sudo cat $(docker inspect gitlab-ce-config -f "{{.Mountpoint}}")/initial_root_password
    The password file is automatically deleted on the first container restart after 24 hours.

Appendix B: Sonatype Nexus Repository OSS

Sonatype Nexus Repository Manager provides a central platform for storing build artifacts. [15]

B.1. Installing Nexus Repository with Docker

  1. Open a terminal, create a .env file, and set the Compose project name to sonatype-nexus.

    echo -n COMPOSE_PROJECT_NAME=sonatype-nexus > .env
  2. Create a bridge network named sonatype-nexus.

    docker network create -d bridge sonatype-nexus
  3. Create a compose.yml file.

    version: "2.4"
    services:
      nexus:
        container_name: sonatype-nexus
        user: nexus:nexus
        image: sonatype/nexus3:3.64.0
        restart: always
        volumes:
          - data:/nexus-data:rw
        networks:
          nexus:
    volumes:
      data:
        name: nexus-data
    networks:
      nexus:
        external: true
        name: sonatype-nexus
  4. Create a compose.override.yml file.

    version: "2.4"
    services:
      nexus:
        ports:
          - "8081:8081"
          - "8082:8082" # Using for Docker Registry
        # environment:
        #   NEXUS_CONTEXT: nexus (1)
        #   INSTALL4J_ADD_VM_PARAMS: ... # Passed to the Install4J startup script. Defaults to -Xms2703m -Xmx2703m -XX:MaxDirectMemorySize=2703m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs.
    1 An environment variable can be used to control the Nexus Context Path, NEXUS_CONTEXT, defaults to /. [16] [17]
  5. Start the sonatype-nexus container.

    docker compose up -d
  6. Go to a browser at http://localhost:8081, click the Sign in button at the top right, fill in the login fields, and then complete the required setup tasks.

    Your admin user password is located in /nexus-data/admin.password on the server.

    1. Inspect the Docker volume (i.e. nexus-data).

      $ docker inspect nexus-data
      ...
              "Mountpoint": "/var/lib/docker/volumes/nexus-data/_data",
      ...
    2. Print the user password.

      sudo cat /var/lib/docker/volumes/nexus-data/_data/admin.password

B.2. Docker Hosted Repositories

A hosted repository using the Docker repository format is typically called a private Docker registry. It can be used to upload your own container images as well as third-party images. It is common practice to create two separate hosted repositories for these purposes. [18]

  1. Go to the Nexus dashboard, and select the gear icon in the top bar, or enter http://localhost:8081/#admin/repository.

  2. Select Repositories in the left menu to open the Manage repositories panel, or enter http://localhost:8081/#admin/repository/repositories.

  3. Click the Create repository button, and select the docker (hosted) recipe, then fill the form.

    • Name: docker-registry

    • HTTP: 8082

  4. Click the Create repository button at the bottom.

  5. Log in with Docker, and push/pull images to/from Nexus.

    docker login -u admin -p [YOUR ADMIN PASSWORD OF NEXUS] http://localhost:8082
    $ docker pull busybox
    Using default tag: latest
    latest: Pulling from library/busybox
    9ad63333ebc9: Pull complete
    Digest: sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74
    Status: Downloaded newer image for busybox:latest
    docker.io/library/busybox:latest
    $ docker tag busybox:latest localhost:8082/busybox
    $ docker push localhost:8082/busybox
    Using default tag: latest
    The push refers to repository [localhost:8082/busybox]
    2e112031b4b9: Pushed
    latest: digest: sha256:d319b0e3e1745e504544e931cde012fc5470eba649acc8a7b3607402942e5db7 size: 527
    $ docker pull localhost:8082/busybox
    Using default tag: latest
    latest: Pulling from busybox
    Digest: sha256:d319b0e3e1745e504544e931cde012fc5470eba649acc8a7b3607402942e5db7
    Status: Image is up to date for localhost:8082/busybox:latest
    localhost:8082/busybox:latest
  6. Go back to Browse in Nexus (e.g. http://localhost:8081/#browse/browse:docker-registry) to check the repository status.

By default, Docker assumes all registries to be secure, except for local registries. Communicating with an insecure registry isn’t possible if Docker assumes that registry is secure. In order to communicate with an insecure registry, the Docker daemon requires --insecure-registry in one of the following two forms:

  • --insecure-registry myregistry:5000 tells the Docker daemon that myregistry:5000 should be considered insecure.

  • --insecure-registry 10.1.0.0/16 tells the Docker daemon that all registries whose domain resolves to an IP address in the subnet described by the CIDR syntax should be considered insecure.

The flag can be used multiple times to allow multiple registries to be marked as insecure.

If an insecure registry isn’t marked as insecure, docker pull, docker push, and docker search result in error messages, prompting the user to either secure or pass the --insecure-registry flag to the Docker daemon as described above.

Local registries, whose IP address falls in the 127.0.0.0/8 range, are automatically marked as insecure as of Docker 1.3.2. It isn’t recommended to rely on this, as it may change in the future.

$ docker info
  . . .
 Insecure Registries:
  127.0.0.0/8
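Rather than passing --insecure-registry flags on the dockerd command line, the daemon is commonly configured through its daemon.json file instead. A minimal sketch, reusing the example registry values from above (written to a local file here for illustration; on a real host this would be /etc/docker/daemon.json, followed by a daemon restart):

```shell
# Sketch of a daemon.json marking two registries as insecure.
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["myregistry:5000", "10.1.0.0/16"]
}
EOF
cat daemon.json
```

After editing the real file, restart the daemon (e.g. systemctl restart docker) and confirm with docker info that the registries appear under Insecure Registries.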

B.3. NuGet Hosted Repositories

A hosted repository for NuGet can be used to upload your own packages as well as third-party packages. The repository manager includes a hosted NuGet repository named nuget-hosted by default. [19]

  1. Go to the Nexus dashboard, sign in, and click the user name at the top right, or enter http://localhost:8081/#user/account.

  2. On the left panel, select the NuGet API Key.

  3. Click the Access API Key, authenticate with your credentials, and then click Copy to Clipboard.

  4. Click the gear icon in the top panel, then select Realms under Security in the left panel.

  5. Select the NuGet API-Key Realm on the left Available tab panel, and transfer it to the right Active tab panel.

  6. Click the Save button at the bottom right.

  7. Push a NuGet package to Nexus.

    $ dotnet new classlib -o HelloLib
    The template "Class Library" was created successfully.
    . . .
    $ dotnet pack HelloLib/
    $ dotnet nuget push HelloLib/bin/Release/HelloLib.1.0.0.nupkg -k [REPLACE WITH YOUR API KEY] -s http://localhost:8081/repository/nuget-hosted/index.json
    warn : You are running the 'push' operation with an 'HTTP' source, 'http://localhost:8081/repository/nuget-hosted/index.json'. Non-HTTPS access will be removed in a future version. Consider migrating to an 'HTTPS' source.
    Pushing HelloLib.1.0.0.nupkg to 'http://localhost:8081/repository/nuget-hosted'...
    warn : You are running the 'push' operation with an 'HTTP' source, 'http://localhost:8081/repository/nuget-hosted/'. Non-HTTPS access will be removed in a future version. Consider migrating to an 'HTTPS' source.
      PUT http://localhost:8081/repository/nuget-hosted/
      Created http://localhost:8081/repository/nuget-hosted/ 40ms
    Your package was pushed.

    You can also create a nuget.config and add the NuGet source to the project.

    dotnet new console -o HelloApp
    cd HelloApp/
    dotnet new nugetconfig
    dotnet nuget add source -n nexus http://localhost:8081/repository/nuget-hosted/index.json
    dotnet add package HelloLib --version 1.0.0

Appendix C: Jenkins for a .NET application using Docker

  1. Open a terminal, create a working folder if you haven’t already, and enter it.

    In the working folder, run the following commands to create a demo ASP.NET Core Web project:

    dotnet new gitignore
    dotnet new globaljson --sdk-version=8.0.101 --roll-forward=latestFeature
    dotnet new sln -n jenkins-getting-started
    dotnet new web -o src/HelloWorld
    dotnet sln add -s src src/HelloWorld/
  2. Create a Dockerfile used to build the Docker image.

    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /source
    
    # Copy everything
    COPY . ./
    # Restore as distinct layers
    RUN dotnet restore
    # Build and publish a release
    RUN dotnet publish -c release -o /app --no-restore
    
    # Build runtime image
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /app ./
    ENTRYPOINT ["dotnet", "HelloWorld.dll"]
  3. Create Jenkinsfile.

    pipeline {
    
        environment {
            // Explicitly specify the DOTNET_CLI_HOME environment variable to a writable directory, like /tmp:
            // See also: https://github.com/dotnet/cli/pull/9327
            //           https://github.com/dotnet/sdk/blob/main/src/Common/CliFolderPathCalculatorCore.cs#L14
            // System.UnauthorizedAccessException: Access to the path '/.dotnet' is denied.
            DOTNET_CLI_HOME = '/tmp'
            // Replace the following variables with your container registry.
            REGISTRY_SCHEME= 'http'
            REGISTRY_HOSTNAME = '192.168.211.130'
            REGISTRY_PORT = '8082'
        }
    
        agent none
    
        stages {
            stage('Build') {
                agent {
                    docker {
                        label 'docker && linux' (1)
                        image 'mcr.microsoft.com/dotnet/sdk:8.0'
                        // Run the container on the node specified at the
                        // top-level of the Pipeline, in the same workspace,
                        // rather than on a new node entirely:
                        reuseNode true
                    }
                }
                steps {
                    sh 'dotnet build'
                }
            }
    
            stage('Test') {
                agent {
                    docker {
                        label 'docker && linux'
                        image 'mcr.microsoft.com/dotnet/sdk:8.0'
                        // Run the container on the node specified at the
                        // top-level of the Pipeline, in the same workspace,
                        // rather than on a new node entirely:
                        reuseNode true
                    }
                }
                steps {
                    sh 'dotnet test'
                }
            }
    
            stage('Docker') {
                when { tag "*" }
                agent { label 'docker && linux' }
                // Execute the stage on a node pre-configured to accept Docker-based Pipelines
                environment {
                    // Create a Username and password credential with ID as `jenkins-docker-registry-creds` for your Docker Registry on Jenkins.
                    DOCKER_REGISTRY_CREDS = credentials('jenkins-docker-registry-creds') (2)
                }
                steps {
                    sh 'docker build . -f src/WebApplication1/Dockerfile -t $REGISTRY_HOSTNAME:$REGISTRY_PORT/webapplication1:$TAG_NAME'
                    sh 'docker login -u $DOCKER_REGISTRY_CREDS_USR -p $DOCKER_REGISTRY_CREDS_PSW $REGISTRY_SCHEME://$REGISTRY_HOSTNAME:$REGISTRY_PORT'
                    sh 'docker push $REGISTRY_HOSTNAME:$REGISTRY_PORT/webapplication1:$TAG_NAME'
                    sh 'docker logout $REGISTRY_SCHEME://$REGISTRY_HOSTNAME:$REGISTRY_PORT'
                }
            }
    
            stage('Deploy') {
                when { (3)
                    tag "*"
                    expression {
                        currentBuild.result == null || currentBuild.result == 'SUCCESS'
                    }
                }
                agent { label 'docker && linux' }
                environment {
                    container_name="webapplication1"
                    image="$REGISTRY_HOSTNAME:$REGISTRY_PORT/webapplication1:$TAG_NAME"
                    login_name="jenkins"
                    destination_host="192.168.211.133"
                }
                steps {
                    // Create a SSH Username with private key credential with ID as `jenkins-ssh-key-for-node-3` on Jenkins.
                    withCredentials(bindings: [sshUserPrivateKey(credentialsId: 'jenkins-ssh-key-for-node-3', \
                                                                 keyFileVariable: 'JENKINS_SSH_KEY_FOR_NODE_3')]) {
                        sh '''
    cat <<EOF | ssh -T -o StrictHostKeyChecking=no -i $JENKINS_SSH_KEY_FOR_NODE_3 -l $login_name $destination_host
    #!/bin/sh
    
    set -ex
    
    docker container inspect $container_name -f \'{{ json .State }}\' \\
        && docker rm --force $container_name
    
    docker run --name $container_name --restart always --detach --publish 7890:8080 $image \\ (4)
        && docker ps -n 1
    EOF
                           '''
                    }
                }
            }
        }
    }
  4. The final project structure should be as below.

    $ tree
    .
    ├── Dockerfile
    ├── global.json
    ├── Jenkinsfile
    ├── jenkins-getting-started.sln
    └── src
        └── HelloWorld
            ├── appsettings.Development.json
            ├── appsettings.json
            ├── HelloWorld.csproj
            ├── Program.cs
            └── Properties
                └── launchSettings.json
    
    4 directories, 9 files
  5. Build and test the project.

    Run the Web application.

    $ dotnet run --project src/HelloWorld/
    Building...
    info: Microsoft.Hosting.Lifetime[14]
          Now listening on: http://localhost:5062
    info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Development
    ...

    Open another terminal, and test the above endpoint.

    $ curl -i http://localhost:5062
    HTTP/1.1 200 OK
    Content-Type: text/plain; charset=utf-8
    Date: Tue, 30 Jan 2024 03:25:20 GMT
    Server: Kestrel
    Transfer-Encoding: chunked
    
    Hello World!
  6. The following is a sample output on Jenkins.

    . . .
    + dotnet build
    MSBuild version 17.8.3+195e7f5a3 for .NET
      Determining projects to restore...
    . . .
    
    + docker build . -t 192.168.56.130:8082/hello-world:main
    DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
                Install the buildx component to build images with BuildKit:
                https://docs.docker.com/go/buildx/
    
    Sending build context to Docker daemon  1.535MB
    . . .
    
    + docker login -u **** -p **** http://192.168.56.130:8082
    WARNING! Using --password via the CLI is insecure. Use --password-stdin.
    WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
    Configure a credential helper to remove this warning. See
    https://docs.docker.com/engine/reference/commandline/login/#credentials-store
    
    Login Succeeded
    [Pipeline] sh
    + docker push 192.168.56.130:8082/hello-world:main
    The push refers to repository [192.168.56.130:8082/hello-world]
    . . .
    
    + docker logout http://192.168.56.130:8082
    Removing login credentials for 192.168.56.130:8082
    . . .

Appendix D: Reverse proxy for Jenkins

An error message is displayed in the "Manage Jenkins" page: It appears that your reverse proxy setup is broken. [23]

For a reverse proxy to work correctly, it needs to rewrite both the request and the response. Request rewriting involves receiving an inbound HTTP call and then making a forwarding request to Jenkins (sometimes with some HTTP headers modified, sometimes not). Failing to configure the request rewriting is easy to catch, because you just won’t see any pages at all.

But correct reverse proxying also involves one of two options, EITHER

  • rewrite the response with a "Location" header in the response, which is used during redirects. Jenkins sends Location: http://actual.server:8080/jenkins/foobar and the reverse proxy must rewrite it to Location: http://nice.name/jenkins/foobar. Unfortunately, failing to configure this correctly is harder to catch; OR

  • set the headers X-Forwarded-Host (and perhaps X-Forwarded-Port) on the forwarded request. Jenkins will parse those headers and generate all the redirects and other links on the basis of those headers. Depending on your reverse proxy it may be easier to set X-Forwarded-Host and X-Forwarded-Port to the hostname and port in the original Host header respectively, or it may be easier to just pass the original Host header through as X-Forwarded-Host and delete the X-Forwarded-Port header from the request. You will also need to set the X-Forwarded-Proto header if your reverse proxy is changing from https to http or vice versa.
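As a concrete illustration of the second option, the following sketch writes a minimal nginx server block. The upstream actual.server:8080 and the site name nice.name reuse the hypothetical hosts from above; adapt both to your environment, and verify the directives against your own nginx setup:

```shell
# Minimal nginx reverse-proxy sketch for Jenkins, written to a local file for illustration.
cat > jenkins-proxy.conf <<'EOF'
server {
    listen 80;
    server_name nice.name;

    location / {
        proxy_pass http://actual.server:8080;
        # Let Jenkins generate redirects and links from the original request:
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-Host  $host;
        proxy_set_header X-Forwarded-Port  $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
grep 'X-Forwarded' jenkins-proxy.conf
```

With these headers set, Jenkins rewrites its own Location headers and links, so no response rewriting is needed in the proxy.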

Appendix E: OpenSSH for Windows

OpenSSH is the open-source version of the Secure Shell (SSH) tools used by administrators of Linux and other non-Windows systems for cross-platform management of remote systems. OpenSSH has been added to Windows (as of autumn 2018), and is included in Windows Server and Windows client. [24]

OpenSSH for Windows has the below commands built in.

  • ssh is the SSH client component that runs on the user’s local system

  • sshd is the SSH server component that must be running on the system being managed remotely

  • ssh-keygen generates, manages and converts authentication keys for SSH

  • ssh-agent stores private keys used for public key authentication

  • ssh-add adds private keys to the list allowed by the server

  • ssh-keyscan aids in collecting the public SSH host keys from hosts

  • sftp is the service that provides the Secure File Transfer Protocol, and runs over SSH

  • scp is a file copy utility that runs on SSH
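The examples in section E.4.2 assume a public key file, id_ed25519.pub, already exists on the client. As a sketch, the key pair can be generated with ssh-keygen and loaded into the agent with ssh-add (the key type and paths here match those later examples, but any supported key type works):

```powershell
# Generate an ed25519 key pair under %userprofile%\.ssh (follow the prompts)
ssh-keygen -t ed25519

# Make sure the ssh-agent service is running, then load the private key
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent
ssh-add $env:USERPROFILE\.ssh\id_ed25519
```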

E.1. Install OpenSSH for Windows

To install OpenSSH using PowerShell, run PowerShell as an Administrator. To make sure that OpenSSH is available, run the following cmdlet: [25]

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

The command should return the following output if neither component is already installed:

Name  : OpenSSH.Client~~~~0.0.1.0
State : NotPresent

Name  : OpenSSH.Server~~~~0.0.1.0
State : NotPresent

Then, install the server or client components as needed:

# Install the OpenSSH Client
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

# Install the OpenSSH Server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

To start and configure OpenSSH Server for initial use, open an elevated PowerShell prompt (right click, Run as an administrator), then run the following commands to start the sshd service:

# Start the sshd service
Start-Service sshd

# OPTIONAL but recommended:
Set-Service -Name sshd -StartupType 'Automatic'

# Confirm the Firewall rule is configured. It should be created automatically by setup. Run the following to verify
if (!(Get-NetFirewallRule -Name "OpenSSH-Server-In-TCP" -ErrorAction SilentlyContinue | Select-Object Name, Enabled)) {
    Write-Output "Firewall Rule 'OpenSSH-Server-In-TCP' does not exist, creating it..."
    New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
} else {
    Write-Output "Firewall rule 'OpenSSH-Server-In-TCP' has been created and exists."
}

By default the sshd service is set to start manually. To start it each time the server is rebooted, run the following commands from an elevated PowerShell prompt on your server:

# Set the sshd service to be started automatically
Get-Service -Name sshd | Set-Service -StartupType Automatic

# Now start the sshd service
Start-Service sshd

E.2. Connect to OpenSSH Server

Once installed, you can connect to OpenSSH Server from a Windows or Windows Server device with the OpenSSH client installed. From a PowerShell prompt, run the following command.

ssh domain\username@servername

Once connected, you’ll see the Windows command shell prompt:

domain\username@SERVERNAME C:\Users\username>

For example,

$ ssh dev@node-4 cmd
dev@node-4's password:
Microsoft Windows [Version 10.0.19045.3803]
(c) Microsoft Corporation. All rights reserved.

dev@DESKTOP-A81NPH1 C:\Users\dev>

E.3. Uninstall OpenSSH for Windows

To uninstall the OpenSSH components using PowerShell, use the following commands:

# Uninstall the OpenSSH Client
Remove-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

# Uninstall the OpenSSH Server
Remove-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

E.4. OpenSSH configuration files

In Windows, the OpenSSH Server (sshd) reads configuration data from %programdata%\ssh\sshd_config by default, or a different configuration file may be specified by launching sshd.exe with the -f parameter. If the file is absent, sshd generates one with the default configuration when the service is started. [26]

In Windows, the OpenSSH Client (ssh) reads configuration data from a configuration file in the following order:

  • A configuration file passed explicitly by launching ssh.exe with the -F parameter.

  • A user’s configuration file at %userprofile%\.ssh\config.

  • The system-wide configuration file at %programdata%\ssh\ssh_config.
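These files use the standard ssh_config syntax. As a hypothetical example, a user configuration file at %userprofile%\.ssh\config might define a shortcut for an agent host (the host name, user, and key path below are placeholders):

```
Host node-4
    HostName node-4.example.com
    User dev
    IdentityFile ~/.ssh/id_ed25519
```

With this entry in place, running ssh node-4 connects as dev using the named key.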

E.4.1. Configuring the default shell for OpenSSH in Windows

A Windows Jenkins agent requires the Windows Command shell (cmd.exe) as the default shell.

The default command shell provides the experience a user sees when connecting to the server using SSH. The initial default shell on Windows is the Windows Command shell (cmd.exe). Windows also includes PowerShell, and third-party command shells are also available for Windows and may be configured as the default shell for a server.

To set the default command shell, first confirm that the OpenSSH installation folder is on the system PATH environment variable. For Windows, the default installation folder is %systemdrive%\Windows\System32\openssh. The current value can be displayed with the following commands:

  • Windows Command shell: path

  • PowerShell: $env:path

Configuring the default ssh shell is done in the Windows registry by adding the full path to the shell executable to HKEY_LOCAL_MACHINE\SOFTWARE\OpenSSH in the string value DefaultShell.

As an example, the following elevated PowerShell command sets the default shell to be powershell.exe:

New-ItemProperty `
  -Path "HKLM:\SOFTWARE\OpenSSH" `
  -Name DefaultShell `
  -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
  -PropertyType String `
  -Force
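Since a Windows Jenkins agent expects cmd.exe as the default shell, the setting above should be reverted on agent machines. Deleting the DefaultShell value restores the built-in default; for example, from an elevated PowerShell prompt:

```powershell
# Remove the DefaultShell value so sshd falls back to cmd.exe
Remove-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell
```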

E.4.2. AuthorizedKeysFile

The default is .ssh/authorized_keys. If the path isn’t absolute, it’s taken relative to the user’s home directory (or profile image path), for example, C:\Users\username. If the user belongs to the administrators group, %programdata%/ssh/administrators_authorized_keys is used instead.

The administrators_authorized_keys file must only have permission entries for the NT Authority\SYSTEM account and the BUILTIN\Administrators security group. The NT Authority\SYSTEM account must be granted full control. The BUILTIN\Administrators security group is required so that administrators can manage the authorized keys; you can choose the required access. To grant the permissions, open an elevated PowerShell prompt and run:

icacls.exe "C:\ProgramData\ssh\administrators_authorized_keys" /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"

  • Standard user

    The example below copies the public key to the server (where "username" is replaced by your username). You’ll need to use the password for the user account for the server initially.

    # Get the public key file generated previously on your client
    $authorizedKey = Get-Content -Path $env:USERPROFILE\.ssh\id_ed25519.pub
    
    # Generate the PowerShell to be run remotely that will copy the public key file generated previously on your client to the authorized_keys file on your server
    $remotePowershell = "powershell New-Item -Force -ItemType Directory -Path $env:USERPROFILE\.ssh; Add-Content -Force -Path $env:USERPROFILE\.ssh\authorized_keys -Value '$authorizedKey'"
    
    # Connect to your server and run the PowerShell using the $remotePowerShell variable
    ssh username@domain1@contoso.com $remotePowershell
  • Administrative user

    The example below copies the public key to the server and configures the ACL (where "username" is replaced by your user name). You’ll need to use the password for the user account for the server initially.

    # Get the public key file generated previously on your client
    $authorizedKey = Get-Content -Path $env:USERPROFILE\.ssh\id_ed25519.pub
    
    # Generate the PowerShell to be run remotely that will copy the public key file generated previously on your client to the authorized_keys file on your server
    $remotePowershell = "powershell Add-Content -Force -Path $env:ProgramData\ssh\administrators_authorized_keys -Value '''$authorizedKey''';icacls.exe ""$env:ProgramData\ssh\administrators_authorized_keys"" /inheritance:r /grant ""Administrators:F"" /grant ""SYSTEM:F"""
    
    # Connect to your server and run the PowerShell using the $remotePowerShell variable
    ssh username@domain1@contoso.com $remotePowershell

References