
Introduction to CI/CD with Jenkins

In this course we will build a simple CI/CD pipeline using a variety of tools like Jenkins, Git, Docker, Node.js, Prometheus, Grafana... We will begin with basic CI-related concepts, like source control and basic build automation 😎

Then we will cover containers, orchestration, and monitoring. And finally the fun part: how to strengthen the stability of our pipeline using things like self-healing, autoscaling, and canary deployments 🚀

As you may know, most courses focus on how to use one particular tool in depth, with many use cases that will never happen in practice. Here we will focus on using many tools and seeing how they fit together as parts of a CI/CD pipeline, with a hands-on example.

In this course we will see all the development phases of a project: creating a development environment and a Node.js app, then progressively building a more and more full-featured CI/CD pipeline as we go along. It will give you a good idea of what implementing CI/CD can look like in practice.

We will go through setting up Jenkins, running your project build as a CI build in Jenkins, using Jenkins Pipelines, and implementing automated deployment as part of a Jenkins pipeline.

Next we will move our mini Node.js app into a Docker container. Then we will talk about orchestration: you will see how to quickly set up Kubernetes and how to take our existing deployment and turn it into an orchestrated deployment, using Jenkins and Kubernetes together 🤓

We will also talk a little bit about monitoring with Prometheus and Grafana, which is a non-negotiable part of keeping our CI/CD pipeline stable and reliable inside a Kubernetes cluster.

In the final sections we will implement some additional orchestration features to improve the robustness of your environment and level up your DevOps skills 🚀

Enjoy the lecture 🥳

What is CI/CD

In the development life cycle, CI/CD stands for continuous integration and continuous deployment (you will also find the term continuous delivery). It means having an automated process that delivers applications. The continuous integration portion is an automation process designed for developers: updates from a development team are tested to ensure they are valid and error-free before they get added to a production repository.

Once the CI portion is completed, the delivery part means the application can be deployed to an environment by an operations team, for instance, as many times as required. Its purpose is to make deploying new code a minimal effort, because as you may know, our goal as developers is to be lazy 😂

It can also refer to having code deployed automatically, though it doesn't have to be. A good way to think about it: CI/CD is a process that goes from build, test, and merge (the continuous integration part), to delivering the result to your repository, and then deploying it to a production environment such as cloud servers.

Making use of a CI/CD pipeline with automation can save your developers a considerable amount of time, which saves you money!

It may be a little complex to set up and put the relevant processes in place, but in the end you get efficiencies that more than cover the effort, and this is something you can use Jenkins for 🤓
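To make this concrete, here is a hedged sketch of what such a pipeline can look like in Jenkins declarative syntax. The stage names and the scripts they call are illustrative placeholders, not the course's final pipeline, which we will build step by step later:

```groovy
// Jenkinsfile -- illustrative sketch of the build/test/deploy flow described above.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './build.sh' }   // build the application image
        }
        stage('Test') {
            steps { sh './test.sh' }    // run automated tests; a failure stops the pipeline
        }
        stage('Deploy') {
            steps { sh './push.sh' }    // deliver the tested artifact
        }
    }
}
```

If any stage fails, Jenkins marks the build as failed and skips the later stages, which is exactly the fail-fast behaviour a CI/CD pipeline needs.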

Set Up a Simple Development Environment

You need a consistent and reproducible development environment for your Node.js application. You don’t want to rebuild the Docker image every time you make small changes to the Node.js sources.

So let's create a Docker image that includes all the required dependencies, mount external volumes during development, and use a suitable Dockerfile for distributing the image with the application to other developers.

First we need a simple Node.js Hello World app that consists of two files:

app.js
// Load the http module to create an http server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World");
});

// Listen on port 8000, IP defaults to "0.0.0.0"
server.listen(8000);

// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
package.json
{
  "name": "hello-world",
  "description": "hello world",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.x"
  },
  "scripts": {
    "start": "node app.js"
  }
}

Add Docker

To create the Docker image, you can use this simple Dockerfile:

FROM node
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app
EXPOSE 8000
ENTRYPOINT ["npm", "start"]

This Dockerfile installs all the application dependencies and adds the application to the image, ready to be started by using the ENTRYPOINT instruction.

When you have your three files, you can build the Docker image and run a container as we did before:

docker build -t my_nodejs_image . &&\
docker run -p 8000:8000 my_nodejs_image

To be able to test your application changes, you can mount a volume with the source into the container by using the following command:

docker run -p 8000:8000 -v "$PWD":/app my_nodejs_image

This mounts the current folder with the latest sources inside the container as the application folder. If you are not comfortable with this, check the Docker section 🤓

To share your images with others or push them to other testing environments, you can use a Docker registry like this:

docker build -t <docker registry URL>:<docker registry port>/containersol/nodejs_app:<image tag> . &&\
docker push <docker registry URL>:<docker registry port>/containersol/nodejs_app:<image tag>

Automate the Build, test and push

To simplify working with the development environment and ease the future integration into a centralized testing environment, you can use the following three scripts: build.sh, test.sh and push.sh.

These scripts will become a single command interface for every common operation you are required to perform during the development.

build.sh
#!/bin/bash
# The first parameter passed to this script will be used as an image version. 
# If none is passed, latest will be used as a tag.
if [ -z "${1}" ]; then
   version="latest"
else
   version="${1}"
fi

cd nodejs_app

docker build -t localhost:5000/containersol/nodejs_app:${version} . 

cd ..
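A side note on the version-defaulting block at the top of the script: Bash can express the same fallback in one line with the `${1:-latest}` parameter expansion, in case you prefer shorter scripts. A minimal demonstration (the `default_version` helper is just for illustration):

```shell
#!/bin/bash
# ${1:-latest} expands to the first positional argument,
# or to "latest" when the argument is unset or empty --
# the same behaviour as the if/else block in build.sh.
default_version() {
  echo "${1:-latest}"
}

default_version          # prints: latest
default_version 1.2.3    # prints: 1.2.3
```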
Then let's write the test script for our app:
test.sh
#!/bin/bash
# The first parameter passed to this script will be used as an image version. 
# If none is passed, latest will be used as a tag.
if [ -z "${1}" ]; then
   version="latest"
else
   version="${1}"
fi

docker run -d --name node_app_test -p 8000:8000 localhost:5000/containersol/nodejs_app:"${version}"

echo "Testing image: localhost:5000/containersol/nodejs_app:${version}"

# Allow the webserver to start up
sleep 1

# Test will be successful if the webpage at the
# following URL includes the words "Hello World"
curl -s http://localhost:8000 | grep "Hello World"
status=$?

# Clean up the testing container 
docker kill node_app_test && docker rm node_app_test

if [ $status -eq 0 ]; then
   echo "Test succeeded"
else
   echo "Test failed"
fi

exit $status
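One subtlety worth calling out in test.sh: `$?` holds the exit status of the most recent command only, so it has to be captured into `status` right after the curl/grep check, before the cleanup commands overwrite it. A minimal demonstration of the pattern:

```shell
#!/bin/bash
# $? is overwritten by every command, so capture it immediately.
echo "Hello World" | grep -q "Hello World"
status=$?                 # 0: grep found the string

echo "cleanup step"       # succeeds, resetting $? to 0 -- but status is safe

if [ $status -eq 0 ]; then
  echo "Test succeeded"
else
  echo "Test failed"
fi
```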
push.sh
#!/bin/bash

if [ -z "${1}" ]; then
   version="latest"
else
   version="${1}"
fi

docker push localhost:5000/containersol/nodejs_app:"${version}"

Now you can build, test, and push the resulting image to a Docker registry by using the following commands:

./build.sh <version>  
./test.sh <version> 
./push.sh <version>
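Chaining the three scripts with `&&` gives you fail-fast behaviour: if a stage fails, the later stages are skipped, which is exactly what a CI pipeline should do. A sketch with stand-in stage functions (the function names are illustrative; the real scripts are build.sh, test.sh and push.sh above):

```shell
#!/bin/bash
# Stand-in stage functions simulating build.sh / test.sh / push.sh.
build()     { echo "build ok"; }
run_tests() { echo "tests failed"; return 1; }
push()      { echo "push ok"; }

# && short-circuits: push never runs because run_tests returned 1.
build && run_tests && push
echo "pipeline exit status: $?"   # prints: pipeline exit status: 1
```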

It is generally a good practice to have a consistent set of build, test, and deployment commands that can be executed in any environment, including development machines.

This way, developers can test the application in exactly the same way as it is going to be tested in the continuous integration environment and catch the problems related to the environment itself at earlier stages.

This example uses simple shell scripts, but a more common way to achieve the same results is to use build systems such as Maven or Gradle. Both systems have Docker plugins and can be easily used to build and push the images, using the same build interface already used for compiling and packaging the code.

Our current testing environment has only a single container, but if you need a multi-container setup, you can use docker-compose to set up the environment, and replace the simple curl/grep combination with more appropriate testing systems such as Selenium 🤓
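As a taste of what that could look like, here is a hedged docker-compose sketch for a two-container test environment. The service layout and the curl image are illustrative, not part of the course setup:

```yaml
# docker-compose.yml -- illustrative multi-container test environment.
version: "3"
services:
  app:
    build: ./nodejs_app        # builds the image from our Dockerfile
    ports:
      - "8000:8000"
  tester:
    image: curlimages/curl     # small image that ships with curl
    depends_on:
      - app
    command: ["curl", "-s", "http://app:8000"]
```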

Add Source Code Management

According to Wikipedia, in software engineering, version control (also known as revision control, source control, or source code management) is a class of systems responsible for managing changes to computer programs, documents, large web sites, or other collections of information.

We will be using the well-known Git as our SCM here. If you are not already familiar with Git, take a look at this 5-minute tutorial: just a simple guide for getting started with Git, no fluff ;)

Here is a summary of what you need to know as a good tech worker 🤓

  • Install and configure Git with the CLI
  • Clone repositories and make changes
  • Manage branches and pull requests
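The bullet points above, condensed into commands against a fresh local repository (in real work you would `git clone` an existing GitHub repo instead of `git init`; pull requests live on GitHub itself, so only the CLI parts are shown):

```shell
#!/bin/bash
# Configure identity, commit a file, and work on a branch -- all locally.
git init demo && cd demo
git config user.name "Demo User"
git config user.email "demo@example.com"

echo "console.log('hello');" > app.js
git add app.js
git commit -m "Add app.js"

git checkout -b feature/demo     # create and switch to a new branch
git branch --show-current        # prints: feature/demo
```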

Let's push our mini Node.js app to a private GitHub repo for the next part!