CI/CD, Jenkins, Containers, and Microservices | A Hands On Primer | Part II


This is the second of a three-part series on getting Jenkins installed and contributing to the workflow of your organization. In Part 1, we walked through what Jenkins is, what it can do for you, and the installation process. Now we’ll look at the steps required to set up a rudimentary build pipeline, which is a popular use case for Jenkins.

After logging into Jenkins, the landing page presents a summary and status of any configured jobs. A job is an individual task Jenkins is configured to perform, for instance, initiating a build automatically upon a code commit, or alerting a specific developer (or team) when something doesn’t go as planned. You may configure as many jobs as needed to complete a given task, and these jobs can run independently, work together, cascade conditionally, or pause for human interaction. The possibilities are many.

Creating Pipelines

To begin this simplified pipeline example, select ‘New Item’ from the menu at the top left. Jenkins requires a name for the job and a job type. Note that this step assumes the recommended plugins from Part 1 have been installed. The presence of these plugins determines which job types are available, so install the Pipeline plugin now, if needed, to continue.

With the new pipeline job created, the configuration page contains four sections: General, Build Triggers, Advanced Project Options, and Pipeline.

The ‘General’ section allows for a pipeline name, which automatically populates with the job name chosen in the previous step, as well as a field for a more detailed description of the job. There are also a number of options, each with its own sub-options, and each is described in detail by the integrated help, revealed by clicking the question mark icon on the right side. For this example, select the ‘GitHub project’ option and provide a valid GitHub URL for a project.

The ‘Build Triggers’ section defines the conditions that will initiate the execution of a job, and can be configured in several ways:

  • Conditional upon another build completing, or periodically on a schedule regardless of activity or condition

  • In a ‘push’ configuration via a GitHub webhook, which we’ll set up in a minute

  • In a ‘pull’ configuration based on polling the remote repository for changes (shown in the sketch after this list)

  • After a ‘quiet period’, where Jenkins waits a specified time after a commit before taking action, which can be useful for batching many quick commits into a single build

  • Via a scriptable URL endpoint

  • An option for disabling builds altogether also exists in this menu.
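
Several of these triggers can also be declared directly in pipeline code rather than through the form, a preview of the pipeline-as-code approach covered below. The following is a minimal sketch of the declarative ‘triggers’ directive, with a nightly schedule and a five-minute polling interval chosen purely for illustration:

pipeline {
    agent any
    triggers {
        // Build on a schedule; 'H' lets Jenkins spread the load within the hour
        cron('H 2 * * *')
        // Or poll the repository for changes every five minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Triggered build'
            }
        }
    }
}

For this walkthrough we’ll rely on the GitHub webhook configured in the next section, so a directive like the one above is optional.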

The next section, ‘Advanced Project Options’, can be configured to use a custom display name. This can be skipped for the time being.

The ‘Pipeline’ section is where the power of Jenkins comes alive. It is here that the real automation work begins, and it can be set up in two ways. The first, with a definition of ‘Pipeline script’, allows build stages, tasks, and conditions to be written as code directly within the Jenkins management interface, which can be suitable for small projects. The second option, ‘Pipeline script from SCM’, allows these same actions to be defined in a file kept in source control, by default a file named ‘Jenkinsfile’ at the root of the repository. For this example, we’ll use the Jenkinsfile approach to showcase the power of iterative pipeline development. In practice, this approach allows an entire team to collaborate on, track changes to, and contribute to the build pipeline.

Creating the Jenkinsfile is trivial. Verify that the script path in the Jenkins management interface is set to Jenkinsfile (the default), then create a file with that name and check it into the root of the repository:

$ touch Jenkinsfile
$ git add .
$ git commit -m 'create jenkinsfile'
$ git push

Creating Webhooks

Now that our Jenkinsfile has been created, let’s look at the process of setting up a webhook. A webhook is the mechanism for a ‘push’ configuration between a repository and the automation server. It is often favored over polling because it saves bandwidth and triggers a build the instant a developer pushes a new commit.

1) On the Jenkins side, with the source code management settings configured, select ‘GitHub hook trigger for GITScm polling’ in the ‘Build Triggers’ section. This option may exist under a different name in older versions of Jenkins.

2) On the GitHub side, click ‘Settings’ from within the repository, then ‘Webhooks’, then ‘Add webhook’. A payload URL for the hook (with the GitHub plugin this typically points at http://&lt;your-jenkins-host&gt;/github-webhook/), a content type, and an optional secret must be configured, as well as the type of event that will trigger the hook. For this simplified example, the default option of ‘Just the push event’ will suffice. Lastly, ensure the ‘Active’ option is checked, and click ‘Add webhook’.

For this scenario, the server hosting our Jenkins instance must be reachable from GitHub. Configuring a router or firewall for this is beyond the scope of this article, but the setup works the same way on an internal network with a resource such as GitLab. You may need to consult your network engineer, or spend some time on this configuration, before following along.

With the above items complete, we’re ready to begin developing and testing the pipeline. The following code is a rudimentary example of what a Jenkinsfile can contain.

· The ‘agent’ directive (required) allocates an executor for the pipeline, and by default ensures that the source directory is checked out and available to any following stages.

· The ‘stage’ directive is also required, and defines a logical stage in which execution takes place. Within each stage, additional tasks such as automated testing and deployment can be defined. Keep in mind that Jenkins itself is not a substitute for a set of build tools (Make, Maven, etc.), but it can bind multiple phases of a build, test, and deployment process together.

The following example should print out the indicated string for each stage, demonstrating our pipeline is executing properly and is ready for development.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
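
Once this scaffold executes cleanly, each echo step can be replaced with real work. As a minimal sketch, assuming a Make-based project with hypothetical ‘build’ and ‘test’ targets and a placeholder deploy script, the stages might evolve into something like this:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile the project; 'make build' is a placeholder target
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                // Run the automated test suite
                sh 'make test'
            }
        }
        stage('Deploy') {
            steps {
                // Hand off to a deployment script kept in the repository (hypothetical path)
                sh './scripts/deploy.sh'
            }
        }
    }
    post {
        failure {
            // Error handling is covered in more depth in the next post
            echo 'Pipeline failed.'
        }
    }
}

Each change to the Jenkinsfile is committed and pushed like any other code, and the webhook configured above kicks off a fresh run, which is exactly the iterative workflow this approach is meant to enable.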

In the next post, we’ll look at some more advanced configuration and use cases, scaling, parallel execution, and handling errors.