As DevOps has evolved from a culture to a job title, many companies have realized the value of a well-thought-out Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipeline. Jenkins is an open-source automation server that has become one of the most popular CI/CD tools available. So, here is a crash course to help guide you through basic Jenkins concepts.
How to use this crash course: Reading every word of this “crash course” will not make you a Jenkins expert. The goal is two-fold: to provide a reference for Jenkins concepts and to provide a framework for teaching yourself Jenkins relatively quickly. The topics here are presented in a bare-bones fashion and give an overview of the basic Jenkins concepts. The recommended approach is to step through each topic, read it, and then google it if needed to understand the topic more deeply.
Continuous Integration vs. Continuous Delivery vs. Continuous Deployment
- Continuous Integration: Small frequent steps where developers commit and merge their code, after which automated tests and builds are run.
- Continuous Delivery: Automatically and continually producing code that is a candidate for deployment. The code is ready to deploy, but is not necessarily deployed.
- Continuous Deployment: Continually using automated processes to push code out to production.
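The three practices map naturally onto pipeline stages. A minimal declarative Jenkinsfile sketch (stage names and `make` targets are hypothetical placeholders for your own build commands):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {      // Continuous Integration: compile on every commit/merge
            steps { sh 'make build' }
        }
        stage('Test') {       // Continuous Integration: run automated tests
            steps { sh 'make test' }
        }
        stage('Package') {    // Continuous Delivery: produce a deployable candidate
            steps { sh 'make package' }
        }
        stage('Deploy') {     // Continuous Deployment: push it out automatically
            steps { sh 'make deploy' }
        }
    }
}
```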
A job is any runnable task in Jenkins. Jenkins documentation now calls it a project (but everyone will probably still call it a job in real life).
- Freestyle Project: The most common type of project. It normally executes a shell or PowerShell command.
- Pipeline: Formerly called workflow. Written in a Groovy-based DSL. Used for things that do not fit in a freestyle project because they are too complicated or span multiple nodes.
- Multi-configuration Project: For projects that will be tested in multiple environments and require different configurations, depending on the environment.
- GitHub Organization: Uses the source control platform’s organization and allows Jenkins to act on Jenkinsfiles within the org’s repositories.
- Folder: A method to group projects together. The folder name becomes part of the path of the projects.
- Multi-Branch Pipelines: Jenkins uses a Jenkinsfile to mark repositories. If a branch is created in that repository, Jenkins will automatically create a project/job for that branch.
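Because a multi-branch pipeline builds every branch that carries a Jenkinsfile, the file itself often varies its behavior by branch. A hedged sketch (the script names are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh './run-tests.sh' }   // runs for every branch's job
        }
        stage('Deploy') {
            when { branch 'main' }          // only the main branch's job runs this stage
            steps { sh './deploy.sh' }
        }
    }
}
```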
The scope of a job/project includes all items that are part of it. In some cases, global (shared) libraries are available to all jobs.
A build is the result of a single execution of a project.
A step is a single task within a project.
A trigger is the criteria for starting a new pipeline run.
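Triggers can be declared in the Jenkinsfile itself; for example (the cron expressions are illustrative):

```groovy
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // poll source control roughly every 5 minutes
        cron('H 2 * * *')        // also run a nightly build around 02:00
    }
    stages {
        stage('Build') {
            steps { sh 'make' }  // hypothetical build command
        }
    }
}
```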
Artifacts are immutable files generated during a build or a pipeline run. They are archived on the Jenkins master for later retrieval. Typically, artifacts are also maintained in a separate repository for access.
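Archiving on the master is done with the built-in `archiveArtifacts` step; the path pattern below is an assumption about the project layout:

```groovy
stage('Archive') {
    steps {
        // keep the build outputs on the Jenkins master for later retrieval;
        // fingerprinting lets Jenkins track where each artifact is used
        archiveArtifacts artifacts: 'build/libs/*.jar', fingerprint: true
    }
}
```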
A repository is a place that holds items that need to be retrieved later, such as deployment artifacts.
A build tool (such as Maven, Gradle, or npm) is the software that actually performs the build portion of the pipeline.
Build Configuration steps
The configuration depends on the tool, but the general steps are:
- Start Jenkins and install the required plugins.
- Perform Global Configuration steps – Telling Jenkins what the build tool is and how you interact with it.
- Create a Job/project that uses the tool.
- Update the tool’s configuration file.
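For a build tool like Maven, the steps above typically end in a pipeline that references the globally configured tool by name. A sketch, assuming a tool named `maven-3` was defined under Manage Jenkins:

```groovy
pipeline {
    agent any
    tools {
        maven 'maven-3'   // must match the name set in Global Tool Configuration
    }
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
    }
}
```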
A clean checkout deletes the local repository and checks out a fresh copy of the repo whenever a job runs.
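In a pipeline, a clean checkout can be approximated by wiping the workspace before checking out; `deleteDir()` and `checkout scm` are built-in steps (the latter assumes the job is configured with a source repository):

```groovy
stage('Checkout') {
    steps {
        deleteDir()      // remove the existing workspace contents
        checkout scm     // fetch a fresh copy of the repository
    }
}
```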
- Unit Test: Test of individual components (classes, methods, modules) to ensure outputs are as expected.
- Smoke Test: A general test of the software’s functionality, to see if it is stable enough for further testing.
- Verification Test: “Did we satisfy the build requirements?”
- Functional Test: “Does this feature work?” or “Can a user do this?”
- Acceptance Test: Handoff test with the client to verify that it does what they expect it to do.
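A pipeline usually runs the cheaper tests first and publishes their results; the `junit` step parses JUnit-format XML reports (the `make` target and report path are assumptions):

```groovy
stage('Unit Tests') {
    steps { sh 'make unit-test' }          // hypothetical test target
    post {
        always { junit 'reports/**/*.xml' } // publish results even when tests fail
    }
}
```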
Notifications give active feedback on the status of processes within the project.
Types of notifications
- Instant Messaging configured by plugins
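Notifications are often wired into a pipeline’s post section; `mail` is a core step (the addresses are placeholders):

```groovy
post {
    failure {
        mail to: 'team@example.com',
             subject: "Build failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
             body: "See ${env.BUILD_URL} for details."
    }
}
```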
Distributed builds are build jobs in which the executor of the build is located on a node separate from the master. This is done for parallelism: running multiple steps or stages at once. The master node acts as a controller for the build job, running specific builds on specific agents.
Agents are tagged with labels that determine which steps run on which agent, so that a node with a specific configuration runs the pipeline steps that match it. This matters because build jobs must run on agents that have the tools they need.
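Labels are matched in the `agent` directive. For example, a stage that needs Docker can be pinned to nodes carrying a `docker` label (the label itself is whatever you assigned to the node):

```groovy
pipeline {
    agent none                       // no default node; each stage picks its own
    stages {
        stage('Build Image') {
            agent { label 'docker' } // runs only on agents tagged 'docker'
            steps { sh 'docker build -t myapp .' }
        }
    }
}
```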