Continuously Delivering Continuous Delivery Pipelines
As published on the Jenkins blog on August 17, 2016.
In smaller companies with a handful of apps and fewer silos, implementing Continuous Delivery (CD) pipelines to support those apps is fairly straightforward using one of the many delivery orchestration tools available today. There is usually a limited set of tools, application types, and security practices to support (generally fewer cooks in the kitchen). In a larger organization, however, there are seemingly endless unique requirements and mountains to climb to reach this level of automation on each new project.
Enter the Jenkins Pipeline plugin. In the past, I set out to implement continuous delivery enterprise-wide. After considering several pipeline orchestration tools, we determined the Pipeline plugin (at the time called Workflow) to be the superior solution for our needs. Pipeline has continued Jenkins’ legacy of presenting an extensible platform with just the right set of features to allow organizations to scale its capabilities as they see fit, and to do so rapidly. As early adopters of Pipeline with a protracted set of requirements, we used it both to accelerate the pace of onboarding new projects and to reduce the ongoing feature delivery time of our applications.
In my presentation at Jenkins World, I demonstrated the methods we used to enable this. A few examples:
- We leveraged the Pipeline Remote File Loader plugin to write shared common code and sought and received community enhancements to these functions.
Jenkinsfile: loading a shared AWS utilities function library
awsUtils.groovy: snippets of some AWS functions
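As a sketch of this pattern (the repository URL and the helper function are hypothetical, not our actual code), a Jenkinsfile can load a shared Groovy file with the Remote File Loader's `fileLoader.fromGit` step, and the loaded file must end with `return this` so its methods are callable:

```groovy
// Jenkinsfile (scripted Pipeline) -- repo URL and helper names are illustrative
def awsUtils

node {
    // fromGit(path-in-repo, repository, branch, credentialsId, node label)
    awsUtils = fileLoader.fromGit('awsUtils',
        'https://git.example.com/pipeline-shared.git', 'master', null, '')

    checkout scm
    sh 'mvn -B package'

    // Call a shared function defined once for all teams
    awsUtils.syncToS3('my-dev-bucket', 'target/site')
}

// --- awsUtils.groovy, in the shared repository (body is illustrative) ---
def syncToS3(String bucket, String path) {
    sh "aws s3 sync ${path} s3://${bucket}/ --delete"
}
return this   // required: exposes the loaded script's methods to the caller
```

Because the shared functions live in their own Git repository, fixes and community contributions land once and every consuming pipeline picks them up on its next run.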
- We migrated from EC2 agents to Docker-based agents running on Amazon’s Elastic Container Service, allowing us to spin up new executors in seconds and for teams to own their own executor definitions.
Pipeline run #1 using standard EC2 executors, spinning up an EC2 instance for each node; Pipeline run #2 using a shared ECS cluster, with near-instant instantiation of a Docker agent in the cluster for each node.
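From the pipeline author's side the switch is just a node label; the ECS agent template behind that label (the `ecs-docker` name here is illustrative) is defined once in Jenkins via the Amazon ECS plugin, which is what lets each team own its own executor definition as a Docker image:

```groovy
// Before: label bound to an EC2 agent template -- minutes to provision
node('ec2-builder') {
    sh 'mvn -B test'
}

// After: label bound to an ECS agent template -- the plugin launches a
// Docker container task in the shared cluster in seconds
node('ecs-docker') {
    sh 'mvn -B test'
}
```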
We also created a Pipeline Library of common pipelines, enabling projects that fit certain models to use ready-made end-to-end pipelines. Some examples:
— Maven JAR Pipeline: clones the Git repository, builds the JAR file from pom.xml, deploys it to Artifactory, and runs the Maven Release Plugin to increment to the next version.
— AngularJS Pipeline: executes a Grunt and Bower build, then syncs the output to Amazon S3 buckets in Dev, then Stage, then Prod.
— Pentaho Reports Pipeline: clones the Git repository, constructs a zip file, and executes the Pentaho Business Intelligence Platform CLI to import the new set of reports into the Dev, Stage, and Prod servers.
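Each ready-made pipeline in the library is just a loadable Groovy script, so a project that fits the model needs only a few lines in its Jenkinsfile. A minimal sketch of the Maven JAR case (file and stage names are illustrative; the Artifactory coordinates come from the project's own pom.xml):

```groovy
// mavenJarPipeline.groovy, in the shared Pipeline Library -- illustrative
def run(String gitUrl) {
    node {
        stage 'Checkout'
        git url: gitUrl

        stage 'Build'
        sh 'mvn -B clean verify'

        stage 'Publish'
        sh 'mvn -B deploy -DskipTests'   // pushes the JAR to Artifactory

        stage 'Release'
        // Maven Release Plugin tags the release and bumps to the next version
        sh 'mvn -B release:prepare release:perform'
    }
}
return this
```

A consuming project's entire Jenkinsfile can then be little more than loading this script and calling `run()` with its own repository URL.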
Perhaps most critically, a shout-out to the saving grace of this quest for our security and ops teams: the manual input step! While the ambition of continuous delivery is to have as few of these as possible, this was the single most pivotal feature in convincing others of Pipeline's viability, since now any step of the delivery process could be gate-checked by an LDAP-enabled permission group. Were it not for the availability of this step, we might still be living in the world of: "This seems like a great tool for development, but we will have a segregated process for production deployments." Instead, we started with a pipeline full of input steps, then used the data we collected on the longest delays to bring management focus to them and unite everyone around the goal of strategically removing them, one by one.
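The gate itself is a one-liner. The `input` step pauses the pipeline until an authorized person approves, and its `submitter` parameter restricts who may do so (the group name below is illustrative; in our setup such groups were backed by LDAP):

```groovy
stage 'Promote to Prod'
// Pipeline blocks here until a member of the named group clicks Proceed
input message: 'Deploy this build to production?', submitter: 'ops-approvers'
sh './deploy.sh prod'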
Going forward, we’ll be working together to further mature the use of these Pipeline plugin features as we continue moving towards continuous delivery. We have migrated several components of our healthcare.gov project to Pipeline. The team has been able to consolidate several Jenkins jobs into a single, visible delivery pipeline, to maintain the lifecycle of the pipeline with our application code base in our SCM, and to more easily integrate with our external tools.