Continuous Delivery Pipelines

Paweł Maszota

29 May 2024 · 9 minutes read


A wise man once told me that life is too short to read good books - you should focus on the outstanding ones. Dave Farley’s “Continuous Delivery Pipelines” certainly belongs to this category.

With around 40 years of experience, 30 of which spent working with large-scale distributed systems, Dave Farley is one of the most recognizable figures in the DevOps community. In addition to his exceptional know-how, he’s also a frequent conference speaker and - perhaps most notably - co-author of the widely respected book “Continuous Delivery”.

The guy has been working longer than I’ve been alive! He’s also one of my role models as an engineer. Personal sympathies aside, however, I think that with such a rich resume, it goes without saying that Dave is someone worth listening to.

And on that note, I would like to share with you a few lessons that I’ve learnt from his book.

A paradigm shift

The very first thing that caught my attention was the definition of a deployment pipeline. I have to admit - it can be quite a paradigm shift for some, even the more seasoned engineers. In Farley’s own words:

The Deployment Pipeline is NOT:

  • only an automated build, test and deploy workflow
  • a series of separate Pipelines for build, test and deployment
  • just a collection of tools and processes
  • for proving that new software is good¹

So here I am. Almost 10 years on the job, developing pipelines to build, test and deploy software using a whole host of tools and processes. And I get my entire career invalidated by a 4-point bullet list. Thanks, Dave…

A definition

So if that’s not what a deployment pipeline is, then what is it? There are actually two definitions at play here - the what and the how.

The what is Continuous Delivery - a holistic approach encompassing all aspects of software development, from idea to the end product in the hands of the users. It’s not only a technical discipline, as it also requires an optimal organizational structure, performance and culture - all of which are essential to foster collaboration, teamwork and the empowerment of teams to make decisions about their code and share responsibility. It also means working iteratively and optimizing for learning by generating fast and frequent feedback.

Most notably, Continuous Delivery is an engineering discipline following the scientific principle of falsifiability: code is tested in order to reject it - if a single test fails, the Release Candidate will be discarded.

Just think about the quality implications of the last paragraph. “That test keeps failing, I don’t know why, let’s ignore it”. Not any more! Either you write proper software and proper tests (ideally using TDD) or you won’t be able to release your code. Period.

The how is the Deployment Pipeline - a platform supporting development teams in producing high quality software. It should be designed to enable testing of ideas and making changes safely through the collection of test results and production of data about stability, lead-time and throughput in order to help make evidence-based decisions.

It should also be organized to conduct fast, technical testing first and be scoped around an independently deployable unit (e.g. a module or microservice).

But most importantly, the Deployment Pipeline defines releasability and is the only route to production. Consequently, it contains all the steps needed to achieve releasability - unit tests, acceptance tests, validation, integration, etc. In Dave’s own words: “Our aim is that, when the Pipeline completes, we are happy to deploy without the need for any additional work.”²
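To make the idea concrete, here is a minimal sketch of a pipeline as the single, ordered route to production, where any failing stage discards the Release Candidate. The stage names and functions are my own illustrations, not Farley’s, and a real pipeline would of course run in a CI system rather than a script:

```python
# Sketch: a deployment pipeline as the only route to production.
# A single failing stage rejects the Release Candidate outright.

from typing import Callable, List

def run_pipeline(candidate: str, stages: List[Callable[[str], bool]]) -> bool:
    """Run every stage in order; one failure discards the candidate."""
    for stage in stages:
        if not stage(candidate):
            print(f"{stage.__name__} failed -> discarding candidate {candidate}")
            return False
    print(f"Candidate {candidate} is releasable")
    return True

# Hypothetical stages, with fast technical testing first:
def unit_tests(candidate): return True
def acceptance_tests(candidate): return True
def integration(candidate): return True

releasable = run_pipeline("rc-42", [unit_tests, acceptance_tests, integration])
```

The key property is that “releasable” is a single boolean produced by one pipeline - there is no side door through which untested changes reach production.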

Working this way has tremendous benefits - if everything reaching production has to go through the pipeline, there can be no manual change that is unaccounted for. Let’s face it - the main reason for infrastructure configuration drift is quick-and-dirty manual hot-fixes.

A lean approach

While the above definitions may seem daunting, you have to remember that Continuous Delivery follows a lean approach. It’s focused on reducing lead times by eliminating waste - duplication, delays, complex organizational structures, etc.

By extension, the Deployment Pipeline should contain only the absolutely necessary steps needed to release the software into production and perform each of them only once. Any deviation from this principle is waste and should be eliminated, both to simplify the process and reduce lead time.

An antithesis to this are all the forms of gate-keeping that managers like so much - change management boards, manual sign-offs and a risk-averse approach to change (“If it works, don’t touch it!”). The DORA report states that these processes are negatively correlated with lead time and deployment frequency, while having no correlation with change failure rate. They’re wasteful and should be removed or reduced to the absolute minimum.

A quick look at agility

It’s not surprising that Continuous Delivery is deeply rooted in agile. Actually the term itself is taken from the Agile Manifesto: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software”. Consequently, the book provides some ideas on how to implement agile practices.

The most obvious one is working in small iterations - ideally, small enough to commit new code directly to Trunk/Main every 10-15 minutes, and at a minimum once per day. The purpose is to have the code continuously evaluated (not deployed!) by the pipeline. That’s right - Farley’s ideal branching strategy is “Don’t branch!”³

Another bit of insight centers around acceptance testing - there should be at least one acceptance test for every acceptance criterion specified in a user story.
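As an illustration of that one-to-one mapping - the user story, the criterion and the toy system under test below are all invented by me, and the plain assert-style tests are my choice, not the book’s:

```python
# User story: "As a registered user, I can log in with my credentials."
# Each acceptance criterion gets (at least) one acceptance test.

def login(users: dict, name: str, password: str) -> bool:
    """Toy system under test: check credentials against a user store."""
    return users.get(name) == password

def test_registered_user_can_log_in():
    # Criterion 1: valid credentials grant access.
    users = {"alice": "s3cret"}
    assert login(users, "alice", "s3cret")

def test_unknown_user_cannot_log_in():
    # Criterion 2: unknown users are rejected.
    users = {"alice": "s3cret"}
    assert not login(users, "bob", "whatever")

test_registered_user_can_log_in()
test_unknown_user_cannot_log_in()
```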

A reassuring story

The book discusses how to build a Continuous Delivery pipeline from scratch, advising an iterative approach in which we start out with an absolute minimum setup (think a handful of deployment scripts and a hard drive for artifact storage) and expand the capabilities as the project grows.

It then states that setting up this minimal version of a Continuous Delivery Pipeline can take a few weeks. I find this personally reassuring.

I was once (in a previous company - SoftwareMill is awesome!) asked to assess how long it would take to set up a pipeline from absolute scratch, only to be ridiculed by my own manager for the 20 man-day estimate I came back with. It’s good to know that I wasn’t in the wrong and that I found confirmation from someone much more senior than myself.

An implementation detail

We all want to work with the latest and greatest tools, right? Wrong! Tech alone doesn’t get you anywhere. In fact, in the world of Continuous Delivery, tooling is merely an implementation detail.

Towards the end of the book, Dave discusses the case of LMAX (London Multi-Asset Exchange) - one of the highest-performance financial exchanges in the world. In his own words: “Most, if not all, tools and parts of the system were migrated at some time. We used the technology that seemed most appropriate at the time, and changed and evolved implementations and choices as the system grew and changed, and we learned more about what was required and what worked.”⁴

Talk about agility, architecture and eliminating technical debt! None of this would have been possible were it not for the design, approach, patterns and techniques driving the project in the first place.

Conversely - if the LMAX team had had a poor architecture and approach, they would have been glued to the tools they selected at the beginning, and the whole project would have ended in disaster.

Side note - the suggested architecture is described in the book, but I don’t want to spoil everything. Get the book - it’s really worth it :)

A note on test environments

Ultimately you always test in prod. Before you skin me alive for this blasphemy, hear me out. The real deployments and associated disasters happen in production. There’s no denying that. We can’t guarantee that our testing will save us from problems in production - no matter how many “pre-prod” environments we have.

But we can minimize the possibility of issues occurring if we ensure that each and every environment gets deployed and configured using the exact same tooling and scripts. That way, each pre-prod deployment also tests the process itself.
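One way to sketch that principle (all names, hosts and steps below are hypothetical): a single deploy routine whose only variable is the target environment, so every pre-prod deployment rehearses exactly the code path that production will use:

```python
# Sketch: one deploy routine for every environment.
# Configuration is data; the process itself never changes per environment.

ENVIRONMENTS = {
    "test":    {"host": "test.example.internal",    "replicas": 1},
    "staging": {"host": "staging.example.internal", "replicas": 2},
    "prod":    {"host": "prod.example.internal",    "replicas": 4},
}

def deploy(env: str, artifact: str) -> str:
    """Run the exact same steps everywhere; only parameters differ."""
    cfg = ENVIRONMENTS[env]
    steps = [
        f"push {artifact} to {cfg['host']}",
        f"scale to {cfg['replicas']} replicas",
        "run smoke tests",
    ]
    return "; ".join(steps)

# A staging deployment tests the very process that prod will follow:
print(deploy("staging", "app-1.4.2"))
```

If the staging deploy breaks, it is the shared process that broke - and you find out before production does.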

A complete, well-rounded engineer

There’s no doubt that Dave Farley has a unique perspective. What hit me quite hard however, was the realization of how this came to be.

Dave is first and foremost a software engineer. He spent years writing millions of lines of code before there were any DevOps tools. He knows the development process inside out, which allows him to derive efficient and scalable automation solutions.

It should be a point of honor for any DevOps Engineer to achieve a similar level of understanding as this will unlock a whole new perspective on our roles and the work we do.

A reflection in the mirror

At 145 pages, the book contains numerous checklists, bullet points and rules of thumb. For example:

  • Commit stage tests complete in under 5 minutes
  • Allow 10 minutes to commit or revert a change
  • Complete the entire pipeline in under an hour
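These budgets can even be checked mechanically. A minimal sketch - the thresholds come from the list above, but the enforcement mechanism is my own, not something the book prescribes:

```python
# Sketch: treat pipeline time budgets as testable constraints.
import time

COMMIT_STAGE_BUDGET_S = 5 * 60    # commit stage tests: under 5 minutes
FULL_PIPELINE_BUDGET_S = 60 * 60  # entire pipeline: under an hour

def within_budget(task, budget_s: float) -> bool:
    """Run a task and report whether it finished inside its time budget."""
    start = time.monotonic()
    task()
    return (time.monotonic() - start) <= budget_s

# Stand-in for a real commit-stage test run:
ok = within_budget(lambda: time.sleep(0.01), COMMIT_STAGE_BUDGET_S)
print("commit stage within budget:", ok)
```

Failing the build when a budget is exceeded keeps feedback fast by construction, rather than by good intentions.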

I encourage (or dare) you to look at all these guidelines through the prism of your own work and reflect on what you could do better. I’ll be honest with you - my most frequent reaction was a loud, sad “Ouch!”.

A final word

Continuous Delivery offers a coherent approach to software development and deployment centered around technical practices (test driven development, pair programming, trunk-based development) as well as modular architecture and a shared responsibility approach.

These are all the things that we like to say aren’t suitable for the “real world” because “We have to crank out features fast and this slows us down”. So the natural question is “How scalable is this approach?”. The book provides a few insights.

First of all, it quotes the DORA report on the effect Continuous Delivery has on software development:

  • 44% more time spent on features
  • 50% higher market cap growth over 3 years
  • 8000x faster deployment lead time
  • 50% less time spent fixing security defects
  • 50% lower change-failure rate
  • 21% less time spent on unplanned work and rework⁵

And what about LMAX - the project that Dave is so proud of? Well… Let’s hear what he has to say.

“The Deployment Pipeline grew with the software to handle nearly 100,000 tests, process 1.3x the daily data-volume of Twitter, commonly transact £100 billion in assets each day, and archive over 1TB of data every week.

The pipeline integrated with 20 to 30 external third-party systems, of various kinds and functions: from providing APIs for external trading, to complex integrations with clearing houses and customer verification systems.

Every change, on any part of the system, was automatically regulatory compliant because the production of all the FCA documentation and audit requirements were built into the Deployment Pipeline.”⁶

And this is what I would like to leave you with. If you made it this far - congratulations and thank you. I hope you found this text valuable and that it gave you some good food for thought. And once again - get the book. It’s worth it :)

Reviewed by: Daniel Ryczko, Adam Pietrzykowski

  1. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 13 

  2. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 62 

  3. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 44 

  4. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 127 

  5. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 4 

  6. Dave Farley, Continuous Delivery Pipelines: How To Build Better Software Faster (Great Britain: Amazon, 2021), 126 
