Over the course of a generation, CI/CD has gone from a niche topic to a mainstream approach to software development and delivery that is now taken for granted in the field. Although many throw the terms around with confidence, the exact meanings of CI and CD are often misunderstood and misused.
In this article, we explain these two concepts, followed by a discussion on CI/CD pipelines and important CI/CD tools. Last, we consider how CI/CD and security can dovetail to provide a solid foundation for a DevSecOps approach to software delivery.
What Is CI/CD?
CI/CD is a commonly used acronym in software development. It stands for “continuous integration” and “continuous delivery.” Although these are distinct concepts, they are often treated as though they are one.
Continuous integration is a standard development process in which all code in a project is regularly committed to a single branch, whether or not the work it belongs to is complete.
Continuous delivery is a regular process that packages up the deployment unit or units that comprise the codebase’s outputs. These processes are commonly associated with development automation, DevOps, and — more recently — GitOps.
Continuous integration (CI) is commonly understood as a development practice of regularly integrating code in development into a single branch. This single branch is usually called the “trunk.” While teams can branch off for specific reasons (e.g., to make a hotfix to a live system), these cases are treated as specific exceptions to the rule.
To manage work in progress, “feature flags” ensure that unfinished code is not activated until it is ready. This single-branch approach contrasts with other forms of development, such as GitFlow, which use multiple long-running branches to allow for multiple streams of development.
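A feature flag can be as simple as a conditional around the new code path. The sketch below illustrates the idea with an environment variable as the flag store; the flag name, function names, and the checkout example are all illustrative, and real systems typically use a dedicated flag service:

```python
import os

# Minimal feature-flag sketch: unfinished code ships on the trunk but stays
# inert until the flag is switched on. Flag name and functions are illustrative.
def new_checkout_flow_enabled() -> bool:
    return os.environ.get("FEATURE_NEW_CHECKOUT", "off") == "on"

def checkout(cart):
    if new_checkout_flow_enabled():
        return f"new flow: {len(cart)} items"    # in-progress code path
    return f"legacy flow: {len(cart)} items"     # stable code path

print(checkout(["book", "pen"]))                 # legacy flow while flag is off
os.environ["FEATURE_NEW_CHECKOUT"] = "on"
print(checkout(["book", "pen"]))                 # new flow once flag is on
```

Because the inactive path never executes in production until the flag flips, half-finished work can live safely on the trunk alongside stable code.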
By using CI, you can avoid the traditional problem of “merge day,” where these streams of development need to be carefully reconciled. This reconciliation can be complicated and error prone, and it can erode confidence in releasing code changes at all. The practice of CI also helps foster other good practices, such as a regular test cadence for your unit or integration tests if you have an automated CI pipeline. From a technical perspective, CI requires that developers frequently check in their in-flight work to a single branch (i.e., they do not branch their feature work at all), but in common use this rule is not always adhered to.
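The day-to-day mechanics of trunk-based CI boil down to committing small, frequent changes to a single branch. A minimal sketch in a throwaway local repository (paths, names, and the commit message are illustrative):

```shell
# Simulate trunk-based development in a scratch repository.
rm -rf demo_repo
git init -q demo_repo
git -C demo_repo config user.email "dev@example.com"
git -C demo_repo config user.name "Dev"

# In-flight work is committed straight to the trunk, not a long-lived
# feature branch; the code itself would be guarded by a feature flag.
echo "half-finished feature, guarded by a flag" > demo_repo/feature.txt
git -C demo_repo add feature.txt
git -C demo_repo commit -q -m "feat: in-flight work, behind a flag"

git -C demo_repo log --oneline
```

In a real setup, each such push would also trigger the automated test suite, so integration problems surface within minutes rather than on a merge day.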
Continuous integration ensures that changes in development are regularly integrated into the main line of code. Continuous delivery packages up the code into a deliverable unit that can then be deployed by the developers themselves (in a pure DevOps model) or by a separate operations team if necessary. This is often confused or conflated with “continuous deployment,” which refers to a process that automatically deploys changes to production. In practice, “CD” is used in this looser sense more often than in its strict technical meaning.
Automation is ideal for CI and CD practices, since they require that the same actions be performed on a regular basis. Automated CI and CD processes are typically referred to as “pipelines,” an analogy to traditional factory assembly lines. Since a key principle of DevOps is automation (the “A” in the DevOps CALMS model), CI/CD pipelines are often considered integral to DevOps practices. Either a single team can build and maintain the pipeline to production (a purer DevOps model), or the CI/CD pipeline can deliver a stable, more thoroughly tested set of build artifacts to a separate operations team for deployment.
A CI/CD pipeline also facilitates the introduction of other changes that can improve reliability. For example, it is relatively easy to insert unit or integration testing earlier in the build/deployment cycle. This has been referred to as “shifting left” and can result in significant cost reductions, as problems are found earlier on in the delivery process.
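A shifted-left check is simply a fast, automated test that the pipeline runs on every commit, long before any staging or production deployment. A minimal sketch, assuming a hypothetical pricing function as the code under test:

```python
# Sketch of a "shift-left" unit test: cheap to run on every commit, so a
# pricing bug is caught minutes after it is introduced, not at release time.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(9.99, 0) == 9.99

test_apply_discount()  # a CI stage would typically run this via a test runner such as pytest
print("unit tests passed")
```

The earlier in the pipeline such a test runs, the cheaper the failure: a red unit-test stage costs minutes, while the same defect found in production costs an incident.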
Similarly, pipelines foster an environment where changes can be released “little and often,” which also reduces risk, as each of these smaller changes poses less risk to the system as a whole. The more traditional “big bang” approach, on the other hand, bundles many changes together into a single major and irregular release.
Finally, the reduction of manual intervention further reduces risk, as machines are more reliable than people. There is little danger of an automated pipeline running the wrong command as part of a build or of forgetting to run a QA test as part of a release cycle.
The recent popularity of GitOps builds on this pipeline code, insisting that the pipeline is represented entirely in source control. Moreover, the deployment state is managed by automated controlling agents that ensure the state matches the source.
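In a GitOps setup, the desired deployment state lives in Git as declarative manifests, and a controlling agent (Argo CD and Flux are two common examples) continuously reconciles the running system against them. A minimal sketch of such a manifest, with illustrative names and image:

```yaml
# Desired state, stored in source control; a reconciliation agent
# applies it to the cluster and corrects any drift from it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.example.com/example-service:1.4.2
```

If someone changes the live system by hand, the agent detects that the running state no longer matches what Git declares and restores it, so Git remains the single source of truth.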
When it comes to software security management, the increasing popularity of CI/CD pipelines has brought about new opportunities but also new threats. On the positive side, CI/CD pipelines limit free access to the build and deployment process. In addition, it is easier to grant those users (both “real” users and services) fine-grained access to just the resources they need rather than full administrator access. Pipelines also significantly increase the auditability of build and delivery, as with each step, it is relatively trivial to log what action was performed, the outcome, and what (or who) triggered it.
As noted, a drawback of CI/CD's rise is the accompanying increase in threats. Since 2000, a number of factors have caused the amount of code, as well as the number of sources and software platforms, to proliferate. As development and deployment have accelerated and pipelines have become increasingly reliable, software is now being deployed faster than ever.
The rise of open-source software libraries, platforms, and tooling has also offered developers far more software options. Finally, the rise of containerization as a fungible packaging and deployment technology, along with the interoperability of software components via REST interfaces and gRPC, has meant that these components can be built and deployed together more easily and quickly than ever before.
These factors combined have created a tsunami of new software for centralized departments to try to manage. Security, operations, and architecture teams have all had to adapt to this new and ever-changing environment.
This pressure has given rise to DevSecOps, an extension of the DevOps model of shared responsibility for development, deployment, and maintenance in which security interests are tightly integrated.
CI/CD pipelines began as combinations of simple shell scripts and descendants of Make files such as Ant and Maven. Over time, more fully fledged applications that perform this function have become widely used. Some of these originated as straightforward server-side applications but went on to become successful commercial products in their own right. The biggest players in this space are Jenkins and TeamCity. These tools originally stored their pipeline configuration in a stateful way on the server side via the application’s GUI. More recently, however, declarative “pipelines as code” picked up from remote source repositories have become the norm.
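A declarative “pipeline as code” lives in the repository alongside the application it builds. A minimal sketch in Jenkins's declarative Jenkinsfile format (stage names and build commands are illustrative, assuming a Gradle project):

```groovy
// Minimal declarative Jenkinsfile sketch: the pipeline definition is
// versioned in the repository rather than configured in the server's GUI.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh './gradlew assemble' }
        }
        stage('Test') {
            steps { sh './gradlew test' }
        }
    }
}
```

Because the definition is just a file in source control, pipeline changes get the same review, history, and rollback guarantees as application code.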
Snyk can help you continuously avoid known vulnerabilities in your dependencies, for example through static application security testing. You can find Snyk security integrations with TeamCity, Jenkins, and many other CI/CD tools and systems, and you can check out Snyk's integration configuration examples in our GitHub repository.
The major source control services have also gotten in on the act. GitLab was first to the punch with its GitLab CI/CD offering; GitHub followed with GitHub Actions. Snyk offers integration with both GitLab and GitHub.
Since CI’s original coinage in 1991, CI/CD has gone from a relatively niche practice to the industry standard. Along with it, the combined and mutually reinforcing effects of the rise of open-source, containerization, and distributed applications have resulted in an explosion of software artifacts that span a seemingly infinite array of tools and technologies.
Yet this has posed a security problem for software delivery pipeline owners looking to keep all these new attack vectors under control. Manual verification is neither a sustainable, efficient, nor reliable approach. Learn more about the different types of security audits you can add to your pipeline here.