Technologies are what enable your people to properly execute DevSecOps processes. When most people think of DevSecOps and CI/CD, tooling is often top of mind. The ability to integrate and automate various development, security, and operations processes lies at the heart of a successful DevSecOps implementation. The following is a collection of technologies organizations must consider as they seek to implement a successful DevSecOps methodology within the enterprise.
The source code repository sits at the heart of just about every development environment on the planet. When implementing a DevSecOps approach, the repository is the key technology with which most other technologies in the pipeline integrate. When the organization assesses its readiness to adopt the DevSecOps paradigm, the capability of the repository to integrate with other critical technologies must be considered.
As more and more aspects of the application environment become defined within code, the security of the repository becomes paramount. Best practices for user access, repository configurations, and so forth must be implemented. Allowing unauthorized or unnecessary parties access to the repository can lead to significant risk.
One of the most popular repositories in DevSecOps organizations is Bitbucket. Its convenient web-based interface and integration with a wide range of other technologies make it a perfect match for the automation and orchestration requirements of DevSecOps. However, with that popularity comes an increased occurrence of security vulnerabilities, most often due to insecure configuration or usage of the repositories. To help defend against these issues, the following best practices should be employed:
Never store credentials as code/configuration in Bitbucket
Secrets management practices should always be followed for any repositories stored in Bitbucket. Some of these practices include leveraging git-secrets, regular secrets auditing, and using a reputable secrets manager.
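Dedicated tools such as git-secrets are the right choice in practice, but the core idea can be sketched in a few lines. The following is a minimal illustration, assuming a small set of hypothetical regex patterns; real scanners ship far more comprehensive rule sets.

```python
import re

# Illustrative patterns only -- tools like git-secrets maintain much
# larger, regularly updated rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

A check like this can run as a pre-commit hook or a pipeline step, failing the build whenever a match is found.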
Remove sensitive data
If sensitive data does end up in a repository, invalidate all exposed tokens and passwords, remove the info and clear the Git history, and assess the impact of leaked private information.
Tightly control access
Failures in the security of repositories often come down to human mistakes. To help mitigate this risk, leverage strong user management and use access control techniques.
Add a SECURITY.md file
You should include a SECURITY.md file that highlights security related information for your project. The file should include a disclosure policy, a security update policy, a security-related configuration, known security gaps, and future enhancements.
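A skeleton covering the sections listed above might look like the following; the wording and contact address are placeholders to adapt to your project.

```markdown
# Security Policy

## Disclosure Policy
Report suspected vulnerabilities privately to security@example.com.
Please do not open public issues for security reports.

## Security Update Policy
Security fixes are released for the two most recent minor versions.

## Security-Related Configuration
Enable TLS and set `strict_mode: true` in production deployments.

## Known Security Gaps & Future Enhancements
Rate limiting is not yet implemented; it is planned for a future release.
```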
Validate Bitbucket apps
These apps, written by third-party developers, should be checked for access rights, author/organization credibility, and overall security posture.
Get security tips as part of the workflow with code insights
Perform scans on all open Pull Requests using Bitbucket Code Insights. This will help identify new vulnerabilities that could be introduced by the PR.
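A scan step typically publishes its results back to the pull request as a Code Insights report. The sketch below builds such a report payload; the endpoint shown in the comment reflects Bitbucket Cloud's Code Insights REST API, but verify the path and fields against the current documentation before relying on them.

```python
def build_insights_report(vuln_count: int) -> dict:
    """Build a Code Insights report payload summarizing a security scan.

    The payload is typically sent with an authenticated PUT to:
      /2.0/repositories/{workspace}/{repo}/commit/{commit}/reports/{report_id}
    """
    return {
        "title": "Security scan",
        "report_type": "SECURITY",
        "result": "FAILED" if vuln_count > 0 else "PASSED",
        "details": f"{vuln_count} vulnerabilities found in this pull request.",
        "data": [
            {"title": "Vulnerabilities", "type": "NUMBER", "value": vuln_count},
        ],
    }
```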
Add security testing to PRs
Use Bitbucket hooks to check that PRs don’t introduce new vulnerabilities.
Add security testing in Bitbucket pipes
Add security-scanning pipes into the CI/CD flow to ensure automated pipelines do not contain security regressions.
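A scanning pipe can be declared directly in bitbucket-pipelines.yml. The fragment below is a sketch; the pipe version and variables shown are illustrative, so check the pipe's listing in the Bitbucket registry for current values.

```yaml
pipelines:
  pull-requests:
    '**':
      - step:
          name: Security scan
          script:
            # Pipe version and variables are illustrative -- consult the
            # snyk-scan pipe documentation for the current release.
            - pipe: snyk/snyk-scan:1.0.1
              variables:
                SNYK_TOKEN: $SNYK_TOKEN
                LANGUAGE: "npm"
```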
Consider Bitbucket Server
To significantly reduce the attack surface of a repository, Bitbucket Server allows the repository to be hosted on-premises.
Rotate SSH keys and personal access tokens
Bitbucket access is typically accomplished using SSH keys or personal user tokens. Periodically rotating these keys can reduce the risk of leaked keys exposing your repository.
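Rotation is easiest to enforce when it is checked automatically. The sketch below flags keys or tokens older than a policy interval; the 90-day window is an assumption, not a recommendation for every environment.

```python
from datetime import date, timedelta

# 90 days is an illustrative policy; choose an interval that fits your
# organization's risk tolerance.
MAX_KEY_AGE = timedelta(days=90)

def rotation_due(created: date, today: date) -> bool:
    """Return True when a key or token is older than the rotation policy allows."""
    return today - created > MAX_KEY_AGE
```

A scheduled pipeline can run a check like this against key metadata and open a task whenever rotation is overdue.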
For more information, see our blog and cheatsheet available at https://snyk.io/blog/cheat-sheet-10-bitbucket-security-best-practices/
The tasks involved in defining and maintaining consistent configurations across infrastructure, systems software, and even supporting applications have traditionally been very resource intensive. To enable a true DevSecOps model, this configuration management needs to be automated and integrated into the overall development lifecycle. This has become easier to accomplish with the increasing level of code defined components.
Integrating and automating configuration management has a number of key benefits. First, it enables easy visualization and reporting of changes to the environment. While it is not a replacement for change management practices, it can ensure that proper tracking occurs. Additionally, this automation provides granular versioning that allows infrastructure and supporting software changes to be linked to the code versions that they support. This helps eliminate issues where mismatched configurations cause software failures. Finally, by automating configuration management, organizations also ensure consistent configuration across the environment. This has the side benefit of making it easier to identify the presence of threats and respond to security incidents.
As an organization prepares to implement automated configuration management, there are a few key elements that need to be considered.
One of the great things about having automated configuration management integrated with infrastructure as code is the ability to automate deployment of infrastructure as needed. Whether the environment is leveraging a virtual machine environment, a cloud environment or containers, there are solutions available that orchestrate the deployment of the infrastructure in a dynamic fashion. As organizations move to a DevSecOps model, they need to understand the scope of these deployments and ensure they leverage the right tools for their application.
The practice of host hardening is not new, but if it were used more often, fewer services and applications would be unnecessarily exposed publicly. With the introduction of highly dynamic orchestrated infrastructure, this practice becomes more crucial. Countless examples of security incidents can be directly related to leaving a generic attack surface that allows automated attack tooling to succeed in the most basic attacks. Hardening best practices and methodologies for most technologies are mature enough to be easily included in the creation of templates to reduce the attack surface and reinforce a trust model. The latter can be codified as metadata for further processing by the CI pipeline, and then used for other processes, such as patching.
As more organizations move to cloud-native environments, the use of containers has grown exponentially. Docker images have become commonplace, and so have breaches involving insecure container images. To help ensure the security of Docker images, the following best practices should be followed:
Minimize container images
Choose images with fewer OS libraries and tools to reduce the overall attack surface. Where possible, leverage alpine-based images as opposed to full-blown system OS images.
Limit user privileges
Create a dedicated user and group on the image, with minimal permissions to run the application; use the same user to run this process.
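The first two practices above can be combined in a short Dockerfile. This is a sketch assuming a Node.js application and an entry point named server.js; the base image and file names are placeholders.

```dockerfile
# Minimal Alpine-based image rather than a full OS image.
FROM node:20-alpine
# Dedicated, unprivileged user and group for the application.
RUN addgroup -S app && adduser -S -G app app
WORKDIR /app
COPY --chown=app:app . .
# Run the process as the unprivileged user, not root.
USER app
CMD ["node", "server.js"]
```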
Sign and verify container images
Digitally sign images as you create them and verify the trust and authenticity of images when they are pulled from a publisher.
Regularly monitor images for open source vulnerabilities
Scan docker images for known vulnerabilities and integrate those scans into the continuous integration environment.
Protect images from information leakage
Tokens, keys, and other secrets are often left exposed in images when they are built. To guard against this, use multi-stage builds and leverage the Docker secrets feature to mount sensitive files without caching them. Additionally, using a .dockerignore file can help avoid COPY instructions that pull in sensitive files that are part of the build context.
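Combining a multi-stage build with a BuildKit secret mount keeps the credential out of every image layer. The fragment below is a sketch assuming a Node.js build that needs a private registry token in an .npmrc file; adapt the image and secret id to your stack.

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
# The secret is mounted only for this single instruction and is never
# written into an image layer. Supply it at build time with:
#   docker build --secret id=npmrc,src=$HOME/.npmrc .
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
COPY . .

# The final stage copies only the built application, not the build context.
FROM node:20-alpine
COPY --from=build /app /app
CMD ["node", "/app/server.js"]
```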
Use fixed tags for immutability
New versions of images can be pushed to the same tags, which can result in inconsistent images during builds. To prevent this, use verbose image tags that include both version and operating system or use a hash of the content to tag the image.
Use COPY instead of ADD
The ADD instruction can expose multiple attack vectors including Man-in-the-Middle and Zip Slip attacks. Whenever possible, use COPY instead.
Use labels for metadata
Including additional metadata in image labels can provide users with helpful information. Additionally it is recommended to include Responsible Disclosure policy information in image labels.
Use multi-stage builds to minimize image size
Use multi-stage builds in order to produce smaller and cleaner images, thus minimizing the attack surface for bundled docker image dependencies.
Use a linter
Using a static code analysis tool can enforce Dockerfile best practices and detect potential issues.
For more information, see our blog and cheatsheet available at https://snyk.io/blog/10-docker-image-security-best-practices/
Given that this metadata is defined in code and typically stored in a repository with the rest of the code for the application, automated tools that can identify vulnerabilities in the configurations or departures from hardening best practices should also be implemented. This helps ensure security is not only baked into the infrastructure’s design and deployment, it does so in an unobtrusive fashion that doesn’t inhibit development.
Once the metadata has been associated with each asset, the organization can use this data to implement patching at the CI/CD level. Feeds from threat intelligence and vulnerability management can be compared to the deployed software stack to identify matches in the templates in turn queued for deployment. Patching live systems becomes a thing of the past, thus limiting the impact of downtime. This will also provide the ability to have a risk exposure in near real time.
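The correlation at the heart of this step is a join between the advisory feed and the deployed stack metadata. The sketch below uses exact version matching for clarity and hypothetical advisory IDs; real matching must handle version ranges and package naming variations.

```python
def match_advisories(deployed: dict[str, str], advisories: list[dict]) -> list[dict]:
    """Return advisories whose package and version match the deployed stack.

    `deployed` maps package name to installed version, as recorded in the
    template metadata. Exact-match comparison is a simplification.
    """
    return [
        adv for adv in advisories
        if deployed.get(adv["package"]) == adv["version"]
    ]
```

Any match marks the corresponding template for rebuild and redeployment rather than live patching.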
All secure coding standards must be continually checked against new security recommendations, and all changes to the code need to be verified and tested against these recommendations: no change is too small to be exempt from this process. This is not a trivial exercise, but the benefits of such practices should not be underestimated, as they are not limited to the volume of changes occurring in the development lifecycle.
The OWASP Top 10 is a great place to start this review: convert its risk categories into checks in your QA testing, taking advantage of the automated testing facility to provide just-in-time feedback to development teams. Additionally, the OWASP ASVS, with its 19 verification domains, lends itself exceedingly well to the craft of building secure software.
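One concrete way to fold such recommendations into QA is a small automated check. The sketch below verifies that a response carries a few security headers commonly recommended for web applications; the header list is illustrative, not exhaustive.

```python
# Illustrative subset of commonly recommended security headers.
REQUIRED_HEADERS = [
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the required security headers absent from a response."""
    present = {name.lower() for name in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]
```

Run against every deployed environment, a check like this gives developers immediate feedback when a hardening setting regresses.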
With the ever-increasing pace of new software development techniques and frameworks, attack-driven development lays out a process through which developers can learn about the tools, techniques, and procedures for software development and application security in parallel.
Automated assessment of applications for security vulnerabilities is a crucial aspect of DevSecOps. It allows businesses to fully understand their risk posture and to remediate vulnerabilities before they are exploited by attackers. The following solutions can help mature the security posture of a DevSecOps environment:
Source code scanning should be covered by implementing Static Application Security Testing (SAST) tools. SAST is used for scanning the source code repository, usually the master branch, identifying vulnerabilities and performing software composition analysis. SAST tools should be integrated into post-commit processes to ensure that new code introduced is proactively scanned for vulnerabilities. Having a SAST tool integration in place enables remediation of vulnerabilities earlier in the software development lifecycle, and it reduces application risk and exposure.
Dynamic Application Security Testing (DAST)
Dynamic Application Security Testing tools are designed to scan staging and production websites in their running state, analyzing input fields, forms, and numerous other aspects of the web application for vulnerabilities. These tools should be integrated into the pipeline as releases are deployed to subsequent environments.
SAST IDE integration
IDE integration of static code analysis plugins allows the developer to have a near real-time notification of insecure coding practices within the integrated development environment. This provides an effective way to optimize and mitigate vulnerabilities straight away without needing to leave the development environment.
All binaries must be scanned for security issues derived from the coding checklist, and then the binaries must be digitally signed. The digital signature is treated in the same fashion as the metadata. For example, within the CI, only signed binaries can be used and implemented, thus ensuring the correct level of security sign-off without having to wait for free cycles from the security team.
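The gating logic is simple even though production pipelines use proper signing infrastructure such as GPG or artifact-signing services. The sketch below substitutes an HMAC over the binary's contents as a stand-in for a real digital signature; the key handling shown is illustrative only.

```python
import hashlib
import hmac

# Illustrative only -- in practice the key lives in a KMS or signing
# service, and real pipelines use asymmetric signatures (e.g. GPG).
SIGNING_KEY = b"replace-with-a-managed-key"

def sign_artifact(data: bytes) -> str:
    """Produce an HMAC-SHA256 signature over an artifact's contents."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """The CI gate: reject any binary whose signature does not match."""
    return hmac.compare_digest(sign_artifact(data), signature)
```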
Using a predefined template for building assets is essential to ensure the desired security level, although this should be supplemented by host-based scans. Most security scanners now provide a compliance module that allows you to import your template.
These predefined templates, once instantiated, can be checked for any differences against the pre-deployment scans to identify any changes which may introduce security threats. This should be achieved by using API integration for obvious automation purposes.
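At its core, that comparison is a diff between the hardened template and the live configuration returned by the scan. A minimal sketch, assuming both sides are flattened to key-value settings:

```python
def config_drift(template: dict, live: dict) -> dict:
    """Return settings whose live values differ from the hardened template."""
    return {
        key: {"expected": value, "actual": live.get(key)}
        for key, value in template.items()
        if live.get(key) != value
    }
```

Any non-empty result indicates drift that should be investigated before, or immediately after, deployment.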
Automated vulnerability management
Vulnerability management solutions should be integrated via API into infrastructure and web application scanning platforms. This integration assures the organization that all vulnerabilities discovered are being tracked. Additionally, in mature DevSecOps environments it can provide real-time correlation of active threats against identified vulnerabilities. This helps to identify the following:
What assets are subject to known exploits.
Any new threats that may pose an immediate risk to the business.
The vulnerability management processes should additionally be integrated with the developer bug-tracking system. In this way, bug records can be opened immediately as vulnerabilities are discovered thus ensuring faster remediation.
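The integration boils down to mapping each scanner finding onto an issue in the tracker's format. The sketch below targets a generic issue payload; field names and the finding structure are assumptions to adapt to your scanner and bug-tracking system.

```python
def vulnerability_to_ticket(vuln: dict) -> dict:
    """Map a scanner finding onto a generic bug-tracker issue payload."""
    return {
        "title": f"[{vuln['severity'].upper()}] {vuln['id']} in {vuln['component']}",
        "description": vuln.get("description", ""),
        "labels": ["security", vuln["severity"]],
    }
```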
Automated compliance scan
Compliance can be achieved using automated security configuration assessments to reduce risks and maintain continuous compliance. This helps to cut compliance costs by reducing the effort and time required to assess the systems, and it allows the sharing of compliance data with the business GRC tool and help-desk applications to provide visibility of the compliance status.
‘Secrets’ in an information security environment include all the private information a team must protect, for example the credentials for a database or a third-party API. To establish a trusted connection, a credential, a certificate, or an API token is necessary, but even with these precautions, handling secrets can be challenging and can often become a source of error or even a security breach.
Techniques that make the task of handling secrets easier include having a constant in the source code, or storing secrets in a configuration file that is not checked into version control. These techniques solve some problems, but they generate challenges of their own, particularly around key rotation.
The ideal approach is a synchronized, encrypted, shared password store that can be decrypted by all team members individually, but without the use of a shared password. Two tools are available to achieve this: GPG (the GNU Privacy Guard) and Pass. GPG allows the implementation of a public key infrastructure and is often used in email encryption. However, GPG can be complex to use, and Pass — whose developers call it the ‘standard Unix password manager’ — gives users a convenient wrapper around GPG. Pass allows you to encrypt secret information with one or more private keys, and all the encrypted information is stored as flat files in one directory that can be shared using version control. These tools facilitate an encrypted, shareable pool of information that is still secure.
Effectively managing secrets, using tools like GPG and Pass, is an essential element of DevSecOps, as they work from request, to creation and distribution, ensuring security right along the chain.
The processes defined for these practices have to ensure that security is built into development in a way that enables developers and does not create obstacles to high-paced deployment.