Secure Software Development Lifecycle (SSDLC)
What is Secure SDLC?
Secure SDLC (https://snyk.io/learn/secure-sdlc) is a concept that aims to incorporate security considerations and checks into every phase of the SDLC, from initial requirements gathering through design and development and into final verification. Its aim is not to completely eliminate traditional security checks, such as penetration tests, but rather to include security in the scope of developer responsibilities and empower them to build secure applications from the outset.
SDLC and Application Security
Software Development Lifecycle (SDLC) describes how software applications are built. It usually contains the following phases:
- Requirements gathering
- Design of new features based on the requirements
- Development of new capabilities (writing code to meet requirements)
- Verification of new capabilities—confirming that they do indeed meet the requirements
- Maintenance and evolution of these capabilities once the release goes out the door
One of the earliest and best-known SDLC methodologies, Waterfall, was first published in 1970 and laid the groundwork for these SDLC phases. Half a century later, these phases still largely remain the same, but that doesn't mean nothing has changed. In fact, there have been tremendous changes.
The majority of these changes have focused on increasing the pace of innovation while continuing to build well-functioning software applications. Back in 1970, portable computers were the size of a household refrigerator, the first microprocessor (Intel 4004) was still a year away, and the Internet only existed in the minds of a few science fiction authors. Software was used in highly specialized settings, and programs developed using the Waterfall methodology often took years to release.
We live in a very different world today, a world where most companies compete on technology and it’s hard to find a sector that hasn’t been disrupted by technology (Amazon, Uber, AirBnB, and the list goes on). Today, companies use some form of the agile software development methodology that was first published in 2001 and, amongst other things, advocates for splitting up large monolithic releases into multiple mini-releases done in two or three-week-long sprints and using automation to build and verify applications. This allows companies to be much faster and more flexible in order to respond to market demands and it is not uncommon for developers to release new functionality multiple times a day.
But what about the security of these applications? Back in 1970, application security concerns usually were not a major consideration for software, since attackers first had to gain physical access to a terminal where the application ran, and the world was a lot less interconnected. It was a far cry from the world we live in today, where someone can attack hundreds of applications by exploiting a known vulnerability in an open source component that all of these applications rely on. And unfortunately, as new software development methodologies were released over the years, security was rarely put in the spotlight within the SDLC.
Instead, application security became the responsibility of IT security teams as their roles expanded over the years. At first, applications were tested after their release, in production environments, often on a yearly basis. Unfortunately, this meant that vulnerabilities would be "out in the wild" for attackers to exploit for a number of weeks or even months before they were noticed and addressed, and therefore most companies have since chosen to supplement production testing with pre-release security testing as well. What this meant, though, is that the security check is now on the critical path of the release: it's conducted as that "one last thing" after software development and verification are complete, before letting the application out the door.
This security testing step often takes several weeks to complete, lengthening the release cycle. What's worse, its outcome is impossible to plan for: a security test may find just a few vulnerabilities that can be fixed in a few days, or it may find dozens or even hundreds. Fixing them may require significant code changes or replacing entire underlying components, all of which will then need to be re-verified against design requirements and retested by the security team. This can and often does set application developers back by weeks, all while they are still trying to meet now-impossible release deadlines. This creates a lot of friction within organizations and leaves companies choosing between two bad options: "signing off" on risk and releasing an application with vulnerabilities, missing delivery targets, or both. What's worse, it can cost up to 100 times more to fix an issue discovered this late in the SDLC than to fix it early on (more on this later).
And as the speed of innovation and frequency of software releases has accelerated over time, it has only made all of these problems worse. Much worse. This has led to the re-imagining of the role of application security in the software development process and creation of the Secure SDLC.
What are the Secure Software Development Life Cycle Processes?
While building security into every phase of the SDLC is first and foremost a mindset that everyone needs to bring to the table, security considerations and associated tasks will actually vary significantly by SDLC phase.
Let’s consider an example of a team creating a membership renewal portal and take a look at some of them:
- Requirements: in this early phase, requirements for new features are collected from various stakeholders. It’s important to identify any security considerations for functional requirements being gathered for the new release.
- Functional requirement: user needs the ability to verify their contact information before they are able to renew their membership.
- Security consideration: users should be able to see only their own contact information and no one else’s.
- Design: this phase translates requirements in scope into a plan of what this should look like in the actual application. Here, functional requirements typically describe what should happen while security requirements usually focus on what shouldn’t.
- Functional: page should retrieve the user’s name, email, phone, and address from CUSTOMER_INFO table in the database and display it on screen.
- Security: we must verify that the user has a valid session token before retrieving information from the database. If absent, the user should be redirected to the login page.
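The design-phase security requirement above can be sketched in a few lines of Python. This is only an illustration of the check's logic, assuming an in-memory session store; the names (`SESSIONS`, `CUSTOMER_INFO`, `get_contact_info`) are invented for the example and are not from any real application:

```python
# Hedged sketch of the design requirement above: verify a valid session
# token before retrieving contact info; otherwise redirect to login.
# All names and data here are illustrative assumptions.

SESSIONS = {"tok-123": "user-1"}  # valid session token -> user id
CUSTOMER_INFO = {
    "user-1": {"name": "Ada", "email": "ada@example.com"},
}

LOGIN_REDIRECT = {"redirect": "/login"}

def get_contact_info(session_token):
    """Return the calling user's own contact info, or a login redirect."""
    user_id = SESSIONS.get(session_token)
    if user_id is None:
        # Token absent or invalid: never touch the database, send to login.
        return LOGIN_REDIRECT
    # Lookup is keyed by the session's own user id, so a user can only
    # ever see their own record (the requirements-phase consideration).
    return CUSTOMER_INFO[user_id]
```

Note that the lookup key comes from the session, not from a user-supplied parameter, which is what enforces the "only their own contact information" requirement from the earlier phase.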
- Development: when it's time to actually implement the design and make it a reality, concerns usually shift to making sure the code is well written from a security perspective. There are usually established secure coding guidelines, as well as code reviews that double-check that these guidelines have been followed correctly. These code reviews can be either manual or automated using technologies such as Static Application Security Testing (SAST).
That said, modern application developers can't be concerned only with the code they write, because the vast majority of modern applications aren't written from scratch. Instead, developers rely on existing functionality, usually provided by free open source components, to deliver new features, and therefore value, to the organization as quickly as possible. In fact, 90% or more of a modern deployed application can be made up of these open source components. These open source components are usually checked using Software Composition Analysis (SCA) tools.
Secure coding guidelines, in this case, may include things like:
- Using parameterized, read-only SQL queries to read data from the database and minimize chances that anyone can ever commandeer these queries for nefarious purposes
- Validating user inputs before processing data contained in them
- Sanitizing any data that’s being sent back out to the user from the database
- Checking open source libraries for vulnerabilities before using them
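The first three guidelines above can be shown in one small sketch using only the Python standard library. The table schema and helper names are assumptions made for illustration, not part of the article:

```python
import html
import sqlite3

# Illustrative, stdlib-only sketch of the guidelines above: validate input,
# use a parameterized query, and sanitize data on the way back out.
# The schema and names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_info (user_id TEXT, email TEXT)")
conn.execute("INSERT INTO customer_info VALUES ('u1', '<ada@example.com>')")

def valid_user_id(user_id):
    # Validate user input before processing it: short alphanumeric ids only.
    return user_id.isalnum() and len(user_id) <= 16

def fetch_email(user_id):
    if not valid_user_id(user_id):
        raise ValueError("invalid user id")
    # Parameterized query: the value is bound via the ? placeholder,
    # never spliced into the SQL string, so it can't rewrite the query.
    row = conn.execute(
        "SELECT email FROM customer_info WHERE user_id = ?", (user_id,)
    ).fetchone()
    # Sanitize data being sent back to the user (HTML-escape it here).
    return html.escape(row[0]) if row else None
```

An input like `"u1; DROP TABLE customer_info"` fails validation before any query runs, and even a valid-looking id can only ever be treated as a bound value, not as SQL.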
- Verification phase is where applications go through a thorough testing cycle to ensure they meet the original design & requirements. This is also a great place to introduce automated security testing using a variety of technologies.
- Maintenance and evolution. The story doesn't end once the application is released. Vulnerabilities that slipped through the cracks may be found in the application long after it's been released. These vulnerabilities may be in the code developers wrote, but they're also increasingly found in the underlying open source components, leading to more and more "0-days": previously unknown vulnerabilities disclosed by the maintainers that then need to be patched. They may also come from other sources, such as external penetration tests conducted by ethical hackers, or submissions from the public through what's known as "bug bounty" programs. Addressing such issues needs to be planned for and accommodated in future releases.
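The ongoing open source monitoring described in this phase (what SCA tools automate) can be approximated in miniature: compare an application's dependencies against an advisory list and flag matches. The advisory data below is invented for illustration; real SCA tools query curated vulnerability databases:

```python
# Toy sketch of a Software Composition Analysis (SCA) check. The advisory
# data is invented for illustration only; real tools use curated databases
# of disclosed vulnerabilities in open source components.

ADVISORIES = {
    # package name -> set of versions with known vulnerabilities
    "examplelib": {"1.0.0", "1.0.1"},
}

def find_vulnerable(dependencies):
    """Return (package, version) pairs that match a known advisory.

    dependencies: mapping of package name -> pinned version string.
    """
    return [
        (pkg, ver)
        for pkg, ver in dependencies.items()
        if ver in ADVISORIES.get(pkg, set())
    ]
```

Run periodically (not just at release time), a check like this is what surfaces newly disclosed vulnerabilities in components that were considered safe when the application shipped.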
Benefits of SSDLC
Secure SDLC is the ultimate example of what’s known as a “Shift left” initiative, which refers to integrating security checks as early in the SDLC as possible.
Doing so helps teams plan releases properly and catch and address any issues that may affect the release timeline as they come up, instead of getting a big unpleasant surprise at the end, and therefore helps releases stay on track.
What's more, SSDLC has security at its core, led by the development team itself instead of being an afterthought handled by a different team. This empowers developers to take ownership of the overall quality of their applications, which leads to more secure applications being built in the first place.
With all that in mind, it must mean that applications that follow SSDLC are more expensive to build, right? Well, while all this security testing within the SDLC may sound like a lot of work, the reality of modern application development is that the vast majority of it is automated, especially thanks to DevOps (more on this in a minute). And it turns out that fixing an issue discovered in production or late in the SDLC can cost up to 100 times more than catching and fixing it right away, as you can see in the chart below.
Transitioning to Secure SDLC really is a worthwhile investment for many organizations as it leads to more empowered development teams building better applications faster.
How to Ensure SSDLC?
Since Secure SDLC involves changing existing processes, implementing new tools and more importantly, driving a cultural change within a number of teams, a path to well-functioning Secure SDLC is usually unique for each organization and can even differ amongst various business units.
That said, there are some best practices to consider:
- Secure SDLC goes hand-in-hand with other related initiatives, such as creating secure coding guidelines, providing developers with security awareness and secure coding training, and setting clear expectations for which security issues need to be addressed and how quickly that needs to happen if an issue is found after the application has been released (also known as remediation SLAs). Not all of these need to happen for an effective SSDLC implementation, but SSDLC is like a jigsaw puzzle: you'll need to put enough pieces together before you can see the big picture.
- Whatever you create, make sure it's easy to understand and that all advice, recommendations, and guidelines are clear and easy to act on. Any vulnerabilities discovered in tests need to be easy to act on as well. It's key that all people, processes, and tools involved bring solutions to the table instead of just pointing out problems.
- Since SSDLC will change how multiple teams work and interact, it's important for everyone to go into this experience with an open mind, and for the security team to have the mindset of empowering developers to secure their own applications.
- For well-established applications and teams, it may often be easier to implement a change like this when it's tied to another modernization effort, such as a cloud transformation, a DevOps initiative, or its more security-conscious variation, DevSecOps.
- Last but not least, it's important not to try to boil the ocean. It's best to focus on the most important issues and actionable fixes. While it may be possible for newer or smaller applications to fix every security issue that exists, this won't necessarily work for older and larger applications. It may also be helpful to take a so-called "stop the bleeding" approach, where new vulnerabilities are prevented from making it into production while existing vulnerabilities are triaged and addressed over time.
SSDLC and DevSecOps
Finally, it’s important to discuss the relationship between SSDLC and DevSecOps. They are sometimes used interchangeably but while closely linked, they are actually complementary. They both focus on empowering developers to have more ownership of their application than just writing and testing their code to meet functional specifications. Secure SDLC is focused on how the application is designed and how it is built, while DevSecOps seeks to shift ownership of the production environment for each application from the hands of traditional IT teams to the developers while automating build, test and release processes as much as possible.
DevOps and DevSecOps have started a revolution in re-defining the role of software developers while being aided by other major changes, such as cloud transformation. But while empowering developers and accelerating security testing are key to success for most modern organizations, it would be a mistake to view application security as just an automation challenge. Instead, it’s important to drive cultural and process changes that help security awareness and considerations permeate all parts of the Software Development Lifecycle, regardless of whether one calls it SSDLC or DevSecOps.