
Is your team on the *security* naughty or nice list?


December 20, 2023


As kids, many of us felt anticipation, excitement, and maybe even nervousness during the holiday season. Had we been good enough to get the Game Boy, Barbie Dream House, or Etch A Sketch we’d been pining for, or were we just going to get a big ol’ lump of coal? 

This holiday season is a good time to ask the same question in a different context: are your organization’s practices with AI, application security tooling, and other security-related practices putting you on the security naughty or nice list this year? 

Read on to find out!

Naughty: Putting security measures on assets ad hoc as compliance or leadership demands it

If you don’t take the time to inventory all of your organization’s existing assets, there’s a good chance that some source code, third-party dependencies, or endpoints will slip through the cracks. Or, on the flip side, you might accidentally double up security efforts on the same asset, wasting resources and valuable security budget!

Nice: Conducting an application security gap analysis to determine how to secure your environment holistically

An AppSec gap analysis is the right place to start securing your environment holistically: inventory your existing assets, classify them by business criticality, and identify where security coverage is missing. 
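
As a rough illustration of what that first pass can look like, here’s a minimal Python sketch of an asset inventory checked against a required set of controls. The asset names, criticality tiers, and control labels are hypothetical — the point is simply to surface missing coverage on the most business-critical assets first.

```python
from dataclasses import dataclass, field

# Hypothetical asset inventory for a gap analysis: names, criticality tiers,
# and control labels are illustrative, not tied to any specific tooling.
@dataclass
class Asset:
    name: str
    kind: str                # e.g. "repo", "container image", "endpoint"
    criticality: str         # "high", "medium", or "low" business criticality
    controls: set[str] = field(default_factory=set)  # controls already in place

REQUIRED_CONTROLS = {"sast", "sca", "secrets-scanning"}

inventory = [
    Asset("payments-api", "repo", "high", {"sast", "sca"}),
    Asset("marketing-site", "repo", "low", {"sca"}),
    Asset("internal-tools", "repo", "medium", set()),
]

# The gap analysis: for each asset, list the required controls it is missing,
# ordered so the most business-critical gaps surface first.
order = {"high": 0, "medium": 1, "low": 2}
for asset in sorted(inventory, key=lambda a: order[a.criticality]):
    missing = REQUIRED_CONTROLS - asset.controls
    if missing:
        print(f"{asset.name} ({asset.criticality} criticality): "
              f"missing {', '.join(sorted(missing))}")
```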

Learn more tactics for scaling a risk-based AppSec program

Naughty: Assuming that security will just derail your CI/CD processes and putting it on the back burner as a result

Done right, security can accelerate your CI/CD pipeline rather than slow it down. 2024 may be your year to take security off the back burner and explore how it can support your development processes!

Nice: Viewing security as an enabler — not a roadblock — to your existing processes

It’s a game changer when security gets embedded throughout your CI/CD pipeline. Think of instantaneous code checks at pull request, automated threat modeling, and more. A security-conscious CI/CD pipeline enables developers to learn secure coding practices, create higher-quality products, and contribute to your organization’s overall security posture. 
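
As one possible shape for such a check, here’s a minimal sketch of a pull-request security gate that shells out to the Snyk CLI. It assumes the CLI is installed and authenticated in your pipeline; the severity threshold and the wrapper script itself are illustrative, not a prescribed setup.

```python
import subprocess
import sys

# Minimal pull-request security gate sketch: run a Snyk Code scan over the
# working tree and fail the check if issues at or above the chosen severity
# are found. Assumes the Snyk CLI is installed and authenticated; adjust the
# threshold to match your own policy.
def security_gate(severity: str = "high") -> int:
    result = subprocess.run(
        ["snyk", "code", "test", f"--severity-threshold={severity}"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Security gate failed: issues at or above the threshold were found.",
              file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```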

Discover how to build a security-conscious CI/CD pipeline.

Naughty: Assuming that AI-written code is well-written and secure

AI has captured the imagination and excitement of countless developers. But don’t be fooled: the slick UIs and snazzy functionality of today’s AI coding assistants don’t guarantee reliable or secure code. AI models are trained on publicly available code from across the web — the good, the bad, and the ugly. In other words, it’s safer to assume that the AI coding assistant you use writes code at the skill level of a novice developer.

Nice: Checking AI-written code with security scanning 

Developers across the globe love AI tools and will keep using them for the speed they bring. That said, AI-written code deserves the same scrutiny as hand-written source code — including real-time security scans as the code is generated.
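
One simple way to do that, sketched below under the same assumptions as the pipeline example (Snyk CLI installed and authenticated), is to treat AI output like any other untrusted contribution: write it into the working tree and run the same scan you’d run on hand-written code before committing. The snippet and file path here are made up for illustration.

```python
import pathlib
import subprocess

# Illustrative only: an AI-generated snippet is written to the working tree and
# scanned with the same tooling used for hand-written code before it is committed.
ai_generated_snippet = '''\
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Deliberately naive example of the kind of code an assistant might produce.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()
'''

target = pathlib.Path("generated/user_lookup.py")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(ai_generated_snippet)

# Run the same static scan you would run on any other source file.
scan = subprocess.run(["snyk", "code", "test", str(target.parent)], text=True)
print("Scan exit code:", scan.returncode)  # non-zero means issues were reported
```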

Learn how Snyk can serve as a security companion to your AI-generated code.

Naughty: Prioritizing security fixes solely by CVSS (critical, high, etc.)

Critical vulnerabilities sound scary, and it looks great when you fix a lot of them and can proudly say, “I fixed X number of critical vulnerabilities!” 

But is it helpful to start with all the criticals and work your way down from there? What if a critical vulnerability exists on a test website just plastered with Lorem Ipsum filler text? What if there’s a medium vulnerability in one of your organization’s most valuable apps used by thousands of users and containing tons of sensitive information? 

Nice: Prioritizing security fixes based on a holistic picture of risk to the organization

While CVSS offers one clue about the risk level of a particular vulnerability, it’s not the entire picture. That’s why application security posture management (ASPM) is such a hot topic. ASPM focuses on understanding the security posture of your entire environment to enable a risk-based approach to prioritization, where application and business context — for example, how findings from a static application security testing (SAST) scan relate to runtime security testing — play a bigger role in assessing the level of risk a given issue poses. 
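
To make the idea concrete, here’s a toy prioritization sketch — not Snyk’s actual scoring model — in which a finding’s severity is weighted by the asset’s business criticality and internet exposure. With these made-up weights, the medium issue on the critical customer-facing app outranks the critical issue on the throwaway test site, which is exactly the reordering a risk-based approach is after.

```python
# Toy risk-prioritization sketch -- not Snyk's scoring model. The weights are
# arbitrary; the point is that business context changes the ordering that CVSS
# severity alone would give.
SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}
CRITICALITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

def risk_score(severity: str, business_criticality: str, internet_facing: bool) -> int:
    score = SEVERITY_WEIGHT[severity] * CRITICALITY_WEIGHT[business_criticality]
    return score * 2 if internet_facing else score

findings = [
    {"app": "lorem-ipsum-test-site", "severity": "critical",
     "criticality": "low", "internet_facing": False},
    {"app": "customer-portal", "severity": "medium",
     "criticality": "high", "internet_facing": True},
]

# Sort highest risk first: the medium issue on the critical, internet-facing
# app lands above the critical issue on the low-value test site.
for f in sorted(findings, reverse=True,
                key=lambda f: risk_score(f["severity"], f["criticality"], f["internet_facing"])):
    print(f["app"], "=>", risk_score(f["severity"], f["criticality"], f["internet_facing"]))
```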

Discover more about ASPM and how it’s changing the way people think about application security. 

Naughty: Using sensitive data to write AI prompts

As we’ve already established, AI can’t be trusted on its own — and that extends to how AI tools process and store prompt data. Not all LLM providers have proper encryption controls or adhere to formal security policies. 

Nice: Applying the principle of least privilege when using LLMs

While you can include some data to give the LLM context as you prompt it, share only the bare minimum it needs to do the job. It’s also a good idea to review the tool’s security policies before adopting it. 
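
As a minimal illustration of that principle, the sketch below strips obvious secrets and personal data from a prompt before it leaves your environment. The patterns are illustrative and far from exhaustive — real deployments typically lean on dedicated data-loss-prevention tooling rather than a couple of regexes.

```python
import re

# Illustrative redaction pass, not a complete DLP solution: masks email addresses
# and obvious API-key-shaped tokens before a prompt is sent to an LLM.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "<EMAIL>"),
    (re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"), "<API_KEY>"),
]

def redact(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw_prompt = (
    "Summarize this support ticket from jane.doe@example.com. "
    "Her API key is sk_live_1234567890abcdef and it keeps getting rejected."
)
print(redact(raw_prompt))  # only the redacted version ever leaves your environment
```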

Check out 10 best practices for securely developing with AI.

Nice: Giving your developers the best gift of all: Snyk’s developer-friendly security tooling!

Well, it’s the best gift by our standards, anyway. Learn more about Snyk AppRisk for ASPM and our approach to application security today.


Want to try it for yourself?

In this guide, we'll walk through the steps to run an Application Security Gap Analysis for asset visibility, AppSec coverage, and prioritization.