What does Biden's Executive Order on AI safety measures mean for businesses?

November 2, 2023

On October 30, U.S. President Joseph Biden issued a sweeping Executive Order (“EO”) focused on making AI safer and more accountable. 

Summarising the AI executive order

The Order covers a lot of ground, ranging from algorithmic bias to privacy protections to the safety of frontier AI models – the largest, most cutting-edge models, such as GPT-4, Bard, and Llama 2. It directs many government agencies to develop specific areas of AI regulation over the coming year. The EO also includes a section on encouraging open development of AI technologies, fostering innovation in AI security, and building AI-powered tools to improve security.

The technology world is still digesting the EO and processing its potential impacts. In certain important areas, the EO is silent. For instance, it does not mention open source or carve out any exemptions for open source projects that may be creating new foundational models. In other areas, the wording is open to interpretation. The EO does suggest that training runs requiring more than a specific amount of computational resources must be reported, but it does not clearly explain whether this applies to all models or only to models that could easily be repurposed for malicious purposes.

The EO is also not a legally enforceable set of rules; rather, it is designed to provide the structure for coming regulations, so it will likely shape the priorities and approach that regulators take and, in turn, future laws around AI. The big question, of course, is how this Order will affect technology development and deployment at companies and organizations building with or on top of AI systems (for the purposes of this blog post, “AI system” means any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI). AI moves very quickly, and your organization may already be forming AI practices that could be difficult or costly to unwind when regulations come into effect. Ideally, you want to avoid that situation and proactively future-proof your AIOps and products with best practices that follow the guidance in the Executive Order.

Key questions raised in the AI executive order

Here’s a quick overview of the big questions we are hearing and what we think the answers are at this point in the game. (Note: We may update this post over time, as the landscape develops. Updates will be noted with a date stamp.)

Questions covered

Q: The Executive Order talks about a registry of AI models. Will I have to register?

A: It is currently unclear, but we believe in most cases, companies building smaller AI models or using pre-existing models will not have to register.

Q: What triggers the reporting requirements laid out in the Executive Order?

A: The EO mandates that AI developers report any large-scale AI training runs that could pose a significant security risk. It also mandates reporting the results of any safety evaluations, or "red-teaming". Cloud service providers are also required to notify the government if a foreign individual attempts to purchase computational services capable of training a large AI model. 

Based on the language in the EO, the reporting threshold will only be met by very large models – models trained with tens of millions of compute hours on the latest GPUs. In ops terms, this means models trained using more than 10^26 integer or floating-point operations, or trained on a computing cluster with local networking throughput greater than 100 Gbit/s. Backing out the math, this implies training on tens of trillions of tokens. GPT-4 was reportedly trained on 1.7 trillion tokens, an order of magnitude less. In other words, the thresholds in the EO will likely only affect the very largest model-makers and the largest compute clusters in the near term – training runs for the largest models that cost many millions of dollars each. Please note, however, that the EO states that the government may change these criteria in the future, meaning the thresholds could go up or down depending on regulatory decisions.
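
To make the arithmetic concrete, here is a minimal sketch of how a team might sanity-check a planned training run against these thresholds. It uses the widely cited "~6 × parameters × training tokens" approximation for dense transformer training compute – a rule of thumb we are assuming here, not something defined in the EO – while the threshold constants come from the Order itself (including the lower biological-data threshold discussed below).

```python
# Rough sanity check against the EO's reporting thresholds.
# Assumes the common "~6 * parameters * training tokens" estimate of total
# training compute for dense transformer models -- a rule of thumb, not part of the EO.

EO_GENERAL_THRESHOLD = 1e26  # integer or floating-point operations (general models)
EO_BIO_THRESHOLD = 1e23      # lower threshold for models trained on biological sequence data


def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Very rough estimate of total operations for one training run."""
    return 6 * parameters * tokens


def must_report(parameters: float, tokens: float, biological_data: bool = False) -> bool:
    threshold = EO_BIO_THRESHOLD if biological_data else EO_GENERAL_THRESHOLD
    return estimated_training_ops(parameters, tokens) >= threshold


# Hypothetical 70B-parameter model trained on 2 trillion tokens:
# ~6 * 7e10 * 2e12 = 8.4e23 operations -- roughly two orders of magnitude under 1e26.
print(must_report(parameters=7e10, tokens=2e12))                        # False
print(must_report(parameters=7e10, tokens=2e12, biological_data=True))  # True
```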

Q: The Executive Order talks about reporting standards for “dual-use” AI models. How are those defined? 

A: In short, they are not well-defined in the Order. The intention appears to be to capture AI models that can be used for malicious purposes. In practice, however, nearly any AI can be trained and used for malicious purposes, so this one is rather vague. That said, we've provided some basic guidance below. 

Q: The Executive Order does not say anything about open source software. How will it apply to open source AI models?

A: There is no specific guidance on open source AI as yet. That said, the general guidance in the Executive Order will likely apply equally to open source AI and to any models developed and released as open source.

Q: The Executive Order talks specifically about foundational models that are very large. Are large open source models like Llama 2, Falcon 70B or Mistral included? 

A: The Order specifically notes that its provisions and the resulting regulations will apply to models that have not yet been released, so existing models like these will likely face a lower level of direct scrutiny. If you are building on or modifying these models, however, there is a significant chance your team will have to comply with at least a subset of the regulatory oversight and reporting requirements.

Q: There is a lot about AI safety in the Executive Order. Is there any actionable guidance you can give me?

A: Safety is a pretty subjective term, but here’s a quick take. The Order emphasizes the need for AI systems to be safe and secure. It requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies and mechanisms to test, understand, and mitigate risks from these systems before they are put to use. So if your organization is training or tuning AI systems for use in public-facing products or production applications, consider putting in place a set of structured and auditable security processes specifically focused on your AI assets.
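
As one illustration of what "structured and auditable" could look like in practice, the sketch below appends each safety evaluation or red-team exercise for an AI asset to a simple audit log. The record fields, file name, and example values are our own assumptions for illustration, not anything the EO prescribes.

```python
# Minimal sketch of an append-only audit log for AI safety/security evaluations.
# The record schema and file name are illustrative assumptions, not an EO requirement.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class EvaluationRecord:
    model_name: str       # e.g. an internally tuned model identifier (hypothetical)
    model_version: str
    evaluation_type: str  # "red-team", "bias-audit", "robustness", ...
    outcome: str          # "pass", "fail", "needs-review"
    notes: str


def log_evaluation(record: EvaluationRecord, path: str = "ai-eval-audit.jsonl") -> None:
    """Append a timestamped record so every evaluation is traceable after the fact."""
    entry = asdict(record)
    entry["recorded_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_evaluation(EvaluationRecord(
    model_name="support-chat-model",   # hypothetical internal model
    model_version="2023-10-15",
    evaluation_type="red-team",
    outcome="needs-review",
    notes="Prompt injection findings tracked in ticket SEC-1234 (hypothetical)",
))
```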

Q: There is a section in the Executive Order about civil liberties, privacy, and algorithmic bias. How could that affect my organization?

A: The Order requires that Americans’ privacy and civil liberties be protected as AI continues to advance. It does not lay out specific guidance, but in reality this is an area that other laws – at the state level (California, for example) or internationally (in the European Union) – have already started to address. The bottom line is that AI applications and models must be treated with the same safeguards, transparency, and provenance as legacy applications that do not use AI. One area where this can be particularly challenging is training AI systems on data sets that contain hidden biases, or that contain supposedly anonymized personally identifiable information (PII) which can be extracted with the right prompts. Either scenario could result in violations of the law or, potentially, lawsuits.

Q: How does the Order impact AI-generated code?

A: The Executive Order does not contain any specific provisions for, or mentions of, AI-generated code. Still, you should expect that AI-generated code will have to comply with broader regulations around application and code security, and more specifically supply chain security. We believe that every company building software and using AI coding tools should consider those tools part of its software supply chain. This is particularly important for open source software, which might include many transitive, nested, conditional, or direct dependencies on other open source projects. In other words, you will likely see requirements similar to those developing around the software bill of materials (SBOM).

Q: Sounds like we might need an AIBOM!

A: Indeed, creating something like an AIBOM that mirrors your SBOM is a good idea. At a minimum, documenting all of your AI uses, processes, and training data – and even conducting a mock audit and compliance check against existing software and technology regulations – is a good proxy for an AIBOM and will prepare your organization for any regulations implemented in the near to mid-term.
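
There is no standard AIBOM format yet, so the sketch below simply mirrors the spirit of an SBOM: a machine-readable inventory of the models, datasets, and AI tooling an application depends on. Every field name and value here is an illustrative assumption rather than an established schema.

```python
# Illustrative "AIBOM": an SBOM-style inventory of the AI components behind an application.
# The schema is an assumption -- no standard AIBOM format exists today.
import json

aibom = {
    "application": "customer-support-portal",   # hypothetical application
    "generated_at": "2023-11-02",
    "models": [
        {
            "name": "llama-2-13b-chat",          # example open source base model
            "license": "Llama 2 Community License",
            "fine_tuned": True,
            "training_data": ["internal-support-tickets-2023"],  # hypothetical dataset
        }
    ],
    "datasets": [
        {"name": "internal-support-tickets-2023", "contains_pii": False, "provenance": "internal"}
    ],
    "ai_tools": [
        {"name": "ai-code-assistant", "use": "code generation", "output_scanned": True}
    ],
}

# Write the inventory so it can be versioned and audited alongside your SBOM.
with open("aibom.json", "w", encoding="utf-8") as f:
    json.dump(aibom, f, indent=2)
```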

Q: I work with biological data. How do the bioweapons provisions impact my organization?

A: There is a lower compute threshold for triggering the reporting requirements for these training runs: models trained using computing power greater than 10^23 integer or floating-point operations must report their training runs. However, this is still a comparatively large amount of compute, and broadly larger than most models used for biological data today.

Q: I am working on very large models but not designing those models. How would this EO impact me?

A: If you are doing training runs with very large models that approach the stated compute thresholds – even if you are doing so on pre-existing models – you might still need to report those runs and the results of any safety evaluations. Even if you are not doing training runs at or near the threshold, you might still need to register your application, though you will only have certainty once the upcoming regulatory design process plays out.

Q: There is a provision in the EO about cybersecurity and AI tools. Am I required to do anything because of that?

A: No. That is mostly a provision around fostering new tools and new research for cybersecurity – both tools that use AI and tools for monitoring and safeguarding AI models, AIOps, and AI-assisted application development and deployment. Of course, if you are building any new cybersecurity capabilities with AI, or to protect against AI risks, DARPA would probably love for you to enter its contest!

Q: I am already using a lot of open source AI applications and models. What should I do to ensure they are safe?

A: Because open source AI is now part of your supply chain, you need to treat it as such and expect that adversaries will try to contaminate the open source AI supply chain in the same way they have tried to contaminate the open source software supply chain. (In reality, the two supply chains have considerable overlap.) Special care should be taken to identify who controls open source AI components and code, and to scan that code frequently for vulnerabilities. You should also monitor those components for changes of control, unexplained changes to code or data, or other indicators that risk may have been introduced.
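
One practical way to catch unexplained changes to code or data is to pin every external AI artifact you pull in – model weights, adapters, datasets – to a known-good checksum and verify it on every build. The lockfile name and format below are a sketch of that idea, not an established tool.

```python
# Sketch: verify downloaded AI artifacts (weights, datasets) against pinned SHA-256
# checksums, so an unexpected upstream change fails the build rather than shipping.
# The lockfile name and format are illustrative assumptions.
import hashlib
import json
import sys


def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(lockfile: str = "ai-artifacts.lock.json") -> bool:
    """Return True only if every pinned artifact matches its recorded checksum."""
    with open(lockfile, encoding="utf-8") as f:
        pinned = json.load(f)  # e.g. {"models/base-model.safetensors": "<sha256>", ...}
    ok = True
    for path, expected in pinned.items():
        actual = sha256_of(path)
        if actual != expected:
            print(f"MISMATCH: {path} changed (expected {expected[:12]}..., got {actual[:12]}...)")
            ok = False
    return ok


if __name__ == "__main__":
    sys.exit(0 if verify_artifacts() else 1)
```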

Q: The EO seems to be worded very generally and ambiguously, which makes it difficult for me to know whether future regulations coming out of this guidance will impact me or the business. What is the point of such unclear guidance?

A: When new areas come under regulatory scrutiny, it is common for regulatory bodies to word guidance and regulations very broadly, because they cannot yet predict how things will play out. Ambiguous wording gives regulators the flexibility to capture as many scenarios as possible. We think the same is happening with this EO, and that more specificity will come after the initial wave of regulations, as more use cases and scenarios take shape. This is why organizations should err on the side of caution and adopt more restrictive policies and procedures where AI systems are concerned, to avoid being caught out by evolving regulations.

Q: So what are the takeaways for cybersecurity, application security, and compliance and audit teams?

A: The Executive Order defines the term “AI system” as “…any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI." It goes on to state: "Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies." In other words, you should expect to secure your AI systems, your AI stack, and everything around them – including AI-generated code – to the same degree as anything else in your codebase, infrastructure, or attack surface. What does this mean for security teams?

First, consider this the starting gun for what will likely be a race to AI regulation. That means putting guardrails and safety measures in place to protect all of your AI infrastructure – your data, your operational pipelines, and the application code you use to build your AI apps – and extending all existing cybersecurity measures, where applicable, to AIOps. For example, even if your AI stack is rapidly changing and evolving, putting its code through scanning tools and automating security practices is now moving from a nice-to-have to a need-to-have.

Second, assume that your AI stack and processes will be audited or that audits of these will be required for legal compliance. This means you will need to create a compliance process and plan for AI, including documenting all cybersecurity measures that relate to AI applications.

Lastly, for AI-generated code, make sure that all AI code suggestions run through the same level of scrutiny and auditing as any other code. There is strong evidence that organizations using AI code suggestions increase their productivity and the velocity of their code shipments. This implies that running automated security checks becomes even more important as a means to stay on top of an even more rapidly changing codebase. 
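
As a sketch of what that same level of scrutiny can look like in an automated pipeline, the snippet below fails a build when a scanner's findings report contains high-severity issues, regardless of whether the code was written by a human or suggested by an AI tool. The JSON report shape is assumed for illustration; real scanners each have their own output format.

```python
# Sketch of a CI gate: parse a scanner's JSON findings report and fail the build
# on high-severity issues. The report structure is an assumed, generic format --
# adapt the parsing to whatever scanner your pipeline actually runs.
import json
import sys


def count_high_severity(report_path: str) -> int:
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    # Assumed shape: {"findings": [{"id": "...", "severity": "high", ...}, ...]}
    return sum(
        1
        for finding in report.get("findings", [])
        if finding.get("severity", "").lower() in ("high", "critical")
    )


if __name__ == "__main__":
    report_path = sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"
    high = count_high_severity(report_path)
    if high:
        print(f"{high} high/critical finding(s) -- failing the build.")
        sys.exit(1)
    print("No high/critical findings.")
```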

Key takeaway? Keep applying best practices.

Snyk is a leader in security for AI-generated code. We are 50x faster than other solutions and run checks silently in the IDE, in real time, with full application context. Best of all, we plug into whatever generative AI coding tool you need for your business, today or tomorrow. Secure your AI stack from the ground up, starting with your AI-generated code, to stay safe and compliant.
