fbpx

To Secure Generative AI Applications, Start with Your APIs

Product security and information security teams are facing a new challenge: securely integrating generative AI into their applications. API security can help.

The conversation around securing generative AI came hot on the heels of the ChatGPT beta launch sixteen months ago. Almost overnight, organizations were assembling tiger teams to define a generative AI strategy for their business. Security leaders were tasked with figuring out how to adopt AI safely and securely while safeguarding company and customer data.

Red teams quickly went to work jailbreaking large language models (LLMs) and LLM-powered applications, highlighting the unique vulnerabilities of this new technology. Frameworks including the OWASP LLM Top 10 and MITRE ATLAS Matrix documented the tactics and techniques used to attack LLMs. We saw LLM security incidents including prompt injection at Bing and overreliance at Air Canada. A new wave of security startups emerged to address the new LLM attack surface. The security industry did its thing and made “AI Security” a buzzword.

Months later, product leaders are pushing to release LLM-powered capabilities in their applications and many security leaders are still wondering where to begin.

For most organizations, the answer lies in a bit of application architecture you are already familiar with: your APIs. Whether your organization is using third-party LLMs like OpenAI’s GPT-4 or Anthropic’s Claude, or hosting a private LLM on your own infrastructure, APIs are responsible for connecting the LLM to other services in your application. They are perfectly poised to provide visibility and control over LLM inputs and outputs, and address many of the security risks outlined in the OWASP LLM Top 10. 

Breaking Down AI Security

To understand where to begin when securely leveraging LLMs in your organization’s applications and products, it’s important to break down what we mean by “AI security.” Two distinct problems frequently get lumped under this term:

  1. Secure AI model development
  2. Security for LLM-powered applications

Secure AI model development is only relevant for organizations developing their own AI models. Secure AI model development entails securing the model development supply chain including training data, open source models, and other OSS libraries involved in model training. It also involves securing the MLOps tooling and processes, and finally testing the models for robustness against threats.

If you, like many organizations, are leveraging LLMs as-a-service like OpenAI’s GPT-4 or Anthropic’s Claude, secure AI model development is the responsibility of those providers. 

Security for LLM-powered applications, in contrast, pertains to any organization that is using generative AI in a product or application. This subdomain of AI security is concerned with ensuring secure and safe interactions between LLMs and other services, either internal or external to your application.

Whether you are using a third-party LLM as-a-service or self-hosting your own LLM, every interaction your application has with the LLM is powered by APIs. That attack surface needs effective protection. A new category of “AI firewall” tools has emerged to monitor and secure the traffic into and out of LLMs to identify threats and abuses. API security platforms, which already monitor API traffic, are naturally positioned to provide these capabilities.

Security Risks for LLM-powered Applications

LLM-powered applications face a new set of security challenges based on the unique characteristics and vulnerabilities of LLMs. Their non-deterministic nature makes LLMs a wild-card in your application, and requires a trust boundary between LLMs and other services, just like the trust boundaries that exist for applications in cloud and data centers. This is especially true as LLMs move beyond simple chatbots and into more complex applications where LLM outputs are used to automate actions without a human in the loop.

The OWASP LLM Top 10 describes the unique risks that LLMs pose in the application context. Three of these risks– prompt injection, sensitive data disclosure, and insecure output handling– can be mitigated at the API layer by monitoring and controlling LLM inputs and outputs:

  • Prompt injection is an attempt to manipulate an LLM with inputs that cause it to go “off script.” The goal of prompt injection is often to trick the LLM into revealing private information or to take an action that subverts its instruction prompt. 
  • Sensitive data disclosure occurs when LLMs reveal sensitive information in their outputs, either inadvertently or as the result of a prompt injection attack. 
  • Insecure output handling occurs “when an LLM output is accepted without scrutiny.” For example, an LLM outputs a malicious executable script and is allowed to hand it off to another system to execute.

In most organizations LLMs are accessed through an API wrapper. These LLM APIs form the connective tissue between LLMs and other application services or user interfaces. All LLM inputs and outputs are passed via these LLM APIs. This makes APIs the natural place for LLM security controls within your application environment. The reference architecture Microsoft has provided for effectively planning to secure LLM-based applications is a good resource to understand the different aspects at play here.

LLM Application Security is an Extension of API Security

Securing LLM-powered applications is a natural extension of API security. A well-rounded API security program involves comprehensive discovery and posture management for all APIs in your applications, API vulnerability testing, and API threat detection and protection. This provides a strong foundation for an LLM application security program, which should include the following capabilities:

Discovery of Shadow AI in your applications

As it becomes easier for product teams to experiment with LLMs with a simple subscription to third-party models, “Shadow AI” will pop up in more and more places in your applications. By identifying LLMs APIs in your applications, product security can discover Shadow AI and better manage LLM security posture.

Sensitive Data Discovery and Detection

When adopting LLMs into your applications, it is important to monitor how data is flowing between LLMs and other services in your application, and identify and block sensitive data in LLM inputs or outputs. Common sensitive data types are proprietary company data, privacy related data, competitive data, toxic or profane data, and bias. Understanding the normal and abnormal data flow patterns becomes the foundation for detecting data loss which has been one of the top concerns for enterprise adoption of LLM. 

LLM API Testing

It’s considered best practice to test APIs for security vulnerabilities such as the OWASP API Top 10. This best practice will transfer to LLM APIs, where we can test for LLM-specific vulnerabilities including those in the OWASP LLM Top 10. This testing is critical to maintaining a strong security posture against threats targeting LLMs. 

LLM Threat Detection and Protection

Security threats emanating from prompt injection, sensitive data disclosure, and insecure outputs can be detected by monitoring API requests and responses to and from LLMs. Security controls need to be inserted at the relevant points in the application stack which is where API security has an advantage of being able to inspect the relevant flows. It’s then possible to detect and block these attacks at runtime to prevent data leaks, remote code execution, and privilege escalation.

The Future: Securing Generative AI APIs with Traceable

At Traceable, we believe that API security will play an essential role in LLM application security as more organizations integrate LLMs into their applications. Traceable’s context-aware API security platform lays the foundation for LLM application security, providing discovery of LLM APIs, detection of sensitive data in requests and responses to LLM APIs, and comprehensive monitoring of LLM inputs and outputs. We are continuously evolving our testing and detection capabilities to stay ahead of today’s API threat landscape, which now includes protecting against the vulnerabilities and threats that are unique to LLM APIs. 

If you are interested in learning more about how Traceable can secure your LLM applications, we’d love to hear from you! Contact us to learn more.