AI is here; it’s changing how we work and (hopefully) making us more efficient! But while AI is changing the way we work, we haven’t changed the way it works with us. When we need a reliable way to call code that’s easy to integrate into our existing applications, with clear structure and documentation, we turn to APIs. AI security starts at the API, whether that’s a third-party API like OpenAI’s, an open-source AI deployment tool that exposes one, like Hugging Face’s Text Generation Inference, or your own GenAI stack behind an API wrapper that makes it easy to use! Whether you’re using AI tools safely, enabling your engineers to build robust AI applications, or trying to understand the evolving GenAI attack surface, this webinar will break down all of that and more as we explore GenAI API security.

Join us to learn:

  • Where the new generative AI attack surface is
  • Some of the key security risks of generative AI
  • The role APIs play in generative AI security and how to secure generative AI at the API level
  • And finally, how you can leverage your existing skill sets, tools, and processes to tackle the security risks of generative AI

Whether you’re just starting out in generative AI security or battling ‘shadow’ GenAI API usage in your organization, this webinar will cover everything you need to know.


Dr. Katie Paxton-Fear
Ethical Hacker & Technical Marketing Manager