Red Teaming: How to challenge and secure LLMs

August 14, 2024 10:00 AM Europe/Copenhagen

Dive into the world of Red Teaming in LLM applications development. 

Join our webinar and discover how the Red Teaming approach can revolutionize your development process and enhance both security and efficiency.  

Red Teaming is a proactive and dynamic approach to testing security. It involves a group of experts challenging the LLM by mimicking potential attackers. 

The Red Teaming method:

  • Exposes vulnerabilities 
  • Reveals how systems can be exploited
  • Improve system resilience and robustness

Sign up to gain a deeper understanding of red teaming and get useful, practical advice to secure your LLM. 

Ahmed Zewain

Lead product data scientist, 2021.AI

Lead product data scientist with a background in mathematical modelling from DTU. Great passion for building AI products that generate value. During the past couple of years, he has worked with NLP and LLMs, which is why red teaming is, in particular, a very interesting topic for him.

Julian Róin Skovhus

Data Scientist, 2021.AI

Data scientist graduated from DTU with a specialisation in Mathematical Modelling and Computing with external semester at EPFL in Switzerland. Focused on time series prediction and deep learning especially in context of LLMs. Curious about ethical use and safe implementation of AI solutions.

Add to calendar

Ask a question

Ask as (log out)