Tuesday, August 26, 2025

The Hidden Risks of AI: Hallucinations, Indirect Prompt Injection, and Jailbreaks

I recently read an excellent article on AI vulnerabilities, which organizes them into the three most common categories of AI security issues:

  1. Hallucinations
  2. Indirect Prompt Injection
  3. Jailbreaks