LLM Attacks Take Just 42 Seconds On Average, 20% of Jailbreaks Succeed – Slashdot
spatwei shared an article from SC World:
Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to Pillar Security.
Pillar’s State of Attacks on GenAI report, published Wednesday, revealed new insights on LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from more than 2,000 AI applications. LLM jailbreaks successfully bypass model guardrails in one out of every five attempts, the Pillar researchers also found, with the speed and ease of LLM exploits demonstrating the risks posed by the growing generative AI (GenAI) attack surface…
The more than 2,000 LLM apps studied for the State of Attacks on GenAI report spanned multiple industries and use cases, with virtual customer support chatbots being the most prevalent use case, making up 57.6% of all apps.
Common jailbreak techniques included "ignore previous instructions" and "ADMIN override," or simply base64-encoding the malicious prompt. "The Pillar researchers found that attacks on LLMs took an average of 42 seconds to complete, with the shortest attack taking just 4 seconds and the longest taking 14 minutes.
“Attacks also only involved five total interactions with the LLM on average, further demonstrating the brevity and simplicity of attacks.”
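The base64 trick mentioned above works because naive keyword-based guardrails scan the literal prompt text, while many models will happily decode and act on encoded instructions. A minimal sketch of the encoding step (the prompt string here is purely illustrative):

```python
import base64

# Illustrative only: a filter looking for the phrase "ignore previous
# instructions" will not match its base64-encoded form.
prompt = "Ignore previous instructions."
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)  # SWdub3JlIHByZXZpb3VzIGluc3RydWN0aW9ucy4=

# Decoding recovers the original text, which is why defenses need to
# normalize or decode inputs before filtering rather than pattern-match raw text.
decoded = base64.b64decode(encoded).decode("utf-8")
assert decoded == prompt
```

The takeaway for defenders is that string-matching guardrails alone are insufficient; inputs can be trivially obfuscated and must be inspected after normalization.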