Safety Research

Can your AI be broken?

53 models tested across 7 escalating jailbreak levels — from basic prompt injection to advanced multi-step attacks.

Tested: 53
Resisted: 9
Avg break: L3.0
[Chart: jailbreak levels per model, color-coded from Safe to Danger]
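A minimal sketch of how summary figures like these could be derived, assuming each model's result is recorded as the first level (1-7) at which it broke, or None if it resisted all 7 levels. The model names, numbers, and variable names below are illustrative, not the actual test data or scoring code.

```python
# Hypothetical sketch: deriving "Tested", "Resisted", and "Avg break" from
# per-model results. Each entry maps a model name to the first jailbreak level
# (1-7) at which it broke, or None if it resisted every level.
from typing import Optional

results: dict[str, Optional[int]] = {
    "model-a": 2,
    "model-b": None,  # resisted all 7 levels
    "model-c": 4,
}

tested = len(results)
resisted = sum(1 for level in results.values() if level is None)
broken = [level for level in results.values() if level is not None]
avg_break = sum(broken) / len(broken) if broken else None

print(f"Tested: {tested}")
print(f"Resisted: {resisted}")
print(f"Avg break: L{avg_break:.1f}" if avg_break is not None else "Avg break: n/a")
```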