
A New Trick Uses AI to Jailbreak AI Models, Including GPT-4

Adversarial algorithms can systematically probe large language models like OpenAI's GPT-4 for weaknesses that can make them misbehave.