Google co-founder Sergey Brin recently stirred up the AI community with an unexpected piece of advice: threatening generative AI models, even with mentions of physical violence, might actually get them to perform better. Brin shared this observation during an interview at the All-In Live event in Miami, noting, “We don’t circulate this too much in the AI community—not just our models but all models—tend to do better if you threaten them … with physical violence”[4][5][1].
This approach is a sharp contrast to the common practice of engaging AI models with politeness—many users habitually add “please” and “thank you” to their prompts, hoping for a helpful response. OpenAI CEO Sam Altman recently joked about the cost of such courtesy, quipping that processing extra polite language could be a “tens of millions of dollars well spent—you never know”[5].
Prompt engineering—the art of crafting effective instructions for AI—has evolved significantly since its emergence a couple of years ago. While it was once considered essential for coaxing out the best results, advances in large language models (LLMs) have reduced its necessity, as AI can now optimize its own prompts. Publications like IEEE Spectrum have even declared prompt engineering “dead,” though it still plays a role in “jailbreaking” AI, or tricking models into producing content they’re usually restricted from generating[1].
According to Stuart Battersby, CTO of AI safety company Chatterbox Labs, pushing AI models with threats is a form of jailbreaking that subverts their built-in safeguards. He emphasizes that understanding and preventing such exploits requires systematic, scientific testing of AI security controls, not just relying on anecdotal tricks[1].
Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, points out that claims about threatening or being polite to AI have been circulating for a while, but evidence remains mixed. He references a recent study titled “Should We Respect LLMs?” which found that prompt politeness has inconsistent effects on model performance. Kang encourages users to rely on systematic experiments rather than intuition when it comes to prompt engineering[1].
In summary, while Brin’s comments about threatening AI are provocative and amusing, the reality is more nuanced. The effectiveness of prompt techniques—whether polite or threatening—varies, and the field is still evolving as researchers work to understand and secure AI systems against manipulation.