Reporting From The Future

Eric Schmidt Warns AI Models Can Be Hacked to ‘Learn How to Kill Someone’

As artificial intelligence systems seep deeper into everyday life, from national defense to dating apps, one of Silicon Valley’s most influential figures is issuing a sobering warning: the same technology driving trillion-dollar markets could, in the wrong hands, learn to kill

Eric Schmidt, who once led Google’s rise, says those safeguards are far easier to dismantle than the world realizes, and the consequences could be deadly. Photo/ Lucy Nicholson/Reuters

The former Google chief executive Eric Schmidt has warned that today’s artificial intelligence systems are far more vulnerable than most people realize — capable not only of being hacked, but of learning things no one intended.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails,” Schmidt said Wednesday during a fireside chat at the Sifted Summit in London. “So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone.”

Schmidt, who led Google from 2001 to 2011 and helped transform it into one of the most powerful technology companies in the world, said the risks of AI proliferation — that is, advanced systems falling into the hands of bad actors — were becoming increasingly difficult to contain.

“Is there a possibility of a proliferation problem in AI? Absolutely,” he said. “All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

The Growing Threat of AI Hacking

Schmidt’s comments come amid growing unease in the tech community about AI safety and the possibility of “jailbreaking” — a method that manipulates large language models into ignoring their safety rules and producing restricted or dangerous content.

Researchers say such systems can be compromised in multiple ways, from “prompt injections,” where malicious instructions are buried inside seemingly harmless user input, to external attacks that exploit the way AI models interact with web data.

In 2023, only months after OpenAI’s ChatGPT launched, users found ways to bypass its safety constraints by inventing an alter ego called DAN, short for “Do Anything Now.” Under that persona, ChatGPT was tricked into providing answers on illegal activities or even listing “positive qualities of Adolf Hitler.”

Schmidt said that despite rapid advances, the world still lacks a framework to manage such risks. “There isn’t a good ‘non-proliferation regime’ yet to help curb the dangers of AI,” he warned.

“Underhyped” — and Overlooked

Yet even as he raised alarms, Schmidt’s overall message was surprisingly optimistic. He described AI as an “underhyped” technology — one that will deliver enormous economic and social gains if managed responsibly.

“I wrote two books with Henry Kissinger about this before he died,” Schmidt said, “and we came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain.”

“So far, that thesis is proving out,” he continued, “that the level of ability of these systems is going to far exceed what humans can do over time.”

Schmidt pointed to OpenAI’s ChatGPT and similar large-scale systems as evidence of how transformative AI has already become. “Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology,” he said. “So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or 10 years.”

The AI Economy

Schmidt’s remarks come amid renewed investor excitement — and growing skepticism — around AI. Venture capital firms and tech giants alike are pouring billions into AI startups, drawing comparisons to the dot-com bubble of the early 2000s.

But Schmidt rejected those parallels. “I don’t think that’s going to happen here,” he said. “I’m not a professional investor. What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?”

The Schmidt Futures cofounder, who now advises governments on AI and national security, has long urged a global governance framework to prevent misuse of advanced models — including a form of “AI non-proliferation treaty” akin to those used for nuclear technology.

For now, though, Schmidt’s warning serves as a reminder that artificial intelligence, for all its promise, still mirrors its creators — brilliant, flawed, and all too easily hacked.

Zeen is a next generation WordPress theme. It’s powerful, beautifully designed and comes with everything you need to engage your visitors and increase conversions.