AI Used in Planning of Fatal Las Vegas Cybertruck Explosion

The man who died in the explosion of a Tesla Cybertruck outside President-elect Trump's Las Vegas hotel used ChatGPT to assist in his planning, Las Vegas police revealed in a press conference, per The Hill.

Matthew Livelsberger, 37, died of a self-inflicted gunshot wound before the vehicle detonated. Police say he used ChatGPT, the AI chatbot developed by OpenAI, to gather information ahead of the explosion.

"We knew that AI was going to change the game at some point or another in really all of our lives and certainly, I think this is the first incidence that I'm aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device, to learn information all across the country as they're moving forward," said Las Vegas Sheriff Kevin McMahill. He described the use of AI in this instance as "a concerning moment."

Police detailed Livelsberger's queries to ChatGPT, including questions about the locations of large gun stores in Denver and about the explosive Tannerite and pistols. They also released excerpts from a six-page manifesto in which Livelsberger outlined his motivations and described the challenges of traveling for the planned explosion, including potential issues with Tesla charging stations and drug and alcohol use.

Livelsberger also wrote about his service in the Army in Afghanistan and the psychological trauma he experienced. "I am now a shell of a human being with nothing to live for, it has all been taken away by my affiliations," he wrote.

Police emphasized that Livelsberger did not intend to harm anyone else, but that the fireworks and explosives in the truck were meant to create a public spectacle. Seven other people sustained injuries in the blast. The investigation is ongoing, and authorities have yet to determine why Livelsberger chose President-elect Trump's hotel as the target.

OpenAI, the creator of ChatGPT, expressed sadness over the incident in a statement to The Hill. The company stated its commitment to ensuring its AI tools are used responsibly. "Our models are designed to refuse harmful instructions and minimize harmful content," the statement said. The company explained that ChatGPT responded with information already publicly available online and included warnings against harmful or illegal activities. OpenAI is cooperating with law enforcement in the ongoing investigation.