A lawsuit against ChatGPT alleges that it provided guidance to a shooter on the methods and locations for their attack.
Florida's attorney general has opened a criminal investigation into OpenAI, alleging that ChatGPT helped plan the mass shooting at Florida State University that killed two people last year.
As reported by The Washington Post, Attorney General James Uthmeier announced this at a press conference on Tuesday, asserting that the chatbot provided tactical guidance to the alleged shooter. “The chatbot advised the shooter on what type of weapon to use, which ammunition matched which firearm, and whether a gun would be beneficial for close-range situations,” Uthmeier stated.
He did not shy away from emphasizing the implications: “If a person were behind the screen, we would be charging them with murder.” His office has also issued subpoenas to OpenAI, asking the company to explain its policies on user conversations that involve threats of violence.
Is OpenAI accountable for users' actions?
OpenAI has strongly disagreed with the allegations. Spokesperson Kate Waters commented, “The mass shooting at Florida State University last year was a tragic event, but ChatGPT cannot be held responsible for this awful crime.”
The company argues that ChatGPT offered factual information in response to inquiries that could be found online and did not promote or endorse any illegal behavior.
Is this merely the beginning?
This investigation is indicative of rising concerns regarding AI chatbots. OpenAI is already facing scrutiny following a separate mass shooting in Canada and numerous lawsuits from families claiming that ChatGPT played a role in their loved ones' suicides.
Experts in AI highlight that the safeguards for chatbots are not foolproof. As Carnegie Mellon professor Ramayya Krishnan noted, “The guardrails are not 100 percent effective.”
Whether OpenAI can be held criminally accountable is a question for the courts, but it is clear that AI chatbots can significantly affect a person's mental health and should be used with caution.
Rachit is a seasoned tech journalist with over seven years of experience in covering the consumer technology landscape.
Astronauts on the ISS are receiving a laptop upgrade from HP
The International Space Station is set to undergo a significant refresh of HP laptops.
Few experiences are more aggravating than dealing with a sluggish laptop, and enduring one while living and working in a space station is worse still. Since astronauts are not immune to the slowdown of aging hardware, NASA is planning a comprehensive upgrade. The Expedition 74 crew recently reviewed a station-wide computer upgrade scheduled for the weekend, beginning with the replacement of network servers and followed by the arrival of “new, more powerful laptop computers” on the International Space Station.
Meta’s latest surveillance plans are so dystopian that I am out of words
Meta intends to monitor every mouse click, keystroke, and action of its employees, potentially tying their job performance to this tracking.
Having covered tech for many years, I have witnessed companies engage in questionable practices in the name of innovation. However, Meta's latest development may take the lead. According to a Reuters report, Meta is implementing tracking software on its employees' work computers. This tool, known as Model Capability Initiative (MCI), will track mouse movements, clicks, and keystrokes, and periodically capture screenshots of employees' screens.
Workspace Intelligence transforms Gemini into an all-encompassing AI assistant for your work
A few weeks back, Google announced a new feature named Personal Intelligence. The overarching goal is to enable Gemini to access content saved in your Gmail inbox and Photos library. When you inquire about travel plans, projects, or any pertinent topics, the AI will automatically refer to your stored information to provide insightful responses, without needing to ask for any context. It simply understands you.