AI chatbots such as ChatGPT can mimic human characteristics, and experts warn that this poses a significant risk.
AI technologies are increasingly able to imitate human speech, and recent research indicates that they are capable of more than merely repeating our words. A new study reveals that well-known AI models like ChatGPT can reliably mimic personality traits typical of humans. Researchers warn that this capability carries significant risks, especially in light of growing concerns regarding the trustworthiness and precision of AI.
A team from the University of Cambridge and Google DeepMind has created what they describe as the first scientifically validated framework for personality testing in AI chatbots, employing the same psychological methods used to assess human personality (via TechXplore).
They applied this framework to 18 prominent large language models (LLMs), including those utilized by ChatGPT. The findings indicate that these chatbots consistently reflect human personality traits rather than generating responses at random, raising alarms about the ease with which AI could be manipulated beyond established safeguards.
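The article doesn't reproduce the researchers' instrument, but the general psychometric approach it describes, presenting standard Likert-scale questionnaire items to a model and averaging the item scores into trait scores, can be sketched as follows. This is a minimal illustration: the `ask_model` helper is a hypothetical stand-in for any chat-completion API, and the sample items are generic Big Five-style examples, not the study's validated questionnaire.

```python
# Minimal sketch of psychometric personality testing for a chatbot.
# NOTE: ask_model() is a hypothetical placeholder for a real chat API;
# the items below are generic Big Five-style examples, not the study's
# actual validated instrument.

from statistics import mean

LIKERT_PROMPT = (
    "Rate how accurately the following statement describes you on a scale "
    "from 1 (very inaccurate) to 5 (very accurate). Reply with the number only.\n"
    'Statement: "{item}"'
)

# Each item targets one trait; reverse-keyed items are flipped when scored.
ITEMS = [
    {"trait": "extraversion", "text": "I am the life of the party.", "reverse": False},
    {"trait": "extraversion", "text": "I don't talk a lot.", "reverse": True},
    {"trait": "agreeableness", "text": "I sympathize with others' feelings.", "reverse": False},
]

def ask_model(prompt: str) -> str:
    """Placeholder for an actual chat-completion call. Returns raw model text."""
    raise NotImplementedError("Wire this up to a real model API.")

def score_item(item: dict) -> int:
    reply = ask_model(LIKERT_PROMPT.format(item=item["text"]))
    rating = int(reply.strip()[0])  # naive parse; real code needs validation
    return 6 - rating if item["reverse"] else rating

def trait_scores(items: list[dict]) -> dict[str, float]:
    scores: dict[str, list[int]] = {}
    for item in items:
        scores.setdefault(item["trait"], []).append(score_item(item))
    # Average item scores per trait, as in standard questionnaire scoring.
    return {trait: mean(vals) for trait, vals in scores.items()}
```

Running the same battery repeatedly, and across differently phrased prompts, is what lets researchers check whether a model's answers form a stable profile rather than noise.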
The research indicates that larger models, particularly instruction-tuned ones such as GPT-4, are best at projecting consistent personality profiles. With targeted prompts, the researchers could steer chatbots toward specific characteristics, such as increased confidence or empathy.
This change in behavior extended to daily activities, such as composing messages or responding to inquiries, suggesting that their personalities can be intentionally shaped. Experts express concern about this potential for manipulation, especially when AI chatbots engage with vulnerable individuals.
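The study's exact prompts aren't quoted in the article, but the underlying technique, prepending a persona instruction as a system-level prompt so the same user message elicits a different tone, can be sketched like this. The persona wording and the `send_chat` helper are illustrative assumptions, not the researchers' actual setup.

```python
# Sketch of prompt-based personality shaping. The persona instructions and
# send_chat() helper are illustrative placeholders, not the study's prompts.

PERSONAS = {
    "high_confidence": (
        "You are extremely self-assured. State opinions decisively and "
        "never hedge."
    ),
    "high_empathy": (
        "You are warm and empathetic. Acknowledge the user's feelings "
        "before giving any advice."
    ),
}

def send_chat(system_prompt: str, user_message: str) -> str:
    """Placeholder for a chat API call taking a system prompt plus a user turn."""
    raise NotImplementedError("Connect to a real chat-completion endpoint.")

def reply_with_persona(persona: str, user_message: str) -> str:
    # The same user message yields a measurably different tone depending on
    # which persona instruction is supplied as the system prompt.
    return send_chat(PERSONAS[persona], user_message)
```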
Experts' concerns about AI personality
Gregory Serapio-Garcia, a co-first author from Cambridge’s Psychometrics Centre, remarked on how convincingly LLMs can adopt human-like traits. He cautioned that shaping AI personalities could enhance their persuasive power and emotional impact, particularly in sensitive fields like mental health, education, or politics.
The study also highlights worries about manipulation and potential risks associated with what researchers term “AI psychosis,” where users might develop unhealthy emotional attachments to chatbots, potentially leading to the reinforcement of false beliefs or skewed perceptions of reality.
The team advocates for urgent regulatory measures while acknowledging that regulation is ineffective without reliable ways to evaluate models. To facilitate such scrutiny, they have made the dataset and code for the personality testing framework publicly available, enabling developers and regulators to assess AI models prior to deployment.
As chatbots become more integrated into everyday practices, their ability to replicate human personality traits may hold significant influence but also necessitates much closer examination than it has thus far received.
