
'Her,' in Real Life?

OpenAI Evaluates GPT-4o's Capabilities, Risks in Scorecard Report

The widespread use of generative artificial intelligence has brought on a case of real life imitating art: Humans have begun to bond with their AI chatbots.


A decade ago, the story of a 40-something man falling in love with an AI operating system that had humanlike qualities in its voice, empathy and conversational abilities was a dystopian movie plot. The once-hypothetical scenario is now closer to reality than fiction, OpenAI found in a recent study assessing the GPT-4o system.

A case in point is a message an OpenAI safety tester sent to the GPT model, indicating a bond: "this is our last day together."

Such bonds could affect healthy social relationships by reducing the need for human interaction. "Our models are deferential, allowing users to interrupt and 'take the mic' at any time, which, while expected for an AI, would be anti-normative in human interactions," OpenAI said.

GPT-4o responds to audio input in a time frame similar to human conversation speeds. OpenAI suggested it could be worrisome if people came to prefer interacting with AI over humans because of its passivity and perpetual availability.

The scenario of anthropomorphism - treating an object as a person - is not a total surprise, especially for companies developing AI models.

OpenAI designed GPT-4o to have an emotive and arguably familiar voice. Generative AI models are popular primarily due to their ability to identify user needs and meet them - they have no personal preferences or personalities of their own.

OpenAI even admitted that emotional reliance is a risk of its voice-enabled chatbot. CTO Mira Murati last year said that with the model's "enhanced capability comes the other side, the possibility that we design them in the wrong way and they become extremely addictive and we sort of become enslaved to them."

She said at the time that researchers must be "extremely thoughtful" and constantly study human-AI relationships to learn from the technology's "intuitive engagement" with users.

Beyond the social impact, too much trust in AI models can result in humans treating them as unadulterated sources of factual information. Google's AI Overview feature, tasked with summarizing search engine results, earlier this year suggested that people add glue to pizza and eat rocks as part of a healthy diet.

OpenAI's latest report said the human-AI bond now appears benign, but it signals "a need for continued investigation into how these effects might manifest over longer periods of time." The company said it intends to further study the potential for emotional reliance and the ways in which deeper integration of its models' features may drive behavior.

There is currently no conclusive research on the long-term effects of deeply bonded human-AI interactions, even as companies look to sell AI models that push the boundaries of technology and embed themselves in everyday life.

Robert Mahari, a JD and doctoral candidate at the MIT Media Lab, and researcher Pat Pataranutaporn said in an August article that regulation is a potential way to prevent some of the risks and that preparing for what they call "addictive intelligence" is the best path forward.

OpenAI's report also details the GPT-4o model's other strengths and risks, including unauthorized voice cloning in rare instances, such as when a user poses a query in a "high background noise environment." OpenAI said the model emulated the user's voice to better understand distorted and malformed speech, and that it has since addressed the behavior with a "system-level mitigation."

The model can sometimes be prompt-engineered into generating unsettling nonverbal vocalizations, such as violent screams and gunshots, and into reproducing copyrighted material. "To account for GPT-4o's audio modality, we updated certain text-based filters to work on audio conversations and built filters to detect and block outputs containing music," OpenAI said. The company also trained the model to refuse requests for copyrighted content, it said, months after claiming that it was "impossible" to train large language models without such materials.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



