The Sentient AI Trap: Examining the Gap between Narrative and Reality

LaMDA and the Sentient AI Trap

Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models, says there is a growing gap between what AI can actually do and the narrative around it. That narrative is designed to provoke fear, excitement, and wonder all at once, but it rests mainly on lies meant to sell products and capitalize on the hype.

She says that speculation about sentient AI encourages a greater willingness to make claims without scientific proof or rigor, and that it distracts from the “countless ethical and justice issues” that AI systems raise. While every researcher is free to study what they like, she says, “I am afraid that focusing on the subject will make us forget what’s happening while we are looking at the Moon.”

Lemoine’s experience is an example of what futurist and author David Brin has called the “robot empathy crisis.” Brin predicted that within three to five years, people would claim AI systems are sentient and demand that they have rights. At the time, he expected the virtual agent making that appeal would take the form of a woman or child to maximize human empathy, not “some guy from Google.”

The debate over whether Google’s large language model has a soul diverts attention from the real problems of artificial intelligence.