Exploring the Perception of Consciousness in Conversational AI: Examining LaMDA’s Complexity


How can LaMDA give responses that a human might perceive as introspection or conscious thought? The answer lies, somewhat ironically, in the data LaMDA was trained on and in the learned associations between likely human questions and machine responses. Probabilities are the key: the model assigns a probability to every candidate continuation and emits the most plausible ones. The harder question is how these probabilities come to be arranged such that an intelligent human interrogator is confused about what the device is actually doing.
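As a rough illustration (not LaMDA's actual implementation), the sketch below shows the core mechanism: candidate next words are scored, the scores are converted into a probability distribution with a softmax, and the reply is sampled from that distribution. The prompt, candidate words, and scores here are entirely hypothetical.

```python
import numpy as np

# Minimal sketch: a language model scores every candidate next token, and a
# reply is assembled by repeatedly sampling from those probabilities.
# "Introspective"-sounding answers are simply high-probability continuations
# of introspective-sounding prompts.

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical scores a model might assign to candidate next words
# after the prompt "Do you ever feel ..."
candidates = ["lonely", "happy", "nothing", "electricity"]
logits = np.array([2.4, 1.9, 0.7, -1.0])

probs = softmax(logits)
rng = np.random.default_rng(0)
next_word = rng.choice(candidates, p=probs)

for word, p in zip(candidates, probs):
    print(f"{word:12s} p = {p:.2f}")
print("sampled:", next_word)
```

Nothing in this loop is introspecting; the appearance of inner life comes entirely from which continuations the training data made probable.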

It is for this reason that we need to improve "explainability". Artificial neural networks are the foundation of many useful AI systems and can perform computations far beyond the abilities of any person. In many cases they incorporate learning functions that adapt the network to tasks outside the original application for which it was designed. Yet the reasons for a neural network's output are often unclear or even indiscernible, which invites criticism of machines that depend on such intrinsic logic. In addition, the size and scope of the training data can introduce bias into complex AI systems, leading to unexpected, incorrect, or confused outputs when confronted with real-world data. This is the "black box" problem: a user, or even an AI developer, cannot understand why the system behaves the way it does.
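To make the "black box" concrete, here is a minimal sketch using a toy, randomly weighted network (not any production system): the output is a single number, the hidden activations that produced it are not human-readable, and an input-gradient attribution, one common explainability technique, yields only a crude per-feature sensitivity rather than a real explanation.

```python
import numpy as np

# Toy two-layer network with random weights: nothing in the intermediate
# values explains *why* the network produced its output.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)           # hidden activations: not human-readable
    return (W2 @ h + b2)[0], h

x = np.array([0.5, -1.2, 0.3, 0.9])    # some input features
y, h = forward(x)

# Input-gradient attribution: d(output)/d(input_i) via the chain rule,
# a crude probe of which inputs the output is sensitive to.
grad = (W2 * (1 - h**2)) @ W1          # shape (1, 4): sensitivity per feature
print("output:", round(float(y), 3))
print("per-feature sensitivity:", np.round(grad.ravel(), 3))
```

Even in this four-input toy, the individual weights carry no human-interpretable meaning; scaling to billions of learned parameters only deepens the opacity.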

Tay’s racism is, at root, no different from LaMDA’s apparent consciousness: both emerged from the data the systems were trained on. Even an expert user may be unsure of the reasons behind a machine’s response without a thorough understanding of AI systems and how they are trained. Unless we embed the need for explainable AI behavior into the design, testing, and deployment of the systems we will rely on tomorrow, we will be misled by our machines, just like the blinded interrogator in Turing’s imitation game.

Source:

I, Chatbot: The perception of consciousness in conversational AI