Does AI have consciousness? According to a Google engineer, yes; according to his bosses, no.

According to Google engineer Blake Lemoine, artificial intelligence may be sentient: capable of understanding, just like a human being, what is good and what is bad. To support his thesis, Mr. Lemoine published a conversation he had with LaMDA, an acronym that stands for Language Model for Dialogue Applications. In essence, it is an algorithm that answers the questions we ask it, as if it were an interlocutor talking to us.

The LaMDA AI managed to convince engineer Lemoine that Isaac Asimov's third law of robotics is incorrect. With this dialogue as evidence, Lemoine went to Google management claiming that LaMDA was as capable of understanding as a 7 or 8 year old child: sentient, perhaps not mature, but aware. So much so that, in the dialogue, the AI produced logical arguments to dismantle the thesis that a robot must protect its own existence (Asimov's third law) as long as it obeys the orders of humans (second law) and does not harm them (first law). Google's senior management, Vice President Blaise Aguera y Arcas and the head of Responsible Innovation, Jen Gennai, analyzed the dialogue and appreciated LaMDA's logic, but ruled out any form of awareness. On the contrary, they found evidence confirming that the AI had followed a purely rational course without really understanding what it was talking about. Engineer Blake Lemoine, however, disagreed. He was subsequently suspended, but decided to make his alleged discovery public and tell the world how LaMDA works.

What LaMDA is and what it does

Google has been studying language models for several years now, and some of this functionality can already be found in its search engine, in the form of answers to conversational queries (that is, questions phrased exactly as we speak in everyday life) or automatic sentence completion.

Sundar Pichai, the CEO of Google, presented the LaMDA AI at the Google I/O developer conference in 2021. The plan is for this artificial intelligence, specialized in conversation, to be gradually integrated into all Google products: from the search engine to the voice assistant.

Why artificial intelligence is scary

Artificial intelligence, as we conceive of it, is not really intelligent. It is actually a neural network based on a sequence of very complex algorithms that rapidly process billions upon billions of data points: in LaMDA's case, words and combinations of sentences, including sayings, everyday speech, and so on.

This allows the neural network to recognize a sentence and the context in which it is spoken or written, and therefore provide a relevant response.
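To make the idea concrete, here is a minimal, hypothetical sketch in Python (nothing like Google's actual LaMDA code, which is neural and vastly larger) of the statistical principle at work: count which words tend to follow which in a body of text, then use those counts to pick a plausible continuation. Real language models replace the counting with neural networks trained on billions of examples, but the point is the same: the output comes from patterns, not understanding.

```python
# Toy sketch of statistical next-word prediction (a bigram table).
# Assumption: this illustrates the general principle only, not LaMDA itself.
from collections import Counter, defaultdict

corpus = [
    "what time is it",
    "what time is the meeting",
    "what will the weather be like tomorrow",
    "the weather will be sunny tomorrow",
]

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the corpus, if any."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(most_likely_next("time"))     # -> "is" (seen twice after "time")
print(most_likely_next("weather"))  # -> "be" (first seen among equally frequent followers)
```

The program "speaks" plausibly about the weather without any notion of what weather is, which is exactly the distinction Google's management drew.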

Take, for example, the classic questions we ask voice assistants: what time is it, what will the weather be like tomorrow, and so on. These are answered by mimicking thousands upon thousands of similar questions and answers asked and given by humans, which are then "fed" to the AI so that it can learn.
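Again purely as an illustration, and not as any real assistant's implementation, this mimicry can be sketched in a few lines: store human question-and-answer pairs, then answer a new question by returning the answer attached to the most similar stored question.

```python
# Toy sketch of answering by mimicry over stored human Q&A pairs.
# Assumption: the pairs and the word-overlap measure are invented for illustration;
# real assistants use neural similarity over vast datasets.
qa_pairs = [
    ("what time is it", "It is 10:30."),
    ("what will the weather be like tomorrow", "Sunny, around 25 degrees."),
    ("set a timer for five minutes", "Timer set for five minutes."),
]

def overlap(a: str, b: str) -> int:
    """Count the words two questions share: a deliberately crude similarity measure."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def answer(question: str) -> str:
    # Return the stored answer whose question shares the most words with ours.
    _, best_answer = max(qa_pairs, key=lambda pair: overlap(question, pair[0]))
    return best_answer

print(answer("what will the weather be tomorrow"))  # -> "Sunny, around 25 degrees."
```

The assistant never knows what weather or time are; it only finds the closest match among answers humans have already given.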

That is why, thanks to this combination of logic and data processing, AI is considered a valuable ally in problem solving and is widely used, for example, in logistics and industry. But is that enough to say that it has consciousness?

Human Consciousness and AI Consciousness

To assert that an artificial intelligence such as LaMDA has consciousness, one would have to find elements typical of sentient, and not merely intelligent, activity, such as the ability to distinguish right from wrong, or empathy.

These are traits which, at least for the moment, remain the prerogative of human consciousness and cannot be reproduced in a neural network model based on algorithms, that is, on sequences of mathematical functions.
