Aurélie Jean – No, algorithms have no conscience


It is the news item that intrigues: a Google engineer has just been suspended for revealing to the general public the existence of an artificial intelligence conscious of its own existence. The system in question is the LaMDA chatbot (Language Model for Dialogue Applications), designed by Google teams, which allegedly demonstrated cognitive acuity in its exchanges with the engineer. One wonders which is more outrageous: seeing an employee suspended for disclosing confidential information, or the content of the claim itself. Because, at the risk of upsetting alarmist futurologists, algorithms have no conscience.

Artificial intelligence runs on algorithms built on and executed against data. These algorithms therefore master only analytical intelligence. The machine solves analytically problems that, for many of us, are a priori not analytical at all: recognizing a dog in a photo, for example. The other intelligences – such as the creative and practical intelligences of Robert Sternberg’s triarchic theory – can possibly be simulated, but they are not mastered. When a chatbot tells you “I love you”, it does not understand those words, which are here nothing more than mathematical objects. In other words, it does not feel what it is saying.
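To make that point concrete, here is a minimal, purely illustrative sketch in Python. The scores are invented, not taken from LaMDA or any real model: to a language model, “I love you” is only the most probable sequence of tokens, that is, a calculation over numbers.

```python
import math

# Hypothetical, invented scores a statistical language model might assign
# to candidate next tokens after the prompt "I" (illustrative numbers only).
scores = {"love": 2.1, "think": 1.3, "am": 0.7, "compute": -0.4}

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(scores)
next_token = max(probs, key=probs.get)

# The chatbot "says" whatever maximizes a probability; no feeling is involved.
print(probs)            # e.g. {'love': 0.56, 'think': 0.25, ...}
print("I", next_token)  # -> I love
```

Real models are of course vastly larger, but the output remains, in the end, the result of this kind of arithmetic on mathematical objects.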


The passage to complete intelligence, also called general intelligence, is what the theory of the technological singularity describes: it supposes that, in a more or less near future, a tipping point will be reached at which these artificial intelligences would de facto become conscious of their own existence. Even though this theory is hypothetical and belongs to science fiction, it still deserves to be researched. Through this work we may come to understand what an emotion is, or even what consciousness is. This is precisely where the subtlety lies for certain scientists, who sometimes use the words “consciousness” or “feelings” somewhat clumsily, straying from the terms and definitions specific to humans.

Responsibility and education…

Then again, to come back to the robot or the algorithm that tells you “I love you”: you can get the impression that the machine feels it, and that is the whole problem. Without education on these subjects, and without scientists and engineers taking responsibility for the words they use and the technologies they develop, we run the risk of falling into a so-called apparent singularity – but not an actual one! – which gives the distorted impression that these chatbots, to take the example discussed here, have a conscience.

Her, Westworld, The Matrix, Ex Machina and 2001: A Space Odyssey are all science fiction stories that deal with the technological singularity with talent. Let’s leave the authors of these works the exclusive right to that approach and avoid stealing their thunder. As for us, let’s make sure we stay clear-eyed. Spielberg will thank us!


