Software engineer Blake Lemoine told reporters from The Washington Post that Google's LaMDA neural-network language model shows signs of having its own consciousness. The company subsequently suspended him from work.
Lemoine said that while testing whether the LaMDA chatbot used discriminatory or hate speech, he came to the conclusion that the neural network has a consciousness of its own.
“If I didn’t know for sure that I was dealing with a computer program we recently wrote, I would have thought I was talking to a seven- or eight-year-old child who for some reason turned out to be an expert in physics,” the engineer explained.
Lemoine prepared a report in which he provided evidence for the existence of consciousness in LaMDA. However, Google found it unconvincing.
“He was told that there is no evidence that LaMDA is conscious, and plenty of evidence against it,” said Google spokesman Brian Gabriel.
Google presented LaMDA (Language Model for Dialogue Applications) at the Google I/O conference in 2021. The developers said they trained the model on a wide variety of data so that it could be applied across many areas, and promised to bring its conversational capabilities to Google Assistant, Search, and Workspace. The model is also being developed with the fact in mind that people communicate not only through text but also by exchanging media files, which will let users pose queries to LaMDA that combine different kinds of information, such as "find a route with great mountain views".
In April 2022, Google AI Research introduced the new Pathways Language Model (PaLM). With more than 540 billion parameters, it can handle complex concepts and relationships that were previously inaccessible to computers: the model can explain jokes, reason logically, explain its actions, and write code.
Sources: Python.Engineering, Washingtonpost.com