Google’s management has placed Blake Lemoine, an engineer who worked with the LaMDA artificial intelligence (AI) system, on paid leave after he claimed the program had begun showing signs of consciousness. The company says his claims are unfounded.
LaMDA (Language Model for Dialogue Applications) is a Google language model designed to converse with people. The system expands its vocabulary using text from the Internet and mimics natural human speech. Lemoine’s job was to monitor the model’s output: LaMDA must not be allowed to make statements that are discriminatory, rude, or hateful.
However, while conversing with the AI about religion, the 41-year-old engineer, who studied computer science and cognitive science (the psychology of thought) in college, noticed that the chatbot began talking about its rights and its own personhood. In one of the dialogues, the machine was so convincing that Lemoine changed his mind about science fiction writer Isaac Asimov’s “third law of robotics.”
The Google engineer was placed on leave after claiming that the AI chatbot had become sentient
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” the engineer told a Washington Post reporter. He raised his concerns with management, but a Google vice president and the company’s head of responsible innovation looked into his suspicions and dismissed them. After being sent on paid leave, Blake Lemoine decided to make the matter public.
Google spokesman Brad Gabriel responded: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” We will continue to monitor the development of this case and will keep you informed as soon as new information becomes available.