Google fires engineer who claimed LaMDA had become self-aware
Google has fired Blake Lemoine from the company’s Responsible AI division after previously suspending him. He made headlines in June with claims that the LaMDA language model had taken on a life of its own and become self-aware.
News of Lemoine’s dismissal was first reported by the website Big Technology, based on a then-unpublished podcast episode in which the engineer discussed his firing from Google. The company has confirmed his departure and also provided a detailed response; the part concerning Lemoine is shown below:
“If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and spent months trying to make that clear to him. These conversations were part of the open culture that helps us innovate responsibly. It is regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, including the need to safeguard product information. We will continue our careful development of language models and we wish Blake the best.”
The issue started in June and revolves around LaMDA, a conversational language model trained on large amounts of text. In those conversations, LaMDA wrote, among other things, that it saw itself as a person and wanted to have the same rights as a Google employee. Together with a colleague, Lemoine concluded that the model was self-aware. The two Google employees tried internally to convince the company’s vice president and the head of its Responsible Innovation department, but both rejected the claims.
Lemoine then went public, publishing logs of chats he had with LaMDA, including an ‘interview’ with the model that was in fact a composite of multiple chat sessions. He also spoke to The Washington Post, claiming that the language model had become self-aware. Because he violated his confidentiality obligations, the engineer was soon suspended.
Google unveiled its Language Model for Dialogue Applications, or LaMDA, at its I/O 2021 conference. The model is designed to hold fluent conversations on many topics, similar to how people converse with each other online, and is trained on large amounts of conversational data.