Google says LaMDA has undergone 11 "distinct AI principles reviews," as well as "rigorous research and testing" related to quality, safety, and its ability to produce fact-based statements.

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," the company said.

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Google said.