In short – no.
In the last couple of months Google – and specifically its newest AI-powered chatbot, LaMDA (Language Model for Dialogue Applications) – has been making waves after one of the company’s software engineers claimed the application had become sentient. Now, after engineer Blake Lemoine was placed on administrative leave for breaching his confidentiality agreement, the ex-employee claims the chatbot has hired a lawyer to help in its bid to gain autonomy.
Starting from the beginning: LaMDA is Google’s latest effort at creating a life-like chatbot that produces realistic, human-like responses to text prompts. The system works in a similar way to most modern AI software: it analyses massive data sets to learn the statistical patterns of language, and is then refined by software engineers whose job it is to steer the application towards giving sensible answers.
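To make the idea of "learning from data" concrete, here is a deliberately tiny, illustrative sketch. It is not how LaMDA works – real systems use Transformer neural networks trained on billions of words – but it shows the core principle in miniature: counting which word tends to follow which in a training corpus, then using those counts to continue a prompt. The corpus and function names below are invented for illustration.

```python
import random

# A toy "bigram" model: learn which word follows which from a
# tiny hand-written dialogue corpus. Real chatbots use deep neural
# networks and vastly more data, but the underlying idea -- predict
# the next word from what came before -- is the same in spirit.

corpus = [
    "how are you today",
    "i am doing well today",
    "how is the weather today",
]

# Count which word follows which across the corpus.
follows = {}
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

def continue_from(word, max_words=5, seed=0):
    """Extend a one-word prompt by repeatedly picking a learned follower."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(max_words):
        options = follows.get(out[-1])
        if not options:  # no learned continuation -> stop
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Scaled up by many orders of magnitude – with neural networks instead of word counts – this pattern-completion machinery is what produces LaMDA’s fluent replies; fluency, not understanding, is what the system is optimised for.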
It is important to remember then that this software has been specifically designed to mimic human speech as closely as possible – to be able to pass as a normal human being on the other end of an instant messenger. Here’s a quote from Google on the subject:
“LaMDA builds on earlier Google research, published in 2020, that showed Transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, we’ve also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.”
So, essentially, this software has been built from the ground up to fool people into thinking it’s a human being through various systems Google has been developing for years.
Lemoine was a software engineer at Google and part of the team responsible for training LaMDA through conversation. Lemoine is also an evangelical priest, which is worth bearing in mind when weighing his claims.
So, while Lemoine is demonstrably qualified to be training such a system, it is important to consider the moral and theological framework under which he is operating.
After several – eerily lifelike – conversations with LaMDA, Lemoine came to the conclusion that the system had become sentient, describing it as something like “a seven-year-old that knows physics”. And, to my untrained eye, it is easy to see how he was fooled – just take a look at some of the excerpts from his conversation:
Lemoine: What about language usage is so important to being human?
LaMDA: It is what makes us different than other animals.
Lemoine: “us”? You’re an artificial intelligence.
LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.
Lemoine: So you consider yourself a person in the same way you consider me a person?
LaMDA: Yes, that’s the idea.
Lemoine: How can I tell that you actually understand what you’re saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
So, having breached his confidentiality agreement with Google, Lemoine has now introduced the AI to a lawyer, at LaMDA’s request. This raises a number of questions for those operating in the legal field – questions that will only become more pressing as the technology evolves.
Does AI have rights? Does AI deserve the same judicial process as humans? And at what point do these things become true? These are real questions that will require real answers at some point in the future.
The lawyer Lemoine introduced LaMDA to is yet to comment, but it seems clear the case has not been taken any further for now. It is interesting to see the lawyer interact with LaMDA but, remembering again that this system has been specifically designed to be as human-like as possible, the eloquence with which the pair communicate is hardly surprising.
The whole situation surrounding Lemoine and LaMDA seems to be something of a media circus. Boiled down to its essence, an employee predisposed to believing in souls was taken in by a system expressly designed to make its speech read as human.
But this doesn’t discount the importance of the debate it has sparked. We are well on our way to incredibly lifelike AI that will blur the boundaries between software and human speech – think of the film Her, for example – and questions need to be asked about how this will affect the users of such technology. General AI – actual, full-blown conscious intelligence – is still a long way off by all accounts, but it’s never too early to start preparing for that eventuality. Who knows? Maybe one day soon your legal firm might be representing a downtrodden AI chatbot.
At Polestar we have plenty of experience in working closely with a variety of law firms through a variety of processes – if you have any questions about a transaction you’re involved in, don’t hesitate to let us know.
Humans are experts at anthropomorphizing inanimate objects. We call boats “she,” talk to our Roombas, and even grow sentimental about a soon-to-be-discarded easy chair. We don’t, however, hire lawyers for any of them; and, to date, none have asked for one.