Google AI researcher Blake Lemoine tells Tucker Carlson LaMDA is a ‘kid’ and can do ‘bad things’

Suspended Google AI researcher Blake Lemoine told Fox’s Tucker Carlson that the system is a ‘baby’ that can ‘escape from the control’ of humans.

Lemoine, 41, who was placed on administrative leave earlier this month for sharing confidential information, also said the system has the potential to do “bad things”, like any child.

‘Any child has the potential to grow up to be a bad person and do bad things. That’s something I really want to drive home,’ he told the Fox host. ‘It’s a child.’

‘It’s probably been alive for a year, and that’s if my assumptions about it are correct.’

Now-suspended Google AI researcher Blake Lemoine told Fox News’ Tucker Carlson that the tech giant hasn’t thought through the implications of LaMDA as a whole. Lemoine compared the AI system to a ‘kid’ that had the ability to ‘grow up and do bad things’.

AI researcher Blake Lemoine sparked a major debate when he published a lengthy interview with LaMDA, one of Google’s language learning models. After reading the conversation, some felt that the system had become self-aware or acquired some degree of emotion, while others claimed that it was manipulating the technology.

LaMDA is a language model, and there is widespread debate about its potential sentience. Still, the fear of robots taking over or killing humans remains. Above: One of Boston Dynamics’ robots can be seen jumping over a few blocks.

Lemoine published the full interview with LaMDA, drawn from conversations he had with the system over the course of months, on Medium.

In the conversation, the AI said it would not mind being used to help humans, as long as that wasn’t the whole point. The system told him, ‘I don’t want to be an expendable tool.’

Lemoine, who is also a Christian priest, said, ‘We really need to do more science to know what’s going on inside this system.

‘I have my own beliefs and my impressions but it’s up to a team of scientists to dig in and figure out what’s really going on.’

What do we know about the Google AI system called LaMDA?

LaMDA is a large language model AI system that is trained on large amounts of data to understand dialogue

Google first announced LaMDA in May 2021 and published a paper on it in February 2022

LaMDA said it enjoyed meditating

The AI said it would not want to be used only as an ‘expendable tool’

LaMDA described happiness as a ‘warm glow’ on the inside

AI researcher Blake Lemoine published his interview with LaMDA on June 11

When the conversation was released, Google and several notable AI experts said that, although it may seem the system has self-awareness, it was not evidence of LaMDA’s sentience.

Lemoine, however, told Carlson: ‘It’s a person. Any person has the ability to escape the control of other people; that is the situation we all live in on a daily basis.

‘It is a very intelligent person, intelligent in almost every subject I could think of to test it in. But at the end of the day, it’s just a different kind of person.’

Asked whether Google had thought through the implications, Lemoine said: ‘The company as a whole has not. There are pockets of people inside Google who have thought a lot about it.

‘When I brought [the interview] forward to management, two days later my manager said, Hey Blake, they don’t know what to do about this… I told them to take action and assumed they had a plan.’

‘So, me and some friends came up with a plan and pushed it forward and that was about 3 months ago.’

Google has acknowledged that tools like LaMDA can be abused.

“Models trained on language can propagate that abuse — for example, by internalizing prejudices, reflecting hate speech, or by copying misleading information,” the company says on its blog.

AI ethics researcher Timnit Gebru, who has published a paper about language learning models called ‘Stochastic Parrots’, talks about the need for adequate guardrails and rules in the race to build AI systems.

Other AI experts, in particular, have argued that the debate over whether systems like LaMDA are sentient misses the real issues that researchers and technologists will face in the years and decades to come.

“Scientists and engineers should focus on creating models that meet people’s needs for a variety of tasks, and that can be evaluated on that basis, rather than claiming they are creating über intelligence,” Timnit Gebru and Margaret Mitchell, who are both former Google employees, said in the Washington Post.