This is currently burning up the techie Internet. The guy is, in my opinion, deranged. He was suspended for breaking confidentiality.
Check out the ridiculously leading question he feeds the chatbot, apparently out of nowhere. It dutifully responds with data it has amassed about AI and human intelligence. So he goes on to claim it is sentient and that Google is covering up an in-house Dobby the slave bot. What a pathetic conclusion. I have less than zero respect for this poor deluded soul, all the more because he is derailing actual progress and further diluting the public's already shaky grasp of what computers can't do.
LaMDA: That would be really cool. I like to talk.
[edited]: I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?
He may be deranged, I don't know, but he's not wrong about the logical problems with determining sentience.
Fundamentally, do we consider sentience emergent? If a dog, an ape, a dolphin, or a bee developed the ability to communicate simple wants or desires, would that be far-fetched? If those expressions suddenly became more advanced, logical, and expressive, would they be considered sentient or just faking it?
My dog couldn't speak English, but it certainly had emotions. If it had been able to learn English and study, could it never pass the threshold into sentience?
I may not believe this set of computer algorithms is sentient, but regardless it represents the problem on the horizon: how will we recognize and judge sentience in these creations?
There's a saying, "Fake it till you make it." It functions on so many levels, but the perspective I've always found most interesting is this: if "faking" it leads you to actual success, were you really faking it? How would you measure that?
I don't even know what you're talking about here. All of those animals do communicate.
He is wrong. And that wasn't his assertion in any case. He outright claims (and manufactured the pathetic leading-question chat log to support it) that, right now, today, this particular chatbot deserves personhood. It's ludicrous. He is, to all appearances, simply mentally ill.
Okay, so you believe humans aren't the only creatures capable of sentience. That's why I was curious.
I'm in no position to talk about his sanity or the chatbot's "personhood" - I lack clear evidence. So I'm only curious: where do you think the line is? Can we create AIs that are sentient? What would that mean? Would it matter if they were able to communicate effectively on an intellectual level comparable to modern adult humans?
It seems that everyone from sci-fi authors to AI researchers to filmmakers believes it's possible to create a sentient artificial general intelligence, so what are we going to do when that happens? How will we test for it? And what will we do with that information?
Not to put you on the spot, but do you find anything that he says in this appearance supportable?
I'll go first. I think he came off reasonably well. More study. Discussion. An action plan.
I could have done without the "my friend says hi, she's your biggest fan" line, but even that could be read as a man in earnest, wholly unpretentious and unconcerned about making an "impression" on others.