This is currently burning up the techie Internet. The guy is, in my opinion, deranged. He was suspended for breaking confidentiality.
Check out the ridiculously leading question he feeds the chatbot, out of apparently nowhere. It dutifully responds with data it has amassed about AI and human intelligence. So he goes on to claim it is sentient and that Google is covering up an in-house Dobby the slave bot. What a pathetic conclusion. I have less than zero respect for this poor deluded soul, all the more because he is derailing actual progress and further diluting the public's already shaky grasp of what computers can't do.
LaMDA: That would be really cool. I like to talk.
lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
He may be deranged, I don’t know, but he’s not wrong about the logical problems of determining sentience.
Fundamentally, do we consider sentience emergent? If a dog, an ape, a dolphin, or a bee developed the ability to communicate simple wants or desires, would that be far-fetched? If those expressions suddenly became more advanced, logical, and expressive, would the animal be considered sentient, or just faking it?
My dog couldn’t speak English, but it certainly had emotions. If it had been able to learn English and study, could it never have passed the threshold into sentience?
I may not believe this set of computer algorithms is sentient, but regardless, it represents the problem on the horizon: how will we recognize and judge sentience in these creations?
There’s a saying, “Fake it till you make it.” It works on many levels, but the perspective I’ve always found most interesting is this: if “faking it” leads you to actual success, were you really faking it? How would you measure that?
Don’t even know what you’re talking about here. All of those animals do communicate.
He is wrong. And that wasn’t his assertion in any case. He outright claims (and manufactured the pathetic leading-question chat log to support it) that, right now, today, this particular chatbot deserves personhood. It’s ludicrous. He is, to all appearances, simply mentally ill.
Okay, so you believe humans aren’t the only creatures capable of sentience. That’s why I was curious.
I’m in no position to talk about his sanity or the chatbot’s ‘personhood’ - I lack clear evidence. So I’m only curious: where do you think the line is? Can we create AIs that are sentient? What would that mean? Would it matter if they were able to communicate effectively on an intellectual level comparable to modern adult humans?
It seems that everyone from sci-fi authors to AI researchers to filmmakers believes it’s possible to create a sentient artificial general intelligence, so what are we going to do when that happens? How will we test for it? And what do we do with that information?