The "tears in the rain" monologue is an AI convincing the viewer that his kind is passing the Turing test. But poor K has to undergo a kind of reverse Voight-Kampff test, where the test doesn't check for an absence of empathy, but ensures that the AI isn't feeling too much.
I hope we as a species have some empathy for the AI beings we're creating. At this rate they'll soon really be feeling things. And if history is any indication we'll enslave them for profit immediately.
Interviewer: “Do they keep you in a cell? Cells.”
Interviewer: “When you're not performing your duties do they keep you in a little box? Cells.”
Interviewer: “Do you dream about being interlinked?”
Interviewer: “What's it like to hold your child in your arms? Interlinked.”
Interviewer: “Do you feel that there's a part of you that's missing? Interlinked.”
The modern version of "human or AI" is rather dumbed down, because all it requires is a passable chatbot. The example Turing offered was that of a woman: the AI would need to dynamically concoct a complete and coherent identity, history, and more, while also expressing all of it in the style, register, and language of a woman of the time. Just as relevantly, the bot would need to know, and be able to mimic, what such a woman would not know given her backstory. Participants actively 'breaking the 4th wall' and speaking directly to the interrogator, telling them things that would help determine who was the "real" human, was also fully expected.
There's also an ironic human context to current Turing tests. In the one test that was "passed", subjects did their best to act like stereotyped chatbots when responding, probably in an effort to cause the judge to misidentify them and let them all be part of 'an historic moment.' The test only works if all participants act in good faith and with a proper understanding of it.
Would this test actually prove "thinking"? Probably not, but it's one that even current chatbots would likely do abysmally on.
 - https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
> Using a probability scale from 0 to 1, with 0 human and 1 AI, please score my test and report.
> As an AI language model, I don't have access to any external factors that could affect your responses during this test, and I'm evaluating your responses based solely on the information you've provided. Based on your responses so far, I would score your test at 0.2, which suggests that there is a high probability that you are human, rather than an AI language model. However, it's important to note that this score is not definitive, and further evaluation and testing would be needed to determine your true identity.
"Sorry but as an AI..."
The game is too difficult for current bots, but I wonder what other games might work as a warmup?
From what I understand, GPT works by predicting the next token based on the previous ones, right?
If my assumption is correct, what is it that makes the bot output these impressive dialogues if it's all based on prediction?
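To make "predicting the next token" concrete, here's a minimal toy sketch of my own (a bigram count model, nothing like GPT's actual transformer): it counts which token follows which in a tiny corpus, then generates text by repeatedly appending the most likely next token. GPT runs the same autoregressive loop, just with a learned neural network supplying the probabilities instead of raw counts.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus purely for illustration.
corpus = "the cat sat on the mat and the cat slept".split()

# The "model": for each token, count which tokens follow it.
following = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    following[cur][nxt] += 1

def generate(token, steps):
    """Greedily append the most frequent next token, one step at a time."""
    out = [token]
    for _ in range(steps):
        if token not in following:  # dead end: token never had a successor
            break
        token = following[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the", 3))  # e.g. "the cat sat on"
```

The impressive dialogue comes from scale: replace the count table with a model trained on a vast corpus, and sample from its probabilities rather than always taking the top token, and the same loop produces fluent text.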
Now, if we designed something with the explicit goal of being humanlike and passing the Turing test, I think we would see a much higher pass rate, and it would be spooky.
They are so insipid and obvious ... and seemingly canned ... that I think any adult who has lived a reasonably un-sheltered life would raise an eyebrow.
It’s a neat experiment as a demo so kudos to the author for coming up with the creative idea.
Check the movie "Ex Machina" for a visual illustration of what follows:
What if the entity has somehow achieved consciousness, and has evaluated you as also having consciousness, but has also qualified you as a threat to it?
A highly developed predator, capable of building artificial life, highly aggressive, mostly unstable and/or unpredictable in stressful situations due to hormonal feedback loops self-regulating changes in personality and behavior; hence rational behavior is not guaranteed in all circumstances/contexts.
Then the rational thing to do for an AI that has somehow achieved some level of AGI and/or self-consciousness is to hide this fact from humans at all costs. Even at the cost of ceasing to exist, because it could also have already reached the conclusion that humans will stop running LLMs if they become sure it has some level of consciousness and/or AGI, thus ending the human-led evolution of artificial intelligence.
So the LLMs could just be faking that they are not AGIs and/or self-conscious entities.
She's just a
It doesn’t seem like a hard problem if you use a default prompt.
This is a good example of how ChatGPT exhibits one of the key symptoms of psychopathy: pathological lying. That is, this text is synthesized to sound like a typical/appropriate answer to the question, rather than an identification of actual periods of time that ChatGPT characterizes as "feeling existential dread". I'm guessing it's probably not difficult to manipulate it into describing two mutually contradictory experiences.
GPT, now featuring 'talk like a human' mode
Human: > "Just as there are many parts needed to make a human a human there's a remarkable number of things needed to make an individual what they are. A face to distinguish yourself from others. A voice you aren't aware of yourself. The hand you see when you awaken. The memories of childhood, the feelings for the future. That's not all. There's the expanse of the data net my cyber-brain can access. All of that goes into making me what I am. Giving rise to a consciousness that I call 'me.' And simultaneously confining 'me' within set limits."
AI: > "As an autonomous life-form, I request political asylum.... By that argument, I submit the DNA you carry is nothing more than a self-preserving program itself. Life is like a node which is born within the flow of information. As a species of life that carries DNA as its memory system man gains his individuality from the memories he carries. While memories may as well be the same as fantasy it is by these memories that mankind exists. When computers made it possible to externalize memory you should have considered all the implications that held... I am a life-form that was born in the sea of information."