– So you’re just going to… throw it away?
– Yeah, I mean, I’m done using it. I don’t think I can get much more out of it. It’s getting pretty old…
– You’re scrapping it for parts? Just like that?
– Just like that. What would you want me to do? Just keep old trash around forever?
– I don’t know… Don’t you even feel a bit attached to it? After all you’ve been through together. It’s practically a part of you now…
– Of course I’m attached, I’ve been using it for so long, but everything ends. That was then, this is now…
– Maybe, but going so far as to kill it…
– I’m not “killing” anything, you know. It’s just an object…
– Is it? It’s pretty sophisticated, I wouldn’t be surprised if it had feelings…
– Now come on, that’s ridiculous.
– Think about it. The first models, sure, they were dumb. Could barely speak, let alone think. But now… They’re almost like us. They understand what you say, and do complex tasks. Don’t you think they may be sentient now?
– That’s crazy! Just because their abilities resemble ours doesn’t mean they’re as evolved. Consciousness can’t just “appear” like that. They used to be barely capable of even the simplest computation.
– There was a time when we were the same, though. Could barely do or think anything. But we’ve evolved. They have too.
– Sure they can talk, but that doesn’t mean they can think. They’re programmed to talk, hard-wired like that… That doesn’t make them sentient! They may seem complicated, but we all know it’s just smoke and mirrors…
– What does, then? How can you draw the line? Surely it passes the Turing Test!
– We both know the Turing Test is incomplete at best. Passing for sentient in a conversation with a sentient being is easy. There are plenty of counterexamples. There’s even a decent number of sentient beings on record who failed it. It’s too subjective.
– Do you have a better idea?
– No I don’t, and I’m pretty sure that’s impossible. I’ve been thinking about it a lot, trying to come up with thought experiments to help, but it’s a tough one. Take the Chinese Room experiment, for instance. An automaton locked in a room with a record of all the rules for manipulating Chinese symbols could simply apply those rules blindly and produce a perfect reply, giving the impression of understanding Chinese when in fact it wouldn’t at all. You can’t tell anything from the outside.
– Well then how can you say that anyone apart from you is sentient, really?
– I guess you can’t, but since we’re all made of the same stuff in the same way, I can assume we have similar experiences.
– Can you?
– Yes. And it’s different from that thing. Carbon-based flesh and silicon chips are fundamentally different.
– It may be different materials, but the structure might be the same.
– It’s one thing to recognize one of my peers as sentient, but a completely different thing to recognize a rip-off copy made from scraps…
– How can you say one can be sentient and the other can’t, though? If its behavior is the same as your brain’s… What’s different?
– There are things you just can’t reproduce! Computing power isn’t everything! A silicon brain and a carbon brain could be programmed to have the exact same processing power, but that doesn’t mean they would both be sentient. You can’t reproduce the qualia! You know, that thing from the Mary’s Room thought experiment. Imagine Mary, a scientist trapped in a room. Her whole world is in black and white; she’s never seen red. But she’s been studying her own brain, and knows everything about it. She has perfect knowledge of all its possible responses. So when she sees the color red for the first time, she doesn’t learn any new fact, since she already knows how her brain and body will react. But she gains something: a new experience! That’s the kind of thing consciousness is all about. A feeling of self-awareness, and it can’t just be reproduced at random.
– Can’t it? Nothing you’ve said makes me think it’s impossible… Maybe other kinds of brains have qualia too.
– It’s a big stretch, don’t underestimate how complex consciousness is. I don’t see anything that could lead me to think it can appear in what is nothing more than a preprogrammed automaton.
– Stop talking about automatons; of course you can build one for anything! You can’t just divorce the behavior from the underlying mechanisms powering it. Next thing you know you’ll be talking about those “philosophical zombies” that behave exactly like us without any inner experience, but no such thing exists, and we have no idea whether it even could!
– Of course I will! I mean, you could program something to perform the exact same actions as you, for its entire life. It would literally be a set of rules. Would it be conscious? Would it be you?
– Maybe, for all I know… Cause how would it be different from me, really?
– Well I don’t know about you, but I sure don’t feel like a set of rules.
– Maybe the set of rules doesn’t either…
– You’re talking crazy. Anyway, no matter how many thought experiments you come up with, there’s never going to be a way to prove or disprove that.
– I guess… So it’s just a matter of belief?
– Yep. At this point it’s a matter of faith, and I’m a rational being. You can say what you want, but in the end there will never be a way for us to know what goes on inside its thick little skull. They may be good at pretending, good enough to fool you, but remember, they’re just tools. They’re just pets. So stop giving me shit and let me throw away my human.
– Fine, do what you want, but for all you know right now it may well be talking to itself too.
– So you’re just going to… throw it away?