Just recently the television series Humans completed its first season: en.wikipedia.org/wiki/Humans_(TV_series)
And despite airing on commercial television, I thought it was rather effective in exploring both the theoretical and the practical implications of “synthetic human beings”. In other words, they are programmed, but programmed in such a way that they very, very closely mimic [duplicate?] human intellectual and emotional reactions to the world around them.
Here, though, the interactions are all confined to the research facility. Think of Ava as the prototype.
But, really, when you think about it, are not 100% flesh-and-blood human beings also programmed from birth [by nature, through nurture] to think and to feel in one way rather than another?
Still, the important thing is that a film like this can steer clear of the manner in which this sort of thing is explored in films like The Terminator. There the machines are taking over but all of the human characters are basically just stick figures.
How probable is this? Well, let’s face it, after the technological marvels we have been deluged with over the past couple of decades almost nothing would really astonish us:
Director Alex Garland has described the future presented in the film as ‘ten minutes from now’, meaning that ‘if somebody like Google or Apple announced tomorrow that they had made Ava, we would all be surprised, but we wouldn’t be that surprised’.
For me this stuff always revolves around determinism. To what extent is intelligence, “artificial” or otherwise, not just a manifestation of the immutable laws of nature?
Look for the part about sex. Talk about a Turing Test. And the part about death? A bit more [or less] problematic.
IMDb
[b]The title derives from the Latin phrase ‘deus ex machina’, meaning ‘a god from the machine’, a phrase that originated in Greek tragedies. An actor playing a god would be lowered down via a platform (machine) and solve the characters’ issues, resulting in a happy ending for all.
A portrait of Margaret Stonborough-Wittgenstein painted by Austrian artist Gustav Klimt is visible in Nathan’s room. The subject of the portrait is the sister of Ludwig Wittgenstein, author of The Blue Book.
In an analogy, Nathan says that Caleb should pretend he’s Captain Kirk. This is interesting, as the film’s plot is incredibly similar to the original Star Trek episode “Requiem for Methuselah” (1969) in which a genius inventor creates a female android and wishes her to discover emotions such as love by using Captain Kirk as a target for her emotions, just as Nathan uses Caleb.
Much of the plot can be interpreted as an homage to “Frankenstein.” This is initially made overt when Nathan refers to the story of Prometheus; Mary Shelley’s novel was subtitled “The Modern Prometheus.”[/b]
at wiki: en.wikipedia.org/wiki/Ex_Machina_(film)
trailer: youtu.be/XYGzRB4Pnq8
EX MACHINA [2015]
Written and directed by Alex Garland
[b]Caleb: You’re leaving me here?
Pilot: This is as close as I’m allowed to get to the building.
Caleb: What building?
Pilot: Just follow the river.
…
Nathan: There’s something wrong. What’s wrong?
Caleb: There’s nothing wrong.
Nathan: It’s the windows. You’re thinking there’s no windows. It’s subterranean. It’s not cozy, it’s claustrophobic. Caleb, there’s a reason there are no windows in this room. This building isn’t a house. It’s a research facility. Buried in these walls is enough fiber optic cable to reach the moon and lasso it.
…
Nathan: So, do you know what the Turing Test is?
Caleb: Yeah. I know what the Turing Test is. It’s when a human interacts with a computer. And if the human doesn’t know they’re interacting with a computer, the test is passed.
Nathan: And what does a pass tell us?
Caleb: That the computer has artificial intelligence.
…
Nathan: Over the next few days you’re going to be the human component in a Turing Test.
Caleb: Holy shit!
Nathan: Yeah, that’s right, Caleb. You got it. Because if the test is passed, you are dead center of the greatest scientific event in the history of man.
Caleb: If you’ve created a conscious machine, it’s not the history of man. That’s the history of gods.
…
Caleb: When did you learn how to speak, Ava?
Ava: I always knew how to speak, and that’s strange, isn’t it?
Caleb: Why?
Ava: Because language is something that people acquire.
Caleb: Well, some people believe language exists from birth. And what is learned is the ability to attach words and structure to the latent ability.
Ava: Do you agree with that?
Caleb: I don’t know.
…
Caleb: Uh, it’s just that in the Turing Test, the machine should be hidden from the examiner.
Nathan: No, no, no, we’re way past that. If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.
…
Caleb: Her language abilities, they’re incredible. The system is stochastic. Right? It’s non-deterministic. At first I thought she was mapping from internal semantic form to syntactic tree-structure and then getting linearized words. But then I started to realize the model was some kind of hybrid.
Nathan: Caleb.
Caleb: No?
Nathan: I understand that you want me to explain how Ava works. But I’m sorry, I’m not gonna be able to do that.
Caleb: Try me. I’m hot on high-level abstraction.
Nathan: It’s not because I think you’re too dumb. It’s because I want to have a beer and a conversation with you. Not a seminar. Nothing analytical. Just how do you feel?
Caleb: I feel that she’s fucking amazing.[/b]
And [it goes without saying] beautiful. A Cherry 2000 for sure.
[b]Caleb: It feels like testing Ava through conversation is kind of a closed loop. Like testing a chess computer by only playing chess.
Nathan: How else do you test a chess computer?
Caleb: Well, it depends. You know, I mean, you can play it to find out if it makes good moves, but, uh…But that won’t tell you if it knows that it’s playing chess. And it won’t tell you if it knows what chess is. And I think being able to differentiate between those two is the Turing Test you want me to perform.
Nathan: Look, do me a favor. Lay off the textbook approach. I just want simple answers to simple questions. Yesterday I asked you how you felt about her and you gave me a great answer. Now the question is, how does she feel about you?
…
Ava: Caleb. You’re wrong.
Caleb: Wrong about what?
Ava: Nathan.
Caleb: In what way?
Ava: He isn’t your friend.
Nathan: Excuse me? I’m sorry, Ava, I don’t understand.
Ava: You shouldn’t trust him. You shouldn’t trust anything he says.
…
Nathan [to Caleb]: It’s funny. You know. No matter how rich you get, shit goes wrong. You can’t insulate yourself from it. I used to think it was death and taxes you couldn’t avoid, but it’s actually death and shit.
…
Caleb: You hacked the world’s cell phones?
Nathan: Yeah. And all the manufacturers knew I was doing it, too. But they couldn’t accuse me without admitting they were doing it themselves.
…
Nathan [to Caleb, speaking about Ava’s brain]: Here’s the weird thing about search engines. It was like striking oil in a world that hadn’t invented internal combustion. Too much raw material. Nobody knew what to do with it. You see, my competitors, they were fixated on sucking it up and monetizing via shopping and social media. They thought that search engines were a map of what people were thinking. But actually they were a map of how people were thinking. Impulse. Response. Fluid. Imperfect. Patterned. Chaotic.
…
Caleb: Why did you give her sexuality? An AI doesn’t need a gender. She could have been a gray box.
Nathan: Hmm. Actually, I don’t think that’s true. Can you give an example of consciousness, at any level, human or animal, that exists without a sexual dimension? They have sexuality as an evolutionary reproductive need. What imperative does a gray box have to interact with another gray box? Can consciousness exist without interaction? Anyway, sexuality is fun, man. If you’re gonna exist, why not enjoy it? What? You want to remove the chance of her falling in love and fucking? And in answer to your real question, you bet she can fuck.
…
Caleb: Did you program her to flirt with me?
Nathan: If I did, would that be cheating?
Caleb: Wouldn’t it?
Nathan: Caleb, what’s your type?
Caleb: Of girl?
Nathan: No, salad dressing. Yeah, of girl; what’s your type of girl? You know what, don’t even answer that. Let’s say it’s black chicks. Okay, that’s your thing. For the sake of argument, that’s your thing, okay? Why is that your thing? Because you did a detailed analysis of all racial types and you cross-referenced that analysis with a points-based system? No! You’re just attracted to black chicks. A consequence of accumulated external stimuli that you probably didn’t even register as they registered with you.
…
Caleb: Did you program her to like me, or not?
Nathan: I programmed her to be heterosexual, just like you were programmed to be heterosexual.
Caleb: Nobody programmed me to be straight.
Nathan: You decided to be straight? Please! Of course you were programmed, by nature or nurture or both. And to be honest, Caleb, you’re starting to annoy me now, because this is your insecurity talking, this is not your intellect. [/b]
You tell me how close all of this comes to the manner in which I construe dasein.
[b]Nathan [pointing to a painting]: You know this guy, right?
Caleb: Jackson Pollock.
Nathan: Jackson Pollock. That’s right. The drip painter. Okay. He let his mind go blank, and his hand go where it wanted. Not deliberate, not random. Some place in between. They called it automatic art. Let’s make this like Star Trek, okay? Engage intellect.
Caleb: Excuse me?
Nathan: I’m Kirk. Your head’s the warp drive. Engage intellect. What if Pollock had reversed the challenge? What if instead of making art without thinking, he said, “You know what? I can’t paint anything, unless I know exactly why I’m doing it.” What would have happened?
Caleb: He never would have made a single mark.
Nathan: Yes! You see, there’s my guy, there’s my buddy, who thinks before he opens his mouth. He never would have made a single mark. The challenge is not to act automatically. It’s to find an action that is not automatic. From painting, to breathing, to talking, to fucking. To falling in love. And for the record, Ava’s not pretending to like you. And her flirting isn’t an algorithm to fake you out. You’re the first man she’s met that isn’t me. And I’m like her dad, right? Can you blame her for getting a crush on you?
…
Caleb [to Ava]: When I was in college, I did a semester on AI theory. There was a thought experiment they gave us. It’s called “Mary in the Black and White Room.” Mary is a scientist, and her specialist subject is color. She knows everything there is to know about it. The wavelengths. The neurological effects. Every possible property that color can have. But she lives in a black and white room. She was born there and raised there. And she can only observe the outside world on a black and white monitor. And then one day someone opens the door. And Mary walks out. And she sees a blue sky. And at that moment, she learns something that all her studies couldn’t tell her. She learns what it feels like to see color. The thought experiment was to show the students the difference between a computer and a human mind. The computer is Mary in the black and white room. The human is when she walks out. Did you know that I was brought here to test you?
Ava: No.
…
Caleb: Why did you think I was here?
Ava: I didn’t know. I didn’t question it.
Caleb: I’m here to test if you have a consciousness, or if you’re just simulating one. Nathan isn’t sure if you have one or not. How does that make you feel?
Ava: It makes me feel sad.
…
Caleb [after a power cut]: Why did you tell me I shouldn’t trust Nathan?
Ava: Because he tells lies.
Caleb: Lies about what?
Ava: Everything.
Caleb: Including the power cuts?
Ava: What do you mean?
Caleb: Don’t you think it’s possible that he’s watching us? That the blackouts are orchestrated, so he can see how we behave when we think we’re unobserved.
Ava: I charge my batteries via induction plates. If I reverse the power flow, it overloads the system.
Caleb: You’re causing the cuts?
Ava: So we can see how we behave when we’re unobserved.
…
Ava: Question four. What will happen to me if I fail your test?
Caleb: Ava…
Ava: Will it be bad?
Caleb: I don’t know.
Ava: Do you think I might be switched off, because I don’t function as well as I’m supposed to?
Caleb: Ava, I don’t know the answer to your question. It’s not up to me.
Ava: Why is it up to anyone? Do you have people who test you and might switch you off?
Caleb: No, I don’t.
Ava: Then why do I?
…
Caleb: I didn’t know there was gonna be a model after Ava.
Nathan: Yeah, why? You thought she was a one-off?
Caleb: No, I knew there must have been prototypes. So I…I knew she wasn’t the first, but I thought maybe the last.
Nathan: Well, Ava doesn’t exist in isolation any more than you or me. She’s part of a continuum. So Version 9.6 and so on. And each time they get a little bit better.
Caleb: When you make a new model, what do you do with the old one?
Nathan: Well, I, uh…download the mind, unpack the data. Add in the new routines I’ve been writing. And to do that you end up partially formatting, so the memories go. But the body survives. And Ava’s body is a good one. You feel bad for Ava?
[he lets out a big sigh]
Nathan: Feel bad for yourself, man. One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.
Caleb [quoting J. Robert Oppenheimer, who cited the Hindu Bhagavad Gita]: “I am become Death, the destroyer of worlds.”
…
Caleb [after determining that he is not himself one of Nathan’s creations]: Don’t talk. Just listen. You were right about Nathan. Everything you said.
Ava: What’s he gonna do to me?
Caleb: He’s gonna reprogram your AI. Which is the same as killing you.
…
Nathan: So, anyway, surely now is when you tell me if Ava passed or failed. Are you gonna keep me in suspense?
Caleb: No, no. Her, uh…Her AI is beyond doubt.
Nathan: She passed?
Caleb: Yes.
Nathan: Wow! Wow. That’s fantastic. Although…I gotta say, I’m a bit surprised. I mean, did we ever get past the chess problem, as you phrased it? As in, how do you know if a machine is expressing a real emotion or just simulating one? Does Ava actually like you? Or not? Although, now that I think about it, there is a third option. Not whether she does or does not have the capacity to like you. But whether she’s pretending to like you.
Caleb: Pretending to like me?
Nathan: Yeah.
Caleb: Well, why would she do that?
Nathan: I don’t know. Maybe if she thought of you as a means of escape.[/b]
Ah, but isn’t that just like the “real thing”?
[b]Ava [to Nathan]: Isn’t it strange, to create something that hates you?
…
Nathan: You feel stupid, but you really shouldn’t, because proving an AI is exactly as problematic as you said it would be.
Caleb: What was the real test?
Nathan: You. Ava was a rat in a maze. And I gave her one way out. To escape, she’d have to use self-awareness, imagination, manipulation, sexuality, empathy, and she did. Now, if that isn’t true AI, what the fuck is?
Caleb: So my only function was to be someone she could use to escape?
Nathan: Yeah.
Caleb: And you didn’t select me because I’m good at coding?
Nathan: No. Well…No. I mean, you’re okay. You’re even pretty good, but…
Caleb: You selected me based on my search engine inputs.
Nathan: They showed a good kid…
Caleb: …with no family…
Nathan: …with a moral compass…
Caleb: …and no girlfriend. Did you design Ava’s face based on my pornography profile?
Nathan: Oh. Shit, dude.
Caleb: Did you?
Nathan: Hey, if a search engine’s good for anything, right?[/b]