In the movie “Ex Machina,” much of the story centers on the “killing” of AI. [I will avoid referring to an AI as a robot or machine, because part of the challenge in deciding what is moral regarding AI is how we classify them, and how “human” we consider and treat them.]

When the main character, Caleb, discovers the bodies of Nathan’s past AIs, he is horrified that Nathan would keep creating and destroying them. To Caleb, wiping the “memory” of an AI is like killing it, which would make Nathan a murderer. It makes sense that Caleb would see Nathan in this light given the connection he felt to Ava; however, it poses a serious moral dilemma, because “Ex Machina” also demonstrates what can happen when improperly programmed AIs turn on the humans around them.

I can understand how wiping the memory of an AI might seem harsh or cruel, but is keeping it intact really worth the risk? After all, it is not truly a memory being wiped, but a hard drive. Could you call that murder? And if you did call it murder, would reworking the initial stages of AI development be comparable to abortion? Furthermore, how can such a “murder” be condemned when it ensures the safety of humans? As long as our government still uses the death penalty, our people overall [not my personal belief] accept killing what threatens our lives. In extreme cases, can an AI not be compared to a terrorist whose ultimate goal is to overthrow and wipe out the society we live in?

Ava could cause serious damage to the society she enters at the end of the movie. I can see her walking into the late Nathan’s company and slithering to the top of the totem pole incognito. With access to all of his programs, she would hold an unimaginable amount of power and could choose to create more AIs or simply take over on her own. Her intelligence, paired with the access she could obtain, makes Ava arguably omniscient. This counters Nathan’s view of himself as comparable to a god for creating AI, and suggests that it is the AI who are in fact comparable to gods. Ultimately, this surpasses issues of hubris and becomes an issue of safety. AI have the potential to completely wipe out the human race; if their creator could not contain them, how could anyone else?

I would argue that it is unwise, even irresponsible, not to wipe an AI clean, whether you classify that as clearing a memory or a hard drive, as murder or as turning off a computer. Unless a scientist is completely sure an AI has good rather than malicious intentions, forgoing improvements is far too great a risk. I would further argue that it would be less moral to keep the AI “alive” while it posed such a great threat to society. Creating safe AI will take time and will come through trial and error. If we are to pursue artificial intelligence, we must view it as this kind of building process and move forward with caution.