Why do people fear technology? An apocalyptic world in which technology and robots have taken over, subjugating or wiping out the human race, is a ubiquitous scene in hit movies and dystopian novels. Two prime examples come to mind. The first is The Terminator, starring the one and only Arnold Schwarzenegger, in which an artificial intelligence becomes self-sufficient, independent of man, and starts a nuclear war to wipe out mankind. I, Robot, featuring Will Smith, is another film that portrays the dangers of technology: robots have come to replace humans in most occupations, then spiral out of control and launch a rebellion against the human race. Movies like these reiterate the notion that technology is bad, that it will gain power over man, and that there is nothing we can do to stop it. We fear that AI will become superintelligent to the point where it can make its own decisions and do evil. Instead of anticipating a bleak future, we have to ask ourselves, “Is technology truly inherently bad or dangerous?”

In the 1980s, Melvin Kranzberg, a former history of technology professor at Georgia Tech, addressed this question in one of his six laws of technology: “Technology is neither good nor bad; nor is it neutral.” Kranzberg’s point is that technology itself is not bad or dangerous; rather, it is the unintended consequences of a technology, or the people behind it, that can make it “bad” or “good.” Most of the technology we fear, like robots or AI, is not meant to be bad; it is developed to be useful, to increase productivity and efficiency, and to make tasks easier. In Monday’s “Space Junk” talk, the speaker presented a real-life event in which technology unintentionally sparked fear and controversy. In 1977, the Soviet Union launched a rocket into space; it malfunctioned, the nuclear core did not separate from the rocket correctly, and by chance, radioactive fragments reentered the atmosphere and landed in Canada. Although no one was injured in the incident, people living nearby in Canada became fearful, thinking that the Soviets had planned the debris shower and that they were no longer safe. The radioactive debris was an unintended consequence of sending the rocket to space, yet it stirred up tension and fear. Once again, the intention behind the technology, advancing space travel, was not bad in nature; but because the people behind it did not consider all the consequences, a series of events unfolded that made the rocket, and those involved in creating and authorizing it, seem malicious. When analyzing technology, we should always remember that technology itself is “neither good nor bad; nor is it neutral.”

The fear portrayed in both of the films I mentioned earlier comes from the idea that technology will reach a point where we are no longer in control, when it becomes independent of man and has the power to turn against us. Not being in control of our own lives scares us. Kranzberg’s laws also address this fear, for he says that “technology is a very human activity.” By saying this, Kranzberg identifies that technology itself is not alive and that humans are still very much the fabricators and controllers of technology. On a similar note, the idea that technology controls our society and its future is captured by technological determinism, a view that gives agency to technology. We have to steer away from technological determinism because it is not an accurate picture of how technology develops. Humans have agency, not technology, for technology is a tool in the hands of man. Furthermore, technological determinism propagates the notion that it is inevitable technology will reach a point where it can destroy us, which is not true: once again, humans are the ones with agency, and there has yet to be an incident in which technology, by itself, has chosen to do something bad.

A robot apocalypse may be a daunting thought; however, it may be a fear that has been blown slightly out of proportion. As Kranzberg wrote, “Technology is a very human activity.” Our fear of losing control of technology comes from our tendency to personify computers and machines and to see them as alive. Technology is a tool, not a person. Our fear is furthered by the flawed idea of technological determinism, which claims that technology advancing to a dangerous point is inevitable. All this said, I cannot predict the future; what we must remember is that technology itself is not evil, and that we as humans have agency, not technology.