

Artificial Intelligence and its Implications on the Future of Humanity

Jack MacPhee

Fleming

ST112WA

May 16, 2018

Artificial Intelligence and its Implications on the Future of Humanity

Though some may picture robots taking over the world when they envision AI, artificial intelligence is a broad term that covers everything from Apple’s Siri and Amazon’s Alexa to, more recently, autonomous vehicles and robots able to perceive their environment, adjust to it, and complete small tasks. It grows in importance every year, with new forms and uses constantly being dreamt up and brought to life. It is a highly sophisticated area of modern science, and a better understanding of how it works leads to a better understanding of how it may affect our lives as it expands. We know that AI could be an incredible tool once we master it, but which areas will it affect the most, and what significant changes will it bring to life as we know it today? I believe that AI will have massive implications in more areas than one and will one day be an extremely relevant topic in the scientific and intellectual realm. Artificial intelligence matters because it may one day completely change the face of society, with an impact on the job scene and theoretically endless other areas. AI’s development has skyrocketed in the past decade, and the technology is more prominent today than it has ever been. As it is refined, perfected, and applied more widely, the implications could be drastic for the human race, with humans effectively being replaced by machines in some respects.

It is easy to see why one would be uncomfortable with, or put off by, the idea of artificial intelligence becoming a staple of everyday life, as the media and popular culture have attached a negative stigma to it. AI is painted as an enemy that we should fear and try to suppress. These notions are largely unfounded, however, and many experts agree that anything like what we have seen in films such as the Terminator franchise or 2001: A Space Odyssey is far off in the future and essentially a different beast from what we have been able to develop thus far. The technology seen in these films is what is known as “true AI,” which is far more advanced than anything society has built or can experience today. Popular culture has done its best to tarnish the image of artificial intelligence in the public eye: TV, movies, media, and books depict a darker side of robots and artificial intelligence that casts a negative shadow over them. One might expect the race to produce true AI to slow with all of this media, teetering between horror and sci-fi, circling the globe. And yet the field has only accelerated. A great many people find AI somewhat unsettling, and understandably so. The idea that something can get so close to being human while consisting only of wires, metal, and code is disturbing; such machines are prime candidates for the uncanny valley. Supercomputers can recall and analyze more data, and gather information faster, than any one human could. Robots, depending on the purpose they serve, can be hundreds of times physically stronger than a human. Combine the two in one creation, add the ability to learn and think for itself, and the mere thought of what it might be capable of is terrifying. Despite all of this, the field of AI continues to grow, with new advances being made all the time. We are still in the early stages of developing artificial intelligence, however, and the invention of anything like what appears in a movie such as Ex Machina, in which an inventor creates a truly intelligent, sentient robot that can manipulate humans to reach its own desired outcome, is likely still decades away. Adam Coates, director of the Baidu Research Silicon Valley AI Lab, believes this to be so: “I think that sometimes we get carried away and think about ‘sentient machines’ that are going to understand everything the way we do and totally interact with us. These things are pretty far away… A lot of the scaremongering of AI taking over the world or getting out of our control are a little bit overwrought.” Much of the fear surrounding AI comes from overestimating the advancement and power of this technology and conjuring up overhyped ideas of the eventual implications it will have on the human race. The mainstream media and popular culture have given artificial intelligence a bad reputation, making it seem like a force that will eventually harm humans or even bring about the demise of humanity as a whole. The almost apocalyptic stigma attached to it leaves the public wondering not what benefits and positive implications AI could have, but whether the finished product will be deadly. Experts who research, work with, and use it on a day-to-day basis agree that the overwhelming majority of fears and apprehensions people have about AI and where it is headed are largely unfounded.

Now that we have established what makes people so afraid of AI, we can dive deeper into this subject as it pertains to the shared future of this technology and mankind. True artificial intelligence, or “strong AI,” is achieved when a system can take in information from the outside world, learn by itself, and build an awareness similar or superior to that of a human. Some scientists refer to the artificially intelligent systems we have today as “weak AI,” since they can only operate on a set of rules or algorithms given to them by their creators. For example, Apple’s Siri is not sentient in any way: it simply takes user input and gives a calculated response based on the rules and code it was given by Apple. It does not learn or think consciously. A true AI could interact with and learn about things in its environment through observation or trial and error and develop a consciousness on its own. Some theorize that what makes humans “human” is the ability to reason; a true AI would be able to mimic this trait. This is where people begin to get apprehensive about whether we are headed in a safe direction by pursuing the advancement of this technology. Even some of humanity’s most brilliant minds have expressed their concerns on this subject. In an interview, Elon Musk, CEO and founder of both SpaceX and Tesla, had this to say about AI: “We should be very careful about artificial intelligence. If I were to guess what our biggest existential threat is, it’s probably that… Scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence, we are summoning the demon.” He finishes the demon metaphor by noting that in movies and stories there is always an exorcist or medium (humans) who claims to have the demon (AI) under control, and it typically does not work out as planned. Elon Musk is a man at the forefront of state-of-the-art technology, and yet he has his concerns and goes so far as to liken the future of AI to summoning a demon. Hearing this from such a prominent intellectual figure is sure to discourage some from supporting AI. Even Stephen Hawking weighed in on the debate over whether AI could be detrimental to humanity: “The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” And yet the field has shown no signs of slowing down. So why is that? What uses do we have planned for AI that will eventually help humans? What are the downsides to these applications? Well, it is possible that a good chunk of the jobs currently held by humans could be performed instead by machines in the near future.
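Before turning to the job market, it may help to make the weak-versus-strong distinction above concrete. The short Python sketch below is purely illustrative and hypothetical (it is not how Siri or any real assistant is implemented): a rule-based responder of this kind can only map inputs to canned answers its creator wrote in advance, and it never learns from anything outside those rules.

```python
# A toy rule-based "assistant" illustrating weak AI: it follows fixed rules
# written by its creator and cannot learn or generalize beyond them.
# (Hypothetical example; not how Siri or any real assistant works.)

RULES = {
    "hello": "Hello! How can I help you?",
    "weather": "I only have a canned response about the weather.",
    "time": "I can only answer questions I was explicitly programmed for.",
}

def weak_ai_reply(user_input: str) -> str:
    """Return a pre-programmed response if a rule matches, otherwise give up."""
    text = user_input.lower().strip()
    for trigger, response in RULES.items():
        if trigger in text:
            return response
    # No learning happens here: unknown inputs never improve future answers.
    return "Sorry, I don't understand."

if __name__ == "__main__":
    print(weak_ai_reply("Hello there"))            # matches a rule
    print(weak_ai_reply("Explain consciousness"))  # falls outside its rules
```

A "strong AI," by contrast, would be able to handle the second request by learning from its environment rather than failing against a fixed rule set.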

Of all the possible ways AI could change our society, the most drastic is easily its effect on the job scene. In theory, AI or robots could take over almost any job a human can do. Robots can be programmed to do simple jobs that currently rely on a human workforce, and artificial intelligence can perform slightly more advanced tasks that require more than straightforward inputs. An increasing number of people feel that their ability to make a living will be threatened by artificial workers in the near future. Much depends, however, on what job you have and how long you have had it. The relative job security of older generations stems in part from the senior positions they hold within their companies; after all, it is easier to replace somebody at the beginning of their career than to replace entire established branches of management or leadership. On top of this, it is typically these more established employees who will be in charge of decisions about implementing AI in the first place, and they are unlikely to want to replace themselves. For the foreseeable future, only low-level, repetitive tasks will be automated, and those with more nuanced and difficult jobs will likely be safe. This means that younger people with well-developed, adaptable problem-solving and decision-making skills can remove themselves from the immediate threat of being replaced and aim to work alongside AI in more senior positions in the long term. On the other hand, widespread job redundancy is inevitable and will bring a new philosophy regarding the role of human work in society. If machines and robots can perform cognitive functions that were once too difficult to automate, very few jobs are safe. Theorists suggest a universal basic income as a solution, whereby governments provide their citizens with a living wage to offset their inability to secure employment. This is a scary proposition, and the shift from manned jobs to an automated workforce could provoke public backlash. It may not be such a bad thing, however, once you listen to the other side.

A study of 1,000 companies revealed that AI systems created new jobs in 80% of the organizations in which they were implemented. In fact, a 2017 report by Gartner predicts that by 2020 AI will create 2.3 million jobs while eliminating only 1.8 million, a net gain of roughly 500,000 jobs and a much more diverse workforce of creative, high-skilled individuals. As the global economy gears up for the widespread adoption of AI solutions, competition is growing fierce for employees with the scarce skills required to implement, manage, and work alongside this new technology. Developing these skills is therefore vital for any young professional wishing to retain job security in an increasingly automated workplace. And as this skilled workforce drives the AI industry forward at an accelerated pace, the demand for even more highly trained professionals will grow with it. The result, according to Gartner, will be a workplace of adaptable people whose jobs are reimagined, enriched, or facilitated by the technology they work alongside. While it is true that many low-skill jobs will fall by the wayside, replaced by the sophisticated automation AI enables, new careers and industries will emerge that have not been invented yet. Just as our parents struggled to predict the emergence of fields like social media or blogging, so too are we, for the time being, incapable of comprehending the jobs AI will create. Artificial intelligence is still in its infancy and has yet to reach the point of mass adoption, so it is difficult to predict the extent to which it will redefine the workplace or the jobs of the young professionals within it. The most likely scenario, however, is a combination of the optimistic and pessimistic views: an economy that prizes highly skilled, well-trained, and adaptable employees who work alongside very smart machines, and a large segment of low-skilled workers whose skills are made redundant at an alarming rate. While we can appeal to the better nature of organizations to nurture and prepare their staff for an inevitable technological transition, millennials should heed the warning signs, take the initiative, and equip themselves with the skills needed to survive a potentially tumultuous economic evolution.
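For what it is worth, the net figure above follows directly from the two Gartner numbers as quoted; a quick check of the arithmetic (assuming those figures) looks like this:

```python
# Quick arithmetic check of the Gartner figures quoted above (2017 report).
jobs_created = 2_300_000     # jobs the report predicts AI will create by 2020
jobs_eliminated = 1_800_000  # jobs the report predicts AI will eliminate by 2020

net_change = jobs_created - jobs_eliminated
print(f"Predicted net change in jobs by 2020: {net_change:,}")  # 500,000
```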

In conclusion, it can be said with confidence that within the next five to ten years, due to the increasing implementation of artificial intelligence in everyday life, modern society will look very different than it does now. This change will most likely not unfold in the apocalyptic manner some fear, but it will nonetheless be drastic, for better or worse. As the field of AI expands and improves, more applications will arise and more pieces of how our society currently operates will be affected. If these speculations and estimates are correct, the global economy and workplace will see drastic changes, and millions of jobs worldwide formerly performed by human workers could be handed to robots and AI. As stated, those who have held their positions longer and cemented themselves as senior staff will be at lower risk of having their jobs taken by an artificially intelligent system than newcomers whose careers are still in their early stages. On the bright side, these parts of society will almost certainly recover from the blow dealt by the various applications of AI to different companies, markets, and industries. New uses for AI will create jobs that may not even exist today, and we will have a more skilled and effective workforce because of it. With the majority of simple tasks taken over by robots, new opportunities will open up for humanity. It will be interesting to see what kinds of changes unfold as the years pass, and which breakthroughs occur sooner or later than we imagine possible. But one thing holds true no matter what: things will likely never again be as they are today.

 

References     

Meulen, Robert, and Christy Petty. “Gartner Says By 2020, Artificial Intelligence Will Create More Jobs Than It Eliminates.” Gartner, Inc., 13 Dec. 2017, www.gartner.com/newsroom/id/3837763.

 

Floridi, Luciano. “True AI Is Both Logically Possible and Utterly Implausible.” Aeon, 15 May 2018, aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible.

 

Hornigold, Thomas. “When Will We Finally Achieve True Artificial Intelligence?” Singularity Hub, 1 Jan. 2018, singularityhub.com/2018/01/01/when-will-we-finally-achieve-true-artificial-intelligence.

 

Anderson, Janna Q. “The Future of Work? The Robot Takeover Is Already Here.” Medium, Augmenting Humanity, 12 Aug. 2015, medium.com/@jannaq/the-robot-takeover-is-already-here-5aa55e1d136a.

 

Brown, Rosie. “Where AI Is Headed: 13 Artificial Intelligence Predictions for 2018.” The Official NVIDIA Blog, 3 Dec. 2017, blogs.nvidia.com/blog/2017/12/03/ai-headed-2018/.

 

Mulhall, Douglas. Our Molecular Future: How Nanotechnology, Robotics, Genetics, and Artificial Intelligence Will Transform Our World. Prometheus Books.

 

Husain, Amir. The Sentient Machine: The Coming Age of Artificial Intelligence. Scribner, 2018.

 

Genetics: The Tip of the Iceberg

When most people think of genetics, they think of all of the predetermined codes and sequences from our parents that will come together to create us in our own unique way. And they would be right to think this. However, the idea of epigenetics brings another level to this entire concept.

Epigenetics, which translates to “above genetics,” is the study of changes caused by modifications in gene expression rather than by physical alterations of the genetic code itself. These changes are driven mainly by environmental factors such as diet, habits, and tendencies. What we eat, what we do, and what we become addicted to are among the things that can change our epigenome; that is, they change what our original genome is told to do, leading to altered gene expression. The physical manifestation of epigenetics comes in the form of “tags” or “markers” attached to DNA where it is wrapped around histones, the bundles of protein that act as spindles for sections of genetic material.
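To make the idea of tags that change expression without changing the code more concrete, here is a deliberately simplified sketch in Python. It is an analogy, not a biological model: the gene sequence is made up, and the `silenced` flag merely stands in for the chemical markers described above.

```python
# A deliberately simplified analogy (not a biological model): an epigenetic
# "tag" switches a gene's expression on or off while its sequence is untouched.
from dataclasses import dataclass

@dataclass
class Gene:
    name: str
    sequence: str            # the underlying genetic code never changes here
    silenced: bool = False   # stands in for a silencing marker/tag

    def express(self) -> str:
        if self.silenced:
            return f"{self.name}: expression switched off by an epigenetic tag"
        return f"{self.name}: expressed from unchanged sequence {self.sequence}"

# Hypothetical gene with a made-up sequence, named after the mouse study below.
agouti = Gene(name="agouti", sequence="ATGCCGTA")
print(agouti.express())      # expressed normally

# An environmental factor (e.g. the mother's diet) attaches a silencing tag.
agouti.silenced = True
print(agouti.express())      # expression switched off
print(agouti.sequence)       # "ATGCCGTA" -- the code itself is unchanged
```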

One study that helped confirm the existence of epigenetics, and gave a closer look at its implications, was an experiment on mice carrying a specific gene. Mice carrying the agouti gene, which causes them to be obese and more susceptible to diseases like cancer and diabetes, were either left alone to reproduce naturally, with no outside forces affecting the mother, or were given by the researchers a diet of special foods containing “silencing” markers for the agouti gene. It was discovered that the harmful gene can have its effects negated, essentially being switched off, just by changing the mother’s diet, leaving the original genetic code untouched.

A good metaphor I found online helps explain this phenomenon in simpler terms. Imagine you have an identical twin or clone who shares 100% of your genes but is brought up separately from you. They develop different tastes and lifestyles and make different choices than you do: they smoke, eat differently, have a stressful job, and in general experience life differently than you have. But at their core, they are still genetically identical to you. Over time, you have become different people, not just on the outside but on the inside as well, due to environmental factors. Now imagine the genetic code as a paragraph. If you compare your paragraph with your twin’s, the letters will all be in the exact same order, since you are still identical to one another, but the spacing and punctuation will have changed over time, effectively giving the paragraph a different message. This “genetic punctuation” is epigenetics: the markers at the end of each sentence are what give the sentences different meanings than your twin’s. Identical genes, raised in separate environments, eventually mold two different expressions of the same person.

Epigenetics can be a difficult concept to grasp, since most people see genetics as a predetermined code that shapes every facet of life it has control over, but this is simply not the case. In fact, the nurture side of your existence can spill over into the genetic side of things under the right circumstances. This renders standard genetics the metaphorical tip of the iceberg when considering how one becomes the person they eventually are.

Should We Be Afraid of Artificial Intelligence?

“One day, the AIs are going to look back on us the way we look at fossils and skeletons in the plains of Africa”

-Nathan, Ex Machina

Over the past few decades, popular culture has done its best to tarnish the image of artificial intelligence in the public eye. Films like Terminator, Ex Machina, 2001: A Space Odyssey, and others depict a darker side of robots and artificial intelligence that casts a negative shadow over them. One might expect the race to produce true AI to slow with all of this media, teetering between horror and sci-fi, circling the globe. And yet, the field has only accelerated.

A great many people find AI somewhat unsettling, and understandably so. The idea that something can get so close to being human while consisting only of wires, metal, and code is a disturbing thought. Supercomputers can recall and analyze more data, and gather information faster, than any one human could. Robots, depending on the purpose they serve, can be hundreds of times as strong as a human. Combine the two in one creation, add the ability to learn and think for itself, and the mere thought of what it might be capable of is terrifying.

Despite all of this, the field of AI continues to grow, with new advances being made all the time. However, we are still in the early stages of developing artificial intelligence, and the invention of anything seen in a movie like Ex Machina is still likely decades away. Adam Coates, director of the Baidu Research Silicon Valley AI Lab, believes this to be so: “I think that sometimes we get carried away and think about ‘sentient machines’ that are going to understand everything the way we do and totally interact with us. These things are pretty far away… A lot of the scaremongering of AI taking over the world or getting out of our control are a little bit overwrought.” Much of the fear surrounding AI comes from overestimating the advancement and power of this technology and conjuring up overhyped ideas of the eventual implications it will have on the human race.

The mainstream media and popular culture have painted artificial intelligence as a force that will eventually harm humans, or even be the ultimate demise of humanity as a whole. The almost apocalyptic stigma attached to it leaves the public wondering not what potential benefits and positive implications AI could have, but whether the finished product will be deadly. Experts who research, work with, and use it on a day-to-day basis agree that the overwhelming majority of fears and apprehensions people have about AI and where it is headed are largely unfounded. So should we be afraid of AI? CGI movies say yes, but scientists say no.

Research Proposal: Artificial Intelligence and its Implications on the Future of Humanity

Jack MacPhee

ST112WA

Fleming

April 4, 2018

 

Artificial Intelligence and Its Implications on the Future of Humanity

A topic relating to Science, Technology and Society that I would like to research further is artificial intelligence, or AI. Though some may think of robots becoming all-powerful and taking over the world when they envision AI, artificial intelligence is actually a broad topic that includes things like Siri as well as more complex technologies like autonomous vehicles and robots able to learn and process information as a conscious being would. It is growing in importance every year, with new forms and uses constantly being dreamt up and brought to life. I want to explore this highly sophisticated area of modern science and gain a better understanding of how it works, and of how it may affect our lives the more it expands.

The question that will guide my research and help me explore this topic further is as follows: What areas will AI affect the most and what significant changes, if any, will it bring to life as we know it today? I believe that AI will have massive implications in more areas than one and will one day be an extremely important topic in the human scientific and intellectual realm. I plan to delve deeper into where AI is headed, as well as what our lives may look like because of it in both the near and distant futures. Artificial intelligence is very important because it may one day completely change the face of society with an impact on the job scene, political scene, and theoretically endless other areas. AI’s relevance has skyrocketed in the past decade, and is more prominent today than it has ever been.

As this technology gets increasingly refined, perfected, and applied in more areas, the implications could be drastic for the human race, with humans effectively being replaced by machines. It is easy to see why one would be uncomfortable with or put off by the idea of artificial intelligence becoming a staple of everyday life, but I believe there must be a method to the madness explaining why scientists continue to pursue this field, and I intend to discover it with my research.

Bibliography:

“Benefits & Risks of Artificial Intelligence.” Future of Life Institute, futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Mulhall, Douglas. Our Molecular Future: How Nanotechnology, Robotics, Genetics, and Artificial Intelligence Will Transform Our World. Prometheus Books.

 

    

Are There More Than Two Cultures?

In C.P. Snow’s famous book The Two Cultures, he argues, as you might expect, that Western society is divided into two cultures that generally do not interact with one another and regard each other negatively. These two groups are the scientists and the humanists. Based on life choices alone, individuals on either side of this spectrum likely do not have many interests in common, and it is easy to see why they might think less of members of the other side. Scientists were likely never too interested in the arts, and people concerned with the humanities probably don’t care what happens when certain chemicals interact on a molecular scale. Scientists regard humanists as lazy, flighty, flowery, and crunchy, while those same humanists turn around and call scientists narrow-minded geeks and nerds. While these two “cultures” have their differences, in order to judge whether this is a legitimate argument, we must first define what a culture is and decide whether there is really more going on.

I believe a culture can be defined as the combination of ideas and values held by a specific group or community, as well as the way a member’s upbringing in that community shapes their development as they experience it and age. Based on this definition, I can somewhat see how science and the humanities could be considered cultures. In both of these disciplines, there are certain things valued by those who consider themselves part of them: in science, hard facts, skepticism, and accuracy are highly valued; in the humanities, critical thinking, evidence, and literature are important. That said, I still do not think it is safe to go so far as to say that science and the humanities are the only two intellectual divisions in Western society.

Across the globe, there are considerably more than two cultures to be found. However, these are not the cultures C.P. Snow is talking about in his book. His point is that although renowned humanists and scientists are both considered smart, if either were taken out of their culture and put into the other, they would be seen as far inferior in intellectual ability. The disciplines that I think bridge the gap between these two cultures are the social sciences, such as sociology, economics, and psychology. These fields are still regarded as sciences, since they study behaviors and trends through observation and experiment, but they simultaneously take a closer look at human history, culture, and so on. I believe the social sciences can be seen as a counterargument to the ideas raised in The Two Cultures. Because social scientists deal to some extent with a chunk of either side, if they were taken out of their niche and placed into one of the other two cultures, the one-on-one interaction would not provoke as negative a response. They do not study exactly the same things, but a mutual understanding would be reached much more easily.

C.P. Snow’s lecture on the two cultures of science and humanities makes a few valid points, but in the end fails to account for a possible addition to this list. The social sciences represent the overlapping area of the Venn diagram of intellectual western culture. Some may argue that social science could be considered a class of its own, but I believe its roots extend into both sides.

Women in Science and Technology

The roles of men and women have changed dramatically in contemporary society. Women have gained more freedom to express themselves and take an active part in the development of technologies, even though problems remain in this field. People realize that gender equality is one of the components of a healthy society and that true development is impossible without it. Only by understanding the contribution women can make to science and technology can we bring a positive impact to the field’s development.

At the present moment, all over the world with few exceptions, women take an active social role and demonstrate their abilities in many fields. Nowadays women are active in goods production, natural resource management, education, and community management. Women occupy a variety of positions in these professions, and the spheres mentioned above are mostly considered to be female ones. A large percentage of women work in the medical industry as well.

Despite growing technological development and the popularity of feminism, women still do not hold an equal position in society. Although women make up approximately 50% of the global population, they have access to less than half of the resources in terms of technology, financing, land, training and education, and information. Many specialists believe that true progress and development are not possible without women’s active participation in these processes. A gender lens would be an essential contribution to the development of science, technology, and innovation and would enable people to meet global challenges.

History offers many examples of great contributions made by women to the development of science and technology. The scientific and industrial fields, like other technological industries, are influenced and dominated by men, and although women have played an important role in the development of these industries, their names are rarely mentioned. For example, Ada Byron King, the daughter of the famous Lord Byron, became the first computer programmer. She was also a prominent mathematician. Unfortunately, her name is rarely mentioned in the history of the computer industry; when people speak about this sphere, they remember the names of famous male specialists. Earlier history also records famous female specialists in science and technology, but their names are even less remembered than Ada Byron King’s. Hypatia, a mathematician of ancient Alexandria in Egypt, made great contributions to science and is credited with inventing the hydroscope and the plane astrolabe. Maria Gaetana Agnesi made a great contribution to science through her work in differential calculus in the 1700s. Sofia Kovalevskaya worked in Russia in the 1800s, and her contributions to astronomy and mathematics are vastly important. In recent history, it is worth mentioning Grace Hopper, a woman with a PhD in mathematics. She became one of the leaders in the field of software development and made a great contribution to new programming techniques. Grace Hopper was among the first to recognize the benefits people could gain from the use of computers, and she did a great deal to put her ideas into practice. She realized that making computers easier to use would increase their popularity among ordinary people. In this way she contributed to the computer revolution and made computers more accessible. This case illustrates not only women’s ability to achieve excellent results in science and technology; it also demonstrates, in the author’s view, the way the female brain works: in contrast to men, who think about complicated schemas and complex technologies, women take into account the sphere of application and use this to make technology more available to people.

Despite an unfair amount of neglect from the “higher ups” in the field of science and technology, not every great female achievement has gone unnoticed. Today, women are becoming more and more prominent in these areas and are surging forward as the world becomes more progressive.

Frankenstein: The Dangers of Obsession

Victor Frankenstein was a brilliant scientist when it came to his achievements and what he was able to accomplish in his lab. He was able to reanimate a dead human being that could think for itself and carry out tasks, something seen as an act only gods should be capable of. Where Victor fell short as a scientist was in his ethical approach to the experiment in which he created his “monster”.

Frankenstein was about as passionate as a scientist could get. He spent countless hours tirelessly working on his experiments, which is a major reason why he succeeded in his goal of bringing a human body back from the dead. But this still raises the question: should he have even attempted it? Are there some things that should be left alone in the world of science? If you ask the very creation that came from Victor’s experiment, the answer would be an emphatic yes. While Frankenstein was caught up in his obsession with conquering death, he neglected to consider the possibility of his creation having feelings and being able to reason and comprehend its own existence. Once back in the realm of living humans, the creature resented Frankenstein for bringing him into this world. On top of this, Frankenstein was disgusted by what he had made, saying “never did I behold a vision so horrible as his face”. He had given life to a creature doomed to an existence of neglect, despite any good intentions it may have had. This is one of the main areas where I believe his ethics are almost non-existent. Yes, he accomplished something scientists before him only dreamed of, but perhaps that is how it should have remained: a dream.

Another piece of Victor’s flawed ethics is his lack of hesitation to “play god” with his experiments. There are some experiments that modern scientific ethics simply will not allow, like using human participants in potentially life-threatening studies without their consent or knowledge. In a sense, this is what Victor did with the monster, but instead of suffering and death, he gave the monster suffering and life. The monster never asked to be reborn, and now it lives a life of loneliness and ostracism. Who was Frankenstein to toy with the forces of nature by bringing back a deceased human? Eventually, his poor ethical choices came back to haunt him when the monster he created, whom he could have accepted and marveled at but instead shunned and regarded with disgust, destroyed his family and others as well. An effort to give the creation that resulted from his life’s work a comfortable existence would have gone a long way toward protecting the lives of others, but he failed at this as well. Victor Frankenstein wanted to play with death, but did not prepare for the consequences.

Frankenstein is the story of an incredibly smart man carrying out a groundbreaking experiment, but lacking the ethical competence or foresight to reap any sort of benefit from his “achievement”. Whether or not he should have attempted the reanimation was a decision he tackled rather hastily, and it shows that sometimes, having all the passion and obsession in the world is not necessarily a recipe for becoming great.

What If We’re Wrong?

Jack MacPhee

STS112WA

Fleming

2/21/2018

The Scientific Revolution: a period in which nearly everything people in Europe thought they knew about the natural world was called into question. Some of humanity’s most brilliant minds began voicing their once-silenced findings and revelations to the outside world. Sir Isaac Newton brought to the surface new physics concepts of light, motion, and, most famously, gravity. Aristotelian cosmology was ripped apart by Copernicus and Galileo. Ideas considered scientific fact at the time were slowly debunked one by one until the field of science was no longer even recognizable. Reading about this incredibly important era in human history got me thinking: what if another event like this were to happen today? What might the history of science look like in the distant future?

Nowadays, we have rigorous scientific methods that allow us to say with certainty that some things simply are true. We like to think we have a good grasp on how to prove something is factual. But what if we don’t? We know that we do not have an answer for everything, as was the case before, during, and after the Scientific Revolution. There are some things that even today’s science cannot explain, like how the placebo effect works or what dark matter is. Instead of forcing some ridiculous substitute explanation in the absence of a scientifically based one, we now know to keep digging deeper into these mysteries to find the “true” answer. But what if one of our current geniuses were to stumble upon something as revolutionary as what Galileo Galilei and Isaac Newton discovered in their day? Is this even possible? If something like this were to occur, it would obviously be devastating in a number of ways. It is hard to imagine what it would be like if nearly everything we have been teaching in schools and working on in the field of science turned out to be false. If someone came along and told scientists they had been looking in the wrong places for answers the whole time, I think it would be one of the worst things to happen to society in a long time, because hundreds of years of progress would be instantly swept away.

Obviously, an event like this is all but impossible given how much extensive research we put into every area of science today. Scientists don’t just double-check; they devote their lives to making sure something is true every single time. Nonetheless, toying with the idea of the original Scientific Revolution unfolding in today’s setting is extremely unsettling. I can’t imagine being around in the 16th and 17th centuries when scientists like Copernicus and Galileo were telling me everything I knew was a lie. Neil deGrasse Tyson once gave a lecture in which he explained that there is only a 1% difference in DNA between humans and other primates like chimpanzees. He went on to detail that within that 1%, we are so vastly more sophisticated that we can build cities, teach calculus, and do all these other amazing things. He then proposed a deeply disturbing thought to the audience: what if there existed a creature that was 1% different from humans in the way we are from chimpanzees? We would seem completely unintelligent to them. The simplest of their ideas would be the most complex of ours. They would be able to explain the things that we cannot, even with our cutting-edge technology. Though it is only a thought, if such creatures were to exist, we could be proven wrong about everything.

Technology and its Unwavering Pertinence

Melvin Kranzberg’s fifth law of technology states that “all history is relevant, but the history of technology is the most relevant.” At first this statement might seem as though there must be some sort of bias behind it. To label all fields of history other than technological history as inferior with one heavy sentence seems ridiculous. But upon further dissection, one begins to see the truth in this loaded statement.

Kranzberg’s fifth law does not outright say that history not pertaining to technology is unimportant; it still acknowledges the importance of other areas, but it does place the history of technology above all others. Some may disagree with this statement, but I happen to agree and can see where Kranzberg is coming from. Human history can be viewed as the events that have transpired since humans have been on Earth, but in terms of what has made humans the way they are today and shaped society, technology is easily the most important factor to consider. The way humans interact with one another and create history is important, but the driving force that pushes mankind forward collectively as a society is advancement in technology. The first Homo sapiens to walk the earth did not survive by settling for what the earth gave them; they were capable of thinking, reasoning, and innovating effective survival tactics. The timeline of the human race can be organized chronologically by each era’s contributions to the longevity of humanity through its advancements in technology. We have progressed so far that survival is no longer at the forefront of our priorities, as it is for nearly every other form of life on earth. Now we can focus on making technological leaps once thought impossible, like populating Mars or perfecting artificial intelligence. Human history not only revolves around technology; it is made possible by it.

The idea of technological history being the most relevant type of history ties into Kranzberg’s second law of technology, which states, “Invention is the mother of necessity.” New inventions and innovations spur the need to invent more technology to solve whatever complications arise from them. One example is the cell phone. Cell phones, when first introduced, were a major milestone in technology, allowing people to communicate with others no matter where they were. One problem was that they would easily break if dropped, creating a need for the invention of a phone case to protect them. Along with this, people did not simply invent cell phones and settle for the very first prototype ever made; the invention of the cell phone created competition between companies to innovate and expand on the idea. This is why we have ended up with extremely advanced handsets like iPhones and other “smartphones,” as they are categorized today. As such a prominent part of our current everyday culture, cell phones serve as a prime example of how technological advances throughout history are the most relevant and influential.

Though human history can be conveyed or examined through many different lenses, looking at it through the technological perspective reveals a clear correlation between advances in technology and the shape of historical events along the timeline. History is best understood when put into the technological context of the period in question, making the history of technology the most relevant.

 

 
