The Unfounded Fear of Machine Learning

If one were to take a random group of people and ask their opinions on machine learning, the vast majority would voice concerns about robots rising against their creators and wreaking havoc on society, often referencing movies such as The Terminator and Ex Machina. The cultural lexicon regarding machine learning is misleading as to what machine learning is and what it can potentially achieve. Though many fear malignant artificial intelligence, the fears of machine learning are largely unfounded and can be tempered through an understanding of the particularities of machine learning, of suitcase words, and of previous technologies that were initially met with resistance but became invaluable.

The first step to seeing machine learning for what it actually is lies in understanding its mechanics. At its core, machine learning is the process of giving a computer a large amount of data and an algorithm, with the goal of the computer becoming better at its task through a large number of repetitions, each of which slightly adjusts the algorithm. This concept is abstract on its own and is best made concrete through the sketch below and the example that follows it.
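Before turning to a real example, the core “repeat and adjust” loop can be suggested in a few lines of Python. This is a minimal illustrative sketch, with invented data and a made-up learning rate, not how any production system is written:

```python
# A minimal sketch of the "repeat and adjust" loop at the heart of machine
# learning: fit a line y = w * x to example data by nudging w after every pass.
# The data points and learning rate are invented purely for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, expected output) pairs
w = 0.0             # the single adjustable parameter
learning_rate = 0.01

for step in range(10_000):              # sheer repetition
    for x, y in data:
        error = (w * x) - y             # how wrong is the current guess?
        w -= learning_rate * error * x  # adjust slightly to reduce the error

print(f"learned w = {w:.2f}")  # approaches the slope that best fits the data
```

Nothing in the loop is mysterious; the apparent intelligence comes entirely from repeating the small adjustment many thousands of times.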

Open a photos app on a smartphone and one of the options for organizing the photos is by person. One can search a friend’s name and, as if by magic, the app will know which photos contain that friend and present them. Through facial recognition, the app is able to recognize faces and determine which faces are the same. This is achieved through machine learning: the computer “learns” what faces are, and which are the same, by being given millions upon millions of photos of faces and running them through a series of algorithms to ultimately predict whose face it is looking at.

The first step is to determine whether the photo in question actually contains a face. The computer abstracts the photo’s pixels into gradients so that the image becomes a map of light and dark, essentially creating an outline of the image. It compares this outline to a known outline of a face, itself a compilation of millions of such outlines, and if the two are similar enough, the computer deems the picture to contain a face. Before the computer can begin to identify whose face it is, it skews the photo so that odd camera angles are straightened out and the face best matches a head-on view.

Finally, the computer can begin to identify the face. It takes a set of measurements, such as the distance between the eyes, the size of the nose, and the width of the lips, and compares this set of numbers against a database of measurements from known faces, selecting the known face with the closest match. This in itself does not seem particularly amazing, but the magic is that, through repetition, a computer can become remarkably fast and accurate at facial recognition. As it is given more and more photos to analyze, it develops a more discriminating set of measurements and computes them faster. One way it refines its algorithm is through triplet training: the computer takes two different images of the same person and a third image of a random person, computes the measurements for each, and then slightly tweaks how it measures so that the two sets of measurements for the same face are more similar to each other than to the third. By doing this millions of times, the computer becomes able to reliably generate the same set of measurements for a photo of a face taken at any angle (Schroff, 2015). Currently, Facebook can recognize faces with 98% accuracy in a matter of milliseconds (Geitgey, 2016). Through sheer repetition and slight adjustment after each step, the computer becomes extremely adept at one specific task.
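The triplet comparison can be sketched in code. The following is a rough illustration in the spirit of FaceNet (Schroff, 2015), not the actual system: embed() is a hypothetical stand-in for the learned network that turns a face image into a vector of measurements, and the “images” are random arrays:

```python
import numpy as np

def embed(image, params):
    # Hypothetical stand-in: a linear map from pixels to a 128-number
    # "measurement" vector. Real systems use a deep neural network.
    vec = params @ image
    return vec / np.linalg.norm(vec)  # normalize so distances are comparable

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Penalize the model unless the two photos of the same person sit closer
    # together than the photo of the different person, by at least a margin.
    pos_dist = np.sum((anchor - positive) ** 2)
    neg_dist = np.sum((anchor - negative) ** 2)
    return max(0.0, pos_dist - neg_dist + margin)

# Two images of the same face plus one of a stranger (random stand-ins here);
# training tweaks params, millions of times, to push this loss toward zero.
rng = np.random.default_rng(0)
params = rng.normal(size=(128, 1024))
img_a, img_p, img_n = rng.normal(size=(3, 1024))
print(triplet_loss(embed(img_a, params), embed(img_p, params),
                   embed(img_n, params)))
```

Training consists of nothing more than repeating this comparison over millions of triplets and nudging the parameters each time the loss is nonzero.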

Machine learning has applications in fields far beyond facial recognition. In health care, it is being used to diagnose diseases long before they are usually diagnosable. For instance, given a large collection of radiological images, a computer can learn to identify the minuscule differences in the images that indicate tumors. Through the same process as with facial recognition, the computer gets progressively better at identifying tumors through small alterations of its measurements over large numbers of iterations, until it is able to spot tumors that humans ordinarily cannot. In financial markets, machine learning can be used as a predictive tool. The computer will “learn” the patterns of a particular market and, with enough data, predict what will happen next based on current information. It recognizes patterns in its historical data and extrapolates from them. Humans make most stock predictions in essentially the same way, but the computer can recognize patterns so complex or so subtle that people cannot perceive them, producing a more accurate and insightful prediction. Recently, Bank of America, Morgan Stanley, and JPMorgan have all begun to develop and implement machine learning-based market prediction tools because they recognize the usefulness of machine learning’s pattern-recognition capabilities (Eisenberg, 2018).
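As a toy illustration of the pattern-recognition idea, and emphatically not how any bank’s system works, one could predict the next price move by finding the most similar stretch of past moves and assuming history will rhyme; the price series below is a random stand-in:

```python
import numpy as np

# Toy sketch: look at the last few price moves, find the most similar
# stretch in the historical record, and guess that what followed then
# will follow now. Real systems learn far subtler patterns than this.

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=500)) + 100.0  # fake price history
returns = np.diff(prices)                         # day-to-day moves
window = 5

def predict_next(recent, history, window):
    # Slide over history, score each past window by similarity to `recent`,
    # and return the move that followed the best-matching window.
    best_score, best_next = np.inf, 0.0
    for i in range(len(history) - window):
        score = np.sum((history[i:i + window] - recent) ** 2)
        if score < best_score:
            best_score, best_next = score, history[i + window]
    return best_next

print(predict_next(returns[-window:], returns[:-window], window))
```

The real systems differ in scale and sophistication, but the principle is the same: the prediction is a pattern match against historical data, not foresight.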

Because machine learning is still a burgeoning field of computer science, its true potential is still being realized. With more and more data, machine learning has the potential to yield progressively more novel insights in virtually any field. When taken apart and examined thoroughly, the mystique behind machine learning is stripped away, and what is left is a powerful tool with near-limitless applications. The issue is that this portrayal of machine learning is not what appears in popular media and culture. The term is often used as a buzzword to describe a computer completing a humanlike task. What machine learning actually is goes unexplained, because explanation strips away the awe and marketability of a computer completing a previously impossible task, and there is usually neither the time nor the motivation to explain it properly. In the worst case, people come to believe computer scientists are creating computers that literally think the way humans think, and that it is only a matter of time until the machines rise up and kill their creators. Sadly, this image of machine learning and AI is the one most often portrayed in mainstream media, further feeding a cultural misunderstanding of machine learning. Lacking an understanding of what machine learning is, people generate incorrect assumptions about how it operates, which lead to further false assumptions about how it will develop. One source of these false assumptions is the phenomenon of suitcase words.

When most people hear of machine learning, they imagine a computer learning a task the same way a human would. Because the way machine learning operates is neither obvious nor easy to explain, people reach for analogies that often produce an incorrect understanding of how it works. One large reason for this misunderstanding is the phenomenon of suitcase words. “Learning” is a suitcase word because people pack into it assumptions about how learning is performed that are not always correct. Humans learn through observation and repetition and can quickly apply what they already know to new tasks; they engage in “sponge-like” learning that is simply not how machine learning operates. Computers “learn” by being given a large amount of data and slowly getting better at a task through sheer repetition and adjustment. They require specific training data that has been set up by researchers in order to function, and even then they often cannot make the correct prediction when given a piece of data unlike anything they were trained on. They are incapable of adapting and learning the way humans do. One example is the Deep Blue machine that beat chess grandmaster Garry Kasparov in 1997. The computer used brute force to analyze every possible move, and each successive reply, far more steps into the future than a human could, selecting the move with the best chance of success. Humans play chess through overarching strategy and the deliberate placement of pieces according to ideas they have developed about the game. Effective as exhaustive search may be, humans do not analyze each possible move and all of its subsequent potential moves, because this would simply take too much time and mental computation to be a practical strategy. Yet when people heard that the computer had beaten Kasparov, they assumed it had learned to play chess the same way humans do, only better (Berliner, 1988). Through this incorrect assumption about how machine learning operates, people project human characteristics onto it, making it seem far more sentient than it actually is. This links back to object recognition, where people assume the computer is actually learning what an object is.
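The brute-force flavor of this kind of play can be suggested with a toy minimax search. Deep Blue’s real search was vastly more sophisticated and chess-specific; the “game” here is invented, with a state that is just a number, purely to show the analyze-every-move-and-every-reply idea:

```python
# Toy minimax: explore every move, then every reply, down to a fixed depth,
# and pick the line with the best guaranteed outcome. The three callables
# (moves, apply_move, evaluate) are placeholders for real game logic.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    if depth == 0 or not moves(state):
        return evaluate(state)          # score the position at the horizon
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in moves(state))
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in moves(state))

# Invented game: a "state" is a number, moves add or subtract one, and the
# evaluation is the number itself. Prints the best achievable score.
print(minimax(0, 3, True,
              moves=lambda s: [+1, -1],
              apply_move=lambda s, m: s + m,
              evaluate=lambda s: s))
```

The contrast with human play is the point: the program considers every branch mechanically, while a person prunes almost everything using strategic intuition.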

Give a computer millions of photos of bananas, and through machine learning it will be able to recognize a banana when one points a phone at it. But people often think machine learning is much more. Because it is called machine learning, people assume the computer is “learning” in the way humans do. In the case of bananas, the computer is learning what a banana looks like from millions of photos of bananas. It does not actually know what a banana is; it only knows that the collection of pixels a photo of a banana creates is labeled “banana.” People often do not understand this and believe the computer is gaining intelligence and, ultimately, consciousness, leading to unfounded conclusions about machine learning.
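What such “recognition” amounts to can be sketched directly. The labels, weights, and “photo” below are invented stand-ins, but the shape of the computation is representative: the answer is nothing more than whichever label scores highest:

```python
import numpy as np

# Sketch of classification: reduce the pixels to one score per label and
# answer with the highest-scoring label. Everything here is invented.

labels = ["banana", "apple", "orange"]

def classify(pixels, weights):
    scores = weights @ pixels                      # one raw score per label
    probs = np.exp(scores) / np.exp(scores).sum()  # softmax: scores -> confidences
    return labels[int(np.argmax(probs))], float(probs.max())

rng = np.random.default_rng(2)
weights = rng.normal(size=(3, 64))  # stand-in for millions of learned numbers
pixels = rng.normal(size=64)        # stand-in for a photo
label, confidence = classify(pixels, weights)
print(label, round(confidence, 3))

# The program never "knows" what a banana is; it only maps pixel patterns
# to whichever label its training data attached to similar patterns.
```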

One way to calm concerns that machine learning will fundamentally change society for the worse is to look at historical examples. One of the most prominent cases of a technology initially met with resistance but eventually becoming indispensable comes from the Industrial Revolution: the textile industry and the Luddites.

Before the Industrial Revolution, the textile industry of the United Kingdom was entirely human-powered. Weaving, spinning, and trimming were all done by hand. The industry employed the majority of the citizens in the north of the UK, and although the process of creating the final product was inefficient and costly, the workers were content because they were employed and had a large amount of control over when and how they worked. But these inefficiencies began to eat into profits, and factory owners started looking for ways to improve efficiency and cut production costs. They brought in machines that automated the cotton-weaving process and adopted techniques that produced more fabric, albeit at lower quality, in far less time. The workers, enraged at the subsequent decrease in wages, formed a group called the Luddites, who broke into the factories at night and destroyed the new machines with massive sledgehammers. The Luddites feared the machines would take away their jobs and their livelihoods, so they set out to destroy them. The movement quickly accelerated and reached a feverish peak when a Luddite assassinated a factory owner in broad daylight.

As one can guess, the Luddite movement did not culminate in the removal of machines and automation from society. Ever since the Industrial Revolution, the presence of automation has been steadily increasing, and it has accelerated recently with the advent of computers and machine learning, as tasks previously thought un-automatable have been automated. This has led to a re-emergence of the fear the Luddites had back in the 1800s: people are scared that robots will take their jobs and leave them unemployed. But history shows this to be a largely unfounded fear. When fewer factory workers were needed because of the incorporation of newly invented machines, the displaced workers did not simply form a large unemployed mass; they moved into new jobs created by the advent of those same machines. Because less-skilled jobs are now automated, jobs that demand more creativity and skill have become the norm, producing a more skilled and creative workforce. There will inevitably be a transitional period, one that can be shortened through public education, but new jobs are ultimately created to replace those taken by automation. This was the case in the 1800s with the advent of textile machines, and it is the case now with machine learning and artificial intelligence (Thompson, 2017).

Though the vast majority of the ideas fueling fears of machine learning are unfounded, it is true that nobody quite knows what future technological advancements will bring. Currently, machine learning is an extremely powerful tool with a broad range of applications, but it requires specific setup by its user. In the indeterminate future, nobody knows how powerful or essential machine learning will become. But through careful implementation and constant analysis, society can continue to innovate in the field of machine learning and continue to improve people’s lives.

Though machine learning has the potential to improve society in a host of ways, most people hold a largely unfounded fear of it. This fear can be understood, and subsequently alleviated, through an understanding of how machine learning actually operates, of the phenomenon of suitcase words, and of previous technologies that were initially met with resistance but became integral parts of society. Machine learning is simply the improvement of a computer at an extremely specific task through millions upon millions of iterations. Most people believe this process resembles how humans think because the term “learning” carries connotations and assumptions about how one “thinks.” Finally, the fear of machine learning upsetting the labor force can be calmed by looking at the Luddites and the absence of widespread unemployment, thanks to the new jobs created by the advent of the machines. If people correctly understood machine learning, the mystique surrounding it would be shed, revealing a powerful tool that can be used to improve lives. Ultimately, the most likely way for machine learning to negatively affect people’s lives is through a malicious human using it maliciously, for machine learning is ultimately neither good nor bad, but simply a tool with the power to influence our lives in profound ways.

 

References

Berliner, H. (1988, December 17). Deep Thought for Winning Chess. AI Magazine, 10(2).

Brooks, R. (2017). The Seven Deadly Sins of AI Predictions. MIT Technology Review, 120(6), 79-86.

Eisenberg, A. (2018, March 19). 7 Uses of Machine Learning in Finance. Retrieved from https://igniteoutsourcing.com/publications/machine-learning-in-finance/

Faggella, D. (2018, March 22). Machine Learning Healthcare Applications – 2018 and Beyond. Retrieved from https://www.techemergence.com/machine-learning-healthcare-applications/

Geitgey, A. (2016, July 24). Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning. Retrieved from https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78

Greenemeier, L. (2017, June 2). 20 Years after Deep Blue: How AI Has Advanced Since Conquering Chess. Retrieved from https://www.scientificamerican.com/article/20-years-after-deep-blue-how-ai-has-advanced-since-conquering-chess/

Hinton, G., Vinyals, O., & Dean, J. (2015, March 9). Distilling the Knowledge in a Neural Network. Retrieved from https://arxiv.org/abs/1503.02531

Jain, K., Dar, P., & Bansal, S. (2018, April 26). Machine Learning Basics For A Newbie | Machine Learning Applications. Retrieved from https://www.analyticsvidhya.com/blog/2015/06/machine-learning-basics/

Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr.2015.7298682

Thompson, C. (2017, January 1). When Robots Take All of Our Jobs, Remember the Luddites. Retrieved from https://www.smithsonianmag.com/innovation/when-robots-take-jobs-remember-luddites-180961423/