Ever since the 17th century, when Robert Hooke recorded his observations through microscopes and published them in Micrographia, information has been visualized and, in more recent times, quantified, giving a broader range of people access to knowledge they could never gather on their own. This explosion of data has revolutionized how we inquire and how we operate. Predictions, whether for stock market trends or the outcomes of sporting events, are now grounded in numbers and data. The gut feelings and intuitions humans once relied on have become all but obsolete. What's clearer now more than ever is that whenever enough information can be quantified, modern statistical methods will outperform an individual or a small group of people nearly every time. However, at what cost?

Professor Hanlon talked about the potential risks of quantifying information, such as ambiguity and loss of context. Data represented in binary computation, for example, is black and white: there is no room for context and no judgment involved. While this makes for an efficient process, the potential for danger is worth considering, because when the human element is taken out of information, things like emotion, anomalies, and other randomness go unaccounted for.

Consider a bet, for example. You and a friend want to wager on next Sunday's football game; we'll imagine the Raiders are playing the Broncos. Statistical methods and computation can account for things like the teams' head-to-head record over their last 20 games, the number of games won by the home team, the number of games the Raiders have won since signing Derek Carr, the number of games the Broncos have won since losing Peyton Manning, the number of games either team has won when playing in snow, and so on almost indefinitely. All of this data can be fed into some sort of algorithm that predicts, given the conditions of next Sunday's game, which team is most likely to win, and that prediction will be fairly accurate and telling. What statistical methods can't do, however, is consider the fact that the game falls on Thanksgiving, and that the Broncos' center has a tradition of eating a huge breakfast and drinking heavily at lunch on the holiday. Obviously this is a made-up example, but I think the point is clear. On Thanksgiving, the Broncos are at a disadvantage because one of their key players can't perform as well due to his holiday traditions. A statistical prediction can't factor that into its data; a human can consider it, weigh it, and use it to decide the bet.
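
To make the point concrete, here is a minimal sketch of what such a prediction boils down to: a score computed from whatever someone chose to quantify. Every feature name, value, and weight below is made up purely for illustration, not drawn from any real model or data. The takeaway is that anything never encoded as a number, like a player's Thanksgiving tradition, is simply invisible to the algorithm.

```python
# A minimal sketch: every feature and weight here is invented for illustration.
# The model can only weigh inputs that someone chose to quantify.

def predict_winner(features, weights):
    """Combine quantified features into one score; positive favors the Raiders."""
    score = sum(weights[name] * value for name, value in features.items())
    return "Raiders" if score > 0 else "Broncos"

# Hypothetical quantified inputs (positive values favor the Raiders).
features = {
    "head_to_head_last_20": 0.10,    # slight Raiders edge in the recent series
    "home_field": -1.00,             # the game is in Denver
    "wins_since_carr_signed": 0.30,  # Raiders' form since signing Derek Carr
    "wins_in_snow": -0.20,           # Broncos historically do better in snow
}

# Hypothetical learned weights for those same features.
weights = {
    "head_to_head_last_20": 0.5,
    "home_field": 0.4,
    "wins_since_carr_signed": 0.6,
    "wins_in_snow": 0.3,
}

print(predict_winner(features, weights))  # -> "Broncos" with these numbers

# Note what is *not* here: no feature encodes "the center eats a huge
# Thanksgiving breakfast," so the prediction cannot react to it. A human can.
```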

Looking at the bigger picture, the growing collection of data has made modern life tremendously easier and more efficient, and for the most part there is little downside to losing context over unimportant things. We can live with, for instance, Amazon recommending a movie we hate because its algorithm, going off our recent purchases, guessed we might like it. I think that up until now the side effects of quantifying information have been minor, but all it takes is one costly error to make us reconsider whether the efficiency of data is worth it.