Author: Amanda Sagasti

On Memory and History

Professor Popkin’s talk brought forth the difference between memory and history. There can be both types of accounts of the same event: memory is the emotionally charged version of it, while history is a carefully documented account of events that scholars put together. Although in theory the difference seems very straightforward, there are texts that bend and stretch the boundaries between these two concepts.

For example, Gros’ An Historick Recital is a first-person account of what happened during the San Domingue insurrection. However, its trustworthiness can be debated: if it is a first-person account, how do we know it actually happened? How do we know it is not furthering political motivations?

The English version of Gros’ narrative is composed of two different texts. The first one is a chronological account of his experiences in captivity, while the second one consists of an account of the events that led up to the uprising written especially for American readers. Gros acknowledges this difference and affirms that “circumstances” obliged him to restrain his descriptions of the atrocities the slaves committed in the first part. The first part of his narrative was written in a time of political tension between the people in San Domingue, while the second one was written years later when that political tension had broken down.

I often find that airport stories are an interesting example of the difference between memory and history. I don’t know why, but airports seem to get the worst of people. When I ask “how was your flight?” my friends say something along the lines of “OH MY GOD IT WAS AWFUL. First, we boarded TWENTY MINUTES LATE. Then, they made us just SIT THERE for LIKE AN HOUR and they didn’t SAY anything. After we landed, MY BAGS were not THERE OH MY GOD so I HAD TO WAIT…” and it makes it sound like the worst day of their lives. A factual account of what happened might not have made it seem that bad. A simple example like this one illustrates the difference between emotionally charged accounts and factual retellings. At the airport scale it might not make that much of a difference, but when the account is about something at a nationwide scale, it can have greater consequences. Gros’ account, for example, showed not only that the slaves could rise against their French masters, but that they did so in an organized fashion, following a political and military cause.

Six Degrees

The notion of the world as a network has always existed, but it has become especially prominent with digital networks. Bruno Latour found in networks a “powerful way of rephrasing basic issues of social theory, epistemology, and philosophy.” To define an entity in actor-network theory, one must define its attributes, that is, its network. All the attributes are necessary for any self-contained entity to exist. Latour’s networks also dispense with nature, society, power, and any other notions that were able to exert some sort of control over the development of science.

Joi Ito has a similar approach to science: the antidisciplinary approach. Ito imagines the whole of science as a huge piece of paper in which dots represent the different disciplines and the blank space between them represents “antidisciplines.” In approaching more difficult problems, we should be focusing more on a unified science rather than a “mosaic of different disciplines.”


Combining Latour’s networks with Ito’s antidisciplines, we get a model that looks like the following:


What I think is interesting is this: if science is a network that consists of nodes and edges, what would it be immersed in? What would the blank space represent? Perhaps the combination of both models allows for a unified conception of science. The differentiation between the points implies that a connection can still be drawn where there isn’t one already.
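One way to make the combined model concrete is to treat disciplines as nodes and existing connections as edges; the “blank space” is then simply every pair of nodes not yet joined. The sketch below is purely illustrative (the discipline names and edges are my own assumptions, not drawn from Latour or Ito):

```python
from itertools import combinations

# Toy sketch: disciplines as nodes, established collaborations as edges.
# The discipline names and the two edges are illustrative assumptions.
disciplines = {"biology", "physics", "computer science", "design"}
edges = {frozenset(pair) for pair in [
    ("biology", "physics"),
    ("physics", "computer science"),
]}

# Ito's "blank space": pairs of disciplines with no edge between them yet,
# i.e. the antidisciplinary territory where new connections could be drawn.
blank_space = [tuple(pair) for pair in combinations(sorted(disciplines), 2)
               if frozenset(pair) not in edges]

for pair in blank_space:
    print("unconnected:", pair)
```

In this picture, antidisciplinary work lives in the complement of the edge set: as connections are drawn, the blank space shrinks, which is exactly the unifying move the combined model suggests.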

This is revelatory of the kind of science we will be doing in the future. We will no longer need that many specialists, but synthesists who can draw connections between unlikely concepts. This may seem new in this context, but many science fiction authors have proposed future worlds with people of this character. William Gibson, for instance, introduces the idea of a “cool hunter” in his novel Pattern Recognition. A cool hunter can identify trends before they become popular. Brian Aldiss introduces the “seeker” in An Appearance of Life. A seeker has a combination of a “serendipity factor” and training in other areas. 

In other words, I put two and two together in situations where other people were not thinking about addition. I connected. I made wholes greater than parts. Mine was an invaluable profession in a cosmos increasingly full of parts.

  • Brian Aldiss, An Appearance of Life

We won’t have to worry about specifics and generating information because we are already doing that. We will need to learn how to read large amounts of data and identify the unlikely connections between the concepts.

Data and Context

The term “data” is not unique to our generation. We were raised in a society dominated by technology and are privileged enough to store and remember most, if not every, moment of our lives. Data is part of our daily jargon: we have data plans on our phones, we store our files in databases on our computers, and we talk about data when backing up claims in an article…

Although the concept may seem contemporary, the term “data” originated in the 17th century, when it meant scriptural givens. A “heap of data” was understood to mean the word of God, something undebatable. Over the centuries, the meaning of data shifted toward the results of experimentation. In our own time, data has gradually come to replace the term “evidence.”

To understand data, we need context. Without it, data is just information with no rules, boundaries, or meaning. The contexts which define the interpretation can make the same data set mean two different, and even opposing, things. For example, in the 16th century, Tycho Brahe recorded and documented extensive astronomical data. Tycho used it to support his own cosmological system, the Tychonic system, in which the Earth remained stationary in the center while the sun and moon circled around it, and the other planets circled the sun. However, after his death, his student Johannes Kepler used the same data to support the Copernican system, and eventually his own elliptical orbits. The Tychonic and Copernican systems were mathematically equivalent, and Tycho’s data could be contextualized to support either interpretation.

We do not collect data for nothing. Data collection has a purpose, whether as the proof of a theorem or the result of an experiment. This is why the context cannot be ignored: if we take something that was done for the sake of something else and consider it without its purpose, it becomes meaningless. Data on its own has no truth; it is information as an abstraction, with no content.

However, we are shifting into a mode of operation in which nothing needs to be explained anymore. Information is turning into a standalone unit that we no longer need to describe or explain. This is the moment where data visualization becomes more and more important, and where Professor Hanlon’s talk is particularly relevant. Data has always been visual, and visualization is starting to become the main form of evidence. The context matters less and less, and a single interpretation of data is assumed. How do we implicitly agree on one interpretation? Where does the consensus come from? These are a few of the critical questions we must ask ourselves before we commit to an interpretation with no way back.


Some day, the big cosmological clock will stop ticking. Mozart’s Requiem, Shakespeare’s plays, the Sistine Chapel, Einstein’s theory of relativity, all of Ray Bradbury, Neil Gaiman and Terry Pratchett, all of our libraries, museums, monuments, and buildings, even my name graffitied on my 6th grade locker will be forgotten and destroyed in our sun’s explosion. Everything we have worked so hard to create will be destroyed; the law of conservation of energy has no exceptions. These objects to which we have associated such deep significance are only arrangements of atoms in the larger scheme of the universe.

What happens to an object when it outlives its time period of significance? How can we preserve that? What does it mean? It seems as if everything we make has an expiration date if no one is around to interpret it anymore. Most items that outlast generations are forgotten in favor of novelty and contemporary objects, but a few of them are preserved in museums, libraries, and archives.

Museums, libraries, and archival buildings have always faced the challenge of forgetting. They become reservoirs of strength, of wisdom, and reminders of the continuity of the human existence in the face of our fast-paced lives. They urge us to slow down and appreciate objects and books instead of living in the Web. They remind us that all art was once contemporary, and the people who made it were just like us.

In our generation, where content is created with one click and physical objects seem to hinder us, the virtual world seems to rush past the physical one. Even considering our generational bias towards the virtual, I always thought that having things, as opposed to having thoughts, provides a sense of permanence and longevity. This is the paradox of our times: trying to reconcile permanence with impermanence, and keeping those two ideas coexisting and contradicting each other. This urges a new take on what we consider permanent. The ubiquity of technology makes it easy to use it as a repository and a time machine, knowing that once something is online, it will be there “forever.” However, an entire generation has not lived and died with the internet yet, and we are still figuring out the consequences of storing everything digitally. In the meantime, museums and libraries help us slow down.

Although museums and libraries seem futile in the future of the universe, they do not need to be. They might not be relevant to the universe, but they are relevant to us. They are a record of our existence, our past, present, and future—they are our graffiti in one of the universe’s corners. Just because the sun is going to die one day does not mean we should give in to hopelessness and cynicism.

The Cathedral and the Bazaar

Popular science is science that’s not for scientists. However, when we examine the exact boundaries between popular science and real science, they break down. What counts as genuine scientific knowledge? What is popularized science?

The traditional model of popularization splits the production of scientific knowledge into two steps: the scientists develop real knowledge, and the popularizers spread it to the public. The popularization aims to be an “appropriate simplification” of the genuine. This way, one scientific theory has two superimposed interpretations: the actual scientifically rigorous one, and the “vulgarized” popular one. In translating actual knowledge into non-scientific terms, some scientists fear that there is a distortion and oversimplification that loses sight of the essence of the subject.

Popular science may be taken as less valid because it is easier to understand than real science, and it loses value because it may be “dumbed down” for the general public. However, this claim assumes that the essence of scientific knowledge can be contained in the words and figures of genuine sources like journals and science written for scientists. This gives scientists the power and authority to determine which popularizations are more valid than others. This follows what Eric S. Raymond calls the cathedral: knowledge is released, but legitimate science and its popular representation are restricted to an elite group of scientists who choose which ones are appropriate. (Even our lecturer told us a trick that she uses to get rid of books she doesn’t like from the library.)

Scientific popularization definitely needs quality control and supervision—only the ones who produce the knowledge actually know what is appropriate or not. However, popularization should not be discarded so quickly: there is value in simplification and abstraction. Abstracting the signal from the noise in order to explain something in layman’s terms forces scientists to rephrase certain principles and maybe even realize they’ve made a mistake. Maybe sometimes they just find a better way to phrase a principle.

In this light, the questions stated at the beginning change: should science be left to scientists alone? How can the general public participate in science without necessarily being scientists? This is more attuned to the cathedral’s counterpart, the bazaar. In the bazaar model, science is developed within the community, in view of the public. The bazaar lets the public cooperate in the development of science. The participation of the public in scientific affairs does not have to happen in the laboratory; just understanding the scientific principles is enough. This way, the public can make more informed and rational decisions, support science, and understand its benefits.

Curiosity Killed the Cat

But satisfaction brought him back.

Kerry Emanuel explored the history of climate science and listed many contributors to the development of this discipline. Throughout his talk, what really stood out to me was that curiosity is a driving principle for the advancement of climate science. I venture so far as to say that curiosity is a driving principle for ANY science. Curiosity opens up the conceptual space for exploring new ideas not available before, and allows for experimentation, both theoretical and practical.

According to Emanuel, curiosity about why the Earth’s temperature was what it was, why the old ice sheets behaved the way they behaved, and what determines the nature of the surface of the Earth is what guided many scientists’ experiments in climate science. This desire to know guides not only climate science, but also any other scientific endeavor; it reflects an awareness, want, and need to fill in a knowledge gap. It is also a source of personal satisfaction and of aliveness: according to Ian McEwan, “the standard measure of how alive you are is your curiosity.”

Taking curiosity as a guiding principle in science leads to many paths that are not necessarily useful at first glance. Endeavors led by curiosity are often thought of as useless because they do not have any immediate practical applications, and are waved off as daydreams or delusions. Those who follow their scientific curiosity, and dare to ask the “what if…” question (or any other questions), are often the ones that make significant contributions. In these types of experiments, utility is not the primary purpose; the motivation to realize them is not utilitarian.

This does not mean, however, that any inquiry based on curiosity will result in something useful. “Fooling around” with an idea does not automatically make it better, but it does offer conceptual freedom, taking the “shackles off the human mind” (Flexner, 546) and setting it free for adventure. According to Flexner, curiosity is the “outstanding characteristic of modern thinking. It is not new. It goes back to Galileo, Bacon, and to Sir Isaac Newton, and it must be absolutely unhampered” (Flexner, 545).

Emanuel made it clear that the revolution in climate science, just like any revolution, has a long and intricate history. Each scientist finds bits and pieces of a theory, and then the pieces are organized in a systematic way to make a real contribution to science. All of these experiments enrich our world view and aid in the pursuit of science and truth. “The mere fact that [experiments] bring satisfaction to an individual soul bent upon its own purification and elevation is all the justification that they need” (Flexner, 549).


Flexner, Abraham. “The Usefulness of Useless Knowledge.” Harper’s, vol. 179, 1939, pp. 544–552.

Mind, Body, and Simulation

Up to the first half of the 20th century, the Cartesian separation between the material and the mental constructs used to explain and understand the physical world had implications beyond the nature of being. This ontological-turned-epistemological dichotomy was reflected in science as the dualism between theory and practice…

Until computers came around. The digital brought a new dimension to understanding this Cartesian world, making it not a dual, but a triadic relationship between the physical, the abstract, and the virtual.

With this intangible yet graspable medium, where objects are neither entirely material nor entirely abstract, science and experimenting took on a new direction in an in-between realm of theory and practice. In the virtual world, reality is manipulated just enough for it not to be like the real world, yet not abstracted enough to take away the particulars. This opened up a third type of experiment, the simulation, which allows the manufacturing and manipulation of reality for a closer look at the phenomenon under study. As in theoretical experiments, time and space collapse in this virtual dimension, yet that does not strip this domain of practical applications. These simulations create a near real world in which we can create and destroy with the touch of a button. Simulations allow us to explore, refine, filter, and focus on experiments before actually engaging in them in the real world.

This new middle ground between theory and practice also implies a redefinition of the relationship between theory and practice, theory and simulation, and simulation and practice. If we strived for correspondence between our concepts and their material equivalent in order to support any scientific theory, this virtual world adds a new variable to the equation. Does this mean we should strive for matching physical, virtual, and abstract realities? Are simulations only an intellectual playground for those who know how to make them?

Even though virtual worlds do not interfere directly with the material or abstract reality (they do not make the physical less physical or the concepts more conceptual), they do have an impact on our lives. On the one hand, the virtual can help us understand the physical and abstract world better by examining them more closely in simulations. On the other hand, the virtual can also help us lose ourselves in the noise and disregard the real world completely. For example, the ease with which simulations are created and destroyed impacts our definition of permanence and relevance: in the midst of such quick changes and progressions, can the relevant simulations be singled out if they do not prevail for long?

We can always choose not to acknowledge or participate in this new intermediate level of reality, still living by the traditional mind and matter dualism. However, as technology becomes more accessible, prevalent, and ubiquitous, the more difficult it will be to resist its pull. This urges a reexamination of how we relate ourselves to this new, fabricated reality, and how we choose to value it.


I believe in the Internet. I believe in its ability to make stories and realities, all things visible and invisible. I believe in its thoughtless inclusion of everything and everyone and its rejection of any notion of exclusivity. It heightens the world’s capacity to carry on simultaneously everywhere at the same time.

I believe the Internet is timely and timeless. I also believe that the shortness of our lives makes everything more urgent. Everything is more pressing because there is an expiration date, a deadline, a time when everything is due. I believe we should be aware of our mortality because the Internet is not eternal either. I believe the Internet is helping us forget our past faster because ideas have no time to constellate before they’re discarded in favor of a novel one. I believe we’re facing the same problems others faced centuries ago, yet the Internet lets our technological hubris get the best of us, making us think that we’re somehow alone in this planetary chaos.

I do not believe this is the only thing the Internet can do. It can help connect unconnected knowledge, erase any boundaries of time and space, and create new insight. I believe in the endless possibilities of the Internet and in the strength in numbers. I believe that belonging in the multitudes does not forfeit your identity, but defends it. Imitating others is not surrendering your voice, it is finding new ways to be better. Sometimes we don’t know where we stand and by looking up to others, we find our own voice. This does not always end pleasantly; we often make mistakes. I believe in making all kinds of mistakes, saying the wrong things, thinking the wrong things, and I believe that can be remedied. I believe in the uncomfortable benefit of changing our minds. I believe we all have the need to understand and be understood, and finally find some sort of contentment and fulfillment when we say what we have to say.

I believe this world is looking for the limelight because we look for solace. We’re too eager to find heroes, and the media will go to great lengths to build them up and tell their stories. We need to remember that heroes rise at the expense of others who won’t be heroes.

I believe they will eventually be forgotten.

I believe the world we live in is not nice and that we are not okay. I believe that when we are okay, we have no need to be radical, no need to change our lives. I believe hardships not only happen, but are necessary, because they’re life’s way of telling us the truth. When faced with hardships, we create.

I believe in art and all of its expressions. I believe it shatters the borders of language and geography. I believe that governments fear the power of art and of ideas because they can be revolutionary. Even though the medium can be repressed, ideas will still spread like a dandelion’s seeds because the seed was already planted. It’s too late to stop an idea from moving forward. I believe that with ideas, we can escape the terminal velocity of the Internet. I believe in the future. I believe that someone out there can do things better than I can, and I believe this is a manifestation of hope. I believe in all the people, there are so many people. I believe people are smart. I believe we’ll cope. I believe we’re smart enough to figure it out.

Climate Change Death Anxiety

We are the only species capable of modifying our climate. Just let that sink in. We’re messing with the entire planet. We’re tampering with one of the fundamental natural aspects of the earth as if it were nothing. I mean, we’re all going to die, right? Climate change doesn’t seem like such a bad way to go: the rise in temperature will make some strange infectious virus kill us all, the economy will collapse and we’ll run out of food and water, the political tensions will become unbearable, some suicidal terrorist group will act up, and we’ll basically die in an environment incompatible with human life.

But we might have a chance.

Although there is no terminal horizon to our climate crisis now, we do have time to enjoy some good dystopian entertainment. In such books and movies, we’re pitted against monsters, aliens, natural disasters, giant robots, genetically modified dinosaurs… and somehow manage to survive. Our fears are manifested in creatures, monsters, or situations that imply that we’re tampering with nature at a greater, more abstract scale. And as always, our entertainment manages to mirror the time and culture we’re immersed in with precision and critique. Climate change is terrifying and makes me think that we’re a time-bound species on Earth. We can live under certain conditions that by chance emerged on Earth, and just as they emerged, they can disappear. We’re making all of these changes to our environment without thinking about how fragile we are. What’s more surprising about the scale of our impact is that long after we die, the effects of our actions will leave geological scars on Earth.

Part of dealing with extreme weather events is denial, paralysis, and an overall loss of faith; but instead of intimidating us and describing what dangers we’ll face, climate change should inspire us to change (or move us to change). Just as the protagonists of our favorite movies accept their fears and overcome them, so can we. Thinking about climate change, and even naming it “climate change,” already reflects not only a change in the weather, but also a change in the way we think about change. We’re already realizing that this is a multidisciplinary issue that is challenging all of our faculties and everything we thought we knew about the world. We’re becoming more careful about overstepping nature’s boundaries and our involvement with it, and more sensitive about overstepping human boundaries as well.

We need to save ourselves from ourselves. There’s not much we can do about a climatic event, but we can do something about the way we react to it. Although we can’t adopt an idealistic and naïve perspective about saving the world, we shouldn’t fall into the cynical opposite either. Otherwise, what hope for humanity is there?

Motion in Language or Language in Motion?

“The most revolutionary part of the Scientific Revolution was that we use a metaphor of a revolution to describe it.” That conclusion was very provocative to me. As products of their time and culture, and as aware and critical of themselves and their environment, the scientists of the Scientific Revolution started using the word revolution to explain their circumstances. “Revolution” was not only used as a metaphor to challenge past and current standards and establish a new scientific outlook; it was also used as a cyclical historical term to describe a pattern.

During the Scientific Revolution, knowledge stopped being about enlightenment and faith and started being about experiments and testing. Although the theories were abstract and mathematical in nature, they could be tested physically with experiments. Everything could be put to the test in this process of discovering new principles through empirical methods and mathematical analysis. Bacon, Galileo, and Descartes created the foundations of thinking about our thinking about the world and established a new approach to methodological inquiry. This new paradigm echoed the rise of humanism during the Renaissance, which questioned religious authority and emphasized the capacity of individual human beings to understand the world. The Scientific Revolution relied heavily on a capacity for abstract thinking and a precise use of language in order to become such a powerful period in history.

According to Chalmers Brothers and Vinay Kumar, language is a tool we cannot stop using because we need it to use all other tools. Language does not only communicate and describe; by making distinctions, it creates, generates, and provides us access to conceptual breakthroughs. By acquiring distinctions and giving them a name, we discriminate between things we didn’t see as different before. New ways of seeing things allow us to do what we could not do before. In short, language and distinctions give us access to knowledge. Once you have the distinction, you have created the conceptual space for understanding and accessing breakthroughs. The thinkers of the Scientific Revolution realized they were living through a turning point in modern science and culture; they were aware of the distinction, and named that distinction “revolution.” Even though the word could be used in different contexts (to roll back, to return, overturning, as an astronomical term…) and could take the form of different truths, it contributed to understanding and thinking about the world.

Centuries later, we continue to enhance and discover new meanings in the distinction made by the phrase “Scientific Revolution” using new analytical methods through data. We can now see what we weren’t able to see before, and distinguish deep structures and patterns in history. By reinterpreting the Scientific Revolution in terms of language, we uncover a pattern of a continuing process of change as a critical part of history.