As a child, I was fascinated by black holes. I remember thinking Mars was pretty cool, but my love for space reached its peak during my elementary school black hole phase. Regardless, I found the lecture last Tuesday incredibly intriguing. And if I'm being completely honest, a tad disturbing. It was Roger Launius's cyborg discussion that really made me furrow my brow. In my philosophy class, we explored the morality of robots and, more generally, the relationship between humans and robots. The concept of a human cyborg, of a Six Million Dollar Man-esque being, perturbs me; I wonder, can this be moral? Certainly it is not natural (although perhaps natural is not always a prerequisite to progress).

The idea of "cyborg inevitability," as Launius puts it, assumes that humanity is willing to part ways with the essence of humanity, does it not? If in creating cyborgs we really are able to create "better, stronger, faster" humans, what are the moral implications of such an action? Certainly this falls under a similar heading as genetic engineering, and as the German philosopher Jürgen Habermas tirelessly and convincingly argues, genetic engineering would be doubtlessly destructive for humankind. If cyborgs carry the "best traits of humanity," what happens to regular humans? How does society shift? Will it not become a super-privilege, an even greater system of oppression? If the price is our humanity, is the "inevitable" really worth it?