I’ve surprised myself quite a bit in the past few days. I’m currently working on a term paper titled "Is it in the best interest of mankind to build a human machine?" When I set out to write this paper, I obviously held my own views, and not surprisingly the answer was a resounding "Yes!", as anyone who knows me would probably assume.
Soon after I delved into the research, I realized I was answering the wrong question and thinking the wrong way. As I’m beginning to argue in my paper, the short-term benefits of a human machine are too numerous to count (and I won’t count them here; I’ll post the paper when it is finished), and they lead us to believe that an ever-better future is upon us. It is, in fact, up to a point.
The long-term effects of a human (or better) machine are what scare me. It has become readily apparent that such machines will eventually become our intellectual and physical superiors and will destroy us (either intentionally or as an evolutionary side effect). Even so, I can’t say that I’m completely against it.
As I plan to point out in my paper, I believe now, as more people will come to believe in the future, that human machines are simply the next step in our stagnant intellectual evolution.
So, to reiterate, I was answering the wrong question — thinking in decades instead of centuries. No, I don’t think it is in the best interest of mankind to build a human machine because in the long run they will supplant us, and last I checked the extermination of the human species wasn’t something we wanted.
There are obviously many facets to this topic, and I wish to touch on quite a few of them in my paper. I’m restricted to ~2500 words, though, so we’ll see how it goes. Hopefully, I’ll present a well-rounded argument that leaves the reader with one truth: we are having a good run, but our time will come.