“Probably the most dangerous thing about an academic education, at least in my own case, is that it enables my tendency to over-intellectualize stuff, to get lost in abstract thinking instead of simply paying attention to what’s going on in front of me.” - Depression & Genius: David Foster Wallace
Most of us have seen the movies about it: WarGames, The Terminator, The Matrix, and the list goes on. How does this relate to the quote above? There has been a long-standing debate about the relationship between very high intelligence and mental and/or emotional problems. Considering just how many great minds have suffered, I tend to agree there is a correlation. Even Aristotle claimed “…there is no great genius without a mixture of madness.” Now consider what happens if we take that intellect, expand it exponentially almost beyond measure, and throw in some good old-fashioned emotions… you have a supercomputer that may be suicidal. Okay, maybe I exaggerate. Maybe not.
Here is the concept: if (read: when) we eventually create an Artificial Intelligence (A.I.) AND are able to incorporate emotions into the A.I. machine (think of Commander Data from Star Trek: The Next Generation after he had the emotion chip installed), what are the possible problems?
The fear of computers taking over the world or destroying it is much more plausible in this scenario. A learning A.I. with emotions is essentially an uber-super-genius human (almost; much would depend on how the emotions are induced, chemically as in a human, electronically, or both). Now add the correlation between bipolar disorder, depression, and the like that seems to accompany many people of higher intellect, along with the probably incalculable "intelligence" potential of a learning A.I. With its capacity for introspection, asking itself "What is the point of it all?" and so on, could such a machine eventually drive itself insane, or become so "depressed" that it ceases to function? Having concluded there is no point, would it destroy itself, or everything, including its creators?
So, considering just how dangerous this possibility is, a learning Artificial Intelligence computer or robot should NEVER have emotions incorporated into it. Right? Possibly, but consider this.
On the flip side, imagine a godlike intellect that may well control everything in our daily lives, from traffic lights to nuclear missiles (as in the movie WarGames mentioned earlier), and that bases its entire decision-making on pure logic, without any emotion to hedge those decisions. It has no emotion (pity, compassion, guilt, love, etc.) to temper the possibility of it deciding, by its own form of logic (to use a stereotypical example), that humans should be destroyed because of their self-destructive nature. If we create an A.I. of such a scale, can we afford NOT to incorporate emotions to counterpoint logic? Think of Spock and the decisions he made because of his half-human ancestry; how many times would Kirk or the Enterprise have been destroyed if he had relied only on logic?
I really do not know. But assuming we WILL one day create such an entity, will it be smart enough to think through the questions of “Why?” and “What are we doing here?” and come up with an answer that does not end with it crashing, ending itself, and us in the process? Or will it instead help us as humans to better understand everything in the Universe around us?
Would it be our greatest ally in expanding our knowledge and minds… or would it be the doomsday machine that Hollywood likes to make it out to be? Would it be too smart for its own (and our) good?
Either way, it is a scary thought to me. What do YOU think?
Speaking of great minds that suffered: “Happiness in intelligent people is the rarest thing I know.” - Ernest Hemingway, author, journalist, and Nobel Laureate (1899-1961)
-Casey R. Varnado