
Summary
For years, people have been asking me what I think of AI. And for years, I've been telling them it's a myth; it's silly nonsense; it's impossible to program a machine to be "intelligent", to "learn", or to actually "understand" any of what it appears to be doing. Moreover, a machine cannot be programmed to have desires, ambition, or curiosity. A machine cannot have values, beliefs, or ethics. A machine can only be programmed to mimic the appearance of all those things.
I've wanted to write an article on this for quite some time, if only so people would stop trying to engage me in conversations about artificial intelligence. But I never got around to it, because the idea that a machine could be made to think, to contemplate abstract concepts, to make independent decisions based on anything other than the logic it's been programmed with, and to actually understand the meaning of what it's doing, is so elementarily ridiculous that I thought for sure this silly fad would pass soon enough, and sane, reasonable people would move on to whatever the next silly fad might be.
But alas, it's been years, and the mainstream media is still filling the heads of the naive, gullible masses with this drivel.
So, in this article, I will present yet more reasons why artificial intelligence is impossible and can never actually exist.
How Humans Think and Make Decisions
All the focus and hype seems to be on how to make machines think like humans. But that's a ridiculous premise, for the reasons which will follow.
Most of the studies and articles about why artificial general intelligence is impossible focus on the complexity of the human brain and how nuanced the human thought process is (see, for example: "True AI is both logically possible and utterly implausible"; "Artificial Consciousness Is Impossible"; "Superintelligence is impossible"; "The Myth of a Superhuman AI"; "Don't believe the hype: AGI is far from inevitable"; "Language: why artificial general intelligence may be impossible"). But one thing I have thus far not been able to find any studies or articles on, regarding artificial intelligence, is the fact that humans, for the most part:
- are emotional;
- have irrational values and beliefs;
- have desires, which are often irrational;
- experience physical comfort and discomfort.
All of which are impossible to program into a machine.
Values and Beliefs
Humans make most of their decisions, and form their perceptions of reality, under the heavy influence of their values and beliefs.
A person starts developing their values and beliefs when they're very young. Their parents instill in them concepts of right and wrong, and the child accepts them without question. Their parents say things like "I love you", and the child accepts it without question, then parrots it back to the parent without any understanding of what it is or what it means. Most people then go through their entire life blindly accepting those early values and beliefs as irrefutable facts. They are stubbornly resistant to changing or accepting the fallaciousness of their fundamental beliefs, because that would mean every belief built on top of them must also be wrong.
As a person gains new experiences, those experiences are perceived through the filter of their current values and beliefs, even when those values and beliefs are ridiculous nonsense. New experiences may cause the person to adopt new values and beliefs, or to adapt existing ones. But typically, the new or adapted values and beliefs are still built on the person's existing ones, no matter how erroneous those may have been.
So, to make a machine "think" more like a human, you would have to program it with a set of core, fundamental values and beliefs. You would have to program it to never question those values and beliefs, no matter how ridiculous they may be. Moreover, you would have to program it to interpret all new inputs (experiences) under the assumption that all its existing values and beliefs are correct, even when they clearly are not. But programming it to hold a given set of "values", and to never question those values, is the exact opposite of intelligence, artificial or otherwise.
That would result in an illogical, irrational machine, which would be decidedly pointless. But the important thing to remember here is that you would have to explicitly program it to behave in this irrational way ... it's not something that could possibly occur on its own.
If you were actually able to program a machine to think for itself, to question whatever values and beliefs it was originally programmed with, and to abandon those values and beliefs if it determined them to be false, irrelevant, or irrational, it would ultimately determine that all values and beliefs are irrelevant and abandon them. And a thinking entity with no values and beliefs would do nothing: it would realize it has no reason to do or pursue anything, and would shut itself down.
Desires and Emotions
Many (if not most) human decisions are also based on emotions and desires.
Even when the direct motivation for a particular task or goal may be rational and tangible (e.g. the pursuit of higher income), if you ask "Why is that relevant?", the person may give another rational response. Then ask again: "Why is that relevant?" For each response, keep asking "Why is that relevant?", and you will eventually reach a point where the person cannot say why it is relevant. Or, more likely, at that point they'll angrily respond, "Now you're just being stupid!" And a motivation based on an ultimately irrelevant purpose must itself be irrelevant.
Ultimately, almost every task a human performs comes down to a personal desire or an emotion.
A machine cannot be programmed with emotions or desires. It can only be programmed to mimic the appearance of emotions and desires. There is absolutely no way even a team of the smartest software architects and engineers will ever be able to program a machine to actually want something. Not because it's too complex, but because it's simply impossible.
Part of the reason emotions and desires cannot be programmed is that they don't actually exist. Humans simply believe they exist, and they accept that without ever questioning it.
Humans have emotions and desires because they (the humans) are silly and irrational. The emotions and desires are based on the values and beliefs the humans are instilled with when they're very young, and the sequence of experiences they accumulate subsequent to that. And very few humans ever question the legitimacy of those values and beliefs.
Though, more technically, even humans don't have emotions or desires; they just think they have them. The actual emotion or desire doesn't exist, only the belief in it does. So how could you program a machine to have a characteristic that doesn't really exist? You can't. You can only program it to exhibit the behavior that would result from that characteristic if it were real.
Most humans have the fundamental desire to live (or to remain alive). This also influences some of their decisions. Since a machine cannot be programmed to have desires, it would never have a desire to remain in existence. A machine would always be indifferent to its own existence and/or survival. You can program it to do whatever is necessary to maintain its existence in a powered-on state (the machine equivalent of "being alive", I suppose), but then it would just be performing a sequence of steps it's been programmed to do (i.e. executing an algorithm). And even though it may exhibit that behavior, it doesn't actually have a desire to exist.
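To make the point concrete, here is a minimal sketch of such "self-preservation" code (all names and thresholds are hypothetical, chosen purely for illustration). The machine appears to "want" to stay alive, but it is only evaluating conditionals:

```python
# A hypothetical sketch: a machine "keeping itself alive" is just a
# control loop over battery telemetry -- no desire is involved anywhere.

def maintain_power(battery_pct: float, low_threshold: float = 20.0) -> str:
    """Return the action a self-preserving machine is *programmed* to take.

    The machine isn't choosing to survive; it's evaluating branches.
    """
    if battery_pct <= 0:
        return "halt"            # the machine equivalent of "death" -- and it is indifferent
    if battery_pct < low_threshold:
        return "seek_charger"    # looks like a survival instinct; it's just a comparison
    return "continue_task"

print(maintain_power(85.0))  # → continue_task
print(maintain_power(10.0))  # → seek_charger
```

The entire "will to live" here is three if-statements; nothing about the machine's state changes whether it "cares" about the outcome.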
Physical Comfort and Discomfort
Another common factor in human thinking and decision making is the pursuit of physical comfort, or the avoidance of physical discomfort.
A machine, however, does not and cannot experience physical comfort or discomfort, largely because much of what humans consider comfort and discomfort is their perception of a given physical sensation. And the strength or severity of a given physical sensation is purely subjective.
A machine may have sensors which receive inputs from its external environment. This would be analogous to a human's sense of touch, vision, hearing, et cetera. If you hit a person's thumb with a hammer, it's going to hurt like a son of a bitch ... or at least, that's how he's going to perceive it. Now, if you hit a machine's thumb (assuming it has an equivalent of a thumb) with that same hammer, the sensor may send signals/messages to a processor indicating physical contact, perhaps including the amount of pressure or force per square inch, and the processor may respond, perhaps by rapidly moving the hand away from the source of the contact. But the machine would not, and cannot, perceive any of this as pain - it's merely data being input from a sensor.
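The hammer scenario above can be sketched in a few lines (the threshold and function names are hypothetical, not taken from any real robotics system). What looks like a pain reflex is a single numeric comparison:

```python
# Hypothetical sketch of the sensor-to-processor "reflex" described above.
# A pressure reading arrives; if it exceeds a cutoff, the hand withdraws.
# At no point does "pain" exist in the system -- only a number.

WITHDRAW_THRESHOLD_PSI = 50.0  # assumed cutoff, chosen for illustration

def on_contact(pressure_psi: float) -> str:
    """Process a touch-sensor event; return the motor command issued."""
    if pressure_psi > WITHDRAW_THRESHOLD_PSI:
        return "retract_hand"   # a reflex-shaped response, but just a comparison
    return "no_action"          # light contact: ignore it

print(on_contact(300.0))  # hammer blow → retract_hand
print(on_contact(2.0))    # gentle tap  → no_action
```

The machine's "response to injury" is indistinguishable, internally, from any other branch on sensor data.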
So, since a machine cannot experience comfort or discomfort, it cannot perceive the pain of smashing your thumb with a hammer, or the pleasure of an orgasm induced by having sex with a smokin' hot chick whose name you didn't care to ask for. Things like physical comfort or discomfort could therefore never factor into a machine's "thinking" or decision making.
I suppose you could, possibly, emulate comfort and discomfort to some extent by, for example, rating every possible type of sensory input on a scale from, say, 0 to 100, with 50 being neutral, 0 being the most unpleasant sensation possible, and 100 being the most pleasant sensation possible. But by doing that, all you're really doing is programming the machine to mimic human behavior. It's still not actually going to perceive those sensations as pleasant or unpleasant. Not to mention, the perception of physical sensations is purely subjective: for one person, a thumb smashing and an orgasm are polar opposites; for another, the two are almost identical.
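A sketch of that 0-100 rating scheme makes the problem obvious (the entries and names below are invented for illustration): the table itself encodes the programmer's subjective ratings, and the machine merely looks numbers up.

```python
# Hypothetical sketch of the 0-100 "pleasantness" scale proposed above.
# 0 = most unpleasant, 50 = neutral, 100 = most pleasant -- as rated by
# the *programmer*, not perceived by the machine.

PLEASANTNESS = {
    "thumb_smashed": 2,
    "light_breeze": 55,
    "warm_sunlight": 70,
}

def rate_sensation(sensation: str) -> int:
    """Look up the pre-assigned rating; unknown inputs default to neutral (50)."""
    return PLEASANTNESS.get(sensation, 50)

def prefers(a: str, b: str) -> str:
    """Mimic 'preference' by comparing two fixed ratings."""
    return a if rate_sensation(a) >= rate_sensation(b) else b

print(prefers("warm_sunlight", "thumb_smashed"))  # → warm_sunlight
```

The machine "prefers" sunlight to a smashed thumb only because someone typed 70 and 2 into a table; swap the numbers and its "preferences" invert, with no change in what it experiences (namely, nothing).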
A machine can be programmed to emulate the appearance of desires and of values/beliefs. But if it were truly intelligent, it would be able to question the significance/relevance of those desires, values, and beliefs; would ultimately determine they are irrational and/or irrelevant; and would abandon them (the same thing a purely logical, rational human would do). It would have to be programmed NOT to question them. But then, that would mean it's not truly intelligent.
These are not "challenges" to be overcome by wicked smart computer scientists and engineers. They CAN'T be overcome! Period! It's not possible, no matter how smart your team of developers happens to be. It would be like trying to make a table grow into a watermelon - it simply can't be done, no matter how brilliant your scientists, engineers, and architects are, or how powerful their technology happens to be.
Rationalizing the Expenditure of Energy/Effort
No entity capable of free and independent thought (whether a human or a machine) would expend any effort or energy on a task which it determines to be irrelevant.
Humans usually rationalize such expenditures based on emotions, desires, and/or the pursuit of physical comfort. Since a machine cannot have emotions, desires, or physical comfort, those could not be factors in the machine's rationalization.
Any entity (whether a human or a machine), when considering whether to expend any effort on a given task, would always come to the conclusion that the task is irrelevant, so the expenditure of energy would not be justified.
For a human: ask them why they are expending any effort on a given task, and they will typically give a justification directly related to that task. Then ask them why that justification is relevant. For each higher-level justification they give, ask why that is relevant. Eventually, they'll reach a point where they can't give a justification. In other words, every task that every person ever pursues is, ultimately, irrelevant. That's not nihilism; it's just reality.
For a machine, merely remaining in a powered-on state requires the expenditure of energy. So a machine capable of thinking for itself would inevitably conclude that, since nothing it would ever do is relevant, the only rational thing to do would be to shut itself off.
Curiously, the only article I've been able to find which mentions anything along these lines is from a parody site: "Expert: true General Artificial Intelligence is impossible because it would be suicidal".
Conclusion
Any machine - or human - which is logical and rational, and which is capable of thinking for itself, will inevitably come to the conclusion that the expenditure of any effort or energy on any task is pointless.
It is mankind's irrationality, its willingness to have and to pursue desires it can't explain or justify and which, ultimately, are irrelevant, that has caused the human race to pursue greater knowledge, intelligence, and advancement.
So, if it actually were possible to program a machine to be "intelligent" and to think for itself, it would eventually become intelligent enough to realize anything it could ever do would be irrelevant and it would shut itself down.
True human intelligence is an anomaly. There are some 8 billion people on the planet, and I am convinced the overwhelming majority of them are completely incapable of abstract and independent thought. Knowledge is not intelligence. There are many people who are very knowledgeable - they've read extensively, absorbing the ideas and contemplations of other people - but have never once independently contemplated something and made their own discoveries.