[written by guest author: Ken Knowlton (computer graphics pioneer and member of the Bell Labs Research team from 1964 to 1982) - see our post about Ken, Kenneth C. Knowlton’s “Terrible Thoughts” about AI Automation.]
Artificial "intelligence" is a nebulous term that means, basically, fast automatic processing of information: collecting, analyzing, devising strategies and applying them. It exists in many forms and environments. How well do instances of it work? Depending on purpose, builders' skill, and affected persons' viewpoints: exquisitely, acceptably, disappointingly, or appallingly.
From a standpoint of 50 years' experience in the development of computer methods (while trying to maintain a typical-human status) I present here a cautionary appraisal.
People strive, first of all, to survive, moreover to survive well, in the face of nature's and civilization's challenges. Against nature's hurdles we all tend to cooperate. But in various arenas of business, politics, religion, government, education and pastimes, we compete – for best standing of personal accomplishments, status, and power. These are competitions where the best, quickest analyses win - where fastest (soonest) is even more important than best.
For accomplishment, status, and power, there are many goals and strategies, typically of two kinds: to be a bit better than competitors, or to cause your competitors more trouble than you're having. As to being better: use your smartest AI to figure out what they're doing and, with your own smarts and electronic power, do it a notch better. In my own early technical life, I took this route to some extent: doing new things, and/or doing old things in new ways.
Backing off lately, I've become more of a witness than a participant, especially worried about sabotage. It seems that messing up a competitor's collection-analysis-scheming-application may be easier than making things run more smoothly for myself. Many things are delicately balanced in this high-stakes world; danger abounds. Consider one area: self-driving vehicles, obviously a bit risky. But think here of deliberate malfeasance.
You want to throw a monkey wrench into the gears? If you have a dozen self-driving trucks of your own, you can arrange to send them off to misbehave savagely, one on each Manhattan avenue and Broadway, between the same two streets, at about 7:30 am or 4:00 pm. Better yet, don't buy your own trucks - pry your way into someone else's system without leaving fingerprints. Or, with this deviltry in mind, you could arrange to be employed by a self-driving delivery firm. You alone could cause several deaths and one hell of a New York City rush-hour mess.
Other possibilities? Just one truck is enough to cause lots of trouble - load it with explosives and send it out on a "suicidal" mission. Or you might tinker with some company's automatic pilot software. It should not be surprising if frustrated folks - out of work through no fault of their own - think up new ways to demonstrate that they are not powerless. Or you might achieve anything you want by blackmail: "If the president does not release (convicted arsonist) back to his own homeland, we (anonymous) will do one of the above things."
Things get more confusing, maybe worse. Civilization's structure involves rules of privilege and obligation, also rights and prohibitions - much of this in real or tabulated tokens (largely money), or information about imaginary money representing privileges. The risks here are accidental or deliberate massive mis-performance in information management that is not nearly as easily diagnosed and fixed as crashed trucks causing Manhattan gridlock. We will become dependent on the results of very complex processes occurring at lightning speed, unable to understand and check that the processes are valid - i.e., what intelligent, honest, capable people would have decided.
And AI involvement in military areas, where the most proactive combatant is likely to end up least damaged? Would you not want our system to be preemptive in a serious confrontation? Suppose that our country's AI analysis of international tension is this: Our system "thinks" that the opponents' system thinks that our system thinks, etc … that there's eventually going to be a nuclear war. Best, by far, to be the first to launch, right? Consider even a more limited situation: a surprise missile is coming in - reconnaissance? or a nuke? Shouldn't we arrange for our instant automatic response to be blasting such an intruder before it might blast us?
AI is going to be unavoidable: On behalf of its users, it decides and acts before normal sluggish folk have even begun to know that there's something afoot to be dealt with. But overwhelmingly troublesome is this: that no one can really trust it - we don't know just why it's doing what it does. Was your own system (or someone else's) properly programmed and/or trained? We techies are creating and setting loose, or making available, frightful sorcerers' apprentices that scheme and perform reflexively, instantly. And for the seriously malevolent - what a playpen!
P.S. Oh, yes, climate change? Even well-meaning people can mess things up by automating normal economic processes. Imagine forests mowed down by largely automated machinery - to create farmland for feeding the world's out-of-control population. Or, while there are still fish in the oceans - yet surviving the warming waters - imagine the efficiency, and the necessity, of self-piloted fleets of fishing boats pursuing and catching those last fish.
The situation is becoming dire, but we're neither hopeless nor helpless. Alert and responsible, we do need to devise and enact a vast new set of rules and regulations, licenses, inspections, scrutiny and enforcements. [Don't count on me for much of that: those of us born in 1931 are already 3/4 dead. Good luck to y'all.]