Kenneth C. Knowlton’s “Terrible Thoughts” about AI Automation (and they have nothing to do with the Future of Work)

Kenneth Knowlton (right) with his childhood friend who came out to support him (let's hope friendships like this are still around in the future!) Photo Credit: Served Fresh Media 

From Terrible Thoughts to a Brighter Future?

There is something bone-chilling about ominous warnings of a dire future for humanity coming from someone who has lived for almost an entire century – long enough to have seen wars, to have watched governments rise, crumble, and rise again, and to have witnessed the long-term impact (both positive and negative) of humanity’s innovation and creativity.

At a recent MoMA event featuring Kenneth C. Knowlton's early work (computer graphics innovator, artist, and member of the Bell Labs Research team from 1964 to 1982), the 87-year-old Knowlton shared his detailed and disturbing thoughts on how humanity will be impacted by AI automation in the future and what we should do about it. His commentary (a direct response to a question I asked him during the Q&A) not only made the hairs on my arm stand on end, it stunned the entire audience into awkward silence, punctuated only by occasional gasps at the sheer boldness of his revelations and his willingness to share them at length.

Kenneth C. Knowlton

Knowlton seems to have thought a lot about humanity's automated future and, based on the first video clip below of him reminiscing about his childhood, he’s been pondering all of this at least since he was 8 years old. To say he's concerned would be putting it mildly. My question (an unexpected one at an event focused on art and computer history) seemed to unleash a waterfall of thoughts Knowlton has been having on the topic.

In an almost 10-minute response to my question, he admits to having "terrible thoughts" about AI automation and cites examples of AI’s potentially destructive impact on society, including neural nets making life-and-death decisions for us that we will not be able to explain (the “black box”), and networks of machines forming gangs that compete against each other using humanity as pawns (his concerns have nothing to do with work, or the lack thereof, due to automation). Yes, it all sounds a bit sci-fi. But unlike Elon Musk, who, though respected for his innovative work, struggles to be taken seriously about his AI doomsday warnings, Knowlton delivered our dark future with a thoughtfulness, wisdom, and patience (we may be hurtling toward a self-destructive future, but he took his time sharing his thoughts) that not only commanded our attention, but engaged us in a substantive way that Musk has not.

And again, in contrast to Elon Musk's maniacal, self-serving doomsday warnings, in which he positions himself as saving the world (hello, Mars!), Knowlton seems resigned to what he sees as an almost inevitable future that he wants no part of. He ends his dire forecast by saying, essentially, that we are all in for a mess of trouble with AI, that we have to fix it immediately, and that our deep denial won't postpone the inevitable. His last words were unnerving yet funny: "I won't be here, so good luck with all of that." Several people in the audience laughed nervously when he said this, but I couldn't help feeling that the enormity of what we must do to keep AI from doing irreparable damage to humanity had been dropped in our laps like a big pile of poo. And I'm sure several people wished they could "check out" of planet Earth in some way, as Knowlton soon will, to avoid the inevitable.

When an 8-Year-Old Worries About the World of Tomorrow

After a screening of several “computer films” inspired by his groundbreaking work in computer graphics over 50 years ago, Ken Knowlton participated in a lengthy, thought-provoking Q&A with two moderators and the audience. In one of his answers, he explained how going to the World’s Fair with his aunt in 1939, when he was just 8 years old, shaped how he saw machines and automation changing the world (and it was this insightful memory that led me to ask Knowlton the question that would send him on a prophetic deep dive):

"My Aunt Marie, in New York, took me to every museum, and gallery, and planetarium, and the opera. And when I was 8 years old, she took me to the New York World’s Fair in 1939. The theme was “The World of Tomorrow.” I walked away from that with a problem. And the problem was this: Machines would be doing so much of the work that we would have so much leisure time, what would we do with so much leisure time?!" -- Ken Knowlton

As he continued answering questions and sharing anecdotes from his 60+ years of developing technologies and art, his comments about his experience at the World’s Fair stood out to me the most, for a number of reasons, not least the fact that, just a few months ago, we published a blog post on how World’s Fairs inspire people to think differently, The World’s Fair: When an Event about Innovation Needs Innovating (written by Anne T. Griffin).

My question for Mr. Knowlton, which was the last audience question of the evening, wasn’t so much about the World’s Fair as it was about how the concern he felt at only 8 years old, about the displacement of human beings by machines, is one that we still grapple with today.

My question to Ken Knowlton:

"You mentioned that you worried about what human beings would do with all of the free time they would have after machines automate everything for us. Today, we still have this concern, now more than ever before. What would your advice be to us moving forward on how to grapple with the impact of AI automation on society in the near future?"

His answer and predictions about our future with AI were unapologetically blunt and unsettling. Watch the video of his answer below. After the video, you can read his answer as well.

Ken Knowlton’s answer to my question (transcribed verbatim):

"Well, I have some terrible thoughts about automation. It's not about work and getting paid for it. It's about machines making decisions that we don't understand.

I think we will come to a time when machines will face serious questions, very serious questions, like involving war, or huge forest fires or floods, or something like that – epic events that need massive protections. And I think that the machines making these decisions about how to deploy our resources are going to make nonsense decisions, nonsense recommendations (or seemingly so) and we won't know why or how. For example, these will be made sometimes by neural net, the intricacy of which we don't understand. We just know that having been fed a million pictures of, let's say, cats and dogs and children, the cats and dogs and children will be recognized as such [by the algorithms]. But if faced with something absolutely new, we may not know why the car [controlled by AI] seems to do absolutely the wrong thing in a certain situation because you don't even know what the fine structure of the neuronet has done in its learning because you never exposed it to this particularly new thing, or a new environment, or whatever.

Also we don't know which machines will be scheming together against other sets of machines. You only know which machines are these individual groups, if there are individual groups, competing. Why wouldn’t they, after all? Everything we do is competition; the reason to compete, and for working together and cooperating, is in order to compete better against another group that is working together or cooperating. This is true of religious groups, business groups, geographic groups, racial groups and so forth. The main and most unfortunate thing is that we are guided and pushed, or impelled somehow, by the sense of being better than, more powerful than, richer than, more lushly surrounded by our possessions. And, at the very top of that, major groups of governments and countries, against other groups of countries – something's going to blow one of these days or something is going to happen.

If a bomb goes off – let's say a nuclear bomb somewhere – now, are we going to trust our very highly intelligent systems to know or to guess whether that was an accident or intended. And whose bomb was it? And if the machine says, “We're 80% sure that that was an accident,” are we going to accept that kind of thing?

That’s going to really mess us up!

And with a cooperating bunch of humans not knowing what to do when machines are making important decisions like that… it troubles me. We're going to be in really tough situations anyway with populations expanding and resources declining, and seas rising and all that – I think that's for real.

You can't brush it away by hoping it away or arguing it away. Nature is going to take its course. We're gonna be in trouble. I mean, you’re gonna be in trouble. I'm gonna be gone."

Goodbye and Thank You Mr. Knowlton

After the event, audience members commented on how much they appreciated my question and how much Ken Knowlton's answer resonated with them, though they couldn't say exactly why. What was it about Knowlton’s predictions about our AI future that resonated with this audience (an audience that has surely heard these things before)? It’s hard to say, but I walked away from that event feeling as if I had just been given a peek into our most likely future, and as if we had been handed a seemingly impossible challenge: this could all go terribly wrong for humanity; what are you going to do about it? None of this is new, of course. We've been hearing these concerns from the science and tech communities (among others) for a while, but sometimes it's not the message that needs to be changed, it's the person delivering the message (no shade to Elon Musk).

Also, the point is not so much that we arrive at a definitive, general consensus about the future of AI but that we have these discussions in public, together, and in unexpected places (like an art museum) with people from all backgrounds. The more we have these discussions, the better we'll be able to prepare for (and co-create) our future.

Thank you Kenneth Knowlton for your contribution to technology, art and the world, and for sharing those "terrible thoughts" that made us clutch our pearls and think a bit deeper and differently about our collective future.

Kenneth C. Knowlton (left) during his Q&A with moderators at MoMA (on the screen is one of his childhood class photos).

Charlie Oliver
Founder and CEO of Tech 2025 and Served Fresh Media. Unapologetic instigator of provocative discourse.

    1 Comment

    May 3, 2018

    AI: Right Answers to the Wrong Questions?
    © 2018 Ken Knowlton

    By programmers’ brilliance, and machines’ speed and memory size, AI (Artificial Intelligence) is leaping ahead of human ability. Game playing programs now beat the best humans – in Chess, even in Go. They may (or may not) be the best strategists for matters financial, military, political and/or environmental. We need clearer thinking, and feeling, in this thicket. We may be putting the cart before the horse, dealing with numeric values of matters that are defined weakly, if at all; nevertheless we compute extravagantly.

    AI is a set of methods for dealing with complex situations – for maximizing values that stand for the well-being of individuals, groups, and/or societies. We
    (1) model an environment-of-concern,
    (2) predict futures of the system under various presumptions, and
    (3) choose (or have chosen for us) the best course of action.
    We presume that resulting predictions – of accidents, health, longevity, incomes, possessions, and similar quantifiable matters – lead to good choices and actions.

    The problem is this: without AI, we normally choose, not by how we think about the future, but how we implicitly expect to feel about it. Where to live, what trade or profession to prepare for? What religious or philosophical belief system to follow? We imagine various possibilities and decide what seems/feels best.

    From birth, and ever after, we experience pain, pleasure, hunger, love, tiredness, uneasiness, etc. We care. After a while, exactly those words and many others, express states of being, along with numerous modifiers for nuances. How could automatic processes deal in such terms? They are not conscious, not empathetic. They do not experience what we experience. Consider, for examples, states that I might feel myself to be in, or others see me as, in response to a perplexing situation:

    abnormal, absent, absurd, accessible, accomplished, accountable, accurate, accused, active, adequate, admired, adorable, affected, afflicted, afraid, aggressive, agreeable, alarmed, alive, alone, amazed, ambiguous, amused, amusing, anchored, angry, annoyed, annoying, anonymous, antipathetic, anxious, apologetic, appreciated, appropriate, approved, arbitrary, argumentative, artificial, artistic, ashamed, assaulted, assured, astonished, attentive, authentic, authoritative, authoritarian, autonomous, average, awake, aware, awkward. (Only “a’s” here; would you like the b’s, c’s … ?)

    Words like these describe my experience – who and what I am (or seem to be); this is how and where I live. These issues are the bases of my “intelligent” (at least human) response. AI systems cannot be, or experience, such states. At best, AI presents one or more futures in sufficiently rich terms that I might imagine how they would feel. (I should not ignore them: AI’s predictions may be more realistic in quantitative terms, and intertwined complexities better handled, than I might manage.)

    (AI predictions will, of course, be available to groups, regions, countries, etc., but the general assertion remains, except that it’s difficult to say how a group or country might “feel” about – i.e. react to the thought of – something.)

    Situations vary. Sometimes there’s no time for human contemplation – when AI systems become essential for immediate analysis and automatic response. One example: it “thinks” that incoming missiles are detected from a potentially hostile region. What should we have arranged for our AI systems to do, automatically and instantly?

    More generally: As soon as our AI system “thinks” that a nuclear war is imminent, shouldn’t our AI systems act first, preemptively and decisively? Or with more look-ahead, imagine this: as soon as my system thinks that the opposing system thinks that my system thinks (etc.) … TAKE THE INITIATIVE!

    How do we feel about the unfeelingness of AI? Uneasy, with such unpredictable sorcerers’ apprentices!
