The Goal of this Think Tank:
"I think human extinction will probably occur, and technology will likely play a part in this. [We could possibly create] a fleet of artificial intelligence-enhanced robots capable of destroying mankind." -- Elon Musk
This discussion and thought exercise will review (and be based on) the Asilomar 23 AI Principles document published earlier this year -- a set of 23 guidelines for how we should develop AI, created by more than 100 of the top scientists, researchers, and business leaders working on AI: Stephen Hawking and Elon Musk backed 23 principles to ensure humanity benefits from AI
The goal of this event is to give anyone interested in this topic an opportunity to participate in a conversation on the moral and ethical consequences of developing Artificial Intelligence (AI), and to propose ideas on how we can develop AI safely and beneficially. Ask questions, get answers, connect with other people.
This is a non-technical workshop. Anyone and everyone can participate: marketers, advertisers, developers, engineers, product managers, investors, data analysts, students, policy wonks, aliens from Mars -- all are welcome!
The Discussion: What is AI?
The truth is nobody knows for certain what AI is or, more importantly, what it's becoming. Ask a scientist, a technologist, a corporate executive, and an ethicist to define AI and they will each likely have very different definitions and understandings of the technology.
So, where does this leave the rest of us?
Join us as we launch the Tech 2025 Think Tank (the people's think tank) AI series, to explore the ideas, ethics and technologies powering artificial intelligence now and into the future. Each Think Tank event is an exploratory adventure into the unknown of emerging technologies and ourselves as we define and redefine what AI is and what it should be.
Every event begins with the question: What is AI? We do this to constantly remind ourselves that the definition of AI is not rigid and absolute; rather, its meaning is fluid, changing constantly as the technology develops, and it should be subject to analysis, critique, and rigorous debate. We are all contributing to defining this powerful new technology as we use it in our homes, at work, and for entertainment each and every day.
The Thought Exercise: How can we develop safe AI?
Earlier this year (January 5–8), in Asilomar, California, more than 100 Artificial Intelligence experts gathered for the Beneficial AI 2017 Conference, hosted by the non-profit Future of Life Institute, whose scientific board of advisors includes Elon Musk, Larry Page, and Stephen Hawking, among others. The attendees at this exclusive event included researchers from academia and industry, as well as thought leaders in economics, technology, law, ethics, and philosophy -- some of the greatest minds in science and technology working on AI.
The purpose of the conference: to discuss and create guiding principles of "beneficial AI" to keep us from developing AI technologies that can eventually harm (and maybe even destroy) humanity.
Their solution was a mutually agreed-upon document called the Asilomar 23 AI Principles -- a list of 23 guidelines they suggest all AI researchers, scientists, lawmakers, and tech companies should follow when developing and implementing AI. The document has since been signed by more than 3,500 science and tech influencers (see the full list of signatories HERE).
Read and download the Asilomar 23 AI Principles document HERE. Also, you may want to watch THIS VIDEO of conference attendees discussing the future of AI on a panel including Elon Musk, Stuart Russell, Ray Kurzweil, Sam Harris, Nick Bostrom, and others.
The incomparable Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, joins us once again to guide us in exploring the intent, meaning, and implications of the Asilomar 23 AI Principles, and whether a set of existing guidelines can really safeguard humanity from an AI catastrophe -- or even be enforced in this fractured, complicated world. Dr. Baum's expertise in global catastrophes and risk will give us unique insight into this topic and guide us toward alternate ways of thinking about AI risk. What questions should we be asking of researchers, tech companies, the government, and ourselves about these safeguards? And who are these 100 AI experts guiding our future?
Put your thinking cap on, connect with other people who are as interested in this topic as you are, and meet Dr. Seth Baum (this might be your only chance to talk to a global apocalypse expert!). This is an interactive think tank with group exercises and discussions -- thinking is nice, but thinking and doing is best!
Dr. Baum's research focuses on risk and policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, and runaway artificial intelligence. Baum received a Ph.D. in Geography from Pennsylvania State University and completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions. Follow him on Twitter @SethBaum and on Facebook at http://www.facebook.com/sdbaum
Light food and soft drinks included.