23 Guidelines to Avoid an AI Apocalypse According to Experts (workshop)


About This Workshop 

[Photo above: 100+ AI experts attending the Beneficial AI Conference in January, including Elon Musk, seated in the second row. Photo credit: Future of Life Institute]

On January 5–8 of this year, in Asilomar, California, more than 100 artificial intelligence experts gathered for the Beneficial AI 2017 Conference (a follow-up to the 2015 AI Safety conference in Puerto Rico), hosted by the non-profit Future of Life Institute, whose scientific board of advisors includes Elon Musk and Stephen Hawking, among others. The experts in attendance at this exclusive event included researchers from academia and industry and thought leaders in economics, technology, law, ethics, and philosophy.

The purpose of the 3-day conference? To discuss and create guiding principles of "beneficial AI." Here is a portion of the joint statement from the organizers of the event:

"We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

In planning the Asilomar meeting, we hoped both to create meaningful discussion among the attendees, and also to see what, if anything, this rather heterogeneous community actually agreed on. We gathered all the [AI research] reports we could and compiled a list of scores of opinions about what society should do to best manage AI in coming decades."

What this group of 100+ experts came up with is the mutually agreed upon Asilomar AI Principles -- a list of 23 guidelines that they suggest AI researchers, scientists, lawmakers, and tech companies should follow to ensure the safe, ethical, and beneficial use of AI. In short, guidelines to keep us from creating AI and robots that will destroy humanity.

The guidelines have since been covered widely in the media and signed by 3,542 AI and robotics researchers. See the full list of signatories HERE.

"I think human extinction will probably occur, and technology will likely play a part in this. [We could create] a fleet of artificial intelligence-enhanced robots capable of destroying mankind." -- Elon Musk
 

What Will This Workshop Cover?

JOIN US for this special, interactive workshop and discussion, featuring guest presenter Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, as we explore the intent, meaning, and implications of these AI guidelines, and whether having a set of existing guidelines can really safeguard humanity from an AI catastrophe. Dr. Baum's expertise in global catastrophes and risk will offer unique insight into this topic and guide us toward alternative ways of thinking about AI risk and the questions we should be asking of researchers, tech companies, and the government about these safeguards. Additionally, who are the 100+ AI experts guiding our future? What can we learn about them that will help us understand how our future is being shaped through AI?

You won't want to miss this workshop!

After the presentation, we will have our popular, interactive group experience, where you will get the opportunity to explore and answer some of the challenging problems in this space with others who are just as intrigued by this topic as you are!

Prerequisite

No technical experience required. This is a non-technical workshop meant for everyone: marketers, advertisers, developers, engineers, product managers, investors, data analysts, students, policy wonks -- all are welcome!

In addition to reading the Asilomar AI Principles, you may want to watch the following video of conference participants on a panel discussing the future of AI, including Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn.

Agenda

  • 6:00pm - 6:15pm - Sign in, enjoy light bites and beverages
  • 6:15pm - 6:20pm - Introduction, Tech 2025 announcements, sponsor acknowledgments
  • 6:20pm - 6:45pm - Presentation by guest presenter Dr. Seth Baum
  • 6:45pm - 7:50pm - Interactive group exercises and problem-solving
  • 7:50pm - 8:00pm - Final thoughts and Q&A

Guest Presenter

Dr. Seth Baum is the Executive Director of the Global Catastrophic Risk Institute, a nonprofit think tank he co-founded in 2011.

Dr. Baum's research focuses on risk and policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, and runaway artificial intelligence.

Baum received a Ph.D. in Geography from Pennsylvania State University and completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions.

Follow him on Twitter @SethBaum and Facebook http://www.facebook.com/sdbaum

New Location 

We'll be at a new location for this workshop, in a more intimate group setting (we're experimenting with various sizes and types of workshops). This workshop will be at 36 East 23rd Street, 9th Floor. Because this is a private space, only people who are confirmed and on the RSVP list will be admitted into the event (so RSVP today if you'd like to join us!).

Register for Workshop

 
Guest Instructor
Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute.
 
Registration: $20
Questions? Contact us: theteam@tech2025.com

Start Time

6:00 pm

Tuesday, July 11, 2017

Finish Time

8:00 pm

Tuesday, July 11, 2017

Address

36 East 23rd Street, 9th Floor, Suite 9F
