About this Workshop
[NOTE: This is the first event in our new Mission AI program. Learn more about it here: http://bit.ly/missionai. Space is limited to 40 people.]
By the year 2020, global spending on artificial intelligence research is estimated to reach $46 billion, with money pouring in from big tech companies, private investors, public institutions, and governments. AI is set to change humanity and our world in unimaginable ways, and AI research is the fuel catapulting us into this next era of technological innovation.
What is AI research? Who are the key players (people and companies) in AI research today? Which research is having the most impact on innovation? And how can becoming well-versed in AI research help you to think more creatively about AI and to level-up your innovation game?
This workshop, taught by Dr. Seth Baum (Executive Director of the Global Catastrophic Risk Institute and an AI researcher specializing in AI safety and risk), will introduce you to the global AI research ecosystem: the researchers, the companies, the conflicts, the money funding it, and the research leading to new innovations.
You will learn (from someone who has been doing AI research for over a decade) how to stay on top of the latest research without becoming overwhelmed, and how you can participate in AI research even if you aren't an academic or a technologist. And finally, you'll walk away with a diverse list of resources to stay on top of AI research and to further explore AI research that interests you the most (or that is most relevant to your professional goals). There will also be plenty of time for Q&A with Dr. Baum.
Prerequisites & Preparation
No prerequisites required.
This workshop is for anyone who wants to level up their AI knowledge and gain a more nuanced understanding of how AI research is currently being produced, funded, and used to develop new technologies. It is especially ideal for product managers, developers and engineers, marketers, tech journalists, investors, entrepreneurs, students, and teachers.
Takeaways
This workshop will cover:
1. the different types of AI research (e.g., computer science vs. social science)
2. how Google went from a research paper to a product and startup (an example of research that changed our world)
3. academic research on the social and policy dimensions of AI
4. the research ecosystem that is specifically focused on long-term forms of AI
5. research focused on building AI
6. debates and controversies within AI communities, including the Google Project Maven incident
7. who the major players are in building AI
8. research communities in different countries
9. money being invested
10. how the media covers research
11. the use and impact of data in research
12. the AI Safety Research program (funded primarily by Elon Musk) to help researchers find safe uses for AI (Dr. Baum will also provide a brief overview of the research proposal he submitted to the program)
Dr. Seth Baum is Executive Director of the Global Catastrophic Risk Institute (GCRI) and a renowned researcher whose work focuses on risk and policy analysis of catastrophes that could destroy human civilization, such as global warming, nuclear war, and runaway artificial intelligence. He leads GCRI’s planning and management and contributes to GCRI’s research. He is also a Research Scientist at the Blue Marble Space Institute of Science and an Affiliate Researcher at the Columbia University Center for Research on Environmental Decisions. He holds a Ph.D. in Geography from Pennsylvania State University (2012), an M.S. in Electrical Engineering from Northeastern University (2006), and B.S. degrees in Optics and Applied Mathematics from the University of Rochester (2003).
He also completed a post-doctoral fellowship with the Columbia University Center for Research on Environmental Decisions.
Dr. Baum's Recent Research and Articles:
* Preventing an AI Apocalypse, published in Project Syndicate and reprinted in The New Times (Rwanda), Khaleej Times (United Arab Emirates), World Economic Forum, MarketWatch, Japan Times, Asia Times, Médias24 (Morocco), Times of Oman, Khmer Times, Taipei Times, and Dagens Perspektiv (Oslo)
* Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing, with Steven Umbrello, in Futures (academic)
* A model for the impacts of nuclear war, with Tony Barrett, GCRI working paper (academic)
* A model for the probability of nuclear war, with Robert de Neufville and Tony Barrett, GCRI working paper (academic)
* A survey of artificial general intelligence projects for ethics, risk, and policy, GCRI working paper (academic)
* Modeling and interpreting expert disagreement about artificial superintelligence, with Tony Barrett and Roman Yampolskiy, in Informatica (academic)
* Social choice ethics in artificial intelligence, in AI & Society (academic)
* Towards an integrated assessment of global catastrophic risk, with Tony Barrett, in proceedings of the colloquium Catastrophic and Existential Risk (academic)
* Reconciliation between factions focused on near-term and long-term artificial intelligence, in AI & Society (academic)
* He has also been interviewed in a Gizmodo article on robot personhood and on a Future of Life Institute podcast about his paper A model for the probability of nuclear war.
The workshop will begin promptly at 6:10 PM. You are welcome to bring food and beverages.
There will be in-class exercises. Please bring a laptop, tablet, or other internet-connected device so you can participate.
Below is a list of the AI research and topics that guest instructor Dr. Seth Baum will discuss in the workshop:
WORKSHOP RESEARCH & TOPICS TO BE COVERED
- Your Future Self-Driving Car Will Be Way More Hackable [READ]
- The Environmental Impact of Autonomous Vehicles Depends on Adoption Patterns [READ]
- Silicon Valley Takes a Right Turn [READ]
- The Political Behavior of Wealthy Americans: Evidence from Technology Entrepreneurs [READ]
- CB Insights State of Artificial Intelligence Report (13 AI Trends Reshaping Industries and Economies) [READ]
- Machine morality: bottom-up and top-down approaches for modelling human moral faculties [READ]
- Social Choice Ethics in Artificial Intelligence [READ]
- Artificial Intelligence Is Stuck. Here’s How to Move It Forward [READ]
- Yann LeCun, Yoshua Bengio, & Geoffrey Hinton - Deep learning [READ]
- AI and Ethiopia: an unexpected synergy [READ]
- How Silicon Valley Became a Den of Spies [READ]
- Google Helps Chinese Military, Why Not US? Bob Work [READ]
- Artificial Intelligence, International Competition, and the Balance of Power [READ]
- Artificial Intelligence’s White Guy Problem [READ]
- Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence [READ]
- Could Artificial Intelligence Create an Unemployment Crisis? (paywall) [READ]
- Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life [READ]
- Leakproofing the Singularity: Artificial Intelligence Confinement Problem [READ]
- Safe Baby AGI [READ]
Please bring a current, valid state- or government-issued ID to show building security at the venue.
If you have any questions or need additional information, email us at firstname.lastname@example.org.