"The responsibility for humanity’s future need not rest entirely in the hands of tech companies. But Silicon Valley will have to resist its more shadowy corporate tendencies and spend more time sharing ideas. Because the only prescription for its AI ethics fever may be more sunlight." -- Jeremy Hsu, Backchannel
About This Workshop
If a driverless car causes an accident that kills pedestrians, who is at fault: the AI or the humans? How can we stop weaponized AI from getting into the hands of dangerous criminals? Can the scientists developing AI today guarantee that in 50 years or more, AI won't destroy us all? And who will ultimately own, protect, and make money from our AI data?
These are but a few of the questions that we, as a society, need to answer as we race to develop artificial intelligence that will fundamentally change our world. The short list of questions above may seem a bit too "out there" to be considered real problems that need to be solved today (killer robots be damned!), but here's a more grounded, though equally difficult, question that experts are struggling to answer:
"The real issue — though it doesn’t have the same ring as “killer robots” — is the question of corporate transparency. When the bottom line beckons, who will lobby on behalf of the human good?"
Some of the world's most respected scientists and tech leaders have expressed grave concerns about the potential for AI to harm humanity if it is not developed responsibly (read "Open Letter on Artificial Intelligence" signed by over 150 science and technology experts HERE).
In response to the very real threat of AI causing more harm than good, and to address these deep questions of ethics in AI, the big tech companies have begun forming partnerships to define and solve these ethical dilemmas. These "AI consortiums" include OpenAI, co-founded by Elon Musk, which counts Peter Thiel, Sam Altman, and Reid Hoffman among its leading members, and the Partnership on AI, formed by Google, Microsoft, Facebook, Amazon, and IBM. The mission statement of the Partnership on AI sums up the shared goal of these consortiums:
"Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society." -- Partnership on AI
Join us for this special workshop, led by guest presenter Joyce J. Shen (Global Director, Emerging Technology Partnerships and Investments, Thomson Reuters), which will define and analyze AI consortiums, their members, the research they're doing, and who the key players are. If these consortiums are being formed to address the good of society, why are they currently so secretive? Should the general public be allowed to participate in defining the ethics of AI and, if so, how would this happen?
We'll also be reviewing the 23 Asilomar AI Principles, published just a few weeks ago, which are guidelines covering research issues, ethics, and values (short-term and long-term) in AI, including how scientists should work with governments and how lethal weapons should be handled (among other things). The document was endorsed by hundreds of AI professionals, including Elon Musk and Stephen Hawking.
Gain a thorough understanding of AI consortiums and the companies and executives playing a leading role in defining these entities and their goals. Learn what tech companies are doing to stave off AI disaster and how they are investing their money to protect their own interests. Participate in an interactive group exercise where you will get a chance to discuss and put forth your own solutions to some of the most pressing problems in defining AI ethics.
This is a non-technical workshop meant for anyone: marketers, advertisers, developers, product managers, investors, data analysts, students, policy wonks -- all are welcome. Four articles worth reading before attending the workshop: (1) Tech Leaders Are Just Now Getting Serious About the Threats of AI, (2) Tech Giants Team Up to Keep AI from Getting Out of Hand, (3) Open Letter on Artificial Intelligence, and (4) Let's Talk About Tech Literacy (by Joyce Shen).
As the Global Director of Emerging Technologies in the CTO office, Joyce (@joycejshen) built and oversees the end-to-end emerging technologies practice at Thomson Reuters. She leads emerging technologies research, startup and ecosystem partnerships, and the emerging tech venture fund.
Joyce is also a faculty lecturer at UC Berkeley School of Information for the Master of Information and Data Science (MIDS) degree program.
Before Thomson Reuters, Joyce was a founding executive member and the first CFO of Cloud Platform at IBM, responsible for launching the Bluemix cloud platform and the Bluemix Garage, and for deploying the $1B investment. Joyce also spent several years at IBM Corporate Development, leading numerous acquisitions and divestitures. Joyce received her undergraduate and master's degrees from the University of Chicago. She is a regular writer and speaker on emerging enterprise technologies, startup ecosystems, and product innovation.
The NYU Entrepreneurial Institute leads University-wide initiatives to launch successful startups and commercialize technology created by NYU’s 60,000 students, faculty, and researchers. We are a team of startup experts that offers educational programming and events and helps identify funding opportunities for aspiring entrepreneurs. The Mark and Debra Leslie Entrepreneurs Lab is a 6,800 sq-ft space where entrepreneurs from across NYU can connect, collaborate, and explore how to turn ideas and inventions into startups. The Leslie eLab offers a vast array of resources to the NYU community.