An Opinion Piece
I remember my first Tech 2025 event: a cold, dark January night after work when all I wanted was to go home. To my surprise, I had the most fun I’ve ever had at a meetup that night (although, technically, Tech 2025 is not a meetup).
That first event back in January 2017, Introducing the General Public to Chatbots and Conversational Interfaces, opened my eyes to what chatbots are, how they work, and how the skills I already have translate to building them. With a team of other attendees, I designed a Valentine’s Day chatbot. The whole point of the event was to teach people about chatbot technology and to help them understand how they can be involved in shaping the future. For me, it was a success, and it started me down a path of learning even more about emerging technologies.
Tech 2025 Founder and CEO Charlie Oliver will tell you that is the point of every Tech 2025 event. Through her work at Served Fresh Media (the parent company of Tech 2025), Charlie had C-suite executives asking her, in a panic, what they should be doing about emerging technologies such as AI. So many asked that she realized that if the C-suite doesn’t understand these technologies, the average person must not have a clue either. I know I didn’t, and this community inspired me to learn more and upskill, including taking a data science class last year. Now that I have a clue, I really understand why Charlie felt the need to create a community and platform to educate other people.
The future was yesterday, and possibly years ago, but the average person still doesn’t really understand data science or AI, and now we’re seeing the beginnings of laws and regulation. But how much can those laws and regulations help if the people they are meant to protect aren’t involved in the process? Palantir’s software is a good example of technology fraught with bias that has the ability to ruin lives, deployed without most of the New Orleans City Council even being aware of it (Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology). Even worse, some of the people who have the most influence over these laws and systems are just as uninformed as the people who will feel the impact the most.
New York City is looking to address the automated decision systems its agencies use. Its new algorithmic accountability law, the first of its kind in the country, will create a task force to identify and assess the automated decision systems the city’s agencies use (Could New York City's AI Transparency Bill Be a Model for the Country?). The experts on that task force are unlikely to be the people who have to access the resources or opportunities these systems gate. The law does specifically mention involving charitable organizations that represent the people impacted by these systems. But because this is the first law of its kind in the country, we need participatory research: a method of research that emphasizes participation and action in cooperation with communities. Without substantively including the community in this process, it will just be a bunch of "experts" and industry people continuing their reign over the working and shrinking middle class of the city.
The people who will be (and already are being) impacted need a seat at the table to regularly weigh in on these systems and laws. They need to be part of defining what is "fair," what makes sense, and what to do when a system is found to be biased. After the algorithmic task force publishes its report, and perhaps reveals biased systems at several city government agencies, who reviews the new algorithms that replace them? And how do we make sure the average person is aware new systems are being put in place? You can vote a mayor out of office, but you can’t vote an algorithm out of an agency. The average citizen does not get to elect who sits on the automated decision system task force, and has no recourse if a bad algorithmic model is implemented. It only makes sense to me that, as a democracy, participatory research should be part of the process that produces the report.
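To give a sense of what "finding something to be biased" can look like in practice, here is a minimal sketch of one common audit: comparing the rate at which a system approves members of different groups, using the "four-fifths rule" from US employment law as a threshold. The data and group labels here are made up for illustration; real audits use many more metrics, and which metric counts as "fair" is exactly the kind of question impacted communities should help answer.

```python
# Minimal sketch of a disparate-impact check on an automated decision system.
# The decisions below are invented example data, not from any real agency.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """True if the lowest group's approval rate is at least 80% of the highest."""
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical outcomes: group A approved 60% of the time, group B 30%.
decisions = [("A", True)] * 60 + [("A", False)] * 40 + \
            [("B", True)] * 30 + [("B", False)] * 70

rates = selection_rates(decisions)
print(rates)                      # {'A': 0.6, 'B': 0.3}
print(passes_four_fifths(rates))  # False: 0.3 is less than 80% of 0.6
```

A check like this is easy for a task force to run, but deciding which groups to compare, what threshold matters, and what remediation follows a failure are political questions, not purely technical ones.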
One of the people who has been doing participatory research since the 1970s is Budd Hall, a UNESCO Co-Chair in Community-Based Research based at the University of Victoria, Canada. In a 2017 dialogue with his co-chair, Rajesh Tandon of the Society for Participatory Research in Asia, India, he described his disappointment early in his career at failing to come up with courses that interested the general adult population, even though he sent out several surveys to collect data. It wasn’t until he was sitting in a bar with a friend, expressing his frustration with the courses’ low attendance, that he came up with the idea of simply talking directly to people. People told him what would be most useful to them, and it worked.
“After a couple of hours, I realized that I had learned more about the needs of the people by simply listening and talking to them. This was the beginning for me, when I realized that the way we acquire knowledge, the way we learn about people’s needs, the way we construct our ideas of community, people and identity in relation to each other, is really dependent on our ability to establish a relationship where you can listen and learn. It is here that I learned to ‘shut up’ and ‘listen’. Learning to listen is one of the most difficult things to do and I am still working on it.” -- Budd Hall (Participatory research: Where have we been, where are we going?)
We, as a society, need to learn to listen. One of the biggest problems we currently face, in both the public and private sectors’ use of automated decision systems, is bias against people of color, women and non-cisgender people, people who don’t identify as heterosexual, non-Christians, the poor and working class, people living with disabilities, and the formerly incarcerated. While participatory research is no silver bullet, it is very important that the city and its agencies listen to these people. If you look at the majority of people involved in data science and AI in the US today, it is a pretty homogeneous group in terms of race, gender, sexual orientation, class, and, somewhat, religion. While it’s important and possible to find a diverse-ish set of professionals to lead the task force and represent different racial backgrounds, LGBTQ+ identities, and religions, from a class standpoint you’re not going to find many data science and AI experts who are also working class or poor, and especially not people who have been previously incarcerated.
People who will ultimately be impacted by decisions around affordable housing, school, employment, and more won’t have significant representation. It’s good to have subject matter experts gather and meet, and even charitable organizations that represent those who would be impacted, but there are subtleties in the experience of belonging to one or more of these groups that people who aren’t living through it will miss. You can study a subject all you want and get to know the needs of a group of people, but when it comes to protecting people’s rights, it is very important to understand the outliers, because they are humans who will be denied benefits and/or employment. Both Virginia Eubanks, author of Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, and Cathy O’Neil, author of Weapons of Math Destruction, agree it is a big problem when governments and companies use automated decision systems to make complicated, politically charged decisions without human accountability.
In her book, Eubanks discusses the Vulnerability Index - Service Prioritization Decision Assistance Tool (VI-SPDAT), a survey that tries to prioritize homeless families and individuals for the scarce housing available to them in Los Angeles. The survey’s questions could potentially incriminate those trying to access much-needed services by asking about their drug use and whether they have had sex in exchange for drugs or money. And as Eubanks pointed out in a recent talk I attended, LA’s counted homeless population, about 50,000 people, is roughly the size of the town she lives in now. There are far more homeless people in LA than there is housing for them. To try to determine who is "more" in need, the city opted to have an automated decision system make the call instead of humans. While it is necessary and important that cities like Los Angeles and New York involve industry and academic experts to audit these systems, those people are experts in data science and AI, not in homelessness, addiction, low-income and affordable housing, education, disability, unemployment, or other social subject matter areas.
While bringing in charitable organizations that are experts in those areas is a good step, the best subject matter experts are the people who use these systems today, because they are the recipients of the decisions. Eubanks interviewed several homeless people who took the survey, and they are aware of how certain answers can raise or lower their place in the housing lottery. Still, several opt not to disclose, or not to answer 100% truthfully: while some responses can increase your ranking, they can also require you to reveal very sensitive information. If you were or are a drug user, would you disclose that to the government and trust that the police wouldn’t show up to arrest you? Or maybe the laws change in a year, and the information about your drug use that you gave disqualifies you from other services you need.
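The tradeoff those interviewees describe can be sketched in a few lines of code. This is a deliberately simplified, hypothetical scoring function; the field names and weights are invented for illustration and are not the real VI-SPDAT questions or scoring. The point it shows is structural: when undisclosed answers score zero, declining a sensitive question mechanically lowers your priority.

```python
# Hypothetical, simplified sketch of a survey-based prioritization score.
# Field names and weights are invented; they are NOT the real VI-SPDAT.

def vulnerability_score(answers):
    """Sum weighted points for each disclosed risk factor.

    Undisclosed (None) answers score zero, so declining to answer a
    sensitive question lowers a person's place in the ranking.
    """
    weights = {
        "chronic_health_condition": 3,
        "years_homeless": 2,   # weight multiplied by the number of years
        "substance_use": 4,    # sensitive: many decline to disclose
    }
    score = 0
    for field, weight in weights.items():
        value = answers.get(field)
        if value is None:
            continue  # nondisclosure earns no points, lowering priority
        score += weight * int(value)
    return score

full_disclosure = {"chronic_health_condition": True,
                   "years_homeless": 3,
                   "substance_use": True}
withheld = {"chronic_health_condition": True,
            "years_homeless": 3,
            "substance_use": None}  # declines the sensitive question

print(vulnerability_score(full_disclosure))  # 13
print(vulnerability_score(withheld))         # 9
```

The same person, with the same need, ranks lower simply for protecting themselves, which is exactly the bind Eubanks’ interviewees describe.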
Most recently, Wisconsin began planning legislation that would disqualify people from welfare based on the results of mandatory drug testing (Wisconsin's Welfare Overhaul Is Almost Complete). I can promise you none of the people of Wisconsin on welfare proposed that law. In an age when you cannot trust anyone with the data from your Hogwarts sorting hat quiz, it’s alarming that the government would ask for such personal information (voluntarily given or mandated) in exchange for a service you need to live and survive in 21st century America. It would be like the government of Michigan offering the residents of Flint free, drinkable water during the ongoing lead crisis in exchange for sensitive information about themselves. The homeless of LA are left out of the process that shapes who gets housing and who doesn’t. I highly recommend picking up Eubanks’ book if you want to read more about LA’s VI-SPDAT or where other cities’ agencies have gone wrong.
While, in theory, New York’s new law will mitigate algorithmic bias, without participatory research we are still failing those who need to access these resources, much as Hall failed to create impactful courses until he talked to the very people he wanted in his adult education classes in the first place. The stakes are high, and with increasing income inequality and the displacement of communities of color, New York City cannot afford not to listen. We cannot afford to fail. The challenge to the data and AI community and to local and state governments is to learn to listen and to create sustainable ways to give those impacted by these systems a seat at the table.
If New York City wants to have a real impact on making the algorithms its agencies use “fair,” it needs to come up with a system that regularly engages the very New Yorkers the government is supposed to serve, and holds algorithms accountable to them.
- New York's local law in relation to automated decision systems used by agencies: https://on.nyc.gov/2HUTYwd
- My favorite article about how algorithmic impact assessments should be run: http://bit.ly/2HZmrkE
Featured Image Credit: FiveThirtyEight