Issue #1 | Summer 2022
We Are(n’t) AI (Yet)
- in an interview with Prof. Julia Stoyanovich
Julia Stoyanovich has dedicated her career as a computer scientist to responsible data management and responsible AI. She is a fierce advocate for educating the public about the impact that AI and algorithms have on their lives – by offering free public library courses and even creating comics.
You are the founder of the Center for Responsible AI. The Center’s claim is “We build the future in which responsible AI is the only AI.” How close are we to that goal?
Is responsible AI synonymous with AI today? Unfortunately, not yet. We are still quite far from that. It will take a combination of approaches and solutions to get there, and some of the solutions will be technical. Better algorithms built by people who are aware of the issues will help. But I think at this point, the main gap is not really algorithmic.
What else is needed?
We don’t have a shared understanding of the role AI should play in society. And in order to reach that understanding, we need to get people to understand what AI can and cannot do. Among the public at large, there is a lot of magical thinking about AI. This hands a lot of power to the people who are developing this technology – perhaps even power they don’t really want, because they are not necessarily trained in law or the social sciences. We shouldn’t put technologists in a position where they have to adjudicate some of these issues with AI through code.
Do we need a broader discussion on what AI can do for us?
We are running a course right now called “We Are AI.” It’s a public education course that we offer through the Queens Public Library in New York City, available and accessible to everyone, irrespective of their background, math knowledge or programming experience. The course is free of charge; we even give people gift cards for attending. The message is: “AI is what we want it to be.” My colleagues with whom I developed this course, as well as the students, say: “But this is not the case right now. We don’t have control over AI.” Indeed, today we are, for the most part, subjected to these systems. But by making the claim that “AI is what we want it to be,” we assert ourselves. We should really be speaking in the present tense and very forcefully about what we want the world to look like. And by doing that, we begin moving in the right direction.
One of the constituencies I’ve been thinking about when developing transparency and interpretability mechanisms is regular members of the public. I’ll give you an example: There are lots of algorithmic systems being used at various stages of the hiring process. We all look for jobs at some point in our lives. With the degree of automation that is already present in that industry, every one of us has either already been affected or will be affected by AI systems when looking for a job. Access to jobs is access to an essential economic opportunity; it’s not optional. This is a crucial domain in which we should be figuring out how to regulate the use of AI. We need to find ways to educate job seekers about the systems that they are subjected to, and to explain to employers – companies that are buying such tools – what they are paying for. Regarding the employment sector, we have to know exactly what the algorithms do. Do they look for candidates who may not even know that they’re a good match? Do they match resumes to positions? Do they determine the level of compensation? All of these possibilities have different requirements and different error margins, and different tools are used for each of them. So you have to be very specific about the domain while speaking to the stakeholders. This is key.
What might this look like in practice?
I’ve been developing “nutritional labels” for job seekers, to enable different kinds of interaction between a job seeker and the hiring system. One specific label would accompany a job ad. I call it the “posting label”. These labels should document what the position’s qualifications are, so that you can tell whether you’re qualified, what data the potential employer will use to screen you, what screening methods they will use, whether you can opt out at any point, and which features a particular screener will look at. This kind of label is going to enable informed consent on the part of the job seeker, helping them decide whether to apply for the job and to request accommodations or an alternative form of screening if necessary. Another kind of label, called the “decision label,” would accompany the decision about whether you are hired or denied a job. These labels will tell the job seeker what they can do to improve their chances of being hired in the future. And they will support recourse, allowing the job seeker to contest or correct the decision if, for example, incorrect data was used.
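To make the two label types concrete, here is a minimal sketch of how they might be represented as structured data in Python. All field names are illustrative assumptions, not Stoyanovich’s actual label design.

```python
# Minimal sketch: the two label types described above as structured data.
# All field names are illustrative assumptions, not the actual label design.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PostingLabel:
    """Accompanies a job ad and enables informed consent before applying."""
    qualifications: List[str]        # what the position requires
    data_sources: List[str]          # data the employer will use to screen applicants
    screening_methods: List[str]     # e.g. resume parsing, automated assessments
    features_considered: List[str]   # features each screener looks at
    opt_out_available: bool          # whether an alternative form of screening can be requested


@dataclass
class DecisionLabel:
    """Accompanies a hiring decision and supports improvement and recourse."""
    outcome: str                            # e.g. "advanced to interview", "rejected"
    data_used: List[str]                    # data the decision was based on
    improvement_suggestions: List[str]      # what could improve future chances
    recourse_contact: Optional[str] = None  # where to contest or correct the decision
```

In this reading, a job board would attach a posting label to each ad and render it alongside the listing, while the decision label would be delivered to the applicant together with the outcome.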
Do you see room for more regulatory approaches in that regard?
One of the things the Center for Responsible AI is very proud of, and I am personally very proud of, is that we were big proponents of a law passed in New York City on December 21, 2021, that sets the first precedent for regulation in the domain of algorithmic hiring. The law requires that automated decision-making tools used for hiring and employment be audited for bias. It also requires that applicants be notified, before they apply, that such a tool will be used for screening and which features of their application it will use. The law will come into force in January 2023, giving the vendors of such AI systems and their users a year to figure out how to comply. I see this law as a great first step.
Background
Nutritional labels as inspiration for transparency in ranking algorithms
Automated decision-making (ADM) systems often calculate scores and rankings to present their results. For example, a person’s creditworthiness is often assessed through the automated determination of a score, or applicants for a job are listed by a ranking algorithm according to their supposed qualifications and suitability for a position. Such scores and rankings are known to be unfair, easy to manipulate and not very transparent. Moreover, they are often used in situations for which they were not originally designed – which can lead to inaccurate and problematic results. For this reason, Julia Stoyanovich and colleagues have developed a rating system that provides information on ranking algorithms similar to nutritional labels for food. The Ranking Facts application uses visualizations to create transparency. It shows things such as the decision-making criteria included in a ranking, how they are weighted and how stable and fair the calculations are. The application is intended to also help non-experts evaluate the quality and suitability of a ranking.
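The following is a small illustrative sketch, not the actual Ranking Facts tool: it computes a weighted-sum ranking over made-up candidate data and collects the kind of “ingredients” such a label surfaces, namely the criteria, their weights, and how the composition of the top-k compares to the overall pool.

```python
# Illustrative sketch (not the actual Ranking Facts tool): a weighted-sum
# ranking plus the ingredients a "nutritional label" might surface.
import pandas as pd

# Hypothetical applicant data; column names, values and groups are made up.
candidates = pd.DataFrame({
    "name":       ["A", "B", "C", "D", "E", "F"],
    "experience": [0.9, 0.4, 0.8, 0.6, 0.3, 0.7],
    "test_score": [0.5, 0.9, 0.6, 0.8, 0.7, 0.4],
    "group":      ["m", "f", "m", "f", "f", "m"],
})

# Decision-making criteria and how they are weighted.
weights = {"experience": 0.7, "test_score": 0.3}

candidates["score"] = sum(candidates[c] * w for c, w in weights.items())
ranked = candidates.sort_values("score", ascending=False)

k = 3
label = {
    "criteria_and_weights": weights,
    "top_k_group_share": ranked.head(k)["group"].value_counts(normalize=True).to_dict(),
    "overall_group_share": candidates["group"].value_counts(normalize=True).to_dict(),
}
print(label)  # comparing the two shares hints at how fair the top of the ranking is
```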
Transparency and accountability
Dimension: social sustainability
Criteria: transparency and accountability
Indicator: publicly available information about system implementation
When people have become the subject of an automated decision, they must be informed of that fact. Similarly, relevant information about AI systems must be made publicly available so that the functionality, the decision-making criteria and technical dependability of the system can be verified by independent bodies. The minimum standard is to document the most relevant information regarding the system’s goals, user and usage cases, training and test data, model used, feature-selection processes, inputs, tests, metrics and so on. Such information can be stored in public registers.
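One way such minimum documentation could be recorded in a public register is as a structured entry. The schema and values below are purely illustrative assumptions that mirror the items listed above; they are not a prescribed standard.

```python
# Illustrative sketch of a public-register entry covering the minimum
# documentation items listed above. Schema and values are assumptions.
system_register_entry = {
    "system_name": "example-screening-tool",  # hypothetical system
    "goals": "rank applicants for an open position",
    "users_and_usage_cases": ["HR department pre-screening"],
    "training_and_test_data": "description of the historical applications used",
    "model": "gradient-boosted decision trees",
    "feature_selection_process": "manual review plus correlation screening",
    "inputs": ["resume text", "assessment scores"],
    "tests_and_metrics": {"accuracy": "held-out test set", "fairness": "equalized odds difference"},
}
```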
Non-discrimination and fairness
Dimension: social sustainability
Criteria: non-discrimination and fairness
Indicators: detection of, awareness of and sensitization to fairness and bias
To establish non-discrimination and fairness in the context of AI, organizations that develop or use AI need to raise awareness. The first step is to define fairness on a case-specific basis, and to communicate this definition broadly in the planning and development process. Potential discrimination can be recognized during the development phase of AI systems through impact assessments. There are proven methods for measuring fairness and bias, such as Equalized Odds and Equal Opportunity. These can be used to identify biases in training and input data, as well as in the models, methods and designs, and to make improvements during development. Fairness tests must take into account protected attributes such as ethnicity, skin color, origin, religion, gender, etc. in order to prevent discrimination on the basis of these attributes. The same applies to so-called proxy variables that correlate with the protected attributes.
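As an illustration of the two methods named above, the sketch below computes Equal Opportunity and Equalized Odds gaps from made-up labels, predictions and a protected attribute; a real audit would of course use the actual system’s data and a more careful evaluation protocol.

```python
# Minimal sketch of the two fairness checks named above, computed from
# true labels, predictions and a protected group attribute. Data is made up.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0])   # system decisions
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def rates(y_true, y_pred, mask):
    tpr = y_pred[mask & (y_true == 1)].mean()  # true positive rate within the group
    fpr = y_pred[mask & (y_true == 0)].mean()  # false positive rate within the group
    return tpr, fpr

tpr_a, fpr_a = rates(y_true, y_pred, group == "a")
tpr_b, fpr_b = rates(y_true, y_pred, group == "b")

# Equal Opportunity: true positive rates should match across groups.
print("equal opportunity gap:", abs(tpr_a - tpr_b))
# Equalized Odds: both true and false positive rates should match across groups.
print("equalized odds gap:", max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)))
```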
Data, Responsibly #1: “Mirror, Mirror”
Julia Stoyanovich has published two comic book series in collaboration with Falaah Arif Khan, who is both an artist and a data scientist. One series is called “We Are AI.” It was written in English, has recently been translated into Spanish and will appear in other languages as well. This series accompanies a public education course targeting an adult audience without any background in technology, but with an interest in AI and its social impacts. By teaching through comics, a less standard medium that allows for humor, Stoyanovich and Arif Khan try to make AI and technology ethics more accessible. The second comic book series, called “Data, Responsibly,” is aimed at data science students or enthusiasts and is slightly more technical. Stoyanovich uses this series as part of assigned reading for the responsible data science courses she teaches to graduate and undergraduate students at New York University.
PROF. JULIA STOYANOVICH
Professor of Computer Science & Engineering and Data Science at New York University
Her research focuses on operationalizing fairness, diversity, transparency and data protection in all data lifecycle stages. She is active in AI policy, having served on the New York City Automated Decision Systems Task Force and contributed to New York City regulation of AI systems used in hiring. Stoyanovich teaches responsible AI to data scientists, policy-makers and the general public, and is a co-author of an award-winning comic on this topic. She is a recipient of an NSF CAREER award and a senior member of the ACM.