Prof. Julia Stoyanovich
You are the founder of the Center for Responsible AI. The Center’s claim is “We build the future in which responsible AI is the only AI.” How close are we to that goal?
Is responsible AI synonymous with AI today? Unfortunately, not yet. We are still quite far from that. It will take a combination of approaches and solutions to get there, and some of the solutions will be technical. Better algorithms built by people who are aware of the issues will help. But I think at this point, the main gap is not really algorithmic.
What else is needed?
We don’t have a shared understanding of the role AI should play in society. And in order to reach that understanding, we need to get people to understand what AI can and cannot do. Among the public at large, there is a lot of magical thinking about AI. This hands a lot of power to the people who are developing this technology – perhaps even power they don’t really want, because they are not necessarily trained in law or the social sciences. We shouldn’t put technologists in a position where they have to adjudicate some of these issues with AI through code.
Do we need a broader discussion on what AI can do for us?
We are running a course right now called “We Are AI.” It’s a public education course that we offer through the Queens Public Library in New York City, accessible to everyone, irrespective of their background, math knowledge or programming experience. The course is free of charge; we even give people gift cards for attending. The message is: “AI is what we want it to be.” My colleagues with whom I developed this course, as well as the students, say: “But this is not the case right now. We don’t have control over AI.” Indeed, today we are, for the most part, subjected to these systems. But by making the claim that “AI is what we want it to be,” we assert ourselves. We should really be speaking in the present tense and very forcefully about what we want the world to look like. And by doing that, we begin moving in the right direction.
One of the constituencies I’ve been thinking about when developing transparency and interpretability mechanisms is regular members of the public. I’ll give you an example: There are lots of algorithmic systems being used at various stages of the hiring process. We all look for jobs at some point in our lives. With the degree of automation that is already present in that industry, every one of us has either already been affected or will be affected by AI systems when looking for a job. Access to jobs is access to an essential economic opportunity; it’s not optional. This is a crucial domain in which we should be figuring out how to regulate the use of AI. We need to find ways to educate job seekers about the systems that they are subjected to, and to explain to employers – companies that are buying such tools – what they are paying for. Regarding the employment sector, we have to know exactly what the algorithms do. Do they look for candidates who may not even know that they’re a good match? Do they match resumes to positions? Do they determine the level of compensation? All of these possibilities have different requirements and different error margins, and different tools are used for each of them. So you have to be very specific about the domain while speaking to the stakeholders. This is key.
What might this look like in practice?
I’ve been developing “nutritional labels” for job seekers, to enable different kinds of interaction between a job seeker and the hiring system. One specific label would accompany a job ad; I call it the “posting label.” These labels should document the position’s qualifications so that you can tell whether you’re qualified, what data the potential employer will use to screen you, what screening methods they will use, whether you can opt out at any point, and which features a particular screener will look at. This kind of label is going to enable informed consent on the part of the job seeker, helping them decide whether to apply for the job, and to request accommodations or an alternative form of screening if necessary. Another kind of label, called the “decision label,” would accompany the decision to hire or reject a candidate. These labels will tell the job seeker what they can do to improve their chances of being hired in the future. And they will support recourse, allowing the job seeker to contest or correct the decision if, for example, incorrect data was used.
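To make the idea concrete, here is a minimal sketch of what machine-readable versions of the two labels might contain, based only on the fields described above. The class and field names are illustrative assumptions, not a specification published by Stoyanovich or the Center for Responsible AI.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PostingLabel:
    """Hypothetical 'nutritional label' attached to a job posting."""
    qualifications: List[str]            # what the position requires, so applicants can self-assess
    data_collected: List[str]            # data the employer will use to screen applicants
    screening_methods: List[str]         # e.g., resume parser, automated video scoring
    features_considered: List[str]       # features a particular screener will look at
    opt_out_available: bool              # whether an alternative form of screening can be requested
    accommodation_contact: Optional[str] = None  # whom to ask for accommodations

@dataclass
class DecisionLabel:
    """Hypothetical label accompanying a hiring decision."""
    outcome: str                         # e.g., "advanced" or "rejected"
    data_used: List[str]                 # data that actually informed the decision
    improvement_guidance: List[str]      # how the applicant could improve future chances
    recourse_contact: str                # where to contest or correct the decision
    correctable_fields: List[str] = field(default_factory=list)  # data the applicant can dispute
```

The split into two labels mirrors the two moments of interaction the interview describes: informed consent before applying, and explanation plus recourse after a decision.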
Do you see room for more regulatory approaches in that regard?
One of the things the Center for Responsible AI is very proud of, and I am personally very proud of, is that we were big proponents of a law that was passed in New York City on December 21, 2021, that sets the first precedent for regulation in the domain of algorithmic hiring. The law requires that automated decision-making tools used for hiring and employment be audited for bias. And it also requires that applicants be notified before they apply that such a tool will be used for screening, and also what features of their application it will use. The law will come into force in January 2023, giving the vendors of such AI systems and their users a year to figure out how to comply. I see this law as a great first step.
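The interview does not spell out how such a bias audit is carried out. One common way audits of hiring tools are operationalized is by comparing selection rates across demographic groups; the sketch below assumes that approach, and the 0.8 cutoff is the familiar “four-fifths rule,” not a threshold quoted from the New York City law.

```python
from collections import defaultdict

def impact_ratios(records, threshold=0.8):
    """Selection rate per group, its ratio to the highest-rate group, and whether it meets the threshold.

    `records` is an iterable of (group, selected) pairs, e.g. ("group_a", True).
    An actual audit under the NYC law would follow whatever methodology its rules prescribe.
    """
    counts = defaultdict(lambda: [0, 0])            # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items() if total}
    best = max(rates.values())
    return {g: (rate, rate / best, rate / best >= threshold) for g, rate in rates.items()}

# Example: group_b is selected at 20% vs. 50% for group_a, an impact ratio of 0.4,
# which falls below the 0.8 threshold in this illustration.
audit = impact_ratios([("group_a", True)] * 5 + [("group_a", False)] * 5 +
                      [("group_b", True)] * 2 + [("group_b", False)] * 8)
print(audit)
```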