Interview with Dr. Alex Hanna
Google, Microsoft and Facebook only fund research within existing scientific paradigms that serve the optimization of their business models. That happens directly or indirectly: in the types of papers they put out, and in the funding they give to university researchers, research nonprofits or “AI for Good” projects. Funding guides what problems people work on. They typically don’t fund things that are contrary to their interests, and if they do, it’s in a very limited capacity.
It is part of the problem when it is used to concentrate and consolidate power and to exacerbate existing inequalities. Most of the time that AI is implemented in the Big Tech context, the aim tends to be facilitating recommendation systems, ad targeting or minimizing customer “churn”; in other words, it’s a facilitator for business. AI is also being used in the public sector as a means to minimize the amount of human labor needed for welfare allocation or to identify fraud. But at the same time, it is becoming a tool of surveillance. AI often has the effect of worsening conditions for workers, either by creating a new class of laborers who work for minimal wages to produce data for AI or by optimizing conditions for employers in gig economy settings to the detriment of workers.
People who are impacted by AI and automated decision-making systems need to have a much greater say in where and when these systems can be deployed. We want to begin by including communities in research activities.
If we are going to rethink AI, we will have to rethink what is needed by communities, especially what is needed by marginalized racial, ethnic and gender communities. Some of these tools can be used as a means of taking some of that power back or supporting community decision-making and engagement. Some of the work DAIR is doing points in that direction, for instance, the work that we’ve done on spatial apartheid and on how AI can support processes of desegregation in South Africa. Another thing that we’re looking into is how we can use AI and natural language processing tools to identify abusive social media accounts run by government actors. We’re trying to recalibrate how AI is used and to find a way that doesn’t concentrate power but instead redistributes it.