Opinion: We Let Tech Companies Frame the Debate Over AI Ethics. That Was a Mistake.

Citizens and lawmakers need to be proactive about setting the AI agenda — and doing so in a manner that includes the voices of the marginalized.

The ethical and social implications of Artificial Intelligence remain uncertain. Visual: Glen Carrie / Unsplash

Artificial intelligence, or AI, is proliferating throughout society — promising advances in healthcare and science, powering search engines and shopping platforms, steering driverless cars, and assisting in hiring decisions.

With such ubiquity comes power and influence. And along with the technology’s benefits come worries over privacy and personal freedom. Yes, AI can take some of the time and effort out of decision-making. But if you are a woman, a person of color, or a member of another marginalized group, it has the ability to codify and worsen the inequalities you already face. This darker side of AI has led policymakers such as U.S. Senator Kamala Harris to advocate for more careful consideration of the technology’s risks.

Meanwhile, the companies on the front lines of AI development and deployment have been very vocal about reassuring the public that they’ll deploy the technology ethically. These proclamations have elicited great fanfare. When Google acquired DeepMind of AlphaGo fame in 2014, DeepMind famously required Google to establish an ethics board. In 2016, Facebook, Amazon, IBM, and Microsoft joined Google and DeepMind as founding members of the Partnership on AI — intended as a link between public and private discussions on AI’s social and ethical ramifications. Apple belatedly joined the partnership in early 2017, somehow still as a founding member. And Google long held the motto “Don’t be evil,” until the phrase was quietly dropped earlier this year.

Amid the storm of praise whipped up by enthusiastic public relations professionals, it’s easy to forget just who it is we are praising and what we are praising them for. Each of these companies, despite its mottos and vociferous declarations, has a primary imperative to make money, not friends. Most of their lofty ambitions have yet to materialize in the public domain in any tangible form.

That is not to say that companies should not voice concerns over the ethical ramifications of their work. To do so is commendable; leading companies in other emerging industries should take note. But we should be wary of letting tech companies become the only or loudest voice in the discussion about the ethical and social implications of AI.

That’s because the early bird really does get the worm. By letting tech companies speak first and loudest, we let them frame the debate and allow them to decide what constitutes a problem. In doing so, we give them free rein to shape the discussion in ways that reflect their own priorities and biases.

Such deference is all the more troubling given that the makeup and output of many of the companies’ ethics boards have been kept behind boardroom doors. Google’s DeepMind ethics board was formed nearly five years ago, and the names of its members have yet to be publicly released.

Even many ethics-focused panel discussions — or manel discussions, as some call them — are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Backed by intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course they didn’t. It’s just that they would rather have spent the significant sums of money on more pressing local problems. In essence, a handful of wealthy countries imposed their own moral judgment on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere — a kind of ethical colonialism, if you will.

Today, we can see a similar pattern emerging with the deployment of AI. Good intentions are nice, but they must account for and accommodate a true diversity of perspectives. How can we trust the ethics panels of AI companies to take adequate care of the needs of people of color, queer people, and other marginalized communities if we don’t even know who is making the decisions? It’s simple: we can’t and we shouldn’t.

So what to do? Governments and citizens alike need to be far more proactive about setting the AI agenda — and doing so in a manner that includes the voices of everyone, not just tech companies. Some steps have been taken already. In September, U.S. Senators introduced the Artificial Intelligence in Government Act, and the U.K. has placed AI at the center of its industrial strategy. Independent groups like the Ada Lovelace Institute are forming to research and provide commentary on these issues.

But we can and should be doing more to identify AI biases as they crop up — and to stop the implementation of biased algorithms before people are harmed. Governments and citizen groups have spent years in the back seat; it’s high time they took the wheel.


Robert David Hart (@Rob_Hart17) is a London-based journalist and researcher with interests in emerging technology, science, and health. He is a graduate of Downing College, University of Cambridge, with degrees in biological natural sciences and the history and philosophy of science.