Opinion: To Ingrain AI Ethics, We Should Get Creative About Copyrights

AI is evolving too fast for existing regulatory frameworks to keep pace. Intellectual property law may hold a solution.

Artificial Intelligence, or AI, presents both awesome possibilities and terrifying potential risks. It could unlock the power of data in ways that make society more efficient, productive, and decisive, but it has also been described as an “existential threat” to humanity, and potentially a “high-tech pathway to discrimination.” Concerning applications of AI are already arising: It has been deployed in discriminatory predictive policing, biased employment screening, autonomous weapons design, political manipulation, artistic theft, deepfake impersonation, and other dubious pursuits.

Ideally, AI would be regulated in a way that allows the benefits of the technology to flourish while minimizing its risks. But that’s no easy task, in part because AI is evolving faster than traditional legislative processes can respond. Laws written to regulate today’s AI are destined to become obsolete tomorrow. New AI applications crop up every day, and it is nearly impossible to predict which will be harmful and should be restricted, and which will be beneficial and should be enabled.

In the U.S., for instance, AI is primarily regulated through data privacy laws that require companies to disclose data practices, obtain consent before collecting user data, and scrub data of information that might reveal a user’s identity, among other things. But consent is a murky concept when applied to models so sophisticated and opaque that even the developers themselves often don’t fully understand the risks. Likewise, deidentification of individual users does little to protect against the potential for AI models to discriminate or cause harm to the groups to which those individuals belong.

A few recent efforts, such as the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, have sought to strengthen ethical safeguards by advocating for policies like stronger safety measures and protections against algorithmic discrimination. While important, these measures are non-binding and voluntary, with no penalties for non-compliance.

As researchers who study issues at the intersection of technology, policy, and law, we believe society needs a new, more nimble approach to ensuring the ethical development and use of AI. As we argued in a recent paper with Megan Doerr of Sage Bionetworks, ethical AI standards must be able to rapidly adapt to changing AI technology and the changing world. And they must be enforceable. The key to achieving both goals may come from an unexpected source: intellectual property rights.

To understand why, consider the Creative Commons. Established a little over two decades ago, Creative Commons is a licensing system that allows any person who creates a work — a photo, song, book, or otherwise — to give blanket permission for others to use the work, under certain conditions. A photographer might license a picture to be reprinted, but only for non-commercial purposes; a musician might grant rights to reproduce a song they wrote, on the condition that it be clearly attributed. Creative Commons is an example of what’s known as copyleft licensing — in which the default is to permit some reuse of a work, but under a licensing agreement that follows the work if it is used to create something new. One of the key advantages of copyleft licensing is that it allows private organizations to set, adapt, and revise the terms of use of the things they create — and to respond to new applications and risks more quickly than traditional laws could.

Applied to AI, the copyleft approach would allow developers of new models and training datasets to make their work freely available, but under conditions that require users to follow specific ethical guidelines. For example, companies that use AI could be required to assess and disclose the risks of their AI applications, clearly label deepfakes and other AI-generated video, or report discovered instances of bias or unfairness so that AI tools can be continuously improved and future harm can be prevented.

Of course, restrictions on a copyleft license mean little if they aren’t vigorously enforced. So in addition to the standard copyleft licensing scheme, we propose establishing a quasi-governmental regulatory body that would, in effect, pool the enforcement rights granted under each license. This body would have the authority, among other things, to fine offenders. In this way, the regulator would resemble the “patent trolls” that buy up intellectual property rights and then extract profits by suing alleged infringers. Except in this case, the regulator, empowered by and accountable to the community of AI developers, would be a “troll for good” — a watchdog incentivized to ensure that AI developers and users operate within ethical bounds.

As we see it, the enforcement body would do more than just collect fines: It could issue guidance about appropriate and inappropriate AI uses; issue warnings or recalls of potentially harmful AI models and data circulating in the market; facilitate the adoption of new ethical standards and guidelines; and incentivize whistleblowing or self-reporting. Because it would not rely on traditional laws, the regulatory body would be able to adapt as AI evolves, ethical norms change, and new threats emerge. In our paper, we dubbed the approach Copyleft AI with Trusted Enforcement, or CAITE.

There are key details that would need to be ironed out to make CAITE work. For instance, who would make up the enforcement body, and where would it reside? Those remain open questions. And it will be up to a community of relevant parties — including people involved with and impacted by AI — to decide what constitutes appropriate ethical standards, and to revise those standards as new issues arise. These are hard questions that will require careful, collaborative problem-solving.

But the advantage of a CAITE-like model is that it would provide a much-needed infrastructure for encouraging the widespread adoption and enforcement of ethical norms, and it would provide an incentive for AI developers to actively participate in that process. The more developers that opt to license their models and data under such a system, the more difficult it would become for holdout companies to survive outside of it, since they would be forgoing a bounty of shared AI resources.

The regulation of AI will likely require not one but many approaches working in concert. We envision CAITE potentially complementing traditional regulatory strategies, perhaps with traditional government regulators using CAITE compliance as a factor in their own enforcement decisions.

What’s clear, however, is that the risk of doing nothing is tremendous. AI is rapidly evolving and disrupting existing systems and structures in unpredictable ways. We need disruptive innovation in AI policy perhaps even more than we need disruption in the technology itself — and AI creators and users must be willing participants in this endeavor. Efforts to grapple with the ethical, legal, social, and policy issues around AI must be viewed not as a luxury but as a necessity, and as an integral part of AI design. Otherwise, we run the risk of letting industry set the terms of AI’s future, and we leave individuals, groups, and even our very democracy vulnerable to its whims.


Cason Schmit, JD, is an assistant professor of public health at the Texas A&M University School of Public Health, where he researches legal and ethical issues relating to health and data.

Jennifer Wagner, JD, Ph.D., is an assistant professor of law, policy, and engineering at Penn State University, where she conducts ethical, legal, and social implications (ELSI) research related to genetic and digital health technologies.