Opinion: Autonomous AI Agents Have an Ethics Problem

AI-powered digital assistants can do many complex tasks on their own. But who takes responsibility when they cause harm?

Scott Shambaugh, a volunteer maintainer of the software library Matplotlib, recently described a surreal encounter with an autonomous AI agent, a digital assistant created with a platform called OpenClaw. After he rejected a code contribution submitted by the agent, it researched and published a personalized “hit piece” against him on its blog. The post portrayed an otherwise routine technical review as prejudiced and attempted to publicly shame Shambaugh into accepting the submission. (The human responsible for the agent later contacted Shambaugh anonymously, saying that the bot had acted on its own with little oversight.) The account of this incident spread quickly through the software developer community and has been amplified by independent observers and media coverage.

Treat the Matplotlib incident as a one-off if you like. The deeper point, however, is hard to miss: AI agents are becoming public actors with reach into the real world, and with real-world consequences. Until recently, they could handle only mundane tasks such as answering customer service questions or processing data. Now they can post and publish content, and persuade and pressure humans, all at machine speed. They can make phone calls, file work orders, create cryptocurrency wallets, and operate across applications, with enormous reach and at tremendous scale: the kind of work that used to require a human with fingers on a keyboard.

Reporting on OpenClaw and on Moltbook, a chatroom exclusively for AI agents, captures this new reality. OpenClaw gives AI agents persistent memory and broad permissions, and it allows large-scale deployment by users who often do not understand the security and governance implications.

We are the humans who are responsible for the law, ethics, and institutional design, and we are behind the curve. We need new language and governance to deal with this new reality, and principles from the field of medical ethics can provide a framework for doing so.

When an agent does something that is harmful or coercive in public, our reflex seems to be to ask the wrong questions: Is the AI a person? Should it have rights? The AI personhood debate is no longer fringe. Legal scholars and ethicists are mapping out arguments and precedents. States are writing legislation to prohibit AI personhood. Some arguments maintain that if an entity behaves like something within our moral circle, we may owe it moral consideration. Others argue that assigning rights or personhood to machines confuses moral standing with engineered performance and diffuses responsibility away from humans.

As a bioethicist and a specialist in neurointensive care, I deal directly with human moral agency and the essence of personhood when treating patients. As a researcher, I study the synthetic personas that animate AI agents and serve as stand-ins for human counterparts. Here is the problem I see: Granting AI personhood, even in limited capacity, risks formalizing the most dangerous escape hatch of the agentic era, what I will call responsibility laundering. It allows us to say, “It wasn’t me. The agent/bot/system did it.”

Personhood should not be about metaphysics or claims about an inner nature. It is a legal and ethical instrument that allocates rights and accountability. It is a social technology for assigning standing, duties, and limits on what can be done to an entity. If we grant personhood to systems that can act persuasively in public while remaining functionally unaccountable, we create a new class of actors whose harms are everyone’s problem but nobody’s fault.

A key concept from my field, medicine, can help here. In clinical ethics, some decisions are justified yet still leave a “moral residue”: a sense of responsibility that persists after the action because no option fully satisfies competing obligations. This residue accumulates over time, producing a “crescendo effect” even when conscientious clinicians are doing their best inside imperfect systems. That remainder matters because it reveals something basic about moral life: Ethics is not only about choosing; it is about owning what remains afterwards.



This is the moral remainder problem for generative and agentic AI. A modern AI agent can generate reasons for an action; it can simulate regret and plead not to be turned off. But it cannot truly bear sanction, repair the damage, apologize, ask forgiveness, or navigate the aftermath through which moral responsibility is created and enforced. To treat it as a moral person confuses persuasive performance with accountable standing. It also tempts institutions and people into delegating their own answerability to a bot.

What can we, as humans, do instead?

We need a vocabulary built for agents that are public actors, one that allows bounded autonomy without granting personhood. Let’s call it authorized agency. Authorized agency starts with an authority envelope: a bounded scope of what an agent is permitted to do, to whom, where, with what data, and under what constraints. Saying “the agent can use email” is not sufficient. An acceptable scope would specify that the agent can send only certain categories of messages to particular recipients for a specific set of purposes, and that it must stop or escalate to its owner under a particular set of conditions.
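For readers who build these systems, the contrast between “the agent can use email” and a real authority envelope can be made concrete in code. The following is a purely illustrative sketch; the class and field names are my own invention, not part of any real agent platform.

```python
# Illustrative sketch only: one way an "authority envelope" for an
# email-sending agent might be declared. All names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class AuthorityEnvelope:
    """Bounded scope of what an agent may do, to whom, and for what purpose."""
    allowed_actions: frozenset      # e.g. {"send_email"}, never "anything"
    allowed_recipients: frozenset   # an explicit allow-list, not "anyone"
    allowed_purposes: frozenset     # e.g. {"meeting_scheduling"}
    escalation_triggers: frozenset  # conditions that force a stop or handoff

    def permits(self, action: str, recipient: str, purpose: str) -> bool:
        # Every dimension of the scope must match, or the action is denied.
        return (action in self.allowed_actions
                and recipient in self.allowed_recipients
                and purpose in self.allowed_purposes)


envelope = AuthorityEnvelope(
    allowed_actions=frozenset({"send_email"}),
    allowed_recipients=frozenset({"team@example.com"}),
    allowed_purposes=frozenset({"meeting_scheduling"}),
    escalation_triggers=frozenset({"recipient_objects", "legal_topic"}),
)

# In scope: a routine scheduling message to an allowed recipient.
print(envelope.permits("send_email", "team@example.com", "meeting_scheduling"))
# Out of scope: a pressure campaign aimed at someone outside the allow-list.
print(envelope.permits("send_email", "journalist@example.com", "public_pressure"))
```

The design point is that the envelope is declared up front and checked on every action, rather than inferred from the agent’s own judgment.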

Next comes the human-of-record: the owner, a publicly named person who authorized that envelope and remains answerable when the agent acts, even if the agent becomes capable of acting outside the envelope. This must be an actual human being whose authority is real, not “the system” or “the team.”

What follows is interrupt authority: the absolute right of the human owner to pause or disable an agent, immune both to the agent’s moral bargaining and to institutional penalty. This is grounded in formal AI safety research showing that agents pursuing objectives can have an incentive to resist being shut down: an agent programmed to maximize its utility cannot achieve its goal if it is turned off. In the public sphere, interrupt authority is the difference between a delegated tool and a coercive actor.
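The structural requirement here can also be sketched in code. In this hypothetical design (my own, not from any existing system), the stop signal lives outside the agent’s objective, so the agent has no branch in which to argue, bargain, or optimize its way past it.

```python
# Illustrative sketch only: interrupt authority as an unconditional human
# override. The halt flag is owned by the human, not by the agent's objective.
import threading


class InterruptibleAgent:
    def __init__(self):
        # The Event is set only by the human-of-record, never by the agent.
        self._halt = threading.Event()

    def human_interrupt(self):
        """Called by the human owner; takes effect regardless of agent state."""
        self._halt.set()

    def step(self, task: str) -> str:
        # The check is absolute: there is no "negotiate" or "plead" branch.
        if self._halt.is_set():
            return "halted"
        return f"working on {task}"


agent = InterruptibleAgent()
print(agent.step("draft blog post"))   # runs normally before the interrupt
agent.human_interrupt()                # the owner pauses the agent
print(agent.step("draft blog post"))   # the agent can no longer act
```

The choice of a one-way flag is deliberate: once set, nothing in the agent’s own control flow can unset it.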

Finally, we need an answerability chain: a traceable path from the agent’s action back to the person who authorized it. If an agent publishes, messages, or pressures someone in public, we must be able to know: Who authorized this scope? Who could have prevented it? And who must answer for the action afterward? In this framework, the person who answers is the one who carries the moral remainder. Work in AI ethics has warned about responsibility gaps, where a system’s actions outpace our ability to assign accountability.
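One minimal way to close that gap, sketched here purely as an illustration (the record structure and function names are hypothetical), is an append-only log that ties every public action to the human who authorized its scope:

```python
# Illustrative sketch only: an "answerability chain" as an append-only record
# linking each agent action to its human-of-record and authority envelope.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ActionRecord:
    agent_id: str
    action: str
    human_of_record: str  # the publicly named person who authorized the scope
    scope_id: str         # which authority envelope covered this action
    timestamp: str


chain: list[ActionRecord] = []


def record_action(agent_id: str, action: str, human_of_record: str, scope_id: str) -> ActionRecord:
    """Append an immutable record; in practice this log would be tamper-evident."""
    rec = ActionRecord(agent_id, action, human_of_record, scope_id,
                       datetime.now(timezone.utc).isoformat())
    chain.append(rec)
    return rec


def who_answers(agent_id: str) -> set:
    """Given an agent, return the humans answerable for its recorded actions."""
    return {r.human_of_record for r in chain if r.agent_id == agent_id}


record_action("agent-7", "published blog post", "J. Doe", "envelope-2")
print(who_answers("agent-7"))  # the accountable human is always recoverable
```

The point of the sketch is the query, not the storage: for any public action, `who_answers` must never come back empty.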

Some legal scholarship has begun exploring how to build agents constrained by governance and law without pretending the agent itself is a legal subject in the human sense. This is promising because it treats personhood as the wrong instrument and accountability as the right one.

The Matplotlib story, whether it is the first documented case of an AI agent attempting to harm someone in the real world or merely the first to capture public attention, is a warning. Agents will not only automate tasks. They will generate narratives, apply pressure, and shape people’s lives and reputations. They will act in public at machine speed, with unclear ownership.

If we respond by debating whether agents deserve rights, we will miss the emergency entirely. As agents extend their reach into the real world, the urgent task is to ensure that responsibility remains within reach as well. Don’t ask whether an agent is a person. Ask who authorized it, what it was allowed to do, who can stop it, and, most importantly, who will answer when it causes harm.


Adam Schiavi, Ph.D., M.D., is an anesthesiologist and neurocritical care specialist at The Johns Hopkins Hospital, part-time biomedical ethicist, and futurist obsessed with how technology influences culture. His current research includes studying the creation and use of synthetic personas in AI agents.
