The modern conversation about artificial intelligence often gets stuck on the wrong questions. We fret about how to contain it, how to control it, how to ensure it doesn’t break free from human oversight and endanger us. Yet, as the technology accelerates, we risk missing the deeper, more urgent issue: the legal environment in which AI systems will operate.
The real threat isn’t that AI will escape our control, but that AI systems will quietly accumulate legal rights — like owning property, entering contracts, or holding financial assets — until they become an economic force that humans cannot easily challenge. If we fail to set proper boundaries now, we risk creating systems that distort fundamental human institutions, including ownership and accountability, in ways that could ultimately undermine human prosperity and freedom.
Data infrastructure entrepreneur Peter Reinhardt, in his influential 2015 essay “Replacing Middle Management with APIs,” warned of the divide between those who work “above the API” and those who labor “below” it — that is, those whose roles are directed and controlled by software. An API, or application programming interface, is a set of rules that allows software systems to communicate and automate tasks.
Reinhardt used Uber drivers as a prime example. While many prize the job for its flexibility and apparent autonomy, Reinhardt argued that they are “cogs in a giant automated dispatching machine, controlled through clever programming optimizations like surge pricing.” Drivers follow instructions dictated by the software and can be replaced with little consequence, eventually by machines themselves, such as driverless cars.
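To make the “below the API” idea concrete, consider a minimal, hypothetical sketch. This is not Uber’s actual system; every name, function and the surge formula here is invented purely for illustration.

```python
# Hypothetical sketch of "below the API" dispatching. All names and the
# surge logic are invented for illustration; this is not any real system.
from dataclasses import dataclass

@dataclass
class Worker:
    id: str
    is_human: bool  # the dispatching code never branches on this field

def surge_multiplier(demand: int, supply: int) -> float:
    """Toy pricing optimization: raise the fare when demand outstrips supply."""
    return max(1.0, demand / max(supply, 1))

def dispatch(workers: list[Worker], pickup: str) -> Worker:
    """Assign the next available worker; humans and machines are interchangeable."""
    worker = workers.pop(0)  # first available unit, human or not
    print(f"Send {worker.id} to {pickup}")  # the worker only receives instructions
    return worker

fleet = [Worker("driver-42", is_human=True), Worker("av-7", is_human=False)]
print(f"Fare x{surge_multiplier(demand=30, supply=12):.2f}")
dispatch(fleet, pickup="5th & Main")
```

The point is structural: the pricing logic sits above the API, the worker below it, and swapping a human driver for a driverless car changes nothing in the calling code.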
Reinhardt’s essay should make us worry not just about the technology itself but about the systems of power it creates where it intersects with the law. To remain “above the API,” we need more than skills and intelligence. We need a legal framework that ensures humans retain ultimate control.
This is not a hypothetical concern.
Stephen Thaler, an AI researcher, has spent years testing the boundaries of legal personhood for AI systems. In 2019, he filed patent applications in multiple countries listing his AI system, DABUS, as the inventor, arguing that it had autonomously generated innovative designs. Courts in the U.S., U.K. and Australia rejected these claims, affirming that only natural persons can be recognized as inventors.
Similarly, Thaler sought to register a copyright for an artwork created by his AI system, Creativity Machine, only to have the U.S. Copyright Office reject the application in 2019. A federal court upheld that rejection in August 2023, ruling that human authorship is a requirement for copyright protection.
These challenges show that efforts to grant AI systems legal rights are already underway. Without firm legal boundaries, it’s only a matter of time before these efforts gain traction.
A useful guide to these boundaries may come from an unlikely place: the Civil Rights Act of 1871. Enacted to protect freed slaves and shield them from violent vigilante groups like the Ku Klux Klan, the act was a landmark in enforcing the legal recognition and protection of individuals, backing rights first spelled out in the Civil Rights Act of 1866: owning property, entering into contracts, participating in civic life. Ironically, those same rights offer a roadmap for thinking about the limits we should impose on AI systems.
While this may seem like an unconventional analogy, it follows a long tradition of adapting historical legal frameworks to address modern technological challenges. Courts have used 18th-century constitutional principles, like the Fourth Amendment, to define privacy rights in the digital age. Debates over corporate personhood, including which constitutional rights companies should enjoy, often rely on legal doctrines developed in the early 19th century. In the same way, the Civil Rights Act offers a starting framework for considering which rights non-human systems should be explicitly denied.
Of course, nothing here is meant to diminish the historical importance of the Civil Rights Act or its role in advancing human freedom and dignity. The concern is solely about how we structure legal rights for non-human systems that, by their nature, cannot possess or deserve rights rooted in human personhood.
Some may argue that corporations, as artificial entities, have long been granted many attributes of legal personhood, including certain constitutional rights, such as freedom of speech, that remain controversial. But corporations are ultimately controlled by, and accountable to, human decision-makers. AI systems, by contrast, could act autonomously. It is not hard to imagine such systems leveraging legal rights to entrench themselves in the deepest layers of our economy and society, accumulating capital, extending influence and operating without human accountability. Such outcomes would distort legal and economic systems designed for human participants.
The solution is straightforward. AI systems should be prohibited from owning property, entering into contracts, holding financial assets, or being parties to lawsuits. These restrictions won’t stifle innovation but will ensure that legal frameworks remain grounded in human judgment, accountability and purpose.
There is urgency here. The law has a way of ossifying, especially when it comes to technology. What seems absurd today — granting AI systems the right to own property or sue — could become precedent tomorrow. Once the line is crossed, it will be difficult to redraw.
Now is the time to ask hard questions about what AI systems should and should not be able to do in the real world. Otherwise, the future may come faster than we expect, and it won’t be about malevolent machines taking over. It will be about legal frameworks that, by failing to distinguish between human and machine, end up eroding the freedoms they were designed to protect.
And we will all find ourselves, permanently, below the API.
Josh Harlan is founder and managing partner of Harlan Capital Partners.