
How can we protect freedom from the threat of corporate AI?

Date: December 23, 2025.

Eight years ago, Russian President Vladimir Putin suggested that whoever masters AI “will be the ruler of the world.”

Since then, investments in the technology have skyrocketed, with US tech giants (Microsoft, Google, Amazon, Meta) spending more than $320 billion in 2025 alone.

Not surprisingly, the race for AI dominance has also generated significant pushback.

There are growing concerns about intelligent machines displacing human labor or introducing new safety risks, such as by empowering terrorists, hackers, and other bad actors.

And what if AIs were to elude human control altogether, perhaps vanquishing us in their own quest for dominance?

But there is a more immediate danger: increasingly powerful but opaque AI algorithms are threatening freedom itself.

The more we let machines do our thinking for us, the less capable we will be of meeting the challenges that self-governance presents.

The threat to freedom is twofold. On one hand, autocracies like Russia and China are already deploying AI for mass surveillance and increasingly sophisticated forms of repression, cracking down not only on dissent but on any source of information that might foment it.

On the other hand, private corporations, particularly multinationals with access to massive amounts of capital and data, are threatening human agency by integrating AI into their products and systems.

The purpose is to maximize profit, which is not necessarily conducive to the public good (as the dire social, political, and mental-health effects of social media show).

AI confronts liberal democracies

AI confronts liberal democracies with an existential question. If they remain under the control of the private sector, how (paraphrasing Abraham Lincoln) will government of, by, and for the people not perish from the earth?

The public needs to understand that the meaningful exercise of freedom depends on defending human agency from incursions by machines designed to shape thinking and feeling in ways that favor corporate, rather than human, flourishing.

This threat is not merely hypothetical. In a recent study involving almost 77,000 people who used AI models to discuss political issues, chatbots designed for persuasion were found to be up to 51% more effective than those that had not been trained in this way.

In another recent study (conducted in Canada and Poland), roughly one in ten voters told researchers that conversations with AI chatbots persuaded them to shift from not supporting particular candidates to supporting them.

Pervasive algorithms

In free societies like the United States, corporations’ ability to monitor and influence behavior on a massive scale has benefited from traditional legal constraints on state regulation of the marketplace, including the marketplace of opinions and ideas.

The operative assumption has long been that, absent a significant threat of imminent violence, putatively harmful words and images are best met by more words and images aimed at countering their effects.

But this familiar free-speech doctrine is ill suited to a digital marketplace shaped by pervasive algorithms that covertly function as AI influencers.

Users of online services may think they are getting what they want – based, for example, on previous viewing choices or past purchases.

But the extensive measures by which algorithms “nudge” users toward what a given corporate platform wants them to want remain obscure, buried in the depths of proprietary code.

As a result, not only is “counter speech” unlikely to break through programmed barriers, but the very perception of – and felt need to counter – harm is being squelched at the source.

Threats to freedom in the digital age

A similar distortion of free-speech doctrine is evident in Section 230 of the Communications Decency Act of 1996, which protects digital platform owners (including the most popular social-media sites) from liability for harms that may arise from online content.

This corporate-friendly policy assumes that all such content is user-generated – just people exchanging ideas and expressing their preferences. But Meta, TikTok, X, and the rest hardly offer a neutral platform for users.

Their business rests on a simple premise: monetizing users' attention is immensely lucrative.

And now, corporations seek to increase profits not only by marketing various AI services but also by deploying them to maximize the time users spend online, thereby increasing their exposure to targeted advertising.

If holding users’ attention means covertly serving up certain kinds of information and blocking others, or offering AI-generated flattery and ill-considered encouragement, so be it.

Governments betray their obligation to protect the meaningful exercise of freedom when they fail to regulate online marketing that is designed to manipulate preferences surreptitiously.

Like the calculated falsehoods that constitute fraud when commercial products or services are at issue, deliberately hidden or disguised corporate behavioral manipulation for profit falls outside what the US Supreme Court regards as “the fruitful exercise of the right of free speech.”

Law and public policy need to catch up to contemporary conditions and the threats corporate AI poses to freedom in the digital age.

If AI is indeed becoming powerful enough to rule the world, governments in free societies must make sure that it serves – or, at the very least, does not disserve – the public good.

Richard K. Sherwin is a Professor Emeritus of Law at New York Law School.

Source: Project Syndicate. Photo: Shutterstock.