The Illusion of Control: Deregulation, Legal Loopholes, and the Rise of AI
Bioneers | Published: July 3, 2025
The technologies shaping our future aren’t arriving in a vacuum—they’re following a well-worn path laid by industry influence, regulatory retreat, and legal systems designed to serve private power.
In this third installment of our series on AI’s hidden costs, environmental lawyer and longtime activist Claire Cummings traces the roots of today’s AI boom back to the biotech battles of the 1970s, the rise of deregulation under Reagan, and the legal frameworks that continue to prioritize profit over people. Drawing from decades of experience confronting unchecked corporate power, Cummings warns that the same forces that once enabled genetically engineered crops to flood the market are now steering the future of artificial intelligence—with consequences that go far beyond code.
Read to the end to access the other three essays in this series.
CLAIRE CUMMINGS: For more than 30 years, I’ve worked at the intersection of law, journalism, and activism, focused in large part on biotechnology and its growing influence on agriculture. That experience has shaped how I understand the deeper forces reshaping our legal systems, our environment, and our humanity.

Over the past five decades, the legal and regulatory systems meant to protect our privacy, health, and environment have been steadily dismantled. Rights we once took for granted have been quietly eroded, often in the name of innovation or efficiency.
Let me take you back to 1975, to a place called Asilomar, a conference center in Pacific Grove, California. That year, scientists developing recombinant DNA technology, splicing genes from a cancer-causing virus into E. coli, recognized the risks. What if this technology got out into the world? So they held a conference, but in the end, they chose to self-regulate. They didn’t want government oversight. That decision still shapes our failure to adequately regulate technologies today.
As a result, this work has continued largely without external checks: scientific breakthroughs are rapidly deployed worldwide without meaningful safeguards, and many of these applications remain, in effect, uncontrolled experiments.
Just after Asilomar, Ronald Reagan launched his presidential campaign. He ran on a platform of deregulation and won, later distilling that ethos in his first inaugural address with the now-famous line: “Government is not the solution to our problem; government is the problem.”
In 1986, Reagan’s vice president, George H. W. Bush, invited four Monsanto executives to the White House. Together, they crafted a plan to support biotechnology with minimal interference. That plan was formalized as the “Coordinated Framework,” issued that same year and carried forward when Bush himself became president. It gave industry everything it wanted: no new laws, no new oversight, just a patchwork of existing regulations never meant to handle genetic engineering.
Sound familiar?
Today, we’re facing another wave of powerful, poorly regulated technology: AI. And the same pattern is repeating. Scientific-sounding concepts are invented to make it all seem safe. The review process is largely voluntary, and the government knows only what the companies choose to share.
I did a little test recently. I asked Google, “Is artificial intelligence regulated in the United States?” And it said yes.
With AI, as with biotechnology, there are no new laws, no meaningful oversight. What Reagan started—dismantling the agencies meant to serve the public—is still happening, and what we’re seeing now is the result: regulatory agencies being gutted and businessmen with clear conflicts of interest being put in charge of public protections.
And even when regulatory agencies do exist and courts agree they have jurisdiction, what we usually get is risk assessment—a cost–benefit calculation, not a real safeguard. It’s not protection; it’s permission.
These technologies are inherently invasive. Think back to the debates around genetic engineering and GMOs. These were products that entered our bodies and ecosystems. They weren’t just ideas; they became part of us, often without our consent.
But the campaigns we ran around GMOs offer a model for how to respond. We didn’t just critique the technology; we organized across sectors and spoke directly to the public. Together, we demythologized the science. We cut through the industry hype and told people what was really going on. And it worked. We helped build public skepticism. Not cynicism, but healthy doubt. The kind of critical thinking we desperately need right now around AI.
And just as important, we offered an alternative. We didn’t stop at opposition. We promoted organic food, sustainable farming, and direct connections between farmers and consumers. People had something to say yes to. That combination—clear critique and offering tangible alternatives—is one of the most powerful tools we have.
Another critical point of intervention is intellectual property (IP). The lifeblood of both GMOs and AI is the ability to patent and profit from information. In the case of GMO patents, it’s life itself—genes, organisms, even biological processes. Over time, IP law has been reshaped to make this not only possible, but standard. This legal structure doesn’t just enable exploitation; it also hides it. Trade secrets and proprietary data make it nearly impossible to know what’s being done, let alone to stop it. That’s how these technologies continue to advance—out of view and without accountability.
Legal reform is one piece of the puzzle, but it won’t be enough on its own. We also need to rethink how we tell the story. Mainstream media tends to embrace whatever’s new and shiny, often without asking hard questions. That’s why it’s critical we create our own channels: spaces rooted in care, caution, and collective values. We did it during the GMO campaigns, and we can do it again.
But at the heart of this moment is a deeper question: How do we resist? How do we confront these technologies and the systems that enable them while staying grounded in our humanity? There’s no single answer, but I hope these stories spark ideas about where you can intervene, and how your voice might help shape what comes next.
Most technologies, going all the way back to the plow, have been designed to replace human effort. That’s their core function. Today, doctors don’t have to conduct patient interviews because AI can do it. Farmers don’t have to weed because they rely on herbicide-resistant crops. These tools aren’t just making tasks easier—they’re replacing people.
This isn’t only a threat to jobs. It’s something much deeper. I want to invite you to consider: What does it mean to be human? What are we losing when we adopt these technologies so readily, without reflection?
I want to share a recent personal experience—something that happened just a couple of weeks ago.
My husband and I live in a senior living center up in Sonoma County, a community that was started by the San Francisco Zen Center. It’s very intentional, rooted in the idea of “beloved community.” We’re deeply committed to living by our principles, taking care of each other, and making decisions together using Quaker-style consensus tools.
Not long ago, two people came by promoting AI tools for senior care. One of the products they introduced was a surveillance system that watches you as you move around your apartment. It tracks how you walk, how steady you are, and how active you are, supposedly to learn how you’re doing, and it alerts someone if you fall or no longer “match” the behavioral data it has collected about you.
The second product they presented really broke my heart. It was an artificial intelligence “friend” for people who were lonely.
Of course, we rejected both proposals outright, but the encounter also challenged us to really live according to our principles. If we believe in that concept of beloved community, then we have to ask: How do we truly take care of one another? How do we notice if someone is lonely, or struggling, or in need of support?
The reality is that many care communities will adopt these technologies because they’re underfunded, understaffed, and overburdened. On paper, AI looks like a practical solution. But I’m challenging all of us to go deeper, not just to oppose these tools in theory or try to tweak the legal system, but to call on our own humanity. Ask yourself: What can I do to replace what AI is promising everyone else?
In 1964, I was a student at UC Berkeley, part of the Free Speech Movement. We were young, idealistic, and determined to figure out how real change happens—how to challenge unjust systems while staying true to our deepest values.
The day Mario Savio gave his famous “bodies upon the gears” speech, we were running a freedom school, kind of like the Occupy movement. We held classes and had conversations about how to create change, how to live in alignment with our deepest values. That’s what was happening in December 1964 on Sproul Plaza on the Berkeley campus.
We didn’t know what we were doing. We were figuring it out as we went. I hope you’re willing to do the same—to step into the unknown, because the stakes are high. We are in a moment of crisis. My generation did what we could. We made progress, but our time is passing.
So how will you rise to meet the challenge? How will you respond to what may be some of the most dangerous and dehumanizing technologies our society has ever seen?
This series—adapted from the Bioneers 2025 session AI and the Ecocidal Hubris of Silicon Valley—offers critical perspectives on the systems driving the AI boom and the broader impacts of techno-solutionism.
In the second piece, tech critic Paris Marx exposed the staggering environmental toll of AI’s infrastructure, from massive energy use to the exploitation of local water systems.