Farming in the Dark: The Black Box of AI and the Erosion of Food Sovereignty
Bioneers | Published: July 3, 2025
In the race to digitize every aspect of life, artificial intelligence is rapidly gaining ground in agriculture, quietly reshaping how we grow food, manage ecosystems and make decisions about land and livelihoods. Framed as a tool for efficiency and sustainability, AI is increasingly embedded in systems that claim to address climate change and food insecurity, but beneath the promises lie deeper questions: Who controls these technologies? Whose knowledge do they prioritize? And what happens when decisions about nature are outsourced to opaque, corporate-built algorithms?
In this essay, Soledad Vogliano, an anthropologist, farmer, and Program Manager at the ETC Group, unpacks the expanding role of AI in food systems. Drawing on her work supporting Indigenous and peasant movements and her leadership on digitalization at ETC, Soledad makes the case that AI in agriculture is not just a technical issue; it's a political one.
Adapted from the Bioneers 2025 panel AI and the Ecocidal Hubris of Silicon Valley, this piece is the fourth installment in our four-part series examining some of the hidden impacts of artificial intelligence. Read to the end to access the other essays in the series.
SOLEDAD VOGLIANO: Artificial intelligence is quietly but profoundly reshaping the way we grow food and manage biodiversity. While it’s often promoted as a high-tech fix for some of our biggest global challenges, from climate change to hunger, its growing presence in agriculture raises unsettling questions: Who’s really in control of these tools? And whose interests are they designed to serve?

Let’s start with what I consider the elephant in the room: the black box.
The “black box” refers to the opaque nature of many AI systems, especially those built using machine learning. These models can generate highly accurate predictions, but how they arrive at those decisions is often unclear, even to the experts who design them. We can observe what goes in and what comes out, but the inner workings remain hidden. That lack of transparency is one of AI’s most dangerous features—and one of its most overlooked.
Those mysterious algorithms making decisions about everything from crop protection to biodiversity conservation are, in practice, about as transparent as a brick wall.
Imagine a farmer—let’s call him John—standing in his field, facing a pest outbreak. He consults an AI system developed by a far-off tech company for guidance. The system gives him a recommendation. But here’s the problem: John has no idea how that decision was made. Was it based on the latest agronomic data? Was it tailored to his region’s climate or soil? Was it simply designed to push a product? He can’t tell, and there’s no way for him to find out.
That’s the danger of the black box. When AI systems operate without transparency, their decisions may be flawed, biased, or harmful, and users are left in the dark. If John applies a pesticide that degrades his soil or plants a crop unsuited to his land, he may not even know what went wrong, let alone how to fix it.
The black box doesn’t just obscure technical processes; it raises serious ethical questions. In high-stakes fields such as agriculture, healthcare, finance, and criminal justice, this opacity threatens fairness, accountability, and human agency.
This brings us to a second and equally urgent concern: accountability. What happens when decisions that shape lives and livelihoods are made by invisible algorithms that answer to no one? It may sound dystopian, but this is increasingly the world we live in as AI systems are integrated into the foundations of agriculture, healthcare, finance, and more.
Consider a scenario: an AI system recommends a pesticide that ends up destroying beneficial insects or encourages a crop choice that later crashes in value. Who is responsible? The farmer who followed the advice? The corporation that built the model? The algorithm itself—a piece of software with no awareness or agency?
This is where accountability breaks down. Without transparency, there’s no clear line of responsibility. Tech companies can shrug off failures, claiming the system, not the company, made the decision. Meanwhile, it’s the farmers, ecosystems, and communities who suffer the consequences. It’s like suffering harm from a missed medical diagnosis, only to be told afterward that “the AI said it was fine.” How can that possibly be acceptable?
The lack of accountability in black box AI isn’t just a technical oversight; it’s a systemic failure, one that protects corporate interests at the expense of human and environmental well-being.
So, who’s really in control of AI in agriculture? The answer probably won’t surprise you. Many of the same corporate giants that dominate agrochemicals and industrial farming—companies such as Bayer, Syngenta, and Corteva—are now at the forefront of AI integration, often in collaboration with major tech firms. Together, they are shaping the digital future of agriculture.
These companies are using AI to steer decisions about what gets planted, how crops are managed, and which inputs are used. Their systems are powered by data they often control, collected from farms across the globe. And they’re embedding themselves deeper into agriculture by layering digital decision-making on top of the same extractive models they’ve long promoted—models reliant on genetically modified seeds, synthetic fertilizers, and pesticides.
The result is a consolidation of power. AI becomes a tool not for democratizing knowledge or supporting sustainability, but for reinforcing the dominance of firms already shaping global food systems. The technologies remain opaque, their logic inaccessible to farmers and the public. What looks like innovation is often a digital power grab that risks locking farmers into systems they can neither fully understand nor easily escape.
And it doesn’t stop there.
Even when AI systems appear neutral, they are not. Algorithmic bias is a growing concern that we ignore at our peril. These systems are trained on data that reflects the values, assumptions, and interests of those who create and control them. In farming, this often means data drawn from industrial agricultural practices, leading to recommendations that prioritize yield and profit over soil health, biodiversity, or local needs, while overlooking the ecological and cultural realities of small, diverse, or Indigenous-managed farms.
When corporate interests shape the data, they shape the outcomes, and when those outcomes are flawed or biased, it’s communities and ecosystems that pay the price.
This leads to harmful mismatches. AI may suggest fertilizers or pesticides based on monoculture norms, ignoring local soils, biodiversity, and traditional knowledge that has sustained communities for generations. Yet these outputs are often framed as objective, scientifically validated truths, despite being based on biased inputs.
Which brings us to another critical issue: data ownership, or more precisely, the lack of it. In the world of AI, whoever controls the data holds the power. And right now, that power lies almost exclusively with corporations. Data is often extracted from farmers, frequently without clear consent, and fed into AI models that go on to shape the tools, policies, and economic systems those very farmers must navigate.
This is a form of digital colonialism. Local and Indigenous communities that have long been the stewards of biodiversity and traditional ecological knowledge are seeing their insights extracted, repackaged, and monetized by distant actors. Their knowledge is treated not as a living inheritance, but as raw material to be mined for corporate gain. All of this is buried beneath layers of technical complexity, making it nearly impossible to recognize, let alone resist, the exploitation.
When AI systems are built on appropriated data and biased assumptions, they don’t just miss the mark; they perpetuate inequality, erode sovereignty, and turn culture itself into a commodity.
And then there’s the hype: the narrative that AI is the future, whether or not it actually works. One of the most troubling aspects of AI’s rapid rise is the overwhelming optimism surrounding it. The excitement—amplified by corporate marketing, media headlines, and government endorsements—has triggered a wave of massive investments, often based more on speculative promise than proven performance.
This rush to adopt AI has created artificial demand in sectors such as agriculture, even when the technologies in question remain opaque, unreliable, or misaligned with real-world needs. The more corporations can frame AI as revolutionary, the more funding, influence, and market share they can secure, even if the tools themselves haven’t delivered on their promises and their limitations go largely unacknowledged.
Mainstream media often reinforces this narrative, presenting AI as an inevitable solution to pressing global challenges: climate change, food insecurity, and ecological collapse. In doing so, it pushes critical questions to the margins: How effective is AI really? What are its social and environmental consequences? Who benefits, and who bears the cost?
In this environment, the deployment of AI technologies often outpaces our understanding of their impacts, leaving little room for democratic oversight or ethical reflection. That’s why we need to shift the narrative from top-down innovation to bottom-up assessment.
Bottom-up technology assessments are essential if we want AI to serve the public good rather than corporate interests. These approaches center community voices, lived experience, and local knowledge. They prioritize inclusion and transparency and ensure that those most affected by new technologies have a meaningful say in how they are developed, implemented, and evaluated.
Corporate-led evaluations often sideline Indigenous and local communities, undermining their rights to self-determination. In contrast, bottom-up approaches center those voices, allowing assessments to reflect cultural values, ecological knowledge, and sustainability priorities.
But effective bottom-up assessments must go beyond surface-level consultation. They should support community organizing and help local groups build and share their own narratives. These communities offer essential insights into how technologies affect ecosystems, livelihoods, and futures. When they are empowered to define resources and benefits on their own terms, the resulting assessments are far more likely to align with shared values and aspirations.
To conclude, the growing reliance on AI in agriculture and beyond raises serious concerns about transparency, accountability, bias, and power. The opacity of these systems, often referred to as the “black box,” combined with corporate control over both the tools and the data, risks exacerbating inequality and displacing local knowledge.
What we need instead is clear: greater transparency, better data, and inclusive, bottom-up assessments that ensure AI technologies serve all communities, not just corporate interests.
This series—adapted from the Bioneers 2025 session AI and the Ecocidal Hubris of Silicon Valley—offers critical perspectives on the systems driving the AI boom and the broader impacts of techno-solutionism.
In the second piece, tech critic Paris Marx exposed the staggering environmental toll of AI’s infrastructure, from massive energy use to the exploitation of local water systems.