
Where Risk Gets Mispriced
It’s easy to think of AI as clean. Digital. Intangible. It lives in the cloud, responds instantly, and promises efficiency without exhaustion. It doesn’t flood, catch fire, collapse, or organize. At least not in ways we’re taught to measure. But when insurers start pulling out of climate-threatened regions while underwriting opaque machine learning systems, we are no longer in the realm of coincidence. We are witnessing a shift in what society values — and what it believes it can walk away from.
In the past two years, major insurers have exited home and fire insurance markets not just in obvious states like California and Florida, but across a broad swath of the country, from Pennsylvania to the Dakotas, citing the compounding risks of climate volatility. Meanwhile, the same industry is investing in bespoke insurance products for algorithmic performance, AI-driven trading systems, and large-scale automated decision engines. These models aren’t being scrutinized for their environmental dependencies or social externalities. They are being insured because their risks are distributed, hidden, and — at least for now — plausibly deniable.
This is not merely a failure of policy. It is a collapse of discernment.
AI as Infrastructure, Not Abstraction
Much of the discourse around artificial intelligence remains metaphorical. We talk about training models, fine-tuning intelligence, or deploying agents, often as if these systems operate in a vacuum. AI does not float above the material world; it is deeply embedded in it.
Training large language models is energy-intensive: GPT-3’s training run alone was estimated at roughly 1.3 gigawatt-hours of electricity, about what more than a hundred U.S. homes consume in a year (Patterson et al., 2021), and later models such as GPT-4 are larger still. Inference — the real-time generation of output — is ongoing and cumulatively more resource-intensive. Data centers that support these systems require millions of gallons of water annually for cooling and are often located in regions already under climate strain (Lant, 2023). These systems also depend on a global supply chain of rare earth elements, cobalt, and lithium, sourced from extractive economies and often from zones of ecological degradation or human rights violations.
What we call AI is not just software. It is infrastructure. The choice to describe it otherwise is not neutral — it is obfuscation.
The Cost of What We Don’t Count
Governance systems rely on visibility. What cannot be seen cannot be governed. In this sense, AI systems present a dual risk: Not only are their behaviors often inscrutable, but the full costs of their operation remain externalized — economically, environmentally, and morally. To externalize a cost is to push it outside the boundaries of responsibility, visibility, or accountability. The system benefits, but someone else pays the price.
Frequent data breaches in sectors like banking, healthcare, and social media offer clear, contemporary examples of externalized costs:
Banks use data to optimize lending, marketing, or fraud detection. But when a breach occurs, individuals — rather than institutions — suffer the consequences: identity theft, financial disruption, and diminished trust. The system benefits from data-driven efficiencies, while the harm is offloaded onto those least equipped to manage it.
Hospitals and healthcare systems digitize patient records to improve care. Yet, when sensitive data is exposed, patients — not the institutions — absorb the emotional distress, reputational damage, and potential legal and financial exposure. The healthcare system remains operational. The cost is quietly redistributed.
Social media companies collect and monetize user data to drive engagement and advertising revenue. But when that data is misused or leaked, it is democracy, public mental health, and civic discourse that absorb the fallout. The business model remains untouched. The collective burden grows.
These are all forms of externalized harm: systemic efficiencies preserved at the expense of human, social, and institutional resilience. These harms are distributed, diffuse, and rarely traceable — making them easier to ignore.
Displaced costs don’t vanish. They dilute accountability, defer reflection, and sever action from values. This is how values drift begins: not with malice, but with systems designed to prioritize output while suppressing awareness of dependency. Consider the ways this surfaces in practice:
- When energy consumption is framed as a technical challenge, not an ecological one.
- When data dependency is celebrated as innovation, rather than examined as infrastructural fragility.
- When insurance instruments reward performance metrics without scrutinizing the environmental or labor systems that enable them.
These are not isolated oversights — they are symptoms of a broader misalignment.
As Bartunek and Moch (1987) suggest in their work on organizational change, first-order shifts — those that improve performance or efficiency — often mask second-order consequences. These deeper shifts reflect changes in core assumptions that remain unexamined until failure forces attention. The same dynamic applies here: We govern what the model does, but not what it demands.
Governance focused only on what the model delivers — rather than what it depends on — offers the illusion of oversight. The cost we don’t count becomes the risk we pretend doesn’t exist.
Why it matters for governance: When systems externalize cost, they obscure accountability. Governance that focuses only on performance outcomes misses the deeper dependencies that generate risk and enable harm. Oversight must extend beyond metrics to the conditions that sustain the model — and the ones it quietly erodes.
What Sustainability Actually Demands
There is a growing tendency to speak of “AI for Good” or to pursue “green AI” initiatives, suggesting a shift toward more sustainable lifecycles — though rarely addressing what sustains the system itself. While well-intentioned, these efforts often stop short, substituting branding for accountability, offering the illusion of empowerment while deepening our dependency on invisible systems. In the Accountability Era, where data shapes both decision-making and meaning-making, governance must resist the temptation to let language collapse into vague brand messaging — for example, where all innovation is branded as AI, and hallucination fixes are treated as stand-ins for meaningful alignment.
“AI for Good” becomes suspect when it functions as a moral cover for unsustainable design. We must use data’s specificity — its unique capacity to define, classify, and anchor meaning — to call a thing a thing. Otherwise, what sounds empowering may simply obscure the next layer of harm. Sustainability is not a badge. It is a practice — one that requires confronting trade-offs, modeling restraint, and institutionalizing pause.
True sustainability in AI means accounting for the systems beneath the systems: the electrical grids, water supplies, labor pipelines, and extractive economies that feed its operation. It also means rethinking scale. As Crawford (2021) argues in Atlas of AI, the pursuit of scale in artificial intelligence has mirrored industrial logic: maximize output, externalize cost, suppress friction. But sustainability thrives on limits. Governance, if it is to have any moral spine, must articulate what those limits are — and who is protected when we honor them.
We Made the System — Now It Moves Without Us
We’ve seen this pattern before. In another era, it was tractors displacing families in the Dust Bowl. Today, it’s algorithms displacing attention from the systems that sustain us.
In Steinbeck’s The Grapes of Wrath, the mechanization of agriculture and the abstraction of financial risk displace thousands of families. The harm is real, but no one claims responsibility. “The bank is something more than men,” one character says. “It’s the monster. Men made it, but they can’t control it.”
Our current monster is not a tractor — it’s a server. It’s a model optimizing outcomes while externalizing risk. AI is the infrastructure we pretend is neutral. It automates, monetizes, and scales. But it does not feel. It never tires. It has no rights, freedoms, or concerns. So, we treat its demands as costless.
Just as banks once walked away from land and people, insurers are walking away from homes — while underwriting machines. We know this pattern. In the Accountability Era, silence isn’t neutral. It’s designed.
What to Rethink Before We Insure the Next Model
This isn’t about pulling the plug. It’s about reclaiming the capacity to name. In the Accountability Era, clarity is not a gesture — it’s governance. It’s the baseline for trust, and the first condition of accountability. Governance does not start with innovation; it starts with inquiry.
Before we underwrite another model’s performance, we should be asking:
- What does this system rely on?
- Where are its dependencies? Who maintains them? Who is harmed by their extraction?
- What environmental and infrastructural risks are being displaced?
And most of all: What is the true cost of keeping up the pretense that this intelligence is clean?