
This article explores how socio-technical systems theory, which emerged from a 1951 coal mining study, explains why many AI transformations fail today. Drawing parallels between historical industrial innovations and modern AI implementations, it shows why successful transformation requires designing human and technological systems together rather than deploying technology first and expecting people to adapt.
Sometimes the most important discoveries happen when we're looking for something else entirely. In 1951, two researchers named Eric Trist and Ken Bamforth were studying productivity at a section of the Haighmoor coal mine near Leeds when they stumbled across something they hadn't expected to find.
Small teams of miners had quietly reorganized themselves. Instead of following the traditional hierarchical structure, these multi-skilled crews were managing the entire coal extraction cycle on their own. And the results were remarkable: absenteeism had plummeted, accidents were down significantly, and productivity had soared.
What Trist and Bamforth discovered that day would become known as socio-technical systems theory, and it might hold the key to understanding why so many AI transformations struggle today.
The breakthrough wasn't just about mining efficiency. Trist realized something fundamental about how work actually gets done: you can't optimize technology and people as separate problems. They form what he called a "joint system": each part influences and depends on the other.
When those Yorkshire miners redesigned their work, they weren't just changing processes. They were creating conditions where people could see how all the pieces fit together, make adjustments when unexpected problems arose, and take ownership of the results.
The technical innovation was important, but it was the human system design that made the difference.
Twenty-three years later, Volvo decided to test this insight on a much larger scale. Their traditional assembly line was struggling with a devastating 41% annual turnover rate: people were leaving almost as fast as they could be hired.
So in 1974, they built something radically different: the Kalmar plant. Cars moved on automated carriers that teams could stop when needed. Groups of 15-20 people determined their own work rhythm and took responsibility for complete sections of the assembly process.
The transformation was striking. Defect rates dropped. Absenteeism virtually disappeared. The plant became so successful that managers from around the world made pilgrimages to see how it worked.
What made the difference? The technology was important, but the real innovation was giving people both the visibility to understand the whole system and the autonomy to improve it.
Fast-forward to 2025, and we're seeing the same fundamental challenge play out in AI transformations. Organizations invest in sophisticated technology, deploy advanced systems, and then wonder why adoption is slow, results are disappointing, or systems become fragile under real-world conditions.
The pattern is remarkably similar to those struggling assembly lines of the 1970s: deploy technology first, expect humans to adapt afterward.
Here's what typically happens:
Advanced AI changes how problems manifest, often faster than existing approval processes and control systems can respond. A machine learning model might detect an anomaly, but by the time the finding works its way through organizational hierarchies, the opportunity to act has passed.
The people closest to the work see issues first, but they often lack the authority or framework to address them. They become frustrated observers rather than active problem-solvers.
Design assumptions never get tested against reality because there's no systematic way for frontline insights to influence system improvements. The AI operates based on its training, but real-world conditions evolve in ways that weren't anticipated.
Systems become brittle instead of adaptive. Without the ability to make continuous small adjustments, small problems accumulate into larger failures.
At Spentia, we've built our entire methodology around that 74-year-old insight from the Yorkshire coal mine. We don't deploy AI and hope humans adapt. Instead, we work to create what those researchers would recognize as joint systems.
This means involving people in designing how they'll work with AI from the beginning. It means creating feedback loops so that human insights can continuously improve how AI systems operate. It means giving teams both the visibility to understand what's happening and the autonomy to make adjustments when they spot opportunities for improvement.
We've found that when people understand the whole process, both the human elements and the AI components, they become remarkably good at orchestrating both. They know when to trust the AI's analysis and when to apply human judgment. They can spot patterns the AI might miss and provide context that makes the technology more effective.
What those coal miners understood intuitively, and what Volvo proved at scale, remains true today: performance emerges from the interaction between people and technology, not from either one alone.
The most sophisticated AI in the world won't create lasting value if it's imposed on teams without considering how they work, what they need to be successful, and how they can contribute to making the system better.
Conversely, even the most motivated and skilled teams will struggle if they're working with technology that doesn't fit how work actually gets done.
The coal miners of 1951, the Volvo teams of 1974, and the AI implementation teams of 2025 all face the same fundamental reality: sustainable performance is a property of the whole system, not just its individual parts.
This doesn't mean change is impossible or that technology isn't important. It means that the most successful transformations happen when people and technology are designed to work together from the start.
Those Yorkshire miners didn't reject innovation; they embraced it. But they did it in a way that honored both the power of new approaches and the wisdom of human experience.
Perhaps that's the real lesson: the future isn't about choosing between human intelligence and artificial intelligence. It's about creating conditions where both can contribute to something better than either could achieve alone.
Signals of Real Change: actionable insights at the intersection of AI, transformation, and talent.