The Human in the Loop: Technology, Power, and the Reflection We Avoid

There will always be a human in the loop, unless we are urged to forget it…

There’s a recurring narrative in discussions about artificial intelligence that evokes a shiver of inevitability: “AI won’t take your job, but someone who knows how to use AI will.”

It’s not so much a warning about automation as it is a reminder of agency—who wields it, and how.

Strip away the silicon, the algorithms, and the data centers, and you find that behind every automated decision, every “neutral” technological leap, is a profoundly human hand, directing its course, choosing its applications, and defining its consequences.

Yet we’re remarkably eager to forget this.

Technology has a seductive quality—it promises objectivity, precision, and an escape from the messiness of human subjectivity. AI, in particular, seems to embody that promise perfected: it abstracts away the biases, emotions, and inefficiencies of human decision-making—or so we like to believe.

In our rush to delegate responsibility to machines, we create the illusion of neutrality. But the reality is far messier. The loop—the system of decisions and consequences—remains entirely within the human domain.

And that’s where the real danger lies: not in the machines themselves, but in our willingness to forget the humans who made them, who wield them, and who shape the outcomes.

The Hidden Human in Other Loops

This isn’t unique to AI. Throughout history, the most powerful institutions have often operated under the guise of impartiality. Religion, politics, and technology—all have been framed as forces unto themselves, as though their impacts were intrinsic and not directed by people. Consider these reimaginings:

  • Religion won’t take your spirituality, but someone who knows how to use religion will.
    Doctrine, like code, can be written with precision but interpreted with human bias. What begins as a quest for meaning and transcendence becomes a tool of control when wielded with cunning.

  • Politics won’t take your agency or adulterate your patriotism, but someone who knows how to use politics will.
    Systems of governance are designed to serve, but history shows they are most often used to manipulate, centralize power, and obscure accountability. The structure may remain, but the intent is always human.

  • Technology won’t take your humanity, but someone who knows how to use technology will.
    Whether it’s the assembly line or the algorithm, every invention amplifies human potential—for creation or destruction, for equality or exploitation.

In each of these, the common thread is the human in the loop. The tools themselves are not neutral; their creation, deployment, and ultimate use reflect the motives, ethics, and worldview of the people behind them.

The Danger of Forgetting Ourselves

When we deliberately erase, or conveniently forget, the humanity in every loop, we surrender not just our control but our reflection. In doing so, we often justify great harm under the guise of progress or efficiency. Consider the implications:

  • When we tell ourselves that AI decisions are “objective,” we obscure the biases baked into their training data and perpetuate systemic inequities.

  • When we frame political dysfunction as the fault of “the system,” we excuse the actions of those perpetuating it.

  • When we view religion as a static, unchanging truth, we fail to hold its interpreters accountable for weaponizing belief.

This forgetting is not always accidental. Often, it’s the point. To obscure human agency in the loop is to absolve responsibility and justify power. The greatest trick of modern systems—technological, political, or otherwise—is convincing us they operate independently of the humans driving them. But this is a dangerous abdication of our role as both creators and participants.

Remembering the Loop

So what’s the antidote? Reflection. Accountability. An insistence on keeping the human visible in every loop we create or inhabit. This isn’t just about recognizing that AI was designed by humans, trained on human data, and used for human purposes. It’s about understanding that every tool we wield reflects who we are, what we value, and what we aspire to.

The question is not whether AI (or religion, or politics) will take something from us. The question is how we choose to engage with the humans who wield them. Will we empower those who seek fairness, justice, and empathy? Or will we allow these loops to be dominated by those who exploit them for control, division, or profit?

The loop has always been human. To forget this is to forget ourselves. But to remember it—deeply, consistently, and with purpose—might just be the way we make it through, together.
