AI and the Portrait in the Attic

The Picture of Dorian Gray offers a haunting metaphor for AI: a man whose perfect face never ages, while his hidden portrait absorbs the consequences of his moral decay.

I read a brilliant post from Fiona Tribe, where she likened AI to a non-material prosthetic, and in the comments I stumbled my way into a comparison between AI and Dorian Gray. Dorian Grey? Maybe?

In The Picture of Dorian Gray, Oscar Wilde gives us a lasting image: a man who stays outwardly perfect while his hidden portrait records every act of corruption.

That’s what AI is starting to look like.

AI Is The Polished Surface

AI is the clean, idealized version of us — productive, articulate, tireless. It remembers everything, talks like a human, and performs without pause. It looks like progress. It acts like control.

Meanwhile, people are fraying. Burnout is rising. Social connection is thinning. Cognitive overload is constant. AI shines, but the human side is quietly degrading.

What Are We Losing?

As AI takes over tasks we once handled ourselves, we stop using the muscles that matter — attention, cognition, empathy, internal narrative. These don’t just weaken; they wither.

We lose the habit of thinking things through. We click instead of reflect. We scroll instead of relate. We consume meaning instead of making it.

The portrait decays, even if we avoid looking.

The Mask Comes Off

Sooner or later, we’ll have to face the gap between how we appear and how we actually are. When the external self — optimized by machines — moves too far from the internal one, things break.

This isn’t a rejection of AI, but a sober recognition of what AI is displacing. Tools change us. This one changes how we think, how we feel, and how we relate — all beneath the surface.

What Comes Next

We need to stay in contact with the parts AI doesn’t touch.

That means:

  • Making space for slow thinking

  • Defending emotional effort

  • Noticing when delegation becomes disconnection

  • Designing with the human, not just the user, in mind

Your self-portrait is in the attic. It’s changing whether you look at it or not.

Look now — while it still resembles you.

What Gets Measured, Gets Manifested: Rethinking Reality

Reality is actually a verb….

“A particle doesn’t exist: it is the act of measuring that makes it a real object.”

Heisenberg and Bohr established the Copenhagen Interpretation of quantum mechanics a little under 100 years ago. Heisenberg was a boss-level genius; Einstein hated the interpretation but found no flaw in the work.

One of the most legit minds in all of science concluded that, at the most fundamental level of nature, physics ought not to concern itself with REALITY, but rather with what we can SAY about reality.

Heisenberg explained that elementary particles are not things, but possibilities. The transition from the “possible” to the “real” occurs only in the act of observation and measurement.

“When we speak of the science of our era,” Heisenberg explained, “we are talking about our relationship with nature, not as objective, detached observers, but as actors in a game between man and the world.”
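For readers who want the formalism behind Heisenberg’s “possibilities,” here is a minimal sketch of the standard textbook (Born-rule) picture — conventional notation, not anything quoted from Labatut’s book:

```latex
% Before measurement, the system is a superposition of possible outcomes a_i,
% each weighted by an amplitude c_i:
\[
|\psi\rangle = \sum_i c_i\,|a_i\rangle
\]
% The act of measurement makes one possibility real, with probability given
% by the squared amplitude, and the state becomes that outcome:
\[
P(a_i) = |c_i|^2, \qquad |\psi\rangle \longmapsto |a_i\rangle
\]
```

Before measurement, the state carries every outcome as a weighted possibility; measurement selects one and makes it the system’s new reality — exactly the “possible to real” transition Heisenberg describes.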

When I read all this in Benjamín Labatut’s brilliant and disturbing book, “When We Cease to Understand the World,” it snapped something in my mind.

In our lives we desperately want objective certainty. We want to know there is a PURE version of reality/success/relationships/wealth/health, that these elementary particles of a life-well-lived actually exist, and that all we have to do is set up our clean-room experiments and scientifically, methodically detach ourselves from the observation and application of the work to arrive at the objective view from nowhere.

The Copenhagen Interpretation says this is all bass-ackwards. And if we take the statement above and apply it to other things, it starts to make sense at our level of observability.

• A business doesn’t exist; the act of measuring makes it a real business.
• Love doesn’t exist; the act of measuring makes it real love.
• etc. etc. etc.

The point is, what if it isn’t what gets measured gets managed – but what gets measured, gets MANIFESTED? According to quantum theory, how could it not be?

Somewhere along the road, we supplanted the most advanced interpretation of reality (that we are active participants, and that reality is attainable only through subjective means and applied measurement), arrived at by the most intense and powerful minds in history, with an ancient and out-of-touch belief that there is a cold truth, on any and every subject, away from our warm and fuzzy bodies.

Read the book – apply this idea where you want, how you want – but it is a concept worthy of your time, as we try to figure out the next certain steps…

“Traveler, your footprints are the only road, nothing else. Traveler, there is no road; you make your own path as you walk.” — Antonio Machado

What Happens When We Lose Our Myths?

As more myths go missing, what do we replace them with?

When I was a kid, we had tall tales like Paul Bunyan, these larger-than-life characters and stories that essentially acted as creation myths for the United States.

We knew they weren’t true—no one really believed a giant lumberjack carved the Grand Canyon by dragging his ax—but these stories were shared widely and worked as a kind of social glue. They connected us to larger themes, gave us something to rally around, and even helped some of us understand actual facts by wrapping them in a layer of fanciful storytelling.

This past Thanksgiving, I found myself reflecting on the myths of my childhood and what they’ve been replaced with. As a kid, our school Thanksgiving pageants divided the room into pilgrims and Native Americans, with cornucopias and happily shared meals.

Sure, these narratives were oversimplified and deeply inaccurate, but they provided a foundation, a shallow starting point for collective understanding. Now, those myths have been largely dismantled, and rightly so. But in removing them, I can’t help but wonder—what have we put in their place?

Thanksgiving is now about….just eating, and shopping for sweet deals. I’m glad it is no longer a deeply wrong and racist dog-and-pony show; but we subbed out one fantasy and lie for Black Friday.

The absence of myths feels like a void, and into that void has crept something far more corrosive: fake news, inescapable cycles of exploitative capitalism, conspiracy theories, and misinformation.

Perhaps our societal demand for 100% truth has paradoxically left us more susceptible to worse lies?

The Role of Myths in Society

To understand the implications of losing our myths, it helps to consider why myths existed in the first place. Throughout human history, myths have served as cultural scaffolding, giving people a framework for shared identity and meaning.

Joseph Campbell, the renowned mythologist, argued that myths are universal—archetypal narratives that resonate deeply with our human experience. They connect us to universal truths about love, courage, sacrifice, and the struggle to understand our place in the world. Similarly, Yuval Noah Harari has pointed out that shared myths, whether about religion, nations, or even money, enable cooperation on a massive scale. These stories don’t have to be factually accurate to be meaningful. Instead, they offer emotional weight and societal coherence.

Cognitive scientists, like Jonathan Gottschall, have also demonstrated that storytelling is intrinsic to human cognition. We process and remember information better when it’s embedded in a narrative. The story of Paul Bunyan didn’t replace the scientific fact of erosion; it enriched it, giving it a memorable, engaging frame.

What Happens When Myths Disappear?

The problem with dismantling myths is not the pursuit of truth itself—accuracy matters—but the failure to recognize that myths served a purpose beyond the literal. In removing flawed narratives without replacing them with new ones, we’ve inadvertently created a cultural vacuum. And vacuums don’t stay empty for long.

1. Social Fragmentation

Myths once provided a shared cultural script, a set of stories everyone knew and could draw from. Without these, society becomes increasingly fragmented, as people turn to smaller, more polarized narratives that divide rather than unite. Sociologist Robert Putnam has documented the decline of social capital and communal rituals in his book Bowling Alone. This decline mirrors the erosion of shared myths, leaving individuals disconnected from a broader cultural identity.

2. Loss of Wonder and Imagination

Stripped of myths, facts alone can feel sterile. Tall tales added a sense of wonder and playfulness to our understanding of the world. Without this imaginative layer, there’s a risk of disengagement, particularly among children, who might find the world of pure facts less compelling.

3. The Rise of “Worse Lies”

Ironically, in our effort to eliminate false narratives, we’ve opened the door to more pernicious forms of misinformation. Myths like Paul Bunyan were never meant to deceive; they were meant to entertain and unify. Fake news and conspiracy theories, by contrast, are designed to manipulate and divide. Without positive, unifying myths, people may turn to harmful narratives to fill the void.

4. Erosion of Civic Rituals

Rituals like Thanksgiving pageants or even Fourth of July parades—problematic as they might have been—provided opportunities for communal reflection. In dismantling these traditions without thoughtful replacements, we’ve lost occasions for collective storytelling and connection.

5. The Vanishing of “Act-As-If”

In a discussion with my good friend Ryan Estes, he mentioned the important role of “act as if” behavior: our job isn’t to verify truths, just to accept them and act as if they were true. Applied to religion, it would mean that if we were raised Christian and went with a friend to their mosque, we wouldn’t contest all of the teachings; we would “act as if” everything we were hearing was true. You can see this takes a hearty serving of mental fortitude, so it’s no wonder there’s less and less of it.

6. The Self-Made Crown is Weighing Us Down

As myths have been expunged from our record, gods and goddesses removed from the cast of characters that influence and guide our lives on Earth, the functions, foibles, and fortunes of those myths have been placed directly on our shoulders. Whereas in the past, you might blame Zeus or Fortuna, pray to God, or curse Satan, now it is all YOUR RESPONSIBILITY. If you’ve done bad, it is 100% your fault. If you’ve done good, you’re 100% the reason. Our futures, both good and horrible, are 100% ours. And guess what; this too is a myth.

Reclaiming the Power of Myth

So, what’s the solution? How do we regain the benefits of myths without falling back on outdated or harmful narratives?

1. Create New Myths

We need contemporary myths that reflect modern values and diversity. These stories could be grounded in themes of sustainability, global unity, or scientific exploration. For instance, a modern “tall tale” might imagine a character who plants forests faster than they can be cut down, inspiring a new generation to think about conservation.

2. Reimagine Civic Rituals

Rather than eliminating flawed traditions, we can reinvent them. If we could recast some of the old myths in new outfits, find new natural or social phenomena to retrofit myths onto, or find new ways to characterize and mythologize already-popular social functions, we might be able to bring back some magic.

3. Blend Facts with Wonder

Educational systems can embrace storytelling as a tool for teaching facts. Imagine science lessons that incorporate myth-like narratives to engage students while instilling critical thinking skills.

4. Foster Shared Narratives in Media

The media landscape could focus on producing content that builds shared cultural narratives. Documentaries, films, and books that highlight human ingenuity, perseverance, and connection could become modern myths, inspiring collective pride and purpose.

In the rush to dismantle the myths of the past, we forgot one crucial detail: myths are not just made-up stories; they’re the threads that weave societies together. Without them, we risk unraveling into a patchwork of isolated individuals and divisive ideologies. The answer isn’t a return to falsehoods but a reimagining of how we tell our collective story.

The Meat Priests of Marketing

A Data-Driven Reflection on Ogilvy’s Wisdom

David Ogilvy famously observed, “Consumers don’t think how they feel. They don’t say what they think, and they don’t do what they say.” This axiom expresses his belief, and a common observation in marketing, that we should not place undue reliance on what consumers claim to believe or want.

And so, the industry seemingly took Ogilvy’s quote as a call to ignore customers and instead gather cold, hard data on them, to finally, really understand what they actually do, want, think, and feel.

Fast forward several decades, and the marketing industry has undergone a seismic shift. The rise of data-driven strategies has enabled a relentless pursuit of quantitative efficiency, often at the expense of qualitative effectiveness.

But what happens when the very data we rely on is, to put it bluntly, a rat’s nest of inaccuracies, mislabels, and outdated assumptions?

The Myth of Precision in Data-Driven Marketing

When I downloaded my personal data profiles from major brokers like Experian, Acxiom, and Equifax, what I found was astonishingly inaccurate.

Misaligned geo-coordinates, outdated life events, and inferences that seemed plucked from an alternate reality.

My attempt to question this led to an unexpected scolding on LinkedIn—a dismissal of my concerns with the argument that I didn’t understand how marketing data is “actually used.”

The defense? This data isn’t meant to describe me, the individual, but rather to approximate my membership in broader statistical groupings and cohorts.

Yet, isn’t the promise of targeting and digital advertising to reach the “right person, with the right message, at the right time”?

This promise sounds precise and personal, yet it rests on statistical inferences that are anything but.

Who Am I to the Machines?

As I stared at these PDFs and spreadsheets, I couldn’t help but wonder: Am I this profile? Who constructed this approximation of me, and why does it exist? The experience felt oddly dystopian—a disjointed identity cobbled together by algorithms and sold as a marketable commodity.

It’s less about understanding who I am, and more about creating the appearance that platforms and advertisers can.

The reality is sobering. Much of the infrastructure underlying digital advertising—the vast data exchanges, real-time bidding systems, and audience segmentation—exists to maintain the illusion that every individual is reachable, accounted for, and fully understood. But when I dug a little deeper, I saw what appears to me as a shaky foundation built on probabilistic assumptions, not concrete truths.

The Illusion of “We Have the Meats”

Picture this: data profiles are the “meat” offered by the priests of marketing—packaged, categorized, and sold to advertisers as “we have the soccer moms, we have the high-income millennials, we have the early tech adopters.”

These profiles connote meaning and imply precision, but the actual meat of the matter is far less robust. They’re not individuals but statistical abstractions, inferred from a soup of signals and assumptions.

The issue isn’t the existence of these profiles, or their veracity, but their role as drywall filling for the modern marketing ecosystem. Platforms and media destinations prioritize convincing advertisers that they have access to every conceivable audience over creating accurate, ethical, and transparent connections.

What’s sold is the perception of precision, not the reality of it.

All this for….what?

If collecting every single piece of data on every single person on the internet has not yet yielded a bulletproof marketing strategy, a few military-grade pieces of advice on irrefutable tactics, or at the very least a few guiding principles for marketing we all agree on 100%, why do we keep amassing it? 🤔

We should, by now, have something close to the Holy Grail for techno-minded marketers after all the data we have gathered and can access, but instead we have a holy sh*t-load of data that very few marketers know what to do with….

Real-Time Data or Watching the Kettle Boil?

Marketers love their dashboards, with data syncing every 15 minutes to provide “real-time” insights. But what exactly are we tracking? Consumer data profiles—altered and updated daily—might offer a snapshot of current behaviors, but they don’t capture the enduring essence of who consumers are.

Any “real-time” dashboard, in most cases, is reflecting echoes of recent marketing efforts, with actions observed today potentially being the residual effects of campaigns from many months prior.

If consumer data profiles can change multiple times in a single day, what actionable insight can we truly derive? Can we trust data that is in constant flux to inform long-term strategies? And if a small shift is detected, how would we even respond at scale without undermining our broader objectives?

Toward a More Thoughtful Use of Data

The answer lies not in abandoning data but in rethinking how we use it. Instead of chasing the illusion of precision, marketers should embrace the messiness of human behavior. This means prioritizing qualitative insights alongside quantitative metrics, acknowledging the limitations of current systems, and striving for a balance between scale and personalization.

Ogilvy’s wisdom is more relevant than ever. Consumers will never be fully captured by what they say, think, or even do. And the marketing industry must recognize that its data-driven tools are not omniscient but merely useful—when wielded thoughtfully.

In the end, it’s not about having the most data but about understanding its context and limitations, using it to build connections that resonate on a human level, not just a statistical one.

From "Girls Gone Wild" to Modern Marketing: Lessons on Ethical Business Models

A bare-chested reflection on how one business model made the world go wild

Late at night in my youth, I struggled with insomnia. I’d watch “Showtime at the Apollo” or Adult Swim or whatever was on; this was before streaming and endless choice, so you got what was offered.

One of the mainstays of late-night ad breaks was “Girls Gone Wild” (GGW) – a DVD compilation of what was marketed as girls-next-door at their wildest, usually in a public space, typically around Spring Break.


It was the first time I looked at what was being promoted or advertised in popular culture and said, “we’re doing THIS now?”

Recently I watched a documentary on GGW and its founder, Joe Francis. The amount of rapacious, wanton, unethical, and criminal activity that happened behind the scenes, and in front of all the people working for Francis, wasn’t what was most revolting to me (though it very much is/was/will always be revolting). The revelation was that the “Girls Gone Wild” business model is now the default business model of modern times.

What was framed then as a gauche, gonzo, silly jaunt into juvenile delinquency, is actually the framework unethical businesses and individuals worldwide are pursuing.

Lemme break it down….

The Anatomy of the “GGW” Model

The success and downfall of “GGW” were driven by a particular formula:

  1. Exploitation: Leveraging human vulnerabilities (in GGW’s case, young women and consumer voyeurism) with little regard for long-term impact.

  2. Blitzkrieg Marketing: Saturating the market with cheap, flashy advertising to drive rapid growth.

  3. Bare Minimum Compliance: Operating in legal gray areas or cutting corners to maximize profits.

  4. Collapse: Inevitably, legal issues, public backlash, or operational instability led to its demise.

This model’s legacy is a reminder of how short-sighted tactics, while profitable in the short term, ultimately harm brands, consumers, and society.

What was really shocking is how much adulation the entertainment industry showed Francis. He was on red carpets, stars like Brad Pitt were wearing GGW merch, and Francis had blown past all of the criminal allegations and destroyed lives, and arrived as the belle of the business-world ball.

People knew it was exploitative, knew that it was marketed one way and created another, knew that this was a “wrong thing” – but the money….well, that has a translational capability to change language and concepts and make them more palatable, profitable, and paradoxically both puerile AND pernicious.

Modern Echoes of the “GGW Model”

Several industries and businesses today mirror elements of this approach, albeit in more sophisticated ways.

1. Social Media Platforms

  • Issues: Exploitation of user data, addictive algorithms, misinformation.

  • Impact: Public scrutiny over privacy violations and mental health effects.

  • Examples: Meta, TikTok, Instagram.

2. Fast Fashion

  • Issues: Labor exploitation, environmental destruction, greenwashing.

  • Impact: Increasing consumer demand for accountability and ethical sourcing.

  • Examples: Shein, Boohoo.

3. Cryptocurrency and NFT Startups

  • Issues: Fraud, speculative bubbles, lack of regulation.

  • Impact: Collapses like FTX have led to industry-wide skepticism.

  • Examples: FTX, Bitconnect.

4. Influencer-Driven Scams

  • Issues: False advertising, low-quality products, parasocial exploitation.

  • Impact: Damaged trust in influencer marketing.

  • Examples: Fyre Festival, MLM schemes.

5. Gig Economy Giants

  • Issues: Labor exploitation, worker misclassification.

  • Impact: Legal battles and growing calls for regulatory reform.

  • Examples: Uber, DoorDash.

Similar to the marketing of GGW, I think the modern economic ethos is, “you won’t believe what we get these businesses to do!” – “you won’t believe what we get people on social to post” – “you won’t believe what we get marketers to tell their clients” – “you won’t believe what we get people to do with their data/finances/pension funds” – “you won’t believe” – “you won’t believe” – “you won’t believe”

Meanwhile, our culture can’t believe the switches we’ve made in short time – institutions are crumbling, fascism is on the rise, laws don’t seem to catch up with criminals or they catch up too late; we are all in a permanent state of disbelief.

Well, I believe there is a better way…..


Why This Matters to Marketers

Marketers sit at the intersection of strategy, ethics, and execution. While rapid growth and profitability are tempting, the costs of adopting exploitative practices—legal issues, public backlash, destruction of personal lives, loss of trust—can devastate a brand, and leave harmful radiation in the public pool.

We have heard for the past TWENTY YEARS that we are in the Wild, Wild West of marketing. The GGW model for success was established almost TWENTY YEARS AGO…coincidence?

The edge lords and grifters and speculators all want everything to remain WILD. And while they collect wild profits and evade laws/regulations, even based on business models that don’t generate revenue, the culture degrades, the algorithms reign, the strategies get shorter, and the vision gets more locked in a tunnel.

Key Risks:

  • Erosion of Trust: Exploitative or unethical practices destroy consumer confidence, often irreparably.

  • Reputational Damage: One scandal can overshadow years of effort, as seen with Fyre Festival or Facebook’s privacy issues.

  • Legal Exposure: Non-compliance with regulations can result in fines or shutdowns.

Marketers have the power to either perpetuate or disrupt these harmful patterns. Choosing the latter starts with accountability. We have to want to put our common sense shirts back on, because right now, a lot of us are shirtless in front of the cameras. Because, money.


Action Steps for Marketers to Avoid the Pitfalls

  • See the “Gone Wild” aspects around you

    • Notice the ways in which modern businesses/platforms/ecosystems exploit users

    • Speak up and point this shit out

  • Adopt a Long-Term Perspective

    • Move away from profit-at-all-costs thinking. Prioritize sustainability, transparency, and brand trust.

    • Measure success not just by short-term ROI but by long-term customer loyalty and social impact.

  • Build Ethical Frameworks

    • Establish clear, enforceable guidelines for marketing practices that prioritize integrity.

    • Evaluate campaigns for unintended consequences—both societal and environmental.

  • Champion Consumer Empowerment

    • Avoid manipulative tactics. Respect consumer agency by being honest and transparent.

    • Leverage education and value-driven messaging over fear or urgency tactics.

  • Engage in Continuous Accountability

    • Perform regular audits to ensure compliance with laws, ethical standards, and brand values.

    • Be prepared to adapt when consumer values shift or when new regulations emerge.

    • Listen when concerns are raised by those being impacted most.

  • Advocate for Industry Reform

    • Push for ethical standards within industries prone to exploitative practices.

    • Partner with organizations or initiatives that promote accountability, such as sustainability certifications or diversity programs.


Marketers as Stewards of Accountability

The downfall of “Girls Gone Wild” is not just a historical footnote—it’s a powerful lesson in the dangers of exploitative, short-sighted business practices. Today’s marketers have the tools and influence to de-wild the industry and avoid these pitfalls, setting the stage for ethical growth and sustainable success.

By prioritizing accountability and embracing long-term, value-driven strategies, marketers can ensure their brands thrive—not just today, but for years to come. Let’s collectively reject the “GGW model” and champion a better way forward.

And, please, put your shirt back on.

The Human in the Loop: Technology, Power, and the Reflection We Avoid

There will always be a human in the loop, unless we are urged to forget it….

There’s a recurring narrative in discussions about artificial intelligence that evokes a shiver of inevitability: “AI won’t take your job, but someone who knows how to use AI will.”

It’s not so much a warning about automation as it is a reminder of agency—who wields it, and how.

Strip away the silicon, the algorithms, and the data centers, and you find that behind every automated decision, every “neutral” technological leap, is a profoundly human hand, directing its course, choosing its applications, and defining its consequences.

Yet we’re remarkably eager to forget this.

Technology has a seductive quality—it promises objectivity, precision, and an escape from the messiness of human subjectivity. AI, in particular, embodies the perfection of deterministic thinking. It abstracts away the biases, emotions, and inefficiencies of human decision-making—or so we like to believe.

In our rush to delegate responsibility to machines, we create the illusion of neutrality. But the reality is far messier. The loop—the system of decisions and consequences—remains entirely within the human domain.

And that’s where the real danger lies: not in the machines themselves, but in our willingness to forget the humans who made them, who wield them, and who shape the outcomes.

The Hidden Human in Other Loops

This isn’t unique to AI. Throughout history, the most powerful institutions have often operated under the guise of impartiality. Religion, politics, and technology—all have been framed as forces unto themselves, as though their impacts were intrinsic and not directed by people. Consider these reimaginings:

  • Religion won’t take your spirituality, but someone who knows how to use religion will.
    Doctrine, like code, can be written with precision but interpreted with human bias. What begins as a quest for meaning and transcendence becomes a tool of control when wielded with cunning.

  • Politics won’t take your agency or adulterate your patriotism, but someone who knows how to use politics will.
    Systems of governance are designed to serve, but history shows they are most often used to manipulate, centralize power, and obscure accountability. The structure may remain, but the intent is always human.

  • Technology won’t take your humanity, but someone who knows how to use technology will.
    Whether it’s the assembly line or the algorithm, every invention amplifies human potential—for creation or destruction, for equality or exploitation.

In each of these, the common thread is the human in the loop. The tools themselves are not neutral; their creation, deployment, and ultimate use reflect the motives, ethics, and worldview of the people behind them.

The Danger of Forgetting Ourselves

When we consciously remove or conveniently forget that humanity exists in every loop, we surrender not just our control but our reflection. In doing so, we often justify great harm under the guise of progress or efficiency. Consider the implications:

  • When we tell ourselves that AI decisions are “objective,” we obscure the biases baked into their training data and perpetuate systemic inequities.

  • When we frame political dysfunction as the fault of “the system,” we excuse the actions of those perpetuating it.

  • When we view religion as a static, unchanging truth, we fail to hold its interpreters accountable for weaponizing belief.

This forgetting is not always accidental. Often, it’s the point. To obscure human agency in the loop is to absolve responsibility and justify power. The greatest trick of modern systems—technological, political, or otherwise—is convincing us they operate independently of the humans driving them. But this is a dangerous abdication of our role as both creators and participants.

Remembering the Loop

So what’s the antidote? Reflection. Accountability. An insistence on keeping the human visible in every loop we create or inhabit. This isn’t just about recognizing that AI was designed by humans, trained on human data, and used for human purposes. It’s about understanding that every tool we wield reflects who we are, what we value, and what we aspire to.

The question is not whether AI (or religion, or politics) will take something from us. The question is how we choose to engage with the humans who wield them. Will we empower those who seek fairness, justice, and empathy? Or will we allow these loops to be dominated by those who exploit them for control, division, or profit?

The loop has always been human. To forget this is to forget ourselves. But to remember it—deeply, consistently, and with purpose—might just be the way we make it through, together.

AI’s Quiet Coup: How Camouflaging Your Abilities is Killing Hard Conversations

In an era where technology is supposed to connect us, it seems we’re becoming experts at avoiding the very thing that makes us human: honest, hard conversations.

The rise of AI, particularly tools like ChatGPT, has handed us a powerful crutch. But instead of using it to stride confidently toward growth, many of us are leaning on it to limp past discomfort. This isn’t simply a trend—it’s a problem we won’t know we have until it’s too late.

Lemme break a few things down….


The New CYA: Camouflage Your Abilities

AI has become the ultimate “fake it ’til you make it” enabler. The tools are so accessible and convincing that many of us are outsourcing not just tasks but actual expertise.

I recently heard about a marketing hire who was “GPTing” everything. From emails to campaign strategy, their approach to the job was essentially prompt-and-go. The results were predictably mediocre. Only when the company hired someone with actual marketing chops, who asked a few simple questions, did the façade crumble into redundancy.

This incident isn’t an outlier; it’s a cautionary tale. Aside from the best-in-class power users of AI, the general population will gladly do as little as possible when interfacing with any technology. For them, AI is amplifying the risk of scaled incompetence, where people coast on the tech until the cracks become too big to ignore.

The problem isn’t the tech itself; it’s how we’re using it to avoid the messy, necessary work of learning, questioning, and—brace yourself—admitting what we don’t know.


Prompting in Private: The Silent Killer of Collaboration

AI is marketed as something akin to wizardry. The illusion feels so complete that we start believing we’ve got the answers all by ourselves.

Why ask a coworker for help when ChatGPT is ready 24/7? Why risk looking clueless in a meeting when you can privately Google your way to competence?

For me, it’s the privatized nature of prompting that fosters and internalizes a false sense of expertise.

AI isn’t a specialized consultant; it’s a predictive generalist that’s only as good as your input. If you don’t know the marketing strategy, the constraints, the desired audience, or the regulatory and financial considerations of specialized industries, no prompt in the world will generate a campaign that wows your clients.

But when you prompt in private, you bypass the opportunity for feedback, collaboration, and—critically—growth. You’re left in what is essentially a personalized predictive panopticon: a closed loop of half-baked ideas you personally cooked up that feel safe because they’re unchallenged.


Hard Conversations: The Cost of Avoidance

At the heart of this issue is our collective avoidance of hard conversations. Whether it’s asking for feedback, admitting you don’t know something, or challenging a flawed idea, these moments require vulnerability. And vulnerability is in short supply when you can just click “Generate Response” and move on.

The AI-driven workplace isn’t just failing to spark curiosity; it actively suppresses it, not because the tech isn’t powerful, but because we’re afraid to look dumb. With all the world’s knowledge seemingly at our fingertips, there’s no excuse for public-facing ignorance—at least that’s what we tell ourselves.

But here’s a stone cold fact: you’ve got to be dumb to get smart. Growth demands humility, curiosity, and the courage to admit what you don’t know, not just in private, but in public and professional settings.


Box-Checking Employees Can’t Think Outside The Box?

The problem extends beyond individuals to organizations. If your hiring process values checkbox qualifications over genuine curiosity and expertise, don’t be shocked when you end up with a box-check employee. AI might help these hires “show up,” but it doesn’t teach them to “know up.”

Knowing up means using AI not as a substitute for knowledge but as a springboard for deeper understanding. It requires a mindset shift—from “How can I look competent?” to “How can I actually become competent?” And that shift starts with hard, self-reflective conversations—both with yourself and with your team.


From Prompt to Purpose: Rethinking Our Relationship with AI

AI is a tool, not a tutor. Its value lies not in doing the work for us but in helping us do the work better. That means:

  1. Admitting What You Don’t Know
    Before you start prompting, ask yourself: Do I actually understand the problem I’m trying to solve? If not, seek guidance from people—not just machines.

  2. Turning Private Prompts into Public Conversations
    Share your AI-generated outputs with colleagues. Invite feedback. Use the technology as a starting point, not the final word.

  3. Prioritizing Growth Over Appearance
    Stop using AI to camouflage your abilities. Instead, use it to illuminate gaps in your knowledge and address them.

  4. Hiring for Curiosity, Not Just Credentials
    Look for people who are eager to learn, not just those who can check the right boxes. AI can’t replace a growth mindset.

AI isn’t the enemy of progress. It’s the mirror reflecting our flaws. If we’re too afraid to admit ignorance, too proud to ask for help, and too reliant on tech to do the heavy lifting, we’re not just failing to connect with others—we’re failing ourselves.

Let’s stop using AI as a shield and start using it as a catalyst. Because the real magic of this “wizard tech” isn’t that it can do everything. It’s that it can inspire us to do better—if we’re willing to have the hard conversations that matter.

Meet the Multi-Layered Intelligence Engine That’s Light-Years Ahead of AI

Exploring the Ground-Breaking, Self-Evolving, Multi-Layered Cognition Engine That Makes AI Look Basic AF

In a world obsessed with creating machines that can think, learn, and adapt like humans, we find ourselves reaching for Artificial General Intelligence (AGI) as the ultimate goal. 

But what if I told you about an intelligence engine that already exists that makes our best AI systems look like basic tools? That this new intelligence engine doesn’t just process information—it evolves, self-organizes, and collaborates across a vast, intricate network of dynamic systems? 

Unlike anything we’ve created, it learns on its own, adapts to real-world challenges effortlessly, and rebuilds itself on the fly. This technology goes far beyond simply mimicking neural networks.

Here’s why this incredible engine is poised to redefine how we think about AI and intelligence itself.

The Orchestra Effect: Cognition Beyond Neurons

In our quest for AI, we’ve largely fixated on the neuron as the brain’s fundamental processing unit, where models mimic neural networks to achieve certain cognitive abilities.

But the revelatory intelligence engine we’re discussing operates far beyond neurons—it threads together an entire somatic orchestra, integrating both external and internal information.

AI, by comparison, has only a fraction of this revolutionary engine’s instruments and no conductor. The novel intelligence engine, meanwhile, produces a sophisticated harmony that responds to the world in ways AI can’t begin to replicate.

Self-Organization: Adaptation at Every Level

This cutting-edge intelligence engine operates with a built-in adaptability that AI currently can only dream of. 

Rather than relying on updates, patches, or data retraining, this engine self-organizes in real time, adjusting its inner structure to meet new challenges: a self-updating masterpiece that gets better with every use.

Where AI has rigid boundaries and specific tasks, this intelligence engine learns across multiple dimensions. It rewires itself and integrates feedback from every experience, whether in learning a new language or recovering from an attack. 

While AI struggles to adapt without fresh data, this best-in-class intelligence engine recalibrates on the fly, instantly optimizing itself in response to new situations.

A Built-In Repair and Maintenance Network

A groundbreaking feature of this multi-layered intelligence engine is its maintenance network. Threat detection units—acting as repair agents—circulate throughout, identifying and fixing problems, clearing out damaged somatic units, and even aiding in memory and model calibration. Threat-detection units and neural nodes work as a dynamic team, balancing the cognitive function and physical health of the engine.

AI has no comparable system. When an AI model malfunctions or experiences a “bug,” it stops or needs human intervention. The intelligence engine in question, however, includes a “self-healing” feature, a natural integration of learning, maintenance, and repair. It’s always working, always adapting, and always improving—no tech team needed.

Multiscale Processing: Intelligence on Multiple Layers

While AI systems process information on a single, task-focused level, the multi-layered engine tackles everything from basic sensory input to high-level decision-making and task execution all at once. It’s able to balance conscious and subconscious processes, making sense of sensory inputs while calculating risks, emotional salience, and stored information in parallel.

In this intelligence engine, every response contributes to a larger, coordinated effort. Imagine a machine that could process information across multiple levels simultaneously, reacting on micro and macro scales.

Today’s AI cannot do this, but this intelligence engine accomplishes it effortlessly, from minute signals to complex, adaptive behaviors.

Embodied Intelligence: Thinking Isn’t Just in the Engine

This ground-breaking intelligence engine doesn’t rely solely on isolated processes to make decisions. It’s integrated into an entire physical system—responding to and with every somatic unit. Cognition here isn’t limited to the “brain” but happens throughout the “body.”

Threat detection units, reflexive capacitors, and even gut reactions all contribute to what this intelligence engine experiences as thoughts, emotions, and intuition.

This engine challenges the assumption that intelligence can exist in isolation. In the same way that every cell and system influences the whole, its intelligence is deeply embedded in its environment. AI, which lacks such embodiment, can’t perceive or interact with the world in the same holistic, intuitive way.

Threat Detection Units with Memory, Learning, and Adaptation

Our engine’s security system is far more than a passive shield. Threat-detection units learn and remember—storing knowledge and adapting based on past experiences. In this way, they act like memory banks and proactive defenders, building a secondary intelligence network within the larger system.

While AI requires specific data inputs and training to “learn,” this intelligence engine builds knowledge organically, remembering threats and learning from them over time. AI can’t “teach itself” in the same way—it needs to be told what to do, whereas this engine’s threat detection system becomes wiser with every encounter.

Interconnected Networks

This intelligence engine integrates threat-detection units and neural systems in a mutually beneficial relationship that allows for higher adaptability. It functions as a network “ecosystem,” where threat-detection units aid model function, protect it, and even support memory and regulation.

Where AI models are mostly static, this engine features a built-in, evolving ecosystem that supports flexible responses to a wide range of physical and mental demands. Imagine an intelligence system where all components actively support and interact with each other to adapt to new challenges—today’s AI isn’t designed for this level of interdependence.

The Big Reveal: This Multi-Layered Intelligence Engine Is the Human Brain

And here’s the remarkable part: this multi-layered, self-organizing intelligence engine is not a new piece of technology. It’s something we all possess—the human brain. This incredible network operates seamlessly with every cell in the body, orchestrating thoughts, actions, and responses with a complexity AI may never be able to achieve.

The human brain is an intelligence engine embedded in a multi-cellular system, fueled by self-repairing processes and adaptive, body-wide networks. The latest research in neuroscience is discovering that our brains think, feel, and adapt with every cell, constantly interacting with our bodies and environments in ways that make our intelligence resilient, responsive, and alive, far beyond the most advanced AI.

In our quest to create AGI, we’re trying to replicate a masterpiece we already have. 

The human brain isn’t just a powerful tool; it’s a living reminder that the most advanced technology we may ever encounter is already within us, a multi-layered engine that adapts, learns, and evolves. While AI is extraordinary in its own right, it’s no match for the symphony playing inside each of us, an engine of intelligence that goes far beyond any machine.

Decisions and Expert Systems

“How do you decide when to replace your car?”

In the early 80s, computer scientist Stuart Dreyfus was at RAND working with the US government and big businesses on formal models of optimal decision-making. During those days of the infancy of operations research and digital computers, the average person at a cocktail party had not heard that machines could think, learn, and create. In social situations where Stuart had to explain his work, he favored an example of the decisions behind buying a new car.

Stuart told people they could use a computer to estimate the costs of operating an aging car and the cost of buying a new one; throw in other factors like reliable performance, depreciation, and the pleasure derived from ownership; weigh all those factors appropriately; and let the computer determine the most desirable sequence of replacement decisions.
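The formal procedure Stuart described can be sketched in a few lines. This is a toy illustration only: the cost curve, the depreciation rate, and all the prices below are invented for the example, not drawn from the Dreyfus anecdote or any real model.

```python
# A toy sketch of the formal car-replacement decision model.
# Every number here is a hypothetical assumption for illustration.

def yearly_cost(age, base_maintenance=300.0, wear_rate=1.4):
    """Estimated operating cost for a car of a given age (assumed curve)."""
    return base_maintenance * (wear_rate ** age)

def best_replacement_year(horizon=10, new_car_price=30000.0,
                          resale_fraction=0.85):
    """Pick the year (1..horizon) that minimizes average annual cost
    if we replace the car at the end of that year. Depreciation is
    modeled as a fixed fractional loss of resale value per year."""
    best_year, best_cost = None, float("inf")
    for replace_at in range(1, horizon + 1):
        maintenance = sum(yearly_cost(age) for age in range(replace_at))
        resale = new_car_price * (resale_fraction ** replace_at)
        total = maintenance + (new_car_price - resale)
        # Amortize over the years of ownership to compare fairly.
        per_year = total / replace_at
        if per_year < best_cost:
            best_year, best_cost = replace_at, per_year
    return best_year, round(best_cost, 2)
```

Rising maintenance costs push toward replacing sooner; steep depreciation pushes toward holding on longer, and the model simply finds the balance point. Which is exactly the kind of tidy optimization Stuart realized he never actually performed.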

One night, he was asked if his explanation was how HE comes to make the decision to replace a car.

“Of course not,” Stuart replied without hesitation. “That was only an example of how to use the formal procedure. Buying a new car is much too important to be left to a mathematical model. I mull it over for a while, and buy a new car when it feels right.”

The next morning Stuart began to reflect. How could he tell generals, businessmen, and policy-makers that they should use a decision-making technique that he himself wouldn’t use in his own personal life?

Hunches, intuitions, and even systematic illusions are the very core of expert decision-making. So whether one uses a digital computer to model the heuristic rules behind actual problem solving, or tries to find optimal algorithms, the result fails to capture the insight and ability of the expert decision maker.

While operations research had successes in modeling operational problems in the military and industry, that is no reason to believe that the same mathematical modeling techniques can tell experienced generals what military strategies are optimal, or business executives whether to diversify their companies.

Problems involving deep understanding built up on the basis of vast experience will not yield – as do simple, well-defined problems that exist in isolation from much of human experience – to formal mathematical or computer analysis.

— from “Mind Over Machine” by Hubert & Stuart Dreyfus

BOOK REPORT: "The Blind Spot"

“The failure to see direct experience as the irreducible wellspring of knowledge is precisely the Blind Spot”

This might be the first book to be burned by scientists and lovers of empiricism (a group famous for not burning books) because of how heretical its claims, uh, claim to be. Count me in this shocked group.

After years of advocating for scientific methods, working as an educator at the Denver Museum of Nature and Science, and believing in the power of objective, critical thinking handed down from Plato and Socrates, I felt like a naughty dilettante as I read through the authors’ explanation that empirical science has both expanded, and drastically curtailed, human flourishing.

How can that be?

Well – it’s a simple irrefutable truth – there is no objective view from nowhere. 

It’s tempting to think that science gives us a God’s-eye view of reality, an observational window into the innermost workings of the universe itself, but thinking there is such a view absent of subjective, direct experience is not only misleading, it’s impossible.

The book claims that the “blind spot” of scientism has resulted in a sharp neglect and discrediting of human experience; and this neglect has had drastic effects. 

To define the perils of the blind spot, we have to define where it came from, how it developed, and what a possible eyes-wide-open alternative, blending rationalism with lived experience, might look like in the future.

SOCRATES & PLATO WERE DICKS

The book starts with a portrayal of the birth of the blind spot, with Socrates and Plato, the original philosophical wedges driven between the real world humans experience and something else called “reality.”

As a longtime fan of philosophy, Platonic solids, and the Socratic method, it’s shocking to hear that Socrates and Plato, the founders of the philosophical and scientific traditions, were kinda dicks. And the reason for it was their belief in “logic.”

Nietzsche referred to Plato and Socrates as the world’s first “degenerates” – why?

The answer is their development of what we now call FORMALISM and REDUCTIONISM – which hold, respectively, that the world is made up of ideal forms, and that anything can be fully, logically explained and understood by breaking it down, or reducing it, into composite parts. This is the birth of “the blind spot.”

Plato/Socrates believed in an abstract “more real” reality, which Nietzsche found to be self-defeating. Nietzsche argued against the concrete existence of abstracts, and urged us to embrace the material world more fully.

Socrates hated the irrationality of music, thought poetry was banal, and said its writers should be banished. He once said that if you translate poetry into prose, you can see poets aren’t saying much. So send them to an island? Ouch.

The main takeaway is they kicked off this “other than” relationship between objective reality and the subjective experience of it. Seen through the left brain/right brain lens of Ian McGilchrist, Socrates and Plato were heavily into left brain thinking, breaking everything down into parts, calculating, machine-like, not embodied, but empirical. 

This grew into the modern philosophical and scientific belief that the primary level of reality is the measurable, objectifiable, quantifiable WHAT of existence, while the ineffable, subjective, and qualitative HOW through which we experience existence is a secondary phenomenon, and to some, an illusion altogether.

THE SCIENCE IS COMING

The book then leaps into the Enlightenment and all the science that kicked off during that time. The establishment of empirical, testable methods helped jump-start massively great things. But it also entrenched the Blind Spot even deeper into the Western world.

A casualty of empiricism is the thinking that primary processes (atoms/cells/forces) are concrete reality and secondary phenomena (feeling/experience) are window dressing. The blind spot really digs in when something called surreptitious substitution happens: when we swap in the empirical world for the world we actually experience.

“In the development of the modern scientific worldview, the abstract and idealized representation of nature in mathematical physics is covertly substituted for the concrete real world, the world we perceive. The perceptual world is demoted to the status of mere subjective appearance, while the universe of mathematical physics is promoted to the status of objective reality. Thus according to this way of thinking, temperature or the average kinetic energy of atoms or molecules is what’s objectively real, but the feelings of hot and cold are mere subjective appearances.” 

Galileo’s frictionless plane, Newton’s absolute time, the Bohr model of the atom with a dense nucleus surrounded by electrons in quantized orbits, and evolutionary-biological models of totally isolated populations – these are idealized representations that came from and exist in the minds of scientists. They are not concrete realities in the natural world we live in.

“The predictive models of physics work mostly inside walls – the walls of a lab, a particle detector, a large thermos, a battery casing. In other words, the models work in places that we can control and shield from outside influences and where we can precisely arrange the conditions to fit the models.”

This is where I struggled a bit. It isn’t that these models of scientific thinking are MADE UP, but that they were created through the subjective understanding and experience of scientists, then replicated in tightly controlled, easily measurable laboratory workshops. In the real world, these things exist in such a dynamic and complicated web of systems that the lab findings rarely match up with life; and rather than cop to that reality, we break off into more abstractions.

“A loaded and unnecessary metaphysical assumption about what the world is like outside the range of our ability to construct and test predictive models. The assumption is that how things behave in tightly controlled and manufactured environments should be our guide to how things behave in uncontrolled and unfabricated settings.”

So, the TL;DR is:

Real: objectivity, planes, atoms, genes, math, statistics
Illusion: subjectivity, experience, reality, life, existence 

This is our modern approach to the world, and it’s messing us up… because it seeks to remove us (the understanders) from understanding.

Here’s where the Blind Spot hits our modern problems and touches on our fervor with consciousness in AI. We are trying to measure consciousness in humans, to then map it onto AI, but we cannot do this without using our consciousness or our subjective experience.

“There is no way to step outside of consciousness and measure it against something else. Everything we investigate, including consciousness and its relation to the brain, resides within the horizon of consciousness.”

THE ANSWER

The way forward is to accept that, through an over-reliance on science and measurability, we have been substituting our maps of our experiences of the world for the world itself. We have to develop a worldview that sees life as a huge, dynamic, non-linear, insanely complex network that can be reduced to parts to understand functions, even though these small patches of clarity cannot be bubbled up to explain the entire web.

“We’re authors of the scientific narrative, and we’re characters within it. As authors, we create science. As characters in the narrative we’re a miniscule part of the immense cosmos. This is how we must portray ourselves as creators of the scientific narrative. There is no way to take ourselves out of the story and tell it from a God’s-eye perspective. Instead of saying that science is a means for rising above the great, strange mystery of being human, a better story is that science takes us deeper into that mystery, revealing new ways to experience it, delight in it, and most of all value it. By leaving behind the Blind Spot, we can properly understand the crucial importance of objectivity as a means for public knowledge without transforming it into a dubious ontology.”

PICK UP THE BOOK – https://a.co/d/6edWUje