How MacGyver Ruined Marketing or The Cheap Genius Theory

How myths behind creativity and the resourceful genius are undermining long-term marketing strategy

“Time and budget are tight for this project. You’re creative, you’ll think of something. What we’re looking for, Jake, is ‘cheap genius.’” – My Life

As a creative person, my ability to think fast and make connections has helped advance my career and propelled me from gig to gig. Along the way I’ve been haunted by the spectre of MacGyver, the resource-strapped (read: alluringly cheap) genius of late-80s American TV, because I think he personifies a few things wrong with marketing and business.

Why?

Think about this…

  1. Of the 500 companies on the first Fortune 500 list in 1955, only 53 still held a place on the list in 2018 (an attrition rate of roughly 89%)

  2. A business culture obsessed with risk and cost management, rife with rampant short-termism and shortening CMO lifecycles

  3. The crumbling foundations of ‘expertise’ and break-up of industrial knowledge silos

  4. A ‘gig’ economy filled with entrepreneurial DIY-life-coaches-gurus-hackers-AI-agents

  5. The belief that the next big platform or IPO or genius idea will come, like it always does, from a random, scrappy teenager’s garage;

  6. and MacGyver….

Mix it all together and it’s plain to see: the belief that MacGyver-like-business-saving genius is cheap, widely available, and flourishes under to-the-wire deadlines is a bad brew for marketing and business to be sipping on.

I’ll first explain who MacGyver is, unpack what I’m calling The Cheap Genius Theory, and then we’ll explore ways to define and disrupt this damaging trend.

Who is MacGyver?

A bent paper clip can defuse a ballistic missile. A potato and some cigarettes are all you need to thwart a high-tech prison’s security system. Chewing gum alone can defeat an entire militia.

These aren’t just thought-exercises; these seemingly implausible scenarios all played out on the late-80s TV action/drama MacGyver.

There are MacGyver fansites dedicated to celebrating the genius of the hero, featuring full breakdowns of every problem he applied his time-strapped, cost-effective MacGyver-ness to, across all seven seasons on ABC from ’85 to ’92.

While MacGyver seems to portray creativity very favorably, it’s my belief that he perfectly personifies the perceptual problems around what creativity and “cheap genius” are, where they come from, what resources they need, what they’re worth, and how creative ideas can best be applied in business and life.

The Cheap Genius Theory (CGT) explains why business leaders approach problem solving, creativity, advertising, and marketing the way they do – which is hoping MacGyver shows up, or worse, thinking they’ll pull a MacGyver and cut the right wire once the countdown on the bomb begins.

There are five concepts to The Cheap Genius Theory….

HUMANS ARE CHEAP

Although a premium option exists for almost every good or service (the best house/car/President, the best advice, the best hummus!), humans will almost always choose the least-crappy, less-likely-to-fail option. Not the worst, but the least worst.

It’s called ‘satisficing,’ a concept the economist Herbert Simon introduced, and it’s a well-documented part of human behavior, grounded in strong empirical data, and one of the best explanations around for why people make decisions that don’t make sense in the long run.

Sure we could defuse the bomb a traditional way, but let’s try this paperclip first.
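For the technically inclined, Simon’s decision rule is easy to see in code. A minimal sketch (the vendor names, qualities, and the 0.6 “good enough” threshold are all invented for illustration):

```python
import random

def choose_satisficing(options, threshold):
    """Simon's 'satisficing': take the first option that clears the bar,
    rather than searching for the global best."""
    for opt in options:
        if opt["quality"] >= threshold:
            return opt
    # Nothing clears the bar: fall back to the least-bad option.
    return max(options, key=lambda o: o["quality"])

def choose_maximizing(options):
    """Exhaustively compare every option and pick the true best."""
    return max(options, key=lambda o: o["quality"])

random.seed(0)
options = [{"name": f"vendor_{i}", "quality": random.random()} for i in range(10)]

print(choose_satisficing(options, threshold=0.6)["name"])
print(choose_maximizing(options)["name"])
```

The satisficer stops at the first “good enough” paperclip; the maximizer pays the full search cost. Most of us, most of the time, are the first function.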

GIVEN RESOURCES, “EVERYONE IS CREATIVE”

With so many brainstorms and Post-It Notes and examples of startups with humble-and-hooded-sweatshirted beginnings, genius ideas spawned from simple creative thinking seem to be everywhere and cheaply available.

There is no shortage of professional articles outlining how managers just need to unleash their team’s creativity to solve their issues.

If creativity is widely available, that changes the way creativity is incentivized or incorporated into strategic thinking.

If we solved the last emergent problem cheaply, in a tight timeline, with a potato, then why budget in experts, time, or resources for this emergency?

If MacGyver can’t disarm the bomb in time, then after this next brainstorm, I bet Glen from Accounting can pull it off.

CREATIVITY LOVES SPONTANEITY

The myth that creativity is bolt-from-the-blue stuff, always arising spontaneously, is pernicious outside the creative community.

There’s a belief out there, no thanks to MacGyver, that creativity is best catalyzed by time and resource constraints, and that it’s only when your cognition is pushed to the wire that the explosively successful results take place.

Nope.

The true skill to develop in creativity is not time-constrained improv, but strengthening the mental muscles that connect threads between disparate channels of thought.

The results of creativity may be experienced, and sharpened, most thrillingly at the drop of a hat, but the skill that connects creative conclusions takes a long time to strengthen before ideation at a rapid pace can take place.

So training for creativity isn’t about developing quicker reaction times, but rather about increasing mental flexibility: making far-flung connections between wide swaths of human experience, accrued knowledge, and cultural/social consciousness, and expressing it all through the chosen medium.

Just because someone HAS defused a bomb with a shoe in under 30 seconds doesn’t mean that’s the most effective way to train for defusing a bomb.

CREATIVITY ISN’T ACTION OR OUTPUT – IT’S CONNECTION

Creativity is an observation made in the minds of those who connect a creative action to genius, not in the action itself.

Creativity is judged not by the act but by the audience, the norms it upsets, the expectations it disrupts – cheap genius is only good when someone is there to see it as genius; otherwise it’s just cheap.

Or worse.

Enthusiasts of the Cheap Genius Theory wrongly think the purpose of creativity is solely to manifest actionable ideas, missing the point that the true measure of a creative idea lies in its interpretation and acceptance by an audience, not just in the idea itself.

REAL CREATIVITY IS SOMETHING WE’VE NEVER SEEN BEFORE

Creativity is not a groundbreaking shattering of molds, but the art of combining recognizable molds in unexpected ways.

Something that had never been seen or experienced before would not strike a familiar chord in our souls, and so it would just seem chaotic or out-there. You’ve heard Coltrane’s Sun Ship, you know what I’m talking about. If you’re not down, it’s a tough hang.

The Cheap Genius Theory highlights the skillful usage of a paper clip to defuse a missile, overshadowing the skill in creativity that actually deserves the praise: a deep understanding of pre-existing concepts (in this case metallurgy and electricity) and the ability to quickly combine and apply them in novel ways and situations.

Ingenuity and spontaneous invention are only possible on the shoulders, brains, backs, thoughts, legends, laws, and expectations of the rules that have come before.

You can defuse the bomb with gum and its wrapper because you know about microchips, friction, the chemical properties of saliva, and the electrical conductivity of metallic substances. Without Galvani, Lavoisier, Curie, Jack Kilby, Wrigley, and the ancient Aztecs who found chicle, none of that cheap genius would be accessible.

Why this is bad, and what to do about it

Since creative thought is assumed to be widely available, potentially cheap, and the product of chaotic spontaneity, businesses don’t plan, budget, or schedule for it, let alone reserve creativity a seat at the strategic table.

That’s bad.

Along with cheapening the importance of creativity and devaluing its place in business development plans, The Cheap Genius Theory’s most destructive influence is on marketing strategy.

Since disruption threatens every established business model, stakeholders across the world run their businesses knowing that companies don’t last as long as they used to.

But rather than strategically approaching changes to their business model, or solving business problems with creativity on the front end, CEOs are relying on MacGyvers to save the business as is, cutting costs where they can, and focusing their marketing campaigns on higher conversions within shorter observational windows.

Whether it’s Byron Sharp, Les Binet, Mark Ritson, Dr. Grace Kite, Rory Sutherland, or any of the other great minds in marketing today, the keenest people in the room agree, there is a rampant disease of short-termism with drastic side-effects on strategic, creative, long-term thinking in businesses today.

I think there needs to be a perceptual shift in the way we view creativity, and it starts by admitting the grip The Cheap Genius Theory has on us, and realizing our business development strategies are not strategies at all, but rather a string of implausible MacGyver-like fixes.

We have to admit CGT throws off our sense of how creative thought is best curated, generated, and applied to researching business problems. And we have to change the way we apply creative thinking to the research and diagnosis of solutions that aim to fix the business problems our companies face over time in a competitive marketplace.

Oh, that’s marketing.

And then, once we understand creativity, I think we need to dial it up!

After brand size, creativity is one of the most important factors in effective marketing and advertising. But because of The Cheap Genius Theory…

creativity is paradoxically the first thing everyone relies on to solve a marketing issue, but the last thing anyone plans on paying for.

Rather than relying on more MacGyvers to show up, I’d like to see creativity given its proper respect, timeframe, and proving grounds to demonstrate its ability to guide business development strategy. Businesses should curate a place of deep thought and research, and develop the atmosphere of a mental gym that strengthens the connective and creative muscles of their teams.

Without exercising both the fast- and slow-twitch muscles of creativity – research and execution – continual cheap-genius fixes, no matter how ingenious, will yield ever-diminishing returns.

No one is arguing that resourceful creativity isn’t important to business development, but rather than utilizing creativity to strategically adapt our business models and marketing plans, we’re praising/seeking/utilizing versions of ‘genius’ that imprison us, and keep us mucking about with the same type of short-term fixes and cost-effective disarming methods, for a bomb that’s killed 89% of the last MacGyver-dependent businesses.

The Casino Layer: On AI, Off-Track Betting, and Behavioral Finance

Inside the casino logic of modern technology—where engagement is currency, risk is retail, and the house always wins.

As I count up the tokens, big-budget players, chips, and horse-race-like coverage of the ruts and rallies in AI, it’s easy to see that the structure of AI deployment today resembles a familiar industry. An industry that has risen prolifically (140-fold*) in the last 30 years. Not software, not research. But gambling.

From Gambling to Prompts

I won’t belabor the point here, but online gambling and sportsbooks have taken over since the late 90s, growing revenue from essentially nothing to hundreds of billions of dollars, and along the way, I think the stake mentality, chancy gamification, and card-counting/sharping have become the way we approach everything in the US.

So with AI.

Interaction with AI models mirrors the behavioral logic of casinos. The user places a prompt—a bet—and waits for a return that may or may not meet expectations. This cycle loops indefinitely. Not because the reward is guaranteed or even expected, but because it’s unpredictable enough to remain compelling.

[image: live picture of me using ChatGPT for deep research]

Variable Reinforcement, Not Innovation

Slot machines work the same way. So do engagement loops in social platforms, crypto markets, online gambling, and loyalty apps. One of the most potent mechanisms at work in AI is variable reinforcement, not just pure-play technical innovation.

What’s marketed as productivity becomes a schedule of micro-bets. Try a different prompt. Switch the model. Fine-tune it. The goal isn’t output. It’s feedback. The product is attention, monetizably measured in compute cycles.
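That loop is easy to demonstrate. Here’s a toy variable-ratio simulation; the 25% chance of a “great” output per prompt is an invented assumption, not a measured figure:

```python
import random

def pulls_between_rewards(p_reward, trials, rng):
    """Variable-ratio schedule: each pull (prompt) pays off with
    probability p_reward; record how many pulls each payoff took."""
    gaps, since_last = [], 0
    for _ in range(trials):
        since_last += 1
        if rng.random() < p_reward:
            gaps.append(since_last)
            since_last = 0
    return gaps

rng = random.Random(42)
gaps = pulls_between_rewards(p_reward=0.25, trials=10_000, rng=rng)
avg = sum(gaps) / len(gaps)
print(f"average pulls between 'great' outputs: {avg:.2f}")
# The average hovers near 1/0.25 = 4, but any single payoff might take
# 1 pull or 15 — that unpredictability is what keeps the loop compelling.
```

A fixed-ratio schedule (reward every Nth pull) extinguishes behavior quickly once rewards stop; the variable version, the slot machine’s version, is the one that keeps you pulling.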

Underneath the marketing layer of these tools is a casino layer. Everything in the stack is built to reinforce usage: tokens, dashboards, models that suggest other models. Very little of this is about intelligence in the cognitive sense. It’s behavioral finance with synthetic language on top.

The dynamics aren’t limited to the interface. They extend to the way these technologies are developed and distributed.

I have worked different models against one another, worked between them, seeing how they perform and compete.

Open-source model racing like this mirrors off-track betting. The leaderboard is the track. The bettors aren’t hobbyists—they’re me and you, researchers, entrepreneurs, accelerators. Everyone knows in advance that the house always wins, but everyone still participates, hoping for an adjacent reward: compute subsidies, visibility, technical credibility. Another speculative splinter-market to belly up to. New ones show up every day.

This isn’t collaboration. Few people control the house. Everyone else is gambling for status at tables they don’t own, and the house just wants butts in seats.

In parallel, AI startups operate like high-risk portfolios. Valuations are abstracted from use. Tokenomics replace cash flow. Distribution is gated through model access and GPU credits. The user wants something like software, but gets exposed to another volatile and speculative splinter-market.

The Broader Economic Precedent

There are precedents. The U.S. economy has been moving toward this structure for decades.

Risk has become retail. Everyday Americans were taught to think in portfolios—401(k)s, home equity, crypto wallets. Access to prosperity was translated into exposure to upside. When that didn’t work, it became exposure to platforms: TikTok, Amazon FBA, Notion templates, plug-ins for GPT. Everyone became a player. With vibe coding, we can now build our own game tables. But again, who owns the house?

The logic is consistent. Don’t build or engage with a stable system. Build one where enough people believe they can win.

In this structure, regulation always appears too early or too late. Skeptics and scientific researchers are equally labeled haters. The conversation centers on adaptation, alignment, existential risk—while the real design principle is habit formation.

Every prompt is a coin in the slot. Every tool becomes a habit loop. The systems aren’t built to be useful. They’re built to be used. And they need users….

The Personal Gamble

I battle with this dichotomy, knowing I use (am using) these tools and can see the potential and pitfalls clearly. What part of me is responding in this balanced way? Could I turn my back on this tech? New gamble….same gamble.

No matter the moral stance, walking away from tech like AI and these addictive platforms is structurally difficult. There’s no obvious off-ramp when every platform is designed to maximize retention, and every business is pressured to adopt the latest tools, whatever the use-case, to remain competitive.

Given that we have many micro-casinos and speculative splinter-markets popping up daily, hourly, I imagine the most important thing to consider is the tools we use, use us in return. We are inescapably shaped by our tools and technology, so before you pull the handle, place the next bet, or put your quarter up, stop thinking about the game or if you feel lucky, and consider the dopamine dealer and the invisible hand you might be playing into.

  • ChatGPT was used to estimate the rise in gambling revenue at the top

SYNTHESIS POST-SCRIPT

Some of you out there might be saying, “Jake, AI is not like a slot machine,” and I think there is some truth there, to a point, with technical caveats. I’d also say the comparison can be misleading and, if you don’t know what you’re looking at, could breed uncertainty. So check out how I used AI to bridge these two arguments.

Can a tool be called “responsive” if it generates with uncertainty, and often misleads?

Here’s a side-by-side view to tease out both dimensions:


🎰 Argument A: LLMs are not truly responsive—they simulate responsiveness through statistical mimicry

  • Core claim: LLMs are essentially high-dimensional auto-completion machines trained to predict plausible text, not to understand or validate truth.

  • Failure modes: They hallucinate citations, invent plausible-sounding but false facts, and struggle with recursion or source integrity.

  • Interpretation: What looks like “responsiveness” is often a mirage—what you’re getting is the most likely next token, not an informed reply.

  • Analogy: A fortune-teller using pattern recognition, not a dialogue partner with grounded knowledge.

  • Ethical concern: This illusion of responsiveness can deceive users into over-trusting tools not built for epistemic robustness.

🛠️ Argument B: LLMs can be responsive—if guided, verified, and understood as tools within a socio-technical system

  • Core claim: Responsiveness isn’t binary; it’s relational. LLMs can support structured workflows where verification, constraints, and retrieval tools anchor their outputs.

  • Example: When paired with retrieval-augmented generation (RAG), citations can be real; with human-in-the-loop use, hallucinations can be caught.

  • Interpretation: LLMs are like prosthetic cognition—flawed but improvable, and capable of useful, responsive engagement in the right contexts.

  • Analogy: Like a creative research assistant—fast, generative, sometimes wrong, but often insightful when paired with editorial judgment.

🧬 Synthesis Prompt:

What if responsiveness isn’t inherent to the model, but emergent in the system it’s part of?

Think of it like this: A violin is not “musical” by itself. It becomes expressive only in the hands of a skilled player who knows how to tune, constrain, and direct it.

Similarly, LLM responsiveness may depend on:

  • Tool architecture (e.g. grounded retrieval, sandboxed generation)

  • User behavior (e.g. skepticism, verification, prompting skill)

  • Context of use (e.g. exploratory writing vs. legal advice)

Written with “From Sides to Synthesis” – https://chatgpt.com/share/683c94d8-1f64-8006-94a6-012cef52a3c0

A GPT Eulogy for Grandpa Bill

A beautiful swirl of poetry, emotion, memory, and mystery….AI miss you pappy!

If I can bacon, then I can egg. That’s what Granddad Bill might say—not to make sense, but to make a point. A point about how life doesn’t always follow recipes, and that’s okay. He’d grin at the nonsense, the whimsy, the way words could play dress-up and still tell the truth.

Because truth wasn’t always in the pudding with Bill—it was in castles. Castles of memory, of imagination, of stubborn dignity. He built them from bark and charts and salad barges, if you can believe it. And if you knew him, you can.

He wasn’t part of a family. He was a family. The kind that doesn’t just sit in a photo frame, but the kind that holds—holds your hand through hard times, holds space when you need to cry, holds back judgment and gives forward grace.

He wore his years like a lanyard—useful, worn, a little frayed around the edges but always dependable. He was a beginner at many things, and proud of it. Planting, tinkering, asking questions that didn’t need answers. Potatoes were sacred. Chinstraps optional.

He believed in arches, not walls. He farmed charts, but not the kind you find in offices. These were maps of the heart. Coordinates of kindness. Graphs of forgiveness. Plotted with ocean-clean lettuces and late-night talks.

If you knew Bill, you knew a man who tree’d before he lunched—not out of logic, but out of love. Because he knew that what grows matters more than what’s served. That patience feeds more than food ever could.

Today, we remember a grandfather who didn’t just live. He nourished. Who didn’t just tell stories. He became them.

Rest well, Granddad. The castle is yours.

Disclaimer
I need to be honest about how this came together. The original words I wrote were… well, they were more like a poetic word salad. I was overwhelmed, thinking about Granddad Bill, and instead of writing a proper eulogy, I just started typing whatever came to mind—bacon, trees, chinstraps, castles, and salad barges. None of it made sense. It had nothing to do with him.

But then I asked for help, and somehow, those nonsense words got turned into something that did make sense—something that captured the spirit of my grandfather, even if it started in total chaos.

So if this sounds polished or profound, just know it started in confusion, emotion, and love—and someone much better with words helped me find the heart inside the mess.

Thanks for listening. Bill would’ve liked this part best anyway.

Meta-Disclaimer
Also, just so we’re clear: I didn’t write any of that disclaimer either. Or the eulogy. Or, really, any of this. I gave a bunch of random, emotional nonsense to a very patient AI, and it gave me back something that sounded like I had my act together.

So if anything in here moved you, made sense, or felt meaningful—credit goes there, not here. I’m just the guy who had the feelings but couldn’t quite get them into sentences.

But maybe that’s okay. Maybe that’s what Granddad Bill would’ve understood better than anyone.

Final Disclaimer (Probably)
At this point, I feel compelled to disclaim the disclaimers. I didn’t write the words that explained the words I didn’t write. I didn’t even plan to disclaim anything until I realized I hadn’t really said anything at all.

So if you’re wondering what part of this is “me,” the answer is: mostly the confusion. And maybe the courage to admit that sometimes, showing up with a jumbled heart and a half-baked idea is all you’ve got—and that’s still something.

Everything else? Borrowed eloquence. Sourced sincerity. Ghostwritten grief.

Thanks for coming with me anyway.

Real Disclaimer: I don’t have a grandfather Bill.

While We Argue About Art, The Muse Is Being Metered

They want us fighting and divided and loud, while they sell tokens and keys to a cloud

Every week our feeds fill with people brawling over whether Suno or Udio can “really” sing the blues. The latest Google AI video tool is being shown widely online, and it’s getting pretty good.

(you can find me opining on that video over on LinkedIn)

It’s fun theater – heck, I often argue and participate in this dramatic debate myself – but as I was about to launch into another passionate online tirade about what it means to be an artist, I started to think that maybe the decibels in this shouting match hide a quieter story: while we argue about authenticity, a handful of firms are locking down the infrastructure, data, and laws that are making them massive money and allow them to meter culture itself.

Four silent land‑grabs happening right now

  1. Compute Monopoly
    Nvidia’s data‑center revenue hit $22.6 billion, up 427% year‑over‑year – an explosion powered almost entirely by AI training demand (NVIDIA Newsroom).


    Argue about what’s on the car radio all you want; whoever owns the GPU clusters owns the toll road.

  2. Token Tolls
    Large‑model APIs are sold by the syllable. OpenAI’s flagship “reasoning” tier charges $10 per million input tokens and $40 per million output tokens (OpenAI).

    The more hype and hope, the more haters and “I’ll show them” types, the more prompts, the richer the meter, no matter the chatter.

  3. IP Accumulation
    While musicians debate ethics, major labels are suing—and simultaneously cutting side deals with—AI startups for mass infringement (Reuters).

    Catalogs are being hoovered into private vaults or training sets at fire‑sale prices.

  4. Rule‑Book Capture
    A draft U.S. bill would freeze state‑level AI regulation for ten years, a gift to companies that already set the technical standards (Politico).

    No guardrails, no problem—if you’re the one selling the steering wheel.
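The token-toll arithmetic in item 2 is worth making concrete. A minimal cost sketch, assuming the per-million rates quoted above; the prompt and reply sizes are invented for illustration:

```python
def api_cost_usd(input_tokens, output_tokens, in_rate=10.0, out_rate=40.0):
    """Cost of one API call at per-million-token rates
    (defaults match the $10/M input, $40/M output tier quoted above)."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A hypothetical 2,000-token prompt with an 800-token reply:
print(f"${api_cost_usd(2_000, 800):.3f}")   # → $0.052
```

A nickel a spin sounds harmless, which is exactly the point: multiply it by millions of prompts a day and the meter, not the model, is the business.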

Why the shouting is useful to them

  • Attention solvent – Culture‑war headlines keep journalists busy, regulators sleepy.

  • Free R&D – Every prompt and playlist becomes unlabeled training data.

  • Demand inflation – “Proving” AI’s creative prowess justifies the next GPU order.

  • Missed coalitions – Artists and users stay fragmented, so bargaining power never aggregates.

A recent $10 million streaming‑fraud indictment—thousands of AI‑generated tracks, zero real fans—shows how quickly this playbook pays off (WIRED).

How to step out of the distraction loop

My thought is this: I am passionate and emotionally charged on this topic, and that is exactly the WRONG place to come from when thinking about ways to confront it. When I posited what could be done, I realized I hadn’t done any research on action steps to untangle this new nexus of cloud selling and cloud yelling.

First off, I think we should look squarely at where all the money is being made, and HOW it is being made, and that is by selling data and access and compute and cloud, not fighting for art or culture or even stealing art and culture – they don’t care. So we have to. But how?

What follows is an o3 GPT model response to my question. When it spit this out I realized I have no familiarity in this space, so below I’m also including organizations where you can take these suggestions and actually do something.

Follow the money – Track royalty flows and publish compute mark‑ups next to subscription fees.

Make licences revocable – Time‑bound dataset deals that dissolve if audits fail.

Tax the choke‑points – Per‑token or per‑kilowatt levies to fund independent art and open models.

Co‑own the stack – Artist‑run GPU co‑ops or collective bargaining for cloud credits.

Shift the story – Move the question from “Is it art?” to “Who gets paid, who gets policed, and who sets the throttle?”

Authenticity arguments are emotionally real, but economically they’re a smokescreen. The decisive struggle isn’t over what counts as “real music”; it’s over who controls the chips, the tokens, the catalogs, and the laws that govern them. Keep enjoying the debate—but keep one eye on the ledger where the actual chorus is being written.

Where to Plug In: 10+ Groups Turning AI Culture-Wars into Action

Below is a starter map. None of these orgs is a silver bullet, but each tackles a slice of the problem so you don’t have to start from zero.

1. Follow the Money — Royalty & Compute Transparency

  • Future of Music Coalition (FMC) – digs into payment data and lobbies for musician‑first copyright reform 

  • Mechanical Licensing Collective (MLC) – runs the U.S. song‑royalty database and now publishes unmatched payouts, letting artists see where streaming pennies hide 

  • Electronic Frontier Foundation (EFF) – litigates for disclosure of how copyrighted works and user data train Gen‑AI, pressing agencies and courts for full accounting (eff.org)

2. Make the Licence Fight Back

  • Spawning.ai – offers the Do Not Train registry that artists can embed in files; models respecting it must drop your work on request 

  • Authors Guild – spearheads class‑action lawsuits demanding consent‑based, time‑bound licences for text corpora (same playbook musicians could copy) (authorsguild.org)

  • Writers Guild of America (WGA) – negotiated the first Hollywood contract reserving writers’ right to sue studios if scripts feed AIs without permission (cdt.org)

3. Tax or Regulate the Choke‑Points

  • AI Now Institute – publishes policy toolkits on breaking cloud compute monopolies with antitrust, export‑control, and public‑interest tariffs 

  • Public Knowledge – Washington advocacy shop linking AI hype to old‑school antitrust; pushes Congress for utility‑style oversight of cloud giants (publicknowledge.org)

  • Tucker United (WV) – grassroots coalition fighting a 1 GW gas‑powered data‑center; their playbook blends local environmental law with national media pressure 

4. Co‑Own the Stack

  • Resonate.co‑op – a democratic streaming service where artists, listeners, and workers split governance and revenue 

    https://resonate.coop/

  • GPU.net & similar decentralised GPU networks – pool spare graphics cards into a member‑run cloud, cutting hyperscaler dependence for open‑source model training 

5. Shift the Story — Accountability & Provenance

  • Algorithmic Justice League – blends research, art, and protest to expose bias and rally legislators around community‑led AI audits 

  • Content Authenticity Initiative (CAI) – builds an open standard (C2PA) that watermarks provenance into images, audio, and video so audiences can trace what’s human, what’s machine 


Now what….

  1. Pick a lane—royalty data, licensing, antitrust, local land‑use, or open infrastructure.

  2. Join their mailing list / Slack / Discord. Most welcome skill‑based volunteers (law, comms, dev, design).

  3. Borrow & remix their templates—opt‑out tags, FOIA request letters, city‑council testimony scripts.

  4. Cross‑pollinate—connect the compute‑tax folks with the royalty auditors so money trails and carbon trails become the same story.

The cultural debate stays interesting; the power struggle stays hidden—unless enough of us turn debate into dues‑paying, code‑writing, phone‑dialing work. These orgs give you a door in.

DISCLAIMER:

I used GPT o3 for this research and to tighten the composition – links have all been checked and vetted, words edited; anything I missed or got wrong, let me know!

Agendic AI and The Chocolate Fountain

Quick-fire musings on how information becomes wisdom, and how agendas prevent agency

Information is noisy, widely available and, on ad-supported media/platforms, typically weighted or freighted with meaning or commercial purpose, or intent. Easiest to attach to, acquire, get activated on, and apply, usually wrongly on all accounts. Information without knowledge can’t be wisdom.

Information can be gamed and if so, fools the audience.

Knowledge is information activated within circumstances. Challenging to acquire because it requires discarding bad information, which you might have become attached to. Knowledge without information can’t be wisdom.

Knowledge can be gamed, and if so, fools the players.

Wisdom is experience: knowledge applied to the external world, society, systems. Information passed through experience to externalities. Wisdom is only expressed through time, and can’t be rushed.

Wisdom can be gamed, and if so it fools the game-makers, who feed information of varying quality back into the information pool.

Now…think of a chocolate fountain.

  • Information is the chocolate—plentiful, tempting, but often low-quality or full of junk.

  • Knowledge is the pump and the tiers—giving shape and structure to the flow. But if the chocolate’s full of garbage, the system clogs.

  • Wisdom is knowing what to dip.

What’s this got to do with AI?

In a world where AI increasingly shapes how information flows, we must ask: is it helping us build wisdom, or simply accelerating agendas?

My thesis is that the agentic future of AI is, in truth, a future of AI-empowered agendas—most likely shaped and directed by the dominant model owner.

An agenda is a list of things to be done—a predefined course of action. Agency, by contrast, is the capacity to exert influence, to choose freely, to think and act with autonomy.

When agendas multiply, agency diminishes. If everyone holds an agenda, no one holds agency. As agendas rise, they begin to override the open flow of information, knowledge, and wisdom.

Agenda becomes the lock. Agency, the key.

Agency is not the freedom to create more agendas—it’s the freedom to think, feel, and explore beyond them. It is the capacity to encounter the unknown without a predetermined outcome.

And yet, the technology we are being sold as “agentic” is anything but. It does not amplify agency—it industrializes agenda. It gives us more lists, more tasks, more structure. The illusion of control. The acceleration of output.

But if the unchecked proliferation of agendas continues, it ends in only one place: a singular agenda. A dominant directive. And in that world, agency vanishes.

What if what we’re calling agentic AI is, in fact, agendic AI?

The Oracle With A Thousand Faces

Or: The Private Prompt Panopticon

Millions are using the same AI platform, thinking it gives correct answers.
It doesn’t. It gives a different answer every time—every prompt, every second.

It’s an oracle with a thousand faces.


A Private Problem We’re Ignoring

OpenAI has now empowered ChatGPT to keep its memory of all chat logs on – retroactively, all the time, forever – and I’d imagine even if you ask it not to, although opt-out is officially an option. Sure, Sam. 😉

With better personalization as the promised outcome, ChatGPT remembers everything – and it has to, to give you the best customizable experience based on your input, based on your life, based on your perspectives… this is “your AI.” Hooray!

I found this quote on LinkedIn while I was writing this here piece…

“ChatGPT’s memory is the only feature that matters. Everything else is a commodity. After 2.5 years of daily use, my ChatGPT knows how I think. It understands my writing style, my problem-solving patterns, my strengths and blind spots, and every part of my business. At this point, it understands me better than I do.”

Let’s explore how and why persistent memory and highly privatized, personalized prompts might actually be a bad thing.

One Platform, Infinite Realities

Each and every output that a singular individual gets when they prompt in private is going to be based on them, for them.

Said another way, there is no way to empirically or objectively use LLMs to reliably repeat singular and subjective prompt experiences or findings.

The way societies or industries functionally run on information and progress into the future is by operating from a place of shared knowledge and applied wisdom.

And if your dataset of information is not my dataset of information – your chat history isn’t my chat history, your prompt isn’t my prompt, your output isn’t my output – similar to the way your Google search runs on your algorithms and mine runs on mine, your Facebook feed on your algorithms and my TikTok feed on mine, and so on for every platform – then you have a gigantic problem: no centralized pathway to support collective action away from algorithmic control, and no escape from the private prompting panopticon.

The complication with memory-persistent AI is that everyone is using a singular platform thinking that it’s telling them one thing, when these platforms are actually telling millions of people millions of different things every time, every day, every second, every output, every prompt. Again, our personal oracle has a thousand faces.

People aren’t experiencing a monolithic ChatGPT, but it’s definitely perceived culturally, and referred to in the media, as a singular entity.

It’s also, paradoxically, known and accepted that users are interacting with their own instance of ChatGPT. How people square these two ideas, I assume we are all figuring out on the fly.

Point is, if I wanted to repeat the pathways that led to a specific AI output—my prompt history, my subscription plan, my grammar, my legacy interactions—my AI is not going to show me what your AI showed you.


The Reproducibility Mirage

I’m saying this because I have been thwarted by quite a few recent “deep research” projects I ran with GPT, Gemini, and Claude. In one instance where I attempted to repeat a ChatGPT research project on use-cases for AI, I asked THE SAME question to these models, pointed them to the same data, set up the same boundaries, and they came away with three different conclusions.

(Update: recent machine-learning research from Apple points to the limitations of LRMs, so, you know, whatever.)

Recently I was researching local businesses and revenue based on the geographic location of customers, and ChatGPT came back with some really good stats. But when I checked the sources, which also looked legit, it had confidently cited titled papers that have never been written and sent me to 404 links.

Another time I asked it about really big numbers in the trillions, and it rattled off a bunch of amazing stats that I totally bought and was about to share publicly – but then I checked, and it all turned out to be 100% bogus.

I’ve run the SAME prompt at different times of the day and gotten completely different answers.

I don’t think I was being misled; I was being confabulated to, and I in turn confabulated. Without the curiosity to scrutinize outputs, confabulation is the resting state of both LLMs and the casual AI researcher. And with persistent memory in AI, there are striking parallels between theories of what causes confabulation in human brains and what shapes these models’ outputs.

To reliably repeat a singular, subjective prompt experience, you must treat the LLM like a scientific instrument—strip it of memory, random noise, environmental context, and ensure the model is version-locked and stateless.

In ChatGPT’s app (especially with memory on), this kind of repetition is not possible.

With APIs, deterministic settings, and careful logging, you can turn stochastic subjectivity into measurable, reproducible experiments.
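Here is a minimal sketch of what “treating the LLM like a scientific instrument” looks like in practice, assuming an OpenAI-style chat-completions API. The model name, field names, and seed support follow that one vendor’s conventions and are illustrative, not universal; the point is that every knob that introduces drift gets pinned and logged, and no chat memory is carried between calls.

```python
# Version-locked, stateless, logged: the opposite of a memory-on chat app.
import hashlib
import json

def deterministic_request(prompt: str) -> dict:
    """Build a request with every source of drift pinned down."""
    return {
        "model": "gpt-4o-2024-08-06",  # an exact dated snapshot, never a floating alias
        "messages": [{"role": "user", "content": prompt}],  # no prior history, no memory
        "temperature": 0,  # no sampling randomness
        "top_p": 1,
        "seed": 42,  # best-effort determinism where the API supports it
    }

def log_run(request: dict, output: str) -> dict:
    """Record enough to compare runs later: the full request and a hash of the output."""
    return {
        "request": request,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

req = deterministic_request("Summarize the attached dataset in one sentence.")
record = log_run(req, "example output text")
print(json.dumps(record, indent=2))
```

Even with all of this, most providers only promise best-effort determinism on a seed, which is exactly why the logging matters: if two “identical” runs diverge, the logs prove it wasn’t you.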

But how many of us in the gen pop are going to do even a fraction of this work, or even know that we should? We can’t even get off our butts to drive and get a sandwich…


The Seduction of the Tailored Model

What’s most disconcerting to me is that, because of the persistent and pernicious connection already established between AI and its personally groomed, technically seduced user – conjugated in private with each and every promising and passionate prompt – that user will most likely have the hardest time thinking outside of the weights and models and methodologies.

Because they have a special connection to a machine with all the world’s information, and because it in turn has all of their world’s information, and they can hack it a little longer and stronger than you, they’ll spend all their energy, heartache, hard work, money and time massaging prompts and one-shots in the dark of their rooms to find the hidden truth inside—the truth everyone should be able to access, but only they can see.

My thesis is this: persistent internal memory of the user will make the AI’s outputs less robust and less useful to that same user in the external world.

Instead of giving you an exploratory offering of information to a question, since it “knows you” and you accept that knowledge as wisdom, it’s up to you to scrutinize if the outputs have only been offered because it’s what your AI thinks you think it thinks the right answer might be, if the strategies and tactical guidance it suggests will lead to a destination it thinks you think it thinks you’ll know and like, if the articles it shares or studies it makes up are things it thinks you think it thinks you might view favorably…

If AI understands you better than you understand yourself, what should you use to evaluate the outputs it suggests?

I’m just thinking it’s hard enough getting your head out of your own ass—imagine what it’s like trying to get it out of a cloud?

Footnotes on "If I Only Had A Brain"

I could while away the hours1

Conferrin’ with the flowers,2

Consulting with the rain;3

And my head I’d be a scratchin’4

While my thoughts are busy hatchin’5

If I only had a brain.6

I’d unravel ev’ry riddle7

For any individdle8

In trouble or in pain9

With the thoughts that you’ll be thinkin’

You could be another Lincoln10

If you only had a brain.11

Oh, I could tell you why

The ocean’s near the shore12

I could think of things I’d never

Thunk before,13

And then I’d sit and think some more.14

I would not be just a nothin’,15

My head all full of stuffin’,

My heart all full of pain;16

And perhaps I’d deserve you and be

Even worthy of you17

If I only had a brain.18

1

Our incapacity to deal with boredom, or to hold deep focus, is a big problem in our society – try to get bored. Just try…

2

Our understanding of the communication abilities of plants is being rewritten; new research suggests the natural world converses in ways we are just beginning to understand. Intelligence is everywhere.

3

The dialog we used to have with the elements has been mediated/supplanted by technology and interfaces – older cultures are much more attuned to the natural world, not because they’re less technologically advanced, but because it’s what we all used to be like, and we can easily get back there.

4

Overuse of technology has disembodied us from ourselves – we’re literally uploading our entire culture into a cloud, controlled by black boxes we feed with electricity, while our water supplies are diverted from mouths to model context protocol clusters.

5

Giving birth to life has been mentioned as a main driving force behind the male-energy in AI. Techbros want to give birth to something, anything – so they can capture the natural magic of womanhood and own it, finally and forever.

6

Worth noting that early studies suggest cognitive engagement goes down as AI use goes up.

7

We mistake problem-solving smarts for wisdom all the time – having a brain doesn’t mean you can solve a riddle; intelligence is when you see the riddles in the reverberations of everything and you forage for understanding.

8

What good is a brain if you can’t use it to help another person, or to coin a great word like “individdle”?

9

This theme is omnipresent in this song and book. The trouble and pain everyone feels are communal connection points, not singular issues we experience in isolation as a result of our brainless decisions.

10

Again, the theme that brains mean care for the common welfare is stressed, along with status and significance – something we all clamor for but never reach in algorithmically mediated spaces.

11

The scarecrow is now talking to himself from the outside, or is he talking to us?

12

Brains can happily sit with complicated questions, koans, paradoxes, chickens and eggs

13

Novelty and discovery are hallmarks of intelligence

14

Self-reflection isn’t navel gazing, it’s necessary

15

Body dysmorphia/rejection, denialism and nihilism, lack of definition and control, same for the following line – stuffed by whom?

16

More pain – because it’s the strongest feeling, the strongest reminder of proof of life

17

The rejection of the self – we are worthless without a brain

18

The greatest problem we face is the belief that there is an intelligence and a brain anywhere other than the brains and intelligences looking for proof of brains and intelligences – we had the brains the whole time.

The Penny That Lied

How one cent of fake engagement fuels a billion-dollar industry of lies, manipulation, and digital deception.

Author’s Note
This story is inspired by investigative reporting from Eric Schwartzman on bot farm disinformation and social media manipulation, along with insights from advertising veteran Bob Hoffman, who has long argued that digital ad fraud is not just a marketing problem—it’s a societal one. The title comes from a real stat: bot farms often charge just $0.01 per fake engagement—a like, a follow, a comment. One penny. That’s all it takes to deceive an algorithm, distort perception, and destroy trust – the only currency that ever mattered.


A penny wakes up in the dust of a couch cushion.

Shiny. Ordinary. Worthless to most. But not to the world’s most powerful machine: the attention economy.

This penny doesn’t land in a piggy bank or a donation jar. It’s digitized—converted into a microtransaction, sold on a Telegram thread called CloutFuel24/7, and absorbed into a global bot farm. It travels, silently and instantly, into a server rack of smartphones in Azerbaijan. Hundreds of screens glow like ritual candles, each one posing as a person.

The penny now has a job: to pretend.

It pretends to care. It pretends to be curious. It pretends to be outraged.

It “likes” a breaking video about a hospital bombing—before any facts are confirmed. It leaves a comment: “The media is LYING.” It shares the link. A thousand other pennies follow. The post trends. Journalists quote it. Politicians react. A population panics.

The penny moves on.

It drives traffic to a fraudulent supplement site selling “cancer-curing sea moss.” It boosts a meme coin called MoonButt just before it dumps. It clicks through five websites pretending to be news outlets, each one an ad farm in disguise. It reposts an AI-generated meme mocking climate protesters—then flips accounts and defends them. Conflict is currency.

It earns.

🪙 One cent per like. Five cents per follow. Thirty dollars for a smear campaign.

Who’s buying?

Influencers chasing clout. PR firms manufacturing buzz. Political operatives shaping discourse. Nation-states disrupting democracies. Criminal syndicates monetizing confusion. Could be a marketer you know. Could be you.

Behind it all, the algorithms smile. They don’t know it’s a lie. They just know it spreads.

Platforms like Meta, TikTok, and X can’t—or won’t—keep up. Moderation teams are cut. Transparency tools are buried. The system, trained on engagement, does its job. This penny brings engagement. So it is rewarded.

More pennies join.

The penny isn’t ideological. It doesn’t believe in anything. But it helps amplify everything—every outrage, every hoax, every hallucinated truth.

A teenage girl gets doxxed by bots after calling out a celebrity.
A journalist loses her job after a swarm of fake accounts reshapes the story.
A protest in the streets becomes a war online, orchestrated by scripts and faked emotion.

The penny lies.
The world listens.


📉 Behind the penny is an industry.

As Bob Hoffman says, “This isn’t advertising—it’s surveillance.”

Bot farms don’t just sell influence—they rent it. As-a-service. On Fiverr, Telegram, and black market dashboards. They click ads they never see. They visit sites no one reads. They inflate followers for brands that never ask questions.

The ad tech stack is a black box. A single campaign touches 40,000 sites—many fake. By age 13, the average child has 72 million data points scraped and sold, for what!?

Ad fraud pulls in $70–$80 billion a year—more than Coca-Cola. Bigger than Nike. It may be the second-largest criminal enterprise on the planet. And most marketers stay silent to avoid admitting how much money gets wasted.


One day, the penny finds itself somewhere new: in a hand.

A girl clicks a link for a free concert ticket. It’s a bot-boosted scam. Her phone is infected. Her profile reposts propaganda while she sleeps. She loses friends. She doesn’t understand why.

The penny sits in her pocket, nestled next to keys and gum wrappers.
Silent. Unrepentant. It doesn’t care.

It just does its job.

It lies.

And we pick it up, every time.


📚 Sources

  1. Finance Yahoo: “Bot farms invade social media”

  2. Uncensored CMO: “Why online advertising is a scam”

  3. Business of Apps: “Ad fraud to reach $84B in 2023”

  4. VideoWeek: “Ad tech collects 72 million data points on average child by age 13”

The Algorithm Blackout

What a World Without Invisible Influence Could Teach Us About Attention, Trust, and Choice

Imagine a world where social media algorithms suddenly vanished. No invisible sorting. No predictive nudging. No unseen hands rearranging what you see and feel.

Only raw, organic feeds — chronological, popularity-based, and human. Or, maybe that’s what you think social media is anyway…

How We Got Here

Before algorithms dominated social media, platforms like Facebook and Twitter organized content chronologically. The most recent posts appeared first; popularity was visible but peripheral. Discovery was driven by networks of friends, geography, and cultural gravity — not hidden mathematical models seeking to prolong attention.

Today, that landscape is radically different. Modern feeds are precision-designed to sustain engagement by prioritizing:

  • Controversy: Outrage clings longer than agreement

  • Novelty: The new outcompetes the familiar

  • Emotional Intensity: Strong feelings crowd out muted ones

  • Tribal Identity: “Us versus them” narratives take precedence

This architecture does not reward the best ideas. It rewards the ideas that keep you suspended inside the feed.

Conspiracy theories, hyper-partisan news, surreal micro-trends — they did not thrive accidentally. They were mechanically advantaged.
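The mechanical advantage is easy to see in miniature. Here is a toy sketch – not any platform’s real code, with weights and signals invented purely for illustration – of how an engagement-scored feed and a chronological feed rank the same two posts differently:

```python
# Toy feed ranker: engagement scoring vs. chronological ordering.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int    # higher = newer
    outrage: float    # 0..1, predicted emotional intensity (hypothetical signal)
    novelty: float    # 0..1
    in_group: float   # 0..1, tribal-identity match to the viewer

def engagement_score(p: Post) -> float:
    # The weights are made up; the shape is what matters:
    # intensity-driven signals dwarf recency.
    return 3.0 * p.outrage + 2.0 * p.novelty + 2.0 * p.in_group + 0.001 * p.timestamp

posts = [
    Post("neighbor", "Farmers market is back this weekend",
         timestamp=100, outrage=0.05, novelty=0.3, in_group=0.4),
    Post("stranger", "THEY are destroying everything you love!!",
         timestamp=10, outrage=0.95, novelty=0.6, in_group=0.9),
]

chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)
ranked = sorted(posts, key=engagement_score, reverse=True)

print(chronological[0].author)  # → neighbor (the newer, mundane post leads)
print(ranked[0].author)         # → stranger (the inflammatory post leads)
```

The chronological feed surfaces the newer, mundane post; the engagement-scored feed surfaces the older, angrier one. Nobody had to decide outrage was good – the scoring function decided for them.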

A World Without the Algorithm

If the hidden engines were silenced overnight, a profound shift would unfold.

In the short term, social feeds would appear slower, flatter, even mundane. Polarization would ease. Virality would cool. The engineered emotional high would dissipate.

Many brands, media outlets, and influencers, built on the scaffolding of algorithmic leverage, would see traffic collapse.

Longer Reverberations

Over time, deeper cultural transformations would take root:

  • Localism Resurfaces: Friend groups, neighbors, and geographic cultures regain primacy over abstract ideological factions.

  • Creativity Rekindles: Untethered from algorithmic incentives, new forms of expression — strange, slow, sublime — would reemerge.

  • Communities Stabilize: Subcultures would grow without radicalizing feedback loops driving them into extremes.

  • Visible Curation Demanded: Raw feeds, overwhelming by nature, would spark a hunger for honest, transparent curation tools.

The social fabric would slowly stitch itself back together, not through elimination of disagreement, but through reduction of manufactured emotional flooding.

Why It Matters

Research by organizations like the Alaska Humanities Forum highlights how polarization flourishes through mistrust, fear, and tribal loyalty — exactly the emotional fields algorithmic systems plow and harvest.

In a blackout, disagreement would persist. But the machinery that escalates every friction point into existential battle would falter.

Public life would no longer operate in “war mode” by default.

A Closing Reflection

The true antagonist is not algorithms themselves, but their invisibility, their opacity, their silent shaping of experience without consent.

If users could choose their curation engines — transparent, customizable, aligned to their own aims — the benefits of personalization could survive, without the corrosions of manipulation.

Imagine a future where algorithms are not unseen masters but visible instruments. Some designed for calm. Some for exploration. Some for discovery. Some for depth.

The feed would no longer be a trap disguised as a gift. It would be a tool, chosen freely, and wielded with understanding.

In the real world, right now, we could unplug from social media entirely. But then we’d conglomerate in another way, on another platform, and someone would show up with ways to corral us, and we’d be thankful to be in the barn, for a while, until it started burning, like the last ones.

Here are three things everyday people can start doing right now in light of the algorithm blackout thought experiment:

Reclaim Your Feed
Switch your social media settings to chronological where possible. Mute or unfollow accounts that inflame or exhaust you. Curate consciously — not passively — and treat your feed like a garden, not a slot machine.

Pause Before Sharing
Algorithms reward emotional reactivity. Resist the urge to repost things that provoke immediate anger or outrage. Ask: Do I understand this? Is it helping or harming? Slowing down can stop a toxic loop before it spreads.

Seek Out the Unexpected
Algorithms narrow what you see. Break that pattern by following people outside your usual circles — different geographies, politics, cultures. Genuine discovery rarely happens inside a predictive cage. Invite some randomness back in.

The Monks and the Machines

From Scriptoria to Silicon: When Remembering Becomes Rendering

Between the fall of Rome and the rise of the Renaissance, when much of Europe’s cultural infrastructure had collapsed, it was monks—tucked away in dimly lit scriptoria—who became the custodians of civilization.

They copied by hand what the world had nearly forgotten: the logic of Aristotle, the medicine of Galen, the math of al-Khwarizmi, the poetry of Virgil. They preserved pagan texts alongside sacred scripture. They didn’t simply transcribe words—they transferred memory. Without them, much of ancient thought would’ve vanished into silence.

This work—slow, devotional, anonymous—was partly done in service to the Church. But the knowledge outlived its guardians. The books became libraries, then universities, then the Enlightenment. The past, painstakingly resurrected, was handed forward. It belonged to all of us.

Now, another kind of scribe is at work.

Artificial intelligence is ingesting, replicating, and indexing human knowledge at a speed and scale no monk could fathom. Trillions of words—books, blogs, forum posts, sacred texts, codebases, scientific papers—are scraped and modeled. The act of preservation has become a computational process.

But unlike the monks, who brought the past into the present, AI isn’t reviving memory—it’s flattening it.

What monks did with intention, AI does with indifference. The models don’t know what they’re preserving. They don’t distinguish Homer from homepages. They don’t illuminate texts with care—they ingest and abstract, tokenize and compress. They codify culture without culture.

And the more we lean on them, the more we outsource the act of remembering. The more we let our present become prediction.

Instead of being carried out of the dark, we risk wandering back in—this time, not from the absence of knowledge, but from the illusion of having all of it.

Because when a civilization turns its cultural memory into machine training data, it no longer remembers in a human sense. It indexes. It optimizes. And eventually, it forgets how it remembered in the first place.