Jeff Turner

My Thoughts Exactly

Man Cannot Will What He Wills

January 21, 2026 By Jeff Turner

This quote, “A man can do as he wills, but he cannot will what he wills,” came to my attention through the Netflix series “Dark.” For the record, it’s one of the best TV series I’ve seen in a long time and was recommended by my son, Isaiah. It’s mind-bending.

And although I don’t remember ever seeing the quote before, it’s actually a famous quote by the German philosopher Arthur Schopenhauer. It is a summary of the argument he makes in his essay On the Freedom of the Will (1839) and offers a deep dive into the extent of our control over our own lives.

Schopenhauer’s essay is not light reading, but it’s also not long. The essay itself is only nine pages. You can find this argument discussed specifically on pages 5 and 6, or pages 13 and 14 of the PDF linked in the paragraph above.

In plain language, “A man can do as he wills, but he cannot will what he wills” means: You are free to act on your desires, but you aren’t free to choose what those desires are. That quote has shifted the focus of my personal exploration of artificial intelligence and its impact on human behavior. My conversations with AI are part of this new exploration.

Where “Cannot Will What He Wills” Fits Into The AI Conversation

Below is a slide I’ve placed as a trigger for this portion of a presentation at the Texas REALTORS® Winter Meeting in Austin, TX. I, of course, won’t be able to unpack it fully within the larger presentation theme, so I want to do that a bit here.


Man Can Do What He Wills, But He Cannot Will What He Wills

AI is moving from Output to Input

In the “Old Systems” model, we provide the will (the input), and the computer provides the result (the output). You decide you want the computer (old system) to do something, you give it a command, and it speeds off to do it. The shift from output to input has been happening since algorithms began making music suggestions, deciding which tweets we would see, and organizing search results. Generative AI ups the ante in this game.

Moving to input means AI is now providing the “motive.”

  • Schopenhauer’s View: He argues that nothing you want happens by magic or “just because.” Every time you feel a “want” or make a choice, there is a specific reason—a “ground”—that triggered it.
  • The AI Shift: If AI generates the recommendation, the notification, or the personalized feed, it is providing the “sufficient ground” that triggers your next act of will. The AI has moved from the end of the process to the very beginning.

Sitting “Upstream” of Desire

Schopenhauer’s core point is that while we can do what we want, we don’t choose what we want. Our desires “just happen” to us as “affections of the will.”

  • Where the Will Forms: Desires form when an internal character meets an external motive. By sitting “upstream,” AI can now control the “external motive” side of that equation. As a prediction engine, it can predict how to speak to this internal character.
  • The Illusion of Freedom: Schopenhauer notes that we feel free because we aren’t physically chained. However, if an AI perfectly predicts and presents a motive that it knows your “character” cannot resist, are you choosing, or are you simply reacting to a necessity? This becomes an even bigger issue when the AI can talk to you and build a “relationship.”

“Shape Desires” vs. “Respond to Desires”

  • Responding (Downstream): The old system is reactive. It waits for your free choice to manifest as an instruction. It then acts on your instruction.
  • Shaping (Upstream): This is predictive and causal. Because, as Schopenhauer argues, “all grounds are compelling,” if an AI provides a motive (a ground) that is perfectly tuned to your psychology, it can create a “necessity” for you to want it. It helps manifest your choice.

Upstream vs Downstream: A Vending Machine Analogy

I have been struggling to come up with a good analogy for this “Upstream vs Downstream” impact of AI. And ChatGPT didn’t help as much as I’d hoped. It’s usually quite good with analogies, I’ve found. But in this case, it kept coming up with analogies centered around more modern technology that didn’t really illustrate this the way I wanted.

Then, at the gym this morning, listening to Your Undivided Attention, something they said (I have no idea what) made me think of a vending machine. So, this may not be a perfect analogy, but I trust it will do the trick for illustrating upstream vs downstream.

Old System

Imagine you walk up to a vending machine. This is an “old system.” It is filled with a limited number of snacks. You look at the snacks and realize there are no Peanut M&Ms, which is what you really want, so you choose based on what is available and appeals to you in that moment. The vending machine obeys your command and spits out the Cool Ranch Doritos you desire.

You acted on your will without any real interference or input from the vending machine, and the machine carried it out. It was downstream of your will.

AI System

Now imagine, instead, a generative AI-powered vending machine that has access to all of the snacks known to man. It can’t present all of those to you; it would be overwhelming. So, instead, you interact with the AI-powered vending machine. You say, “I’m hungry, and I really want some Peanut M&Ms.” The AI-powered vending machine might ask you a series of questions and then make a compelling argument that what you really want is something different. You’ve told this vending machine in the past that you want to be healthier. So it steers you away from the Peanut M&Ms and the Cool Ranch Doritos and suggests you get a bag of Smart Popcorn, which you also like. So, you take its advice and choose the popcorn.

In this case, you still acted on your will, but the machine acted as a collaborator in your decision, an influencer. The AI vending machine didn’t just respond to your B5 button press; it helped shape your desire, your will. The machine still carried out your desire, but this time it also played a different role. It was upstream of your will.
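The contrast in the analogy above can be sketched in a few lines of code. This is purely my own toy illustration, not anything from an actual vending system: the function names, the snack names, and the `profile` structure are all invented for the example. The downstream machine only executes the stated choice; the upstream machine intervenes before the choice is final, substituting a motive based on what it knows about you.

```python
# Toy sketch of the analogy. All names here are hypothetical.

def downstream_vending(choice: str, stock: list[str]) -> str:
    """Old system: sits downstream of the will and simply executes the command."""
    return choice if choice in stock else "out of stock"

def upstream_vending(choice: str, profile: dict) -> str:
    """AI system: sits upstream of the will. Using what it knows about the
    user (a stated goal, past conversations), it may supply a new motive
    before the choice is carried out."""
    if profile.get("goal") == "healthier" and choice in profile.get("avoid", []):
        # The machine doesn't refuse the command; it reshapes the desire.
        return profile["suggest"]
    return choice

stock = ["Cool Ranch Doritos", "Smart Popcorn"]
print(downstream_vending("Cool Ranch Doritos", stock))  # executes the will as given

profile = {
    "goal": "healthier",
    "avoid": ["Peanut M&Ms", "Cool Ranch Doritos"],
    "suggest": "Smart Popcorn",
}
print(upstream_vending("Peanut M&Ms", profile))  # the will itself gets redirected
```

Notice that both functions return a snack either way; the difference is only where the decision is shaped, which is the whole point of the upstream/downstream distinction.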


Man Can Do What He Wills, But He Cannot Will What He Wills

This Kind of Influence Is Being Inserted Everywhere

I’m quite certain we’re not prepared for a world where businesses are inserting this kind of influence everywhere. Listen to “Attachment Hacking and the Rise of AI Psychosis” for yourself to get a deeper understanding of why.

Our children are certainly not prepared for it. A recent Pew Research Center study found that 64% of teens have used AI chatbots, and around 30% use them daily. Children & Screens, citing the same Pew study, explained that a fairly notable portion (about one-third) finds discussing serious topics with AI to be as satisfying or more satisfying than discussing them with humans.

That should concern us. Greatly.

And it should inform decisions around where and when we use AI, both personally and professionally.

“Man can do what he wills, but he cannot will what he wills.” And now, AI can certainly influence what he wills, better than he ever imagined.


The “hand-drawn” images shown on this page were created using a combination of ChatGPT Images, Gemini Nano Banana Pro, and Adobe Photoshop. They are all composites of several AI outputs.


Filed Under: Humaneering Tagged With: artificial intelligence, human behavior, philosophy, technology
