I began actively thinking about artificial intelligence and its potential role in targeting and exploiting our human vulnerabilities and encrypted social power structures in October 2020. I had only half-consciously nibbled around the edges in the years prior. And as I quite often do, I threw a couple of quotes (see below) into a draft post and quickly jotted down a few personal thoughts on how hidden factors influence how we think, behave, and respond.
One of the thoughts I scribbled back then was this: it’s not just “human nature” or our own experiences that shape how we think and act. It’s our shared culture and history that embed these layers into our behavioral coding.
That Post Was Never Written. I’m Glad.
Sometimes my self-editing is a bonus. Five years later, the picture is clearer. In October 2020, ChatGPT was still two years from hitting our collective consciousness like a nuclear bomb. There were only early transformer examples, like InferKit, that foreshadowed the future. That’s where my head was then. The possibility of a truly human-sounding model seemed much further away than it turned out to be. So this will be a different post than the one I drafted back then. Let’s get started.
Decoding The Human OS, Not Optimizing It
The first quote I put in this five-year-old draft was from Isabel Wilkerson’s book “Caste: The Origins of Our Discontents.” She wrote: “We cannot fully understand the current upheavals, or almost any turning point in American history, without accounting for the human pyramid encrypted into us all.”
Wilkerson’s idea of a “human pyramid encrypted into us all” maps neatly to the Humaneering lens I’m trying to apply today. It starts from the premise that humans aren’t blank slates or rational actors, as much as we’d like to think we are. We run inherited code shaped by culture, power, status, and history. AI doesn’t need to understand us deeply to be effective. It only needs to pattern-match against those embedded hierarchies and incentives.
Once machines can read that code, someone has to decide whether it gets exploited or protected. I believe it needs to be protected. I believe it needs to be “humaneered.”
Hacking Weaknesses Is Cheaper Than Amplifying Strengths.
The second quote was from Tristan Harris, co-founder of the Center for Humane Technology. (I love the work they’re doing, for the record.) Much of what I’m writing echoes their thinking and direction. Tristan said this: “We’ve all been looking out for the moment when AI would overwhelm human strengths, asking when we would get the singularity. When would AI take our jobs? When would it be smarter than humans? We missed this much, much earlier point when technology didn’t overwhelm human strengths, but it undermined human weaknesses.”
His insight is the hinge. Technology didn’t conquer us by outperforming our best qualities. It succeeded by leaning into our worst ones.
Fear spreads because it activates our ancient brain wiring.
Validation replaces truth because flattery works.
Outrage dominates because nuance is complicated.
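Harris’s point doesn’t require malice, only optimization. Here is a toy simulation to make it concrete. Everything in it is invented for illustration: the content categories and their engagement rates are my assumptions, and the epsilon-greedy bandit is just the simplest stand-in for an engagement-maximizing feed. The algorithm never learns what any arm means; it only learns what we click.

```python
import random

random.seed(42)

# Hypothetical engagement (click-through) rates per content type.
# These numbers are invented; the only signal the algorithm ever
# sees is whether a serve produced engagement.
arms = {"nuance": 0.02, "validation": 0.06, "outrage": 0.12}
counts = {a: 0 for a in arms}   # how often each arm was served
wins = {a: 0 for a in arms}     # how often it got engagement
TRIALS = 100_000

def pick(eps=0.1):
    # Epsilon-greedy: explore occasionally, otherwise exploit the arm
    # with the best observed engagement rate so far.
    if random.random() < eps:
        return random.choice(list(arms))
    return max(arms, key=lambda a: wins[a] / counts[a] if counts[a] else 0.0)

for _ in range(TRIALS):
    arm = pick()
    counts[arm] += 1
    wins[arm] += random.random() < arms[arm]

for a in arms:
    print(f"{a:>10}: served {100 * counts[a] / TRIALS:.1f}% of the time")
```

Run it, and “outrage” ends up served the overwhelming majority of the time. No one designed it to prefer outrage; the preference falls out of optimizing a signal that our ancient wiring supplies.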
Humaneering is a countermove. It’s not an argument against using artificial intelligence; quite the contrary. I WANT to use AI, and I absolutely do use it every single day. What I am advocating for is that AI’s design should defend human dignity and agency even when exploitation is easier, faster, and more profitable. We should make sure this is the case, not just hope for it.
We need a framing that leads to understanding and action, one that connects distraction, addiction, and polarization as design choices rather than accidents or faulty training.
And we can actually frame the myriad grievances, scandals, and problems we’ve seen in the tech industry, from distraction to addiction to polarization, bullying, harassment, and the breakdown of truth, as the progressive hacking of more and more human vulnerabilities and weaknesses. And we’ve only just begun to understand how much further this new AI power is about to encroach.
Artificial Intelligence Isn’t Neutral Because We Aren’t Neutral.
If human vulnerabilities are shaped by social hierarchy, culture, and historical power structures, then AI trained on our outputs inherits those features automatically. I don’t see acknowledging this as controversial or “woke.” That seems obvious to me.
So, this is not about making AI nicer. It’s about acknowledging that every system trained on human data becomes a mirror and an amplifier of existing inequalities and power dynamics. Ignoring that is not neutrality. It is an abdication of our responsibility to do better. To be better humans.
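Here is a minimal sketch of that mirror-and-amplifier effect. The scenario is hypothetical and the data entirely synthetic: a toy “hiring” model trained on historical decisions in which one group was held to a higher bar. The model faithfully reproduces the skew it was fed, and a hard decision threshold turns a probability gap into a categorical yes/no.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)      # 0 = historically favored, 1 = not
skill = rng.normal(0.0, 1.0, n)    # identical skill distributions by design

# Historical labels: same skill, but group 1 was held to a higher bar.
# The bias lives here, in the training data, not in the algorithm.
noise = rng.normal(0.0, 0.5, n)
hired = (skill + 0.3 - 0.8 * group + noise) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Probe: two identical candidates who differ only in group membership.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
p = model.predict_proba(probe)[:, 1]
print(f"P(hire | group 0) = {p[0]:.2f}")   # roughly 0.7
print(f"P(hire | group 1) = {p[1]:.2f}")   # roughly 0.2

# A hard 0.5 cutoff amplifies the gap into an absolute yes/no difference.
print("decisions at 0.5 cutoff:", (p >= 0.5).astype(int))
```

The math isn’t prejudiced; the labels are. The model is doing exactly what it was asked to do, which is the point.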
The Escalation From Attention To Intimacy.
Social media hacked our attention. And we all lost. Generative AI hacks meaning, identity, and trust. And I don’t want to see us fail again.
Language models don’t just distract us. They are now inside our sense-making loops. This is where I’m drawing a line, where humaneering draws a line: once technology enters this space, consent, friction, and restraint are no longer optional. They are moral requirements.
What should unsettle us is not that artificial intelligence is getting smarter, but that it is getting personal. AI platforms don’t need to surpass human intelligence to change us. They only need to learn our inherited blind spots well enough to circumvent consent, friction, and self-awareness, just as they did when I edited my son’s perfect photo. I thought I was aware, and I still fell prey.
I have begun almost every artificial intelligence presentation since 2016 with this quote from Pedro Domingos: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” It is more true today than it was in 2015 when he wrote it.
Why Humaneering Exists
Humaneering exists because pretending humans are rational, neutral actors is a lie we keep telling ourselves. When technology is designed without an honest accounting of human weakness, cultural conditioning, and historical power, exploitation is not a side effect. It is the business model. At that point, AI doesn’t merely reflect who we are. It exploits and amplifies the parts of us we are too often afraid to confront.
Humaneering exists because AI doesn’t have to overpower us. It only has to understand our weaknesses and exploit them through the hierarchies we built long before it arrived.

Here’s an example from today’s news: https://www.nj.com/healthfit/2025/12/dr-oz-wants-ai-to-decide-what-procedures-people-need-nj-will-be-a-testing-ground.html

AI in this context isn’t really neutral. It’s being deployed within an insurance and healthcare bureaucracy that already pressures denial and cost cutting. It will be very interesting to see how this plays out.