Universal Basic Conflict

Granting every working-age member of a population a stipend, per month or per year, for the rest of their lives will do little to mitigate the two conditions it’s designed to relieve: human avarice and envy, and a lack of meaningful employment opportunities. No matter how fervently the AI technologists promise that such schemes will deliver some utopian future society, we will never arrive at it.

Jealousy (of people), envy (of things), and avarice (the desire to acquire more) are human emotions that aren’t often acknowledged as the darker motivators for people to engage with work. All of these emotions, along with vanity and pride (all of them grounded in negative storytelling), typically sit at the bottom of many people’s motivations to chase money, status, titles, honor, and respect. And because they’re all lurking in the basement of every human heart, the materialist rationalist utopians among us are inevitably surprised when they manifest as genuine, but irrational, resistance to a Universal Basic Income-driven future.

A fellow named Dostoyevsky, however, put words to what the technologists can’t seem to name, around 160 years ago, in a pre-Industrial Revolution, agrarian, monarchical society:

“Now I ask you: What can be expected of man since he is being endowed with such strange qualities? Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he would have nothing else to do but sleep, eat cakes, and busy himself with the continuation of the species, and even then, out of sheer ingratitude, sheer spite, man will play you a nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element.” ~ Notes From Underground, Part One, Chapter Eight

In our post-modern society, the era of “make-work” is over. And it’s been over for a while. But we have also arrived at the end of the hangover from the Industrial Revolution, so the era of “we financialize human effort in just slightly better ways” is about to be over too, with the dominance of LLMs that can do all that average make-work better and financialize it faster.

This is a real problem because, under the scheme that has built the last 125 years of scalable economic systems, meaningful employment was typically not found at the bottom of the employment ladder, in minimum-wage positions. But now, even those rungs of the ladder are being hewn away. Without addressing both the lack of meaningful work opportunities for people at the beginning of their careers and the built-in human drivers toward accomplishing goals and earning money, all the Universal Basic Income in the world will only exacerbate conflict, providing enough impetus for people to engage in it en masse. Because without work, idle hands will surely “…play you a nasty trick.”

Work provides spiritual, psychological, and emotional meaning for many people. But because those intangible outcomes don’t appear anywhere on a spreadsheet, they are either discounted as meaningless or not considered in the first place. A universal basic income does nothing to address any of those needs, emotions, or drivers in people. As a matter of fact, such schemes spit in the face of human motivators and dare the human being to do something about it. History proves that’s a dare human beings are willing to take, consequences and all. And just declaring “Game on” doesn’t quite do justice to what will surely result from such schemes.

Human truth, and what lurks deep in the dark human heart, is fundamentally what defeats all UBI schemes, whether they come from the State, from businesses, or even from our current crop of techno-utopians, drunk on AI outputs. Such schemes really come down to giving people money in the hope of curing the deep disease of the human heart and the human spirit, without ever performing the uncomfortable surgery of examining–and acknowledging–much deeper and darker motivations. You know. The ones that have always lived deep in the human soul, where even the state and technology cannot reach.

This is a sure recipe for universal basic conflict. And at scale.


Wizards Searching for Backdoors

The wizards, diviners, and soothsayers of the ancient world were invited into royal courts, penned scrolls that held the keys to gnostic knowledge, and sat at the center of their societies. Many of them were hired to educate the elite of their times.

As the Middle Ages closed, and the rationality project at the core of the Enlightenment really took hold of the imagination of the West, the magicians, diviners, and soothsayers were pushed to the edges of the culture (eventually to be joined there by the various religious types, but that’s another post altogether). And–to add insult to injury–their lofty claims to being able to open spiritual backdoors into a mystical world were deemed the irrational ravings of “humbugs,” “scammers,” “con artists,” or even “marketers.” Case in point: recall the Wizard in The Wizard of Oz, eventually exposed as just a flim-flam dude pulling levers who couldn’t even make his hot-air balloon work well enough to get back to Kansas.

In our technological era, though, the atheist rational materialist technologists have won the day. They’ve defeated the natural world, backburnered the spiritual world, and declared, hubristically, that “we will build our own gods” by building backdoors into reality and making epistemological claims without acknowledging–or even realizing–that they’re making those claims in the first place. They continue to pursue the same gnostic path of attaining secret knowledge that their forebears attempted through spiritual means. And of course, they all declare that they’re going to get to the same place as their forebears–for the good of humanity–through manipulating “intelligence on silicon.”

All that lands for me like a whole lot of humbug from a bunch of flim-flam dudes at the center of the post-modern royal court of attention.


Moats and Deltas

In business, particularly in startups, there is an idea that a company must possess a wide enough moat to stave off potential competitors in a crowded marketplace. Often attributed to the investor Warren Buffett, “moat” is a term used to describe a company’s competitive advantage. Just as a moat protects a castle, certain advantages help protect companies from their competitors.

The same thinking can be applied to our work world, where there are two kinds of people using our LLM tools. One type uses LLMs like fortune cookies or Magic 8-Balls. They ask an average question they would have used search to answer four years ago, and instead of getting a collection of search results that they would have had to parse and critically examine, they get an LLM-regurgitated version of a mediocre answer.

These people are coasting along right now, using ChatGPT, Claude, Microsoft Copilot, and on and on, for entertainment, planning their next trip, flooding the attention zone with AI slop, or just lurking about, wondering what to do with these tools next.

The second type of person uses LLMs to examine assumptions and to level up what they are already doing at work. They are interested in prompt engineering and search for context inside the LLM’s answers to queries, and when the answer is average–like an answer from a fortune cookie–they push the tool past its limits. These people know and understand how language, persuasion, sales, negotiation, and conflict work at deeply interpersonal levels between people in the real world, and they employ critical thinking because they have mental and emotional discipline.

They are building a bigger and bigger moat, one prompt at a time, and that moat will eventually transform into a delta; that is, a concrete, measurable variable of change between two states: those who have a large moat, and those who don’t.
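
Purely as an illustration of that delta, here is a minimal sketch in Python, with every number invented for the example rather than drawn from any study:

```python
# A toy model of the moat-to-delta idea. One user treats every prompt
# as deliberate practice and compounds a tiny gain; the other treats
# the LLM like a fortune cookie and stays flat. The "delta" is simply
# the measured gap between the two states at any point in time.

def skill_after(prompts: int, start: float, gain_per_prompt: float) -> float:
    """Compound a per-prompt improvement rate over a number of prompts."""
    return start * (1 + gain_per_prompt) ** prompts

fortune_cookie_user = skill_after(prompts=200, start=100.0, gain_per_prompt=0.0)
deliberate_user = skill_after(prompts=200, start=100.0, gain_per_prompt=0.005)

delta = deliberate_user - fortune_cookie_user
print(f"Delta after 200 prompts: {delta:.0f} points")  # roughly 171
```

Even a half-percent gain per prompt, compounded, nearly triples one user’s measured skill in a couple hundred prompts while the other stands still. The numbers are made up; the compounding is not.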

As the delta between these two groups increases, and one group declines while the other one ascends in the economy of the future we’re building the rails for now, which one of these two groups do you think will have a more defensible moat in their job, their career, their financial situation, and even in their cultural life in America, as we breathlessly outsource more and more of our average outcomes to these new machines we’re building?

And here’s another question while you’re pondering the answer to that one: Which group of people will complain about–or advocate for, if you will–more fairly distributed outcomes as rewards begin to accrue to one group over the other, economically and socially?

My advice to all of you reading: Expand whatever moat you’re working on right now into an ever-growing delta.


Hitting the 'Record' Button

For the first time in a long time, I didn’t hit the ‘record’ button on the videoconferencing software. As a result, the conversation I would have turned into a podcast episode was not recorded.

And yet, the person I was talking to and I both acted like it was being recorded. We behaved as if the cloud were absorbing our thoughts. We watched our words, monitored our tone, made our points, and when we disagreed, did so respectfully.

Did we already have great behavior, or was our tendency to be respectful with each other mediated, informed, and censored by the fact that we believed the interaction was going to be part of posterity?

At least temporarily on the Internet.

The tools that monitor and record us are changing our behavior as much as we are molding the tools to work with us. It’s a symbiotic relationship rather than a dictatorial posture, no matter what the marketing folks who work for the technologists would have you believe.


The Singularity Appears

According to the prognosticators and breathless technophiles, the singularity, a point where artificial intelligence begins improving itself faster than humans can monitor or control it, has apparently arrived.

The singularity hasn’t arrived, of course, in your daily life.

You know, all the places where the “intelligence on silicon” has been around for a while, but its impact is hidden from you directly, like in the navigation apps on your phone, or in the algorithms that show you more of what you click on in a social media feed.

The singularity hasn’t arrived, of course, in your relationships with other people, which remain messy, fraught with conflict, and unpredictable. Nor has it arrived, of course, in the myths you tell yourself and others, that continue to allow you to get up in the morning and go to work.

But, make no mistake: The singularity has arrived.

Ok.

And now that the singularity is here, soon, very soon indeed, “intelligence on silicon” will consume, overwhelm, and subsume “intelligence on carbon.”

Except, of course, carbon-based intelligence has gone pretty far in the last 5,000 years or so. And the people who are interested in a competing intelligence–those prognosticators and breathless technophiles I already mentioned–are usually the same people who devalue, dismiss, and disbelieve in the ongoing symbiosis of intelligence, consciousness, and relationship among and between humans and machines. They aren’t exactly fans of man.

To quote from a recent review in The Nation of the book (…I know, I know…) The AI Paradox by Virginia Dignum: “The more AI can do, the more it highlights the irreplaceable nature of human intelligence.” Dignum, the reviewer notes, writes that AI is good at certain tasks, such as “data analysis, logical reasoning, and linguistic processing,” yet it struggles with others, especially those involving creativity, empathy, “moral and ethical discernment,” the “capacity for complex reasoning,” and the “ability to reason about relationships between concepts.”

Huh. How about that? And Ms. Dignum has been working with “intelligence on silicon” since at least the 1980s.

The singularity is here. Right on time, it appears, to reliably meet its ceiling in the form of the humans who made it.


Fan of Man

Being a fan of human beings has always been a difficult proposition. However, in the era of seemingly instant transmission of information, the speed with which human beings can transmit gossip is without precedent.

Sure, human beings can, and do, transmit good things about each other–praises, kudos, claps, and positivity–but the flood of negative gossip is overwhelming. And the machinery for consuming, observing, and commenting on that flood is designed to be both corrosive and addictive.

Because “if it bleeds, it leads,” and no entertainer, huckster, influencer, grifter, magician, or marketer (but I repeat myself endlessly) ever went broke overestimating the unending human appetite for negative gossip rooted in envy, pride, lust, vanity, covetousness, and jealousy.

This makes it hard (but not impossible) to argue against the materialist reductionist approach to human behavior. It makes it hard to argue that “intelligence on silicon” isn’t a better option. Human beings’ behavior undermines the argument before it even leaves the mouth of the human making it.

But…

Man wasn’t created to be in a relationship with silicon. Man was created to be in relationship with the natural world, and with the other, perpetually messy, people in it. The ceiling that our clean, unmessy, artificial creations will eventually hit is the ceiling of relationships.

That’s a ceiling worth being a fan of man in order to defend.


The Twilight of the Novel

The book, as a technology for transmitting information, ideas, and concepts across time, is probably one of the top five inventions human beings have ever created. Included in that august list would also be indoor plumbing, penicillin, capitalism, and the internal combustion engine. The book–for all of the handwringing about its position as a technological influencer in Western culture right now–is not on its last legs in its current modernist (or post-modernist, if you will) form.

The novel, a variation of the liquid of ideas the container of the book holds, may indeed be on its last legs, however. The reasons for the death of the novel are many, including the following:

1). Human interiority and curiosity about the internal psychology of other people built the novel, and the deconstruction of that curiosity has led to its destruction.

2). The length of a reader’s attention span and an audience’s cultural connection to historical material and social references across time go hand in hand. That chain has been breaking for at least the last thirty years.

3). The supremacy of other, more visually compelling media that convey the primary message of interiority to an audience better–film, TV, and of course, video on the Internet–has combined to beat up the novel.

But remember, even with all this, a novel is just a story placed in the format of a book. From Don Quixote to As I Lay Dying, and from Play It as It Lays to the current list of popular, AI-produced novels featured on Goodreads, the novel has probably gone about as far as it can go within the confines of the medium of the book.

This fact doesn’t mean that stories themselves are dead. Humans have been telling each other stories since the beginning of creation and will continue doing so until creation is wound up. It means that the types of content a book can contain will subtly shift.

Over the next 125 years of the book, the medium won’t die. It’s too resilient for that. But what will happen, I think, is that brave creatives, not trapped by the assumptions of the last 300+ years of Enlightenment novelization and cultural storytelling modes, will take the book itself in completely different directions.

Perhaps even back to the future, to a past–a pre-modern place populated by the works (but not novels) of Homer, Sophocles, Aeschylus, the Old Testament authors, Tacitus, Seneca, and Saint Augustine–where we haven’t been as readers, in the West, in a while.


Social Shame and Embarrassment as the Friction that Develops

The presence of AI tools in businesses and organizations will be used as an excuse to stop developing junior employees because, well, “AI can do it better.”

The first college students to have spent all four years with ChatGPT are graduating this June. They enter a work world where organizations and employers will automatically grant them license to build slop, believe in slop, and advocate for slop arguments. And, to make matters even worse, the work world represents the final iteration of a social and educational world that has validated their every thought and assertion, right or wrong, since they were in kindergarten.

It used to be, up until about five minutes ago, that the social and cultural shame and embarrassment attached to not knowing facts, ideas, or even the underpinnings of facts and ideas were enough to encourage curiosity. Or at least shame and embarrassment prevented the aggressively ignorant from asserting the wrong things at an increasingly loud decibel level.

But such social and cultural guardrails have been seen, for at least two generations, as merely limiting creativity and creative expression. When mid-career and senior leaders leverage those tools now, it is read as delivering undeserved trauma to juniors who are, quite frankly, ignorant. And thus, the use of those tools of shame and embarrassment has eroded quite significantly.

We are arriving quite quickly at a weird cultural and social cul-de-sac in the world of work. One where the junior employees we are seeking to develop confidently assert facts that are based on AI slop, social media algorithmic feedback loops, and an astonishing lack of practical education. And they don’t have the experience, maturity, courage, or competence to spot the slop, fight the algorithm, or get the education.

On the other hand, we have senior and mid-career leaders who can’t be bothered to employ even rudimentary social norming tools in the workplace because the backlash from leveraging those tools isn’t worth the outcomes they never see. Instead, it’s just easier to pay $100 a month for an AI stack that “can do it better.”

There must be a way out of this cul-de-sac. Because if there isn’t, it’s going to be a long next twenty-five years in the work world.


A Tweet Is Not a Vote

Sometimes people on the other side of an ideological, social, moral, or ethical debate have a point. And when they have a point, it’s intellectually principled to acknowledge the point. Though it might be emotionally painful.

Here is the point, once made by a politician, in response to an online activist’s criticism of her political decisions: “A Tweet is not a vote.”

By which she meant: No amount of blogging, tweeting, posting, meme-dropping, or complaining online is a substitute for doing the work of going out to vote, encouraging people door-to-door to vote, or taking people to the polls to vote.

But that work is hard. And just as in so many other areas of the civic, public, and even corporate life of modern Americans, we’d rather perform what is easy than practically do what is hard.

When your ideological, social, moral, or political opponents do the hard, simple, and unglamorous work, they win the power. And all you get in return is the opportunity to complain, tweet, meme-post, or blog more about what they’re not doing right.


Anonymous Verification

The marketer and author Seth Godin made a point years ago, in either a book of his or on his long-running blog, that sets the table for my observations today: “No society ever survived anonymous feedback.”

He was right, of course.

And as our national and global public discourse has declined into tribalism, violence, and polarization, calls for identifying people verifiably as people for the purposes of policing online discourse have increased.

The problem with verifying “humans as humans” and not “humans as bots” is not a problem of smothering “free speech.”

People are free to speak (or write), but they have never been free from the consequences of such speech or written words. That’s why the 1st Amendment in the US Constitution is followed closely by the 2nd Amendment.

The problem with the verification of “humans as humans” for the purposes of making humans behave in their online communication is that humans have been shaped in their behaviors, communication patterns, and appetites by the Internet, as much as the Internet has been shaped by them. Problems with anonymity were just the tip of the iceberg in human communication and behavioral challenges with this new technology.

I am not opposed to human verification to police toxic commentary on the Internet. But I am opposed to verifying humans as humans as a shortcut to the hard work of mitigating behavior that is as much psychological and spiritual as it is material and emotional.

The problem lies not in the Internet, the trolls, or even the bots, dear Internet Commentator, but in ourselves.

And if we want society to survive, neither anonymity nor verification is going to serve well as cudgels to get humans to behave and communicate more humanely.


The Man Who Was Thursday

Smart people in society used to worry that people in classes lower than theirs would become monsters through osmosis by hanging around people who were already monsters.

Other than self-aware parents, I don’t know of any smart person in an elite position of power in our society today who worries too much about that kind of influence anymore. Heck, we applaud people of questionable character and give them attention and trust, simply because they might be able to move an algorithm to “influence” some audience member’s behavior.

The lack of worry–and the presence of social applause–might be part of the reason that it appears as though there are more monsters, with larger microphones, around as of late.


The Water vs. The Rock

Getting rid of distractions is the easy part.

Delete the app on your phone.

Close the door to your office.

“Mute” the notifications on your phone.

Stop answering emails.

Not one of those things is hard. What is hard is committing to not reinstalling the app, opening the door to your office, “unmuting” the notifications on your phone, or answering the emails.

Commitment takes willpower, and the modern world is designed to drain us of that thing. The thing that neuroscientists can’t find in the brain, and that psychologists say doesn’t exist in the mind, but which every algorithm relies on wearing down, one interruption, one notification, one dopamine-driven impulse at a time.

Drip.

Drip.

Drip.

The rock stops getting eroded by the water first by being moved away from the water source, and then by building up a tougher, thicker layer of sediment.


Messing With The Clocks

The original reasons for instituting a time change with the clocks may have been stated as helping farmers and agricultural producers get more done. However, in the United States, we have passed through the practical reasons for time changes and are now into the more insidious ones. Controlling a population is about more than launching marketing or propaganda efforts to change minds. It’s also about changing the behaviors people engage in.


30-Year Technology Adoption Cycles

The Model T was in production for 19 years.

Sales of color TV sets took 15 to 20 years to surpass sales of black-and-white TV sets, which themselves took 10 to 15 years to move from luxury product to home staple.

Outdoor sanitation (that’s toilets, kids) was still a thing in many rural areas into the 1980s. In the United States.

Full-scale Internet adoption has taken 15 to 20 years and still isn’t complete in many places.

LLM adoption, and the adoption of the outputs from LLMs (even the ones we goggle at right now), will take 15 to 30 years to become culturally complete.
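
For what it’s worth, every adoption story above traces the same logistic S-curve: a slow crawl, a fast middle, and a long tail to saturation. Here is a minimal sketch, with a midpoint and steepness I have assumed purely for illustration, not fitted to any real adoption data:

```python
import math

# A toy logistic (S-curve) adoption model. The midpoint and steepness
# below are invented assumptions for illustration only.

def adopted_fraction(year: float, midpoint: float, steepness: float) -> float:
    """Fraction of the population that has adopted by a given year."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

# Assume LLM adoption began around 2022 and saturates roughly 30 years later.
for year in range(2022, 2053, 5):
    pct = 100 * adopted_fraction(year, midpoint=2037, steepness=0.35)
    print(f"{year}: {pct:5.1f}% adopted")
```

Run it and the curve crawls for the first decade, races through the middle, and crawls again at the end, which is why 15 to 30 years is the honest range.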

AI Doomers and AI Accelerationists alike need to slow their roll, hold their horses, and wait for the bubble generated by a massively overleveraged OpenAI to burst, and for all of the current frothing at the mouth to come back down to earth.


Cringe

The era we live in requires us to separate sincerity from what is commonly referred to as “cringe.”

“Cringe” is the emotional reaction of people whose temperament is oriented toward epistemic cynicism, nihilism, and the despair of the typical, perpetually Very Online doomer.

Sincerity is hard to find when the words people write, the videos they consume, and the images and memes they create become substitutes for emotional engagement with other real people.


Olestra, GLP-1s, Nietzsche, and the Continued Search for a Chemical Solution to Human Nature

Two things occur to me:

1). People in online popular culture no longer talk about “body positivity” now that GLP-1 drugs are readily available and have proven to be somewhat effective. However, I remember the coming and going of Olestra, so I’m waiting for the other shoe to drop.

2). There is never going to be a chemical solution to the pile of psychological, emotional, and spiritual factors that cause the differing disorders, pathologies, habits, tendencies, and tics that humans experience as a result of living in a fallen world.

Of course, I am trade-off positive rather than solution positive, because the abyss of human nature is as deep and dark as the abyss Nietzsche rhapsodized about in his various mad warnings.


Amygdalas Running Amok

The story of technological adoption is a story of change. It’s also a story of amygdalas running amok, of forgetting history, of appeals to authority, and of grifters and hustlers. It always “works out,” and the path of working it out is always hard, bumpy, and unpredictable. We can’t wish that journey away.