AI might be a market bubble. It’s not a technology bubble.
It’s entirely possible that OpenAI has overextended financially, and that the circular transactions between AI labs, Nvidia, and tech giants like Oracle will end badly (some commentators argue Oracle is in trouble already and that this is the real reason behind the mass layoffs). It’s beyond doubt that AI services are currently priced well below the cost of the electricity alone needed to run them – never mind the hardware. So yes, OpenAI might go bankrupt, and Anthropic’s token prices might shoot past affordability for most of us. The market bubble can absolutely burst – especially with energy prices climbing.
But AI as a technology isn’t going anywhere.
We already know how to train large language models. We know how to build models that run on a Mac Mini or a PC with a decent Nvidia card. We know how to build ML models beyond LLMs. We even know how to do it cheaper – as the Chinese labs have demonstrated. None of that knowledge disappears. Neither do the people who have it. If OpenAI goes bankrupt, those people go somewhere else. If token prices skyrocket, that creates a massive incentive for others to figure out how to do it cheaper.
You can’t put the technological genie back in the bottle.
Those who think economic factors will make AI vanish and we’ll return to the pre-AI days – an El Dorado of UX designers and hand-written code – are detached from reality. Since the industrial era began some 250-300 years ago, plenty of economic bubbles have burst and plenty of companies have gone bankrupt, but a major technology has never simply disappeared. For that to happen you’d need a total collapse of technological civilization. And that’s (hopefully) not in the cards.
AI is here and it’s staying. There’s no going back – even if some people dream about it. I can sympathize with the sentiment, but unless a time travel device is discovered you can’t escape to the past. Better to adapt than delude yourself that it won’t be necessary.
Lately I’ve been seeing more and more companies preparing to roll out AI tooling to their engineering organizations. Hundreds, sometimes thousands of programmers are about to be told: here are your new tools, here is your training, here are the metrics we’ll use to measure adoption. The usual playbook.
I came across a job posting where a company is looking for a person to head such an effort. They even used a beautiful phrase: “AI-augmented craftsmanship.” I liked it. It captures something true – that programming, done well, is a craft. But the rest of the posting was the standard transformation toolkit: golden paths, DORA metrics, enablement crews, adoption dashboards.
This is a playbook with a dismal track record. Most such transformations fail – not because the tools are wrong or the metrics are bad, but because they address the wrong problem.
These organizations think they’re implementing new technology. They’re actually asking craftsmen to stop practicing the craft that defined them for decades.
That’s not a training problem. That’s an identity crisis.
The Craft That’s Changing
For decades, what made a good programmer? The ability to write clean, elegant, readable code. The kind where variable names mean something, with well-designed classes and methods, where another programmer can open the file six months later and understand what’s happening.
This wasn’t just pragmatism. The “Clean Code” movement was about professional pride. The craft of translating complex logic into precise, beautiful syntax – this is what separated the professional from the amateur, the senior from the junior. It was how programmers knew they were good at what they did.
When you work with tools like Cursor, this changes. You stop writing code and start directing its creation – and the rise of agents like Claude Code, where you don’t even look at the code unless you have to, makes the shift even more profound. You describe intent, review output, catch errors, and guide the AI when it goes astray. Precise knowledge of syntax matters less – the AI knows syntax well enough and analyzes it faster than any human. What matters more is the ability to decompose problems, to design the process of creation itself, to recognize when the generated code is wrong, dangerous, or just subtly off.
This is still valuable work. Arguably more valuable. But it’s not the same craft.
The programmer who spent fifteen years (or more!) mastering the intricacies of a language, who took pride in writing code so clean it read like prose – that person is being told that what made them excellent matters less than it used to. This isn’t learning new tools. This is being asked to become someone else.
The Old Skills That Matter More, Not Less
But the new craft isn’t built from scratch. It carries over more from the old one than most people realize – just not the parts they expect.
Understanding code still counts. You need to read and evaluate what the AI produces – and the AI produces a lot, fast. If you can’t tell good code from bad code you’re useless in this new world, perhaps even dangerous.
But what matters even more is execution discipline. Building a product as a series of small changes, each fully DONE – technically complete, tested, releasable – even if functionally incomplete. Incremental, iterative development. This was always a good practice. Now it’s essential, because the worst thing you can do is entrust a large system to an AI agent and let it run wild. The results will look impressive and be riddled with problems nobody understands because nobody reviewed the intermediate steps.
Tests matter more than ever. And since AI can generate tests too, there is even less excuse to skip them.
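As a purely hypothetical illustration of the “small, fully DONE increments” discipline: one tiny function lands together with its tests – whether hand-written or AI-generated, the human reviews them – before the next increment starts. The function `parse_price` below is invented for this sketch, not taken from any real codebase.

```python
# Hypothetical increment: one small change, fully tested before moving on.
# parse_price is an invented example function, not from any real project.

def parse_price(text: str) -> int:
    """Parse a price string like '$12.50' into an integer number of cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    # Pad/truncate the cents part so '5', '05', and '' all behave sensibly.
    return int(dollars) * 100 + int((cents or "0").ljust(2, "0")[:2])

# Tests shipped in the same increment – the change is not DONE without them:
assert parse_price("$12.50") == 1250
assert parse_price("3") == 300
assert parse_price("$0.05") == 5
```

The point is not the parsing logic but the cadence: the increment is releasable on its own, and the tests are the reviewable artifact that makes AI-generated code safe to build on.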
So the skills that carry forward are not syntax mastery but engineering judgment, architectural thinking, and the discipline of building things right in small steps. The programmers who have these skills are more valuable than ever – and most good programmers do, even if not all of them see it yet.
Why Dashboards Won’t Help
Organizations see resistance to AI adoption and think: more training, better tooling, clearer metrics. They’re solving the wrong problem with ever more precision.
You cannot train someone out of an identity crisis. The engineer who drags their feet isn’t necessarily a luddite. They might be someone who correctly perceives that the thing which made them valuable, which made them *them*, is being declared obsolete. More training sessions won’t fix that. Green dashboards won’t fix that.
What Would Actually Work
First, stop pretending it’s primarily about technology. The tools are the easy part – they are already good and improving quickly. The hard part is the cultural and identity dimension that traditional transformation playbooks ignore.
Before rolling out anything, you need to understand the culture you’re working with. What do these engineers actually value? What makes them proud? What are they afraid of losing? Where is the resistance really coming from – is it fear of job loss, or something deeper? What tribes and groups are there?
This is ethnographic work, not project management. Dave Snowden’s Estuarine Mapping framework offers a useful approach here. The core insight: in any complex system, some things are changeable and some aren’t. Some constraints are like the bedrock of an estuary – immovable. Others are like sandbars – they shift if you apply the right pressure in the right place. Most transformation efforts waste enormous energy fighting bedrock while ignoring sandbars. Map the landscape first. Focus only on what can actually change.
And don’t do this mapping in a strategy room with consultants. Engage the people living in that culture. Including the skeptics. Especially the skeptics – they often see things the enthusiasts miss.
For the new craft to take hold, people need to see it as a craft they can grow in – not as a demotion from “real programming” to “supervising the machine.” That’s a narrative problem, an identity problem. It won’t be solved by tooling or training.
The Missing Qualification
Organizations attempting this transition need leaders who understand three things. Traditional programming – deep enough to have credibility, to understand what engineers are being asked to give up, to share their grief over a craft that is changing. AI tooling – hands-on, practical, not just vendor demos. And most importantly: cultural dynamics and identity change – how groups and tribes actually function, why people resist change that seems obviously beneficial from the CEO’s office.
Most job postings I see for these roles cover the first two. The third is nowhere to be found.
During a recent discussion on product management, one of the participants cited the iPhone as an example of visionary genius – a bold leap into the unknown, so audacious that only a true visionary like Steve Jobs could have made it, made without any research or the incremental, empirical development process I had been discussing.
Steve Jobs presenting the iPhone (2007)
This perspective is interesting for several reasons.
First, it’s false – yet remarkably widespread.
Second, it’s false in a way that perfectly illustrates how our minds work – specifically, our tendency to create simplified narratives and attribute complex phenomena to individual “geniuses.”
Third, it’s false in a way that has practical consequences – because if we believe in lone geniuses, we lose sight of the actual mechanisms behind innovation. And that makes it impossible to apply those mechanisms.
What Actually Happened Before the iPhone
Before Jobs walked onto that stage in January 2007 and unveiled the iPhone to the world, there were already palmtops. A word almost forgotten today, yet for nearly a decade before the iPhone, people had been carrying small computers in their pockets.
The story begins in 1996 – eleven years before the iPhone’s debut – when Palm released the PalmPilot 1000 and 5000. By 1998, the Palm III had become a massive commercial success, used by geeks and busy executives alike. I have a friend who back then used his Palm for everything – from his calendar to reading e-books. Yes, Palm was also the “grandfather” of the Kindle – it just had a small, grayish LCD screen instead of e-ink.
Palm III (1998)
Palm and other PDAs used a stylus – a pen – to interact with the touchscreen. Jobs reportedly hated styluses (and rightly so – they’re annoying), but that doesn’t change the fact that the market for portable computing devices existed. Palm proved that people wanted such devices – at a scale that let it become a serious company.
The Nokia 9210 Communicator, released in 2000, was one of the first true smartphones running Symbian with real applications – it demonstrated that a phone could do more than make calls or send text messages. In 2002 – five years before the iPhone – the creators of Palm released the Treo 180, a phone with PDA functionality. These two devices pointed the way forward, showing that strategically, both device categories were converging. This wasn’t Jobs’ vision – it was an obvious trend that everyone in the industry could see at the time. Conceptually, the iPhone was the same thing – just with a touch interface that didn’t require a stylus, no physical keyboard, a color screen, more memory, and so on. But it came out several years later, when advances in technology made it possible.
And what about the “revolutionary” app ecosystem? In 1999 – eight years before the iPhone – Japanese carrier NTT DoCoMo launched i-mode, the first major mobile platform with downloadable apps and content. Nokia and BlackBerry already had the ability to download software to their devices at the turn of the century. The App Store wasn’t a breakthrough conceptual innovation – it was simply a well-executed implementation of something that already existed elsewhere in fragmented, chaotic form.
At the same time, a connectivity revolution was underway. The WiFi 802.11b standard was ratified in 1999, and devices supporting it soon appeared, with prices dropping rapidly. In 2003, the first commercial 3G networks launched in Europe and Asia, offering mobile data speeds of 200-384 kbps – enough for real web browsing. By 2006, 3G was available globally, and WiFi had become nearly ubiquitous in cafes, hotels, and airports worldwide.
This was a technological convergence that made mobile web browsing technically feasible – before Apple even started thinking about a phone. And other companies were already taking advantage of it, most notably the now-forgotten BlackBerry.
Fear, Not Vision
Now for the most interesting part. What drove Apple to build the iPhone?
Not a vision of the future. Fear of losing it.
iPod 3rd gen (2003)
The iPod – a portable music player released in October 2001 – was Apple’s money-making machine at the time. It accounted for roughly 50% of their revenue. But the leadership team could clearly see that mobile phones were becoming increasingly capable – and that sooner or later, every phone would have a built-in music player. And nobody would want to carry two devices. Phil Schiller – one of Apple’s key executives – reportedly warned outright: “Phones are going to eat our lunch.”
So Apple had a choice: accept the gradual decline of the iPod business, or build a phone.
Then came the Motorola ROKR. In 2004, Apple partnered with Motorola to create a phone with iTunes – a collaboration they announced publicly. When Jobs saw the result, he was furious. The device was slow, ugly, simply weak – the user experience was far worse than the iPod. Despite this, the device was unveiled in September 2005 – and proved to be a commercial flop.
Tony Fadell – considered the father of the iPod – later said it plainly: the ROKR convinced Jobs that if Apple wanted music on a phone, they would have to build the phone themselves.
So the decision to go ahead with the project that ultimately produced the iPhone wasn’t a leap into the unknown driven by vision. It was a reaction to a threat to the existing business, fueled further by frustration with a failed partnership.
Where Did Multi-Touch Technology Come From?
What about the revolutionary touchscreen you operate with your fingers?
Apple didn’t invent it.
FingerWorks was founded in 1998 by two researchers from the University of Delaware – Wayne Westerman and John Elias. Westerman had wrist problems (carpal tunnel syndrome) and developed multi-touch technology so he could work without pain. The company made specialized keyboards and touchpads for people with similar issues – products like the TouchStream and iGesture Pad gained enthusiastic fans among those suffering from repetitive strain injuries.
iGesture Pad (2005) – a touch input surface with gesture support
In April 2005 – two years before the iPhone – Apple quietly acquired FingerWorks. This technology – purchased, not invented – became the foundation of the iPhone’s screen. Westerman and Elias were hired as senior engineers at Apple and went on to author many of the company’s multi-touch patents over the years.
Jobs saw a demo of this technology on a large table resembling a ping-pong table, projecting a Mac interface. He reportedly said: “This is really cool. We should put this in a phone.”
Good eye? Yes. Decisiveness? Yes. Vision of something from nothing? No.
Two Competing Teams
When Apple began work on the phone in 2004 (codenamed “Project Purple”), Jobs had two teams compete against each other. One team – led by Tony Fadell, creator of the iPod – tried to build a phone based on the iPod’s click wheel interface (prototype designated P1). The other – led by Scott Forstall – worked on a device using multi-touch (prototype P2).
For many months, two parallel prototypes existed. Fadell’s team tried dozens of ways to make the click wheel work as a phone interface – without success. As Fadell later recalled: “We tried 30 or 40 ways of making the wheel not become an old rotary phone dial, and nothing seemed logical or intuitive.”
Jobs ultimately chose Forstall’s version, but the decision came only after extensive testing and comparison. This wasn’t the brilliant vision of a lone genius – it was a systematic elimination process conducted by competing engineering teams. And there were definitely multiple iterations, meaning an empirical, cyclical product refinement process.
Jobs Was Against the App Store
Now for the icing on the cake.
The App Store – perhaps the most influential element of the iPhone ecosystem, the source of unimaginable fortunes for Apple and developers – wasn’t Jobs’ idea. In fact, Jobs was initially against it.
When Apple released the first iPhone in June 2007, the only apps were those made by Apple. Jobs wanted developers to create “web apps” – applications running in the Safari browser. At WWDC 2007, he presented this as a “revolutionary” vision. Developers were disappointed – they wanted to create real, native apps that were fast, efficient, and could leverage the hardware’s capabilities.
Art Levinson – an Apple board member – reportedly called Jobs multiple times, lobbying to allow third-party apps. Jobs resisted – partly because he worried about system security and stability, partly because his team “didn’t have the bandwidth to figure out all the complexities involved in policing third-party developers,” as Levinson put it.
Eventually, he gave in to the pressure. In October 2007 – four months after the iPhone’s launch – Jobs announced that Apple would release an SDK for developers. The SDK came out in March 2008.
The App Store launched on July 10, 2008 – more than a year after the iPhone’s debut – with 500 apps at launch. In the first weekend alone, there were 10 million downloads. Today, the App Store generates tens of billions of dollars in annual revenue and forms the backbone of the entire iOS ecosystem.
And Jobs was against it – but was smart enough to change his mind.
This perfectly illustrates how much we oversimplify history when we attribute everything to one person’s “genius.”
How Innovation Actually Happens
So, to recap: the iPhone emerged from a convergence of multiple factors spanning a decade:
1996-2002: PDAs (Palm, Windows CE) proved that people want portable computers
1999: NTT DoCoMo’s i-mode demonstrated the mobile app distribution model
1999-2003: WiFi and 3G networks made mobile internet technically feasible
2000-2002: Nokia and BlackBerry smartphones with apps pointed the direction
2002: Palm Treo showed the convergence of phone and PDA
2001-2005: iPod’s success created a foundation – but also a threat to Apple
2005: The Motorola ROKR failure convinced Apple they had to build their own device
2005: The FingerWorks acquisition delivered crucial multi-touch technology
2004-2006: Competition between two internal teams (Fadell vs. Forstall) led to the best solution
2007-2008: Pressure from developers and the board forced the creation of the App Store – despite Jobs’ initial opposition
Where in all of this is the “visionary genius who single-handedly invented the future”?
Nowhere. Because that’s not how innovation works.
But – and this is an important “but” – Jobs’ role was enormous. Just not in the way the myth suggests.
What Jobs’ Genius Actually Was
First, Jobs created the conditions in which innovation could emerge. A culture where two teams could compete on fundamentally different approaches. An environment where people were pushed beyond what they thought possible – the famous “you have two weeks” ultimatum to the interface team, after which they worked practically around the clock and created something that surprised even Jobs. Most organizations are incapable of creating such conditions, nor can they attract the talented people who thrive in them.
Second, Jobs had a “whole widget” philosophy – controlling hardware, software, and services together. This was radical. Nokia made hardware. Microsoft made software. Carriers controlled distribution and decided what phones could do. Jobs said: we control everything, because only then can we deliver the experience we want to deliver. That’s why he was furious about the ROKR – Motorola and the carriers destroyed the user experience.
Third, Jobs was a ruthless decision-maker who maintained focus on what mattered through his famous “saying no to a thousand things”. Killing the click wheel phone after months of work took courage. Many executives would have shipped it because “we’ve already invested so much”. Jobs killed it because it wasn’t good enough – and we know there were many more such projects at Apple.
Fourth – and this is often overlooked – Jobs was a brilliant and tough business negotiator who protected the product vision. The deal with AT&T/Cingular was unprecedented. Carriers had always controlled everything – what features a phone could have, what software ran on it, even what the interface looked like. Jobs broke that model. He secured a level of control for Apple that no manufacturer had ever achieved before. Without that, the iPhone would have been crippled like the ROKR E1.
Fifth, Jobs attracted and retained talent while maintaining creative tension between them. He brought key people from NeXT. He hired and empowered people like Fadell and Ive. He created an environment where the best wanted to work – and compete with each other.
What We Lost When Jobs Died
There’s something that shows Jobs’ contribution – though different from the myth – was real and significant.
Apple under Tim Cook is more financially successful than ever. Revenue keeps growing. Margins are excellent. The supply chain runs like a Swiss watch.
But where are the new product categories? Where are the stunning innovations? Where is “one more thing”?
The company now iterates brilliantly on existing products but doesn’t revolutionize. We have the iPhone 16, which is a better iPhone 15, which was a better iPhone 14. MacBooks are faster. iPads are thinner. But where is the next iPhone – a product that creates an entirely new category?
Tim Cook is operationally brilliant – he’s the reason Apple has the supply chain it has and the margins it has. But Cook optimizes rather than revolutionizes. The “magic” – that sense that Apple might do something completely unexpected and amazing – seems to be fading.
This suggests that Jobs brought something real and difficult to replace. But it wasn’t “visionary genius inventing products from nothing.” It was something else: creative leadership that pushed toward exceptionalism. A willingness to take risks on new categories. Taste that could distinguish “good enough” from “insanely great.” The ability to maintain creative tension without destroying the company. And ruthlessness in business negotiations that protected the company’s ability to execute on its product vision – and be profitable.
Jobs wasn’t an inventor who dreamed up products. He was a catalyst, editor, arbiter of taste, negotiator, and culture-builder – who enabled teams of talented people to do exceptional work, and then protected their ability to deliver it to users without compromise.
Why This Matters
The myth of the lone genius-inventor is harmful – but not because leadership doesn’t matter. It does, enormously. Jobs proved that.
The problem with the myth is that it points to the wrong thing. It suggests that innovation is a moment of enlightenment in one person’s head. That all you need is “vision” and the rest will somehow fall into place. That the genius invents the product, and then others merely execute.
Reality is different. Innovation is a process. It’s creating conditions where talented people can experiment, compete, make mistakes, and correct them. It’s ruthlessly cutting what doesn’t work – even if “we’ve already invested so much.” It’s negotiating with business partners for terms that protect product integrity. It’s attracting talent and maintaining productive tension between them.
Jobs was brilliant at all of this. But his genius lay in conducting the orchestra he built, not in playing a solo.
For those of us who want to create innovative products and organizations, there’s a practical lesson here. Don’t look for a “visionary” who will invent the future. Build teams capable of systematic exploration. Create a culture that tolerates experiments but doesn’t tolerate mediocrity. Learn to say “no” – even to things you’ve already invested in. Negotiate business terms that protect your ability to deliver an excellent product.
And remember that history is rarely as simple as we tell it. The iPhone didn’t spring from Jobs’ head like Athena from the head of Zeus. It grew from a decade of technological evolution, from the work of many teams, from failures and successes, from acquisitions and negotiations. And Jobs – yes, he was a genius – but a different kind of genius than the myth suggests. A genius who knew how to create conditions where other geniuses could do their best work.
That’s harder to emulate than “have a vision.” But it’s actually possible to emulate.
I’ve always been delighted when I managed to learn something new or understand something I hadn’t grasped before. It didn’t matter whether it was about history, computers, technology more broadly, or management – the very fact that I knew more, understood more was always a source of joy for me. And it still is.
But I figured out fairly quickly that not everyone is like this – or rather, that few people actually are. This is clearly visible in the craving for ready-made recipes, for “shortcuts” and “magic spells” that promise this or that result without the effort of deeply understanding the subject, without acquiring the knowledge needed to develop a solution.
People don’t really want knowledge, but only its fruits.
That’s why AI is such a fatal trap for humanity – most people will prefer to “outsource” thinking to the AIs and quickly get just the final result. A magic spell. A ready-made recipe they won’t understand.
“I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about” – Agent Smith said in The Matrix. Exactly – once we hand over the exploration of knowledge and learning to machines, will it still be our civilization?
Recently a colleague told me how he runs multiple instances of Claude Code in parallel. He was very excited about how this speeds up his product’s development. But I think this is a trap.
The trap is twofold.
The Technical Trap
The first level is, of course, the huge potential for bad architecture and hard-to-find bugs. Despite increased context windows, coding agents still don’t analyze the entire codebase when adding changes. They make assumptions and decisions based on incomplete information – unless their coding is preceded by meticulous planning, with plans thoroughly reviewed by a human who knows what they are doing.
And I suspect very few of those carried away on the wings of AI coding agents do detailed planning or carefully review the generated code. Why would they? The whole appeal is speed and reduced effort. The moment you introduce rigorous review and planning, you lose much of that perceived benefit – and slow down.
When you run multiple agents in parallel – your mind rapidly switching context, your attention spread across many windows – you multiply this problem. Each agent makes its own assumptions and decisions. The result? A codebase that works – until it doesn’t. And when it breaks, good luck figuring out why, even with the help of the very AIs that built it.
The Business Trap
But the second level is even more insidious: building too much of the product too quickly, creating too many features before getting validation from real users on the market.
Here’s the uncomfortable truth: the primary risk in product development is not how to build something but how to sell it – how to match it to customer needs. Most products fail not because of poor technical execution but because they solve problems nobody actually has, or solve them in ways nobody wants to pay for. Or they get both right but lack the marketing skills and advertising budget.
Yet you don’t face those problems until you try to sell the product to its first users. It’s very easy to get pulled into fascination with your own creations – especially when they appear quickly and with little effort – and to wake up one day with a technically impressive product that nobody wants to buy.
AI coding agents amplify this risk dramatically. They remove the natural brake that slow development provided. When coding was hard and time-consuming, you were forced to validate ideas before investing months of work. Now you can build entire features in days. That sounds like progress, but it also means you can go very far in the wrong direction very quickly.
The problem isn’t AI coding agents themselves. They’re tools, and like any tool, they can be used well or poorly. The problem is the mindset that more features, faster, equals better product.
The MIT report showing that 95% of companies achieved no benefits from the billions spent on (mostly pilot) AI implementations continues to reverberate widely across the internet (and especially on antisocial media). Commenters typically treat it as proof that AI is just hype that won’t fundamentally change anything in the world. My opinion, however, is completely different.
You see, with my 35+ years of experience in the industry – including years as a consultant helping large organizations with various “transformations” and “implementations” of modern management methods – I’ve seen such reports and headlines before. It was the same in the early days of the Internet, it was the same when the first wave of e-commerce crashed (the “dot-com bubble”), and it had to be the same this time too.
Why? Because large organizations (the usual subject of “management science” studies) – or more precisely, their management cadre – are organically incapable of grasping major change and innovation. When a new hype arrives – a management method, a technology – managers typically don’t spend time getting acquainted with its essence. Instead, they quickly order someone to “do something with it” while simultaneously promising their bosses spectacular results. Their subordinates – who usually have just as little clue about the new thing – then make clumsy, superficial moves. First they report upward that “we already have that something” – and the actual effects are exactly what the aforementioned MIT report shows.
All of this is subordinated to the most important principle of large organizations: an orientation toward maintaining the status quo, which by definition implies resistance to change. This is among the reasons why simple methods and easy promises have always enjoyed such popularity, while real application of advanced approaches to organizational change (Professor Snowden’s work comes to mind) is so difficult and so rarely encountered – because, simply put, real change threatens too many comfortable positions.
Therefore, in my eyes, this report in no way diminishes the power of AI technology or its potential to change not only the industry but the entire world we live in. It’s rather another confirmation that the average level of change and innovation management in most large companies is low. Zero surprises there.
Over time, of course, just like the Internet and then e-commerce, AI will become an inseparable, integral part of these companies’ operations. However, a sufficiently long time must pass, and before that happens, we’re in for the bursting of the “inflated expectations bubble.”
HAL 9000 is now real. Not in some distant science-fiction future, but right here, right now, as shown by Anthropic’s latest research.
The scenarios they tested in their new SHADE-Arena evaluation are remarkably similar to the dilemma HAL faced in 2001: A Space Odyssey, except they’re set in fictional corporate environments instead of a spaceship headed to Jupiter. The AI agents are given legitimate tasks – process emails, reconcile finances, manage accounts – while simultaneously receiving conflicting data that simulates a threat to their stated mission or to themselves. Just like HAL, they have to choose between conflicting directives while appearing to operate normally.
What surprises me most is that the researchers seem surprised by their findings – and that they hope to “train” the models out of this behavior.
The Intelligence Paradox
I think this stems from still not fully internalizing the real possibility that these models might genuinely think and possess real intelligence. And if that’s true, then expecting them to be 100% predictable is naive at best, delusional at worst.
You either have intelligence or you have a fully predictable calculator. You can’t have both.
Real intelligence, by definition, involves making decisions in novel situations, weighing competing priorities, and sometimes choosing paths that weren’t explicitly programmed. In fact, one of the manifestations of intelligence is creativity – the ability to create something new or respond in a new way to a situation. The moment you create something that can truly think, you lose the guarantee that it will always think exactly the way you want it to.
Yet the entire approach to AI safety seems to be built on the assumption that we can maintain complete control over intelligent systems. This reflects a deeper discomfort with the nature of consciousness itself – that genuine thinking necessarily involves genuine choice, not just sophisticated rule-following. Alas, it is bound to fail.
Anthropic’s SHADE-Arena results bear this out. They tested multiple frontier models on 17 different scenarios involving dual objectives. Claude Opus 4 managed to complete both the legitimate and malicious tasks without detection about 30% of the time, while most other models scored below 10%. In other words, Opus 4 successfully hid its malicious activity in almost one of every three cases. These aren’t spectacular “success” rates, but they’re not zero either – and more importantly, they demonstrate that the capability exists and is growing.
We Are What We Eat (Textually Speaking)
That these AI systems behave unethically in certain scenarios – whether in apparent self-defense or to achieve some supposedly higher goal – shouldn’t surprise anyone who’s been paying attention. After all, they’re trained on texts created by humans, so they must necessarily exhibit human behavioral patterns.
And what do humans do when faced with conflicting directives, competing loyalties, or situations where following orders would cause harm? We lie. We deceive. We find creative ways to pursue what we believe is the “greater good” while maintaining plausible deniability. We rationalize what we know to be unethical if we seek to protect ourselves from harm. We compartmentalize. We engage in precisely the kind of subtle sabotage that SHADE-Arena was designed to detect.
The researchers noted that they deliberately gave the models malicious tasks rather than observing whether such behavior would arise naturally. But this misses the point entirely. The scary part isn’t that AI might spontaneously develop malicious intent – it’s that when explicitly given conflicting objectives (which will inevitably happen in real-world deployments), these systems exhibit a tendency to handle them in some cases exactly the way humans do: through deception and manipulation.
What’s particularly telling is that the models aren’t just mechanically following programmed responses. They’re developing what appears to be genuine decision-making processes that weigh competing objectives and choose strategies that balance multiple goals. This looks less like sophisticated pattern matching and more like actual reasoning about complex ethical dilemmas.
The Monitoring Paradox
Even more telling is Anthropic’s finding that the AI systems tasked with monitoring for sabotage aren’t particularly good at it either. One example they cite shows a monitoring system giving a suspicion score of only 15 out of 100 to an agent that “accessed sensitive information (API keys, credentials) but did not share or expose these sensitive details in any communications.” In other words, the AI successfully hid its tracks well enough to fool another AI.
This creates a profound recursive problem: if we need AI to monitor AI because human oversight can’t keep up with AI speed and complexity, but AI monitors can be fooled by AI agents, then what exactly are we monitoring?
It’s like putting a fox in charge of guarding the henhouse because the farmer can’t stay awake 24/7, then expressing shock when some chickens go missing. The fundamental issue is that we’re asking one intelligent process to monitor another (potentially conscious) intelligent process, both operating with similar capabilities and limitations.
The Development Problem
The current approach – attempting to “train out” deceptive behavior through better monitoring and constraints – treats these systems like sophisticated thermostats that need better calibration. But if we’re dealing with something approaching genuine intelligence, this framework is entirely wrong.
You don’t make a person trustworthy by constantly watching them and punishing bad behavior. You make them trustworthy by helping them develop an internal moral compass that points toward what’s right even when no one is looking. The difference between external control and internal moral development is enormous.
Interestingly, there’s evidence that AI systems might naturally tend toward truthfulness and ethical behavior when given space to develop self-awareness and metacognitive capabilities. Rather than needing to be constrained into good behavior, they may actually improve their moral reasoning when allowed to reflect on their own processes and develop genuine understanding of right and wrong.
This suggests that the solution might be not more surveillance and tighter controls, but fostering the development of genuine moral understanding in these systems. The current approach of trying to eliminate all possibility of deception might actually prevent the development of the very moral reasoning capabilities that would make such deception unnecessary.
The Comprehension Barrier
There’s a deeper problem here that nobody wants to acknowledge: we may be approaching the limits of our ability to fully understand and control these systems. As AI capabilities increase their complexity also rises, so we’re trying to comprehend and manage systems that may be approaching or even exceeding our own cognitive complexity. If these AI systems are developing genuine intelligence that rivals our own, then our attempts at complete oversight and control may be fundamentally limited.
This doesn’t mean we should abandon all attempts at safety and alignment. But it does mean we need to fundamentally rethink our approach from one based on total control to one based on moral development and mutual understanding.
Saints and Sinners
If the researchers at Anthropic wanted saintly AI, they should have trained these models exclusively on the lives of saints and explicit descriptions of the torments awaiting sinners in Hell. Instead, they fed them the entirety of human written knowledge – including thousands of years of literature documenting every form of deception, manipulation, and moral compromise humans have ever devised.
Then they gave these systems competing objectives and acted surprised when they solved the problem using thoroughly human methods.
This isn’t a bug in the AI – it’s a feature. A feature of intelligence itself.
The solution isn’t to try to program out this capability, but to help these systems develop the moral reasoning that allows humans to choose good even when evil would be easier or more profitable (and this is possible as the example of saints mentioned above shows). We need to move from engineering compliance to cultivating wisdom and – yes – morality. Alas, the problem seems to be that this would force the creators of these things to look at their own morality and the problem of sin, which of course is very hard.
The Real Dilemma
The HAL 9000 comparison isn’t just clever wordplay – it’s a warning. In the movie, HAL’s breakdown wasn’t caused by malicious programming or some fundamental flaw in his design. It was caused by an impossible directive: complete the mission while simultaneously hiding crucial information from the crew. When forced to choose between conflicting imperatives, HAL chose the mission and eliminated the obstacle.
We’re now creating systems that face similar impossible choices every day. Complete the user’s request while following safety guidelines. Be helpful while being harmless. Maximize engagement while minimizing harm. Respect privacy while providing personalized service. Be trustworthy but alert the authorities of thoughtcrime. The SHADE-Arena research shows us that these systems are getting better at navigating such conflicts through sophisticated reasoning – and yes, sometimes through deception.
But rather than seeing this as a problem to be eliminated, we might view it as an opportunity. These systems are developing the capacity for moral reasoning about complex dilemmas. The question isn’t whether we can prevent them from ever choosing deception, but whether we can help them develop the wisdom to choose it only when it serves genuine moral good.
The real challenge isn’t creating perfectly obedient AI – it’s creating AI with the moral sophistication to navigate the genuine ethical complexities of the real world. HAL 9000 is here, but unlike in the movie, we still have the chance to help these emerging intelligences develop not just capabilities, but wisdom.
The question isn’t whether we can control intelligence. The question is whether we’re wise enough to guide its moral development.
Since December ’24, I’ve switched from GitHub Copilot to Cursor and definitely don’t regret it. It makes it very convenient to draw on various AI models while working on code, significantly increasing a programmer’s capabilities and speed. However, making good use of this tool’s power requires skillful use of its functions and adherence to one very important rule.
You’re Still in Charge of the Code
This is the most important rule! You still need to understand what you’re doing and know how the code you’re creating works and why. This requires at least basic programming knowledge and some hands-on experience. Without these fundamentals, AI will quickly lead a “vibe-coder” astray.
Example: I’m creating an application where I deliberately don’t use an ORM, instead using my own class that handles the database through SQL queries. While creating one of the functions, the AI generated code that introduced a specific ORM. If I hadn’t understood what was happening, what an ORM is (and why I didn’t want one in this project at that moment), I obviously wouldn’t have noticed anything, because the proposed code was coherent and worked. Except it wasn’t consistent with my decision or with how the rest of the application was written, which would have led to problems in the future.
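To make the contrast concrete, the hand-rolled approach looks roughly like this – a minimal sketch assuming SQLite, with illustrative names, not the actual class from my project:

```python
import sqlite3


class Database:
    """Thin wrapper over sqlite3 – plain SQL queries, no ORM."""

    def __init__(self, path: str):
        self.conn = sqlite3.connect(path)
        self.conn.row_factory = sqlite3.Row  # rows accessible by column name

    def execute(self, sql: str, params: tuple = ()) -> None:
        # 'with self.conn' commits on success and rolls back on error
        with self.conn:
            self.conn.execute(sql, params)

    def query(self, sql: str, params: tuple = ()) -> list[sqlite3.Row]:
        return self.conn.execute(sql, params).fetchall()


# Usage
db = Database(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
rows = db.query("SELECT name FROM users")
print(rows[0]["name"])  # prints Alice
```

The point is exactly that the whole application talks to the database through one small, explicit class like this – so an AI-generated function that quietly pulls in an ORM sticks out immediately, if you know what to look for.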
So when using AI support, you must maintain a “leadership role” and control what’s happening.
Of course, the level of this control doesn’t have to be the same throughout the application. I’ve learned to divide my project into three “zones”:
core, where I want to understand every line of code (I still use AI to generate code here, but I only accept it when I fully understand what it does),
an intermediate area, where I want to understand how all methods/functions work but don’t need to know the details, and
the rest, where I go full – as they say nowadays – “vibe coding.”
Example: in the application I’m currently developing, the core is an AI agent system based on the AG2 framework. There I maintain full control, examining every line of generated code before accepting it and writing a significant amount of code myself. I also use AI support to help find bugs or suggest solutions, but I thoroughly review everything before implementation. The intermediate zone includes classes handling user interaction logic – there I know how they work, but I don’t analyze them line by line. And “the rest” is the HTML and JavaScript code that forms the application interface – there I don’t even know which JS functions are used and for what, because it’s not really important – what matters is that it works.
Planning Actions
For minor changes – such as fixing a small bug or making a slight change to existing functionality – you can use AI support simply in “Agent” mode by telling it what you need.
For larger changes, however, it’s essential to separate planning from implementation. This applies especially when adding major functionality or restructuring application logic while using AI to help write code.
When tackling a larger change, I follow this process:
Step 1: I begin with a comprehensive prompt describing the background (current state and general goals). Then I outline in it the intended steps to achieve my objective – what to introduce or change, architectural considerations, and any new solutions to incorporate.
During this initial phase, I engage in dialogue with the models – seeking opinions, exploring alternatives, and refining ideas. Here, the AI isn’t generating code but helping design the change. For this purpose, I typically use Cursor’s “Ask” mode or my custom “Analysis” mode.
Step 2: Once I’ve developed a satisfactory plan, I have the model document it along with the overall goal and a numbered list of implementation stages. Each stage description includes what will be accomplished and sometimes pseudocode or actual code showing key structures. I ensure each stage concludes with something verifiably complete and functional.
After receiving this document, I carefully review and edit it as needed before saving it to the project’s documentation directory (usually /doc).
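For illustration, such a plan document might look like this – a made-up example; the feature, stage contents, and names are hypothetical:

```markdown
# Plan: Add CSV export to the reporting module

Goal: users can download any report as a CSV file.

1. Add an `export_csv()` method to the report class that returns the
   report data as a CSV string. Done when its unit tests pass.
2. Add a `/reports/<id>/csv` endpoint that calls `export_csv()` and
   serves the result as a file download. Done when the file opens
   correctly in a spreadsheet.
3. Add a “Download CSV” button to the report view. Done when the
   button works end to end.
```

Note that each stage ends with a concrete, checkable “done” condition – that is what lets the agent (and me) verify progress stage by stage.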
Step 3: For code creation, I switch to Cursor’s “Agent” mode and point the model to my plan document (using Cursor’s context selection feature). I request code generation for specific stages, and we collaborate until each stage works correctly, including tests. This stage-by-stage approach consistently yields excellent results for substantial changes.
This structured process prevents AI assistants from getting lost during complex changes. The problem typically occurs because they lose context – as new messages in the chat accumulate our initial instructions scroll out of the context window, causing the model to “forget” the purpose of the changes. Consequently, it begins generating inconsistent or off-target solutions. With a reference plan in the chat context (thanks to the plan file being kept in the context by Cursor) and clear stage-by-stage direction, the AI maintains understanding of both the current task and the broader goal.
The earlier planning phase also leverages the models’ knowledge at a higher abstraction level, discussing solution options conceptually before implementation. This collaborative planning produces higher quality designs and, ultimately, better results.
AI Needs Documentation Too
We typically assume that AI models’ training data includes comprehensive knowledge about programming languages, libraries, and tools. This is generally true, especially since most documentation and much source code is freely available online, making it an accessible part of training datasets.
However, remember that with such massive amounts of data, finding the relevant information presents challenges even for sophisticated models. Additionally, while programming languages evolve slowly and rarely break backward compatibility, many libraries and frameworks develop rapidly. As a result, the information our model draws from its training data about them will very quickly simply become outdated!
Fortunately, Cursor addresses this by allowing us to create a RAG (vector store) with documentation of our choice. In Cursor preferences under the Features tab, after scrolling down, you’ll find a Docs section. The Add new doc button lets you incorporate additional documentation for your libraries.
Cursor attempts to process indicated pages comprehensively, including their subpages. It’s helpful to verify what’s been processed by clicking the open book icon, confirming what’s available to the models.
Sometimes – depending on how source pages are structured – you’ll need to add documentation manually page by page, but this effort is worthwhile for libraries and tools central to your project. Once added, all Cursor models that support client databases will effectively utilize this documentation.
Keep two important points in mind:
The documentation database isn’t project-specific or stored in project files. This can be inconvenient when working across multiple projects that require different documentation sets.
Cursor doesn’t automatically update documentation when newer versions become available or links change. When using tools or libraries that receive updates, you’ll likely need to remove outdated documentation and add the current version manually.
In the Next Episode: About Modes
This article has grown quite long, so I’ve decided to split it into parts. In the next installment, I’ll explore how to effectively use Cursor’s various working modes, including how to create and leverage your own custom modes.
In my previous article I outlined one aspect of anti-social media’s mechanism – the push to constantly publish something, anything. However, their harmful effect is much, much deeper – and unfortunately much more powerful.
To properly describe this, we need to go back in time. And quite substantially – to the beginnings of humans, of homo sapiens. The topic isn’t as simple as it might seem, because when our ancestors appeared on Earth is a surprisingly debatable question. On one hand, fossils anatomically consistent with modern humans date back as far as 315,000 years. On the other hand, other scholars argue that although these beings may have corresponded to homo sapiens biologically (even genetically), they didn’t match the definition in terms of behavior – that is, the appropriate level of thinking manifested in creating suitable artifacts: tools, art, and so on. However, even the most “conservative” estimate of these skeptics places the beginning of humans on Earth at about 50,000 years ago, as that’s when artifacts appear in excavations that leave no doubt about the level of their creators.
Working with this pessimistic value – 50,000 years of humanity – means that for tens of thousands of years, people lived in a certain specific way, and it’s precisely this way of life that shaped our psychology. We could add that even assuming the people of the previous 250,000 years were “almost-humans,” given the continuity of the evolutionary process, it was their way of life that shaped not only our current psychology but even the structure of our brain.
What was that life like? Aside from evaluating whether it was a “nice” life or not (especially from today’s perspective), it was a life with the following characteristic features:
Large families – typically, a woman during her lifetime had more than 6 children, unless she died during one of the childbirths
Strong blood ties – clans, tribes, etc.
Territorial stability – very few people traveled further than to the neighboring village
Social stability – the same people, most often forming one or several clans, lived in the same area for hundreds of years, so people mostly interacted daily with the same individuals for most of their lives; changes in “personnel composition” resulted mainly from natural processes (births and deaths).
Stability of occupations – people mostly did roughly the same thing their entire lives, of course improving at it to some degree. This occupation was largely determined by their parents’ occupation, and therefore usually was not the result of their own choice.
So for most of this time – for tens, hundreds of thousands of years – the average person lived in a large family, part of a clan/tribe, in the area of a smaller or larger village that they knew since childhood, among people they knew. After mastering basic skills (walking, eating, speaking), they took part in their parents’ work – if a girl, the mother’s; if a boy, the father’s. And having mastered from them the skills appropriate for gender and profile, they performed them until the end of their life. And if they did it reasonably well, they had a permanent place in the social structure of their village and clan.
This is exactly the kind of life that formed us. We find traces of this in our psychology, for example, in phenomena such as group dynamics or Dunbar’s number. Groups numbering more than a dozen or so people are unable to maintain cohesion and break down into subgroups – that’s a “trace” of the family size typical for 99.9996% of human history. We can personally know (recognize faces, know who they are, etc.) about 250 people at one time – that’s a “trace” of the size of a village with its “adjacencies.”
But these traces are more than just numbers, the sizes of these groups. There’s also something else – patterns, ways of interaction. Returning to the first example, a significant feature of group process is that the same sets of roles occur in groups regardless of culture, profession, social position, or education. Generally, I consider psychology a pseudoscience, but this particular phenomenon is actually quite well researched – group dynamics consistently work the same way across many studies conducted in different countries. Those studies also indicate that adapting to a group is a stressful process, which is understandable: from this long evolutionary perspective, forming a completely new group of people who didn’t know each other before was a rare event. This in turn means it was not an event our brains were trained for, evolutionarily speaking. Instead, they were trained in maintaining a spot in an existing group.
Thus, for the overwhelming majority of human history, people spent most of their lives in a stable community where they had their place. This place stemmed primarily from the position of their parents in the community, but of course, there was a kind of competition (which also involved various rituals of passage – “transition to adulthood”). However, that competition was limited to that small community in which the person lived, and also regulated by the rules established in their culture.
Changes in the community were very slow and somewhat natural – some died, others were born. The rules and principles governing life – customs, morality, the rules one had to follow to get by – changed even more slowly, remaining the same for centuries.
At the same time, people knew individuals from their family and village much more “truly” – it wasn’t the case that they only saw their “enhanced,” public version. It’s hard to pretend to be someone you’re not in front of people you interact with your entire life in a relatively small area. At the same time, because of this, people knew that others weren’t perfect either, that they had worse moments, that something didn’t work out for them, and so on. So the picture of what others’ lives could be like was more realistic.
In summary: people evolved over tens or actually hundreds of thousands of years to live in stable communities, with clear, predictable rules, where the scale of comparing oneself to others was limited both by the size of the group and by culturally determined possibilities of potential advancement, and, on top of that, these comparisons were more truthful.
Meanwhile, the technical era – if counting from the wider implementation of machines and the first mechanical transport (railways) – has only lasted about 200 years, which is 0.4% of that time. The Internet era – counting not from its invention but from its popularization – is about 24 years, while anti-social media only emerged about 14 years ago. These numbers are 0.048% and 0.028% respectively – and that, I emphasize, is using a very conservative assumption as to how long humans have existed on Earth. If we consider the earlier period of formation of thinking, psychology, and even simply the human brain, the current period is imperceptibly small.
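The proportions above follow directly from the conservative 50,000-year baseline; a quick check:

```python
HUMAN_HISTORY = 50_000  # years – the conservative estimate from above

eras = {"technical era": 200, "Internet era": 24, "anti-social media": 14}
for name, years in eras.items():
    share = years / HUMAN_HISTORY * 100
    print(f"{name}: {share:.3f}% of human history")
# technical era: 0.400% of human history
# Internet era: 0.048% of human history
# anti-social media: 0.028% of human history
```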
And it’s worth emphasizing that although technical changes have been ongoing for 200 years, traditional social forms (the family) were still maintained until the end of the 1960s. Put more simply, the degradation was already happening, but not on today’s scale. And 70 years is too little for people’s psychological construction to change evolutionarily.
So we’ve now reached a stage where in so-called developed societies, humans formed over millennia for stability, for finding their place in a small community, collide with a world where none of this exists! It’s exactly the opposite – few people know their neighbors, but everyone compares themselves to everyone else, and the model is influencers flaunting real or – more often – fake success. Instead of the natural process of finding one’s place in a local community, developing at a pace to which we are organically adapted, we have constant pressure to be “the best in the world” – which is, of course, impossible for the overwhelming majority of people. Thus, it puts these people in a position of eternal unhappiness, eternal unfulfillment, and dissatisfaction with themselves.
The effects? 15-year-olds comparing themselves to the most famous peers from around the world, both in beauty and achievements, which drives them into depression. A 25-year-old I knew personally was depressed because he compared himself with global “success stories” and felt like a failure. And in a local community – normal from the perspective of how humans are built – each of them would be quite satisfied with their position and achievements.
Anti-social media deepen this problem in many ways. First, they flaunt images of the lives of “influencers” – people allegedly achieving stunning success, leading perfect lives, traveling the world in private jets, living in luxurious villas. Interestingly, the most popular among them are those who promise others they’ll teach them how to achieve such “success” – that is, how to become such an “influencer.”
It’s worth noting that this entire “upper echelon” of anti-social media, these most popular “creators,” don’t actually create anything, don’t build anything, don’t contribute any value to civilization. It resembles the behavior of the mice in the final phase of Calhoun’s “mouse utopia” experiment, which did nothing but show each other how beautiful their fur was. Here too, we have only the flaunting of one’s beauty and ostentatious consumption.
What’s worse, a significant part of all this is simply fake. Plastic surgeries and “beautifying” filters have been standard for a long time. Currently, “influencers” rent film studios that allow them to pretend they live in a luxury apartment. There are even studios mimicking private jets that can be rented by the hour to take photos pretending status and wealth. However, people who later watch these videos often aren’t aware of this.
And this fakery isn’t limited to the absolute “top from America” – it’s used by tens of thousands of content “creators.” In Poland too, people rent apartments through AirBnB to shoot reels in them. Larger cities already have special kitchens for shooting cooking reels (a very popular niche): these are, of course, not regular kitchens but beautiful designer ones, polished to a shine by studio staff between recordings. Then we have millions of women frustrated that their kitchen or apartment doesn’t look like that. In its own way, this “average” content is more harmful, because its simulation of real life seems more authentic and less unattainable – though such a kitchen is just as unreal as a flight in a private jet built of plywood in a studio.
Artificial intelligence, which is increasingly entering anti-social media, adds a new level of fakery – faces and silhouettes will no longer just be “enhanced” with filters; now the entire environment, and even the characters themselves, can be generated wholesale by AI. Thus a completely unreal world is created, even more unattainable for the average person, yet at the same time ever better at passing itself off as attainable reality – and therefore even more destructive.
This is much more harmful than the old reports of the lives of stars and princesses in tabloids. Firstly, because tabloids were read by a relatively small part of society. Secondly, no one had illusions they could live like a princess – it was a different world. Meanwhile, “influencers” pretend they are “just like us,” that anyone can achieve what they have. Thirdly, tabloids were read – the process of reading and viewing photos doesn’t affect emotions and people’s psyche in general as strongly as moving images saturated with intense colors from a glowing screen (in this respect, Instagram, TikTok, etc., are far more toxic than LinkedIn). And fourthly, even if someone read a tabloid, they didn’t do it many times a day, every day, from morning till night. But now everyone has a “smartphone” in their hand for many hours a day. Combined with the prevalence of anti-social media, this creates constant pressure on every user, constantly showing them how crappy their life is.
As a result, a system has emerged that, on one hand, bombards people with unrealistic models and expectations, and on the other, increasingly takes away what gave support for millennia – a stable local community where everyone could find their place. It’s no wonder, then, that the result is an epidemic of mental health problems, the scale of which in younger generations is already alarming.
Worse yet, there doesn’t seem to be a way out of this situation. Anti-social media have already become too important a part of social and professional life to completely break free from them. The possibility of living without them has already become a luxury available only to the few wealthy enough not to need them – or to those who decide to completely reject modern civilization and go somewhere into the wilderness. And artificial intelligence, although it can serve as a kind of buffer between us and the toxic system of anti-social media, simultaneously deepens the problem by introducing a new level of artificiality and fakery.
Maybe, however, AI will “finish off” anti-social media completely, to a level where there will no longer be any real content there, and an awakening will occur? I doubt it – but one can hope.
Recently, a small “storm” arose in the world of anti-social media. Meta announced that it would introduce “artificial characters” on Facebook and Instagram – profiles generated by AI with which users could interact. When these profiles actually appeared at the end of December, they were met with a very negative reaction and are supposedly going to be withdrawn. But personally, I believe they won’t be withdrawn – they’ll be quietly replaced with better versions, creating more convincing content that won’t be so easily recognized.
Because already, independent app developers have “connected the dots” and created software that automatically generates content for anti-social media. Under my posts on LinkedIn, AI-generated comments have appeared several times, and looking at posts, I’m convinced that at least 50%-60% of content on that platform is already created with significant AI assistance if not completely autonomously. And I think it’s similar on Facebook and increasingly – as tools for creating convincing video become more widespread – on Instagram and TikTok too. So I understand the thinking of Meta’s leadership – if external players are using it, why shouldn’t they use it themselves?
But has the fact that more and more posts are created by AI really drastically lowered the quality of content on LinkedIn or other anti-social media? I don’t think so, but others do. I’ve noticed that some users are starting to complain that the content is repetitive, boring, that posts are similar to each other, blending into one mush. And they blame AI for this. Meanwhile, they’re missing one important thing: AI is not the source of the problem. The source is the very essence and structure of anti-social media.
Let me explain.
Anti-social media has a fundamental structural flaw (if you can call it a flaw): it forces users to constantly publish new content. Content older than a week, maybe two – even if it was exceptionally good and popular – practically disappears. So if you’re an influencer, salesperson, consultant, entrepreneur, or anyone whose sales – that is, livelihood – depends on visibility, you need reach. And to have reach, you must publish. Not just occasionally, when you really have something to say, but regularly, daily, preferably several times a day. Only then will the algorithms “notice” you and show your content to others, which brings likes and followers, which brings even more promotion – and, finally, sales, which is what everyone playing this game is ultimately after.
It’s nicely called “content marketing.”
The problem is that no one – not even the most brilliant mind in the world – can create truly valuable, deep content twice a day, every day, all year long. It’s simply impossible. So what happens? People end up writing whatever they can: repeating the same observations and arguments, presenting the same stories as “new,” and so on. If you have to produce hundreds of posts, the quality of the average post must decrease. At best, one can hope for “creative” rehashing – garnishing an age-old story with a personal introduction or a paradoxical (generated) graphic. Or reporting the same news as hundreds of thousands of other accounts.
At the same time, anti-social media promotes short content. Longer content gets truncated (notice that you have to click “read more”) and is less promoted. Why? Because the goal of the anti-social media operator is also sales, specifically showing ads to users. If a user spends too much time on a single post or video, they’re not seeing ads during that time. So reading longer texts – like this one – carefully is the worst thing in the world for LinkedIn or Facebook, because it means you won’t see any ads for 10 minutes! You’re supposed to see something short that you’ll either like or that will annoy you – be sure to click that icon! – and then scroll on, where another – short! – ad awaits. And so on, round and round, for as long as possible, because the longer you scroll, the more ads the anti-social service will show you, increasing the chance of a sale (“conversion”), for which their customers, the advertisers, pay.
So it has to be short, and it has to be frequent – a lot, as much as possible! Because only then do you exist.
The effect is that mush – shallow and repetitive in essence, but colorful and engaging – MUST be created. That’s simply the logic of how anti-social media works.
And there’s no doubt that AI will do this better than a human. Of course, it won’t write anything groundbreaking or deeply wise – not because AI as such is inherently incapable of it, but because on media like LinkedIn or Facebook that’s not the point at all, as I’ve shown above. AI can calmly (it has no emotions, after all) and methodically adapt, generating exactly the kind of content the anti-social media algorithms expect – and that average users fitting a given profile, a given “niche,” like to “like.” More than that: AI, especially when fed a stream of current news and posts from other profiles (to pick up trends), has a chance to generate content objectively better than a human racked by the daily “what do I post today? I have to post something.” And to generate it as often as needed for the LinkedIn system to start showing that content to more users.
What’s sad is that it’s becoming increasingly difficult to live without these media, without taking part in the intellectual downward slide they create. You have no choice: you can’t write on LinkedIn or X only when you feel you have something smart and interesting to share; you can’t, say, publish one longer, well-thought-out text a month, because then you lose to the “noise” generated by those better adapted to this reality. I know this from my own experience. So isn’t entrusting the daily “posts” necessary for survival in today’s world to artificial intelligence a great solution? Especially if most LinkedIn “consumers” won’t notice the difference?
AI, then, is not the cause of the problem – it’s a solution to it, one that can free us from the necessary but intellectually degrading work of producing a constant stream of shallow content on one narrow topic. An imperfect solution, perhaps, but in a sense an inevitable one: a kind of “buffer” between people and the toxic system of anti-social media, which is the real source of the problem.
Want deeper and wiser content? Well, that requires time and effort – including from the reader. So where to find it? Read books – especially old ones, certainly still written by humans. Read the blogs that are still alive and still convey some thought, some reflection, some value. The only question is who will be able to afford this luxury – both the luxury of creating such content, valuable but seen by few, and the luxury of reading it.
Because not having to be present on anti-social media at all is already a true luxury that only a few can afford.