AI might be a market bubble. It’s not a technology bubble.

It’s entirely possible that OpenAI overextended financially, that the circular transactions between AI labs, Nvidia, and tech giants like Oracle will end badly (some commentators argue Oracle is in trouble already and that this is the real reason behind its mass layoffs). It’s beyond doubt that AI services are currently priced well below the cost of the electricity alone needed to run them – never mind the hardware. So yes, OpenAI might go bankrupt and Anthropic’s token prices might shoot past affordability for most of us. The market bubble can absolutely burst – especially with energy prices climbing.

But AI as a technology isn’t going anywhere.

We already know how to train large language models. We know how to build models that run on a Mac Mini or a PC with a decent Nvidia card. We know how to build ML models beyond LLMs. We even know how to do it cheaper – as the Chinese labs demonstrate. None of that knowledge disappears. Neither do the people who have it. If OpenAI goes bankrupt, those people go somewhere else. If token prices skyrocket, that creates a massive incentive for others to figure out how to do it cheaper.

You can’t put the technological genie back in the bottle.

Those who think economic factors will make AI vanish and we’ll return to the pre-AI days – with an El Dorado for UX designers and hand-written code – are detached from reality. Since the industrial era began some 250-300 years ago, plenty of economic bubbles have burst and plenty of companies have gone bankrupt, but a major technology has never just disappeared. For that to happen you’d need a total collapse of technological civilization. And that’s (hopefully) not in the cards.

AI is here and it’s staying. There’s no going back – even if some people dream about it. I can sympathize with the sentiment, but unless a time travel device is discovered you can’t escape to the past. Better to adapt than delude yourself that it won’t be necessary.

Lately I’ve been seeing more and more companies preparing to roll out AI tooling to their engineering organizations. Hundreds, sometimes thousands of programmers are about to be told: here are your new tools, here is your training, here are the metrics we’ll use to measure adoption. The usual playbook.

I came across a job posting where a company is looking for a person to head such an effort. They even used a beautiful phrase: “AI-augmented craftsmanship.” I liked it. It captures something true – that programming, done well, is a craft. But the rest of the posting was the standard transformation toolkit: golden paths, DORA metrics, enablement crews, adoption dashboards.

This is a playbook with a dismal track record. Most such transformations fail – not because the tools are wrong or the metrics are bad, but because they address the wrong problem.

They think they’re implementing new technology. They’re actually asking craftsmen to stop practicing the craft that defined them for decades.

That’s not a training problem. That’s an identity crisis.

The Craft That’s Changing

For decades, what made a good programmer? The ability to write clean, elegant, readable code. The kind where variable names mean something, with well-designed classes and methods, where another programmer can open the file six months later and understand what’s happening.

This wasn’t just pragmatism. The “Clean Code” movement was about professional pride. The craft of translating complex logic into precise, beautiful syntax – this is what separated the professional from the amateur, the senior from the junior. It was how programmers knew they were good at what they did.

When you work with tools like Cursor this changes. You stop writing code and start directing its creation – and the rise of agents like Claude Code, where you do not even look at the code unless you have to, makes the change even more profound. You describe intent, review output, catch errors, guide the AI when it goes astray. Precise knowledge of syntax matters less – the AI knows syntax well enough and analyzes it faster than any human. What matters more is the ability to decompose problems, to design the process of creation itself, to recognize when the generated code is wrong or dangerous or just subtly off.

This is still valuable work. Arguably more valuable. But it’s not the same craft.

The programmer who spent fifteen years (or more!) mastering the intricacies of a language, who took pride in writing code so clean it read like prose – that person is being told that what made them excellent matters less than it used to. This isn’t learning new tools. This is being asked to become someone else.

The Old Skills That Matter More, Not Less

But the new craft isn’t built from scratch. It carries over more from the old one than most people realize – just not the parts they expect.

Understanding code still counts. You need to read and evaluate what the AI produces – and the AI produces a lot, fast. If you can’t tell good code from bad code, you’re useless in this new world, perhaps even dangerous.

But what matters even more is execution discipline. Building a product as a series of small changes, each fully DONE – technically complete, tested, releasable – even if functionally incomplete. Incremental, iterative development. This was always a good practice. Now it’s essential, because the worst thing you can do is entrust a large system to an AI agent and let it run wild. The results will look impressive and be riddled with problems nobody understands because nobody reviewed the intermediate steps.

Tests matter more than ever. And since AI can generate tests too, there is even less excuse to skip them.

So the skills that carry forward are not syntax mastery but engineering judgment, architectural thinking, and the discipline of building things right in small steps. The programmers who have these skills are more valuable than ever – and most good programmers do, even if not all of them see it yet.

Why Dashboards Won’t Help

Organizations see resistance to AI adoption and think: more training, better tooling, clearer metrics. They’re solving the wrong problem with ever more precision.

You cannot train someone out of an identity crisis. The engineer who drags their feet isn’t necessarily a luddite. They might be someone who correctly perceives that the thing which made them valuable, which made them *them*, is being declared obsolete. More training sessions won’t fix that. Green dashboards won’t fix that.

What Would Actually Work

First, stop pretending it’s primarily about technology. The tools are the easy part – they are already good and improving quickly. The hard part is the cultural and identity dimension that traditional transformation playbooks ignore.

Before rolling out anything, you need to understand the culture you’re working with. What do these engineers actually value? What makes them proud? What are they afraid of losing? Where is the resistance really coming from – is it fear of job loss, or something deeper? What tribes and groups are there?

This is ethnographic work, not project management. Dave Snowden’s Estuarine Mapping framework offers a useful approach here. The core insight: in any complex system, some things are changeable and some aren’t. Some constraints are like the bedrock of an estuary – immovable. Others are like sandbars – they shift if you apply the right pressure in the right place. Most transformation efforts waste enormous energy fighting bedrock while ignoring sandbars. Map the landscape first. Focus only on what can actually change.

And don’t do this mapping in a strategy room with consultants. Engage the people living in that culture. Including the skeptics. Especially the skeptics – they often see things the enthusiasts miss.

For the new craft to take hold, people need to see it as a craft they can grow in – not as a demotion from “real programming” to “supervising the machine.” That’s a narrative problem, an identity problem. It won’t be solved by tooling or training.

The Missing Qualification

Organizations attempting this transition need leaders who understand three things. Traditional programming – deep enough to have credibility, to understand what engineers are being asked to give up, to share their grief over a craft that is changing. AI tooling – hands-on, practical, not just vendor demos. And most importantly: cultural dynamics and identity change – how groups and tribes actually function, why people resist change that seems obviously beneficial from the CEO’s office.

Most job postings I see for these roles cover the first two. The third is nowhere to be found.

During a recent discussion on product management, one of the participants cited the iPhone as an example of visionary genius – a bold leap into the unknown, so audacious that only a true visionary like Steve Jobs could have made it, without any research or the incremental, empirical development process I had been discussing.

Steve Jobs presenting the iPhone (2007)

This perspective is interesting for several reasons.

First, it’s false – yet remarkably widespread.

Second, it’s false in a way that perfectly illustrates how our minds work – specifically, our tendency to create simplified narratives and attribute complex phenomena to individual “geniuses.”

Third, it’s false in a way that has practical consequences – because if we believe in lone geniuses, we lose sight of the actual mechanisms behind innovation. And that makes it impossible to apply those mechanisms.

What Actually Happened Before the iPhone

Before Jobs walked onto that stage in January 2007 and unveiled the iPhone to the world, there were already palmtops. A word almost forgotten today, yet for nearly a decade before the iPhone, people had been carrying small computers in their pockets.

The story begins in 1996 – eleven years before the iPhone’s debut – when Palm released the PalmPilot 1000 and 5000. By 1998, the Palm III had become a massive commercial success, used by geeks and busy executives alike. I have a friend who back then used his Palm for everything – from his calendar to reading e-books. Yes, Palm was also the “grandfather” of the Kindle – it just had a small, grayish LCD screen instead of e-ink.

Palm III (1998)

Palm and other PDAs used a stylus – a pen – to interact with the touchscreen. Jobs reportedly hated styluses (and rightly so – they’re annoying), but that doesn’t change the fact that the market for portable computing devices existed. Palm proved that people wanted to use such devices, at a scale that allowed it to become a serious company.

The Nokia 9210 Communicator, released in 2000, was one of the first true smartphones running Symbian with real applications – it demonstrated that a phone could do more than make calls or send text messages. In 2002 – five years before the iPhone – the creators of Palm released the Treo 180, a phone with PDA functionality. These two devices pointed the way forward, showing that strategically, both device categories were converging. This wasn’t Jobs’ vision – it was an obvious trend that everyone in the industry could see at the time. Conceptually, the iPhone was the same thing – just with a touch interface that didn’t require a stylus, no physical keyboard, a color screen, more memory, and so on. But it came out several years later, when advances in technology made it possible.

And what about the “revolutionary” app ecosystem? In 1999 – eight years before the iPhone – Japanese carrier NTT DoCoMo launched i-mode, the first major mobile platform with downloadable apps and content. Nokia and BlackBerry already had the ability to download software to their devices at the turn of the century. The App Store wasn’t a breakthrough conceptual innovation – it was simply a well-executed implementation of something that already existed elsewhere in fragmented, chaotic form.

At the same time, a connectivity revolution was underway. The WiFi 802.11b standard was ratified in 1999, and devices supporting it soon appeared, with prices dropping rapidly. In 2003, the first commercial 3G networks launched in Europe and Asia, offering mobile data speeds of 200-384 kbps – enough for real web browsing. By 2006, 3G was available globally, and WiFi had become nearly ubiquitous in cafes, hotels, and airports worldwide.

This was a technological convergence that made mobile web browsing technically feasible – before Apple even started thinking about a phone. And other companies were already taking advantage of it, most notably the now-forgotten BlackBerry.

Fear, Not Vision

Now for the most interesting part. What drove Apple to build the iPhone?

Not a vision of the future. Fear of losing it.

iPod 3rd gen (2003)

The iPod – a portable music player released in October 2001 – was Apple’s money-making machine at the time. It accounted for roughly 50% of their revenue. But the leadership team could clearly see that mobile phones were becoming increasingly capable – and that sooner or later, every phone would have a built-in music player. And nobody would want to carry two devices. Phil Schiller – one of Apple’s key executives – reportedly warned outright: “Phones are going to eat our lunch.”

So Apple had a choice: accept the gradual decline of the iPod business, or build a phone.

Then came the Motorola ROKR. In 2004, Apple partnered with Motorola to create a phone with iTunes – a collaboration they announced publicly. When Jobs saw the result, he was furious. The device was slow, ugly, simply underwhelming – the user experience was far worse than the iPod’s. Despite this, the device was unveiled in September 2005 – and proved to be a commercial flop.

Tony Fadell – considered the father of the iPod – later said it plainly: the ROKR convinced Jobs that if Apple wanted music on a phone, they would have to build the phone themselves.

So the decision to go ahead with the project that ultimately produced the iPhone wasn’t a leap into the unknown driven by vision. It was a reaction to a threat to the existing business, fueled further by frustration with a failed partnership.

Where Did Multi-Touch Technology Come From?

What about the revolutionary touchscreen you operate with your fingers?

Apple didn’t invent it.

FingerWorks was founded in 1998 by two researchers from the University of Delaware – Wayne Westerman and John Elias. Westerman had wrist problems (carpal tunnel syndrome) and developed multi-touch technology so he could work without pain. The company made specialized keyboards and touchpads for people with similar issues – products like the TouchStream and iGesture Pad gained enthusiastic fans among those suffering from repetitive strain injuries.

iGesture Pad (2005) – screen keyboard with gesture support

In April 2005 – two years before the iPhone – Apple quietly acquired FingerWorks. This technology – purchased, not invented – became the foundation of the iPhone’s screen. Westerman and Elias were hired as senior engineers at Apple and went on to author many of the company’s multi-touch patents over the years.

Jobs saw a demo of this technology on a large table resembling a ping-pong table, projecting a Mac interface. He reportedly said: “This is really cool. We should put this in a phone.”

Good eye? Yes. Decisiveness? Yes. Vision of something from nothing? No.

Two Competing Teams

When Apple began work on the phone in 2004 (codenamed “Project Purple”), Jobs had two teams compete against each other. One team – led by Tony Fadell, creator of the iPod – tried to build a phone based on the iPod’s click wheel interface (prototype designated P1). The other – led by Scott Forstall – worked on a device using multi-touch (prototype P2).

For many months, two parallel prototypes existed. Fadell’s team tried dozens of ways to make the click wheel work as a phone interface – without success. As Fadell later recalled: “We tried 30 or 40 ways of making the wheel not become an old rotary phone dial, and nothing seemed logical or intuitive.”

Jobs ultimately chose Forstall’s version, but the decision came only after extensive testing and comparison. This wasn’t the brilliant vision of a lone genius – it was a systematic elimination process conducted by competing engineering teams. And there were definitely multiple iterations, meaning an empirical, cyclical product refinement process.

Jobs Was Against the App Store

Now for the icing on the cake.

The App Store – perhaps the most influential element of the iPhone ecosystem, the source of unimaginable fortunes for Apple and developers – wasn’t Jobs’ idea. In fact, Jobs was initially against it.

When Apple released the first iPhone in June 2007, the only apps were those made by Apple. Jobs wanted developers to create “web apps” – applications running in the Safari browser. At WWDC 2007, he presented this as a “revolutionary” vision. Developers were disappointed – they wanted to create real, native apps that were fast, efficient, and could leverage the hardware’s capabilities.

Art Levinson – an Apple board member – reportedly called Jobs multiple times, lobbying to allow third-party apps. Jobs resisted – partly because he worried about system security and stability, partly because his team “didn’t have the bandwidth to figure out all the complexities involved in policing third-party developers,” as Levinson put it.

Eventually, he gave in to the pressure. In October 2007 – four months after the iPhone’s launch – Jobs announced that Apple would release an SDK for developers. The SDK came out in March 2008.

The App Store launched on July 10, 2008 – more than a year after the iPhone’s debut – with 500 apps. In the first weekend alone, there were 10 million downloads. Today, the App Store generates tens of billions of dollars in annual revenue and forms the backbone of the entire iOS ecosystem.

And Jobs was against it – but was smart enough to change his mind.

This perfectly illustrates how much we oversimplify history when we attribute everything to one person’s “genius.”

How Innovation Actually Happens

So, to recap: the iPhone emerged from a convergence of multiple factors spanning a decade:

  • 1996-2002: PDAs (Palm, Windows CE) proved that people want portable computers
  • 1999: NTT DoCoMo’s i-mode demonstrated the mobile app distribution model
  • 1999-2003: WiFi and 3G networks made mobile internet technically feasible
  • 2000-2002: Nokia and BlackBerry smartphones with apps pointed the direction
  • 2002: Palm Treo showed the convergence of phone and PDA
  • 2001-2005: iPod’s success created a foundation – but also a threat to Apple
  • 2005: The Motorola ROKR failure convinced Apple they had to build their own device
  • 2005: The FingerWorks acquisition delivered crucial multi-touch technology
  • 2004-2006: Competition between two internal teams (Fadell vs. Forstall) led to the best solution
  • 2007-2008: Pressure from developers and the board forced the creation of the App Store – despite Jobs’ initial opposition

Where in all of this is the “visionary genius who single-handedly invented the future”?

Nowhere. Because that’s not how innovation works.

But – and this is an important “but” – Jobs’ role was enormous. Just not in the way the myth suggests.

What Jobs’ Genius Actually Was

First, Jobs created the conditions in which innovation could emerge. A culture where two teams could compete on fundamentally different approaches. An environment where people were pushed beyond what they thought possible – the famous “you have two weeks” ultimatum to the interface team, after which they worked 168 hours a week and created something that surprised even Jobs. Most organizations are incapable of creating such conditions, nor can they attract the talented people who thrive in them.

Second, Jobs had a “whole widget” philosophy – controlling hardware, software, and services together. This was radical. Nokia made hardware. Microsoft made software. Carriers controlled distribution and decided what phones could do. Jobs said: we control everything, because only then can we deliver the experience we want to deliver. That’s why he was furious about the ROKR – Motorola and the carriers destroyed the user experience.

Third, Jobs was a ruthless decision-maker who maintained focus on what mattered through his famous “saying no to a thousand things”. Killing the click wheel phone after months of work took courage. Many executives would have shipped it because “we’ve already invested so much”. Jobs killed it because it wasn’t good enough – and we know there were many more such projects at Apple.

Fourth – and this is often overlooked – Jobs was a brilliant and tough business negotiator who protected the product vision. The deal with AT&T/Cingular was unprecedented. Carriers had always controlled everything – what features a phone could have, what software ran on it, even what the interface looked like. Jobs broke that model. He secured a level of control for Apple that no manufacturer had ever achieved before. Without that, the iPhone would have been crippled like the ROKR E1.

Fifth, Jobs attracted and retained talent while maintaining creative tension between them. He brought key people from NeXT. He hired and empowered people like Fadell and Ive. He created an environment where the best wanted to work – and compete with each other.

What We Lost When Jobs Died

There’s something that shows Jobs’ contribution – though different from the myth – was real and significant.

Apple under Tim Cook is more financially successful than ever. Revenue keeps growing. Margins are excellent. The supply chain runs like a Swiss watch.

But where are the new product categories? Where are the stunning innovations? Where is “one more thing”?

The company now iterates brilliantly on existing products but doesn’t revolutionize. We have the iPhone 16, which is a better iPhone 15, which was a better iPhone 14. MacBooks are faster. iPads are thinner. But where is the next iPhone – a product that creates an entirely new category?

Tim Cook is operationally brilliant – he’s the reason Apple has the supply chain it has and the margins it has. But Cook optimizes rather than revolutionizes. The “magic” – that sense that Apple might do something completely unexpected and amazing – seems to be fading.

This suggests that Jobs brought something real and difficult to replace. But it wasn’t “visionary genius inventing products from nothing.” It was something else: creative leadership that pushed people toward exceptional work. A willingness to take risks on new categories. Taste that could distinguish “good enough” from “insanely great.” The ability to maintain creative tension without destroying the company. And ruthlessness in business negotiations that protected the company’s ability to execute on its product vision – and be profitable.

Jobs wasn’t an inventor who dreamed up products. He was a catalyst, editor, arbiter of taste, negotiator, and culture-builder – who enabled teams of talented people to do exceptional work, and then protected their ability to deliver it to users without compromise.

Why This Matters

The myth of the lone genius-inventor is harmful – but not because leadership doesn’t matter. It does, enormously. Jobs proved that.

The problem with the myth is that it points to the wrong thing. It suggests that innovation is a moment of enlightenment in one person’s head. That all you need is “vision” and the rest will somehow fall into place. That the genius invents the product, and then others merely execute.

Reality is different. Innovation is a process. It’s creating conditions where talented people can experiment, compete, make mistakes, and correct them. It’s ruthlessly cutting what doesn’t work – even if “we’ve already invested so much.” It’s negotiating with business partners for terms that protect product integrity. It’s attracting talent and maintaining productive tension between them.

Jobs was brilliant at all of this. But his genius lay in conducting the orchestra he built, not in playing a solo.

For those of us who want to create innovative products and organizations, there’s a practical lesson here. Don’t look for a “visionary” who will invent the future. Build teams capable of systematic exploration. Create a culture that tolerates experiments but doesn’t tolerate mediocrity. Learn to say “no” – even to things you’ve already invested in. Negotiate business terms that protect your ability to deliver an excellent product.

And remember that history is rarely as simple as we tell it. The iPhone didn’t spring from Jobs’ head like Athena from the head of Zeus. It grew from a decade of technological evolution, from the work of many teams, from failures and successes, from acquisitions and negotiations. And Jobs – yes, he was a genius – but a different kind of genius than the myth suggests. A genius who knew how to create conditions where other geniuses could do their best work.

That’s harder to emulate than “have a vision.” But it’s actually possible to emulate.

In December ’24 I switched from GitHub Copilot to Cursor, and I definitely don’t regret it. It makes it very convenient to draw on various AI models while working on code, significantly increasing a programmer’s capabilities and speed. However, making good use of this tool’s power requires using its features skillfully and adhering to one very important rule.

You’re Still in Charge of the Code

This is the most important rule! You still need to understand what you’re doing and know how the code you’re creating works and why. This requires at least basic programming knowledge and some hands-on experience. Without these fundamentals, AI will quickly lead a “vibe-coder” astray.

Example: I’m creating an application where I deliberately don’t use an ORM, instead using my own class that handles the database through SQL queries. When creating one of the functions, the AI generated code that introduced a specific ORM. If I didn’t understand what was happening – what an ORM is, and why I didn’t want one in this project at that point – I obviously wouldn’t have noticed anything, because the proposed code was coherent and worked. Except that it wasn’t consistent with my decision or with how the rest of the application was written, which would have led to problems in the future.
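To make the contrast concrete, here’s a minimal sketch of the two styles – the class, table, and column names are hypothetical, not taken from my actual project. The point is only that the project’s convention is a thin wrapper over raw SQL, while the AI’s suggestion quietly introduced a separate persistence layer:

```python
import sqlite3

class Database:
    """Thin wrapper over raw SQL - the convention the rest of the application follows."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.row_factory = sqlite3.Row  # rows addressable by column name

    def execute(self, sql: str, params: tuple = ()) -> None:
        with self.conn:  # commits on success, rolls back on error
            self.conn.execute(sql, params)

    def query(self, sql: str, params: tuple = ()) -> list:
        return self.conn.execute(sql, params).fetchall()

# Usage in the project's own style: plain SQL, nothing hidden.
db = Database()
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
db.execute("INSERT INTO users (name, active) VALUES (?, ?)", ("Alice", 1))
active_users = db.query("SELECT id, name FROM users WHERE active = ?", (1,))

# What the AI proposed instead was ORM-style code: declarative model classes plus a
# session object. Coherent, working code - but it quietly pulls a whole new
# persistence layer into a project that deliberately avoids one.
```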

So when using AI support, you must maintain a “leadership role” and control what’s happening.

Of course, the level of this control doesn’t have to be the same throughout the application. I’ve learned to divide my project into three “zones”:

  • core, where I want to understand every line of code (I still use AI to generate code here, but I only accept it when I fully understand what it does),
  • an intermediate area, where I want to understand how all methods/functions work but don’t need to know the details, and
  • the rest, where I go full – as they say nowadays – “vibe coding.”

Example: in the application I’m currently developing, the core is an AI agent system based on the AG2 framework. There I maintain full control, examining every line of generated code before accepting it and writing a significant amount of code myself. I also use AI support to help find bugs or suggest solutions, but I thoroughly review everything before implementation. The intermediate zone includes classes handling user interaction logic – there I know how they work, but do not analyze them line by line. And “the rest” is HTML and JavaScript code that forms the application interface – there I don’t even know which JS functions are used and for what, because it’s not really important – what matters is that it works.

Planning Actions

For minor changes – such as fixing a small bug or making a slight change to existing functionality – you can use AI support simply in “Agent” mode by telling it what you need.

For larger changes, however, it’s essential to separate planning from implementation. This applies especially when adding major functionality or restructuring application logic while using AI to help write code.

When tackling a larger change, I follow this process:

Step 1: I begin with a comprehensive prompt describing the background (current state and general goals). In it, I then outline the intended steps to achieve my objective – what to introduce or change, architectural considerations, and any new solutions to incorporate.

During this initial phase, I engage in dialogue with the models – seeking opinions, exploring alternatives, and refining ideas. Here, the AI isn’t generating code but helping design the change. For this purpose, I typically use Cursor’s “Ask” mode or my custom “Analysis” mode.

Step 2: Once I’ve developed a satisfactory plan, I have the model document it along with the overall goal and a numbered list of implementation stages. Each stage description includes what will be accomplished and sometimes pseudocode or actual code showing key structures. I ensure each stage concludes with something verifiably complete and functional.

After receiving this document, I carefully review and edit it as needed before saving it to the project’s documentation directory (usually /doc).
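For illustration, such a plan document might look roughly like this – the feature and stages below are invented, not taken from a real project:

```
# Plan: add CSV export to the reporting module (hypothetical example)

Goal: users can download any report as a CSV file from the report view.

1. Add a ReportExporter class that converts a report object into CSV rows.
   Done when: unit tests cover a typical report and an empty report, and all tests pass.
2. Add an /export endpoint that returns the CSV for a given report ID.
   Done when: the endpoint serves a valid file for an existing report and a clear error otherwise.
3. Add an "Export CSV" button to the report view, wired to the new endpoint.
   Done when: clicking the button downloads the file in an end-to-end check.
```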

Step 3: For code creation, I switch to Cursor’s “Agent” mode and point the model to my plan document (using Cursor’s context selection feature). I request code generation for specific stages, and we collaborate until each stage works correctly, including tests. This stage-by-stage approach consistently yields excellent results for substantial changes.

This structured process prevents AI assistants from getting lost during complex changes. The problem typically occurs because they lose context – as new messages in the chat accumulate, our initial instructions scroll out of the context window, causing the model to “forget” the purpose of the changes. Consequently, it begins generating inconsistent or off-target solutions. With a reference plan in the chat context (thanks to the plan file being kept in the context by Cursor) and clear stage-by-stage direction, the AI maintains understanding of both the current task and the broader goal.

The earlier planning phase also leverages the models’ knowledge at a higher abstraction level, discussing solution options conceptually before implementation. This collaborative planning produces higher quality designs and, ultimately, better results.

AI Needs Documentation Too

We typically assume that AI models’ training data includes comprehensive knowledge about programming languages, libraries, and tools. This is generally true, especially since most documentation and much source code is freely available online, making it an accessible part of training datasets.

However, remember that with such massive amounts of data, finding the relevant information presents challenges even for sophisticated models. Additionally, while programming languages evolve slowly and rarely break backward compatibility, many libraries and frameworks develop rapidly. As a result, the information a model draws from its training data about them very quickly becomes outdated!

Fortunately, Cursor addresses this by allowing us to create a RAG (vector store) with documentation of our choice. In Cursor preferences under the Features tab, after scrolling down, you’ll find a Docs section. The Add new doc button lets you incorporate additional documentation for your libraries.

Cursor attempts to process indicated pages comprehensively, including their subpages. It’s helpful to verify what’s been processed by clicking the open book icon, confirming what’s available to the models.

Sometimes – depending on how source pages are structured – you’ll need to add documentation manually page by page, but this effort is worthwhile for libraries and tools central to your project. Once added, all Cursor models that support client databases will effectively utilize this documentation.

Keep two important points in mind:

  • The documentation database isn’t project-specific or stored in project files. This can be inconvenient when working across multiple projects that require different documentation sets.
  • Cursor doesn’t automatically update documentation when newer versions become available or links change. When using tools or libraries that receive updates, you’ll likely need to remove outdated documentation and add the current version manually.

In the Next Episode: About Modes

This article has grown quite long, so I’ve decided to split it into parts. In the next installment, I’ll explore how to effectively use Cursor’s various working modes, including how to create and leverage your own custom modes.

In my previous article I outlined one aspect of anti-social media’s mechanism – the push to constantly publish something, anything. However, their harmful effect is much, much deeper – and unfortunately much more powerful.

To properly describe this, we need to go back in time. And quite substantially – to the beginnings of humans, of Homo sapiens. The topic isn’t as simple as it might seem, because when our ancestors appeared on Earth is a surprisingly debatable question. On one hand, fossils anatomically consistent with modern humans date back as far as 315,000 years. On the other hand, some scholars argue that although these individuals might have corresponded biologically (even genetically) to Homo sapiens, they didn’t match the definition in terms of behavior – that is, the level of thinking manifested in creating suitable artifacts: tools, art, and so on. However, the most “conservative” estimate of these skeptics places the beginning of humans on Earth at about 50,000 years ago, as that’s when artifacts appear in excavations that leave no doubt about the level of their creators.

Working with this pessimistic value – 50,000 years of humanity – means that for tens of thousands of years, people lived in a certain specific way, and it’s precisely this way of life that shaped our psychology. We could add that even if the preceding 250,000 years belonged to “almost-humans,” given the continuity of the evolutionary process it was their way of life that shaped not only our current psychology but even the structure of our brain.

What was that life like? Aside from evaluating whether it was a “nice” life or not (especially from today’s perspective), it was a life with the following characteristic features:

  • Large families – typically, a woman had more than six children during her lifetime, unless she died in one of those childbirths
  • Strong blood ties – clans, tribes, etc.
  • Territorial stability – very few people traveled further than to the neighboring village
  • Social stability – the same people, most often forming one or a few clans, lived in the same area for hundreds of years. As a result, people mostly interacted daily with the same individuals for most of their lives; changes in “personnel composition” came mainly from natural processes (births and deaths).
  • Stability of occupations – people mostly did roughly the same thing their entire lives, of course improving at it to some degree. This occupation was largely determined by the occupation of their parents, and therefore was usually not the result of their own choice.

So for most of this time – for tens, hundreds of thousands of years – the average person lived in a large family, part of a clan/tribe, in the area of a smaller or larger village that they knew since childhood, among people they knew. After mastering basic skills (walking, eating, speaking), they took part in their parents’ work – if a girl, the mother’s; if a boy, the father’s. And having mastered from them the skills appropriate to their gender and role, they practiced those skills until the end of their life. And if they did it reasonably well, they had a permanent place in the social structure of their village and clan.

This is exactly the kind of life that formed us. We find traces of this in our psychology, for example, in phenomena such as group dynamics or Dunbar’s number. Groups numbering more than a dozen or so people are unable to maintain cohesion and break down into subgroups – that’s a “trace” of the family size typical for 99.9996% of human history. We can personally know (recognize faces, know who they are, etc.) about 250 people at one time – that’s a “trace” of the size of a village with its “adjacencies.”

But these traces are more than just numbers, the sizes of these groups. It’s also something else – patterns, ways of interaction. Returning to the first example, a significant feature of group process is that the same sets of roles occur in groups regardless of culture, profession, social position, or education. Generally, I consider psychology to be a pseudoscience, but this particular thing is actually quite well-researched – i.e., group dynamics consistently work the same way across many studies conducted in different countries. At the same time, those studies also indicate that adapting to a group is a stressful process, which is understandable: from this long evolutionary perspective, forming a completely new group of people who didn’t know each other before was a rare event. This in turn means it was not an event our brains were trained for, evolutionarily speaking. Instead, they were trained in maintaining a spot in an existing group.

Thus, for the overwhelming majority of human history, people spent most of their lives in a stable community where they had their place. This place stemmed primarily from the position of their parents in the community, but of course, there was a kind of competition (which also involved various rites of passage – “transition to adulthood”). However, that competition was limited to the small community in which the person lived, and was also regulated by the rules established in their culture.

Changes in the community were very slow and somewhat natural – some died, others were born. The rules and principles governing life – customs, morality, the principles one had to follow to get by – changed even more slowly. They remained unchanged for centuries.

At the same time, people knew individuals from their family and village much more “truly” – it wasn’t the case that they only saw their “enhanced,” public version. It’s hard to pretend to be someone you’re not in front of people you interact with your entire life in a relatively small area. Because of this, people also knew that others weren’t perfect either, that they had worse moments, that something didn’t work out for them, and so on. So the picture of what others’ lives could be like was more realistic.

In summary: people evolved over tens or actually hundreds of thousands of years to live in stable communities, with clear, predictable rules, where the scale of comparing oneself to others was limited both by the size of the group and by culturally determined possibilities of potential advancement, and, on top of that, these comparisons were more truthful.

Meanwhile, the technical era – if counting from the wider implementation of machines and the first mechanical transport (railways) – has only lasted about 200 years, which is 0.4% of that time. The Internet era – counting not from its invention but from its popularization – is about 24 years, while anti-social media only emerged about 14 years ago. These numbers are 0.048% and 0.028% respectively – and that, I emphasize, is using a very conservative assumption as to how long humans have existed on Earth. If we consider the earlier period of formation of thinking, psychology, and even simply the human brain, the current period is imperceptibly small.

And it’s worth emphasizing that although technical changes have been ongoing for 200 years, traditional social forms (the family) were still maintained until the end of the 1960s. Put more simply, the degradation was already happening, but not on the scale it is now. And 70 years is too little for people to evolutionarily change their psychological makeup.

So we’ve now reached a stage where, in so-called developed societies, humans, formed over millennia for stability and for finding their place in a small community, collide with a world where none of this exists! It’s exactly the opposite – few people know their neighbors, but everyone compares themselves to everyone else, and the model is influencers flaunting real or – more often – fake success. Instead of the natural process of finding one’s place in a local community, developing at a pace to which we are organically adapted, we have constant pressure to be “the best in the world” – which is, of course, impossible for the overwhelming majority of people. Thus, it puts these people in a position of eternal unhappiness, eternal unfulfillment, and dissatisfaction with themselves.

The effects? 15-year-olds comparing themselves to the most famous peers from around the world, both in terms of beauty and achievements, which drives them into depression. A 25-year-old I knew personally who was depressed because he compared himself with global “success stories” and felt like a failure. And in a local community, normal from the perspective of how humans are built, each of them would be quite satisfied with their position and achievements.

Anti-social media deepen this problem in many ways. First, they flaunt images of the lives of “influencers” – people allegedly achieving stunning success, leading perfect lives, traveling the world in private jets, living in luxurious villas. Interestingly, the most popular among them are those who promise others they’ll teach them how to achieve such “success” – that is, how to become such an “influencer.”

It’s worth noting that this entire “upper echelon” of anti-social media, these most popular “creators,” don’t actually create anything, don’t build anything, don’t contribute any value to civilization. It resembles the behavior of the mice from the final phase of Calhoun’s “mouse utopia” experiment, which only showed each other how beautiful their fur was. Here too, we only have flaunting of one’s beauty and ostentatious consumption.

What’s worse, a significant part of all this is simply fake. Plastic surgeries and “beautifying” filters have been standard for a long time. Currently, “influencers” rent film studios that allow them to pretend they live in a luxury apartment. There are even studios mimicking private jets that can be rented by the hour to take photos feigning status and wealth. However, people who later watch these videos often aren’t aware of this.

And this fakery isn’t limited to the absolute “top from America” – it’s used by tens of thousands of content “creators.” In Poland too, people rent apartments through Airbnb to shoot reels there. In larger cities we already have special kitchens for shooting reels about cooking (a very popular niche): these are, of course, not regular kitchens but designer, beautiful ones, polished to a shine by studio staff between recordings. Then we have millions of women frustrated that their kitchen or apartment doesn’t look like that. And in its own way, this “average” content is more harmful, because its simulation of real life seems more authentic and less unattainable – though such a kitchen is just as unreal as a flight on a private jet built of plywood in a studio.

Artificial intelligence, which is increasingly entering anti-social media, adds a new level of fakery – faces and silhouettes will no longer just be “enhanced” with filters; now the entire environment, and even the characters themselves, can be entirely generated by AI. Thus, a completely unreal world is created – even more unattainable for the average person, yet ever better at passing itself off as attainable reality – and therefore even more destructive.

This is much more harmful than the old reports of the lives of stars and princesses in tabloids. Firstly, because tabloids were read by a relatively small part of society. Secondly, no one had illusions they could live like a princess – it was a different world. Meanwhile, “influencers” pretend they are “just like us,” that anyone can achieve what they have. Thirdly, tabloids were read – the process of reading and viewing photos doesn’t affect emotions and people’s psyche in general as strongly as moving images saturated with intense colors from a glowing screen (in this respect, Instagram, TikTok, etc., are far more toxic than LinkedIn). And fourthly, even if someone read a tabloid, they didn’t do it many times a day, every day, from morning till night. But now everyone has a “smartphone” in their hand for many hours a day. Combined with the prevalence of anti-social media, this creates constant pressure on every user, constantly showing them how crappy their life is.

As a result, a system has emerged that, on one hand, bombards people with unrealistic models and expectations, and on the other, increasingly takes away what gave support for millennia – a stable local community where everyone could find their place. It’s no wonder, then, that the result is an epidemic of mental health problems, the scale of which in younger generations is already alarming.

Worse yet, there doesn’t seem to be a way out of this situation. Anti-social media have already become too important a part of social and professional life to completely break free from them. The possibility of living without them has already become a luxury available only to the few wealthy enough not to need them – or to those who decide to completely reject modern civilization and go somewhere into the wilderness. And artificial intelligence, although it can serve as a kind of buffer between us and the toxic system of anti-social media, simultaneously deepens the problem by introducing a new level of artificiality and fakery.

Maybe, however, AI will “finish off” anti-social media completely, to a level where there will no longer be any real content there, and an awakening will occur? I doubt it – but one can hope.

Recently, a small “storm” arose in the world of anti-social media. Meta announced that it would introduce “artificial characters” on Facebook and Instagram – profiles generated by AI with which users could interact. When these profiles actually appeared at the end of December, they were met with a very negative reaction and are supposedly going to be withdrawn. But personally, I believe they won’t be withdrawn – they’ll be quietly replaced with better versions, creating more convincing content that won’t be so easily recognized.

Because already, independent app developers have “connected the dots” and created software that automatically generates content for anti-social media. Under my posts on LinkedIn, AI-generated comments have appeared several times, and looking at posts, I’m convinced that at least 50%-60% of content on that platform is already created with significant AI assistance if not completely autonomously. And I think it’s similar on Facebook and increasingly – as tools for creating convincing video become more widespread – on Instagram and TikTok too. So I understand the thinking of Meta’s leadership – if external players are using it, why shouldn’t they use it themselves?

But has the fact that more and more posts are created by AI really drastically lowered the quality of content on LinkedIn or other anti-social media? I don’t think so, but others do. I’ve noticed that some users are starting to complain that the content is repetitive, boring, that posts are similar to each other, blending into one mush. And they blame AI for this. Meanwhile, they’re missing one important thing: AI is not the source of the problem. The source is the very essence and structure of anti-social media.

Let me explain.

Anti-social media has a fundamental structural flaw (?): it forces users to constantly publish new content. Content older than a week, or maybe two, even if exceptionally good and popular, practically disappears. So if you’re an influencer, salesperson, consultant, entrepreneur, or anyone whose sales – that is, livelihood – depends on visibility, then you need reach. And to have reach, you must publish. And not just occasionally, when you really have something to say, but regularly, daily, and preferably several times a day. Only then will the algorithms “notice” you and show your content to others, which will result in likes and followers, which will result in even more promotion – and ultimately sales, which is what everyone playing this game is ultimately after.

It’s nicely called “content marketing.”

The problem is that no one – not even the most brilliant mind in the world – can create truly valuable, deep content twice a day, every day, throughout the year. It’s simply impossible. So what happens? People have to write anything, repeat the same observations, arguments, present the same stories as “new,” and so on. If you have to create hundreds of posts, the quality of the average post must decrease. At best, one can hope for “creative” rehashing, that is, garnishing an age-old story with a personal introduction or a paradoxical (generated) graphic. Or reporting the same news as hundreds of thousands of other accounts.

At the same time, anti-social media promotes short content. Longer content gets truncated (notice that you have to click “read more”) and is less promoted. Why? Because the goal of the anti-social media operator is also sales, specifically showing ads to users. If a user spends too much time on a single post or video, they’re not seeing ads during that time. So reading longer texts – like this one – carefully is the worst thing in the world for LinkedIn or Facebook, because it means you won’t see any ads for 10 minutes! You’re supposed to see something short that you’ll either like or that will annoy you – be sure to click that icon! – and then scroll on, where another – short! – ad awaits. And so on, round and round, for as long as possible, because the longer you scroll, the more ads the anti-social service will show you, increasing the chance of a sale (“conversion”), for which their customers, the advertisers, pay.

So it has to be short, and it has to be frequent, a lot, as much as possible! Because only then do you exist.

The effect is that shallow, repetitive in essence but colorful, engaging mush MUST be created. That’s simply the logic of how anti-social media works.

And there’s no doubt that AI will do this better than a human. Of course, it won’t write anything groundbreaking and deeply wise – and not because AI as such is inherently incapable of it, but because in media like LinkedIn or Facebook, that’s completely not the point, as I’ve shown above. AI is able to calmly (it has no emotions, after all) and methodically adapt to this, generating exactly the kind of content that anti-social media algorithms expect – and that average users fitting a given profile, a given “niche,” like to “like.” Even more: AI, especially when fed with a stream of current news and posts from other profiles (to understand trends), has a chance to generate content objectively better than a human forced by the necessity of “what to post today, I have to post something.” And to generate it as often as needed for the LinkedIn system to start showing this content to more users.

What’s sad is that it’s becoming increasingly difficult to live without these media, without participating in this intellectual slippery slope that they create. You have no choice, you can’t write on LinkedIn or X only when you feel you have something smart and interesting to share, you can’t, say, share some longer and well-thought-out text once a month, because then you lose to the “noise” generated by those better adapted to this reality. I know this from my own experience. So isn’t entrusting the creation of daily “posts” necessary for survival in today’s world to artificial intelligence a great solution? Especially if most LinkedIn “consumers” won’t notice the difference?

AI is therefore not the cause of the problem – it is a solution to it, one that can free us from the necessary but intellectually degrading work of creating a constant stream of shallow content on one narrow topic. Perhaps an imperfect solution, but in a sense an inevitable one: a kind of “buffer” between people and the toxic system of anti-social media, which is the real source of the problem.

Want deeper and wiser content? Well, that requires time and effort – including from the reader. So where to look for it? Read books – especially old ones, certainly still written by humans. Read those blogs that are still functioning and that convey some thought, some reflection, or value. The only question is who will be able to afford this luxury – both the luxury of creating this content, which may be valuable but few will see, and the luxury of reading it.

Because the situation where you don’t have to be present on anti-social media is already a true luxury that only a few can afford.

We’re now in the third year of the AI transformation. Of course, AI models have been developing for longer, but over the last three years or so they’ve become more mainstream, more accessible, and cheaper relative to their capabilities. The impact on programming as a profession and on the entire IT industry is already clearly evident. More and more code is being created using these models, either directly or with their support – recently, Google’s CEO boasted that 25% of their new code – a full quarter – is now being produced by models.

On the flip side, we hear complaints (mainly on antisocial media) that this code is inferior, non-optimal, and not at all readable; that it will be difficult to maintain because it will need to be understood and fixed, or we’ll need to use models again just to make sense of it.

These complaints don’t surprise me because I’ve heard them earlier. Much earlier.

History Repeats Itself

When I started university in 1989 (already having a few years of programming experience), I met older computer scientists who strongly complained that high-level language compilers (like Pascal or Modula-2) generated suboptimal machine code. They claimed that the resulting instructions made the processor perform many unnecessary operations, making the code inefficient. Plus, it was too large, as these redundant instructions and structures occupied (then much more valuable) memory space.

I had the opportunity to verify firsthand that this was true.

Back then, we had PCs running MS-DOS – and the first computer viruses. My university friends and I were writing such a virus whose purpose (payload) was to play music.

Before I continue, younger readers deserve an explanation: today’s computers have what are effectively high-class music synthesizers built in, capable of producing high-quality sound. Back then, there was just a tiny speaker connected to the motherboard, designed primarily to emit short beeps – its purpose was to alert the user about errors or process completions.

Playing melodies with this thing required some serious ingenuity. We had a talented classmate who could write code that played music – and not just any music, but the kind that would get stuck in your head and had a great rhythm. A unique mix of musical talent and ability to express it with code. He wrote a music piece for us in Pascal because he wasn’t very familiar with assembly language. After compilation, that piece of code that played his little tune was about 5 kilobytes, which was way too much for our needs.

Another friend and I sat down and started manually translating it into assembly language. It took us about four days of work. We got it down to about 800 bytes. The same result – the music played identically – but the code was five times smaller.

This way, I directly confirmed that the old computer scientists were indeed right: compiler-generated code was indeed suboptimal, and significantly so.

But So What?

So what? On the scale of the whole industry, it didn’t matter at all! Moore’s Law was at work (though a bit slower then), computer power and memory size were growing, so it was completely irrelevant that machine code was suboptimal, that as a result the processors were executing unnecessary operations, and that the executable files were 5-6x “too big.” The increasing power of computers more than covered these inefficiencies.

What became much more important was that programmers could create something quickly, that they didn’t need those four days to fine-tune a program but could do it much faster. At the same time, they didn’t have to delve into register limitations, addressing, etc. – they could operate with easier-to-understand abstractions of variables, functions, and so on. And the fact that the result was at least five times less efficient and larger? No one cared! Computers were, as mentioned, getting faster anyway.

The further development of the software industry followed exactly this path: growing processor power, more available memory, and programming languages increasingly detached from the machine level. Then came Java and the virtual-machine concept, which introduced yet another layer of inefficiency. Outside a few niche domains, performance optimization was largely forgotten.

As a result, most contemporary programmers don’t even know how a processor is built or what machine language looks like – let alone having ever used it. And that holds nothing back. It’s completely normal.

A New Paradigm

Do you see the analogy yet?

What difference does it make if code written by AI models isn’t perfect? None at all, because code churned out by a low-paid junior in an outsourcing center somewhere in India is weak too. In fact, I’d argue that on average AI-generated code is already better and more consistent than what you typically get from outsourcing to inexperienced developers: the model doesn’t get tired, doesn’t cut corners when deadlines loom, and consistently applies patterns learned from millions of code examples. Many critics compare AI output to idealized human code rather than the average-quality code that actually dominates most systems today. Judged against that realistic baseline, AI-generated code often comes out ahead.

So what if we might need an AI model later to understand and modify this code? No problem, we’ll have those models – and even better ones than we have now.

What we’re observing is another paradigm shift in programming. Instead of telling the machine directly what to do (which is what all structured and object-oriented programming languages were for), we now tell the machine what we want to achieve. The machine generates an intermediate stage in the form of code, which is then translated by a compiler or interpreter to the next stage, and the next, until finally, somewhere at the end, processors are still executing instructions from their relatively simple set, shuffling data in and out of registers.
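
To make that layering concrete, here’s a deliberately trivial, hypothetical example (the scenario and function name are mine, not taken from any real project): the comment states the intent you would hand to a model, the C function is the kind of intermediate-stage code it might generate, and the compiler then grinds that loop down to a handful of loads, adds, compares, and jumps.

```c
#include <stdio.h>

/* Intent handed to the model: "sum the amounts of the overdue invoices". */
double sum_overdue(const double *amounts, const int *overdue, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++) {
        if (overdue[i])            /* keep only the overdue invoices */
            total += amounts[i];
    }
    return total;
}

int main(void)
{
    double amounts[] = { 120.0, 75.5, 310.0 };
    int overdue[]    = { 1, 0, 1 };
    printf("overdue total: %.2f\n", sum_overdue(amounts, overdue, 3));
    return 0;
}
```

Nobody asked for registers or addressing modes at any point – each layer only needs to understand the one directly below it.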

Looking ahead, this intermediate stage – code in some programming language – will probably start to disappear. Why keep it, after all? What’s the point, if the machine could directly generate the result the user expects? But getting there will take time, because over the last ~50 years we’ve built a lot of “infrastructure” around programming languages, so generating code in them is currently simply the easier and faster way to reach a working result.

Will the Value of Software Decrease?

Recently I was asked an interesting question: will the value of programs, applications, and software in general decrease as a result of AI-driven development?

The question is worth considering. If the spread of AI-assisted programming lowers the value of the programming profession, won’t programs and applications become less valuable too? And if so, won’t the entire industry implode?

In my opinion: of course the value of software will decrease! But this doesn’t mean the end for the industry. On the contrary – there will simply be more of these digital products.

Again, I’ll turn to history, because it reveals long-term patterns in how societies behave. Over the centuries many technologies have been introduced, and the pattern has always been the same: at first a single unit of the product was expensive, and then it became progressively cheaper.

Take the automobile. A hundred and forty years ago, cars were toys for the wealthy; a hundred years ago they were luxuries available only to a select few, especially outside the most developed countries. The production volume of the leading car companies was tiny by today’s standards, and a car – especially a used one – was a far more valuable item than it is now. Today the scale of production is incomparably larger, while both the production cost per unit and the profit margin per unit are much lower than they used to be.¹ Did this harm the industry? No.

The same goes for computers. A computer from 55 years ago was an incredibly expensive thing. The personal computer, which appeared in the early 1980s – about 45 years ago – was also expensive: less so than before, but still accessible mainly to the upper-middle class. Machines like the Commodore 64 or early PCs weren’t something everyone could afford, again especially outside the US and Western Europe, and there were far, far fewer of them than there are now.

Today a computer isn’t such an expensive item, and computers are available almost universally. Of course there are expensive computers and cheap ones, but the differences between them aren’t gigantic. Someone who buys a $500 laptop at a big-box store can essentially do the same things as someone who bought the latest MacBook. Sure, not as conveniently or as quickly, but in terms of capabilities the difference isn’t fundamental.

Did this harm the industry? Quite the opposite. Computer manufacturing companies are richer today than ever before.

Unit Value vs. Scale

The same will happen with software. The unit value of an individual program will of course be lower – but note that this is already happening. In the early computing era of the 1960s–70s, building software was a gigantic undertaking, involving many people and taking a long time, so only the largest organizations could afford it.

Thanks to structured programming languages and personal computers, by the mid-1980s people could create programs themselves if they had the right knowledge – so there were more programs. By the end of that decade and into the early 1990s, freeware, shareware, and the open-source movement appeared, meaning more and more software was available completely free. Did this harm the industry? Quite the opposite. The giant software companies with enormous funds emerged only then, not in the era of large computing machines when individual programs were very expensive.

I suspect we’ll see something similar with software in the AI era – today’s specialized $100,000 enterprise solutions may become commoditized, but the overall market will expand dramatically as new use cases emerge. When software becomes cheaper to produce, we don’t make less of it – we make more, applying it to problems that previously couldn’t justify the development cost.

In my opinion, the answer to whether software will be less valuable is therefore two-sided. On a per-unit basis, probably yes. But the industry as a whole will likely benefit, because we’ll have more digital products, not fewer. They’ll be increasingly tailored to various needs, with a greater “spread” of possibilities. If 100 programs are trying to satisfy some need where before there were 10, I have a much better chance that at least some of them will satisfy that need well – will hit the mark.

On a macro scale, this will likely mean increasing value for those involved (especially large companies), but on a micro scale, the value of an individual “piece” of software will probably be much less than it was until recently.

The Future Is Already Here

What we’re witnessing isn’t just another phase in software development – it’s a fundamental reshaping of how humans interact with technology. The democratization of programming through AI doesn’t spell doom for our industry. Rather, it follows the same pattern we’ve seen with every technological revolution: what was once scarce becomes abundant, what was expensive becomes affordable, and the ecosystem around it grows exponentially.

For developers, this likely means evolution toward prompt engineering, system design, and problem definition rather than manual implementation. Companies will need fewer “code writers” but possibly more “solution architects” who understand both business domains and how to effectively direct AI tools. The value will shift from knowing programming language syntax to understanding how to describe problems and solutions in ways that AI can effectively translate into working systems.

The not-so-distant future in which most software is AI-assisted or even AI-generated isn’t something to fear. It’s an opportunity to focus on what humans do best: defining problems worth solving and imagining solutions that machines wouldn’t conceive on their own. The value will shift from the mechanics of coding to the creativity of conceptualization and the wisdom to know which problems deserve attention.

As with compilers in the past, we won’t miss the tedious work that AI takes off our hands. We’ll simply move up another level of abstraction and keep building amazing things – just differently than before.

  1. The fact that new cars still seem relatively expensive today, particularly in Europe, is mainly due to high taxation and regulatory compliance costs, not the actual manufacturing expenses or profit margins. Just look at the prices of new vehicles in China.  ↩︎

Scrum – a method for engaged professionals to work better together.

This is my definition of what Scrum is – and what it has been from the very beginning. Let’s break down this definition into parts:

  • method – a certain way of operating, a set of rules and practices that structure the work process, give it a repeatable rhythm, and provide the stability that makes it easier to deal with complexity,
  • engaged – yes, this is not for those who don’t want to make an effort, who don’t care, who want to float through working hours with minimal effort and then focus on what really interests them (if they have anything like that),
  • professionals – people who are “characterized by or conforming to the technical or ethical standards of a profession” (Merriam-Webster), that is, people who hold themselves to certain standards and won’t do anything poorly or “just to get it done”,
  • work together – the idea of “shared work”: Scrum makes sense where there is a need for close, constant, daily cooperation between people with diverse competencies and knowledge, where there is no room for “passing the baton,” where, as in a well-trained football team, everyone plays toward the same goal rather than doing “their own thing” and ignoring the rest,
  • better – yes, this is for those who want to work better, who are always looking for new knowledge and new skills – they care because they are engaged, and they care because they are professionals, and as such they seek the path to excellence in what they do.

Do you now understand why the average Scrum in an average company is the way it is? Do you understand why “big transformations” ended up the way they did? Well, how many engaged professionals are there – 10%? 15%? Maybe… at best!

There’s nothing you can do about that. However, you can choose who you want to be.

And who you want to work with.

After 12 years I am coming back to blogging in English. Stay tuned. 😀