In my previous article I outlined one aspect of anti-social media’s mechanism – the push to constantly publish something, anything. However, their harmful effect is much, much deeper – and unfortunately much more powerful.

To properly describe this, we need to go back in time. And quite substantially – to the beginnings of humans, of Homo sapiens. The topic isn’t as simple as it might seem, because when exactly our ancestors appeared on Earth is a surprisingly debatable question. On one hand, fossils anatomically consistent with modern humans date back as far as 315,000 years. On the other hand, some scholars argue that although these individuals may have corresponded biologically (even genetically) to Homo sapiens, they didn’t match the definition behaviorally – that is, they lacked the level of thinking manifested in creating suitable artifacts: tools, art, and so on. The most “conservative” estimate of these skeptics places the beginning of humans on Earth at about 50,000 years ago, because that’s when artifacts appear in excavations that leave no doubt about their creators’ level.

Even working with this pessimistic value – 50,000 years of humanity – we get tens of thousands of years during which people lived in a certain specific way, and it’s precisely this way of life that shaped our psychology. We could add that even if the previous 250,000 years belonged only to “almost-humans,” then, given the continuity of the evolutionary process, it was their way of life that shaped not only our current psychology but even the structure of our brain.

What was that life like? Aside from evaluating whether it was a “nice” life or not (especially from today’s perspective), it was a life with the following characteristic features:

  • Large families – typically, a woman had more than six children over her lifetime, unless she died during one of the childbirths
  • Strong blood ties – clans, tribes, etc.
  • Territorial stability – very few people traveled further than to the neighboring village
  • Social stability – the same people, most often forming one or a few clans, lived in the same area for hundreds of years. As a result, people interacted daily with mostly the same individuals for most of their lives; changes in the “personnel composition” came mainly from natural processes (births and deaths).
  • Stability of occupations – people mostly did roughly the same thing their entire lives, of course improving at it to some degree. This occupation was largely determined by their parents’ occupation and therefore was usually not the result of their own choice.

So for most of this time – for tens, even hundreds of thousands of years – the average person lived in a large family, as part of a clan or tribe, in the area of a smaller or larger village they had known since childhood, among people they knew. After mastering basic skills (walking, eating, speaking), they took part in their parents’ work – a girl in her mother’s, a boy in his father’s. And having learned from them the skills appropriate to their gender and role, they practiced those skills until the end of their lives. If they did so reasonably well, they had a permanent place in the social structure of their village and clan.

This is exactly the kind of life that formed us. We find traces of this in our psychology, for example, in phenomena such as group dynamics or Dunbar’s number. Groups numbering more than a dozen or so people are unable to maintain cohesion and break down into subgroups – that’s a “trace” of the family size typical for 99.9996% of human history. We can personally know (recognize faces, know who they are, etc.) about 250 people at one time – that’s a “trace” of the size of a village with its “adjacencies.”

But these traces are more than just numbers, the sizes of these groups. There is also something else – patterns, ways of interacting. Returning to the first example, a significant feature of the group process is that the same sets of roles occur in groups regardless of the culture, profession, social position, or education of their members. Generally, I consider psychology to be a pseudoscience, but this particular thing is actually quite well-researched – group dynamics consistently work the same way across many studies conducted in different countries. Those studies also indicate that adapting to a group is a stressful process, which is understandable: from this long evolutionary perspective, forming a completely new group of people who didn’t know each other before was a rare event. This, in turn, means it was not an event our brains were trained for by evolution. Instead, they were trained to maintain a spot in an existing group.

Thus, for the overwhelming majority of human history, people spent most of their lives in a stable community where they had their place. This place stemmed primarily from their parents’ position in the community, but of course there was a kind of competition (which also involved various rites of passage – “transitions to adulthood”). However, that competition was limited to the small community in which the person lived, and it was regulated by the rules established in their culture.

Changes in the community were very slow and somewhat natural – some died, others were born. The rules and principles governing life – customs, morality, what one had to do to cope – changed even more slowly, often remaining the same for centuries.

At the same time, people knew the individuals from their family and village much more “truly” – they didn’t see only an “enhanced,” public version of them. It’s hard to pretend to be someone you’re not in front of people you interact with your entire life in a relatively small area. Because of this, people also knew that others weren’t perfect, that they had worse moments, that things didn’t always work out for them, and so on. The picture of what others’ lives could look like was therefore more realistic.

In summary: people evolved over tens or actually hundreds of thousands of years to live in stable communities, with clear, predictable rules, where the scale of comparing oneself to others was limited both by the size of the group and by culturally determined possibilities of potential advancement, and, on top of that, these comparisons were more truthful.

Meanwhile, the technical era – if counting from the wider implementation of machines and the first mechanical transport (railways) – has only lasted about 200 years, which is 0.4% of that time. The Internet era – counting not from its invention but from its popularization – is about 24 years, while anti-social media only emerged about 14 years ago. These numbers are 0.048% and 0.028% respectively – and that, I emphasize, is using a very conservative assumption as to how long humans have existed on Earth. If we consider the earlier period of formation of thinking, psychology, and even simply the human brain, the current period is imperceptibly small.

And it’s worth emphasizing that although technical changes have been ongoing for 200 years, traditional social forms (the family) were still maintained until the end of the 1960s. Put more simply, the degradation of those forms was already happening, but not on today’s scale. And 70 years is far too little for people to evolutionarily change their psychological makeup.

So we’ve now reached a stage where, in so-called developed societies, humans formed over millennia for stability, for finding their place in a small community, collide with a world where none of this exists! It’s exactly the opposite – few people know their neighbors, yet everyone compares themselves to everyone else, and the role models are influencers flaunting real or – more often – fake success. Instead of the natural process of finding one’s place in a local community, developing at a pace to which we are organically adapted, we have constant pressure to be “the best in the world” – which is, of course, impossible for the overwhelming majority of people. This puts them in a position of eternal unhappiness, eternal unfulfillment, and dissatisfaction with themselves.

The effects? Fifteen-year-olds comparing themselves to the most famous peers from around the world, both in looks and in achievements, which drives them into depression. A 25-year-old I knew personally who fell into depression because he compared himself with global “success stories” and felt like a failure. And in a local community – normal from the perspective of how humans are built – each of them would be quite satisfied with their position and achievements.

Anti-social media deepen this problem in many ways. First, they flaunt images of the lives of “influencers” – people allegedly achieving stunning success, leading perfect lives, traveling the world in private jets, living in luxurious villas. Interestingly, the most popular among them are those who promise others they’ll teach them how to achieve such “success” – that is, how to become such an “influencer.”

It’s worth noting that this entire “upper echelon” of anti-social media, these most popular “creators,” don’t actually create anything, don’t build anything, don’t contribute any value to civilization. It resembles the behavior of the mice from the final phase of Calhoun’s “mouse utopia” experiment, which did nothing but show each other how beautiful their fur was. Here too, we have only the flaunting of one’s beauty and ostentatious consumption.

What’s worse, a significant part of all this is simply fake. Plastic surgery and “beautifying” filters have been standard for a long time. Currently, “influencers” rent film studios that let them pretend they live in a luxury apartment. There are even studio sets mimicking private jets that can be rented by the hour to take photos faking status and wealth. The people who later watch these videos often aren’t aware of any of this.

And this fakery isn’t limited to the absolute “top from America” – it’s used by tens of thousands of content “creators.” In Poland too, people rent apartments through AirBnB to shoot reels in them. Larger cities already have special kitchens for shooting cooking reels (a very popular niche): these are, of course, not regular kitchens but designer, beautiful ones, polished to a shine by studio staff between recordings. Then we have millions of women frustrated that their kitchen or apartment doesn’t look like that. And in its own way, this “average” content is more harmful, because its simulation of real life seems more authentic and less unattainable – even though such a kitchen is just as unreal as a flight in a private jet built of plywood in a studio.

Artificial intelligence, which is increasingly entering anti-social media, adds a new level of fakery – faces and silhouettes will no longer just be “enhanced” with filters; now the entire environment, and even the characters themselves, can be generated by AI. Thus a completely unreal world is created, even more unattainable for the average person, yet ever better at passing itself off as attainable reality – and therefore even more destructive.

This is much more harmful than the old reports of the lives of stars and princesses in tabloids. Firstly, because tabloids were read by a relatively small part of society. Secondly, no one had illusions they could live like a princess – it was a different world. Meanwhile, “influencers” pretend they are “just like us,” that anyone can achieve what they have. Thirdly, tabloids were read – the process of reading and viewing photos doesn’t affect emotions and people’s psyche in general as strongly as moving images saturated with intense colors from a glowing screen (in this respect, Instagram, TikTok, etc., are far more toxic than LinkedIn). And fourthly, even if someone read a tabloid, they didn’t do it many times a day, every day, from morning till night. But now everyone has a “smartphone” in their hand for many hours a day. Combined with the prevalence of anti-social media, this creates constant pressure on every user, constantly showing them how crappy their life is.

As a result, a system has emerged that, on one hand, bombards people with unrealistic models and expectations, and on the other, increasingly takes away what gave support for millennia – a stable local community where everyone could find their place. It’s no wonder, then, that the result is an epidemic of mental health problems, the scale of which in younger generations is already alarming.

Worse yet, there doesn’t seem to be a way out of this situation. Anti-social media have already become too important a part of social and professional life to completely break free from them. The possibility of living without them has already become a luxury available only to the few wealthy enough not to need them – or to those who decide to completely reject modern civilization and go somewhere into the wilderness. And artificial intelligence, although it can serve as a kind of buffer between us and the toxic system of anti-social media, simultaneously deepens the problem by introducing a new level of artificiality and fakery.

Maybe, however, AI will “finish off” anti-social media completely, to a level where there will no longer be any real content there, and an awakening will occur? I doubt it – but one can hope.

Recently, a small “storm” arose in the world of anti-social media. Meta announced that it would introduce “artificial characters” on Facebook and Instagram – profiles generated by AI with which users could interact. When these profiles actually appeared at the end of December, they were met with a very negative reaction and are supposedly going to be withdrawn. But personally, I believe they won’t be withdrawn – they’ll be quietly replaced with better versions, creating more convincing content that won’t be so easily recognized.

Because already, independent app developers have “connected the dots” and created software that automatically generates content for anti-social media. Under my posts on LinkedIn, AI-generated comments have appeared several times, and looking at posts, I’m convinced that at least 50%-60% of content on that platform is already created with significant AI assistance if not completely autonomously. And I think it’s similar on Facebook and increasingly – as tools for creating convincing video become more widespread – on Instagram and TikTok too. So I understand the thinking of Meta’s leadership – if external players are using it, why shouldn’t they use it themselves?

But has the fact that more and more posts are created by AI really drastically lowered the quality of content on LinkedIn or other anti-social media? I don’t think so, but others do. I’ve noticed that some users are starting to complain that the content is repetitive, boring, that posts are similar to each other, blending into one mush. And they blame AI for this. Meanwhile, they’re missing one important thing: AI is not the source of the problem. The source is the very essence and structure of anti-social media.

Let me explain.

Anti-social media has a fundamental structural flaw (?): it forces users to constantly publish new content. Content older than a week, or maybe two, even if exceptionally good and popular, practically disappears. So if you’re an influencer, salesperson, consultant, entrepreneur, or anyone whose sales – that is, livelihood – depends on visibility, then you need reach. And to have reach, you must publish. And not just occasionally, when you really have something to say, but regularly, daily, and preferably several times a day. Only then will the algorithms “notice” you and show your content to others, which will result in likes and followers, which will result in even more promotion – and ultimately sales, which is what everyone playing this game is ultimately after.

It’s nicely called “content marketing.”

The problem is that no one – not even the most brilliant mind in the world – can create truly valuable, deep content twice a day, every day, throughout the year. It’s simply impossible. So what happens? People have to write anything, repeat the same observations, arguments, present the same stories as “new,” and so on. If you have to create hundreds of posts, the quality of the average post must decrease. At best, one can hope for “creative” rehashing, that is, garnishing an age-old story with a personal introduction or a paradoxical (generated) graphic. Or reporting the same news as hundreds of thousands of other accounts.

At the same time, anti-social media promotes short content. Longer content gets truncated (notice that you have to click “read more”) and is less promoted. Why? Because the goal of the anti-social media operator is also sales, specifically showing ads to users. If a user spends too much time on a single post or video, they’re not seeing ads during that time. So reading longer texts – like this one – carefully is the worst thing in the world for LinkedIn or Facebook, because it means you won’t see any ads for 10 minutes! You’re supposed to see something short that you’ll either like or that will annoy you – be sure to click that icon! – and then scroll on, where another – short! – ad awaits. And so on, round and round, for as long as possible, because the longer you scroll, the more ads the anti-social service will show you, increasing the chance of a sale (“conversion”), for which their customers, the advertisers, pay.

So it has to be short, and it has to be frequent, a lot, as much as possible! Because only then do you exist.

The effect is that shallow mush – repetitive in essence but colorful and engaging – MUST be created. That’s simply the logic of how anti-social media works.

And there’s no doubt that AI will do this better than a human. Of course, it won’t write anything groundbreaking and deeply wise – and not because AI as such is inherently incapable of it, but because in media like LinkedIn or Facebook, that’s completely not the point, as I’ve shown above. AI is able to calmly (it has no emotions, after all) and methodically adapt to this, generating exactly the kind of content that anti-social media algorithms expect – and that average users fitting a given profile, a given “niche,” like to “like.” Even more: AI, especially when fed with a stream of current news and posts from other profiles (to understand trends), has a chance to generate content objectively better than a human forced by the necessity of “what to post today, I have to post something.” And to generate it as often as needed for the LinkedIn system to start showing this content to more users.

What’s sad is that it’s becoming increasingly difficult to live without these media, without taking part in the intellectual race to the bottom that they create. You have no choice: you can’t write on LinkedIn or X only when you feel you have something smart and interesting to share; you can’t, say, share a longer, well-thought-out text once a month, because then you lose to the “noise” generated by those better adapted to this reality. I know this from my own experience. So isn’t entrusting the creation of the daily “posts” necessary for survival in today’s world to artificial intelligence a great solution? Especially if most LinkedIn “consumers” won’t notice the difference?

AI is therefore not the cause of the problem – it is its solution that can free us from this necessary but intellectually degrading work of creating a constant stream of shallow content on one narrow topic. Perhaps an imperfect solution, but in a sense inevitable, a kind of “buffer” between people and the toxic system of anti-social media, which is the real source of the problem.

Want deeper and wiser content? Well, that requires time and effort – including from the reader. So where to look for it? Read books – especially old ones, certainly still written by humans. Read those blogs that are still functioning and that convey some thought, some reflection, or value. The only question is who will be able to afford this luxury – both the luxury of creating this content, which may be valuable but few will see, and the luxury of reading it.

Because the situation where you don’t have to be present on anti-social media is already a true luxury that only a few can afford.

We’re now in the third year of the AI transformation. Of course, AI models have been developing for longer, but for about three years they’ve been mainstream – more accessible and cheaper relative to their capabilities. The impact on programming as a profession and on the entire IT industry is already clearly evident. More and more code is being created using these models, either directly or with their support – recently, Google’s CEO boasted that 25% of their new code – a full quarter – is now produced by models.

On the flip side, we hear complaints (mainly on anti-social media) that this code is inferior, non-optimal, and not at all readable; that it will be difficult to maintain because it will need to be understood and fixed, or we’ll need to use models again just to make sense of it.

These complaints don’t surprise me because I’ve heard them earlier. Much earlier.

History Repeats Itself

When I started university in 1989 (already having a few years of programming experience), I met older computer scientists who complained strongly that high-level language compilers (for Pascal or Modula-2, for example) generated suboptimal machine code. They claimed that the generated instructions made the processor perform many unnecessary operations, making the code inefficient. On top of that, the code was too large, as these redundant instructions and structures occupied (then much more valuable) memory space.

I had the opportunity to verify firsthand that this was true.

Back then, we had PCs running MS-DOS – and the first computer viruses. My university friends and I were writing such a virus whose purpose (payload) was to play music.

Before I continue, younger readers deserve an explanation: today’s computers contain what are effectively high-class music synthesizers that can produce high-quality sound. Back then, there was just a tiny speaker connected to the motherboard, designed primarily to emit short beeps – its purpose was to alert the user to errors or the completion of some process.

Playing melodies with this thing required some serious ingenuity. We had a talented classmate who could write code that played music – and not just any music, but the kind that would get stuck in your head and had a great rhythm. A unique mix of musical talent and ability to express it with code. He wrote a music piece for us in Pascal because he wasn’t very familiar with assembly language. After compilation, that piece of code that played his little tune was about 5 kilobytes, which was way too much for our needs.
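For the curious, here’s a minimal, purely illustrative sketch of how such speaker music was typically coded in the MS-DOS Pascal of that era (I’m assuming Turbo Pascal, whose Crt unit exposed Sound, Delay, and NoSound for driving that little speaker). The notes below are an arbitrary phrase, not our friend’s actual tune:

```pascal
program SpeakerTune;
{ Illustrative only: plays a short phrase on the PC speaker the way
  hobbyist Pascal code of the late 1980s typically did it. }
uses Crt; { Sound, Delay and NoSound come from this unit }

const
  { Note frequencies in Hz (C4, E4, G4, C5) and durations in ms }
  Freqs:   array[1..4] of Word = (262, 330, 392, 523);
  Lengths: array[1..4] of Word = (200, 200, 200, 400);

var
  i: Integer;
begin
  for i := 1 to 4 do
  begin
    Sound(Freqs[i]);   { start a square wave at the note's frequency }
    Delay(Lengths[i]); { hold the note }
    NoSound;           { switch the speaker off }
    Delay(30);         { a short gap so the notes don't blur together }
  end;
end.
```

Of course, the real thing was longer and fancier than this, but the principle was the same.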

Another friend and I sat down and started manually translating it into assembly language. It took us about four days of work. The result was that we achieved a size of about 800 bytes. The same result – the music played identically – but the code was five times smaller.

This way, I directly confirmed that the old computer scientists were indeed right: compiler-generated code was indeed suboptimal, and significantly so.

But So What?

So what if it was? On the scale of the whole industry, it didn’t matter at all! Moore’s Law was at work (though a bit slower then), computer power and memory size were growing, so it was completely irrelevant that machine code was suboptimal, that processors were executing unnecessary operations as a result, and that executable files were 5-6x “too big.” The increasing power of computers more than covered these inefficiencies.

What became much more important was that programmers could create something quickly, that they didn’t need those four days to fine-tune a program but could do it much faster. At the same time, they didn’t have to delve into register limitations, addressing, etc. – they could operate with easier-to-understand abstractions of variables, functions, and so on. And the fact that the result was at least five times less efficient and larger? No one cared! Computers were, as mentioned, getting faster anyway.

The further development of the software industry followed exactly this path: increasing processor power, more available memory, and programming languages becoming increasingly detached from the machine level. Then came Java and the virtual machine concept, which introduced another level of inefficiency. Except for a few niche applications, efficiency optimization was generally forgotten.

As a result, most contemporary programmers don’t even know how a processor is constructed, what machine language looks like, and especially have never used it. And this doesn’t hinder anything – it’s completely normal.

A New Paradigm

Do you see the analogy now?

What difference does it make if code written by AI models isn’t perfect? It doesn’t matter at all, because code made by a low-paid junior in an outsourcing center somewhere in India is weak too. In fact, I would argue that on average, AI-generated code is already better and more consistent than what you typically get from outsourcing to inexperienced developers. The AI doesn’t get tired, doesn’t take shortcuts when deadlines loom, and consistently applies patterns it has learned from millions of code examples. Many critics make the mistake of comparing AI output to idealized human code rather than the actual average-quality code that dominates most systems today. When viewed in this realistic context, AI-generated code often comes out favorably.

So what if we might need an AI model later to understand and modify this code? No problem, we’ll have those models – and even better ones than we have now.

What we’re observing is another paradigm shift in programming. Instead of directly telling the machine what to do (which is what all structured and object-oriented programming languages were for), we now tell the machine what we want to achieve. And the machine generates an intermediate stage in the form of code, which is then translated (by a compiler or interpreter) to the next stage, and then the next, until finally, somewhere at the end, processors are still executing instructions from their relatively simple set, shuffling data in and out of registers.

Looking ahead, this intermediate stage – code in some programming language – will probably start to disappear. Why keep it, after all? What’s the point, if the machine can directly generate the result the user expects? But getting to that point will probably take some time, because over these ~50 years we’ve built a lot of “infrastructure” around programming languages, so generating code in them is currently simply the easier and faster way to get a working result.

Will the Value of Software Decrease?

Recently, I was asked an interesting question about whether the value of programs, applications, and software in general will decrease as a result of AI-driven development.

This question is worth considering. If the proliferation of programming with the help of AI models could cause a decrease in the value of the programmer profession, wouldn’t programs and applications also become less valuable? And if so, wouldn’t the entire industry implode?

In my opinion: of course the value of software will decrease! But this doesn’t mean the end for the industry. On the contrary – there will simply be more of these digital products.

Again, I’ll refer to history, because it shows certain long-term patterns in how societies behave. Over the centuries, many technologies have been introduced, and it has always been the case that a unit of the given product was expensive at first and then became cheaper and cheaper.

Take the example of the automobile. One hundred and forty years ago, cars were toys for the wealthy, and a hundred years ago, they were luxuries available only to the select few, especially outside the most developed countries. The production volume of leading car companies was tiny by today’s standards. A car was once a much more valuable item than it is now (especially a used one). Currently, the scale of production is incomparably larger and the production cost per unit, as well as the profit margin per unit, is much lower than it used to be [1]. Did this harm the industry? No.

The same goes for computers. A computer from 55 years ago was an incredibly expensive thing. The personal computer, which appeared in the early 1980s, about 45 years ago, was also very expensive – not as expensive as before, but still accessible mainly to the upper-middle class. Machines like the Commodore 64 or early PCs weren’t objects everyone could afford, again especially outside the US and Western Europe. And there were far, far fewer of these computers than there are now.

Today, a computer isn’t such an expensive item and computers are available almost universally. Of course, there are expensive computers, but there are also cheap ones, and the differences between them aren’t gigantic. Someone who buys a cheap laptop for $500 at a big-box store can fundamentally do the same things as someone who bought the latest MacBook. Sure, not as conveniently or as quickly, but in terms of capabilities – the difference isn’t fundamental.

Did this harm the industry? Quite the opposite. Computer manufacturing companies are richer today than ever before.

Unit Value vs. Scale

The same will happen with software. The unit value of an individual program will of course be lower, but it’s worth noting that this is already happening. In the early computing era of the 1960s and 70s, building software was a gigantic undertaking: it involved many people and took a long time, so only the largest companies could afford it.

Thanks to structured programming languages and personal computers, by the mid-1980s people could create programs themselves, provided they had the right knowledge. So there were more of these programs. By the end of that decade and into the early 1990s, freeware, shareware, and the open-source movement appeared, meaning more and more software was available completely free. Did this harm the industry? Quite the opposite. Giant software companies with enormous funds emerged only then, not in the era of large computing machines, when individual programs were very expensive.

I suspect we’ll see something similar with software in the AI era – today’s specialized $100,000 enterprise solutions may become commoditized, but the overall market will expand dramatically as new use cases emerge. When software becomes cheaper to produce, we don’t make less of it – we make more, applying it to problems that previously couldn’t justify the development cost.

In my opinion, the answer to whether software will be less valuable is therefore ambiguous. On a per-unit basis, probably yes. But I think the industry as a whole will likely benefit, because we’ll have more digital products, not fewer. They’ll be increasingly tailored to various needs, with a greater “spread” of possibilities. If I have 100 programs trying to satisfy some need, whereas before I had 10, I now have a much better chance that at least some of them will satisfy that need well – will hit the mark.

On a macro scale, this will likely mean increasing value for those involved (especially large companies), but on a micro scale, the value of an individual “piece” of software will probably be much less than it was until recently.

The Future Is Already Here

What we’re witnessing isn’t just another phase in software development – it’s a fundamental reshaping of how humans interact with technology. The democratization of programming through AI doesn’t spell doom for our industry. Rather, it follows the same pattern we’ve seen with every technological revolution: what was once scarce becomes abundant, what was expensive becomes affordable, and the ecosystem around it grows exponentially.

For developers, this likely means evolution toward prompt engineering, system design, and problem definition rather than manual implementation. Companies will need fewer “code writers” but possibly more “solution architects” who understand both business domains and how to effectively direct AI tools. The value will shift from knowing programming language syntax to understanding how to describe problems and solutions in ways that AI can effectively translate into working systems.

The not-so-distant future in which most software is AI-assisted or even AI-generated isn’t something to fear. It’s an opportunity to focus on what humans do best: defining problems worth solving and imagining solutions that machines wouldn’t conceive on their own. The value will shift from the mechanics of coding to the creativity of conceptualization and the wisdom to know which problems deserve attention.

As with compilers in the past, we won’t miss the tedious work that AI takes off our hands. We’ll simply move up another level of abstraction and keep building amazing things – just differently than before.

[1] The fact that new cars still seem relatively expensive today, particularly in Europe, is mainly due to high taxation and regulatory compliance costs, not the actual manufacturing expenses or profit margins. Just look at the prices of new vehicles in China.

Scrum – a method for engaged professionals to work better together.

This is my definition of what Scrum is – and what it has been from the very beginning. Let’s break down this definition into parts:

  • method – a certain way of operating, a set of rules and practices that structure the work process, giving it a repeatable rhythm, providing the necessary stability that makes it easier to deal with complexity,
  • engaged – yes, this is not for those who don’t want to make an effort, who don’t care, who want to float through working hours with minimal effort and, having done that, focus on what really interests them (if they have anything like that)
  • professionals – people who are “characterized by or conforming to the technical or ethical standards of a profession” (Merriam-Webster dictionary), that is, people who have certain standards, who won’t do anything poorly or “just to get it done”,
  • work together – the concept of “shared work” – Scrum makes sense where there is a need for close, constant, and daily cooperation between people with diverse competencies and knowledge, where there is no room for “passing the baton,” where, like in a well-trained football team, everyone plays toward the same goal, rather than doing “their own thing” and not caring about the rest.
  • better – yes, this is for those who want to work better, who are always looking for new knowledge, new skills – they care, because they are engaged, they care, because they are professionals and as such seek the path to excellence in what they do.

Do you now understand why the average Scrum in an average company is the way it is? Do you understand why “big transformations” ended up the way they did? Well, how many engaged professionals are there – 10%? 15%? Maybe… at best!

There’s nothing you can do about that. However, you can choose who you want to be.

And who you want to work with.

After 12 years I am coming back to blogging in English. Stay tuned. 😀