We’re now in the third year of the AI transformation. Of course, AI models have been under development for much longer, but for roughly three years they have been mainstream, widely accessible, and ever cheaper relative to their capabilities. The impact on programming as a profession and on the entire IT industry is already clearly visible. More and more code is being created with these models, either directly or with their support – recently, Google’s CEO boasted that a full quarter of their code is now being produced by models.

On the flip side, we hear complaints (mainly on antisocial media) that this code is inferior, suboptimal, and barely readable; that it will be difficult to maintain, because someone will have to understand and fix it, or we’ll need to reach for the models again just to make sense of it.

These complaints don’t surprise me, because I’ve heard them before. Much earlier.

History Repeats Itself

When I started university in 1989 (already having a few years of programming experience behind me), I met older computer scientists who complained loudly that high-level language compilers (for Pascal or Modula-2, say) generated suboptimal machine code. They claimed the compiled output made the processor perform many unnecessary operations, making the code inefficient. It was also too large, as these redundant instructions and structures occupied memory space that was far more precious back then.

I had the opportunity to verify firsthand that this was true.

Back then, we had PCs running MS-DOS – and the first computer viruses. My university friends and I were writing one such virus, whose payload was to play music.

Before I continue, younger readers deserve an explanation: today’s computers effectively contain high-end music synthesizers capable of producing high-quality sound. Back then, there was just a tiny speaker connected to the motherboard, designed primarily to emit short beeps – its job was to alert the user to errors or the completion of some process.

Playing melodies with this thing required some serious ingenuity. We had a talented classmate who could write code that played music – and not just any music, but tunes with a great rhythm that would get stuck in your head. A unique mix of musical talent and the ability to express it in code. He wrote a piece for us in Pascal, because he wasn’t very familiar with assembly language. After compilation, the code that played his little tune came to about 5 kilobytes, which was far too much for our needs.
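
For a flavour of what driving that speaker looked like, here is a minimal sketch in DOS-era Pascal. It is not our original code and the melody is invented; it assumes Turbo Pascal’s Crt unit with its Sound, Delay, and NoSound procedures:

    program TinyTune;
    { Minimal sketch: play a short, made-up melody on the PC speaker.
      Assumes Turbo Pascal with the Crt unit (Sound, Delay, NoSound). }
    uses Crt;

    type
      TNote = record
        Freq: Word;  { tone frequency in Hz }
        Dur:  Word;  { duration in milliseconds }
      end;

    const
      Melody: array[1..6] of TNote =
        ((Freq: 262; Dur: 200),   { C4 }
         (Freq: 294; Dur: 200),   { D4 }
         (Freq: 330; Dur: 200),   { E4 }
         (Freq: 349; Dur: 200),   { F4 }
         (Freq: 392; Dur: 400),   { G4 }
         (Freq: 262; Dur: 400));  { back to C4 }

    var
      i: Integer;

    begin
      for i := 1 to 6 do
      begin
        Sound(Melody[i].Freq);  { start a square wave on the speaker }
        Delay(Melody[i].Dur);   { hold the note }
        NoSound;                { silence the speaker }
        Delay(20);              { short gap between notes }
      end;
    end.

Compiled, even a toy like this carried the kind of overhead we ran into: kilobytes of output for something that hand-written assembly could do in a few hundred bytes.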

Another friend and I sat down and started translating it into assembly language by hand. It took us about four days of work, but we got the size down to roughly 800 bytes. The same result – the music played identically – but the code was more than six times smaller.

In this way, I confirmed firsthand that the old computer scientists were right: compiler-generated code really was suboptimal, and significantly so.

But So What?

So what? On the scale of the whole industry, it didn’t matter at all! Moore’s Law was at work (though a bit more slowly back then), computing power and memory sizes were growing, so it was completely irrelevant that machine code was suboptimal, that processors were executing unnecessary operations as a result, and that executable files were 5-6x “too big.” The increasing power of computers more than made up for these inefficiencies.

What became much more important was that programmers could create something quickly, that they didn’t need those four days of fine-tuning but could get a program working much faster. At the same time, they didn’t have to wrestle with register limitations, addressing modes, and so on – they could work with easier-to-understand abstractions: variables, functions, and the like. And the fact that the result was at least five times larger and less efficient? No one cared! Computers were, as mentioned, getting faster anyway.

The further development of the software industry followed exactly this path: increasing processor power, more available memory, and programming languages becoming increasingly detached from the machine level. Then came Java and the virtual machine concept, which introduced another level of inefficiency. Except for a few niche applications, efficiency optimization was generally forgotten.

As a result, most contemporary programmers don’t even know how a processor is constructed or what machine language looks like, let alone have ever used it. And this doesn’t hold anything back – it’s completely normal.

A New Paradigm

Do you understand the analogy already?

What difference does it make if code written by AI models isn’t perfect? It doesn’t matter at all, because code made by a low-paid junior in an outsourcing center somewhere in India is weak too. In fact, I would argue that on average, AI-generated code is already better and more consistent than what you typically get from outsourcing to inexperienced developers. The AI doesn’t get tired, doesn’t take shortcuts when deadlines loom, and consistently applies patterns it has learned from millions of code examples. Many critics make the mistake of comparing AI output to idealized human code rather than the actual average-quality code that dominates most systems today. When viewed in this realistic context, AI-generated code often comes out favorably.

So what if we might need an AI model later to understand and modify this code? No problem, we’ll have those models – and even better ones than we have now.

What we’re observing is another paradigm shift in programming. Instead of directly telling the machine what to do (which is what all structured and object-oriented programming languages were for), we now tell the machine what we want to achieve. The machine generates an intermediate stage in the form of code, which is then translated (by a compiler or interpreter) into the next stage, and then the next, until finally, somewhere at the end, processors are still executing instructions from their relatively simple instruction set, shuffling data in and out of registers.

Looking ahead, this intermediate stage, code in some programming language, will probably start to disappear. Why keep it, after all? What’s the point, if the machine can directly generate the result the user expects? Getting there will probably take some time, though, because over these ~50 years we’ve built a lot of “infrastructure” around programming languages, so generating code in them is currently simply the easier and faster way to get a working result.

Will the Value of Software Decrease?

Recently, I was asked an interesting question about whether the value of programs, applications, and software in general will decrease as a result of AI-driven development.

This question is worth considering. If the proliferation of programming with the help of AI models could cause a decrease in the value of the programmer profession, wouldn’t programs and applications also become less valuable? And if so, wouldn’t the entire industry implode?

In my opinion: of course the value of software will decrease! But this doesn’t mean the end for the industry. On the contrary – there will simply be more of these digital products.

Again, I’ll refer to history, because it reveals certain long-term patterns in how societies behave. Over the centuries, many technologies have been introduced, and it has always been the case that the unit product was expensive at first and then became progressively cheaper.

Take the example of the automobile. One hundred and forty years ago, cars were toys for the wealthy, and a hundred years ago, they were luxuries available only to the select few, especially outside the most developed countries. The production volume of leading car companies was tiny by today’s standards. A car was once a much more valuable item than it is now (especially a used one). Currently, the scale of production is incomparably larger and the production cost per unit, as well as the profit margin per unit, is much lower than it used to be¹. Did this harm the industry? No.

The same goes for computers. A computer from 55 years ago was an incredibly expensive thing. The personal computer, which appeared about 45 years ago in the early 1980s, was also a very expensive item – not as expensive as before, but still accessible mainly to the upper-middle class. Machines like the Commodore 64 or early PCs weren’t objects everyone could afford, again especially outside the US and Western Europe. And there were far, far fewer of these computers than there are now.

Today, a computer isn’t such an expensive item and computers are available almost universally. Of course, there are expensive computers, but there are also cheap ones, and the differences between them aren’t gigantic. Someone who buys a cheap laptop for $500 at a big-box store can fundamentally do the same things as someone who bought the latest MacBook. Sure, not as conveniently or as quickly, but in terms of capabilities – the difference isn’t fundamental.

Did this harm the industry? Quite the opposite. Computer manufacturing companies are richer today than ever before.

Unit Value vs. Scale

The same will happen with software. The unit value of an individual program will of course be lower, but it’s worth noting that this is already happening. In the early computing era of the 1960s-70s, building software was a gigantic undertaking, involving many people and taking a long time, so only the largest companies could afford it.

Thanks to structured programming languages and personal computers, by the mid-1980s people could create programs themselves if they had the right knowledge, so there were more of these programs. By the end of that decade and into the early 1990s, freeware, shareware, and then the open-source movement appeared, meaning more and more software was available completely free of charge. Did this harm the industry? Quite the opposite. Giant software companies with enormous funds emerged only then, not in the era of large computing machines when individual programs were very expensive.

I suspect we’ll see something similar with software in the AI era – today’s specialized $100,000 enterprise solutions may become commoditized, but the overall market will expand dramatically as new use cases emerge. When software becomes cheaper to produce, we don’t make less of it – we make more, applying it to problems that previously couldn’t justify the development cost.

In my opinion, the answer to whether software will be less valuable is therefore twofold. On a per-unit basis, probably yes. But I think the industry as a whole will likely benefit, because we’ll have more digital products, not fewer. They’ll be increasingly tailored to various needs, with a greater “spread” of possibilities. If I have 100 programs trying to satisfy some need, where before I had 10, I have a much better chance that at least some of them will satisfy that need well – will hit the mark.

On a macro scale, this will likely mean increasing value for those involved (especially large companies), but on a micro scale, the value of an individual “piece” of software will probably be much less than it was until recently.

The Future Is Already Here

What we’re witnessing isn’t just another phase in software development – it’s a fundamental reshaping of how humans interact with technology. The democratization of programming through AI doesn’t spell doom for our industry. Rather, it follows the same pattern we’ve seen with every technological revolution: what was once scarce becomes abundant, what was expensive becomes affordable, and the ecosystem around it grows exponentially.

For developers, this likely means evolution toward prompt engineering, system design, and problem definition rather than manual implementation. Companies will need fewer “code writers” but possibly more “solution architects” who understand both business domains and how to effectively direct AI tools. The value will shift from knowing programming language syntax to understanding how to describe problems and solutions in ways that AI can effectively translate into working systems.

The not-so-distant future in which most software is AI-assisted or even AI-generated isn’t something to fear. It’s an opportunity to focus on what humans do best: defining problems worth solving and imagining solutions that machines wouldn’t conceive of on their own. The value will shift from the mechanics of coding to the creativity of conceptualization and the wisdom to know which problems deserve attention.

As with compilers in the past, we won’t miss the tedious work that AI takes off our hands. We’ll simply move up another level of abstraction and keep building amazing things – just differently than before.

  1. The fact that new cars still seem relatively expensive today, particularly in Europe, is mainly due to high taxation and regulatory compliance costs, not the actual manufacturing expenses or profit margins. Just look at the prices of new vehicles in China.