Thinking AI will replace Software Engineers is just silly
I wouldn't say the same about Programmers though.
Lately, there’s been a lot of talk about whether AI will replace software engineers. Big names are fueling the conversation. Salesforce’s CEO claims they won’t hire new engineers in 2025 because AI tools have increased their productivity by 30%. Mark Zuckerberg said that AI in 2025 will be able to do a mid-level engineer’s job.
It’s enough to make any software engineer wonder: Will I still have a job by the end of 2025?
Let’s analyze what AI can and can’t do today, and how you can prepare yourself for the future.
Consider supporting my work and subscribing to this newsletter.
As a free subscriber, you get:
✉️ 1 post per week
🧑‍🎓 Access to the Engineering Manager Masterclass
As a paid subscriber, you get:
🔒 1 chapter each week from the book I am writing “The Engineering Manager’s Playbook”
🔒 50 Engineering Manager templates and playbooks (worth $79)
🔒 The complete archive
AI vs. AGI
First, let’s clarify what we mean by AI. The tools we use today, like ChatGPT, are examples of Generative AI (GenAI). These systems analyze patterns in massive datasets to generate text, code, and other outputs. They’re powerful, but they don’t “think” - instead, they predict text based on statistical analysis.
AGI, or Artificial General Intelligence, would go further. It’s a hypothetical system that could think and reason like a person across a broad range of tasks. We don’t have that yet, and we might not have it for decades.
Today’s AI is more like a fancy autocomplete. It’s decent at following instructions but still struggles with creativity, context, and judgment. This difference matters when we ask what AI can really replace.
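To make “fancy autocomplete” concrete, here’s a toy next-word predictor in Python. It does nothing but count which word follows which - real LLMs are vastly more sophisticated - but the core idea of predicting the next token from statistics rather than understanding is the same:

    from collections import Counter, defaultdict

    # Toy "autocomplete": count which word follows which in a tiny corpus,
    # then always predict the most frequent follower.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    followers = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        followers[word][next_word] += 1

    def predict(word: str) -> str:
        return followers[word].most_common(1)[0][0]

    print(predict("the"))  # prints "cat" - no thinking involved, just counting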
AI Today
Use ChatGPT, GitHub Copilot, Cursor, or similar tools to write code, and you’ll quickly run into the following:
Low-Quality Output: Code with errors, security vulnerabilities, or code that is simply wrong because it relies on outdated packages.
Hallucinations: The LLM perceives patterns or APIs that don’t exist, producing nonsensical or inaccurate output (an example follows below).
Super-Bugs: Bugs that take 10x the effort to fix compared to bugs in manually written code.
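Here’s the kind of hallucination I mean - an illustrative sketch, not real output from any one tool. The requests library has no get_json method, yet an LLM can confidently produce code exactly like this:

    import requests

    # Looks plausible, but requests.get_json() does not exist - it is a
    # hallucinated API, and this line raises AttributeError at runtime.
    data = requests.get_json("https://api.example.com/users")
    print(data["users"])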
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. ~Kernighan's Law
Developers already spend a big chunk of their time reviewing and debugging code rather than writing it. When AI generates even more overly “clever” code that is hard to follow, debugging gets tougher still when something inevitably goes wrong.
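To see what Kernighan meant, compare a “clever” one-liner with a boring version of the same logic (an illustrative sketch, not from any real codebase):

    data = {"a": [1, 2, 3], "b": [], "c": [4, 5]}

    # Clever: correct, but hard to step through or add logging to.
    clever = {k: [x for x in v if x % 2] for k, v in data.items() if v}

    # Boring: the same result, but every step is debuggable.
    boring = {}
    for key, values in data.items():
        if not values:
            continue
        boring[key] = [x for x in values if x % 2 == 1]

    assert clever == boring  # both give {"a": [1, 3], "c": [5]}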
These are big problems. Hard to solve. And in my opinion impossible to tackle in 2025, or even in the next few years.
Prompt Engineering is Engineering (It’s in the Name)
Using AI in a way that actually increases productivity requires a new skill: prompt engineering. This doesn’t mean just asking questions, but asking the right questions in the right way: creating clear, specific instructions to get the answers we want from AI.
The clarity of your prompt determines the quality of the result.
And we’re all still figuring out how best to do that.
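As a rough illustration, here’s that difference in a minimal sketch using the official openai Python package (the model name, prompts, and URL are examples I made up):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Vague: the model has to guess the language, libraries, and error handling.
    vague = "Write a web scraper."

    # Specific: constraints up front leave far less room for hallucination.
    specific = (
        "Write a Python 3 script that fetches https://example.com, "
        "prints the href of every <a> tag one per line, handles timeouts "
        "and non-200 responses, and uses only requests and beautifulsoup4."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": specific}],
    )
    print(response.choices[0].message.content)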
Are You a Programmer?
If you see your job as just “writing code”, I am sorry to say that AI will replace you. And it has started doing that already.
For example, I can ask ChatGPT right now to create a Python script for a simple web scraper. It will give me the code for it in seconds - no human programmer is needed.
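The result usually looks something like this - a minimal sketch of the kind of script it returns (example.com stands in for whatever site you name):

    import requests
    from bs4 import BeautifulSoup

    def scrape_links(url: str) -> list[str]:
        # Fetch the page, failing loudly on timeouts and HTTP errors.
        response = requests.get(url, timeout=10)
        response.raise_for_status()

        # Parse the HTML and collect every link's target.
        soup = BeautifulSoup(response.text, "html.parser")
        return [a["href"] for a in soup.find_all("a", href=True)]

    if __name__ == "__main__":
        for link in scrape_links("https://example.com"):
            print(link)

Useful, sure. But notice what the script doesn’t decide: whether scraping that site is allowed, what happens when the markup changes, or how this fits into a larger system.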
But software engineering has always been about much more than just writing code. And as Martin Fowler wisely said:
Any fool can write code that a computer can understand. Good programmers write code that humans can understand. ~Martin Fowler
Why is it so important for humans to understand code? Because all code, whether written by humans or AI, can have bugs, and eventually, it will need to be updated. Either by humans or AI.
If a human has to make that change and the AI-generated code was never properly reviewed, the change will take unreasonably longer than it should. In some cases, being slow to fix an issue can lead to catastrophic results (actual livelihoods can depend on the software).
If we rely entirely on AI for this, we’d better hope it’s advanced enough to figure out what went wrong and fix it on its own (that’s AGI level). Put simply, AI would need to do at least these three things reliably (sketched in code below):
Locate the issue given a trigger
Prompt engineer itself to lead to a fix
Produce code that actually fixes the issue
And all of that with no human interaction.
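In code, that hypothetical loop would look roughly like this. Every function below is made up - each stub hides an unsolved problem, which is exactly the point:

    # A purely hypothetical sketch of autonomous bug-fixing (AGI territory).

    def locate_issue(error_report: str) -> str:
        # Step 1: map a crash, failing test, or log line to its root cause.
        raise NotImplementedError("reliably locating the issue")

    def build_fix_prompt(error_report: str, faulty_code: str) -> str:
        # Step 2: the AI prompt-engineers itself toward a fix.
        raise NotImplementedError("self-prompting toward a correct fix")

    def generate_verified_fix(fix_prompt: str) -> str:
        # Step 3: produce code that provably fixes the issue, with no regressions.
        raise NotImplementedError("generating and verifying the fix")

    def self_healing_loop(error_report: str) -> str:
        faulty_code = locate_issue(error_report)
        fix_prompt = build_fix_prompt(error_report, faulty_code)
        return generate_verified_fix(fix_prompt)  # no human anywhere in this chain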
From what I can see, we’re very far from this reality. Right now, we’re celebrating and getting very excited about AI being barely able to do the third one.
Even if AI could do all of this, I don’t know who thinks it’s a good idea to live in a world full of black boxes that no human can really control.
That’s a scary and quite dark world, in my opinion.
Back to reality: If you see yourself as a problem solver who uses AI as a tool to meet business goals, you’re in the best place.
The AI wave has handed you many new tools to work with. Learning to use them well will make you 10x more impactful in whatever you do, and put you ahead of the people who are paralyzed by the fear that AI will steal their jobs.
Related posts
🔥 The hottest programming language in 2025: English
Will AI replace mid-level engineers in 2025?
The Weekend My AI Came Alive and Why OpenAI Had to Stop It!
The existential danger posed by AI isn’t apocalyptic; it’s philosophical
Useful links
👨‍💻 Become a better Software Engineer with CodeCrafters (use my partner link to get 40% off and build your own Redis, Git, Kafka and more from scratch).
🎯 Book a mock interview with me (System design or Behavioural & Leadership) to smash your next real interview!
👋 Let’s connect on LinkedIn!