Oh wow. How has it been over 3 years since I last posted? So much for my new year’s resolution in '23 to write one post per month! I’m at least 36 posts behind now. This is almost embarrassing.
I guess with all the 'on the job' writing I have been doing, my writing itch got scratched enough. But another reason I don’t enjoy it as much anymore is that I started using ChatGPT to help me write. It turns out that when there is no challenge, I tend to lose interest. The absolute tsunami of stolen content that writers pass off as their own on blogs and in articles certainly did not help either.
How I use AI
So before we dive into the actual subject, I want to make one thing clear: I do use AI more or less daily in my work, and I’m not "against" AI. It has been a nice boost to my productivity and (more importantly) my motivation, since it is actually good at a lot of the "boring stuff": scaffolding the plumbing for a new service, writing some mappers, or producing the kind of service documentation I’ve written dozens of times before.
The AI integration in IntelliJ is fantastic. I can often generate most of the chain of CRUD calls for basic database interactions, and adding some Python scripts to handle recurring tasks is much faster: instead of copy-pasting from Stack Overflow, I have my personal copy-pasting junior developer. And more often than not, the code even runs!
I use it extensively when writing architecture documentation. It’s great at explaining some of the choices we made based on my input, and it can quite easily add some basic explanations of tools like Kafka or Spring to the mix. This is where a bit of experience with LLMs comes into play, since the defaults tend to be much too verbose for technical documentation. Fortunately, instructions to reduce the verbosity generally help improve the information density.
It’s not "fun" though; this kind of writing is just "work".
How I won’t use AI
In my experience, LLMs are great at copying and remixing information, but they can’t imagine and create. Of course it appears that they can, and many people claim they can, but this just shows how much of what we do has been done before. The more specific the problem, the more specifically you are guiding the LLM toward a certain piece of code or documentation it’s been trained on. By giving it very specific instructions on how I want the code to be structured, I can even get Copilot to spit out example code I’ve put online in blog posts.
I have not seen a productivity improvement on the actual fun parts: solving more complex pieces of logic, implementing a very hot path efficiently, or following "state of the art" best practices that are simply different from most of the outdated example code it’s trained on.
And I’m not mad about it. Creative problem solving, where you really have to focus on what you’re building, is for me one of the most fun parts of writing software; it’s something that can trigger a complete "hyperfocus". It is the actual craftsmanship of software engineering, where professionalism and computer science fundamentals intersect.
I see a lot of claims that AI will be able to do this, soon. Especially in the "I don’t write code by hand anymore" crowd. But when I look at what they actually manage to produce, it’s never the "fun" stuff. It’s always boring boilerplate-level code, or small changes. Never big ideas. But I don’t believe actual software engineering was ever about the boring bits. As long as I have been programming I’ve tried to automate the boring pieces, and every great developer I’ve met has this exact same automation urge.
What worries me
AI is here to stay. So how is it going to impact us in the long term? When people go "all in" on not actually understanding code, who will be making the decisions? What happens when Google decides it wants to force you to use a new language they designed?
The first obvious major issue, one that seems to be completely overlooked by CTOs in general, is that you’re handing the keys to your IT kingdom to some very American and very capitalistic companies. Companies that are currently investing billions into hard- and software at a massive loss, and that need to make up for that loss in the coming years. Once the dust has settled on who wins the AI wars, that debt needs to be settled too. If a company is all in on AI, what is preventing these vendors from massively raising prices? I fully expect them to ask $10,000 or more per seat per year to recoup the costs.
Another issue I already see happening is that junior developers are getting completely stuck. Not just stuck on problems, but also stuck in their personal development. There is already an issue with junior developers raised in the iPhone age not really understanding how computers work, since they were never exposed to the nasty internals we had to deal with. Moving around drivers and utilities in your autoexec.bat and config.sys just to get a game to start was hell, but it absolutely started us on a path of understanding the internals. Modern closed systems don’t even have a hood you can peek under.
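For anyone who missed that era: a typical config.sys for squeezing out enough conventional memory to run a game looked roughly like this (reconstructed from memory; exact paths and drivers varied per machine):

```
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\EMM386.EXE NOEMS
DOS=HIGH,UMB
DEVICEHIGH=C:\DOS\SETVER.EXE
FILES=30
BUFFERS=20
```

Get the load order or the EMM386 flags wrong and the game simply refused to start, and figuring out why was exactly the kind of forced debugging that taught you how the machine actually worked.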
And now, with AI being the "norm" and companies actively pushing developers to rely on it, I see them getting stuck. Multiple people have reached out to me over the past year who were going around in circles. They didn’t even try to solve the underlying problem; "I have this exception and Copilot doesn’t know" was all I got. And mind you, some of these developers were "senior" software engineers.
With juniors especially, the pressure to perform as fast as their peers is massive. For them it’s almost impossible to explain to a product owner that they’re taking three times as long to move a ticket in Jira from left to right because they want to actually understand what they’re doing.
And this is where we, the "very senior" engineers, need to step up. We need to guide other engineers and shield them from pressure, so they can actually grow to a level where, hopefully, they will eventually be able to replace us.
What’s next?
So to conclude this, I’m going to do the fun stuff myself. I don’t need a robot to cook and eat my food, and I also don’t want a robot to do fun stuff like writing a blog post or interesting code. I am better at it anyway, and if we ever get to a singularity level of Gen AI that can improve itself, our main concern is whether or not we’re going to be turned into paperclips, since no one will have a job anymore anyway.
I am not going to pressure myself into trying to keep a certain pace in blog posts, but I do intend to write more. And I also intend to just write about things that I enjoy writing about, even if it’s of less interest than my typical "How to do X in Java". And I guess we will always need people to create new things that those AI companies can then take and profit off of!
So here it is! Aside from a spell-checker, this is a Certified 100% AI-Free post!