Let's talk about AI
AI is everywhere now. It's the new Internet of Things for the modern age: search, writing documents, shopping online, even commit messages. It's getting shoved into everything.
But let's ignore that for now and focus on what it's like to be a developer who started coding properly somewhere in the very early 2000s.
In the beginning
There was basic autocomplete. If you were using a typed language in an IDE, you would get an autocomplete list of all the properties and methods associated with an object. This was super helpful for a young me learning my way through C# at uni.
I was super enthusiastic about Linux at the time and tried Vim with Mono, but I couldn't get ctags working correctly, so I gave up and booted back into my Windows system, which had .NET and Visual Studio Express set up on it.
Using the right IDE for the language was really the only way to get a decent experience without a lot of legwork trying to get things working. You could absolutely make other setups work if you knew what you were doing and had the patience to configure it all.
I didn't know what I was doing (still don't, to a great degree), and I had limited patience.
Finding solutions to problems
Stack Overflow was an amazing resource. You could, with enough searching, find a solution that you could either straight up copy/paste, or, at worst, copy/pasta.
But often you'd find a question describing the exact problem you were having, with no answer. At that point you'd be searching Google, trawling through forums, mailing lists, and personal dev blogs looking for ideas to piece together a solution.
If you were a beginner at a thing, sometimes the answers wouldn't make sense, and asking follow-up questions was usually met with RTFM.
Then along came Microsoft
As a teenager my Linux evangelism knew no bounds. I would install it on everything that had a CPU and a LAN port. This included family computers. Feels good, man. Or at least it did until all the support requests came rolling in.
At that time Microsoft were the sworn enemy. They were the big evil. But then they had a change of CEO and a change of direction.
They started contributing code to open source projects, even releasing open source projects of their own. They also brought Linux to Windows itself, thanks to the Windows Subsystem for Linux.
Their biggest contribution to my daily dev life was the humble Language Server Protocol (LSP). This magical thing let you use the editor of your choice (provided it supported language servers) and get full language support as if it were an IDE.
It gave us full IDE features in a humble text editor: renaming a function or variable and having every reference updated, jumping to a definition, and so on.
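Under the hood there's nothing that magical about it: the editor and the language server just exchange JSON-RPC messages. Here's a rough sketch of the two requests behind those features (the file path, cursor position, and new name are made up for illustration):

```typescript
// A minimal sketch of the LSP messages behind "jump to definition" and "rename".
// The protocol is plain JSON-RPC, and line/character positions are zero-based.
// The uri, position, and newName values below are invented for illustration.

const definitionRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///home/me/project/src/app.ts" },
    position: { line: 41, character: 17 }, // the symbol under the cursor
  },
};

const renameRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "textDocument/rename",
  params: {
    textDocument: { uri: "file:///home/me/project/src/app.ts" },
    position: { line: 41, character: 17 },
    newName: "fetchUserProfile", // the server replies with edits for every reference
  },
};
```

Any editor that can speak this protocol gets the same features for free, which is why a plain text editor suddenly started feeling like an IDE.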
I don't dislike IDEs as a general concept. Use whatever you need to do your best work. I do, however, love to experiment with different software, text editors, Git front ends and so on.
Some people like sports, I like random software packages.
On to the modern age
In the last few years AI has taken off in a really big way. As I mentioned at the start, it's being put into everything.
Predictive text is not a new thing, but given enough training data it can, and does, seem like magic. This is true of the current generation of LLMs. They're trained on the world's data: all our emails, our phone text inputs, our searches, all our public GitHub repos, and the internet as a whole.
The results are impressive. As a skeptic, I mostly held off on using such things until a few years ago, when I tried Copilot. I found it to be a mixed bag, with just enough good suggestions to keep paying for it. The biggest improvement was figuring out how to turn off accepting whole suggestions and just accept them line by line.
You're absolutely right!
In the last few months, so-called agentic solutions seem to have taken a leap forward. I tried vibe coding a brand new project and was impressed with what it gave me. It was not a complicated project, just a simple web app. Nevertheless, the design was easy on the eyes and the code was good for the most part, but the cost-to-time ratio was what blew my mind.
$5 of tokens and half an hour got me from an empty folder to a deployed app.
From there I tried out a more complex mobile app, specifically asking for the Expo framework. It got stuck in a dependency loop that burned through $50 of tokens in a single afternoon, largely because I had no clue what I was actually letting it do.
This, then, is where I feel we're at.
Given sufficient domain knowledge, a brand new project, and enough credit, you can work with a coding agent and get 80% of a good solution. It requires:
- a small initial prompt to setup
- small individual steps
- clear and precise prompts along those steps
- careful review of tools run and code generated
- some general LLM rules in place (an example follows below)
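By "general LLM rules" I mean a short instructions file the agent reads before it touches anything. Depending on the tool this might be a CLAUDE.md, a .cursorrules file, or similar. The rules below are just a sketch of the kind of thing I have in mind, not a definitive list:

```
# Example rules file for a coding agent (illustrative, not tool-specific)

- Make small, focused changes; ask before touching more than a few files.
- Never run destructive commands (rm -rf, force pushes, database migrations) without asking first.
- Follow the existing code style, project structure, and test conventions.
- If unsure about a dependency, API, or version, say so instead of guessing.
- Summarise what you changed and why after each step.
```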
With no domain knowledge, a massive starting prompt, and no planning, just letting it crack on is where you start burning money.
Finding solutions to problems now
More and more, for dev queries, I've found myself turning to the AI chats to do my googling for me. I've been pleasantly surprised by just how good they are. It's like an augmented search.
If I don't fully understand what it's suggesting, I can ask follow-up questions. It tells me I'm right to question it, and it does not tell me to RTFM.
I've also tried pointing a context-driven agent at a large internal codebase and found it mostly great at tracking down where and why things happen.
It's not all good, though. There are hallucinations, and sometimes just plain wrong answers, but it still feels quicker than trying to search the minefield that is Google.
The big worry
With it seemingly being everywhere, is that a good thing? Are we about to lose deep knowledge as a result?
Our field is fast-paced, and at this point I've probably learned and forgotten many libraries and frameworks over the years. The only thing that has remained relatively constant is the core language in use. A framework these days is for Christmas, not for life.
So maybe deep knowledge of their quirks isn't all that important? The ability to adapt and get up to speed with whatever is around the corner is arguably more valuable. Reasoning models do help with learning a thing.
What, then, about the energy costs of all this compute power? Climate change is happening at a rapid pace. Is the AI gold rush just adding more logs to the fire that we're supposed to be trying to put out?
What about the fact that companies are hoovering up our data to train these models? I don't remember consenting, but then again given the crazy length and language of terms of service, I must have, right?
Accepting what you can and can't control
I guess the internet age had these kinds of questions too. People would lose knowledge because they could just search for a thing instead of reading a book.
Servers were being spun up at a rate of knots in big data centres that needed plenty of power.
For all our concerns, it seems that big tech is just not going to stop any time soon. All we can hope for is that the race for the best AI leads to more renewable power solutions.
For products that shouldn't have AI in them, we can vote with our wallets and find alternatives, though they're probably quite rare by now.
As a developer, though, I do feel that AI is the next step in the journey. The next layer of abstraction that takes us further from the bare metal.
After all, you wouldn't code a blog in assembly, right??