If nothing else, AI was an interesting study about humans
So, instead, I find it much more interesting to study the human side of things: how the advent of AI has put human behavior on full display. And that holds even if you ignore the obvious benefits and positive effects it has had on my personal development work.
So, in no particular order, here are things I have noticed, and opinions I have formed about them.
A lot of people go straight from building nothing to building exploitative, unethical shit
I find it interesting how many people suddenly came out of the bushes and started building automations, people who were never before seen or heard from as builders of any kind. I see this most in my looser real-life surroundings, but also online, where you can see how much unethical, tiresome, and exploitative stuff is being created.
Mass-produced brainrot short videos for TikTok and YouTube, Instagram "model" accounts linking back to cam site offers, spammed AI-generated e-books and websites/blogs - all from people who didn't have YouTube accounts before, didn't know how to host a website, and never showed any interest in creative or content writing.
Sure, there are plenty of others who build cool stuff, often by integrating LLMs with their existing drive for experimentation - but I can't help but notice how vocal people are about exploiting possibilities that were already there long before. I find this interesting, because the people who have been building things for a while go about it in a completely different way.
I don't remember programmers complaining about automation
I find it curious that three years ago, you could have walked up to a programmer and complained about automation taking jobs away, and their immediate response would have been a shrug, or "yeah but if that work can be automated, do you really want to do it by hand?".
I can't say whether these are the same people as the surprisingly vocal "front against developer automation" subset that has been making its voice heard far and wide lately, but I can say that I have never heard a single programmer be this vocal against job automation before.
It makes me think that a) a lot of people were fine with automation as long as it couldn't even potentially affect them, and b) apparently, a lot of people have never spent time thinking about what our job actually is. It is fluid in nature, a bit like gold miners who settle for a few weeks at a creek; eventually that creek stops producing gold, so they move on. If I do the same work today that I did last year, then something is wrong. My job is to swoop in where there is a need, and then work on making myself obsolete, because the system I built runs fine without further maintenance.
This has always been the case, and I never really realized how large the portion of software developers is who treat an inherently dynamic profession as something static. Programming is not fun when it is static, and neither is it challenging or motivating to manually fix things in the database every day for five years, which is essentially what this comes down to. Ideally, we never write the exact same line of code twice - but if we have to, it had better be designed in a way that an LLM can easily fill in, so we can focus on the bigger picture.
Many people don't have the attention span to use LLMs
For something that gives you near-instant results, I find it utterly fascinating how many people don't have the patience to sit down and build something with AI support. I have heard a lot of complaints along the lines of "AI isn't that smart after all haha" on topics that are totally achievable with some iterative programming and clever design work. Scratch that, it doesn't even have to be clever, it just has to be design work.
The other week, I sat down for a relaxed evening and tried to build a whole working game in Python / PyGame. My goal was to not write a single line of code myself, and originally I thought I would have to split it up into components that connect to each other, and then let ChatGPT fill in the boilerplate.
Instead, I ended up with the surprising realization that you can generate 2900 lines of code all in one file (about 5 minutes of pure writing time in the end) and end up with a puzzle game based on JSON files. It gives you an infinite canvas with clue cards, police reports, a computer menu where you can search for case files and discover clues in them, main menus, options, automatically generated dialog inserts (optionally voiced via the ElevenLabs API), and an interaction mechanic where you can phone your contact and submit the clues you discovered. There was A TON of stuff in there: features, menus, and a very nice way of using the underlying JSON files to dynamically expand the story, with clue cards written in Markdown syntax that can link to other clues or unlock certain cards based on the clues you have found.
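To give an idea of the data-driven side, a clue card file might look roughly like this - a reconstruction of the general shape from memory, with hypothetical field names, not the actual project files:

```json
{
  "id": "card_docks_note",
  "title": "Note found at the docks",
  "body": "A half-burnt note mentioning a *meeting at midnight*, signed by [the harbormaster](clue:harbormaster).",
  "unlocked_by": ["clue_fire_report"],
  "reveals": ["clue_harbormaster"]
}
```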
Like I said originally: I did not write a single line of code on that thing, and it sits on my disk ready to publish if only I could bring myself to actually be creative and write a murder mystery, because I completely lost interest after the technical side was done.
Here's the thing: this was an edge case scenario to see how far I could get without any kind of manual coding. I can't even remember when I last saw more than a thousand lines of code in one file at work. If our ideas don't fit into 2900 lines of single-file code, then it was probably time to take a step back and split things into components about 1900 lines ago. There are several features and functions in this game that would have taken me longer to research and prototype than the whole evening project took, never mind getting the math right on vectors, saving game states, or building an infinite canvas with sorting and "bring stray cards home" functionality. All I did was direct this movie, and I am honestly quite surprised by how far it went with just me testing stuff and saying "this doesn't work the way I want it yet" until it all worked out.
But just this process of sitting down for a full evening and designing/testing/prompting probably puts me in the upper 1% of people who have built something this large and complex; a lot of people give up well before that. Sure, there were 15 years of experience designing and building software that went into this project - but honestly, apart from knowing that I wanted it extendable under the hood by basing the actual story on JSON files, I didn't do anything that 5-YOE me couldn't have come up with. I went into that project with a complete "user mindset" and just came up with things that would be cool on the fly - I only realized that I probably needed a main menu about an hour into the whole process.
I find it interesting how rarely you find people building complete projects with AI, and how often I have heard the complaint that it is only good for short bursts of boilerplate. I have built several projects, games, and full automation pipelines in the last three years where I only did limited coding myself, and I very much enjoy this process of just daydreaming about things that would be cool and seeing what the machine comes up with. When working in Python, it rarely makes actual mistakes; at worst I have to do some fine-tuning or notice a performance issue here and there. But getting code that didn't compile? That hasn't happened in months.
I find it interesting how much our attention spans play a role in something that provides damn near instant results.
My boss still can't run a script I give him
One of my main uses for AI at work is actually coming up with simple automations, especially little GUIs that wrap the code in a nice, modern-looking dark theme, with a drag-and-drop area to throw a file on and maybe some buttons and dropdown menus for settings. GUI development has always been my least favorite thing in the world (I would rather try to fix a color bug in CSS), so the fact that I can suddenly take my most useful scripts and make them usable for non-coders has been a complete game changer in my book.
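The shape of these wrappers is almost always the same. Here is a minimal sketch in plain tkinter - with a simple file picker standing in for the drag-and-drop area (the real tools use a third-party package for that), and a hypothetical process_file() doing the actual work:

```python
# Minimal GUI wrapper sketch: pick a file, run the script logic, show the result.
import tkinter as tk
from tkinter import filedialog, messagebox

def process_file(path: str) -> str:
    # Placeholder for the actual script being wrapped.
    return f"Processed {path}"

def on_pick():
    path = filedialog.askopenfilename()
    if path:
        messagebox.showinfo("Done", process_file(path))

root = tk.Tk()
root.title("My Little Automation")
root.configure(bg="#222222")  # poor man's dark theme
tk.Button(root, text="Pick a file...", command=on_pick).pack(padx=40, pady=40)
root.mainloop()
```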
That being said, the interesting part here isn't that these things are now possible, but rather that if you were to tell your boss "hey, install Python from the Windows Store and then put this file on your desktop to run it" - you might as well save your breath. He won't do it, he won't understand what you even want from him, and he will file you away in his mind as someone who insulted him by calling him stupid.
If you instead compile it to an executable with another AI-generated Python script that wraps pyinstaller and a few configurations, you come out with a .exe that you can send him, and he'll file you away in his mind as someone who fixes his most annoying problems.
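That wrapper script is less magical than it sounds; at its core it just drives the PyInstaller command line. A minimal sketch, assuming PyInstaller is installed and with made-up script and tool names:

```python
# Sketch of a build wrapper around the PyInstaller CLI.
import subprocess
import sys

def build_exe(script: str, name: str) -> None:
    subprocess.run(
        [
            sys.executable, "-m", "PyInstaller",
            "--onefile",    # bundle everything into a single .exe
            "--windowed",   # no console window for GUI tools
            "--name", name,
            script,
        ],
        check=True,
    )

if __name__ == "__main__":
    build_exe("my_tool.py", "MyTool")  # hypothetical names
```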
Much like the issue with the attention span: that guy has the same access to AI as anyone, and he could totally type "hi, my coworker said I should install Python through the Windows Store, how do I do that?" and overcome his knowledge gap in five minutes. But instead, he is much more likely to use AI to write emails (which sounds horrible, shows immediately, and probably takes longer to make usable by removing the emojis and em-dashes than writing by hand would), or to have it fill in the technical details in a 40-page contract that then hopefully crosses my desk for review, so I can replace the superficial nonsense with the details that actually matter.
In fact: that tool that I wrote? He no longer needs me for that; he could walk up to me and say "look at me, I'm the captain now". There was nothing special to it, just a bit of looping over files, firing off a web request, and parsing the results. Nothing that a non-programmer could write by hand, but definitely something that he could ask the machine for the same way I did.
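Roughly the shape of that tool, as a sketch - the endpoint and field names here are made up, not the real internal service:

```python
# Loop over JSON files in a folder, send each one to an API, print the result.
import json
from pathlib import Path

import requests

API_URL = "https://example.internal/api/check"  # hypothetical endpoint

for path in Path("input").glob("*.json"):
    payload = json.loads(path.read_text(encoding="utf-8"))
    response = requests.post(API_URL, json=payload, timeout=30)
    response.raise_for_status()
    result = response.json()
    print(f"{path.name}: {result.get('status', 'unknown')}")
```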
I find it very interesting how we further, deepen, and narrow our LLM usage down to the approaches we already knew before we started using them - there are very few people who use LLMs to break out of old patterns and establish new ones; most people just use them to do the same things as before, but hopefully faster.
Your boss won't run his own scripts in 20 years either, our jobs are safe.
A lot of software developers still don't improve their immediate surroundings
I have always found it a bit odd how many people you meet in our trade who don't fix the things within their immediate area of influence - things they then, often enough, complain about.
It used to be that I had a little bit of understanding in my heart for those poor, overworked souls who didn't have the time to sit down and think of possible solutions to the problems that ailed them - back when these things took an hour or two, or ten if the problem was complicated and needed research in a field we weren't trained in.
But these days? Holy hell are people not seeing the possibilities in front of them. It takes under an hour to come up with a Chrome extension that is only meant for personal use and removes manual labor from a third-party system. It takes five minutes to go from idea to a script that automatically resizes images into thumbnails of different sizes for a website project, and you can integrate that into the build pipeline so it completely removes both the manual work and the risk of slow-loading pages. It takes no time to generate an AutoHotkey script that automates annoying busywork. Very little work to write a script that goes into your startup folder and runs every morning to generate a report of yesterday's server errors, or one that collects faulty database entries and alerts you. Almost no time to write a little script that fires off all JSON files in a folder as web requests and parses the results, like the sketch above. Text replacement tools that strip production customer data out of files so you can properly test with them. Using emails in your inbox as triggers to run scripts or generate JIRA tasks.
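That thumbnail script, for instance, is barely more than this - a sketch assuming the Pillow library, with made-up folder names and sizes:

```python
# Resize every JPG in a folder into several thumbnail sizes.
from pathlib import Path

from PIL import Image

SIZES = [(1280, 720), (640, 360), (320, 180)]  # hypothetical target sizes

for src in Path("images").glob("*.jpg"):
    for width, height in SIZES:
        img = Image.open(src)
        img.thumbnail((width, height))  # shrinks in place, keeps aspect ratio
        out = Path("thumbs") / f"{src.stem}_{width}x{height}{src.suffix}"
        out.parent.mkdir(parents=True, exist_ok=True)
        img.save(out)
```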
A tool with a GUI that logs into an SSH server and searches log files for an ID? I do that several times per day, and the script I use for it took an hour to build - and only because I am a complete noob with SSH and needed to learn the absolute basics before I could take a console command and turn it into repeatable Python logic.
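Stripped of its GUI, the core of that tool boils down to something like this - a sketch assuming the paramiko library, with placeholder host, user, and log path:

```python
# Run a grep over a remote log file via SSH and return the matching lines.
import shlex

import paramiko

def search_log(entry_id: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Assumes key-based auth via the local SSH agent; host/user are placeholders.
    client.connect("logs.example.com", username="me")
    try:
        # The same command I used to type by hand, now repeatable from Python.
        cmd = f"grep {shlex.quote(entry_id)} /var/log/app/app.log"
        _stdin, stdout, _stderr = client.exec_command(cmd)
        return stdout.read().decode("utf-8", errors="replace")
    finally:
        client.close()

print(search_log("ORDER-12345"))  # hypothetical ID format
```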
The point I am trying to make is this: I find a good application for LLM usage just about every single day, and I have automated literally hundreds of small and large steps over the past two years that I might not have automated without it, for time reasons alone. But in times where 90% of such automations take less than ten minutes, I find it fascinating how few people are making similar improvements to their immediate surroundings. It used to be easier to explain back when you needed a pretty concrete idea to type into Google, but nowadays you can quiz the LLM on possible solutions even if you have zero clue where to start.
Takeaway: Humans are the most interesting part of AI
I want to reiterate: I have read the same science fiction books as everyone else, so I know to be wary of AI, especially in its current unregulated Wild West phase of the adoption cycle. I have also read the Metro books, so I know what life will be like in a few years' time. And a good part of me misses the days five years ago when I routinely had fifteen tabs of StackOverflow discussions open, trying to see if their particular solution applied to my particular problem.
A lot has changed in the past few years; maybe the heaviest impact is how it has sucked the joy out of most creative stuff online, where you first need to dissect an image to see if your brain will be able to receive neuron activation from it. There was a post about exactly that phenomenon that I found quite fascinating. Artwork is quite similar: right behind me sits a book that I used to love just flipping through, and today I would completely disregard the same images because they look totally AI generated.