Yes, yes, I'm one of those AI users now

8 May 2026

It's been a weird year. Last May I was working full-time for a client when they announced that they wanted to try Claude Code. Suddenly, I was furnished with an account and encouragement to try it out. I typed some clumsy prompts into Sonnet 3.5 and it had a clumsy go at writing Rust. It wasn't that great for my typical development work but I found a few use cases where it could solve annoying or tedious problems.

Fast forward to now. New models are extremely capable and my job is infused with AI every day. Whether I'm writing code, debugging, understanding a new codebase, or scripting an unfamiliar toolchain, the robot helps. I've been doing this kind of work for a while. I can state confidently that I'm at least three times faster for similar or better technical output. The benefits flow directly to my clients—higher quality for fewer billable hours.

I'm sharing this because I owe this blog an update. The last time I discussed AI directly was early last year, and it would be fair to describe me as a hater. I predicted some of the controversies around open source contributions and preserving human communities online, and I was pretty bummed about the inevitability of it all. I assumed I would be part of the tribe writing code the old way. Not for any particular reason—just to stick it to the people who messed up those things that I like. Clearly, that's not how things played out.

Thing is, I like being paid to write software. No sane business is going to pay me to spend two days typing out a Rust module that a frontier LLM can whip up in five minutes, even if it takes me half an hour to review. There's a great deal of work and judgment around that coding, which is why I still have a job, but it's crystal clear that if I rejected LLMs entirely I'd be less employable. I'm a consultant, which puts me at the pointy end of these discussions. If I want to get paid I need to meet the market. For now this remains more attractive than becoming a plumber.

Unfortunately, it appears that AI is going to be pretty disruptive to society. Copyright concerns are being waved away and our leaders are clearly more interested in having AI than not. I live in a democracy, flawed as it is, and the outcome is what it is. Regulations around misinformation and economic support for affected people will matter a great deal. Immiserating myself in protest will not, although it might get upvotes on some corners of the internet.

Note that I'm only talking about employment or contracting here. If you're writing and selling software by yourself then you can take as much time as you want, given sufficient cash flow. If you're writing hobby open source software then anything goes—write it however you like, and police contributions however you like. No arguments from me.

Of course, the LLMs won't stop here. Right now an experienced developer provides the big picture thinking and a great deal of supervision. Even among those who've taken up AI there's a widespread view that "it's just another tool" or "there'll always need to be someone who really understands the computers". It's both comforting and the most appropriate way to use this tech today, given its current capabilities. I also suspect it's wrong in the long term.

Models are already quite smart. Given the right prompt and context they can do wild things. As long as my work is primarily looking at a screen and typing things on a keyboard the LLMs are still coming for me.

Make hay while the sun shines, I guess?


Serious Computer Business Blog by Thomas Karpiniec