I am not hostile to the use of AI. I am against arbitrage on trust. It is wrong to trick people into believing a human wrote or created something when none did.

No AI was used to write the text on this site. If you come to this site, you came to read me write something, and it would be a betrayal of the reader to instead serve them machine-generated text. And this site attempts to be as factual as possible; large language models will invent new facts that did not exist before. That isn’t acceptable for something that should reflect the historic record to the best of my knowledge.

(And if you are thinking “but em-dashes!”: That’s actually a feature of the software I use to build this site. Using non-ASCII punctuation is not a crime!)

I do not use AI-generated or AI-enhanced images for most of the same reasons. That’s just not what anyone came here for, and they do not reflect the historic record, so I will not use them.

On the other hand, I have found that AI is brutally effective at proof-reading, and I use it for that on this site. Until mid-2025, I resisted any kind of computer assistance to correct technical problems with my writing (grammar & spelling). That isn’t because my writing is perfect. It’s really not! But software tools invariably made it worse. They mostly followed rules invented for bad reasons, and hammered out all the individuality and personality from my writing.

AI is different, because you can tell it specifically not to re-write your text according to made-up rules, and it listens. I use a prompt like this for GitHub Copilot:

> Please proof-read the file [FILE]. Only correct outright errors, misspellings, poor grammar, and so on. Do not fix idiomatic or conversational grammar - it is a personal reflection, not an academic paper or newspaper article. Write corrections out to [TEMPORARY FILE] - give a few words of context and what should be fixed.

That finds only genuine syntactical errors in my text, and I manually apply the suggested changes. This rules, and is something that just was not possible before the early 2020s without paid human proof-readers – who often have many of the same failure modes as pre-LLM software anyway. Most importantly, it is not trust arbitrage. Nobody would be upset if they found out I use one of the aforementioned pre-LLM tools; AI is a better way of doing the same thing.

Elsewhere, I do not think it is a betrayal of the reader to use AI assistance for the code that backs this site (the parts I wrote are a Hugo template), because nobody came here expecting hand-written HTML and CSS. If that ship ever sailed, it was in the 1990s. So if need be, I don’t mind using AI assistance with the invisible parts of the code, but I enjoy doing it without; it was my day job for many years and I can do it with my literal eyes closed.