The Total Hatred For AI in Tech
If you have existed on planet Earth in the last 36 months, you have undoubtedly been exposed to a slew of AI tools and integrations, half of which are questionably executed and the other half questionably even AI. Coming right off the crypto boom, the tech-savvy especially questioned this hype immediately and over the months grew to hate it with a fury.
I do not exempt myself from that, I was there.
For the first months, what I previously followed as promising new tech got turned on its head overnight by the capitalist pieces of shit at OpenAI and friends, which completely soured my mood for anything proclaiming itself AI. I myself got caught in the rumor hate mill: AI uses 200 quadrillion times more power than a Google search, AI uses oceans of water, we need to double our data centers because of AI, AI will kill us all, AI stole my bicycle.
As I do not like blindly hating, and I also started to distrust how enthusiastic these AI CEOs were about proclaiming the end of humanity (how does that fearmongering make them money?), I invested many, many months into learning how the tech works, where it comes from, where it might be going, why it's the big hype now, and how to differentiate its actual usefulness and use cases from the snake oil.
Summarizing this months-long journey could very well be its own blog post, if not an entire book, so I will not go too much into it here. Instead, let me show you what I'm now gonna do about it.
The Quest for Good AI
Now that I feel pretty confident identifying what AI can and cannot do well (or at all), I started wondering: what is an actually good (and ethical!) use of the tech? And a short while ago it hit me: generating commit message bodies.
The problem domain
Everyone hates doing documentation, right? Most can't even be bothered to write a few words for a good commit message, and in many projects the commit log is the only documentation you have. I personally spent some time a few years ago to muscle-memory at least a baseline of commit coherence: I always start my commits with “added/changed/removed/improved/translated” and try to keep my commits small so I can summarize the changes in a few words after that keyword. It served me well and improved my commit history a lot.
But something I never got myself to do was: commit bodies. I always thought that on GitHub etc. you can just click the commit and see the changes, so why should I summarize it myself? But it does help a lot: on a command line you can just use git log and get an overview of everything that changed. And even if you can read the language the code is written in, wrapping your head around the coder's conventions and style takes time; there is a lot of fluff, lots of unnecessary diff noise from removed whitespace and the like. A proper commit body laying out the changes and maybe even explaining some things is absolute peak. Take this example:
How I normally commit removing this one Changelog.md file from my ASP.NET Core SPA project:
removed unused changelog
Now, I mean, it contains literally all the changes, right? But now compare it to this:
removed unused changelog

- Deleted the CHANGELOG.md file that documented how Visual Studio created the project
- The file previously listed steps for creating an ASP.NET Core Web API project
- The file was removed entirely from the repository
Now you know what was in this file, and can probably also tell why I just removed it. Awesome, right?? And here comes the best part:
I didn't write this, my cat did.
May I introduce you to: cat-coder, my model derived from qwen3-coder:30b.
Training your cat
When you first install something like ollama and download a model like qwen3-coder:30b, let me tell you a few things first: it will take some space, like 30 GB of disk and about 25 GB of RAM when running, but you don't need a fancy GPU. In fact, I run it on my laptop that just has a 12th-gen mobile i7 CPU, and it runs absolutely fine. Don't let big green talk you into buying some fancy graphics card; good models run on limited hardware and don't sip a sea's worth of water. qwen3-coder, as the name suggests, is optimized for code-related tasks: it can generate code and analyze code, and that's what we use it for. But first, we have to make it behave.
When you first talk to it trying to get it to summarize stuff, it suffers from the Silicon Valley glazing-and-yapping disease: constantly repeating your question, telling you how smart you are, and engagement-pushing with “If you need help with that just ask me! I can help you! 🥺”. So first, some brain surgery.
Best you go into your ~/.ollama directory and run ollama show qwen3-coder:30b --modelfile > qwen3-coder.modelfile; this will create a so-called “modelfile” for you. A modelfile is ollama's way of letting you tweak some parameters, like temperature and context size, and most importantly, provide system prompts. If you're interested in reading up on these things… good luck, the internet is full of slopcoder AI-generated articles about it that are garbage and wrong, written by people who don't know what they're talking about. But anyway, to make this useful, we mainly have to tweak three things:
- Template: The default template doesn't ingest system prompts
- System prompt: Tell the cat to fucking behave itself
- Context size: give the orange cat enough lasagna to handle your diff-size
The template is simple; there are many default templates out there, and I just made this:
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant"""
This tells ollama to ingest the system prompt before the user prompt that we are about to provide:
SYSTEM """
You are a coding assistant tool that generates code and documentation.
You give short and concise replies and keep your responses on topic.
You only give replies in plain text without markdown.
You only do exactly what is asked of you.
"""
You can adjust and fine-tune this, play around with it, but this is what has worked for me so far. Chances are, when the model does something I don't like (right now, for instance, it sometimes uses dashes for the change list and sometimes rich-text bullets; gotta fix that at some point), I will just add a line to the system prompt. It is basically the recurring intrusive thought the model runs through every time before answering. I could anthropomorphize the stochastic parrot here by explaining how a system prompt is like a learned command your dog picks up at dog school, but let's not kid ourselves: the reason it's here is that it needs to be inserted before every single prompt. LLMs don't learn, and that's why they will never get smarter.
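For example, if I ever pin down that bullet-style issue via the system prompt, the tweak would be just one more line; a sketch of what that could look like (the last line is hypothetical, not part of my current prompt):

```
SYSTEM """
You are a coding assistant tool that generates code and documentation.
You give short and concise replies and keep your responses on topic.
You only give replies in plain text without markdown.
You only do exactly what is asked of you.
You always use plain dashes ("-") for bullet lists.
"""
```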
Lastly, we need to expand the context size, because we will feed it git diff output, and depending on how often you commit, those diffs can get large:
PARAMETER num_ctx 65536
This increases the model's RAM footprint from 20 GB to 25 GB; I only have 40 GB, so that's my limit. You can also set it to 262144, which puts it at 44 GB and certainly makes it more powerful… but one of my RAM slots is soldered, so I can't upgrade from my current 8 GB + 32 GB without spending way too much money.
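Putting the pieces together, the tweaked modelfile ends up looking roughly like this (a sketch: your exported file will also contain other FROM details and parameters from the base model, which you should leave in place):

```
FROM qwen3-coder:30b

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant"""

SYSTEM """
You are a coding assistant tool that generates code and documentation.
You give short and concise replies and keep your responses on topic.
You only give replies in plain text without markdown.
You only do exactly what is asked of you.
"""

PARAMETER num_ctx 65536
```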
Once our modelfile looks good, we can create our new model with this command: ollama create cat-coder --file .\qwen3-coder.modelfile. You can of course name it something different; I just like my lil cat. If you make changes to your modelfile, you'll always have to stop your model with ollama stop cat-coder, delete it with ollama rm cat-coder, and then create it anew. A bit tedious, but I guess that's what training a cat is like: stupid little super-smart bitches having their own mind and being unpredictable.
Putting your Cat to Use
Now, you might be tempted by those little popups in JetBrains IDEs and the like about their integrated AI coding tools, seeing that they support local AI.
It's a trick, they are fucking with you; they want you to get frustrated and pay for their cloud AI. Fuck 'em.
I wasted so many hours trying to make their stupid chat window and git tool work with my cat. They fuck up some weird shit, and I don't know why and don't care. Fuck 'em, we're going command line, baybeee!!!
Always coming back to my old love: git
What is it that we want? AI-generated git log summaries! So why go to some weird IDE when git has it all? The power of git aliases allows us to make reality whatever we want!
First, let's make a nice diff alias that gives us just the changes, so we have something for our model to work with:
git config --global alias.diff-short 'diff HEAD -U0 --color=always --minimal --ignore-all-space --ignore-blank-lines'
Then, pipe it into our model with another alias:
git config --global alias.sum-changes '!git diff-short | ollama run cat-coder \"summarize the changes in this diff using a dashed bullet list, make it short.\"'
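If the shell quoting in those commands gives you trouble (it differs between shells), the same aliases can also be written straight into your ~/.gitconfig; a sketch of what the resulting section looks like:

```
[alias]
	diff-short = diff HEAD -U0 --color=always --minimal --ignore-all-space --ignore-blank-lines
	sum-changes = "!git diff-short | ollama run cat-coder \"summarize the changes in this diff using a dashed bullet list, make it short.\""
```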
When we now run git sum-changes, it calls our cat-coder, injects the output of git diff HEAD, and tells it to summarize the changes. And thanks to our earlier system prompt, it gives us a simple bullet list of all the changes! We can even adjust the prompt right here if only something minor is off and we don't feel like re-creating the model with a new modelfile.
But now, the grand finale:
git config --global alias.ci-gen '!f() { git commit --edit -S -m \"$1\" -m \"$(git sum-changes)\"; }; f'
This one is a bit roundabout, and that's because git aliases are a bit weird about positional parameters: wrapping the git commit call in a shell function lets us place the first positional parameter exactly where we need it in the git commit call. I added -S there; remove it if you don't sign your commits. Basically, each -m (message) flag adds another paragraph to the commit message, and --edit opens your default git editor with the commit information before finalizing the commit. And since the second -m calls our git sum-changes, we get the commit body prepared by our lovely cat assistant. If you now run e.g. git ci-gen "removed unused changelog", you will see the familiar loading icon from ollama, and after a while your default git editor opens with the commit message you provided (“removed unused changelog”) together with a commit body generated by your little AI pawsistant!
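To see the multiple -m behavior on its own, here's a tiny throwaway-repo demo (a hypothetical example using only standard git; the second -m stands in for the output of git sum-changes):

```shell
# Create a throwaway repo so we don't touch any real project.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

# Each -m becomes its own paragraph: the first is the subject line,
# the second becomes the commit body.
git -c user.email=cat@example.com -c user.name=cat commit -q --allow-empty \
  -m "removed unused changelog" \
  -m "- Deleted the CHANGELOG.md file"

# git log shows subject and body stored separately:
git log -1 --format='%s%n%b'
```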
Now you can just proofread the changes (you wrote the code, you know what changed), then save, and your commit, with all its extensive generated documentation, is done. From now on, whenever you want a commit to have a bit more than “minor changes”, just run git ci-gen, wait a few seconds (to minutes, depending on your hardware), and you have it all laid out there, ready for review.
Closing Notes
And please, do review it. Code is complex and AI is extremely fallible; I have not had a single commit body so far where I didn't need to remove or fix something. But! All I had to do was remove a line or two, or fix a wording, and I had an entire changelog ready to go. This encourages me so much to write way better commit messages, and at the same time it doesn't take much time!
THIS is a useful AI tool, THIS is the AI I want, and since nobody else does it, I'll just do it myself.
Sidenote: this just passes the diff into the command line… and on Windows that has an 8k character limit. So yea, keep your commits smol.
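A hypothetical guard for that limit: measure the diff before handing it over, and split the commit when it gets close (check_diff_size and the 8000-byte threshold are my own invention, not part of the aliases above):

```shell
# Warn when a diff would get close to the ~8k character
# command-line limit on Windows.
check_diff_size() {
  bytes=$(printf '%s' "$1" | wc -c)
  bytes=$((bytes))  # strip any padding wc adds on some platforms
  if [ "$bytes" -gt 8000 ]; then
    echo "too big: $bytes bytes - consider splitting the commit"
  else
    echo "ok: $bytes bytes"
  fi
}

# Usage sketch against the real diff: check_diff_size "$(git diff-short)"
check_diff_size "fake diff text"
```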
