The Cost of AI Coding for Open-Source Maintainers
I have been active in open-source development for many years, on critical software projects such as the Rust compiler and libp2p in the past, and the Linux kernel, MAVLink, and PX4 today. From what I have seen so far, one of the biggest costs of AI coding is being pushed directly onto open-source maintainers, especially the ones maintaining large and popular projects.
People often talk about how AI makes contributors faster. Maybe it does. But in open source, that speed usually does not remove work. It transfers it in the worst possible way.
The Review Cost
The pattern is very common now: someone writes a prompt, gets some output, makes one or two commits, and sends the PR right away.
But maintainers cannot merge things by magic. Someone has to verify that the change makes sense, is correct, does not introduce regressions, does not break anything, and is worth the complexity it adds.
Getting help from AI is not the problem by itself. The problem starts when the author does not understand what they submitted. In many cases this is easy to notice: huge changes in very few commits, long PR descriptions full of unnecessary information, patches touching unrelated areas, and weak answers in review. At that point, the missing effort is simply moved to the maintainer. Most of the time (if not always), it is much better to ask maintainers about the change first rather than submitting random work and expecting them to guide you from start to finish.
The Actual Damage
Time is already one of the scarcest resources in open source. Most maintainers use their free time to keep projects healthy, and even then they often cannot dedicate enough time to all of a project's plans and goals. Maintainers already deal with many different problems in their limited time: regressions, issue triage, design discussions, coding, releases, etc. Filtering out AI slop adds even more work on top of that and usually slows down their other work on the project.
The damage is not only wasted time. It also hurts trust. When projects receive enough AI slop, maintainers become more defensive, legitimate contributors get mixed into the same queue, and the quality of communication drops.
How I Deal With This
When I see something that looks like AI-generated work, I do not review it immediately. I first check a few basic things:
- Does it solve a real and already known problem?
- Is there a clear reason for the change?
- Is the patch small enough to review properly?
- Does it stay within one area instead of touching everything at once?
- Can the author explain the design and tradeoffs?
- Does the PR description contain useful information instead of a wall of text?
- Is it improving something real or just rewriting working code for no reason?
- Does it have a proper commit history?
If the answer to most of these questions is no, I do not spend time reviewing it.
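To make the triage concrete, the checklist above can be sketched as a small script. This is only an illustration of the decision process, not a tool I actually run: the `PullRequest` fields, the size thresholds, and the "mostly yes" cutoff are all invented for the example, since real triage is a human judgment call.

```python
# Hypothetical sketch of the triage checklist above.
# All field names and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class PullRequest:
    solves_known_problem: bool    # real, already-known issue or accepted feature
    has_clear_rationale: bool     # a clear reason for the change
    lines_changed: int            # small enough to review properly?
    files_touched: int            # stays within one area?
    author_explained_design: bool # can explain the design and tradeoffs


def worth_reviewing(pr: PullRequest) -> bool:
    """Review only when most checklist items pass."""
    checks = [
        pr.solves_known_problem,
        pr.has_clear_rationale,
        pr.lines_changed <= 400,   # arbitrary example threshold
        pr.files_touched <= 10,    # arbitrary example threshold
        pr.author_explained_design,
    ]
    return sum(checks) > len(checks) // 2


focused = PullRequest(True, True, 120, 3, True)
sprawling = PullRequest(False, False, 5000, 40, False)
print(worth_reviewing(focused))    # → True
print(worth_reviewing(sprawling))  # → False
```

The point of the "mostly yes" rule is that no single check is decisive; a large patch can still be worth reviewing if everything else about it is solid.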
If a patch does not address an existing bug, an accepted feature request or a clearly defined problem, then my first question is simple: why should this be added at all? Complexity is already high enough in most serious projects.