By February 17, 2026, Godot’s public problem had a catchy label: “AI slop PRs.”
The line that actually mattered was quieter and worse. Maintainer Remi Verschelde said Godot spends so much time helping new contributors get pull requests into mergeable shape that he does not know how long the project can keep doing it.
That is the real story. Not that maintainers suddenly dislike AI. Not that newcomers became unwelcome overnight. The pressure point is narrower: a contribution lane that used to assume costly human effort now has to verify whether the effort was real before review can even start.
The review lane no longer starts from trust
The public turn started on February 16, 2026, when game director Adriaan de Jongh posted that Godot’s GitHub was seeing “increasingly many” LLM-generated pull requests, calling them a “MASSIVE time waster for reviewers” and saying contributors often did not understand their own changes. Later that day, Verschelde replied with the maintainer version of the same diagnosis: Godot now has to second-guess new-contributor PRs multiple times per day.
His questions were the plot:
- Is the description just verbose LLM output?
- Was the code actually written and understood by the submitter?
- Were the tests really run, or merely claimed?
That is already a different job than ordinary code review. Review used to start from a working assumption that the person on the other side had done enough grounded work to explain the change, reproduce the bug, and respond to follow-up. Once that assumption weakens, the lane turns adversarial before anyone has said yes or no to the patch itself.
Verschelde made the human cost explicit in a second post the same day. “AI slop PRs,” he wrote, were becoming “increasingly draining and demoralizing” for Godot maintainers, and more funding to pay more maintainers was the only viable solution he could think of.
Godot already had the rules. The cost is enforcing them.
This is what makes the episode more interesting than a culture-war flare-up. Godot was not speaking into a policy vacuum.
Its own pull request guidelines already say contributors should only submit code they understand and are prepared to explain to a maintainer. The same page says AI use is discouraged, contributions made entirely by AI are prohibited, and any AI assistance that materially shaped a submission should be disclosed. Contributors are also told to proofread, test, and respect reviewer time.
That is the old social contract, written down.
The contract says a PR should arrive carrying at least three signals:
- authorship or accountable adaptation,
- enough understanding to answer review,
- enough testing to justify reviewer attention.
What Godot’s February 2026 moment exposed is that having the rule and enforcing the rule are different budgets. If submission becomes cheap enough to mass-produce plausible-looking diffs and descriptions, the maintainer has to spend scarce time re-establishing a trust boundary the workflow once got for free.
A pull request is not a patch. It is a labor funnel.
The phrase “bad PR” can sound small from the outside, like a mildly annoying diff.
Godot’s documentation on testing pull requests makes the hidden work visible. Reviewers and testers may need to fetch CI artifacts, run platform-specific builds, or compile a PR branch from source when artifacts have aged out or the needed configuration is not covered. That is not decorative process. It is real operator labor attached to each claim that a change works.
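To make that labor concrete: on GitHub, every pull request is exposed under a standard `pull/<id>/head` ref, which is what lets a reviewer fetch and build a contributor's branch locally. The sketch below is illustrative, not taken from Godot's docs verbatim; the PR number is a placeholder, and the helper function is a hypothetical convenience for building the refspec:

```shell
# Build the git refspec GitHub exposes for a pull request.
# The "pull/<id>/head" ref is GitHub's standard mechanism;
# pr_refspec is a hypothetical helper, not a real tool.
pr_refspec() {
  printf 'pull/%s/head:pr-%s' "$1" "$1"
}

# Typical reviewer workflow against a placeholder PR number:
#   git fetch upstream "$(pr_refspec 1234)"
#   git checkout pr-1234
#   scons platform=linuxbsd target=editor   # Godot builds with SCons; a
#                                           # full editor build takes a while
```

Every one of those steps costs reviewer time whether or not the PR turns out to be worth it, which is the asymmetry the rest of this piece is about.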
So the maintainer complaint in this story is not simply “some code is ugly.” The complaint is that the expensive side of the exchange never disappeared:
- someone still has to inspect the diff,
- someone still has to decide whether the explanation is coherent,
- someone still has to test or reproduce the behavior,
- someone still owns the regression if the project merges bad work.
Generative tools cut the cost of producing a submission. They do not cut the cost of proving that the submission deserves trust.
That is why this story belongs in governance, not etiquette.
“Welcoming” turned out to be an intake design
The most revealing line in the secondary coverage was not about AI output quality. It was Verschelde’s explanation that Godot “prides itself in being welcoming to new contributors” and that maintainers spend substantial time helping them get PRs into mergeable shape.
That sentence matters because it reveals what “welcoming” was buying the project before this episode.
Welcoming was not only a tone. It was a bet that first-time contributors who showed up with rough work had still invested enough real effort that coaching them would compound:
- they could answer questions,
- they could revise the patch,
- they could learn the norms,
- they might become durable contributors.
Cheap synthetic generation scrambles that bet. A maintainer can no longer tell, quickly or cleanly, whether a rough PR came from a sincere newcomer who needs help or from someone using a model to spray plausible diffs into a public lane. The consequence is ugly whichever way the maintainer turns. Merge too easily and the project inherits regressions. Filter too aggressively and sincere newcomers get treated like adversaries.
That is why this is not well summarized as “AI makes spam.” The harder consequence is that it makes hospitality expensive.
The scale number matters only as context
One trap in stories like this is pretending a single number proves more than it can.
It does not help to claim that most Godot PRs are AI-generated. The evidence does not support that. What the evidence does support is a sense of scale, one at which extra low-signal volume becomes painful fast.
GameDeveloper reported 4,681 open pull requests in the godotengine/godot repository on February 17, 2026. The public GitHub PR list showed 4,770 open pull requests when accessed on March 18, 2026. Those counts do not prove that AI caused the entire backlog. They do show that Godot’s intake lane was already operating at a size where additional adversarial verification work is not a minor inconvenience. It compounds on top of an existing queue.
That distinction matters. This is a story about review economics under stress, not a story with a clean causal percentage.
The implied fixes are friction or capacity
Verschelde did not publish a neat reform package, and this piece should not pretend he did. But the options implied by the evidence all point in the same direction: if verification stays expensive, the project either has to make intake slower or make review capacity larger.
The visible menu looks like this:
- stronger evidence requirements in PR descriptions,
- harder disclosure expectations around AI assistance,
- pre-issue or pre-triage gates before code review starts,
- narrower first-time contributor lanes,
- more funded maintainer time for triage and review.
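The first two items on that menu can be mechanically cheap to enforce. The sketch below is a hypothetical CI-style check, not anything Godot has adopted: it rejects a pull request description that omits testing evidence or an AI-assistance disclosure. The function name and the crude keyword matching are assumptions for illustration only:

```shell
# Hypothetical intake gate: fail fast when a PR description lacks
# required evidence. A real gate would use a structured PR template
# rather than keyword matching; this only shows the shape of the idea.
check_pr_body() {
  body="$1"
  # Require some statement about testing ("Tested on...", "tests run", etc.)
  echo "$body" | grep -qi "test" || { echo "missing: how the change was tested"; return 1; }
  # Require an explicit AI-assistance disclosure either way
  echo "$body" | grep -qi "ai" || { echo "missing: AI-assistance disclosure"; return 1; }
  echo "ok"
}
```

The point of a gate like this is not detection, which it cannot do; it is moving the cost of the first claim back onto the submitter before a maintainer spends any time at all.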
Godot’s own funding page makes the capacity point concrete. It explicitly says donations support core development work including code reviews. That is not incidental bookkeeping. It is the operational answer to the economic asymmetry exposed by the Bluesky thread.
If you want the system-design version of this same problem, CoClaw’s reviewer lane piece argues that approvals only work when there is enough inspectable evidence around them. Godot’s episode shows the open-source version of that rule under public pressure.
What lingers after the thread
The strongest detail in this story is not “AI slop.” It is the moment a maintainer stops asking only whether a patch is correct and starts asking whether the person behind it can be trusted to stand behind it.
That is the after-image worth keeping.
Open ecosystems do not break first when a bad patch merges. They start breaking earlier, at intake, when a pull request no longer means “someone did the work” and instead means “a human now has to find out whether anyone did.”
Godot’s February 2026 flare-up is one public example of a broader rule:
when contribution stays cheap and verification stays human, welcoming stops being a posture and becomes a governance budget.
Sources and boundary
Primary grounding:
- The core timeline and pressure in this story come from the February 16, 2026 Bluesky posts by Adriaan de Jongh and Remi Verschelde.
- Verschelde’s role is grounded in Godot’s official Contact page, which lists him as Project Manager.
- Godot’s official pull request guidelines and testing pull requests documentation ground the claims about contributor obligations and reviewer labor.
Secondary framing:
- GameDeveloper and PC Gamer are used to anchor the public February 17, 2026 framing and to preserve dated context around backlog scale and the “welcoming” tension.
What this piece does not claim:
- It does not claim a measured share of Godot PRs that are AI-generated.
- It does not claim Godot has already adopted a specific new gate, restriction, or detection system.
- It does not claim that novice mistakes can always be separated cleanly from AI-assisted mistakes.
Confidence boundary: high confidence on the public quotes, dates, official contributor rules, official testing workflow, and dated PR backlog counts; moderate confidence on the broader governance interpretation that this episode exposes a reusable open-source intake problem; low confidence on any attempt to quantify authorship or predict Godot’s next policy move.