Replies: 64 comments 182 replies
-
Such a great initiative!
-
I know this is a pretty ambitious idea and not trivial to implement, but it would be really powerful to have an AI-detection mechanism with a configurable threshold at the repository or organization level. That way, teams could decide what percentage of AI-generated code is acceptable in pull requests. Another possible approach would be to define a set of rules or prompts and evaluate pull requests against them. PRs that don't meet those rules could be automatically flagged or potentially even closed.
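A minimal sketch of how such a policy could be evaluated, assuming a hypothetical repo-level setting and a stand-in `ai_score` detector (neither exists in GitHub today, and reliable AI detection is itself an open problem):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RepoPolicy:
    # Hypothetical repo/org-level knobs, not an existing GitHub setting.
    max_ai_fraction: float = 0.3      # acceptable share of AI-generated lines
    close_on_violation: bool = False  # flag by default, close only if opted in

def evaluate_pr(added_lines: list[str], policy: RepoPolicy,
                ai_score: Callable[[str], float]) -> str:
    """Return 'ok', 'flag', or 'close' based on the estimated AI share.

    `ai_score(line) -> float in [0, 1]` is a stand-in for whatever detector
    a team trusts; scores should be treated as advisory, not ground truth.
    """
    if not added_lines:
        return "ok"
    ai_lines = sum(1 for line in added_lines if ai_score(line) > 0.5)
    if ai_lines / len(added_lines) <= policy.max_ai_fraction:
        return "ok"
    return "close" if policy.close_on_violation else "flag"
```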
-
As of today, I would say that 1 out of 10 PRs created with AI is legitimate and meets the standards required to open that PR.

On 28 Jan 2026, at 18:41, Camilla Moraes wrote:

> > Another possible approach would be to define a set of rules or prompts and evaluate pull requests against them. PRs that don’t meet those rules could be automatically flagged or potentially even closed.
>
> This is definitely something we’re exploring. One idea is to leverage a repository’s CONTRIBUTING.md file as a source of truth for project guidelines and then validate PRs against any defined rules.
>
> Regarding AI-generated code, have you seen cases where the code is AI-generated but still high-quality and genuinely solves the problem? Or is it always just something you want to close out immediately? I’m curious because I’m wondering whether an AI-detection mechanism would rule out PRs where AI is used constructively; that’s where we’d want to test this thoroughly and understand what sensible thresholds look like.
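A rough sketch of the CONTRIBUTING.md idea from the quote above, assuming an LLM is asked to judge a PR description against the file's guidelines; the raw-content URL is real, but the prompt, model choice, and overall flow are illustrative assumptions, not a confirmed GitHub design:

```python
import requests
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

def check_pr_against_guidelines(owner: str, repo: str, pr_body: str) -> str:
    # Fetch the repo's CONTRIBUTING.md from the default branch.
    url = f"https://raw.githubusercontent.com/{owner}/{repo}/HEAD/CONTRIBUTING.md"
    guidelines = requests.get(url, timeout=10).text

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model would do
        messages=[
            {"role": "system",
             "content": "You check pull requests against project guidelines. "
                        "Reply with PASS or FAIL plus a one-line reason."},
            {"role": "user",
             "content": f"Guidelines:\n{guidelines}\n\nPR description:\n{pr_body}"},
        ],
    )
    return resp.choices[0].message.content
```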
-
Hey! I am from Azure Core Upstream, and many of our OSS maintainers primarily maintain repositories on GitHub. We held an internal session about Copilot, and the recurring theme was that maintainers feel caught between today's required review rigor (line-by-line understanding of anything shipped) and a future where agentic, AI-generated code makes that model increasingly unsustainable. Below are some key maintainer pain points:
-
An option to limit new contributors to one open PR would be nice. Just today I had to batch-close several AI-generated PRs that were all submitted around the same time. For this protection, defining "new contributor" perfectly is probably not possible, but anyone who has no interactions with a project prior to the last 48 hours seems like a good heuristic. The point is to catch such a user at submission time and limit the amount of maintainer attention they can take up. For a different type of problem, I'd like to be able to close PRs as "abandoned", similar to the issue close statuses. It's a clear UI signal to the contributor that their work isn't being rejected, but that I'm not going to finish it for them. Several of the low-quality contributions I have handled, dating back to before the Slop Era but getting worse, are simply incomplete and need follow-through.
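The 48-hour heuristic maps fairly directly onto GitHub's existing search API; a sketch of what a bot could run at submission time (the endpoints and qualifiers are real, the policy itself is the suggestion above):

```python
import requests
from datetime import datetime, timedelta, timezone

SEARCH = "https://api.github.com/search/issues"

def should_limit(owner: str, repo: str, user: str, token: str) -> bool:
    """New contributors (no interaction before the last 48h) get one open PR."""
    headers = {"Authorization": f"Bearer {token}"}
    # Oldest issue/PR the user is involved in (author, commenter, mention...).
    oldest = requests.get(SEARCH, headers=headers, params={
        "q": f"repo:{owner}/{repo} involves:{user}",
        "sort": "created", "order": "asc", "per_page": 1,
    }, timeout=10).json().get("items", [])
    cutoff = datetime.now(timezone.utc) - timedelta(hours=48)
    first_seen = (datetime.fromisoformat(oldest[0]["created_at"].replace("Z", "+00:00"))
                  if oldest else datetime.now(timezone.utc))
    if first_seen < cutoff:
        return False  # established contributor, no limit
    # New contributor: allow at most one open PR.
    open_prs = requests.get(SEARCH, headers=headers, params={
        "q": f"repo:{owner}/{repo} author:{user} type:pr state:open",
    }, timeout=10).json().get("total_count", 0)
    return open_prs >= 1
```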
-
For the long-term horizon: implement a reviewer LLM that does an initial scoring of PRs. Critique is far easier than creating a correct result, so that automated pre-moderation should give maintainers the edge they need to handle the volume. Depending on whether you just use rich prompting or fine-tuning, you can even start building an "oracle vox" for your project, which acts as a reasonably informed, reasonably on-point virtual representative for the project/organization.
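A minimal version of that pre-moderation pass, sketched with the OpenAI Python client; the score scale, prompt, and cutoff are invented for illustration, and rich prompting or fine-tuning would slot in at the same place:

```python
import json
from openai import OpenAI  # assumes OPENAI_API_KEY is set

def score_pr(pr_body: str, diff: str) -> dict:
    """Ask a model to critique a PR; returns {"score": 0-10, "reason": "..."}."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You are a strict code reviewer. Score this PR from 0 "
                        "(spam or slop) to 10 (ready to merge) and give one "
                        'reason, as JSON: {"score": int, "reason": str}.'},
            {"role": "user",
             "content": f"Description:\n{pr_body}\n\nDiff:\n{diff}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# e.g. anything scoring below 4 could be auto-labeled for a triage queue
# rather than pinging a maintainer directly.
```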
-
This is a very real problem, and I appreciate that it's being treated as systemic rather than blaming maintainers or contributors individually. One concern I have with repo-level PR restrictions is that they may disproportionately impact first-time contributors who do want to engage meaningfully but don't yet have collaborator status. Personally, I think the most promising direction here is criteria-based PR gating rather than blanket restrictions: things like required checklist completion, passing CI, linked issues, or acknowledgement of contribution guidelines before a PR can be opened. On AI usage specifically, transparency feels more scalable than prohibition. Clear disclosure combined with automated guideline checks could help maintainers focus on high-intent contributions without discouraging responsible AI-assisted workflows. Looking forward to seeing how these ideas evolve, especially solutions that preserve openness while respecting maintainer time.
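The checklist-style gates are cheap to check mechanically; a sketch of what a CI job could run against the PR body (the specific criteria strings are examples, not a proposed GitHub feature):

```python
import re

def gate_pr(pr_body: str) -> list[str]:
    """Return unmet criteria; an empty list means the PR may be opened."""
    problems = []
    # Every markdown checkbox in the PR template must be ticked.
    if re.search(r"- \[ \]", pr_body):
        problems.append("incomplete checklist")
    # Require at least one linked issue, e.g. "Fixes #123".
    if not re.search(r"\b(fixes|closes|resolves) #\d+", pr_body, re.IGNORECASE):
        problems.append("no linked issue")
    # Require explicit acknowledgement of the contribution guidelines.
    if "I have read the contribution guidelines" not in pr_body:
        problems.append("guidelines not acknowledged")
    return problems
```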
-
Thinking along the lines of the discussion-first approach that Ghostty uses, I think one way to create just enough friction would be to have an opt-in where a PR has to be linked to an open issue or discussion topic. So when an unprivileged user (i.e. one without elevated privileges on the repo) tries to create a PR, there's a required field that takes an issue/discussion number. If that's not provided (or the corresponding issue/discussion is closed), then the PR can't be created. This could be trivially worked around by throwing in any old issue/discussion (or by creating one), but it may cause just enough friction to help. To guard against this, perhaps maintainers could set a "minimum age" for the issue/discussion (e.g. 12 hours) to prevent creating fake issues to support a spammy PR.
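Assuming the issue number has already been pulled from the required field, the minimum-age check is a single call to the real issues endpoint; the enforcement point (blocking PR creation) would need platform support:

```python
import requests
from datetime import datetime, timedelta, timezone

def valid_anchor_issue(owner: str, repo: str, number: int,
                       token: str, min_age_hours: int = 12) -> bool:
    """Linked issue must exist, be open, and be older than min_age_hours."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{number}",
        headers={"Authorization": f"Bearer {token}"}, timeout=10)
    if r.status_code != 200:
        return False  # issue does not exist (or is inaccessible)
    issue = r.json()
    created = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
    old_enough = created < datetime.now(timezone.utc) - timedelta(hours=min_age_hours)
    return issue["state"] == "open" and old_enough
```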
-
Maybe something like this could be a starting point: https://github.com/mitchellh/vouch

See also the announcement here: https://x.com/mitchellh/status/2020252149117313349
-
We're seeing more repos auto-closing PRs lately, not as a stance against contribution, but as a way to survive review overload. What keeps coming up, both in OSS and internal platform teams, is that the problem isn't low-quality code but low-context change. AI has made it trivial to generate syntactically correct diffs, but the signals maintainers rely on to reason about impact haven't scaled with that volume. In high-traffic repositories, that shows up as:

At that point, review stops being about correctness and becomes a context-reconstruction exercise, which is where the real cost explodes. What we're experimenting with in Watchflow is intentionally defensive, not prescriptive:

The goal isn't to auto-reject contributions or "out-AI" AI-generated noise. It's to reduce the number of low-context PRs that ever land in a maintainer's inbox, and to make the remaining ones cheaper to reason about before human attention is burned. There's a preview setup at https://watchflow.dev where you can try rules in analysis mode before enforcing anything. For anyone curious, I wrote up the reasoning in more detail here: https://medium.com/@dimitris.kargatzis/the-ai-flood-is-breaking-oss-maintainers-are-hitting-the-limit-30c41247db5a
-
Before a PR is even created, require:
- Linked issue
- Checklist completion
- Affected areas
- Test evidence
Form-based PR creation with required fields: if any field is missing, PR creation is blocked.

First-time contributor PRs:
- Auto-labeled needs-triage
- Auto-assigned to a bot queue
- No maintainer ping
- Human review only after automation passes

Run a fast bot pipeline:
- Lint
- Tests
- Docs build
- License header check
- Commit message format
If any step fails, the PR is auto-closed with an explanation. Not commented. Closed.

Score contributors by:
- Accepted PRs
- Reverted PRs
- Time-to-fix after review feedback
Thresholds (see the sketch after this list):
- Tier 0: PRs require passing all gates
- Tier 1: skip basic checks
- Tier 2: direct PR + reviewer assignment
This removes blanket collaborator-only models.

Checkbox: "I used AI to generate part of this PR". If checked:
- Require an explanation of what was generated
- Require manual verification notes
If undeclared AI usage is discovered later, the contributor gets a cooldown.

Accounts that abandon PRs or spam low-quality patches get:
- A temporary PR block
- Increasing backoff windows
Automated, with no maintainer involvement.

Expose:
- PRs auto-closed by automation
- PRs reaching human review
- Median review time
Optimize the pipeline based on data, not anecdotes.

Bottom line: quality problems are systems problems, not people problems.
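A sketch of the tiering step, with invented weights and cutoffs purely to make the shape concrete:

```python
def contributor_tier(accepted: int, reverted: int, avg_hours_to_fix: float) -> int:
    """Map a contributor's track record to a trust tier (0 = fully gated).

    The weights and thresholds below are illustrative assumptions only.
    """
    score = accepted * 2 - reverted * 5
    if avg_hours_to_fix < 24:
        score += 3  # responds quickly to review feedback
    if score >= 20:
        return 2    # direct PR + reviewer assignment
    if score >= 8:
        return 1    # skip basic checks
    return 0        # all gates apply
```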
-
I have skimmed through all the top-level comments in here and haven't seen this suggested yet; sorry if I missed it somewhere. In addition to requiring a linked issue before creating a PR, there could be a special label that marks issues as "ready for PR". This way the maintainers could effectively throttle the amount of incoming PRs by the number of open issues marked with said label.
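This would compose naturally with a linked-issue requirement; a sketch, assuming the issue number is already known and a hypothetical ready-for-pr label name:

```python
import requests

def issue_is_ready_for_pr(owner: str, repo: str, number: int, token: str) -> bool:
    """Allow the PR only if the linked issue carries the maintainer-set label."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{number}",
        headers={"Authorization": f"Bearer {token}"}, timeout=10)
    if r.status_code != 200:
        return False
    labels = {label["name"] for label in r.json().get("labels", [])}
    return "ready-for-pr" in labels  # hypothetical label name
```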
-
Great discussion! Subscribing and will watch this space. Some sort of reputation-like scheme is probably needed here.

Another thing which is most likely needed also:

My 2 cents / kopecks. PS: or we can just clone Linus and let him deal with AI bots his way - https://lore.kernel.org/lkml/CAHk-=wgnRQiKqWVrO_uF1btYM2K8r8xL95RGdKU3QLe8B58nrw@mail.gmail.com/ - though we may then need an AI therapist to heal their hurt egos and PTSD :)
-
Generally, I would love to see new contributors open an issue first and get assigned to work on something by the maintainer(s). There are cases where fly-by contributions can be immensely valuable, and often the work is best done in multiple small PRs; issues would serve as the gate, whereas encouraging people to slam one big PR instead of several small ones will backfire. The most problematic slop contributors drop multi-thousand-line PRs into your repos, blowing everything up in a single change with little thought, so I would use gating like getting assigned in the issue to pass the threshold of being a known contributor, or at least saying "hello" to the maintainers, collaborators, and other contributors. Requiring an issue first also fosters a culture where people negotiate a bit before getting things done.
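The assignment gate described here is also checkable with the existing API; a sketch (requiring assignment before a PR is the commenter's proposal, not current GitHub behavior):

```python
import requests

def author_assigned_to_issue(owner: str, repo: str, number: int,
                             pr_author: str, token: str) -> bool:
    """Pass the gate only if the PR author is assigned to the linked issue."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{number}",
        headers={"Authorization": f"Bearer {token}"}, timeout=10)
    if r.status_code != 200:
        return False
    assignees = {a["login"] for a in r.json().get("assignees", [])}
    return pr_author in assignees
```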
-
What a great discussion; I have heard and read a lot of good ideas already. One thing I am missing in all of this is what plans there are in regards to "Building community, not just walls," as outlined in the blog post. The features I see are for building walls, which we as maintainers unfortunately need; at the same time, it would be great to get more things that build communities:
-
Why not give the choice to the users?
-
Hey everyone, thanks for all the feedback you've been providing here. We're reviewing it carefully and really appreciate all your questions and ideas. I wanted to share an update: we just released two new repository settings to limit pull requests to repo collaborators or disable them entirely. You can check out the changelog or community discussion for more details. As always, feel free to drop a comment if you have any questions about how these settings work or feedback on how they impact your projects and community.
-
I would like to have optional but configurable rules for PRs, such as:
-
Hey everyone,
I wanted to provide an update on a critical issue affecting the open source community: the increasing volume of low-quality contributions that is creating significant operational challenges for maintainers.
We’ve been hearing from you that you’re dedicating substantial time to reviewing contributions that do not meet project quality standards for a number of reasons: they fail to follow project guidelines, are frequently abandoned shortly after submission, and are often AI-generated. As AI continues to reshape software development workflows and the nature of open source collaboration, I want you to know that we are actively investigating this problem and developing both immediate and longer-term strategic solutions.
What we're exploring
We’ve spent time reviewing feedback from community members, working directly with maintainers to explore various solutions, and looking through open source repositories to understand the nature of these contributions. Below is an overview of the solutions we’re currently evaluating.
Short-term solutions:
Long-term direction:
As AI adoption accelerates, we recognize the need to proactively address how it can potentially transform both contributor and maintainer workflows. We are exploring:
Next Steps
These are some starting points, and we’re continuing to explore both immediate improvements and long-term solutions. Please share your feedback, questions, or concerns in this thread. Your input is crucial to making sure we’re building the right things and tackling this challenge effectively. As always, thank you for being part of this conversation. Looking forward to hearing your thoughts and working together to address this problem.