33 commits
772edeb
Add an LLM policy for `rust-lang/rust`
jyn514 Apr 17, 2026
815da6e
address some of jieyouxu's comments
jyn514 Apr 17, 2026
17a35f4
revert extraneous change
jyn514 Apr 17, 2026
61e5e2c
address some more review comments
jyn514 Apr 18, 2026
8ee5ed4
more review comments
jyn514 Apr 18, 2026
7cd8c17
more wording
jyn514 Apr 18, 2026
2db7465
rewrite "trivial changes" section
jyn514 Apr 21, 2026
9b2b3c2
rewrite intro to 'Allowed with caveats'
jyn514 Apr 21, 2026
e3b1394
Be more specific in "Moderation policy"
jyn514 Apr 21, 2026
e3f2aec
Add explicit conditions for modification or removal
jyn514 Apr 21, 2026
864428f
mention that the policy is intentionally conservative
jyn514 Apr 21, 2026
593d538
extend "Penalties" section with sentencing guidelines
jyn514 Apr 21, 2026
75050a2
be more clear where the CoC is invoked
jyn514 Apr 23, 2026
b6a8662
minor edits; add "Motivation and guiding principles" section
jyn514 Apr 28, 2026
9a944f7
Relax and clarify moderation guidelines
jyn514 Apr 28, 2026
14956c3
Carve out a space for experimentation
jyn514 Apr 28, 2026
8fe7281
fix typo
jyn514 Apr 28, 2026
791e46f
recommend adversarial review from another LLM
jyn514 Apr 28, 2026
8520038
markdown formatting
jyn514 Apr 28, 2026
b14e8ca
more markdown formatting
jyn514 Apr 28, 2026
69b6dc1
Note that explicitly marking LLM content is ok
jyn514 May 13, 2026
d682475
Exempt t-security-response from a few requirements
jyn514 May 13, 2026
ea4e504
Make "solicited" even stricter
jyn514 May 13, 2026
7eeecbb
Carve out a space for experimentation
jyn514 May 13, 2026
cd9aecd
Revert "Exempt t-security-response from a few requirements"
jyn514 May 13, 2026
ee4f26c
address a few of TC's concerns
jyn514 May 14, 2026
7956574
address a few of Jack's concerns
jyn514 May 14, 2026
b88855a
remove the 'additional scrutiny' examples
jyn514 May 14, 2026
4305e14
move 'using an llm to discover bugs' to the caveats section, without …
jyn514 May 14, 2026
ab6f8a4
move LLM-authored code to its own section; add a zulip stream as policy
jyn514 May 14, 2026
d9d8238
add a section about staying on-topic
jyn514 May 14, 2026
83b9363
wording
jyn514 May 14, 2026
9efffad
relax moderation policy guidelines
jyn514 May 14, 2026
1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -87,6 +87,7 @@
- [Project groups](./governance/project-groups.md)
- [Policies](./policies/index.md)
- [Crate ownership policy](./policies/crate-ownership.md)
- [LLM usage policy](./policies/llm-usage.md)
- [Infrastructure](./infra/index.md)
- [Other Installation Methods](./infra/other-installation-methods.md)
- [Archive of Rust Stable Standalone Installers](./infra/archive-stable-version-installers.md)
2 changes: 2 additions & 0 deletions src/how-to-start-contributing.md
@@ -129,6 +129,8 @@ To achieve this goal, we want to build trust and respect of each other's time an
- Please respect the reviewers' time: allow some days between reviews, only ask for reviews when your code compiles and tests pass, or give an explanation for why you are asking for a review at that stage (you can keep them in draft state until they're ready for review)
- Try to keep comments concise; don't worry about perfect written communication. Strive for clarity and keeping to the point

See also our [LLM usage policy](./policies/llm-usage.md).

[^1]: Free-Open Source Project, see: https://en.wikipedia.org/wiki/Free_and_open-source_software

### Different kinds of contributions
195 changes: 195 additions & 0 deletions src/policies/llm-usage.md
@@ -0,0 +1,195 @@
## LLM Usage Policy

For additional information about the policy itself, see [the appendix](#appendix).

### Overview

Using LLMs while working on `rust-lang/rust` is conditionally allowed, when done with care.
LLMs are not a substitute for thought,
and we do not allow them to be used in ways that risk losing our shared social and technical understanding of the project,
nor in ways that hurt our goals of creating a strong community.

The policy's guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.

> We carve out a space for "experimentation" to inform future revisions to this policy.

### Rules
#### Legend
- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.
- ℹ️ Adds additional detail to the policy. These bullets are normative.

#### ✅ Allowed
The following are allowed.
- Asking an LLM questions about an existing codebase.
- Asking an LLM to summarize comments on an issue, PR, or RFC.
Contributor @traviscross, May 14, 2026:

Suggested change
- Asking an LLM to summarize comments on an issue, PR, or RFC.
- Asking an LLM to summarize comments on an issue or PR.

This is a policy for r-l/r. Speaking about RFCs may confuse the reader as to the scope of this policy.

- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
Contributor @traviscross, May 14, 2026:

Suggested change
- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
- ℹ️ This does not allow reposting the summary publicly on `rust-lang/rust`.

As written, this could read as a prohibition on posting the summary publicly anywhere. That would be an overreach and is not what I think is meant.

Comment:

I think the clarification is fair, but I don't think that removing the point about personal use is fair. The point is to clarify that it's for personal use, and posting it publicly outside the project still counts as that.

Contributor:

This is a policy for r-l/r, so I'd consider it out of bounds for this to prohibit, e.g., a member of another team using an LLM to construct a summary and then posting that to a Project team space outside of r-l/r for team use. It would stretch the meaning of personal use too far to consider that personal use.

Comment:

What I meant is:

This does not allow reposting the summary publicly on rust-lang/rust. This only includes your own personal use.

is fine, because posting it publicly outside of rust-lang/rust counts as personal use under this policy, and is therefore fine.

- Asking an LLM to privately review your code or writing.
- ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM, as long as you don't try to merge them into `rust-lang/rust`.
- Using an LLM to generate possible solutions to an issue, learning from them, and then writing something from scratch in your own style.
- Syncing code and documentation into `rust-lang/rust` (e.g., using submodules, subtrees, [josh](https://github.com/josh-project/josh), etc.) from other repositories that do not follow this policy.
- Using an LLM in the creation of experimental code changes that are not meant to be reviewed and will never be merged but must live as draft PRs on `rust-lang/rust` for tooling reasons, such as to run crater or perf.

Contributor @traviscross, May 14, 2026:

Suggested change
- Using an LLM in the creation of experimental code changes that are not meant to be reviewed and will never be merged but must live as draft PRs on `rust-lang/rust` for tooling reasons, such as to run crater or perf.

I don't believe the policy means to prohibit this, and I believe it'd be better to say this explicitly.

Member Author:

Nice catch, added.

Comment:

I mean, considering how easily we could generate code to run via crater or perf, how expensive this is, and how sensitive it can be to minute changes, I would rather not allow any code to be run in those circumstances that wouldn't be valid to merge normally.

And to be clear, I mean code here. Most of the time, the issue is just that the code needs to be made more generic or documented, not that the code will be substantially different from the end result. If you need to rewrite the entire thing after a perf run, I would say it wasn't a useful perf run, and I only think that should happen due to misconceptions that were found by perf, not just inherently due to the authorship restriction.

Member:

Well, it's not like a random user is able to invoke crater and perf; a trusted member must approve first.

PRs not intended for merging can legitimately be used for running crater to gauge the effect of a change or to gather statistics from the ecosystem, e.g. rust-lang/rust#129604, rust-lang/rust#137044, rust-lang/rust#154887. Also, this suggested change is not restricted to crater/perf either; e.g. one could use draft PRs for testing CI (rust-lang/rust#154969) or bot interactions (rust-lang/rust#155221). So having a "must be valid to merge normally" restriction makes no sense.

Contributor:

Yes. My motivating use cases are edition testing and lang research, neither of which produce PRs that will ever be merged.

Contributor @traviscross, May 14, 2026:

Suggested change
- Syncing code and documentation into `rust-lang/rust` (e.g., using submodules, subtrees, [josh](https://github.com/josh-project/josh), etc.) from other repositories that do not follow this policy.

It's my understanding this is meant to be allowed, but I don't otherwise see it.

Member Author:

Added.

Comment:

I would say that this is okay since the goal is explicitly not to extend this policy to those repositories, although I would hesitate to just make this a single footnote since there are a few nuances, like needing to resolve merge conflicts. Those specific changes required to merge the subtree should still follow this policy, even if the original repo does not.

Contributor @traviscross, May 14, 2026:

Suggested change
- Using an LLM to generate solutions to an issue, learning from them, and then writing a solution from scratch yourself and in your own style.

I'd suggest moving this one to the allowed category (and revising it to mention generating example solutions in the plural, as that's better guidance for what someone should really do).

This is more similar to the other allowed items than to the with caveats items. As with using an LLM to review one's own code, this is a private use. We're requiring that the person write the solution that others will see from scratch. I.e., I read that as asking for independent creation — prohibiting copying in any form (in fact, it might be a good idea to make the language stronger about this, to improve clarity; maybe add "(no copying)" after "from scratch").

Nobody other than the author therefore will ever see these educational materials. Months could pass between the author looking at these examples and writing a solution. We don't demand to know the books or papers the author might have read that contained example code for similar problems.

Demanding disclosure here is a reach into a private space, and so is an overreach.

Member Author:

I agree. I've moved this to the "Allowed" section.

Comment:

This just feels like you failed to take in the rest of the policy and decided to remove a large part of it because you disagree with it. The entire policy effectively clarifies what it considers valid "rewording" of LLM output and what isn't, and so adding a tiny bullet point at the beginning that overrides that and says it's all allowed with no caveats just undermines all that.

Contributor:

There are no valid rewordings under this policy. As the policy says:

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)". No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.

It then fully bans all contributions that were "originally authored" by LLMs other than those by approved bots, those under the recently added experimental process, machine translation, and trivial code changes. I.e., the other things under the Allowed with caveats section.

The odd one out is this one. The others represent things "originally authored" by an LLM, as the policy defines it. This one is not that. The policy requires that a solution be rewritten from scratch — i.e., it cannot be "originally authored" by an LLM.

Comment:

So, I guess I wasn't 100% correct on the reasoning here, but the effect is the same: you appear to be failing to take in the policy and deciding to remove a decent portion because you disagree with it.

The point here is that the policy is intentionally conservative when it comes to LLM usage: anything where the LLM could have potentially influenced the output in ways you didn't directly control is included. This is one of those obvious cases. It's like copying a friend's homework; even if you just read a friend's paper and then rewrote your own, it's still fundamentally different from writing your own paper.

And similarly, examples like this exist in code too. For example, there have been multiple cases where code has been leaked and reverse engineers have explicitly refused to read it for legal reasons; simply knowing what was done taints the idea of a "clean-room" implementation. This is an obvious example of that. You can't say that this is "too private;" if we decide that LLMs are our business and you've decided to contribute, you should let us know, just like how you should let us know if you copy-paste code verbatim from another project. Your alternative is to not contribute code, not to hide its source.

Contributor @traviscross, May 14, 2026:

The point here is that the policy is intentionally conservative when it comes to LLM usage: anything where the LLM could have potentially influenced the output in ways you didn't directly control is included.

Included in what the policy allows, without disclosure, are:

  • Asking an LLM questions about an existing codebase.
  • Asking an LLM to summarize comments on an issue, PR, or RFC.
  • Asking an LLM to privately review your code or writing.

Any of these can influence your output — i.e., what you'll send to a reviewer. The LLM can give you broken answers about the workings of the code base, can give you a backward description of what was discussed in the RFC, and in review, can push you away from correct answers and toward wrong ones. I.e., the policy allows you to learn incorrect things from an LLM, without disclosure, that may result in you submitting a very confused PR.

The suggested text says:

  • Using an LLM to generate solutions to an issue, learning from them, and then writing a solution from scratch yourself and in your own style.

This, too, is simply an opportunity to learn something from the LLM. You may learn good things, you may learn bad things. That's up to you. You still have to write your own solution.

Learning from things and writing one's own solution is not copying. If it were, then there are many other kinds of learning we would need to prohibit (not just disclose) far more urgently.

Comment:

The issue isn't that it's copying, though, or whether you learned. The issue is whether an LLM was involved at all. It's a specific case where many people underestimate the effect the tools have on the end result, and so, we ask for disclosure in general to avoid having to litigate the very specific circumstances of what happened to determine whether they need to disclose or not.

It feels like the main issue, which you're explicitly not pointing out, is that you're concerned that people would be judged based upon whether an LLM was involved at all, and therefore, LLM usage should be kept as a "dirty pleasure" in this particular instance. I don't think that it's worth diluting the policy or creating confusion just because people are specifically unwilling to admit they used a particular tool.

Contributor:

My concern here is that it's inconsistent and an overreach. This would be more consistently placed with the learning rules rather than the original creation rules. It's an overreach in the same way that it'd be an overreach to require disclosure for the other learning rules.

The concerns about disclosure eroding trust in the Project are separate, and I've articulated them elsewhere.

Comment:

You seem to be intentionally ignoring all of the arguments where LLMs are a very specific case that cannot be generalised. Sure, in a perfectly frictionless vacuum, we would not be asking for disclosure in this particular case. But for a number of different reasons, we're asking this.

Let me come up with an equally hypothetical and unrealistic scenario: imagine if, five years ago, a massive campaign had been underway to fill StackOverflow with subtly wrong information to sabotage developers, in addition to all the correct information. It would be reasonable to ask all developers to disclose whether they had used information from StackOverflow to develop a change in these very specific circumstances. This isn't "overreach"; it's a pragmatic desire to correct for potential issues, and simply disclosing that something was involved is not a massive privacy concern.

Of course, I know exactly why people would be uncomfortable disclosing in this scenario, and it's because they don't want to be judged for making a potentially unethical decision by using these tools, although since that line of thought is banned from discussion in this RFC, I will both refrain from making it a part of my argument and insist that you refrain from making it a part of yours.

Simply put, I think that if people wish to not disclose because of its relation to (forbidden topic), then that is an argument that is not suitable for this policy. If you disagree, then you can take a look at my RFC, which explicitly reduces LLM usage even further because of the presence of that argument.

#### ❌ Banned
The following are banned.
- Comments from a personal user account that are originally authored by an LLM.
- ℹ️ This also applies to issue bodies and PR descriptions.
Comment on lines +37 to +40
Contributor @traviscross, May 14, 2026:

Most of the work lang members do, as a team, is reviewing language proposals made to us in rust-lang/rust issue bodies, PR descriptions, and comments. Here's a random recent one, by way of example:

We tend to care about policies for how lang-related issue and PR descriptions are put forward, e.g., the stabilization report template. Though we are not ourselves large contributors of code to r-l/r, we are in the set of maintainers of r-l/r — at least, that's how I see it.

So it surprised me a bit, given that this document sets policy for what's allowed for people making lang proposals to us in r-l/r (e.g., by prohibiting LLM-assisted drafting), that we weren't included on this FCP, though it had been earlier discussed.

I don't know what to do about that. I don't really want to ask here for the hassle of the FCP being restarted. And yet, given what the policy covers, it seems awkward to me that we're not on it — as though I'm commenting here with needs and interests as an outsider. That's how it feels, anyway.

Is there anything we can do about this? Maybe the scope can be narrowed so that it doesn't set policy for lang proposals or documentation items lang owns? Maybe something else? I don't know. @jyn514, what do you think?[^1]

[^1]: If you don't mind, I really do want to hear specifically from @jyn514 on this (no hurry).

Member Author @jyn514, May 14, 2026:

I'm happy to narrow the scope so this excludes lang proposals and stabilization reports, so that t-lang can set their own policy. Are there other things in t-lang's purview you would like to see excluded?

- ℹ️ This does not apply if the LLM content is clearly quoted and marked; you can post that.
However, the content of the comment must stand on its own even without the LLM content; it's not a substitute for your own words.
- ℹ️ See also "machine-translation" in ⚠️ below.
- Documentation that is originally authored by an LLM.
Contributor @traviscross, May 14, 2026:

Suggested change
- Documentation that is originally authored by an LLM.
- Documentation that is originally generated by an LLM.

Without changing the semantics, could we search and replace all uses of "authored by an LLM" with "generated by an LLM"? As @xtqqczze has pointed out elsewhere:

Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn’t an author.

Pulling the idea of authorship into this policy seems unnecessarily philosophical for what it's trying to accomplish. The technology is called "generative AI". It'd seem more clear, to me, to stick with "generated".

Comment:

jyn specifically defined authorship as a term here to be precise, so you can't just change one section and stay consistent. It would require at least structural changes to the policy.

Contributor @traviscross, May 14, 2026:

As best I can tell from a careful review, it requires a search-and-replace and then some minor redrafting of the The meaning of "originally authored" section. That section could be redrafted, e.g., as:

This document uses phrases such as "generated by an LLM" to mean "text that was generated by an LLM (and then possibly edited by a human)". No amount of editing can change how the text was originally created; how it was generated originally sets the initial style and it is very hard to change once it's set.

(Of course, there are many ways to redraft it. This is just one way.)

Comment:

Right, I think it would be fair to change "originally authored" to "originally generated"; it's just that it would have to be a structural change affecting multiple places and not just the one line. For what it's worth, I think that the distinction is pedantic, but I will defer to jyn on what the better wording is. I assume that "authored" was chosen intentionally.

- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
Member @joshtriplett, Apr 17, 2026:

Suggested change
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
- ℹ️ This includes *any* doc comments, or non-trivial source comments.

Reordering this to make it clear first and foremost that "Documentation" includes any doc comments, moving "non-trivial source comments" second. This also drops the quantitative "multiple paragraphs"; some multi-paragraph comments may be trivial, and some one-sentence comments may not be.

Member Author:

If you are using an LLM to write a multi-paragraph comment that is trivial, IMO that should also be banned. If you have a load-bearing single-line comment, I think that falls under "code changes authored by an LLM", although I'm not sure how to say that concisely.

- ℹ️ This includes compiler diagnostics.
LLMs are conditionally allowed to assist with the *logic* surrounding a diagnostic (see "code changes" under ⚠️ below),
but they must not be used to author the message itself.
- Treating an LLM review as a sufficient condition to merge or reject a change.
LLM reviews, if enabled by a team, **must** be advisory-only.
Contributor:

It's not clear to me what this "team" language refers to here, as this is the rust-lang/rust policy.

Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
Comment on lines +50 to +51
Member:

Given that this is limited to rust-lang/rust, probably better to just restrict to no LLM reviews.

Member Author:

I actually really want to keep allowing LLM reviews. I think they're low-risk and give people a chance to see whether the bot catches real issues.

but they may not have a policy that an LLM review substitutes for a human review.
- ℹ️ See "review bots" in ⚠️ below.
- ℹ️ An LLM review does not substitute for self-review. Authors are expected to review their own code before posting and after each change.

#### ⚠️ Allowed with caveats
The following are decided on a case-by-case basis.
In general, new contributors will be scrutinized more heavily than existing contributors,
since they haven't yet established trust with their reviewers.

- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- "Trivial" code changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html).
- ℹ️ Be cautious about PRs that consist solely of trivial changes.
See also [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
- Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
- ℹ️ This also includes reviewers who use LLMs to discover flaws in unmerged code.
- Using an LLM as a "review bot" for PRs.
Member @kennytm, Apr 19, 2026:

Maybe I'm OOTL but I find this section situationally strange — where did the "review bot" come from?

IME AI-powered review bots that directly participate in PR discussions (esp the "app" ones) are configured by the repository owner, but AFAIK r-l/r (which this policy applies solely to) did not have any such bots. I highly doubt a contributor will bring in their own review bot in public. So practically this has to be either

  • someone requested a review from Copilot, which maybe we can opt out of?
  • the reviewer outsourced the review work to a coding agent, which is already covered in the sections above
  • at least one team actually considered enabling such review bots in the future? as this is linked previously in that "Teams can have a policy that code can be merged without review" part, but I don't think this will ever happen given the stance of this policy

Member:

I highly doubt a contributor will bring in their own review bot in public.

I wish it worked like that :( People can just trigger GitHub copilot, or I suppose any other review bot, and let it comment on a r-l/r PR. Some people don't even do it willingly, but GH does it automatically for them, as GH copilot has a tendency to re-enable itself even after you disable it.

It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

@xtqqczze, Apr 19, 2026:

I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Member:

It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

Yeah, currently disabling review is a personal/license-owner setting; it is not possible to configure from the repository PoV 😞 but I think this is something that we may bring up with GitHub.

It may be possible to use content exclusion to blind Copilot, but I'm not sure if this hack is going to produce any overreaching effects (e.g. affecting private IDE usage too).

Contributor @apiraino, Apr 20, 2026:

someone requested a review from Copilot, which maybe we can opt out of?

I think this is exactly why we point that out in our policy. Some people trigger a "[at]copilot review" in our repos without asking us for consent. This is rude behaviour and we don't want that.

And, yes, as you point out, opting out of this "trigger" is currently only a project-wide setting, not a repository-level one, so we are checking with GitHub whether they could make this setting more fine-grained (see the discussion with the Infra team here on Zulip)

Member Author:

@clarfonthey I understand you are frustrated but it doesn't help to take it out on the people we're working with. Can I ask you to take a break from commenting on this RFC for a bit? Feel free to DM me with any concerns you have about the policy itself.

Comment:

yeah, you're right; I deleted the comment

Comment:

I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Unsolicited review bots are becoming an increasing problem; for example: https://web.archive.org/web/20260426133344/https://github.com/rust-lang/rust-clippy/issues/16893#issuecomment-4321880160

Comment:

Thank you for flagging, xtqqczze - the same bot has commented in 6+ issues on the rust-clippy repo, and in my case it was giving unsolicited advice in a completely derailing direction (solving a specific case I obviously already worked around, rather than the general case; rust-lang/rust-clippy#16901 (comment))

Member:

@xtqqczze both rust-lang/rust-clippy#16893 and rust-lang/rust-clippy#16901 are issues, not PRs, and that @QEEK-AI account commented spontaneously without any summoning. So I don't think these instances fall under this "Review Bot" rule (which is still "⚠️ Allowed with caveats"). At the very least these are "Comments […] authored by an LLM", which is "❌ Banned", and they are also outright "spam" that the current CoC can already handle.

- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM.
You **must not** post (or allow a tool to post) LLM reviews verbatim on your personal account unless clearly quoted with your own personal interpretation of the bot's analysis.
- ℹ️ Review bot accounts must be blockable by individual users via the standard GitHub user-blocking mechanism. (Note that some GitHub "app" accounts post comments that look like users but cannot be blocked.)
- ℹ️ Review bots that post without being approved by a maintainer will be banned.
Comment:

I'm concerned this leaves room for reviewers to trigger a review bot without consent of the author of the PR, which could alienate the PR author. If I opened a PR and it got reviewed by an LLM bot, I would probably close the PR and never try contributing to the project again. I've seen this happen in another project. I think there should be an agreement between the reviewer and PR author before triggering a review bot.

Member:

"approved by a maintainer" is the key point here, if an LLM review bot is "approved by a maintainer" it means such is a public decision and should be mentioned in CONTRIBUTING.md, and that's the agreement.

Comment:

An agreement among maintainers to impose LLM review bots on nonconsenting contributors would drive those contributors away.

Comment:

If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

Member @kennytm, May 3, 2026:

Rephrasing LLM output is already addressed in lines 67-68.

The premise of this whole section is that somehow a bot (as a separate account, line 69) can be officially "⚠️ Allowed with caveats" (line 57) for reviewing.

If you think that a review bot account should not be allowed, even if approved by maintainers, this whole thread would be more relevant on the parent item (line 66; I've commented about this before).

P.S. I don't think this policy implies any LLM review bot account will be allowed "right now" or "soon"; I believe there must at least be an FCP.

Comment:

If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

Thinking about this further, this seems like an overall better process than having a review bot comment on a PR. There's no room for ambiguity about whether a PR author is responsible for responding to LLM output; only the reviewer who decides to use an LLM is in a position to interpret the LLM output because "Comments from a personal user account that are originally authored by an LLM" are explicitly forbidden.

- ℹ️ If a more reliable tool, such as a linter or formatter, already exists for the language you're writing, we strongly suggest using that tool instead of or in addition to the LLM.
- ℹ️ Configure LLM review tools to reduce false positives and excessive focus on trivialities, as these are common, exhausting failure modes.
- ℹ️ LLM comments **must not** be blocking; reviewers must indicate which comments they want addressed. It's ok to require a *response* to each comment but the response can be "the bot's wrong here".
Comment:

I don't think it's okay to require PR authors to have to say "the bot's wrong here"; the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

Member:

the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

I don't see how line 73 disagrees with this. The statement "It's ok to require a response" refers to the reviewer requiring a response from the author to address the bot comment, not from the bot itself. The previous statement "reviewers must indicate which comments they want addressed." also suggested that the reviewer has taken on the 'onus' of the bot comment. In this scenario I don't find that requiring the PR author to say "the bot's wrong here" to dismiss the comment is unfair to the author; in fact, having that 2nd step "reviewers must indicate which comments they want addressed" means the PR author is in fact rejecting the combined analysis of the bot and the reviewer, so I'd say this is more biased against reviewers.

Comment:

The current wording is a bit ambiguous and could conceivably be interpreted to mean that "it's okay to require a response" implicitly. I would like to see this clarified to say explicitly that a bot's comment only needs to be responded to if a reviewer explicitly indicates that.

- In other words, reviewers must explicitly endorse an LLM comment before blocking a PR. They are responsible for their own analysis of the LLM's comment and cannot treat it as a CI failure.
- ℹ️ This does not apply to private use of an LLM for reviews; see ✅ above.

All of these **must** disclose that an LLM was used.

#### Experiment: LLM-authored code changes
Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.
1. "Solicited" means that a reviewer has communicated *ahead of time* that they are willing to review an LLM-authored PR.
- ℹ️ New contributors cannot use an LLM unless they first talk with a reviewer.
This must be the *same* reviewer who will be assigned to the PR.
2. "Non-critical" means that it is extremely unlikely for the PR to cause a [soundness](https://jacko.io/safety_and_soundness.html) regression.
- ℹ️ Examples:
- Changes to internal tooling like `tidy`, `x setup`, and `linkchecker` are probably ok.
- Changes that have a strong soundness impact, like the trait system, MIR building, or the query system are probably not ok.
3. "High-quality" means that it is held to at least the same standard as other code changes.
Everyone reads code, not just the author and reviewer;
we are not interested in "vibe-coded" PRs that degrade the quality of the codebase.
4. "Well-tested" means that you have covered all edge-cases that either you or the reviewer can think of.
- ℹ️ LLM-authored PRs will be held to a higher standard than human-authored PRs, because LLMs make it easier to write tests.
- ℹ️ If there is no existing test suite for a section of code, you must either write a new test suite or close the PR.
There are no exceptions for "writing the tests seems hard".
5. "Well-reviewed" means the author and reviewer both commit to fully understanding the code.
- ℹ️ All review requirements in [our existing review policy](../compiler/reviews.md#basic-reviewing-requirements) still apply.
- ℹ️ A review from a project member does not substitute for self-review.
Authors are expected to review their own code before posting and after each change.
- ℹ️ We recommend, but do not require, using a second LLM for adversarial local review before publishing your changes.

LLM-authored PRs must be tagged with a new `ai-assisted` label.
All such PRs will be posted to a new (private) Zulip channel, which will be accessible to all members of the `rust-lang` organization.
The goal of the channel is *not* to act as an additional gate-keeper on LLM-authored PRs.
Instead, it's to collect information about *whether this experiment is working*:
Are people doing interesting and useful things with LLMs? Are they learning? Are they making repeat contributions?
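
For a concrete picture of how the label half of this could be wired up, here is a minimal sketch; it assumes triagebot's existing `[relabel]` feature, and the actual mechanism is not specified by this policy:

```toml
# Hypothetical triagebot.toml excerpt: allow any user to add or remove the
# `ai-assisted` label, so PR authors can self-disclose LLM involvement.
[relabel]
allow-unauthenticated = [
    "ai-assisted",
]
```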

Because the new channel is private, it will have higher-than-normal standards for what counts as on-topic.
For example, the following are on-topic:
- Whether a PR meets the criteria for the experiment exception
- Whether a PR follows the policy in general

And the following are off-topic:
- Technical and design discussions. These should be posted directly on the PR or in a public Zulip channel.
- Discussions about effort, communication style, or intent
- General discussions about the LLM policy

## Appendix
### Motivation and guiding principles

There is not a consensus within the Rust project—and likely never will be—about when/how/where it is acceptable to use AI-based tools.
Many members of the Rust project and community find value in AI;
many others feel that its negative impact on society and the climate is severe enough that no use is acceptable.
Still others are working out their opinion.

Despite these differences, there are many common goals we all share:

- Building a community of deep experts in our collective projects.
- Building an inclusive community where all feel welcome and respected.

To achieve those goals, this policy is designed with the following points in mind:

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- LLMs are a new technology, and we are still learning how to use, moderate, and improve them.
Since we're still learning, we have chosen an intentionally conservative policy that lets us maintain the standard of quality that Rust is known for,
but leaves space open to experiment with LLMs to inform future policies.


### Moderation policy
#### It's not your job to play detective
["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Don't try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
Contributor @traviscross, May 14, 2026:

Suggested change
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
Point to this policy, maybe report it to the mods (at your discretion), and move on.

To an outside reader, reporting something to the mods is going to sound like an escalation against the contributor. Given that, it reads a bit backward that the clear violations should only result in the policy being cited while the borderline violations require a report to the mods.

There are many ways this could be redrafted to avoid this. What I'd suggest is leaving whether to report this to the mods to the discretion of the reviewer in all cases and removing the conditional on whether the case is borderline or clear.

Comment:

The mods have openly encouraged people to report non-violations so they can keep an eye on things. I don't think that we should be reinforcing the stereotype that communication with the mods is inherently accusatory; we should be breaking that assumption instead.


#### Be honest
Conversely, lying about whether or how you've used an LLM is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
Contributor @traviscross, May 14, 2026:

Suggested change
Conversely, lying about whether or how you've used an LLM is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
Conversely, claiming to have not used an LLM when you did or stating your use of an LLM as less than it was is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.

I think to be fair to people, this language needs to have a safe harbor. People shouldn't have to worry about whether they have precisely calibrated their disclosures, i.e., not too high, not too low.

They might have done the work many months ago and may not remember the exact details. It must always be safe to over-disclose — to err on the side of stating that more LLM use happened.

Member @kennytm, May 14, 2026:

the suggested edit is much harder to understand than the original text though

Comment:

I actually disagree; I would say that lying is a larger burden to meet and would not include people who are mistaken. Instead of weakening the wording, we could maybe clarify that the mods understand the difference between a lie and a misunderstanding.

Contributor:

the suggested edit is much harder to understand than the original text though

There are many ways to redraft, and I'm OK with one that continues to frame it as lying. E.g.:

Suggested change
Conversely, lying about whether or how you've used an LLM is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
Conversely, lying about not using an LLM when you did, or lying about relying less on an LLM than you did is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.

Comment:

I mean, I already kind of solved this in my version:

If a user is found to be repeatedly lying about LLM usage (using LLMs in a non-trivial way without disclosing that usage)

It doesn't need to be particularly elegant; parentheses are fine.

If you are not sure where something you would like to do falls in this policy, please talk to the [moderation team](mailto:rust-mods@rust-lang.org).
Don't try to hide it.

#### Penalties
The policies marked with a 🔨 follow the same guidelines as the code of conduct:
Violations will first result in a warning, and repeated violations may result in a ban.
- 🔨 Violations of the "Be honest" section

Other violations are left up to the discretion of reviewers and moderators.
For minor violations we recommend telling the author that we can't review the PR until it complies with the policy, with pointers to exactly what they need to do.
For major violations or extractive PRs, we recommend closing the PR or issue.

Using an LLM does **not** mean it's ok to harass a contributor.
Contributor @traviscross, May 14, 2026:

Suggested change
Using an LLM does **not** mean it's ok to harass a contributor.
That a person used an LLM does **not** mean it's OK to harass that contributor.

I think what this means to say is that it's not OK to harass people who used an LLM. It reads to me, though, as saying that using an LLM doesn't excuse the user harassing people, which is also obviously true, though probably too obvious to mention.

Comment:

This just feels pedantic, honestly. The new version feels a bit clunky to read, and the original still is understandable imho.

All contributors must be treated with respect.
The code of conduct applies to *all* conversations in the Rust project.

### Responsibility

Your contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally authored"

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)".
@xtqqczze, Apr 18, 2026:

I’m not comfortable with the definition of "originally authored" as written here. Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn’t an author.

No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
Member @joshtriplett, Apr 17, 2026:

Suggested change
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
In the manner the phrase is used in this policy, no amount of editing changes how something was "originally authored"; authorship sets the initial style and it is very hard to change once it's set.

Taking a different approach here, of narrowing the focus to the phrasing in this policy, rather than trying to get people to agree with the fully general statement.


For more background about analogous reasoning, see ["What Colour are your bits?"](https://ansuz.sooke.bc.ca/entry/23)

### Non-exhaustive policy

This policy does not aim to be exhaustive.
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Usages that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌
Comment on lines +182 to +183
Contributor @traviscross, May 14, 2026:

Suggested change
- Usages that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌
- Uses that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Uses that use LLMs for creation or show LLM output to another human are likely banned ❌

The more correct word here is uses. The word usage means a customary pattern or habit of use (or a rate or quantity of use).

Comment on lines +182 to +183
Contributor @traviscross, May 14, 2026:

The policy has evolved since this section was written, and I wonder whether it needs to be redrafted in a softer way. It now reads as far starker than the rest of the policy does.


### Conditions for modification or dissolution
This policy is not set in stone, and we can evolve it as we gain more experience working with LLMs.

Minor changes, such as typo fixes, only require a normal PR approval.
Major changes, such as adding a new rule or cancelling an existing rule, require
a simple majority of members of teams using rust-lang/rust (without concerns).
Comment on lines +189 to +190
Contributor @traviscross, May 14, 2026:

I read "without concerns" ambiguously here. Does it mean "it must achieve a simple majority of the members of teams using r-l/r and no concerns must be filed" or that "it must achieve a simple majority of the members of teams using r-l/r without considering concerns to have effect for blocking it."

Either one I could see as having been intended. The first is more similar to how we normally treat concerns, but it's in conflict with wanting only a simple majority, as it actually requires a kind of unanimity — any one person can block it. The second would be closer in spirit to a simple majority system, but it's in tension with our usual system of concerns.

Which reading is intended? Probably this should be made more clear.


This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
- An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence, as decided by a leadership council FCP.
Contributor @traviscross, May 14, 2026:

Suggested change
- An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence, as decided by a leadership council FCP.
- An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence, as decided by a leadership council FCP.
- By the leadership council adopting, by FCP, a Project-wide policy and stating in an FCP that it displaces this policy.

In my view, the LC (at any later time) would be within its rights to decide that rust-lang/rust is a shared space and that the drawbacks exceed the benefits, to the Project, of r-l/r having a policy different from that of the Project overall. The LC should be able to do this without having to make an evidentiary finding that the policy is harming the reputation of Rust.

@clarfonthey, May 14, 2026:

First, this would be redundant with the first point, since if the LC can just decide at any time to dissolve the policy, then it doesn't matter whether they have concerns about active harm.

Second, if you just think the LC can veto any change to how a repository is managed after the teams managing that repo have signed off, without an RFC, I find that at least a little bit concerning.

Third, this is supposed to be a temporary policy, to be replaced by a project-wide policy. (Note that if a project-wide policy sets aside the option for this policy to be tailored by a few specific teams, I would consider that replacing this policy, even if this effectively stays as-is.)

So, from the perspective of this being refined by a larger RFC, I think that it's fair to say that it should only be replaced if there is active and demonstrable harm, not just because the LC doesn't like it. And uh, it's a massive conflict of interest for you to be proposing that to begin with, since you would be one of the people potentially making that decision.

Contributor @traviscross, May 14, 2026:

Third, this is supposed to be a temporary policy, to be replaced by a project-wide policy.

The author has rejected framing this as an interim policy, so I'm not sure this is correct. See #1040 (comment).

Comment:

I can't comment on what jyn thinks, but to me, the goal here is to not belittle a policy that has already been put in place even if we also agree that such a policy needs to be put in place elsewhere in the project. The idea isn't that the policy itself is temporary, but rather that the only potential next steps would be either:

a) The policy has shortcomings that need to be addressed, or
b) The policy is superseded by a broader one

Neither of these cases is "the LC doesn't like the policy and wants to remove it entirely"
