36 commits
- `772edeb` Add an LLM policy for `rust-lang/rust` (jyn514, Apr 17, 2026)
- `815da6e` address some of jieyouxu's comments (jyn514, Apr 17, 2026)
- `17a35f4` revert extraneous change (jyn514, Apr 17, 2026)
- `61e5e2c` address some more review comments (jyn514, Apr 18, 2026)
- `8ee5ed4` more review comments (jyn514, Apr 18, 2026)
- `7cd8c17` more wording (jyn514, Apr 18, 2026)
- `2db7465` rewrite "trivial changes" section (jyn514, Apr 21, 2026)
- `9b2b3c2` rewrite intro to 'Allowed with caveats' (jyn514, Apr 21, 2026)
- `e3b1394` Be more specific in "Moderation policy" (jyn514, Apr 21, 2026)
- `e3f2aec` Add explicit conditions for modification or removal (jyn514, Apr 21, 2026)
- `864428f` mention that the policy is intentionally conservative (jyn514, Apr 21, 2026)
- `593d538` extend "Penalties" section with sentencing guidelines (jyn514, Apr 21, 2026)
- `75050a2` be more clear where the CoC is invoked (jyn514, Apr 23, 2026)
- `b6a8662` minor edits; add "Motivation and guiding principles" section (jyn514, Apr 28, 2026)
- `9a944f7` Relax and clarify moderation guidelines (jyn514, Apr 28, 2026)
- `14956c3` Carve out a space for experimentation (jyn514, Apr 28, 2026)
- `8fe7281` fix typo (jyn514, Apr 28, 2026)
- `791e46f` recommend adversarial review from another LLM (jyn514, Apr 28, 2026)
- `8520038` markdown formatting (jyn514, Apr 28, 2026)
- `b14e8ca` more markdown formatting (jyn514, Apr 28, 2026)
- `69b6dc1` Note that explicitly marking LLM content is ok (jyn514, May 13, 2026)
- `d682475` Exempt t-security-response from a few requirements (jyn514, May 13, 2026)
- `ea4e504` Make "solicited" even stricter (jyn514, May 13, 2026)
- `7eeecbb` Carve out a space for experimentation (jyn514, May 13, 2026)
- `cd9aecd` Revert "Exempt t-security-response from a few requirements" (jyn514, May 13, 2026)
- `ee4f26c` address a few of TC's concerns (jyn514, May 14, 2026)
- `7956574` address a few of Jack's concerns (jyn514, May 14, 2026)
- `b88855a` remove the 'additional scrutiny' examples (jyn514, May 14, 2026)
- `4305e14` move 'using an llm to discover bugs' to the caveats section, without … (jyn514, May 14, 2026)
- `ab6f8a4` move LLM-authored code to its own section; add a zulip stream as policy (jyn514, May 14, 2026)
- `d9d8238` add a section about staying on-topic (jyn514, May 14, 2026)
- `83b9363` wording (jyn514, May 14, 2026)
- `9efffad` relax moderation policy guidelines (jyn514, May 14, 2026)
- `24f236c` ban llms from writing safety comments (jyn514, May 15, 2026)
- `f85aac6` Group some "personal use" bullets together (jyn514, May 15, 2026)
- `adfc5e2` remove "by a team" phrase (jyn514, May 15, 2026)
1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -87,6 +87,7 @@
- [Project groups](./governance/project-groups.md)
- [Policies](./policies/index.md)
- [Crate ownership policy](./policies/crate-ownership.md)
- [LLM usage policy](./policies/llm-usage.md)
- [Infrastructure](./infra/index.md)
- [Other Installation Methods](./infra/other-installation-methods.md)
- [Archive of Rust Stable Standalone Installers](./infra/archive-stable-version-installers.md)
4 changes: 3 additions & 1 deletion src/how-to-start-contributing.md
@@ -122,13 +122,15 @@ We know that starting contributing in a FOSS[^1] project could be confusing at t
both contributors and reviewers have the best possible experience when collaborating in our project.

To achieve this goal, we want to build trust and respect of each other's time and efforts. Our recommendation is to follow these simple guidelines:
- Start small. A big ball of code as first contribution does not help to build trust
- Start small. A big ball of code as first contribution does not help to build trust.
- The work you submit is your own, meaning that you fully understand every part of it
- You take care of checking in detail your work before submitting it - ask questions or signal us (with inline comments or `todo!()`) the parts you're unsure about
- If you want to fix an issue but have doubts about the design, you're welcome to join our [Zulip][rust-zulip] server and ask for tips
- Please respect the reviewers' time: allow some days between reviews, only ask for reviews when your code compiles and tests pass, or give an explanation for why you are asking for a review at that stage (you can keep them in draft state until they're ready for review)
- Try to keep comments concise, don't worry about a perfect written communication. Strive for clarity and being to the point

See also our [LLM usage policy](./policies/llm-usage.md).

[^1]: Free-Open Source Project, see: https://en.wikipedia.org/wiki/Free_and_open-source_software

### Different kinds of contributions
116 changes: 116 additions & 0 deletions src/policies/llm-usage.md
@@ -0,0 +1,116 @@
## Policy

For additional information about the policy itself, see [the appendix](#appendix).

### Overview

Using an LLM while working on `rust-lang/rust` is conditionally allowed.
However, we find it important to keep the following points in mind:

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.

Therefore, the guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.

> LLMs work best when used as a tool to write *better*, not *faster*.

@joshtriplett (Member), Apr 17, 2026:

Suggested change
> LLMs work best when used as a tool to write *better*, not *faster*.
> In `rust-lang/rust`, please do not use LLMs as a tool to write *faster*.

Having this as a high-level summary is offering a judgement on LLMs that feels like it isn't necessary for the policy, and makes consensus more difficult to reach. For anti-LLM folks it's saying that they work best when used to write "better", which is a point in dispute. I would also expect (but don't want to put words in people's mouths) that for pro-LLM folks the point that they don't work well when used to work faster may be in dispute.

I've tried to rephrase this in a fashion that, rather than expressing a general statement on when "LLMs work best", is instead expressing what is desired *for rust-lang/rust*, as that's the scope of this policy.


@jyn514 (Member Author):

This is adapted from a quote by @ubiratansoares. This edit changes the quote beyond recognition, and I would rather remove it than edit this much.

Member:

Then I think it would be best removed, on the basis that the previous line covers similar territory and seems less controversial.

@Kobzol (Member), Apr 18, 2026:

Tbh I don't actually understand what this quote is supposed to mean; if anything, I would phrase it the other way around (you can use LLMs to do [things you can already do] to get them done faster, but you shouldn't use them to do things you don't already know how to do yourself).

@AndyGauge (Member), May 1, 2026:

Honestly, it was the

> LLMs work best when used as a tool to write better, not faster.

that I took back to my team and reworked our approach to AI-generated code. I think that statement itself has a lot of weight.

#### Legend

- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.
- ℹ️ Adds additional detail to the policy. These bullets are normative.

### Rules

#### ✅ Allowed
The following are allowed.
- Asking an LLM questions about an existing codebase.
- Asking an LLM to summarize comments on an issue, PR, or RFC.
@traviscross (Contributor), May 14, 2026:

Suggested change
- Asking an LLM to summarize comments on an issue, PR, or RFC.
- Asking an LLM to summarize comments on an issue or PR.

This is a policy for r-l/r. Speaking about RFCs may confuse the reader as to the scope of this policy.


- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
@traviscross (Contributor), May 14, 2026:

Suggested change
- ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.
- ℹ️ This does not allow reposting the summary publicly on `rust-lang/rust`.

As written, this could read as a prohibition on posting the summary publicly anywhere. That would be an overreach and is not what I think is meant.


Reply:

I think the clarification is fair, but I don't think that removing the point about personal use is fair. The point is to clarify that it's for personal use, and posting it publicly outside the project counts as that still.

Contributor:

This is a policy for r-l/r, so I'd consider it out of bounds for this to prohibit, e.g., a member of another team using an LLM to construct a summary and then posting that to a Project team space outside of r-l/r for team use. It would stretch the meaning of personal use too far to consider that personal use.

Reply:

What I meant is:

> This does not allow reposting the summary publicly on rust-lang/rust. This only includes your own personal use.

is fine, because posting it publicly outside of rust-lang/rust counts as personal use under this policy, and is therefore fine.

- Asking an LLM to privately review your code or writing.
- ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM, as long as you don't try to merge them into `rust-lang/rust`.
- Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
- ℹ️ This also includes reviewers who use LLMs to discover bugs in unmerged code.

@traviscross (Contributor), May 14, 2026:

Suggested change
- Using an LLM to generate solutions to an issue, learning from them, and then writing a solution from scratch yourself and in your own style.

I'd suggest moving this one to the allowed category (and revising it to mention generating example solutions in the plural, as that's better guidance for what someone should really do).

This is more similar to the other allowed items than to the with caveats items. As with using an LLM to review one's own code, this is a private use. We're requiring that the person write the solution that others will see from scratch. I.e., I read that as asking for independent creation — prohibiting copying in any form (in fact, it might be a good idea to make the language stronger about this, to improve clarity; maybe add "(no copying)" after "from scratch").

Nobody other than the author therefore will ever see these educational materials. Months could pass between the author looking at these examples and writing a solution. We don't demand to know the books or papers the author might have read that contained example code for similar problems.

Demanding disclosure here is a reach into a private space, and so is an overreach.


@jyn514 (Member Author):

I agree. I've moved this to the "Allowed" section.

Reply:

This just feels like you failed to take in the rest of the policy and decided to remove a large part of it because you disagree with it. The entire policy effectively clarifies what it considers valid "rewording" of LLM output and what isn't, and so adding a tiny bullet point at the beginning that overrides that and says it's all allowed with no caveats just undermines all that.

Contributor:

There are no valid rewordings under this policy. As the policy says:

> This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)". No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.

It then fully bans all contributions that were "originally authored" by LLMs other than those by approved bots, those under the recently added experimental process, machine translation, and trivial code changes. I.e., the other things under the Allowed with caveats section.

The odd one out is this one. The others represent things "originally authored" by an LLM, as the policy defines it. This one is not that. The policy requires that a solution be rewritten from scratch — i.e., it cannot be "originally authored" by an LLM.

Reply:

So, I guess I wasn't 100% correct on the reasoning here, but the effect is the same: you appear to be failing to take in the policy and deciding to remove a decent portion because you disagree with it.

The point here is that the policy is intentionally conservative when it comes to LLM usage: anything where the LLM could have potentially influenced the output in ways you didn't directly control is included. This is one of those obvious cases. It's like asking a friend to copy their homework; even if you just read a friend's paper and then rewrote your own, it's still fundamentally different than writing your own paper.

And similarly, examples like this exist in code too. For example, there have been multiple cases where code has been leaked and reverse engineers have explicitly refused to read it for legal reasons; simply knowing what was done taints the idea of a "clean-room" implementation. This is an obvious example of that. You can't say that this is "too private;" if we decide that LLMs are our business and you've decided to contribute, you should let us know, just like how you should let us know if you copy-paste code verbatim from another project. Your alternative is to not contribute code, not to hide its source.

Reply:

The issue isn't that it's copying, though, or whether you learned. The issue is whether an LLM was involved at all. It's a specific case where many people underestimate the effect the tools have on the end result, and so, we ask for disclosure in general to avoid having to litigate the very specific circumstances of what happened to determine whether they need to disclose or not.

It feels like the main issue, which you're explicitly not pointing out, is that you're concerned that people would be judged based upon whether an LLM was involved at all, and therefore, LLM usage should be kept as a "dirty pleasure" in this particular instance. I don't think that it's worth diluting the policy or creating confusion just because people are specifically unwilling to admit they used a particular tool.

Contributor:

My concern here is that it's inconsistent and an overreach. This would be more consistently placed with the learning rules rather than the original creation rules. It's an overreach in the same way that it'd be an overreach to require disclosure for the other learning rules.

The concerns about disclosure eroding trust in the Project are separate, and I've articulated them elsewhere.

Reply:

You seem to be intentionally ignoring all of the arguments where LLMs are a very specific case that cannot be generalised. Sure, in a perfectly frictionless vacuum, we would not be asking for disclosure in this particular case. But for a number of different reasons, we're asking this.

Let me come up with an equally hypothetical and unrealistic scenario: imagine if, five years ago, a massive campaign had been underway to fill StackOverflow with subtly wrong information to sabotage developers, in addition to all the correct information. It would be reasonable to ask all developers to disclose if they had used information on StackOverflow to develop a change in these very specific circumstances. This isn't "overreach"; it's a pragmatic desire to correct for potential issues, and simply disclosing that something was involved is not a massive privacy concern.

Of course, I know exactly why people would be uncomfortable disclosing in this scenario, and it's because they don't want to be judged for making a potentially unethical decision by using these tools, although since that line of thought is banned from discussion in this RFC, I will both refrain from making it a part of my argument and insist that you refrain from making it a part of yours.

Simply put, I think that if people wish to not disclose because of its relation to (forbidden topic), then that is an argument that is not suitable for this policy. If you disagree, then you can take a look at my RFC, which explicitly reduces LLM usage even further because of the presence of that argument.

Contributor:

> I agree. I've moved this to the "Allowed" section.¹

Thanks @jyn514.

¹ Note for posterity: this thread is displaying out of causal order. Jyn's message appeared, at least for me, after the earlier messages in the thread.

@jyn514 (Member Author):

@clarfonthey regardless of the policy issues: Please do not accuse TC of arguing in bad faith. It feels uncomfortably close to bullying them because you don't like their opinions.

#### ❌ Banned
The following are banned.
- Comments from a personal user account that are originally authored by an LLM.
- ℹ️ This also applies to issue bodies and PR descriptions.
Comment on lines +38 to +41
@traviscross (Contributor), May 14, 2026:

Most of the work lang members do, as a team, is reviewing language proposals made to us in rust-lang/rust issue bodies, PR descriptions, and comments. Here's a random recent one, by way of example:

We tend to care about policies for how lang-related issue and PR descriptions are put forward, e.g., the stabilization report template. Though we are not ourselves large contributors of code to r-l/r, we are in the set of maintainers of r-l/r — at least, that's how I see it.

So it surprised me a bit, given that this document sets policy for what's allowed for people making lang proposals to us in r-l/r (e.g., by prohibiting LLM-assisted drafting), that we weren't included on this FCP, though it had been earlier discussed.

I don't know what to do about that. I don't really want to ask here for the hassle of the FCP being restarted. And yet, given what the policy covers, it seems awkward to me that we're not on it — as though I'm commenting here with needs and interests as an outsider. That's how it feels, anyway.

Is there anything we can do about this? Maybe the scope can be narrowed so that it doesn't set policy for lang proposals or documentation items lang owns? Maybe something else? I don't know. @jyn514, what do you think?¹

¹ If you don't mind, I really do want to hear specifically from @jyn514 on this (no hurry).

@jyn514 (Member Author), May 14, 2026:

I'm happy to narrow the scope so this excludes lang proposals and stabilization reports, so that t-lang can set their own policy. Are there other things in t-lang's purview you would like to see excluded?

- ℹ️ See also "machine-translation" in ⚠️ below.
- Documentation that is originally authored by an LLM.
@traviscross (Contributor), May 14, 2026:

Suggested change
- Documentation that is originally authored by an LLM.
- Documentation that is originally generated by an LLM.

Without changing the semantics, could we search and replace all uses of "authored by an LLM" with "generated by an LLM"? As @xtqqczze has pointed out elsewhere:

> Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn't an author.

Pulling the idea of authorship into this policy seems unnecessarily philosophical for what it's trying to accomplish. The technology is called "generative AI". It'd seem more clear, to me, to stick with "generated".


Reply:

jyn specifically clarified "authorship" as a term here, so you can't just change one section and stay consistent. It would require at least structural changes to the policy.

@traviscross (Contributor), May 14, 2026:

As best I can tell from a careful review, it requires a search-and-replace and then some minor redrafting of the *The meaning of "originally authored"* section. That section could be redrafted, e.g., as:

> This document uses phrases such as "generated by an LLM" to mean "text that was generated by an LLM (and then possibly edited by a human)". No amount of editing can change how the text was originally created; how it was generated originally sets the initial style and it is very hard to change once it's set.

(Of course, there are many ways to redraft it. This is just one way.)

Reply:

Right, I think it would be fair to change "originally authored" to "originally generated," just, it would have to be a structural change affecting multiple places and not just the one line. For what it's worth, I think that the distinction is pedantic, but will defer to jyn on what the better wording is. I assume that "authored" was chosen intentionally.

- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
@joshtriplett (Member), Apr 17, 2026:

Suggested change
- ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
- ℹ️ This includes *any* doc comments, or non-trivial source comments.

Reordering this to make it clear first and foremost that "Documentation" includes any doc comments, moving "non-trivial source comments" second. This also drops the quantitative "multiple paragraphs"; some multi-paragraph comments may be trivial, and some one-sentence comments may not be.


@jyn514 (Member Author):

If you are using an LLM to write a multi-paragraph comment that is trivial, IMO that should also be banned. If you have a load-bearing single-line comment, I think that falls under "code changes authored by an LLM", although I'm not sure how to say that concisely.

- ℹ️ This includes compiler diagnostics.
- Code changes that are originally authored by an LLM.
Member:

The current wording feels overly restrictive, to the point that I'm not comfortable leaving a concern unraised as a compiler team member.

There is some nuance here that this doesn't capture that I think should be. Certainly, I think in general, I'm happy to ban "unsolicited" code that is LLM-generated, but I think that an outright ban on all "non-trivial" LLM-generated code is too strong. I'd like to see LLM-generated code allowed under the following strong caveats:

  • The reviewer is pre-decided, and has agreed to review LLM-generated code
    • Importantly, this does not mean a PR can be opened and then picked up by an "LLM-friendly" reviewer
  • The code is well-reviewed (meaning, that the reviewer is committing to ensuring they fully understand the code, well enough that they could easily have written it themselves; and the author has also reviewed the code)
  • Changes are "non-critical" (such as a non-compiler tool, code under a feature gate, diagnostics, etc.)

I personally think this is a pretty reasonable space to carve out for "experimentation": it doesn't subject reviewers who don't want to review LLM-generated code to unwanted reviews, it helps to ensure that code stays high-quality, and it limits the fallout of any "mistakes" in the process.

Reply:

"The code is well-tested" is another valuable caveat to add here. Requiring this is much less onerous in the context of LLM-assisted code.

Member:

I like it. I think it's a standard we want to hold for all contributions, but doesn't always get met. It's a nice position to have here.

Member:

I'd quite like to see an explicit carve out for teams or even individuals to do some experimentation - in specific areas or with specific maintainers that wouldn't affect maintainers who aren't interested in participating. Teams would obviously need to decide if they wanted to have such an experiment, but it would be useful input to any future revisions - e.g. "hey, we tried this in a controlled environment over here and we actually found it useful and helpful, maybe we could consider relaxing this point", etc.

- This does not include "trivial" changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html), which fall under ⚠️ below.
We understand that while asking an LLM research questions it may, unprompted, suggest small changes where there really isn't another way to write it.
However, you must still type out the changes yourself; you cannot give the LLM write access to your source code.
Member:

This is very weird to me. Either the change is small enough to be trivial, or it is not. I'm not sure what typing it out does?

Beyond this, it's not clear what this is aimed at. Is this aimed at when someone is conversing back and forth with an agent and they say "I suggest you do XYZ", or is this aimed at autocomplete-like code generation?

@jyn514 (Member Author):

I've removed the requirement to type out the code yourself.

- We do not accept PRs made up solely of trivial changes.
See [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
- See also "learning from an LLM's solution" in ⚠️ below.
- Treating an LLM review as a sufficient condition to merge a change.
LLM reviews, if enabled by a team, **must** be advisory-only.
Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
Member:

Given that this is limited to rust-lang/rust, probably better to just restrict to no LLM reviews.

@jyn514 (Member Author):

I actually really want to keep allowing LLM reviews. I think they're low-risk and give people a chance to see whether the bot catches real issues.

but they may not have a policy that an LLM counts as a person.
- ℹ️ See "review bots" in ⚠️ below.
- ℹ️ An LLM review does not substitute for self-review. Authors are expected to review their own code before posting and after each change.

#### ⚠️ Allowed with caveats
The following are decided on a case-by-case basis.
Please avoid them where possible.
In general, existing contributors will be treated more leniently here than new contributors.
We may ask you for the original prompts or design documents that went into the LLM's output;
please have them on-hand, and be available yourself to answer questions about your process.

- Using an LLM to generate a solution to an issue, learning from its solution, and then rewriting it from scratch in your own style.
Member:

Of course, see my comment on the "Code changes that are originally authored by an LLM." ban, but I do like laying out this "less-restrictive" point explicitly. I would move the "asking for details about how you generated the solution" to under this point, but modify it heavily.

Rather than stating something like "we need to know exactly what you said to the LLM and what model you used", I think a better approach is saying something like "You should be prepared to share the details of the direction you gave to the LLM. These may include general prompts or design documents/constraints."

I'm not sure that sharing the exact prompts or output, or the exact model does anything. What's the reasoning? I'm much more interested in what direction the author intended to take.

If the idea is to be able to "recreate" or "oversee" what the author did, that's just never going to work. This isn't something we can reasonably expect reviewers at large to do. Rather, if anything, this is something that I could see from a more mentor/mentee relationship. If it ever is at the point that a "random" reviewer wanted or needed to see this, then the PR likely just needs to be closed and further discussion should happen elsewhere before continuing.

- Using machine-translation from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- ℹ️ This policy also applies to non-LLM machine translations such as Google Translate.
- Using an LLM as a "review bot" for PRs.
@kennytm (Member), Apr 19, 2026:

Maybe I'm OOTL but I find this section situationally strange — where did the "review bot" come from?

IME AI-powered review bots that directly participate in PR discussions (esp the "app" ones) are configured by the repository owner, but AFAIK r-l/r (which this policy applies solely to) did not have any such bots. I highly doubt a contributor will bring in their own review bot in public. So practically this has to be either:

  • someone requested a review from Copilot, which maybe we can opt out of?
  • the reviewer outsourced the review work to a coding agent, which is already covered in the sections
  • at least one team actually considered enabling such review bots in the future? as this is linked previously in that "Teams can have a policy that code can be merged without review" part, but I don't think this will ever happen given the stance of this policy


Member:

> I highly doubt a contributor will bring in their own review bot in public.

I wish it worked like that :( People can just trigger GitHub copilot, or I suppose any other review bot, and let it comment on a r-l/r PR. Some people don't even do it willingly, but GH does it automatically for them, as GH copilot has a tendency to re-enable itself even if you sometimes disable it.

It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

@xtqqczze, Apr 19, 2026:

I’ve seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Member:

> It is also not possible to opt-out of the PR author requesting a Copilot review, if I remember correctly.

Yeah, currently disabling review is a personal/license-owner setting; it is not possible to configure from the repository PoV 😞 but I think this is something that we may bring up to GitHub.

It may be possible to use content exclusion to blind Copilot, but I'm not sure if this hack is going to produce any overreaching effects (e.g. affecting private IDE usage too).
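For reference, GitHub's repository-level content exclusion is configured under the repository's Settings → Copilot → Content exclusion as a YAML list of path patterns. A minimal sketch of what blinding Copilot to the tree might look like; the specific paths are illustrative assumptions, not a vetted recommendation:

```yaml
# Repository settings > Copilot > Content exclusion (sketch).
# Each entry is a path pattern whose contents Copilot may not read.
# The paths below are assumptions chosen for illustration only.
- "/compiler/**"
- "/library/**"
- "/src/**"
```

Per the caveat above, it's unclear whether exclusion at this level also degrades legitimate private IDE usage for contributors who have opted in, so this is a possible mechanism rather than an endorsed one.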

@apiraino (Contributor), Apr 20, 2026:

> someone requested a review from Copilot, which maybe we can opt out of?

I think this is exactly why we call that out in our policy. Some people trigger a "[at]copilot review" in our repos without asking us for consent. This is rude behaviour and we don't want that.

And, yes, as you point out, opting out of this "trigger" is currently only a project-wide setting, not a repository-level one, so we are checking with GitHub whether they could make this setting more fine-grained (here on Zulip, a discussion with the Infra team).

@jyn514 (Member Author):

@clarfonthey I understand you are frustrated but it doesn't help to take it out on the people we're working with. Can I ask you to take a break from commenting on this RFC for a bit? Feel free to DM me with any concerns you have about the policy itself.

Reply:

yeah, you're right; I deleted the comment

Reply:

> I've seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

Unsolicited review bots are becoming an increasing problem; for example: https://web.archive.org/web/20260426133344/https://github.com/rust-lang/rust-clippy/issues/16893#issuecomment-4321880160

Reply:

Thank you for flagging, xtqqczze - the same bot has commented in 6+ issues on the rust-clippy repo, and in my case it was giving unsolicited advice in a completely derailing direction (solving a specific case I obviously already worked around rather than the general case, rust-lang/rust-clippy#16901 (comment))

Member:

@xtqqczze both rust-lang/rust-clippy#16893 and rust-lang/rust-clippy#16901 are issues, not PRs, and that @QEEK-AI account commented spontaneously without any summoning. So I don't think these instances fall under this "Review Bot" rule (which is still "⚠️ Allowed with caveats"). At the very least these are "Comments […] authored by an LLM", which is "❌ Banned", and they are also outright "spam" that the current CoC can already handle.

- ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM. They **must not** post under a personal account.
- ℹ️ Review bots that post without being approved by a maintainer will be banned.
Reply:

I'm concerned this leaves room for reviewers to trigger a review bot without consent of the author of the PR, which could alienate the PR author. If I opened a PR and it got reviewed by an LLM bot, I would probably close the PR and never try contributing to the project again. I've seen this happen in another project. I think there should be an agreement between the reviewer and PR author before triggering a review bot.

Member:

"approved by a maintainer" is the key point here, if an LLM review bot is "approved by a maintainer" it means such is a public decision and should be mentioned in CONTRIBUTING.md, and that's the agreement.

Reply:

An agreement among maintainers to impose LLM review bots on nonconsenting contributors would drive those contributors away.

Reply:

If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

@kennytm (Member), May 3, 2026:

Rephrasing LLM output is already addressed in lines 67-68.

The premise of this whole section is that somehow a bot (as a separate account, line 69) can be officially "⚠️ Allowed with caveats" (line 57) for reviewing.

If you think that a review bot account should not be allowed, even if approved by maintainers, this whole thread would be more relevant on the parent item (line 66; I've commented about this before).

P.S. I don't think this policy implies any LLM review bot account will be allowed "right now" or "soon"; I believe there must at least be an FCP.

Reply:

> If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

Thinking about this further, this seems like an overall better process than having a review bot comment on a PR. There's no room for ambiguity about whether a PR author is responsible for responding to LLM output; only the reviewer who decides to use an LLM is in a position to interpret the LLM output because "Comments from a personal user account that are originally authored by an LLM" are explicitly forbidden.

- ℹ️ If a linter already exists for the language you're writing, we strongly suggest using that linter instead of or in addition to the LLM.
- ℹ️ Please keep in mind that it's easy for LLM reviews to have false positives or focus on trivialities. We suggest configuring it to the "least chatty" setting you can.
- ℹ️ LLM comments **must not** be blocking; reviewers must indicate which comments they want addressed. It's ok to require a *response* to each comment but the response can be "the bot's wrong here".
Reply:

I don't think it's okay to require PR authors to have to say "the bot's wrong here"; the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

Member:

> the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

I don't see how line 73 disagrees with this. The statement "It's ok to require a response" refers to the reviewer requiring a response from the author to address the bot comment, not from the bot itself. The previous statement "reviewers must indicate which comments they want addressed." also suggested that the reviewer has taken the 'onus' of the bot comment. In this scenario I don't find requiring the PR author to say "the bot's wrong here" to dismiss the comment is unfair to the author; in fact, having that 2nd step "reviewers must indicate which comments they want addressed" means the PR author is in fact rejecting the combined analysis of the bot and the reviewer, so I'd say this is more biased against reviewers.

Reply:

The current wording is a bit ambiguous and could conceivably be interpreted to mean that "it's okay to require a response" implicitly. I would like to see this clarified to say explicitly that a bot's comment only needs to be responded to if a reviewer explicitly indicates that.

- In other words, reviewers must explicitly endorse an LLM comment before blocking a PR. They are responsible for their own analysis of the LLM's comment and cannot treat it as a CI failure.
- ℹ️ This does not apply to private use of an LLM for reviews; see ✅ above.

All of these **must** disclose that an LLM was used.

## Appendix

### No witch hunts
["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Do not try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
@traviscross (Contributor), May 14, 2026:

Suggested change
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.
Point to this policy, maybe report it to the mods (at your discretion), and move on.

To an outside reader, reporting something to the mods is going to sound like an escalation against the contributor. Given that, it reads a bit backward that the clear violations should only result in the policy being cited while the borderline violations require a report to the mods.

There are many ways this could be redrafted to avoid this. What I'd suggest is leaving whether to report this to the mods to the discretion of the reviewer in all cases and removing the conditional on whether the case is borderline or clear.


Reply:

The mods have openly encouraged people to report non-violations so they can keep an eye on things. I don't think that we should be reinforcing the stereotype that communication with the mods is inherently accusatory; we should be breaking that assumption instead.


Conversely, lying about whether you've used an LLM is an instant [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
If you are not sure where you fall in this policy, please talk to us.
Don't try to hide it.

### Responsibility

All contributions are your responsibility; you cannot place any blame on an LLM.
@alice-i-cecile, Apr 17, 2026:

Suggested change
All contributions are your responsibility; you cannot place any blame on an LLM.
Your contributions are your responsibility; you cannot place any blame on LLMs that you have used.

Clarity / wording.


- ℹ️ This includes when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### "originally authored"

This document uses the phrase "originally authored" to mean "text that was generated by an LLM (and then possibly edited by a human)".
@xtqqczze, Apr 18, 2026:

I'm not comfortable with the definition of "originally authored" as written here. Authorship is something that applies to a person, not tools; an LLM can generate text, but it isn't an author.


No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
@joshtriplett (Member), Apr 17, 2026:

Suggested change
No amount of editing can change authorship; authorship sets the initial style and it is very hard to change once it's set.
In the manner the phrase is used in this policy, no amount of editing changes how something was "originally authored"; authorship sets the initial style and it is very hard to change once it's set.

Taking a different approach here, of narrowing the focus to the phrasing in this policy, rather than trying to get people to agree with the fully general statement.



For more background about analogous reasoning, see ["What Colour are your bits?"](https://ansuz.sooke.bc.ca/entry/23)

### Non-exhaustive policy

This policy does not aim to be exhaustive.
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Usages that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌
Comment on lines +183 to +184
@traviscross (Contributor), May 14, 2026:

Suggested change
- Usages that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Usages that use LLMs for creation or show LLM output to another human are likely banned ❌
- Uses that do not use LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Uses that use LLMs for creation or show LLM output to another human are likely banned ❌

The more correct word here is uses. The word usage means a customary pattern or habit of use (or a rate or quantity of use).


Comment on lines +183 to +184
@traviscross (Contributor), May 14, 2026:

The policy has evolved since this section was written, and I wonder whether it needs to be redrafted in a softer way. It now reads as far starker than the rest of the policy does.



This policy is not set in stone.
We can evolve it as we gain more experience working with LLMs.
Contributor:

I would feel better if we made this policy explicitly time-limited or tied to a process of gathering more information.

Reply:

Niko, you're one of the loudest voices trying to dictate the direction we're going. I would argue that a majority of the pushback against sensible policies like this one has come from you; since you're effectively the project manager for the project, your voice carries further than a dozen people's, and it feels like you're genuinely oblivious to this. Plus, a lot of the arguments you've offered have been from the position that whatever you think is reasonable is canonically reasonable, which is a perspective that resists all forms of negotiation.

We all agree that this policy is not going to be permanent, but a large portion of the project seems to be in agreement that this should be the policy we adopt until a project-wide policy is adopted.

It's also worth noting, since it's been brought up multiple times, that we don't do policy by majority vote. This is even true for a policy like this one: if we did majority vote, we'd just ban all LLM usage, but we're not doing that because we're willing to compromise.

Right now, it seems pretty unsubstantiated that a handful of voices have dictated this position. While it's true that a small number of people have been active in the policy channel, a majority of the project have pointed out their desire for a total ban on LLM usage. This, being noticeably more lenient on that, is a compromise from us. You should consider whether you're willing to compromise at all on your stance, and what compromise would mean for you.

As I mentioned in one of the discussions, I do think it's a false equivalence that both sides need to concede something, but if you don't even know what it means to compromise, then negotiation is utterly impossible. I really am not convinced that you understand what a compromise of the pro-LLM position would be, based upon the utter confusion you've expressed when mentioning that some of the contributions you've done would not be acceptable under some of the proposed policies.

Reply:

I do not plan to actually engage in this conversation any further (I acknowledge my biases and when to step out), but I think it's worth pointing out to the at-least-5 people who gave a thumbs-down reaction to my comment that I personally have a rule when it comes to this.

If I ever decide to mark my dissent on a comment with the thumbs-down emoji, I always reply explaining why, unless everything I wish to say has already been said. Many times, the result is far more critical of the poster than a simple emoji, but I do this because I genuinely want people to understand why I feel a particular way, rather than just saying "I don't like this and will not explain why." We don't improve if we don't know what's wrong.

In my mind, my above comment is required before I can give Niko's a thumbs-down reaction, because otherwise I'm being insincere to him and everyone else reading. I do not say that I disagree with something without saying why; in that case, it's better to not say anything at all.

Again, I acknowledge that my explanation can be deeply hurtful. Disagreement is a painful but necessary process. I also know that there are plenty of times where I have been excessively hurtful without providing the relevant constructive feedback, and I think it's worth calling me out for that.

I don't apply my standards to anyone else. Lots of people just don't have time to write up a full response. But I personally, in these cases, simply don't respond at all.

So, consider whether your simple thumbs-down emoji constitutes genuinely useful feedback, or whether you're just being excessively hurtful instead. And, if you would like to express your dissent in private, I'm open to DMs on Zulip too; this is an open invitation to just say what you feel without a filter. It would be hypocritical of me to be so blunt with my opinions and not accept the same in kind.

---
Member

I'll reply here since this is 'the thread', but I want to say first that I don't agree with much of what you wrote @clarfonthey. I believe Niko is raising a concern in good faith, though I'd like to understand it better.

> I would feel better if we made this policy explicitly time-limited or tied to a process of gathering more information.

@nikomatsakis, can you elaborate on specifically what information you think we should be seeking, and what process you're imagining for iterating towards a better policy? Are you seeking commitment from folks who have engaged so far to keep engaging in discussion on Zulip? Something else? I'd be happy to chat offline (Zulip, or a more synchronous meeting if you'd prefer) if that makes more sense.

I see @jyn514 left a comment below with some more data on project opinions, but it's not clear to me if that's the kind of data you're seeking, or something else. Could you elaborate on what you're looking for, and what kinds of process/timeline you would find better than the copious discussion and iteration that have landed us on this proposal (and some others)?

I personally think a policy like this one, relatively restrictive but scope-limited and leaving room for usage in other areas of the Project, strikes a good balance: it allows continued input on where the world is while leaving the door open for private usage by those comfortable with it. That combination seems guaranteed to ensure we're not going to stop discussing, since everyone seems to want something different from this policy, even if we manage to reach consensus on landing this in the meantime.

---
Contributor

I appreciate the vote of confidence, Mark. And @clarfonthey I appreciate that I have reputational clout in the project -- though I'd also note that it doesn't usually translate into me getting my way without fighting for it tooth and nail. =) In any case, I wouldn't be speaking up this much if I didn't feel it was important.

To answer your questions Mark:

> Are you seeking commitment from folks who have engaged so far to keep engaging in discussion on Zulip?

No, I think the Zulip discussions are not useful. I want to see a more structured process. I think it would look like this:

  • First, there'd be a group of people who are working to form a policy. This would be a representative, high-trust group that contains some folks from various positions here. And for the record, I don't particularly want to be in it. =)
  • Second, I think it'd be useful to take the next step of turning the qualitative data we gained from Rust Project Perspectives into quantitative data. I talked about this on Zulip a few times, but one idea is to do targeted polling to try to figure out "how widely is each of the major families of concerns shared" and "what is the texture".

For example, @jyn514 has expressed openness to having a separate review queue for "LLM-authored content". How many others on the compiler team share that opinion? I have no idea. And of course @clarfonthey has expressed ethical concerns, and I don't really know how many people share that bright red line. And that's just existing maintainers; what about people who've opened PRs in the last year? How many of them work with LLMs at work or on a daily basis? What are their experiences like?

Another thing I'm very curious to understand, something I think could be useful, is what people are afraid of or hopeful for as a result of this policy. That might inform the conversation.

For example, for me, one of my big fears is that we will be distancing ourselves from future contributors, many of whom will be coding with LLMs. When Rust started, we made a deliberate choice to use GitHub and not Bugzilla because, frankly, GitHub is where the people are. I would be interested to see if the perspectives around LLM usage vary between existing maintainers and future contributors, or along other lines.

---

According to https://rethinkpriorities.org/research-area/adoption-llms-tech-workers/, 91% of respondents have used LLMs for work, with 29% using them daily.

This data is a year old now, and I have many reasons to believe usage of tools like Claude Code has only increased since then, and dramatically at that. Of the four tech companies I have direct knowledge of, all four have gone, in that time, from AI coding tools being used by a minority of developers to being used by almost every developer. In two of them, using AI coding tools is practically mandatory. It's also quite clear to everyone that companies like Anthropic are struggling to keep up with the growth in usage.

I personally have approached AI with extreme skepticism from the beginning, and I still consider its functionality to be dramatically oversold by the companies selling it. But it's already extremely widely used, it's already a very effective tool when used correctly, and I think @nikomatsakis is absolutely correct that this will distance us from contributors who would use it, which is now essentially all new programmers.

---

At the company I work for, it's also the case that most people use AI quite a bit; however, multiple new employees have expressed that they don't want to use it much or at all because it could interfere with learning. They want to go from junior engineers to senior engineers, and the best way we know to do that is hands-on experience.

So I agree that it's become an industry standard (and policies which do not reflect this may be unsustainable); however, it's not necessarily true that all new programmers will be AI users initially.

---
Member

I want to second @nikomatsakis's point, mostly: I'm not sure that I necessarily care that this is "time limited", but as restrictive as this is, I don't want us to merge this and think it is "enough". I also don't know what "correct" here looks like. Let me try to spell out exactly what I would and would not like:

  • I would like for us to gather more/better quantitative data on AI usage within the Project, and among contributors to the Project.
  • I would like for us to continue discussing what a Project-wide policy looks like and find consensus.
  • I would like for us to evaluate/monitor actual effectiveness of this policy once we merge.
    • Does this "reduce the spam"? Is it easier to moderate? Do maintainers feel less burdened?
    • Are there things that we missed in this policy? Are there parts of the policy that just don't seem to be a problem?
  • I would like for us to re-evaluate as models change, as the ecosystem changes, as best-practices change, etc.
  • I would like for us to identify clearly the problems that we are trying to solve, evaluate alternative solutions than a "restrictive AI policy", and then evaluate how this policy fits with those solutions.
  • I would not like for us to assume that this policy is "as permissive as it gets" nor "as restrictive as it gets".
  • I would not like for us to broadcast this as an "anti-AI" stance. (Rather, I want to set this as a "we're figuring things out, and we need to focus on maintainers and quality until we do.")
  • I would not like for this policy to let us stop treating contributors with respect (regardless of AI use).
  • I would not like for us to disregard how AI is used outside the Project, and how the policies we set affect our relationships with individuals, companies, and organizations.

In all, I think it's best said like this:

I don't want us to think of this policy as "done". I want it to be another stepping stone in figuring out what works. I don't think "only talking" gets us very far (which is why some policy, even if more restrictive or less restrictive than some would like, is still a good step), but I don't think that this is a "solution", only another means to help us figure out what works for the Project. I don't want us to merge this and then, any time we are discussing, someone can just point and say: "look, we merged a policy, why are we still discussing this?"

Unfortunately, we're bad at ensuring we don't set something down and forget to pick it up again. A time-limited or event-limited policy can help with this. If we said "this policy is only in effect for a year", then in a year we must reevaluate whether this policy "worked" and what changes (if any) should be made. I'm not sure what an "event-limited" process would look like, but I could imagine it's some combination of doing a survey, identifying key "events" like e.g. a capable/free "open model" being available, additional tooling being built that could obviate the need for some of this policy, the Project gaining consensus on a Project-wide policy, some team raising a concern, etc.

I imagine what we actually want is some combination. Just taking a stab:

This policy is not set in stone, and can be amended with a simple majority of members of teams using rust-lang/rust (without concerns).
This policy can be dissolved in a few ways:

  • Consensus (n-2) of all members of teams using rust-lang/rust (without concerns)
  • A formal Project-wide policy in place AND 1 year passing since this policy is first merged
  • An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence; as decided by a leadership council FCP (consensus without concerns)

---
Member Author

> An objective concern raised about active harm the policy is having on the reputation of Rust, with evidence; as decided by a leadership council FCP

👍 I like the idea of having an escape hatch if there's a crisis.

> Consensus (n-2) of all members of teams using rust-lang/rust (without concerns)

I think this is implied, but 👍 to spelling it out explicitly.

> A formal Project-wide policy in place AND 1 year passing since this policy is first merged

I don't like that this leaves no room for a project-wide policy that allows teams to set more specific policies.

---
alice-i-cecile (Apr 21, 2026)

If there's a sunset clause, what's the fallback policy? Ideally it's a policy that everyone dislikes, so there's incentive to properly fix it.

The current status quo seems to be... fully permissive but also people will get mad at you if you submit LLM-generated work? That seems less than ideal.

---
Member

This is why my second point includes not just time passing but also the Project-wide policy (which is, I guess, the "fallback"). I don't necessarily think everyone has to dislike that, but rather that it needs to be something more fundamentally shared across the entire Project than a rust-lang/rust-specific policy.

The other two points describe an active dissolution, which fundamentally requires either consensus (the same as forming the policy) or evidence of active harm.
