# Add an LLM policy for rust-lang/rust (#1040)

## LLM Usage Policy

For additional information about the policy itself, see [the appendix](#appendix).

### Overview

Using LLMs while working on `rust-lang/rust` is conditionally allowed, when done with care.
LLMs are not a substitute for thought,
and we do not allow them to be used in ways that risk losing our shared social and technical understanding of the project,
nor in ways that hurt our goals of creating a strong community.

The policy's guidelines are roughly as follows:

> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to **create**.

> LLMs work best when used as a tool to write *better*, not *faster*.

**Member** (suggested change): Having this as a high-level summary is offering a judgement on LLMs that feels like it isn't necessary for the policy, and makes consensus more difficult to reach. For anti-LLM folks it's saying that they work best when used to write "better", which is a point in dispute. I would also expect (but don't want to put words in people's mouths) that for pro-LLM folks, the point that they don't work well when used to work faster may be in dispute. I've tried to rephrase this in a fashion that, rather than expressing a general statement on when "LLMs work best", instead expresses what is desired for this project.

**Member (author):** This is adapted from a quote by @ubiratansoares. This edit changes the quote beyond recognition, and I would rather remove it than edit this much.

**Member:** Then I think it would be best removed, on the basis that the previous line covers similar territory and seems less controversial.

**Member:** Tbh I don't actually understand what this quote is supposed to mean; if anything, I would phrase it the other way around (you can use LLMs to do things you can already do to get them done faster, but you shouldn't use them to do things you don't already know how to do yourself).

**Member:** Honestly, it was the "write *better*, not *faster*" line that I took back to my team, and we reworked our approach to AI-generated code. I think that statement itself has a lot of weight.

### Rules

#### Legend

- ✅ Allowed
- ❌ Banned
- ⚠️ Allowed with caveats. Must disclose that an LLM was used.
- ℹ️ Adds additional detail to the policy. These bullets are normative.

#### ✅ Allowed

The following are allowed.

- Asking an LLM questions about an existing codebase.
- Asking an LLM to summarize comments on an issue, PR, or RFC.
  - ℹ️ This does not allow reposting the summary publicly. This only includes your own personal use.

**Contributor** (suggested change): This is a policy for `rust-lang/rust`.

**Contributor** (suggested change): As written, this could read as a prohibition on posting the summary publicly anywhere. That would be an overreach and is not what I think is meant.

**Member (author):** I think the clarification is fair, but I don't think that removing the point about personal use is fair. The point is to clarify that it's for personal use, and posting it publicly outside the project still counts as that.

**Contributor:** This is a policy for r-l/r, so I'd consider it out of bounds for this to prohibit, e.g., a member of another team using an LLM to construct a summary and then posting that to a Project team space outside of r-l/r for team use. It would stretch the meaning of personal use too far to consider that personal use.

**Member (author):** What I meant is: that is fine, because posting it publicly outside of r-l/r still counts as personal use under this policy.

- Asking an LLM to privately review your code or writing.
  - ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM, as long as you don't try to merge them into `rust-lang/rust`.
- Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
  Please refer to [our guidelines for fuzzers](https://rustc-dev-guide.rust-lang.org/fuzzing.html#guidelines).
  - ℹ️ This also includes reviewers who use LLMs to discover flaws in unmerged code.

**Member:** In my opinion, it seems unnecessary to require the disclosure of LLM usage when finding bugs or flaws, if we're already requiring that a human verified the bug and wrote the report. Is there a particular reason that this disclosure requirement is here?

**Reply:** I think disclosure of LLM use is critical because honesty is crucial for open source projects. If you claim credit for something that an LLM found, that is dishonest and corrosive to trust.

**Member:** Fair point. I think I wasn't sure how literally this was meant to be taken, given that bug reports were singled out. In other words, is it only if the LLM played an essential role, or is it if the LLM was involved at all? Probably the latter case is explained by the other policies, like how it is acceptable to use LLMs for understanding.

**Reply:** Both avid LLM users and people who dislike LLMs are in favour of disclosure, so it's not really controversial. LLMs work differently than humans do, and knowing that one was involved can really help review.

**Member:** I am not opposed to disclosure, and I agree that humans should not take credit for work done by an LLM. At the same time, I am concerned that, given the environment surrounding AI usage in the project, issues and PRs that disclose any LLM usage will be subject to prejudice. I believe that reviewers should as much as possible judge PRs on their merits and their content, not based on what tools were involved in the creation process.

**Contributor:** I have a much worse reaction to finding out a contribution has LLM-generated parts by noticing patterns. If you're transparent about it I bear no ill feelings, just possibly a lack of interest in reviewing it myself. I am willing to review (and enjoy reviewing) 3kloc diffs if I know the author thought about the changes and I can review them with the author's intent behind them. I will definitely be sceptical about someone's future contributions if I encounter LLM code without expecting it.

**Contributor:** I agree that tool use should be disclosed if one did not fully and completely verify the issue oneself, as one is then leaning on the correctness of the tool rather than on one's own personal judgment for reducing false positives. Complete agreement there. But I don't want people sending us those. In saying "oh, I used Claude to find this", it gives people an excuse for a high false-positive rate. "Hey, don't blame me, Claude said it was a bug." Do we want that at all? The policy reads that we don't, as it requires that "you personally verify the bug". Yet it also requires disclosure if someone uses these tools as a smart grep. I don't see what's gained by that. Even I would be likely to read into the disclosure an implication that the person didn't actually check it. There's no way to counter that. There's research on this: per the Schilke and Reimann paper, people who disclose LLM use are trusted less, holding contribution quality constant, even by people who use LLMs themselves, even when reviewers were already aware of the LLM use, even when the author asserts having reviewed and revised it, even when the author asserts the LLM was only used for proofreading, and even when the disclosure is mandated. I worry about this eroding trust in our project. Regarding PRs, if the code is not byte-for-byte identical with what the person would have written (given more time due to slow hands) without the assistance of an LLM, then I don't want it at this point. I don't think we want those 3kloc PRs with quality problems that become obvious just below the surface. For me, the issue on this goes beyond disclosure. This policy effectively bans those anyway. It allows only "using an LLM to generate a solution to an issue, learning from its solution, and then rewriting it from scratch in your own style." I doubt anyone is going to rewrite one of those 3kloc PRs using this method. But it also requires disclosure even when writing from scratch, and I think it's an overreach to demand disclosure of things that people have learned from. And, as above, I worry about it eroding trust.

**Contributor:** Right, I derailed this thread somewhat. The disclosure here is about issues, and there I'm not having a negative reaction. It's like a more directed fuzzer, and we already have disclosure of fuzzers by convention.

**Member:** That's part of why I found the explicit disclosure requirement a bit strange, since we already have conventions (but no requirements, AFAIK) for disclosure in the case of other tools, like fuzzers. I don't understand why we need to require disclosure for bug-finding tools specifically in the case of LLMs. However, mentioning how a bug was found is helpful even just for the purpose of analyzing what tools are most effective at finding which types of issues, so I don't have an issue with disclosure—it just seemed weird to me to single LLMs out.

**Reply:** Something that I haven't seen discussed here is disclosure as a means to enforce the policy. It is usually possible to detect AI-written contributions, but except in extreme cases it is very hard to know whether that contribution abides by our requirements, and even harder to prove it does not. If we require disclosure, failing to disclose is a violation of the policy and can be escalated to moderators on that basis alone. On the other hand, if we don't, we need to find some other evidence, and this is much harder. Therefore I support disclosure even if it indeed erodes trust (which I do not believe it does).

**Contributor** (suggested change): Could we narrow this somehow? I routinely run models over the lang nomination queue to prepare for meetings. It's not the main purpose of my process, but nonetheless the models reliably flag things that I would find anyway, e.g., violations of RFC 0344 lint naming conventions, or how the lint level is set. The model is going to have found this flaw in the unmerged code first, and I'm going to have seen that, so I'd read that as falling under this policy. But I'm not looking forward to having to pollute all my comments with a disclosure line. Or could we come up with a scheme for blanket disclosure? Could we disclose once somewhere, e.g., in team, whether all our review comments should be treated as possibly LLM-assisted? I suppose I'd even be OK if triagebot wants to walk around in my shadow posting scarlet letters after each of my comments, as long as I don't need to retype this everywhere.

**Reply:** I'm a bit confused why you're asking for this clarification, since this is under the allowed section. You just don't need to disclose it, from what I can see.

**Member (author):** No, under the rules it currently requires disclosure.

**Reply:** Ah, that feels a bit confusing, then. Because of the way this list is worded, it looks like it is technically allowed, and it probably should be moved down to the caveats section. I don't think that we should be allowing blanket disclosure. Part of the reason why this policy exists is to avoid complacency, and this is a great way to ensure it.

**Contributor:** Something I should mention here that's probably not immediately obvious to those outside of lang is that (as we do on the lang calls) I'm generally leaving these comments during a live lang meeting. We might go through 25 items in 150 minutes. That doesn't leave a lot of bandwidth (hence the lang-ops preparation for the calls). That's the context in which I'm asking for a reliable low-cost way to comply with this while being able to leave the comments I need to leave.

**Reply:** I don't think that you need to "pollute" your comments here. If you're using an LLM to blanket go through all these items, you could easily satisfy this policy by, for example, linking once to a standing note describing your process. Disclosure still counts if it requires clicking on a link.

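To make that concrete, here is one low-cost shape such a disclosure could take; a minimal sketch in Python, where the footer wording, the disclosure URL, and the helper itself are all hypothetical illustrations rather than anything the policy mandates:

```python
# Sketch: post an issue comment that always carries a standing
# LLM-disclosure footer. The footer wording and URL are hypothetical,
# not policy-mandated text; GITHUB_TOKEN is assumed to be a token
# with permission to comment on the target repository.
import os

import requests

DISCLOSURE_FOOTER = (
    "\n\n---\n*Some of my triage comments are prepared with LLM assistance; "
    "standing disclosure: <https://example.com/my-llm-disclosure>*"
)

def post_comment(owner: str, repo: str, issue_number: int, body: str) -> None:
    """Post `body` as an issue comment, appending the disclosure footer."""
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}/comments",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        json={"body": body + DISCLOSURE_FOOTER},
    )
    resp.raise_for_status()
```
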
**Contributor** (suggested change): Is this equivalent?

**Contributor:** The "where you are the only one that sees the output" clause would be a more expansive prohibition. You could be working with someone else, with another team, or posting it publicly (but not to r-l/r). (See the recent additions for other items this framing is trying to encompass.)

**Member (author):** I like Niko's suggestion to make it more clear that these are examples of a larger policy point. I've added it.

**Contributor** (suggested change): I'd suggest moving this one to the allowed category (and revising it to mention generating example solutions in the plural, as that's better guidance for what someone should really do). This is more similar to the other allowed items than to the with-caveats items. As with using an LLM to review one's own code, this is a private use. We're requiring that the person write the solution that others will see from scratch. I.e., I read that as asking for independent creation — prohibiting copying in any form (in fact, it might be a good idea to make the language stronger about this, to improve clarity; maybe add "(no copying)" after "from scratch"). Nobody other than the author will therefore ever see these educational materials. Months could pass between the author looking at these examples and writing a solution. We don't demand to know the books or papers the author might have read that contained example code for similar problems. Demanding disclosure here is a reach into a private space, and so is an overreach.

**Member (author):** I agree. I've moved this to the "Allowed" section.

**Reply:** This just feels like you failed to take in the rest of the policy and decided to remove a large part of it because you disagree with it. The entire policy effectively clarifies what it considers valid "rewording" of LLM output and what isn't, and so adding a tiny bullet point at the beginning that overrides that and says it's all allowed with no caveats just undermines all that.

**Contributor:** There are no valid rewordings under this policy. It fully bans all contributions that were "originally authored" by LLMs other than those by approved bots, those under the recently added experimental process, machine translation, and trivial code changes — i.e., the other things under the "Allowed with caveats" section. The odd one out is this one. The others represent things "originally authored" by an LLM, as the policy defines it. This one is not that. The policy requires that a solution be rewritten from scratch — i.e., it cannot be "originally authored" by an LLM.

**Reply:** So, I guess I wasn't 100% correct on the reasoning here, but the effect is the same: you appear to be failing to take in the policy and deciding to remove a decent portion because you disagree with it. The point here is that the policy is intentionally conservative when it comes to LLM usage: anything where the LLM could have potentially influenced the output in ways you didn't directly control is included. This is one of those obvious cases. It's like asking a friend to copy their homework; even if you just read a friend's paper and then rewrote your own, it's still fundamentally different than writing your own paper. Similar examples exist in code, too: there have been multiple cases where code has been leaked and reverse engineers have explicitly refused to read it for legal reasons; simply knowing what was done taints the idea of a "clean-room" implementation. This is an obvious example of that. You can't say that this is "too private"; if we decide that LLMs are our business and you've decided to contribute, you should let us know, just like how you should let us know if you copy-paste code verbatim from another project. Your alternative is to not contribute code, not to hide its source.

**Reply:** The issue isn't whether it's copying, though, or whether you learned. The issue is whether an LLM was involved at all. It's a specific case where many people underestimate the effect the tools have on the end result, and so we ask for disclosure in general to avoid having to litigate the very specific circumstances of what happened to determine whether someone needs to disclose or not. It feels like the main issue, which you're explicitly not pointing out, is that you're concerned that people would be judged based upon whether an LLM was involved at all, and therefore LLM usage should be kept as a "dirty pleasure" in this particular instance. I don't think that it's worth diluting the policy or creating confusion just because people are specifically unwilling to admit they used a particular tool.

**Contributor:** My concern here is that it's inconsistent and an overreach. This would be more consistently placed with the learning rules than with the original-creation rules. It's an overreach in the same way that it'd be an overreach to require disclosure for the other learning rules. The concerns about disclosure eroding trust in the Project are separate, and I've articulated them elsewhere.

**Reply:** You seem to be intentionally ignoring all of the arguments where LLMs are a very specific case that cannot be generalised. Sure, in a perfectly frictionless vacuum, we would not be asking for disclosure in this particular case. But for a number of different reasons, we're asking for this. Let me come up with an equally hypothetical and unrealistic scenario: imagine if, five years ago, a massive campaign had been under way to fill StackOverflow with subtly wrong information to sabotage developers, in addition to all the correct information. It would be reasonable to ask all developers to disclose if they had used information on StackOverflow to develop a change in those very specific circumstances. This isn't "overreach"; it's a pragmatic desire to correct for potential issues, and simply disclosing that something was involved is not a massive privacy concern. Of course, I know exactly why people would be uncomfortable disclosing in this scenario, and it's because they don't want to be judged for making a potentially unethical decision by using these tools, although since that line of thought is banned from discussion in this RFC, I will both refrain from making it a part of my argument and insist that you refrain from making it a part of yours. Simply put, I think that if people wish not to disclose because of its relation to (forbidden topic), then that is an argument that is not suitable for this policy. If you disagree, then you can take a look at my RFC, which explicitly reduces LLM usage even further because of the presence of that argument.

**Member (author):** @clarfonthey regardless of the policy issues: please do not accuse TC of arguing in bad faith. It feels uncomfortably close to bullying them because you don't like their opinions.

#### ❌ Banned

The following are banned.

- Comments from a personal user account that are originally authored by an LLM.
  - ℹ️ This also applies to issue bodies and PR descriptions.
  - ℹ️ See also "machine-translation" in ⚠️ below.

**Contributor** (on lines +38 to +41): Most of the work lang members do, as a team, is reviewing language proposals made to us in r-l/r. We tend to care about policies for how lang-related issue and PR descriptions are put forward, e.g., the stabilization report template. Though we are not ourselves large contributors of code to r-l/r, we are in the set of maintainers of r-l/r — at least, that's how I see it. So it surprised me a bit, given that this document sets policy for what's allowed for people making lang proposals to us in r-l/r (e.g., by prohibiting LLM-assisted drafting), that we weren't included on this FCP, though it had been earlier discussed. I don't know what to do about that. I don't really want to ask here, given the hassle of the FCP being restarted. And yet, given what the policy covers, it seems awkward to me that we're not on it — as though I'm commenting here with needs and interests as an outsider. That's how it feels, anyway. Is there anything we can do about this? Maybe the scope can be narrowed so that it doesn't set policy for lang proposals or documentation items lang owns? Maybe something else? I don't know. @jyn514, what do you think?

**Member (author):** I'm happy to narrow the scope so this excludes lang proposals and stabilization reports, so that t-lang can set their own policy. Are there other things in t-lang's purview you would like to see excluded?

- Documentation that is originally authored by an LLM.

**Contributor** (suggested change): Without changing the semantics, could we search and replace all uses of "authored by an LLM" with "generated by an LLM"? Pulling the idea of authorship into this policy seems unnecessarily philosophical for what it's trying to accomplish. The technology is called "generative AI". It'd seem more clear, to me, to stick with "generated".

**Reply:** jyn specifically clarified authorship as a term here to be specific, so you can't just change one section and be consistent. It would require at least structural changes to the policy.

**Contributor:** As best I can tell from a careful review, it requires a search-and-replace and then some minor redrafting of the section on the meaning of "originally authored". (Of course, there are many ways to redraft it; this is just one way.)

**Reply:** Right, I think it would be fair to change "originally authored" to "originally generated"; it would just have to be a structural change affecting multiple places and not only the one line. For what it's worth, I think the distinction is pedantic, but I will defer to jyn on what the better wording is. I assume that "authored" was chosen intentionally.

  - ℹ️ This includes non-trivial source comments, such as doc-comments or multiple paragraphs of non-doc-comments.
  - ℹ️ This includes compiler diagnostics.

**Member** (suggested change): Reordering this to make it clear first and foremost that "Documentation" includes any doc comments, moving "non-trivial source comments" second. This also drops the quantitative "multiple paragraphs"; some multi-paragraph comments may be trivial, and some one-sentence comments may not be.

**Member (author):** If you are using an LLM to write a multi-paragraph comment that is trivial, IMO that should also be banned. If you have a load-bearing single-line comment, I think that falls under "code changes authored by an LLM", although I'm not sure how to say that concisely.

- Code changes that are originally authored by an LLM.
  - ℹ️ This does not include "trivial" changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html), which fall under ⚠️ below.
  - ℹ️ Be cautious about PRs that consist solely of trivial changes.
    See also [the compiler team's typo fix policy](https://rustc-dev-guide.rust-lang.org/contributing.html#writing-documentation:~:text=Please%20notice%20that%20we%20don%E2%80%99t%20accept%20typography%2Fspellcheck%20fixes%20to%20internal%20documentation).
  - See also "learning from an LLM's solution" in ⚠️ below.

**Member:** This feels overly restrictive in the current wording, in a way that I'm not comfortable leaving unraised as a compiler team member. There is some nuance here that this doesn't capture and I think should. Certainly, in general I'm happy to ban "unsolicited" code that is LLM-generated, but I think that an outright ban on all "non-trivial" LLM-generated code is too strong. I'd like to see LLM-generated code allowed under some strong caveats. I personally think this is a pretty reasonable space to carve out for "experimentation": it doesn't subject reviewers who don't want to review LLM-generated code to unwanted reviews, it helps to ensure that code stays high-quality, and it limits the fallout of any "mistakes" in the process.

**Reply:** "The code is well-tested" is another valuable caveat to add here. Requiring this is much less onerous in the context of LLM-assisted code.

**Member:** I like it. I think it's a standard we want to hold for all contributions, but it doesn't always get met. It's a nice position to have here.

**Member:** I'd quite like to see an explicit carve-out for teams or even individuals to do some experimentation - in specific areas or with specific maintainers - that wouldn't affect maintainers who aren't interested in participating. Teams would obviously need to decide if they wanted to have such an experiment, but it would be useful input to any future revisions - e.g. "hey, we tried this in a controlled environment over here and we actually found it useful and helpful, maybe we could consider relaxing this point", etc.

- Treating an LLM review as a sufficient condition to merge or reject a change.
  LLM reviews, if enabled by a team, **must** be advisory-only.
  Teams can have a policy that code can be merged without review, and they can have a policy that code must be reviewed by at least one person,
  but they may not have a policy that an LLM review substitutes for a human review.
  - ℹ️ See "review bots" in ⚠️ below.
  - ℹ️ An LLM review does not substitute for self-review. Authors are expected to review their own code before posting and after each change.

**Member:** Given that this is limited to rust-lang/rust, it would probably be better to just restrict to no LLM reviews.

**Member (author):** I actually really want to keep allowing LLM reviews. I think they're low-risk and give people a chance to see whether the bot catches real issues.

#### ⚠️ Allowed with caveats

The following are decided on a case-by-case basis.
In general, new contributors will be scrutinized more heavily than existing contributors,
since they haven't yet established trust with their reviewers.

- Using an LLM to generate a solution to an issue, learning from its solution, and then rewriting it from scratch in your own style.

**Member:** Of course, see my comment on the "Code changes that are originally authored by an LLM" ban, but I do like laying out this "less-restrictive" point explicitly. I would move the "asking for details about how you generated the solution" to under this point, but modify it heavily. Rather than stating something like "we need to know exactly what you said to the LLM and what model you used", I think a better approach is saying something like "You should be prepared to share the details of the direction you gave to the LLM. These may include general prompts or design documents/constraints." I'm not sure that sharing the exact prompts or output, or the exact model, does anything. What's the reasoning? I'm much more interested in what direction the author intended to take. If the idea is to be able to "recreate" or "oversee" what the author did, that's just never going to work. This isn't something we can reasonably expect reviewers at large to do. Rather, if anything, this is something that I could see in a more mentor/mentee relationship. If it's ever at the point that a "random" reviewer wanted or needed to see this, then the PR likely just needs to be closed and further discussion should happen elsewhere before continuing.

- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
  Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
  - ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- Using an LLM as a "review bot" for PRs.

**Member:** Maybe I'm OOTL, but I find this section situationally strange — where did the "review bot" come from? IME, AI-powered review bots that directly participate in PR discussions (especially the "app" ones) are configured by the repository owner, but AFAIK r-l/r (which this policy applies solely to) did not have any such bots. I highly doubt a contributor will bring in their own review bot in public.

**Member:** I wish it worked like that :( People can just trigger GitHub Copilot, or I suppose any other review bot, and let it comment on a r-l/r PR. Some people don't even do it willingly: GH does it automatically for them, as GH Copilot has a tendency to re-enable itself even if you disable it. It is also not possible to opt out of the PR author requesting a Copilot review, if I remember correctly.

**Reply:** I've seen this behavior elsewhere on GitHub, where contributors effectively use a personal account as a kind of "review bot" to comment on PRs without approval from maintainers.

**Member:** Yeah, currently disabling review is a personal/license-owner setting; it is not possible to configure from the repository PoV 😞 but I think this is something that we may bring up to GitHub. It may be possible to use content exclusion to blind Copilot, but I'm not sure if this hack is going to produce any overreaching effects (e.g. affecting private IDE usage too).

**Contributor:** I think this is exactly the point of calling that out in our policy. Some people trigger a "[at]copilot review" in our repos without asking us for consent. This is rude behaviour and we don't want that. And, yes, as you point out, opting out of this "trigger" is currently only a project-wide setting, not a repository-level one, so we are looking with GitHub at whether they could make this setting more fine-grained (there is a discussion with the Infra team on Zulip).

**Member (author):** @clarfonthey I understand you are frustrated, but it doesn't help to take it out on the people we're working with. Can I ask you to take a break from commenting on this RFC for a bit? Feel free to DM me with any concerns you have about the policy itself.

**Reply:** Yeah, you're right; I deleted the comment.

**Reply:** Unsolicited review bots are becoming an increasing problem; for example: https://web.archive.org/web/20260426133344/https://github.com/rust-lang/rust-clippy/issues/16893#issuecomment-4321880160

**Reply:** Thank you for flagging, xtqqczze - the same bot has commented in 6+ issues on the rust-clippy repo, and in my case was giving unsolicited advice in a completely derailing direction (solving a specific case I obviously already worked around, rather than the general case; rust-lang/rust-clippy#16901 (comment)).

**Member:** @xtqqczze both rust-lang/rust-clippy#16893 and rust-lang/rust-clippy#16901 are issues, not PRs.

  - ℹ️ Review bots **must** have a separate GitHub account that marks them as an LLM.
    You **must not** post (or allow a tool to post) LLM reviews verbatim on your personal account unless clearly quoted with your own personal interpretation of the bot's analysis.
  - ℹ️ Review bot accounts must be blockable by individual users via the standard GitHub user-blocking mechanism; a sketch of the API call follows after this list. (Note that some GitHub "app" accounts post comments that look like users but cannot be blocked.)
  - ℹ️ Review bots that post without being approved by a maintainer will be banned.

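For reference, the "standard GitHub user-blocking mechanism" named above is available through the web UI and also through GitHub's REST API; a minimal sketch in Python, assuming a token permitted to manage the user's block list and a hypothetical bot account name:

```python
# Sketch: block a review-bot account via GitHub's REST API.
# "example-review-bot" is a hypothetical account name; GITHUB_TOKEN is
# assumed to be a token allowed to manage the authenticated user's blocks.
import os

import requests

resp = requests.put(
    "https://api.github.com/user/blocks/example-review-bot",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```
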
**Reply:** I'm concerned this leaves room for reviewers to trigger a review bot without the consent of the PR author, which could alienate the PR author. If I opened a PR and it got reviewed by an LLM bot, I would probably close the PR and never try contributing to the project again. I've seen this happen in another project. I think there should be an agreement between the reviewer and the PR author before triggering a review bot.

**Member:** "Approved by a maintainer" is the key point here. If an LLM review bot is "approved by a maintainer", that is a public decision and should be mentioned in CONTRIBUTING.md, and that's the agreement.

**Reply:** An agreement among maintainers to impose LLM review bots on nonconsenting contributors would drive those contributors away.

**Reply:** If a reviewer really wants to use an LLM to review, they could run that LLM on their own, filter through the output to determine what is actually relevant and correct, and post in their own words about the identified problems. That doesn't require bothering a nonconsenting PR author with LLM output.

**Member:** Rephrasing LLM output is already addressed in lines 67-68. The premise of this whole section is that a bot (as a separate account, line 69) can be officially approved. If you think that a review bot account should not be allowed, even if approved by maintainers, this whole thread would be more relevant on the parent item (line 66; I've commented about this before). P.S. I don't think this policy implies any LLM review bot account will be allowed "right now" or "soon"; I believe there must at least be an FCP.

**Reply:** Thinking about this further, that seems like an overall better process than having a review bot comment on a PR. There's no room for ambiguity about whether a PR author is responsible for responding to LLM output; only the reviewer who decides to use an LLM is in a position to interpret the LLM output, because "Comments from a personal user account that are originally authored by an LLM" are explicitly forbidden.

  - ℹ️ If a more reliable tool, such as a linter or formatter, already exists for the language you're writing, we strongly suggest using that tool instead of or in addition to the LLM.
  - ℹ️ Configure LLM review tools to reduce false positives and excessive focus on trivialities, as these are common, exhausting failure modes.
  - ℹ️ LLM comments **must not** be blocking; reviewers must indicate which comments they want addressed. It's ok to require a *response* to each comment, but the response can be "the bot's wrong here".

**Reply:** I don't think it's okay to require PR authors to have to say "the bot's wrong here"; the onus should be on whoever triggers the bot to determine whether there's any validity to what the bot posted.

**Member:** I don't see how line 73 disagrees with this. The statement "It's ok to require a response" refers to the reviewer requiring a response from the author to address the bot comment, not from the bot itself. The previous statement, "reviewers must indicate which comments they want addressed", also suggests that the reviewer has taken on the "onus" of the bot comment. In this scenario I don't find requiring the PR author to say "the bot's wrong here" to dismiss the comment unfair to the author; in fact, having that second step, "reviewers must indicate which comments they want addressed", means the PR author is in fact rejecting the combined analysis of the bot and the reviewer, so I'd say this is more biased against reviewers.

**Reply:** The current wording is a bit ambiguous and could conceivably be interpreted to mean that "it's okay to require a response" implicitly. I would like to see this clarified to say explicitly that a bot's comment only needs to be responded to if a reviewer explicitly indicates that.

  - In other words, reviewers must explicitly endorse an LLM comment before blocking a PR. They are responsible for their own analysis of the LLM's comment and cannot treat it as a CI failure.
  - ℹ️ This does not apply to private use of an LLM for reviews; see ✅ above.

For all of these, you **must** disclose that an LLM was used.

## Appendix

### Motivation and guiding principles

There is not a consensus within the Rust project—and likely never will be—about when/how/where it is acceptable to use AI-based tools.
Many members of the Rust project and community find value in AI;
many others feel that its negative impacts on society and the climate are severe enough that no use is acceptable.
Still others are working out their opinion.

Despite these differences, there are many common goals we all share:

- Building a community of deep experts in our collective projects.
- Building an inclusive community where all feel welcome and respected.

To achieve those goals, this policy is designed with the following points in mind:

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- LLMs are a new technology, and we are still learning how to use, moderate, and improve them.
  Since we're still learning, we have chosen an intentionally conservative policy that lets us maintain the standard of quality that Rust is known for.

### Moderation policy

#### It's not your job to play detective

["The optimal amount of fraud is not zero"](https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/).
Don't try to be the police for whether someone has used an LLM.
If it's clear they've broken the rules, point them to this policy; if it's borderline, report it to the mods and move on.

**Contributor** (suggested change): To an outside reader, reporting something to the mods is going to sound like an escalation against the contributor. Given that, it reads a bit backward that clear violations should only result in the policy being cited while borderline violations require a report to the mods. There are many ways this could be redrafted to avoid this. What I'd suggest is leaving whether to report to the mods to the discretion of the reviewer in all cases, and removing the conditional on whether the case is borderline or clear.

**Reply:** The mods have openly encouraged people to report non-violations so they can keep an eye on things. I don't think that we should be reinforcing the stereotype that communication with the mods is inherently accusatory; we should be breaking that assumption instead.

|
|
||||||||||||||||||||||||||||||||||||||||||||||
#### Be honest

Conversely, lying about whether or how you've used an LLM is considered a [code of conduct](https://rust-lang.org/policies/code-of-conduct/) violation.
(An honest mistake is not a lie, and it is always safe to err on the side of over-disclosing.)

If you are not sure where something you would like to do falls under this policy, please talk to the [moderation team](mailto:rust-mods@rust-lang.org).
Don't try to hide it.

#### Penalties

The policies marked with a 🔨 follow the same guidelines as the code of conduct:
Violations will first result in a warning, and repeated violations may result in a ban.

- 🔨 Violations of the "Be honest" section

Other violations are left up to the discretion of reviewers and moderators.
For most first-time violations, we recommend closing and locking the PR or issue.

A contributor's use of an LLM does **not** make it acceptable to harass them.
All contributors must be treated with respect.
The code of conduct applies to *all* conversations in the Rust project.

### Responsibility

Your contributions are your responsibility; you cannot shift the blame onto an LLM.
- ℹ️ This applies when asking people to address review comments originally authored by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally authored"

This document uses the phrase "originally authored" to mean text that was generated by an LLM (and then possibly edited by a human).

For the purposes of this policy, no amount of editing changes who originally authored a piece of text: the original author sets the initial style and structure, and that is very hard to change once set.

For more background on analogous reasoning, see ["What Colour are your bits?"](https://ansuz.sooke.bc.ca/entry/23).

### Non-exhaustive policy

This policy does not aim to be exhaustive.
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Uses that do not involve LLMs for creation and do not show LLM output to another human are likely allowed ✅
- Uses that involve LLMs for creation or show LLM output to another human are likely disallowed ❌

### Conditions for modification or dissolution

This policy is not set in stone, and we can evolve it as we gain more experience working with LLMs.

Minor changes, such as typo fixes, only require a normal PR approval.
Major changes, such as adding a new rule or removing an existing one, require
a simple majority of the members of teams using rust-lang/rust, with no unresolved concerns.

This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
- An objective, evidence-backed concern that the policy is actively harming Rust's reputation, as decided by a Leadership Council FCP.