# doc: add policy on LLM-generated contributions #62447
**CONTRIBUTING.md**

```diff
@@ -19,6 +19,7 @@ works.
 * [Code of Conduct](#code-of-conduct)
 * [Issues](#issues)
 * [Pull Requests](#pull-requests)
+* [Policy on LLM-generated contributions](#policy-on-llm-generated-contributions)
 * [Developer's Certificate of Origin 1.1](#developers-certificate-of-origin-11)

 ## [Code of Conduct](./doc/contributing/code-of-conduct.md)

@@ -47,6 +48,13 @@ dependencies, and tools contained in the `nodejs/node` repository.
 * [Reviewing Pull Requests](./doc/contributing/pull-requests.md#reviewing-pull-requests)
 * [Notes](./doc/contributing/pull-requests.md#notes)

+## [Policy on LLM-generated contributions](./doc/contributing/ai-contributions.md)
+
+Do not submit commits containing content written in whole or in part by a
+large language model (LLM) or AI code-generation tool.
```
**Member:** The entire clause here is unenforceable. As models improve, how are you going to be able to tell the difference? All this does is provide an incentive for contributors to lie.

**Member:** It doesn't have to be enforceable, and this policy explicitly says so.
````diff
+See the [full policy on LLM-generated contributions](./doc/contributing/ai-contributions.md).
+
 ## Developer's Certificate of Origin 1.1

 ```text
````
**doc/contributing/ai-contributions.md** (new file)
# Policy on LLM-generated contributions

* [Policy](#policy)
* [Scope](#scope)
* [Enforcement](#enforcement)
* [Rationale](#rationale)

## Policy

Do not submit commits containing code, documentation, or other content
written in whole or in part by a large language model (LLM), AI
code-generation tool, or similar technology. This includes cases where an
**Member:** "or similar technology" is fairly vague. For example, would this apply to our automation to rewrite

**Member (Author):** James also pointed this out. I'll reword this so that non-AI automations are explicitly excluded.

**Member:** I see the distinction in reproducibility of changes. If the change was partly or fully automated, the source code for the automation should be mentioned in the PR.

**Member:** What about explicitly defining allowed automation as that for which the source machinery can be provided in whole, to inspect and understand its behaviour? This would cover cases like scripted automation, whether through files we've included in the repo itself or modules we depend on.
LLM produced a draft that the contributor then edited. All authorship of
submitted changes must be human.
## Scope

This applies to content that lands in the repository: source code,
documentation, tests, tooling, and other files submitted via pull requests.

It does not apply to:

* Pull request descriptions, review comments, issue discussion, or other
  communication that is not part of the committed tree. Those are covered by
  general expectations around good-faith participation and the
**Member:** This is contradictory. If an AI agent reviews a PR and makes an alternative code suggestion, and the PR author accepts that suggestion, then this policy is violated.

**Member (Author):** I don't think that makes it contradictory. If an agent makes the suggestion, and the suggestion is accepted, then it becomes part of the committed tree, doesn't it?

**Member:** So the policy would be: you can use an agent to review code, but you can't accept any of its suggestions?

**Member (Author):** If they are strictly "please include this code" suggestions, via GitHub's mechanism for this or otherwise, then yes, that would be the case. I admit this is quite weird :/ The intent here was to provide some amount of compromise. It backfired into this corner, but I think it is consistent with the notion that committed code in the actual source tree is what's covered by the policy. So, weird as it is, I'd keep it this way (unless folks don't need the compromise!).
  [Code of Conduct][].
*Comment on lines +23 to +26:*

**Contributor:** Note that it somewhat conflicts with nodejs/admin@8b746bc, which would be worth revisiting if this PR lands as is.

**Member (Author):** Hmm, that also somewhat conflicts with #62105 (and indeed, anything that seeks to solidify allowing LLM-generated PRs), so that moderation policy would have to change in either case. That said, removing this exception would put it in line with that moderation policy, I think. I'm not sure of the best approach here yet.
* Vendored dependencies and other vendored content (e.g. code under `deps/`
  and vendored code under `tools/`). That content is maintained upstream
  under its own governance.
* AI-powered accessibility tools like screen readers or text-to-speech
  software, provided they do not influence the content of the contribution.
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. It makes no sense to add this here as screen readers and text-to-speech software do not write code contributions.
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Well, speech-to-text can. I know someone that does that, actually.
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. Which raises another interesting case here: use of AI as disability-assistive device. This would make this policy completely unenforceable.
Member
Author
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. I'm not sure I understand how this would impact the enforceability of the policy. If a contributor uses AI tools for accessibility reasons, and discloses this in their PR (or in some blanket statement clearly visible elsewhere), then they wouldn't be in violation. If they fail to disclose ahead of time, and the PR is contested under this policy, then they disclose, then there's still no problem. I just realized that you may mean that providing for this exception provides an additional avenue for abuse of the policy, since folks can simply state that they're using AI tools for accessibility reasons. I think this is the same category of (un-)enforceability as "they can just lie", so, still, I don't think this adds or subtracts any amount of enforceability.
Member
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. No, it's not in the same category. Your assumption there is that someone saying they are using it for accessibility is equivalent to someone lying about using it, and that's not a valid equivalency. to draw that out a bit more.. The "they can just lie" category is ... they'll use it and won't tell you. The "accessibility" category is ... they'll use it and there's nothing you can do about it. But this is besides the point. The larger issue is that the policy in unenforceable because it's simply not possible to reasonably and unabiguously tell the difference.
Member
Author
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more.
Not quite. My assumption is that someone lying that they are using it for accessibility is equivalent to someone lying about not using it. You did clarify the categories and their differences though. Thanks for that 👍.
Agreed. |
## Enforcement

This policy cannot be fully enforced. There is no reliable way to determine
**Member:** Suggested change: you can strike the word "fully" here. It can't be enforced, period.

**Member (Author):** I don't think that's accurate. "Enforcement" in this context simply means a policy violation can be detected and consequences can be rendered. Sure, not all cases are detectable, but certainly self-identifying cases (among others) are. "Fully" still applies because there are cases where enforcement is possible.

**Member:** And those cases become completely arbitrary. Someone can suspect a PR, regardless of whether the change is correct in every other way, based solely on a generalized fear that proprietary code might be introduced at some point. Except our existing processes already account for this. Something that is missing in all of this discussion: how are the existing policies, already in place, inadequately suited to dealing with the inherent risks in accepting arbitrary code contributions from untrusted sources?
whether code was written by a human or generated by an LLM. Detection tools
have high error rates, and generated output can be trivially edited to avoid
detection. Compliance relies on contributor good faith.
If a collaborator suspects a pull request contains LLM-generated content,
they should raise the concern in the pull request. The normal
[consensus seeking][] process applies: collaborators discuss, the author can
respond, and a good-faith determination is made. If consensus is that the
contribution violates this policy, it must not land. If consensus cannot be
reached, the matter can be escalated to the TSC.
*Comment on lines +40 to +45:*

**Member:** This sounds a bit witch-hunt-y, and also exploitable to block certain contributions someone might not like, or contributions from a person another contributor might not like. Proving a negative is not really feasible, so this could be used to keep changes stuck in an uncertain limbo forever.

**Member (Author):** These last two sentences are not particularly different from any existing policy involving consensus around a PR and escalation to the TSC. I'll adjust this to refer to the existing procedure for dealing with such disputes rather than re-hash it here.

**Member:** My concern is mostly with the first sentence. The use of "suspects" there seems to imply guilt to be judged. That's what gave me the witch-hunt-y vibes.

**Member:** And on what basis does one make this determination? Here, let me give you two examples. One of these snippets was written by me, the other by Claude: which one is acceptable and which one is not?

Option A:

```js
for (let n = 0; n < 1000; n++) {
  console.log('Am I AI generated?');
}
```

Option B:

```js
let n = 0;
while (n++ < 1000) {
  console.log('Or am I AI generated?');
}
```

What this part of the policy basically boils down to is: any contribution can be contested, and our normal consensus-seeking rules are to be followed to resolve the contention. That's no different from what we have now, except for the added incentive for a contributor to just lie about whether they used an agent to help write the code. If the policy is unenforceable, it's not useful.

**Member:** Yeah, this specific point is my main concern. If the policy is not defined in a way that is clearly measurable and identifiable, it's likely to just encourage lying about it, and the ability to contest things arbitrarily could easily be abused.

**Member (Author):** I'm gonna guess. But, yes, to your point, that's clearly not enough to go on here, and there's a reasonably good chance my guess was wrong, or even that both are AI or neither is. Obvious cases can, will, and do exist. The rest relies on good-faith participation. I think it's clear that your point is that that's insufficient, but there are other policies within Node.js core that require good-faith participation, like the DCO itself.

**Member:** Yes, and those existing policies, including the DCO, are sufficient on their own. We do not need to layer on an unenforceable policy just to put on a show that "AI bad, human good". Nothing in this document strengthens or improves our existing policies. Nothing in the debate has demonstrated that our existing policies are inadequate.

**Member:** This document guides contributors towards acceptable AI use. I don't believe we have an analogous policy of similar comprehensiveness in this repository. This debate started with a 19k-LoC LLM-generated PR that was opened and positively reviewed. If we believe that PR can be merged, our existing policies are adequate; if, on the other hand, we think that merging such a change harms Node.js in the long term and introduces risks, the policy change is warranted. Clearly the debate isn't over, but this PR is a big milestone in closing it.

**Member:** I don't know if #61764 was LLM-generated, but it looks like it was, and if it had been merged, it would have hidden a bug.

**Member:** How can you determine that with a one-line, one-word change?

**Member:** Just a guess from the PR description.

**Member:** If 19k-line PRs are generally bad... I don't see how we will ever be able to land new subsystems. Forget about QUIC, HTTP3, new streams, etc.; we can just iterate on what we already have. Or just pretend that adding it as a vendored dependency is somehow something else. I do agree that these are problematic, but I'm also confused about what the alternative is.

**Member:** James has been doing a great job landing QUIC changes and improvements iteratively. I don't think we need to prevent changes that are 20k+ lines in whole, but if they are that big, we should find ways to break them up into more digestible pieces. Feature forks have been effective in the past, as has landing pieces as internal systems which we fully expect could change entirely before we actually release any public API surface code depending on them.
When evaluating a concern:

* Stylistic resemblance to LLM output is not sufficient on its own. Some
  people write that way. Consider the contributor's history, the nature of
  the change, and whether they can engage with review feedback.
* Do not cite AI content-detection tools as evidence. They are unreliable.
* If a contributor is asked directly whether they used LLM tools and
  responds dishonestly, that undermines the good faith this policy depends
  on. This should weigh against them in any consensus determination.
## Rationale

Contributors to this project need to be able to explain their changes,
respond to review, and take responsibility for what they submit.
LLM-generated pull requests do not meet that bar.

Collaborators review every change on a volunteer basis. LLM-generated
submissions shift the verification burden onto those reviewers, with no
assurance the contributor has done that work themselves.

The copyright status of LLM output is unresolved. Training data for popular
**Member:** That's not quite accurate. Because the training of the LLM on that content is done by a different person than the one using the LLM to produce the output, it's legally a clean-room design, for which there is already legal precedent from when Phoenix Technologies and American Megatrends copied the IBM BIOS. It's possible the law could be revised to address LLMs specifically in a different way, but the current state is not an absence of definition.

**Contributor:** 🤔 I do not see the parallel you are suggesting.

**Member:** The details of how a thing works get captured as a description by one person; that description then gets relayed to another, who can now build the thing described, and it is legally not considered derivative. A bit of a wacky edge case of the legal system, but that's how it was ruled. 🤷🏻

**Member (Author):** None of us in this thread are lawyers (AFAIK), and this is the first time I'm hearing of this. If the copyright law is truly resolved at this point, then this can be dropped. Can you point to references indicating that this applies here?

**Member:** The "clean room" aspect of LLM-generated code is at the very least debatable. Since an LLM can reproduce word-for-word the content it was trained on, we could argue very different things about it.

**Member:** A human can reproduce things word-for-word from memory too. Whether a thing is or is not a "copy" in spirit could certainly be debated, but how the law presently addresses these cases is with this clean-room case as precedent. Until that changes, from a purely legal perspective, such code is not considered a copy. I am, of course, not a lawyer, but I have seen multiple actual legal cases now citing this clean-room case successfully. This is likely one of the many factors the LF legal team considered when reviewing the current legality of LLM-based contributions and concluding it was acceptable. So my point here is just that legality, at least at present, is a non-issue. Presenting it as justification not to allow such contributions is a weak argument. We should focus on clarifying the stronger cases against LLM-assisted contribution.

**Contributor:** Both the EU and the US have (so far) said that purely AI-derived output is not copyrightable. Chad Whitacre wrote a great article covering the licensing/copyright issues, with references to the rulings. This means that without enough human input, the work is not copyrightable and therefore becomes public domain. Without copyright, the license is not enforceable. So we'll have to wait for court cases to learn where this line is...
models includes material under licensing terms that may not be compatible
with the Node.js license. Contributors certify their submissions under the
[Developer's Certificate of Origin 1.1][DCO], which requires that the
contribution was created by them and that they have the right to submit it.
**Member:** This is misstating what the DCO says. There are three clauses separated by an OR statement. You're focusing solely on the first clause and ignoring the "or". You're also assuming the person using the tool has not performed their own thorough review of the output. I may be in the minority, but I read every line generated by the AI agents I use. This also ignores the fact that the Linux Foundation legal team has established that AI contributions do not violate the DCO.

**Member (Author):** Is the Node.js project not free to have its own interpretation here? If it's not free to do so, I can remove all the language about the DCO; there are other rationales. If it is free to do so, then it makes sense to respond to the rest of what you've said: the language I put here does focus solely on clause (a), but I'm confused as to how (b) applies if the LLM did not provide the code to the contributor under an open source license. The LLM is not even an entity that can enter into any contract or hold copyright at all, so I'm not sure how either would apply.

**Member:** Only people can attest to the DCO. Nothing in any of this absolves the person opening the PR from responsibility to ensure everything in the PR is appropriate to contribute. Your exact concern would apply to me hiring a contractor to write the code on my behalf: if I'm opening the PR, it becomes my responsibility, regardless of how the contribution was written. That's what the foundation legal team established. Is the project free to have its own interpretation? Sure. But I'm not sure why we would.

**Member (Author):** I'm having trouble understanding this analogy. A contractor is (by definition) entering into a contract with you (which would presumably fall under (b)). An LLM can't do that (as you said). Or is it more because of (a), since you can say you authored it "in part" and you're asserting the right to submit it under the license regardless of its origin (i.e. the "it's a tool, no different from vim or a template generator" kind of argument)? That seems more plausible to me.

**Member (Author):** Alright, I re-read your blog post on this. The contractor analogy hinges on the notion that you (not the contractor, in this analogy) are the copyright holder. While that covers the "right to submit" portion of (a), it doesn't cover "created in whole or in part by me". But that's okay, since "tool use" basically covers it for LLMs/AI. It just leaves me with more questions about contractors, but that's irrelevant to this conversation. Now, whether you are the copyright holder in the LLM scenario is of course potentially suspect, but as you and @Qard have pointed out in this PR, LLMs rarely produce truly copyright-infringing material (despite how they themselves were created), and when they do, the usual copyright rules apply and no additional rules or policy are needed to cover that. In short, it took me a while to get there, and I'm sorry about that, but I agree that the DCO is not inherently violated by LLM/AI-created contributions (though, as with any other PR, it could be violated through other means). It shall be stricken from this PR when I get the chance.

**Member:** It's fair to note that while it becomes your responsibility, the assumption is that you indeed have a legal arrangement with said contractor. It's impossible to have such an arrangement with an LLM, so I don't think your analogy holds. LF Legal's position here, while untested in court (and thus neither inherently correct nor incorrect), is that this is fine for the DCO, but that doesn't mean it's the same as with a contractor, which is something that has very much been tested in court.

**Member:** From the DCO's perspective they are equivalent. Look at the second clause again. Notice that it doesn't say anything about legal arrangements. It doesn't say anything about how the work was created, under what conditions it was created, etc. It just says that, "to the best of my knowledge", it's "covered under an appropriate ... license" and I have the right to submit it, whether created "in whole or in part by me".

**Member:** Well, to the best of my knowledge, LLM output was not created under any open source license. It is parts and pieces of proprietary code leaked online, GPL-licensed code, code without an explicit license, etc.
It is not clear how anyone can honestly certify that for LLM output.

[Code of Conduct]: https://github.com/nodejs/admin/blob/HEAD/CODE_OF_CONDUCT.md
[DCO]: ../../CONTRIBUTING.md#developers-certificate-of-origin-11
[consensus seeking]: ../../GOVERNANCE.md#consensus-seeking-process
**Comment:** I think this effectively means we cannot land more V8 updates. For example, #61898 contains https://chromium-review.googlesource.com/c/v8/v8/+/7269587 and a bunch of other code written by Gemini (you can search the V8 commit log to find a lot more).

**Comment:** Note that deps are an explicit exception (see `doc/contributing/ai-contributions.md`, lines 27 to 29 in 7178b64).

**Comment:** I see. Perhaps that needs to be clearer, and the TL;DR should mention that there is a scope, considering probably ~90% of the code in the codebase is vendored in.

**Comment:** I'll see what I can do to clarify this.

**Comment:** If deps are an exception, isn't that a somewhat simple loophole? Just include the code as a dependency instead of as part of Node core. I'm not sure what we are solving then...