Designing a uniformity opt-out #3554

Closed
alan-baker opened this issue Oct 27, 2022 · 28 comments · Fixed by #3713
Labels
wgsl WebGPU Shading Language Issues
Milestone

Comments

@alan-baker
Contributor

Split from #3479

This issue is for discussion of a design for a uniformity opt-out mechanism.

There have been three proposals (to my knowledge):

  1. An attribute (e.g. @uniform).
  2. A new block statement (e.g. unchecked_uniformity { ... }).
  3. Variants of built-in functions (e.g. uniform_textureSample(...)).
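
For illustration only, the three options might be spelled roughly as follows. All names here are placeholders from the discussion rather than spec, and the t, s, and uv texture/sampler/coordinate bindings are assumed:

// 1. An attribute (hypothetical @uniform) on a declaration such as a fragment input:
@uniform @location(0) f: f32,

// 2. A new block statement (hypothetical) wrapping the sensitive call:
unchecked_uniformity {
  let c = textureSample(t, s, uv);
}

// 3. A separately named variant of the built-in (hypothetical):
let c = uniform_textureSample(t, s, uv);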

CC @jimblandy

@alan-baker alan-baker added the wgsl WebGPU Shading Language Issues label Oct 27, 2022
@alan-baker alan-baker added this to the V1.0 milestone Oct 27, 2022
@brendan-duncan

For Unity shaders, the important part of the opt-out design is that shader translators like Tint or Naga can generate the opt-outs, as we will not be able to author the source shaders with the extra information.

@krogovin

krogovin commented Nov 1, 2022

Looking at the WGSL 2022-10-25 Minutes, the discussion starts with trying to make the escape hatch as local as possible, and then returns to marking blocks as unsafe-like. In contrast, just having unsafe function versions maximizes locality.

What is the motivation for marking blocks of code as unsafe-like instead of just having unsafe functions? I do not see any benefit at all to marking blocks of code over having unsafe function variants.

@alan-baker
Contributor Author

We should also consider how uniformity might evolve in the future when designing this feature. Along those lines, two future directions come to mind:

  1. Additional scopes
  2. Descriptor arrays

Descriptor arrays in Vulkan require dynamically uniform access by default. There is a non-uniform capability, but it leads to less efficient code, so being able to indicate that type of access is likely useful.

Additional scopes are more future looking. I believe this will be useful for future features like subgroups, but might rely on underlying language extensions.

These two features make me lean towards using an attribute over new builtin names. It would be awkward to have builtins for each scope, and builtins wouldn't work at all for descriptor array access.

I personally prefer either attributes or builtins over a new statement type. Attributes have the extra advantage of being able to be attached to more things than statements. For example, we could attach an attribute to a function parameter without having the statement opt-out cover a whole function.

@dneto0
Contributor

dneto0 commented Nov 1, 2022

For example, we could attach an attribute to a function parameter without having the statement opt-out cover a whole function.

I think it's worth fleshing out a critical-user-journey of how this is used. For example, the "programmer knows better, imprecise analysis at fault" case:

  1. Developer writes shader code, and feeds it to CreateShaderModule
  2. CreateShaderModule fails with a message. The diagnostic should point to root cause: which value is non-uniform.
  3. The programmer, knowing better, annotates the value to indicate it's actually a uniform value but that the analysis is too weak to know it.
  4. Developer feeds the shader module to CreateShaderModule and CreateShaderModule succeeds.

Concretely, this is the original program:

@group(0) @binding(0) var t: texture_2d<f32>;
@group(0) @binding(1) var s: sampler;

@fragment
fn main(
   @builtin(position) p: vec4<f32>,
   @location(0) f: f32
) {
  if (f<0) {
    let c = textureSample(t, s, p.xy);
  }
}

Compiling with Tint, we get:

$ tint a.wgsl
a.wgsl:10:13 error: 'textureSample' must only be called from uniform control flow
    let c = textureSample(t, s, p.xy);
            ^^^^^^^^^^^^^

a.wgsl:9:3 note: control flow depends on non-uniform value
  if (f<0) {
  ^^

a.wgsl:9:7 note: reading from user-defined input 'f' may result in a non-uniform value
  if (f<0) {
      ^

So in step 3 the programmer satisfies the uniformity analysis, e.g. by declaring that they know that the f value is actually uniform:

@group(0) @binding(0) var t: texture_2d<f32>;
@group(0) @binding(1) var s: sampler;

@fragment
fn main(
   @builtin(position) p: vec4<f32>,
   @uniform @location(0) f: f32  // Programmer says they know the 'f' value is uniform.
) {
  if (f<0) {
    let c = textureSample(t, s, p.xy);
  }
}

Benefits:

  • Cooperation/interaction between tooling and developer. The analysis acts as a guardrail that teaches but allows the bypass
  • Using an attribute like this captures programmer intent: "I know better than the analysis can know, that the 'f' value is uniform".
    • For this case, the attribute is better than options 2 and 3, which "just" silence the analysis.

Another interesting case: "programmer knows better, accepts tradeoff in quality due to the incorrectly computed implicit derivative". In this scenario "just" silencing the analysis might be a better match for programmer intent. It would actually be a lie to assert in the source code that a value is uniform when the programmer knows it isn't uniform.

@dneto0
Contributor

dneto0 commented Nov 1, 2022

Where could you put an attribute? We have a lot of choices in the design space:

  • on a formal parameter, as above: Asserts the value is always uniform
  • on a let: Asserts the value is always uniform
  • on a var: Asserts the value stored in the memory locations is always uniform, regardless of what initializations and assignments occur
  • could also be on an expression. I'd suggest limiting it to a singular expression
    • This could include a parenthesized expression. So if/when we get differently-scoped uniformity, we could use two attribute names to avoid the parsing ambiguity when you reach a parenthesis: @uniform (without params) and a differently named thing like @uniform_for(subgroup)

I'm reminded again that SPIR-V has Uniform and UniformId, which very closely mirror this attribute option.
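
To make those placements concrete, here is a rough sketch building on the fragment-shader example above (the @uniform and @uniform_for spellings are hypothetical, attributes on expressions are not valid WGSL today, and the t and s bindings are assumed as before):

@fragment
fn main(
   @builtin(position) p: vec4<f32>,
   @uniform @location(0) f: f32            // on a formal parameter
) {
  @uniform let cutoff = f * 0.5;           // on a let

  if @uniform (f < cutoff) {               // on a parenthesized expression
    let c = textureSample(t, s, p.xy);
  }

  if @uniform_for(subgroup) (f < cutoff) { // scoped variant, per the note above
    let d = textureSample(t, s, p.xy);
  }
}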

@tex3d

tex3d commented Nov 2, 2022

I like the attribute approach described here by @dneto0. I also like the ability to supply the uniform scope (like group, subgroup, quad) with something like @uniform_for(subgroup). I think that scope is essential to describe the different degrees of uniformity necessary for different operations such as the screen-space gradients in fragment shaders. Of course, this scope impacts uniform analysis, so it would have to be updated to independently track the different uniform scopes.

To build on it, I think a name like @assume_uniform or @assert_uniform could be a better fit. You really are trying to promise that the value is uniform for analysis purposes. What @assert_uniform implies is that some debug-mode runtime could potentially catch a violated assumption by adding checks. The potential for debug-mode checks, and the implication that you're lying if you violate the assumption or assertion implied by the attribute's wording, psychologically discourage blindly applying it any time you get an analysis error (unlike a function/block/built-in variant approach).

I don't think we should be making it normal to call functions that require a level of uniformity that isn't actually present, and to just rely on implementations to have some predictable fallback behavior when requirements are violated. If someone is doing that, they should fix the shader code.

Regarding this debate between variable/expression attributes vs. function, block, or built-in variant based opt-out, I think the variable/expression attribute fits into the uniformity analysis story most cleanly, and potentially provides many benefits down the line. I think that provably uniform can be tracked separately from user-asserted assumptions as well.

The downside is that if the different needed uniformity scopes are not properly captured/expressed with the attribute, a user would be required to "lie" about uniformity in some cases (promising a value is fully uniform rather than subgroup or quad uniform which might be all that was true and required). This "lie" could lead to invalid assumptions about uniformity on anything dependent on this value, and failures if those assumptions are checked or enforced at runtime. So this approach requires a lot more work to achieve a full solution.

The function/block based opt-out seems to just turn off the error for sensitive operations in a given scope, and built-in variants are similar for the operations themselves. This means either a) uniformity analysis will still be wrong about some unknown number of things, uncorrected by the opt-out mechanism, or b) we just give a blanket pass to code that actually violates the rules. It doesn't leave behind the detailed information required to document where precisely the uniform analysis is too conservative (or just can't know), allow the runtime to catch the introduction of a new source of non-uniformity that wasn't intentional (help catch real bugs), or allow for uniform-aware codegen / optimizations as highlighted by prior comments.

Given these trade-offs, I wonder if a global opt-out could be provided until a proper attribute-on-value based approach was ready and robust, after which the global opt-out could eventually be deprecated. Code using global opt-out could be prevented from running in some less trusted mode, or without some compatibility setting in some future, or assumptions could be verified/enforced with extra runtime cost. Putting properly-scoped attributes on the originating source of (analysis-based) potential non-uniformity could reduce assumption validation/enforcement overhead for a "safe" mode as well.

@kdashg
Contributor

kdashg commented Nov 2, 2022

WGSL 2022-11-01 Minutes
  • MM: What about a uniformity opt-in?
  • KG: Want developers to have to think about it first. Don’t want to start with the confusing behavior. Want the guardrails on by default.
  • KG: Would like to see a proposal for which functions would need to be marked with nonuniform options, e.g. which entry points are modified. Generally I like annotations for builtins, for cleanliness.
  • DN: Ben made comment that one thing he doesn’t like with new names for builtins, is that they don’t actually do different things.
  • DN: Also, in v1, we have just one level of scopes, but we expect to have more scopes in the future. The number of builtin names you’d need would multiply.
  • MM: I believe it’s group opinion that we need UAn for compute shaders for safety.
  • DN: I do think that question is still open, e.g. other ways to clog the gpu
  • JB: Does WebGL have this problem?
  • KG: You can write very slow shaders in webgl
  • MM:
  • JB: I just wonder if we’re adding any new risks here or not, vs webgl?
  • DN:
  • TR: Just an idea, could you have a “more trusted” mode?
  • JB: One nice thing about AB’s thing: attributes on vars and args, plus attribs on statements, are both useful things. I could imagine you want both. We maybe don’t need to choose; it might be useful to do both.
  • KG: Whatever gets us consensus faster.
  • JB: Yes. Also btw we do have a “collective” section
  • AB: Won’t always be builtin functions at fault, e.g. descriptor arrays (when we get to that feature). Attributes are the lowest cost thing right now. Gives you the freedom to modify your code in the smallest scope way. E.g. putting it on a comparison, rather than the whole if-statement. Once it’s a statement, you can’t get below a certain level of granularity without refactoring your code.
  • DN: I added an example of a user experience where the user knows better than UAn, and diags tell them what’s wrong, and they say “no that’s fine actually” when they mark with an attrib.
  • DN: Flipside is when an author is instead merely willing to accept bad/wrong derivatives, and annotates acceptability rather than actual correctness. Like “pretend uniform”.
  • AB: SPIR-V has prior art: has Uniform decoration and UniformID (for scoped). And for descriptor arrays it has a “non-uniform” control for descriptor arrays.
  • KG: Have several options here. My dream was how Rust has unsafe block and unsafe expressions, and also unsafe annotations on functions. The MVP for what we need would be annotations on functions. What would make me super satisfied is function-level annotation. That’s tolerable and minimal.
  • JB: Have a question. Talking about attributes. But putting things on stuff we don’t annotate now: types, return types, parameters, struct members. We don’t put them on expressions yet. When someone says we put attributes, I think of putting them on things we already have. Are you suggesting putting them on expressions?
  • AB: Yes, that’s a possibility.
  • JB: Seems to be roughly isomorphic between unsafe-expr and attr.
  • JB: Surgical approach seems not acceptable to Unity. Need to have “a” opt-out which can be used with cooperation with the code generator in the bulk way. Would prefer surgical,...
  • KG: In extremis, could have unsafe annotations on all functions.
  • AB: Unity’s only requirement is that tooling can generate it for them.
  • JB: But don’t want it strewn all over.
  • KG: You only really need those in certain spots. If your tooling is smart enough about which calls have to be non-uniform, then it can emit more surgical annotations/wrappers as needed.
  • TR: Could have subgroup operations assumeUniform and makeUniform (latter would pick first lane’s value), to make it actually uniform.
  • TR: Don’t like mangling the functions.
  • KG: Most people don’t like those.
  • EG: Think the premise of wider opt-out is needing a standard mechanism. Ok… resetting understanding.
  • MM: Have said before. Jury is still out on what the security requirements are. It may actually be the case that both derivatives and barriers require the analysis. That could be the outcome.
  • AB: Touching on makeUniform. We have workgroupBroadcast proposal for compute shaders. But don’t think it’s clear for fragment.
  • TR: For fragment, need quads to cooperate.
  • AB: This is where we have the issue of whether we have the invocations in the quad actually getting together again to supply their values.
  • MM: Is there a quad barrier?
  • TR: No.
  • AB: There is a vulkan extension for subgroup uniform control flow that moves the reconvergence requirement down to subgroup level rather than whole invocation group. But that’s not universally available to where we’re targeting WebGPU.
    • VK_KHR_shader_subgroup_uniform_control_flow
  • KG: You can kinda do certain transforms to incur total reconvergence but it’s not trivial or great.
  • DN: Concretely how about:
    • @uniform and @fake_uniform (bikeshed name). Applicable to various things, and captures programmer intent.
  • EG: In Rust practice, you use a single ‘unsafe’ construct but document it as you go (in comment?)
  • TR: ‘assume_uniform’ does the same thing.
  • JB: Watch out for Rust’s unsafe. Rust unsafe functions and blocks were incorrectly unified in first versions. Two different concepts actually. Let’s be cautious about unsafe functions and unsafe blocks.
  • KG: That’s why I want to look to someone else. Otherwise we may fall into the same trap. Like, they had troubles with this, but then they fixed it.
  • AB: If nobody is allergic to using an attribute as the solution, then @assume_uniform can capture both cases. Then I can take this and make it concrete. Then the discussion is what things to annotate.
  • JB: Consider future direction. The analysis can report an error whose cause threads back through several functions. When that occurs, we can provide detailed diagnostics. However, it will be annoying for users. Programmers know where they thought things were going to be uniform. They have no way to say their expectations about their program. Because we can’t capture programmer intent at that granularity, we’ll cause difficulty for those difficult spots. Think we’ll want to put annotations on the function-level boundary. User will want to say how each function will behave. Produces a summary of each function’s effect. Will want the analysis to check their expectations. That will be useful just for the sake of being able to debug. Let’s keep that in mind, maybe not necessary for V1.
  • KG: Namewise, I want the name to be negative and scary. Really like “unsafe” for that reason. Negative connotation is really important. Want people to not want to apply this opt-out.
  • MM: +1 on what Jim said earlier. Historically expected an author to assert that a particular variable is uniform. Would cause an error. Good for cutting off the length of action-at-a-distance.
  • JB: +1 Kelsey’s point about negative connotation. People are swayed by connotations of ‘unsafe’ and ‘use strict’ in JavaScript.
  • AB: Understand that argument. Where you end up changing your code differs depending on positive or negative connotation.
  • EG (in chat): take_off_kneeepads
  • ZJ (in chat): ignore_non_uniform

@tex3d

tex3d commented Nov 9, 2022

Follow up comment: I recently found out about Vulkan's uniformity requirement for barrier being "dynamic uniform". My previous assumption that attributes on values would be sufficient is not valid for the "dynamic uniform" case.

Here's a pseudocode example for "dynamic uniform"

if (a()) {
  // a() is not uniform, and exec here is not uniform
} else {
  // exec here is not uniform
  if (b()) {
    // b() is not uniform, and exec here is not uniform
  } else {
    // exec is dynamic uniform here due to (compiler unknown) dependency between a() and b()
    // uniform if and only if a() is false and b() is false.
    // if (!a() && !b()) on any thread, then this is true for *all* threads.
    barrier();
  }
  // exec here is not uniform
}

So if "dynamic uniform" is to be supported, it seems that some form of attribute on the barrier() call, or on the block, or on a function wrapping the barrier() call, is necessary. No value or expression attribute will suffice in this case.

Note: FXC may not allow this case for HLSL, and I know of at least one implementation that cannot run this code as expected on D3D.

@kdashg
Contributor

kdashg commented Nov 9, 2022

WGSL 2022-11-08 Minutes
  • KG: What I really like about using the negative term instead of the positive term is that it suggests that things are normally safe but we’re using an escape hatch. Flags code for extra-careful review.
  • KG: When reviewing an unsafe block, the goal is to become sure that what’s in the unsafe block is indeed safe - it’s just that the compiler can’t see that
  • AB: If we were to write an attribute-based proposal with a placeholder name, would that be all right? Would that let us move forward on substantive questions like, where can this be used? Separately from naming
  • DG: re: attribute: In one of the issues, someone was porting rendering and there was a variable input to the shader that was uniform, but the analysis had no way of knowing that. I support the uniform attribute if it supplies additional information about the program that the compiler can’t determine.
  • KG: And if that’s violated, we say, “that’s on you”?
  • DG: yes
  • DG: Unless people think we can get away without the ability to mark knowledge that the programmer has
  • KG: We should go as far as we can in that direction
  • TR: I made a long comment on the issue, but I recently found out about Vulkan’s “dynamic uniformity” requirement for barriers - they’re not quite the same as what I expected. This kind of broke the idea that you could simply attribute values and get a guarantee of uniformity in control flow for barriers. Marking individual values with attributes wouldn’t be sufficient; you can create control flow that can’t be marked up sufficiently.
  • AB: It depends on the use case. If you could put an attribute on an expression, you could put it on the condition of an if, or put it on a let and then use that in the if
  • TR: It could be non-uniform when false, but uniform when true. You could have two if statements, and in the … (See comment on issue)
  • AB: If you could put the attribute on the function call, then you could put it on the builtin itself, avoiding a proliferation of attributes on builtins. This gives it a lot of flexibility
  • TR: There’s a difference between a value being uniform, and the control flow operating on that value being uniform
  • AB: The key for the analysis is that what causes a problem is a collective operation called from non-uniform control flow. That could depend on the uniformity of a value. But with an attribute on expressions, you can always make it work.
  • TR: But also uniformity scope matters. So I don’t know if we can capture everything we need. We need a simple escape hatch for v1.
  • AB: The bare minimum is to put an attribute on function calls. That would solve everyone’s problems; it might not always be ideal but it would suffice
  • KG: For 1.0, messy suffixes might not be so bad
  • AB: That would also suffice
  • KG: Nobody prefers it, but it’s shippable
  • BC: We would never be able to get rid of it
  • KG: We would have to support them forever, but we could discourage people from using them once we have something better. And they’re not hard to implement
  • BC: Is that preferable to module scope?
  • KG: Yes, preferable, even for v1. Module scope will become something people slap on as a habit, without thinking about it.
  • BC: I prefer function-scope annotations to littering the language with functions we don’t want to keep. That would just turn off errors on a particular function
  • TR: How would that affect the rest of the analysis, using the results of those functions?
  • BC: You could proceed with the analysis, but simply silence the warnings. Or, you could avoid propagating the non-uniformity
  • AB: If I branch on a non-uniform value in a flagged function and use a barrier, would I get an error?
  • ZJ: Is this an annotation that works within a function, or does it affect the function’s callers as well?
  • KG: If you mark a function, you’re saying, just pretend this is uniform. So you would not get errors if you used its value
  • ZJ: Maybe if we are going to have function-scope opt-out, where we put an attribute on the whole function, then does that mean that any expressions in the body of that function will not produce errors?
  • KG: yes
  • TR: It makes much more sense to preserve the analysis within the function, but mute the errors
  • BC: I’m worried about the implementation complexity of determining where to place the attributes, if you wanted to avoid placing them unnecessarily. Right now, we only return one error and then bail, so having to identify all the places where it’s needed would be a major effort
  • AB: Having an attribute on a function call would clearly work just as well as having variant functions with ugly names. Unity’s feedback is, they don’t care how we do it - just as long as the tool does it for them
  • BC: Let’s discuss internally
  • KG: I do like the equivalence between the attribute on a function call and calling a function that has a suffix, it’s nice to have those two equivalents. I don’t quite understand the implementation complexity concerns around collecting multiple errors
  • KG: Can we do other things to the language, like the uniformRead primitive (Uniformity analysis: find a way to load from a storage buffer uniformly. #2321), that would make these annotations less necessary in the first place?

@dneto0
Contributor

dneto0 commented Nov 17, 2022

Follow up comment: I recently found out about Vulkan's uniformity requirement for barrier being "dynamic uniform". My previous assumption that attributes on values would be sufficient is not valid for the "dynamic uniform" case.

Here's a pseudocode example for "dynamic uniform"

if (a()) {
  // a() is not uniform, and exec here is not uniform
} else {
  // exec here is not uniform
  if (b()) {
    // b() is not uniform, and exec here is not uniform
  } else {
    // exec is dynamic uniform here due to (compiler unknown) dependency between a() and b()
    // uniform if and only if a() is false and b() is false.
    // if (!a() && !b()) on any thread, then this is true for *all* threads.
    barrier();
  }
  // exec here is not uniform
}

To make sure I understand the concern, I wanted to make this more concrete.

The example is meaningful if some invocations reach the barrier.
A good meaningful case is if all invocations reach the barrier.

So a() and b() should both be false, but the compiler must believe they are MayBeNonUniform.
Then at runtime make w = 1, so that both a() and b() return false.

So:

     var<workgroup> w: i32;
     fn a() -> bool  { return  (w & 1) == 0; } // w is even
     fn b() -> bool  { return  (w & 3) == 0; } // w is a multiple of 4

     @compute @workgroup_size(3)
     fn main(@builtin(local_invocation_index) lid: u32) {
        if (lid==0) { w = 1; }  // falsify a() and b()
        workgroupBarrier();

        // now copy the example code fragment from above.
        if (a()) {
          // a() is not uniform, and exec here is not uniform.  [[1]]
        } else {
           // exec here is not uniform [[2]]
           if (b()) {
             // b() is not uniform, and exec here is not uniform.  [[3]]
           } else {
             // [[4]]
             // exec is dynamic uniform here due to (compiler unknown) dependency between a() and b()
             // uniform if and only if a() is false and b() is false.
             // if (!a() && !b()) on any thread, then this is true for *all* threads.
             workgroupBarrier();
           }
           // exec here is not uniform.  [[5]]
        }
     }

And now I see @tex3d's point.
Any set of attributes applied to the conditions or the 'if' constructs themselves to promise uniformity ends up being either a lie, or not powerful enough to give us the desired ability to "see" that the inner barrier call is uniform.

  • Let's assume any attribute on a condition is a promise that a condition or flow is uniform.
  • Then any attributes we apply individually on the evaluations of conditions a() and b() must get us far enough to say that execution at [[4]] is uniform.
  • But then that would also either say that:
    • that [[1]] and [[2]] are executing uniformly because we promised evaluation of a() is uniform, which is a lie,
    • or that [[3]] is executing uniformly, because we promised evaluating b() is uniform, which is also a lie

That's a very nice demonstration that getting what we want when we only have "BLAH IS UNIFORM" attributes forces us to lie at least a little bit.

@dneto0
Contributor

dneto0 commented Nov 17, 2022

In internal discussions a 4th option has been expressed (that I wasn't aware of until recently), which is a mashup of the other three:

  1. An attribute on a call to a builtin, to ignore uniformity problems for that invocation of the builtin.

Formally this is like option 3's uniform_textureSample, but spelled as two tokens instead of one: @unchecked_uniformity textureSample. That avoids the concern about future combinatorial blowup of the number of builtins.

Google doesn't have an opinion on this yet. We're still talking.
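
Applied to the earlier fragment-shader example, option 4 might look roughly like this. Placing the attribute immediately before the call is just one reading of the two-token spelling above, not settled syntax, and the t and s bindings are assumed as before:

@fragment
fn main(@builtin(position) p: vec4<f32>, @location(0) f: f32) {
  if (f < 0) {
    // Suppress uniformity errors for this one call site only.
    let c = @unchecked_uniformity textureSample(t, s, p.xy);
  }
}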

@dneto0
Contributor

dneto0 commented Nov 21, 2022

After internal discussion, Google requires a coarse grain control to be present. Our preference for the coarse grain control is an option on CreateShaderModule (i.e. not in the shader, and so it's out of band). We suggest a name such as assumeDynamicUniformityForDerivatives.

@krogovin

krogovin commented Nov 21, 2022

After internal discussion, Google requires a coarse grain control to be present. Our preference for the coarse grain control is an option on CreateShaderModule (i.e. not in the shader, and so it's out of band). We suggest a name such as assumeDynamicUniformityForDerivatives.

What about the uniformity issues coming from barriers (be it fragment, vertex, or compute shading)?

@alan-baker
Contributor Author

After internal discussion, Google requires a coarse grain control to be present. Our preference for the coarse grain control is an option on CreateShaderModule (i.e. not in the shader, and so it's out of band). We suggest a name such as assumeDynamicUniformityForDerivatives.

What about the uniformity issues coming from barriers (be it fragment, vertex, or compute shading)?

WGSL only has compute barriers. It would be possible to have a separate feature for barrier opt-out, but there is less consensus about that being a requirement.

@kdashg
Contributor

kdashg commented Nov 22, 2022

WGSL 2022-11-15 Minutes
  • JB: Personally, attribute syntax could be a perfectly fine thing to use in a lot of places, without needing a lot of innovation. Think it’s very similar to what I had in mind with the block; to me it’s a spelling difference.
  • DN: Tex’s comment says, he likes a well-named attribute, especially one that could be checked in a debug mode - an assertion that could be checked dynamically. He also liked the very coarse-grained opt-out, to support Unity. Taking that, I can agree with a very coarse-grained opt-out. But we want to avoid blog posts where people just copy code without thinking about it. An option on createShaderModule would get the global opt-out someplace people wouldn’t copy-paste blindly.
  • DN: It would also be useful to have an attribute you can put on a variable, “I promise this is uniform”. JP is concerned about the details: can it be explained clearly, in a way that people can absorb? I’d like to pursue this path and make a concrete behavior, so that we can all be on the same page.
  • KG: Coarse-grained opt-out: I would hope that we could just make that a non-standard implementation-specific thing. If we do standardize it, we should be clear that it’s going away. I would like to prevent people from building on this.
  • DN: We implemented it as a warning first, then errors. The feedback has been completely one-sided: people want it to go away. Raph has been the only one mildly positive about this. We should take that into account.
  • BC: One-sided for fragment shaders. There are no security issues entailed, so people would only be opting into non-portable behavior.
  • KG: The people who are happy with a feature aren’t going to complain, though. When it works, people are happy and we never hear about it.
  • BC: The global opt-out would be an explicit action that people have to take.
  • KG: The global opt-out would be too easy to reach for as a cure-all. I’d be more inclined to just give them a built-in which has “_nonuniform” in the name, so that they know what they’re getting into, and it documents their willingness to accept the non-portable behavior. Let’s just solve that pain point. Would a “_nonuniform” sampler builtin be acceptable to these users?
  • BC: We’re not keen on “_nonuniform”. You’ll be teaching people to use the non-uniform examples always. And it affects the generated code.
  • KG: I don’t agree with that.
  • BC: For SPIR-V -> WGSL, Tint would take SPIR-V, emit WGSL, warn if there were violations. The WGSL would not compile unless you passed the flag in the browser. So you’re always erring on the side of preventing problems. But you’re supporting the case of people with an existing corpus of problems. People copying WGSL from the web will have to change the API. We have industry experts asking us for this opt-out.
  • KG: We’re saying try it out. We also have industry experts who have told us about bugs and flakiness in derivative-based code that they’ve been able to eliminate when they fixed uniformity issues.
  • JP: That was a problem with barriers in compute shaders.
  • KG: If the opt-out is local, then it doesn’t blindly apply to other areas of the code that were working, or to new code that you introduce. A shader-wide opt-out covers up too much, and people lose the benefit of the analysis when they’re not expecting to.
  • DN: I’d like to pursue both options.
  • KG: I’d rather have name mangling and detailed opt-outs than a global opt-out.
  • DN: The SPIR-V -> WGSL conversion can just spew the local opt-outs everywhere, it’s not pretty but the output isn’t beautiful anyway.
  • JB: Let’s agree that we want something local in v1, and that the global opt-out, if it ever happens, will only happen after we also have something more local, and can decide if we care about a global opt-out at all
  • KG likes scary names
  • DN: SPIR-V has Uniform and UniformId decorations on a value already, with semantics very close to what we want. For UniformId you provide a scope to indicate what area of uniformity you want
  • EG: The SPIR-V decorations seem analogous to the aliasing annotations in LLVM IR generated from Rust.
  • KG: So you’d be able to round-trip WGSL through SPIR-V and maintain the uniformity?
  • DN: Yes, I think that would be workable.
  • DN: For the Unity case, you could slap the attribute on every expression that influences control flow. Even if you load something from memory, you could sanitize it at the point that it affects control flow.
  • BC: Would that affect function calls as well?
  • DN: In the limit, you could put “if @uniform true” around your texture operation.

dneto0 added a commit to dneto0/gpuweb that referenced this issue Nov 23, 2022
It's a fine-grain control that disables uniformity errors
for fragment shader builtins that ordinarily are required
to execute in uniform control flow.

Updates those functions to say they yield an indeterminate value
when invoked in non-uniform control flow.

Contributes to: gpuweb#3554
@RSSchermer

I hope this thread is the right place to discuss this, apologies if it is not.

I'm personally very interested in having the ability to use a "persistent threads" style of GPU usage to process irregular workloads through WebGPU, where threads are kept spinning until the irregular workload is done. In particular, I'm interested in job switching within a single kernel (which I believe is a style referred to as "uberkernels" in the literature). This relies on workgroup coherence, which in the classical persistent threads style would be ensured by picking a workgroup size that matches exactly the subgroup/warp/wavefront size of the device. However, this information is not (and I suspect will never be) available through WebGPU's API. I believe I can work around this by picking a "worst-case" workgroup size of 64 and then using a barrier to achieve the desired coherence with acceptable overhead.

Below is a contrived example of the sort of behavior I'm looking to achieve:

const JOB_ADD_1 = 0u;
const JOB_ADD_2 = 1u;
const JOB_EXIT = 2u;

@group(0) @binding(0)
var<storage, read_write> data: array<u32>;

var<workgroup> iterations: u32;

var<workgroup> job: u32;

var<workgroup> count: atomic<u32>;

@compute @workgroup_size(64, 1, 1)
fn main(@builtin(workgroup_id) workgroup_id: vec3<u32>, @builtin(local_invocation_index) local_index: u32) {
    loop {
        if local_index == 0 {
            if iterations >= 20 {
                data[workgroup_id.x] = atomicLoad(&count);
                job = JOB_EXIT;
            } else if iterations >= 10 {
                job = JOB_ADD_2;
            }

            iterations += 1;
        }

        workgroupBarrier();

        if job == JOB_ADD_1 {
            atomicAdd(&count, 1);
        } else if job == JOB_ADD_2 {
            atomicAdd(&count, 2);
        } else { // JOB_EXIT
            return;
        }
    }
}

(As an aside, I believe the barrier does in actuality occur in uniform control flow in this example, though I understand why conservative static analysis would not be able to prove that. However, I must admit that I don't fully understand the description of uniformity analysis in the spec. Is my intuition here correct, or am I misunderstanding something?)

Currently, this works on Chromium, but it warns that "workgroupBarrier must only be called from uniform control flow". Am I correct in understanding from this thread, and the meeting minutes linked here, that this warning is to become an error unless an opt-out escape hatch is added? If this is indeed the case, I would be very interested in having an escape hatch not just for derivatives, but for barriers as well, and I suspect I won't be the only user interested in processing these kinds of persistent threads style workloads through WebGPU.

@jrprice
Contributor

jrprice commented Nov 24, 2022

@RSSchermer I don't think you need an opt-out style escape hatch to satisfy the uniformity analysis for this code. Instead, you will be able to use the workgroupUniformLoad() builtin that we are in the process of adding, which could broadcast the value of the job variable to all invocations in the workgroup in a manner that the uniformity analysis understands.

The updated logic would look something like this:

loop {
    if local_index == 0 {
        // update the value of `job` as before
    }

    let next_job = workgroupUniformLoad(&job);

    // Use next_job to control the conditional exit from the loop
    if next_job == JOB_ADD_1 {
        atomicAdd(&count, 1);
    } else if next_job == JOB_ADD_2 {
        atomicAdd(&count, 2);
    } else { // JOB_EXIT
        return;
    }
}

@RSSchermer

@jrprice Thank you! That looks very interesting, I wasn't aware. Yes, I do believe that would enable the kinds of kernels I have in mind.

dneto0 added a commit to dneto0/gpuweb that referenced this issue Nov 28, 2022
1. Localized opt-out:

Add unchecked_uniformity attribute

It's a fine-grain control that disables uniformity errors
for fragment shader builtins that ordinarily are required
to execute in uniform control flow.

Updates those functions to say they yield an indeterminate value
when invoked in non-uniform control flow.

2. Coarse-grain opt out:
Add 'fragmentShaderDerivativesCheckedUniformity' boolean field
defaulting to `true`, in GPUShaderModuleDescriptor

When false, weakens the uniformity analysis so no uniformity error is generated
for derivative-based collective operations.

Fixed: gpuweb#3554
@dneto0
Contributor

dneto0 commented Nov 28, 2022

#3644 proposes both a fine-grain and a coarse-grain opt-out mechanism for fragment shader derivatives.

  • Fine grain: a @unchecked_uniformity attribute applied to the call site of a fragment shader derivative builtin function. This disables uniformity errors for that call site.
  • Coarse grain: a fragmentShaderDerivativesCheckedUniformity option in GPUShaderModuleDescriptor defaulting to true.
    When set to false, disables uniformity errors for fragment shader derivative builtins.

@kdashg
Contributor

kdashg commented Nov 29, 2022

WGSL 2022-11-22 Minutes
  • DN: Does Mozilla have a consensus here?
  • JB: We have one about the spec, and one about the platform.
  • JB: If the locally scoped opt-outs would serve the needs of people doing mechanical generation, it seems premature to provide a global opt-out, because the people requesting the opt-outs have not had a chance to really try out the new local opt-outs, vs the desire for a global opt-out. As far as we see, all of these proposals handle the needs of e.g. Unity doing mechanical transformations. We very much want to have the default such that people continue to get help even as they opt out individual parts of their program. [where they know better or don’t care]
  • JB: We also do feel that the undefined behavior that plagues fragment shaders is not as severe as the UB that plagues e.g. C++. We want to put something in people’s hands before we go too far in reacting to their demands.
  • JB: The platform thing: there’s been talk in office hours that derivative problems are not UB-like nasal-demons. So, question for the group: is it indeed not UB, where drivers would have no obligations in this case, or is it less than that?
  • DN: I was looking at the D3D spec, says “unbounded value” or something like that. Have not looked at the VK spec. I think don’t worry about it. You’re going through a hardware channel, where they don’t want the world to blow up either, even if they might not be able to guarantee you what you want.
  • AB: It’s “undefined value”, which is like our “indeterminate value”. Will not destroy your shader.
  • JB: Great! I do think the rest of our concerns still stand, wrt global vs local opt out.
  • AB: Wanted to touch on something you raised, something we discussed internally. That this isn’t severe, and how hard should we try to prevent bugs here? We’re drawing an arbitrary line. It’s easier to get behind it for a barrier, where you will hang. But it’s a gradient, so I think the question is where we should put that line.
  • DG: Wanted to clarify, that e.g. derivatives won’t be UB, not sec issue, but might just be junk. E.g. could you get different values in different corners of the quad?
  • KG: Yes
    • KG: We also touched on this. Alan has a good point about being a line in the sand that we’re choosing to draw. As Jim says, we’d be more comfortable we should give developers something to try. Think many things could be satisfactory to their needs. Maybe we’re asking for implementation time to see if their use case can be satisfied. Generally don’t care too much about the specifics, except care that the opt-out is localized. We have a bunch of things in the solution space that meet that. Would like to unblock those in spec and implementation. Then wait for developer feedback.
  • DN: Google will have to go back and think.
  • KG: Where did the local opt-out end up?
  • DN: TR thinks the attribute should be “this thing is uniform”, so it should capture dev intent for the program. But there’s the case where you might put two of these, and they might both be lies. The dev should not be lying with an attribute in the program. But he can come up with a case where you can construct a program that forces you to lie about uniformity. So he’s convinced me that a simplistic attribute alone would not be sufficient here.
  • JB: Could we get a link?
  • DG: So is DN saying if we implemented a uniform opt-out attrib, maybe like a rust unsafe, and you did something unsafe in there, and you lied. Or by using….
  • DN: Arg is, if only opt-out you add is “this particular val is uniform”, then there’s a program you can construct, where you have to put the attribute on two things….
  • JB: So for an example program that is in fact correct, you have to lie.
  • AB: That feels like we’re misusing the attrib, or instead just misnaming the attrib.
  • DN: Yeah, this starts from TR’s desire that that the attribute is an assertion that a val is uniform, and could potentially be e.g. checked in debug mode. AB I guess would be saying that that may be too far along to start from.
  • JB: The programmer’s understanding might be pretty complicated, such that a simple attribute might not be able to represent the complexity. For example, the programmer might know things like, “If x is true, then y is uniform, otherwise z is uniform”. You can write correct programs using that knowledge, but a simple “@uniform” attribute can’t express it. You’d have to mark both y and z as uniform, and that’s not true.

@fintelia

Is the plan still to consider @uniform to mean that the annotated value is uniform across the entire draw call? Because that'll also force programmers to lie if they branch on a value that is constant across any given primitive (say thanks to "flat" interpolation) but not the same for all primitives in the draw.

@dneto0
Contributor

dneto0 commented Nov 29, 2022

Is the plan still to consider @uniform to mean that the annotated value is uniform across the entire draw call? Because that'll also force programmers to lie if they branch on a value that is constant across any given primitive (say thanks to "flat" interpolation) but not the same for all primitives in the draw.

That is not in #3644

@dneto0
Contributor

dneto0 commented Dec 6, 2022

Google’s position has evolved:

  • For compute: Keep the uniformity errors.
  • For derivatives in fragment shaders:
    • Experience shows that the value provided by the analysis is far outweighed by the false positives.
    • Therefore:
      • Completely remove errors for derivatives.
      • Provide no opt-outs.
    • Instead, provide tooling to help developers debug their shaders, more than just uniformity.

@kdashg
Contributor

kdashg commented Dec 7, 2022

WGSL 2022-12-06 Minutes
  • JB: Mozilla talked about this some. As Mozilla understands it, the committee has consensus on the following:
    • In compute shaders, using barriers in non-uniform control flow can have serious consequences for stability. WGSL should apply the uniformity unconditionally in compute shaders, and opt-out annotations should not be supported.
    • In fragment shaders, the consequences of computing derivatives in non-uniform control flow are not as serious: non-portable or meaningless values and rendering artifacts, but not undefined behavior. These problems are often considered tolerable even in production.
    • In fragment shaders, an annotation on calls to builtin functions that compute derivatives (implicitly or explicitly), to say that the call need not be executed in uniform control flow, makes sense. Mozilla supports the attribute as described in Add uniformity analysis opt-out for derivative-using builtins #3644.
    • Making fragment shader uniformity failures hard errors, without providing opt-outs, localized or otherwise, puts one’s users in a situation from which they have no recourse, and it is inevitable that users would object. Mozilla feels that such a process provides no insight into how helpful users would find an analysis with localized opt-outs.
    • Applying fragment shader uniformity analysis to a production code base translated into WGSL provides only limited information on how helpful the analysis is to people writing new shaders: the shaders have already been debugged, so there should be few problems to detect in the first place.
    • Mozilla suggests that the fragmentShaderDerivativesCheckedUniformity processing option should demote uniformity errors in fragment shaders to warnings. Since the attribute on derivative collective operations fully silences warnings, this gives users a quick path to getting things up and running, and then lets them silence warnings as time permits. This preserves gentle pressure on users towards portable behavior.
  • AB: Want clarification. How I read the last part is, if the default remains the same….?
  • JB: Starting point is error. If flip the option to false, they become warnings.
  • KG: Sympathize with David’s concern that NDEBUG defaults to true. Would be better that the default eval is false.
  • DN: Fine with flipping the sense of the global option.
  • TR: For Microsoft, this sounds fine. Comment from David though.
  • DN: I posted a comment today summarizing Google’s developed understanding that the value of the analysis is outweighed by the false positives.
  • JB: Those are on already-debugged shaders. (And hence doesn’t provide the signal we care about.)
  • BC: Have heard from experienced developers writing new shaders, and they complained.
  • DG: Discussed briefly with Myles. We had assumed uniformity analysis was required for security. If non-uniform control flow for derivatives is used to smuggle info out of other shaders, we don’t see it as an opt-out. If it’s not an issue that way, then we don’t have strong opinions on enforcement. E.g. could be a lint in dev tools.
  • KG: Think in fragment shaders it’s not a security concern.
  • DG: Observation on Google’s testing shaders. They are game shaders; just because they run well enough doesn’t mean they’re secure. It can’t be asserted that they are secure (including not leaking information).
  • JP: Keep coming back to the question of whether the analysis is useful. Think that misses the point we’re trying to make. Just because the analysis can be useful to some people, doesn’t justify that it must run in CreateShaderModule. Question is does that provide extra value over and above running in other tools.
  • KG: There is value in having it run “on accident”, ie. unintentionally. These are things people don’t understand, and won’t know to seek out.
  • JB: Want to reinforce. We have an opportunity to shift the industry. Introduce a really wide audience to a good analysis. Having it in CreateShaderModule is an opportunity to widely broadcast the opinionated things that we think are good. It’s within our power to take it out, but not add it later. Think we should be brave and try it. If everybody hates it, then take it out later. That’s the backward compatible thing.
  • JB: Re: security. If anybody finds a case of info leakage via derivatives, then I don’t think we have any way to address that without going to a no-opt-out shader model. That’s not what we have in WebGL, and no professional developer will tolerate. No-opt-out is a non-starter. There must be some opt-out in fragment shaders. Should not end up with something less useful than WebGL.
  • AB: Trying to conceptualize the security risk, between shaders. Can understand how invocations can leak info to each other, but not between shaders. Derivatives are not special in that way. You might ask about indeterminate values.
  • KG: We say indeterminate values that are safe, i.e. don’t leak.
  • KG: Repeat Jim’s point that WebGL definitely doesn’t protect this derivatives case itself.
  • AB: Re: be brave and propagate FXC. FXC doesn’t complain about many of these. But we go further.
  • JB: Didn’t want to index too much on FXC. Just wanted to acknowledge prior work.
  • TR: FXC had more strict validation, and it was relaxed because developers needed things that FXC couldn’t prove uniform. Believe it’s not enforced for derivatives.
  • BC: Want to flip the fact-finding thing. Do we have evidence/facts that people want this and it would have saved them from bugs in shader code.
  • KG: Think it’s hard to answer.
  • JB: Ben is correct there’s a duality to the question. Will try to talk to folks to get that information.
  • KG: Do personally believe this provides value. Even folks who have written lots of shaders before, and I didn’t know the depths and paucity of the guarantees that are given to us. I extrapolate that to other people. Think we have an overall consensus and differ only on degree. Think nobody is critically disadvantaged by the consensus we’re offering.
  • GR: Think there has maybe been contempt for shader authors. I know the quote of “faster horses”. That suggests we know best, and risks people ignoring this shader language entirely. Not sure if it’s productive. But that’s my paramount concern.
  • KG: We’re in an unusual situation, to push back in laxness that exists in shader authoring. Many folks don’t realize we have these constraints. And even if we get it wrong, can offer getting around it.
  • BC: Agree with what Greg said. That’s part of the reason Google pushes hard on this. Don’t want to offend the community of people who have been in this space longer than most of us. Nobody we have spoken to about derivatives in fragment shaders believes the analysis should stop them from doing what they need to do. Before we make it more difficult to write shaders, is it a problem that needs solving?
  • JB: Mozilla agrees expert shader authors need to be able to write what they need to write.
  • KG: Do you think requiring pro shader authors to ask for this opt-out is unreasonable and will turn them off WebGPU in a harmful way.
  • BC: It won’t turn off folks who are forced to write WebGPU code. But we will get negative publicity.
  • JB: Feels like a very scoped concern. Have made millions of choices. We are here to say what we think is good.
  • EG: Asking context to Ben. I’ll ask async.
  • GR: It’s been said, we’re forcing people to write WGSL to use WebGPU. If there are users who are excited to use WGSL, would love to hear their opinions. Have heard folks who have used WebGL are resistant in a number of areas. Want to hear more.
  • KG: I would have liked to have this feature.
  • JB: Do hear some people actually like it.
  • DN: Raph likes WGSL, and has switched his project from the GLSL pipeline.
  • KG: Hard to get feedback because the people who know in their bones the actual constraints, there are very few of them.
  • KG: Want the group to consider taking Add uniformity analysis opt-out for derivative-using builtins #3644, modulo reversing the sense of the global opt-out, and with the amendment that we’d like to see warnings required. They’re warnings, so it’s down to the UA to actually present them or not.

dneto0 added a commit to dneto0/gpuweb that referenced this issue Dec 13, 2022
Diagnostics may be "filtered" both locally and globally.
A filter can drop a matching diagnostic, or change its severity level.

Fixes: gpuweb#3554
dneto0 added a commit to dneto0/gpuweb that referenced this issue Dec 13, 2022
Diagnostics may be "filtered" both globally and over a range of source
text.
A filter can drop a matching diagnostic, or change its severity level.

Future versions of WGSL can drop a rule too.

Created "diagnostic rule name" token class
- They are spelled like identifier-like things but are not explicitly
  enumerated in the spec.

Fixes: gpuweb#3554
@kdashg
Contributor

kdashg commented Dec 14, 2022

WGSL 2022-12-13 Minutes
  • PR Add uniformity analysis opt-out for derivative-using builtins #3644 : proposed a fine-grain and coarse-grain opt-out
    • Withdrawn by Google
  • New Google proposal: PR Add diagnostics, derivative uniformity problems trigger diagnostics #3683
    • DN: Summary is:
      • Describes diagnostics
      • defines “triggering rule”, creates one triggering rule for “derivative_uniformity”
      • Provides source-range-based and global mechanism for filtering diagnostics. If any error diagnostics remain after filtering, then causes shader creation error.
  • JB: Like the machinery, seems good to have in general. Having it in the spec will give us more opportunities to have other kinds of diagnostics. Know we can handle them and have them not be showstoppers if they’re wrong.
  • JB: Concern (not a blocker), the only thing this does is affect the diagnostics that are reported. Doesn’t affect the behaviour of the analysis. If the developer knows that in practice a value is uniform, there is no way to say that and influence subsequent behaviour of the analysis (elsewhere). So if you have this function that you know returns a uniform value, you have to apply the diagnostic filter to every call site that is influenced by the return value of that call.
  • AB: Agreed. That’s no difference from the previous proposal.
  • JB: The kinds of tools I want to influence the analysis.
  • DN: Yes, can add that later as a nice add-on (which itself could not be complete, by Tex’s argument a few weeks ago). That is not necessary for v1.
  • JB: The spec isn’t clear about what you can apply a ranged diagnostic filter to. It shows examples.
  • DN: Right now it’s spec’d as call_expr (which is a new thing).
  • AB: Can put it anywhere you can put an attribute. Currently
  • JB: Would be nice to put it on compound statements.
  • JB: The defaults are where Mozilla wants.
  • DG: Started asking internally about leakage across threads. Started looking for myself, but don’t have an answer.
  • JB: Any time there’s a security problem we’ll revisit anything.
  • KG: Resolved to take this. And this solves Designing a uniformity opt-out #3554
  • DN: Q about generating “all” the warnings. As drafted, this diagnostics PR says have to generate all the warnings.
  • JB: Think James’ concern is about the combinatorially many paths that are then required to generate many warnings.

dneto0 added a commit to dneto0/gpuweb that referenced this issue Jan 3, 2023
…ound statements

This implements option C from gpuweb#3689

It removes the ability to apply range diagnostic filters directly to
expressions.

Fixes gpuweb#3554 gpuweb#3689
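
To make option C concrete (a sketch only; the bindings, names, and the earlier expression-attribute form are illustrative assumptions): the range filter now attaches to a compound statement (or other statement and function forms) enclosing the call, not to the call expression itself.

  @group(0) @binding(0) var t : texture_2d<f32>;
  @group(0) @binding(1) var s : sampler;

  @fragment
  fn fs_main(@location(0) uv : vec2<f32>, @location(1) flag : f32) -> @location(0) vec4<f32> {
    var color = vec4<f32>(0.0);
    if (flag > 0.5) {
      // No longer allowed: a filter directly on the expression, e.g.
      //   color = @diagnostic(off, derivative_uniformity) textureSample(t, s, uv);
      // Instead, filter the enclosing compound statement:
      @diagnostic(off, derivative_uniformity) {
        color = textureSample(t, s, uv);
      }
    }
    return color;
  }
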
dneto0 added a commit to dneto0/gpuweb that referenced this issue Jan 5, 2023
- Define diagnostics, each having:
    - severity: error, warning, info, off
    - triggering rule
    - triggering location
- The spec names certain triggering rules.
  Currently only "derivative_uniformity" is defined, and it
  defaults to "error" severity.
- Uniformity section is updated generally to say that analysis
  failures trigger a "uniformity failure", instead of "rejects the program".
  In one place, say that uniformity failure for derivative builtins
  trigger derivative_uniformity at the call site of that builtin;
  and that uniformity failures for barrier builtins trigger a
  shader-creation error.
- Triggering rules are named like identifiers.
- Define diagnostic filters, which have
    - affected range: span of source text
    - new severity
    - triggering rule
- Filters with an unknown triggering rule are ignored.
- There are global filters, which are given as directives at
   the top of the module (mixed in with enables).
- There are "range" filters, which are given as @diagnostic
   attributes before: control flow statements, switch bodies,
   function declarations, and compound statements.
- @diagnostic attributes may be given more than once on a particular
  object.  But they must not "conflict", meaning have the same
  range and triggering rule, but different "new severity".
- The affected ranges of diagnostic filters perfectly nest, by
  construction.
- To apply a filter to a diagnostic:
    - the triggering location must fall in the affected range
    - they must have the same triggering rule
    - the severity of the diagnostic is updated to the one specified
      in the filter.
- Define a step-wise process for processing triggered filters, roughly:
  1. For each diagnostic with triggering rule R, apply the diagnostic
     filter for rule R whose affected range is smallest but also
     encloses the diagnostic's triggering location, if one exists.
  2. Discard "off" diagnostics.
  3. For each rule R, all but one "info" diagnostic for R may be
     discarded.
  4. For each rule R, all but one "warning" diagnostic for R may be
     discarded. If there is such a diagnostic, all the info messages
     for R may be discarded.
  5. If there is at least one error diagnostic, all other diagnostics
     may be discarded.
     Generate either a shader-creation or pipeline-creation error.
  6. Diagnostics generated during shader-creation are reported
     in the messages member of the GPUCompilationInfo object.
  7. Error diagnostics generated during pipeline-creation are reported
     to the enclosing GPU error scope.

Fixes: gpuweb#3554 gpuweb#3689
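
As a rough illustration of the mechanism enumerated above (a sketch, not normative text: the bindings and function names are invented, and the spellings follow the directive/attribute forms described in this commit): a global filter is a directive mixed in with enables, a range filter is an @diagnostic attribute on, for example, a function declaration, and the nearest enclosing filter decides what a derivative_uniformity triggering location reports.

  // Global filter: demote derivative_uniformity to a warning module-wide.
  diagnostic(warning, derivative_uniformity);

  @group(0) @binding(0) var t : texture_2d<f32>;
  @group(0) @binding(1) var s : sampler;

  // Range filter on a function declaration: turn the rule off for
  // triggering locations inside this body, overriding the global filter.
  @diagnostic(off, derivative_uniformity)
  fn sample_if(cond : bool, uv : vec2<f32>) -> vec4<f32> {
    if (cond) {
      return textureSample(t, s, uv);  // the triggering location is this call site
    }
    return vec4<f32>(0.0);
  }

  @fragment
  fn fs_main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
    // uv.x > 0.5 is not uniform, so the analysis would flag the textureSample
    // above; the filters only decide how (or whether) that is reported.
    return sample_if(uv.x > 0.5, uv);
  }
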
@kdashg
Copy link
Contributor

kdashg commented Jan 11, 2023

WGSL 2023-01-10 Minutes
  • Also:
  • JB: Are there really two PRs? Would be nice to reduce to one, if so.
  • DN: [yes?]
  • DN: Briefly, uniformity issues are an error in compute shaders, but only a diagnostic (soft) error for fragment shaders. [...]
    • Uniformity failure triggers a diagnostic
    • The diagnostic can be filtered
    • But if it is not filtered, then by default it produces a shader-creation error.
  • MM: So the opt-out is for disabling the diagnostic, not modifying the dataflow through the graph?
  • DN: Yes
  • JB: Do I understand correctly that the reason for collecting these sets is so that, when we have a user-defined function, we have enough detail to attribute the error to specific builtin calls in the user-defined functions?
  • DN: Yes, we don’t want to have to go back and re-calculate this
  • JB: I would like another week to better understand the language used here.
  • DN: In real life, you’d construct this differently anyway.
  • JB: Doesn’t that not work, because the analysis needs to match?
  • DN: If you turn off the error, then it can’t cause an error, so you might as well not emit that edge.
  • JB: But in terms of how we assign tags to funcs, diag tags have no applicability to [...]
  • DN: [...]
  • DN: The obvious thing to do in an implementation,
  • KG: Would prefer to have pseudocode that implementors implement rather than spec language that is different from how it should be implemented.
  • AB: The modifications David was referring to are good for turning something “off”, but not great for the different levels “warn”, “info”, and “error”. So it depends on the maximum severity that may be in effect.
  • MM: Nitpick: It would be nice if we referred to these by the [class of] builtin that they are, rather than the type of shader. [imagine if we added workgroup to frag shaders]
  • MM: We are discussing with our HW team about how bad it is to get derivative stuff wrong.
  • JB: If it’s “bad derivatives are a disaster”, then we have trouble
  • MM: Just trying to reflect our harsh realities.
  • AB: Regardless of where that lands, this is two things, one is opting out, other is introducing a way to talk about severity of diagnostics.
  • MM: Sounds legit. We don’t have opinions about details about opting out of UnAn. Mechanism for quieting the error seems fine, we’re mostly concerned about security here.
  • DN: Sounds like we’re on the same page.
  • MM: What happens if you try to opt out of UnAn in a compute shader?
  • DN: Only opt-out is actually for derivative-uniformity as an attribute, so it wouldn’t apply.
  • MM: sounds like we should warn if it’s used where it doesn’t make sense.
  • KG: UAs are allowed to emit warnings!
  • Resolved: Write more and better spec text.
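
To illustrate the point discussed above that the filter only quiets the diagnostic and does not feed back into the analysis (a sketch; the bindings, the helper name, and the assumption that the analysis treats a read_write storage load conservatively are for illustration): even if the author knows a value is uniform in practice, the filter has to cover the textureSample call sites the analysis blames.

  @group(0) @binding(0) var t : texture_2d<f32>;
  @group(0) @binding(1) var s : sampler;
  @group(0) @binding(2) var<storage, read_write> flags : array<u32>;

  // The author knows every invocation reads the same value here, but the
  // analysis does not, so the result is treated as non-uniform.
  fn effectively_uniform_flag() -> bool {
    return flags[0] != 0u;
  }

  @fragment
  fn fs_main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
    var color = vec4<f32>(0.0);
    // The filter sits over the call site that would trigger the diagnostic;
    // it does not change what the analysis concludes about the condition.
    @diagnostic(off, derivative_uniformity) {
      if (effectively_uniform_flag()) {
        color = textureSample(t, s, uv);
      }
    }
    return color;
  }
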

@kdashg
Copy link
Contributor

kdashg commented Jan 18, 2023

WGSL 2023-01-17 Minutes
  • Add diagnostic filtering; range diagnostics apply to control flow constructs and compound statements #3713
  • FROM LAST WEEK:
    • JB: I would like another week to better understand the language used here.
  • Anything come from that?
  • Whats left:
    • Better explanation of the triggering location. (DN)
  • JB: Commented in PR. Overall looked fine. Concerned about tags assigned to function and params when diagnostic is filtered. Only real substantive question.
  • DN: When I tried to write it with explicit tracking, I got it wrong. Need to address with new text that computes the right things. Good; we've narrowed it down to that.
  • JB: Sounds like just waiting for new text
  • JP: So, just editorial? Good to land?
  • JB: Pretty substantial, the algorithm is(?) wrong.

@dneto0
Copy link
Contributor

dneto0 commented Feb 15, 2023

Fixing #3689 fixes this.

dneto0 added a commit that referenced this issue Mar 6, 2023
…structs and compound statements (#3713)

Add diagnostics, derivative uniformity failures trigger diagnostics

Fixes: #3554 #3689

- Define diagnostics, each having:
    - severity: error, warning, info, off
    - triggering rule
    - triggering location
- The spec names certain triggering rules.
  Currently only "derivative_uniformity" is defined, and it
  defaults to "error" severity.
- When named, triggering rules are named like identifiers.
- There is an unnamed trigger rule, used for error diagnostics
  that are not filterable. An example is a uniformity failure
  on a barrier call in a compute shader.
- Uniformity section is updated generally to say that analysis
  failures trigger a "uniformity failure", instead of "rejects the program".
  In one place, say that uniformity failure for derivative builtins
  trigger derivative_uniformity at the call site of that builtin;
  and that uniformity failures for barrier builtins trigger a
  shader-creation error.
- Triggering rules are named like identifiers.
- Define diagnostic filters, which have
    - affected range: span of source text
    - new severity
    - triggering rule
- Filters with an unknown triggering rule are ignored.
- There are global filters, which are given as directives at
   the top of the module (mixed in with enables).
- There are "range" filters, which are given as @diagnostic
   attributes before: control flow statements, switch bodies,
   function declarations, and compound statements.
- @diagnostic attributes may be given more than once on a particular
  object.  But they must not "conflict", meaning have the same
  range and triggering rule, but different "new severity".
- The affected ranges of diagnostic filters perfectly nest, by
  construction.
- To apply a filter to a diagnostic:
    - the triggering location must fall in the affected range
    - they must have the same triggering rule
    - the severity of the diagnostic is updated to the one specified
      in the filter.
- Define a step-wise process for processing triggered filters, roughly:
  1. For each diagnostic with triggering rule R, apply the diagnostic
     filter for rule R whose affected range is smallest but also
     encloses the diagnostic's triggering location, if one exists.
  2. Discard "off" diagnostics.
  3. For each rule R, all but one "info" diagnostic for R may be
     discarded.
  4. For each rule R, all but one "warning" diagnostic for R may be
     discarded. If there is such a diagnostic, all the info messages
     for R may be discarded.
  5. If there is at least one error diagnostic, all other diagnostics
     may be discarded.
     Generate either a shader-creation or pipeline-creation error.
  6. Diagnostics generated during shader-creation are reported
     in the messages member of the GPUCompilationInfo object.
  7. Error diagnostics generated during pipeline-creation are reported
     to the enclosing GPU error scope.

- Incorporate diagnostic severities into uniformity analysis
  - Define a "potential-trigger-set" as the set of trigger rules that
    might be triggered at locations we've already analyzed, but due
    to the non-uniformity of a value or flow that is yet to be analyzed.
    There are only two possible members:
    - derivative_uniformity
    - an unnamed triggering rule, for unfilterable uniformity requirements
       for compute shader barrier-like synchronization
  - The *RequiredToBeUniform nodes are expanded into one for each of the
    diagnostic severities that can be reported to the application.
    - This fixes which diagnostics are triggered, from which triggering
      locations.
    - Each is associated with a "potentially-triggered-set".
  - Introduce new parameter return tag
    * Splits parameter tag into two:
      * parameter tag for pure uniformity requirements
      * parameter return tag for effects on return value
    * Update analysis to refer to two tags appropriately

  - Define "nearest enclosing diagnostic filter" for a source location
    and triggering rule, when taken as a pair.

  - Call site tag for derivative-using functions is taken from the
    nearest enclosing diagnostic filter.

- Clarify affected range of diagnostics applied at the start of the
  syntactic form. They also apply to any *other* diagnostics listed at the
  start of *that* syntactic form.
  This matters if/when we add an "unrecognized diagnostic" triggering
  rule.  Per feedback from James Price.

- Clarify the wording of the overall uniformity analysis algorithm.

- No order of analysis is implied

- Simplify wording for definition of parameter tag, pointer parameter tag.
  Use bullet points.

- Don't early exit from the severity loop when triggering diagnostics
  for uniformity failures.

- Note that keeping the graph for a function is useful for providing
  informative diagnostics.

- Say that a "call site" location is specifically the source location for the first
  token in the parsed call_phrase grammar rule.

- The attribute multiplicity exception belongs in the row for @diagnostic

Co-authored-by: Alan Baker <[email protected]>
Co-authored-by: Jim Blandy <[email protected]>
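
A sketch of how the nesting rules above play out (the bindings and names are invented; the spellings follow the merged design): the filter whose affected range is smallest while still enclosing the triggering location is the one that applies.

  @group(0) @binding(0) var t : texture_2d<f32>;
  @group(0) @binding(1) var s : sampler;

  // Function-level filter: derivative_uniformity is a warning in this body...
  @diagnostic(warning, derivative_uniformity)
  @fragment
  fn fs_main(@location(0) uv : vec2<f32>, @location(1) cut : f32) -> @location(0) vec4<f32> {
    var color = vec4<f32>(0.0);
    if (cut > 0.5) {
      color = textureSample(t, s, uv);     // reported as a warning (outer filter)
    }
    if (cut > 0.25) {
      // ...except inside this compound statement, whose smaller enclosing
      // range turns the rule off for the call it contains.
      @diagnostic(off, derivative_uniformity) {
        color += textureSample(t, s, uv);  // no diagnostic
      }
    }
    return color;
  }
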