Shader Execution Reordering (SER) proposal #277

Open · wants to merge 22 commits into base: main
Conversation

rasmusnv

Add Shader Execution Reordering (SER) proposal for consideration.

Member

@damyanp damyanp left a comment


Looks ready to take into the repo for further review. Before completing the PR, please pick the next free proposal number (at time of writing this comment, that would be 0021) and rename the file and the reference to it on line 4 appropriately.


```C++
template<payload_t>
static HitObject HitObject::TraceRay(
```


We are interested in adding explicit communication from the application in terms of what it wants (or needs) from the TraceRay call regarding reordering. I would assume we have to pass extra values to the call through extra arguments or through call attributes/decorations (similar to [unroll]) instead of relying on implicit behavior, which differs between vendors: some implicitly sort/reorder, some don't.

This could be implemented using three possible values (using the attribute form of expressing it in the examples):

  1. PreferNoReorder - the default value, with behavior according to the current spirit. The application doesn't need the reordering to happen, but the call is still a reorder point and the implementation can decide to reorder. Note: this could also simply be spelled out in the spec as the default when Reorder calls are present.
  2. Reorder - with this value, the call is equivalent to:
     HitObject hit = [PreferNoReorder]HitObject::TraceRay( ... );
     ReorderThread( hit );
     The value should be limited to shader stages supporting ReorderThread(...).
  3. NoReorder - the implementation is required not to reorder. This may be necessary for algorithm correctness in some cases (e.g., a wave remaining the same before and after the TraceRay call). Note: there should be a note mentioning that this can degrade performance and thus should only be used for correctness; otherwise see PreferNoReorder.
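For concreteness, a sketch of how the proposed attribute forms might look at a call site (the attribute names are the ones proposed in this comment, not existing HLSL, and the elided arguments follow the `HitObject::TraceRay` signature):

```C++
// Default behavior: the call is still a reorder point, the implementation may reorder.
HitObject hit0 = [PreferNoReorder]HitObject::TraceRay( ..., payload );

// Equivalent to TraceRay followed by ReorderThread(hit).
HitObject hit1 = [Reorder]HitObject::TraceRay( ..., payload );

// Implementation must not reorder; only for correctness, may cost performance.
HitObject hit2 = [NoReorder]HitObject::TraceRay( ..., payload );
```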


What exactly would such a NoReorder enforce? Does it require the threads doing TraceRay to stay in the same wave as threads not doing TraceRay (similar to DXR 1.1)?

Example:

```C++
void RayGen() {
  uint4 A = WaveActiveBallot(true);  // WaveActiveBallot returns uint4
  if (<divergent cond>) {
    uint4 C = WaveActiveBallot(true);
    [NoReorder]
    TraceRay(…);
    uint4 D = WaveActiveBallot(true);
    // D is guaranteed to be equal to C (and made up of the same threads)
  }
  uint4 B = WaveActiveBallot(true);
  // Is B guaranteed to be equal to A (and made up of the same threads)?
}
```

The second one (A==B) is problematic if the driver uses continuation-passing style to implement raytracing (e.g. https://youtu.be/KDXOAwPVTWY?t=607).

Author


The proposed [Reorder] is equivalent to TraceRay followed by ReorderThread. We believe this could introduce confusion to an already complex API, especially regarding when and where reordering occurs. Instead, the implementation can reasonably perform straightforward analysis to recognize sequences like TraceRay/ReorderThread/Invoke, reducing the need for additional API complexity.

Regarding [NoReorder], we share @Flakebi's concerns about the feasibility of such an option. While RayQuery can be implemented like this (as you mention in a separate comment), it is not certain that every implementation can support calling anyhit and intersection shaders while not modifying active threads.

We can explicitly make HitObject::TraceRay and Invoke not reorder. Then the app can control reordering by adding or omitting ReorderThreads. If the app wants the implementation to decide, legacy TraceRay can be used. Note that this does not actually guarantee anything about active threads before/after the call. It is more about the trade-off between overhead and coherence.


So you're saying that some implementations can support ray query, but can't execute the shaders per lane without a resort? That's an interesting limitation which is currently not expressed in the specification, as there seem to be implied uses where the thread assignment does not get reordered by TraceRay even in the presence of any hit shaders being called by TraceRay. For example, if I want to keep quads together, you're saying there's no way to do that right now with TraceRay as explained in the SER specification?

My assumption is that any implementation supporting ray query can maintain the thread assignment.

The complication in the "straightforward analysis" is that tooling may move instructions around between TraceRay, ReorderThread, and Invoke, and some implementations may prefer to fuse the ReorderThread into either TraceRay or Invoke. It is then undecidable which one to do if there's a ReorderThread in the middle, even though the implementation may know that reordering in TraceRay, for example, will provide no benefit, and it can't communicate this. ReorderThread is a fine API if all reordering were limited to just that API call, but the reality is that there are multiple reorder points, and the specification language for SER specifically uses comments to explain where no reordering should happen, which is unfortunately not something the compiler knows about. I.e. in samples like:

```C++
// Invoke unified surface shading. We are already hit-coherent so not worth
// reordering again.
HitObject::Invoke( hit, payload );
```

Author


> So you're saying that some implementations can support ray query, but can't execute the shaders per lane without a resort? That's an interesting limitation which is currently not expressed in the specification, as there seem to be implied uses where the thread assignment does not get reordered by TraceRay even in the presence of any hit shaders being called by TraceRay. For example, if I want to keep quads together, you're saying there's no way to do that right now with TraceRay as explained in the SER specification?

TraceRay is a reorder point, so there are no guarantees about active threads afterwards. This doesn't necessarily imply that a full reorder will happen, but some amount of change in active threads may still happen. As mentioned, RayQuery does not invoke shaders, so it is not exactly comparable to TraceRay, which itself calls anyhit and intersection shaders.

> The complication in the "straightforward analysis" is that tooling may move instructions around between TraceRay, ReorderThread, Invoke

Could you clarify what the tooling is here? If you mean dxc I believe we can restrict movement of things like resource accesses.

> some implementations may prefer to [f]use the ReorderThread into either TraceRay or Invoke, and then it's undecidable which one to do if there's a ReorderThread in the middle of it, even though the implementation may know that re-ordering in TraceRay for example will provide no benefit - and it can't communicate this

Could you expand on this? If the driver knows that fusing with TraceRay will provide no benefit it seems logical that it should fuse with Invoke.

> ReorderThread is a fine API if all re-ordering would be limited to just that API call, but the reality is that there are multiple re-order points, and the specification language for SER specifically uses comments to explain where no reordering should happen - which is unfortunately not something the compiler knows about. I.e. in samples like:

Since there is no ReorderThread call before the Invoke, it makes sense to assume that no extra effort should be made to reorder the Invoke, beyond what is required for the underlying architecture. If the underlying architecture requires an extra reorder for the Invoke anyway, it may want to utilize information provided in the most recent ReorderThread call.


That requires us to specify it though: "TraceRay and Invoke will not perform 'heavy-weight' reordering", I assume only in the presence of ReorderThread calls, and thus they behave as if we always specified the [NoReorder] flag on them (well, nearly, because you're saying not heavy-weight, which would have to be defined as well: what does heavy-weight mean?). I think it's clear by this point that we're back to the original reason we wanted this expressed clearly and explicitly: to not rely on interpretation of what heavy-weight means or not.

Which brings me back to the question I raised: are there implementations which simply cannot guarantee that TraceRay maintains a 1:1 mapping? If so, that makes [NoReorder] impossible to implement, but I don't see why that should be the case, given you can always fall back to inlining all shaders and using RayQuery, and most likely applications would only use that in cases where there is no reason to reorder to begin with (i.e. only one any-hit shader).

Author


> That requires us to specify it though

Yes, we can specify this clearly and it can be phrased in a better way. The important part is what the app conveys to the driver and what the driver is allowed to do in return. The app does not explicitly expect any reordering for HitObject::TraceRay and HitObject::Invoke but the driver is still allowed flexibility for scheduling the work efficiently. The driver can account for things like the number of shaders and how to run them, and whether to fuse ReorderThread with another call (which effectively moves the reorder point).

> Which brings me back to the question I raised - are there implementations which simply cannot guarantee that TraceRay maintains a 1:1 mapping?

Forcing 1:1 mapping is very restrictive and not something we'd be comfortable with. It is certainly not always the case today. It is also unclear why this would be useful given that the app has little control over thread arrangement to begin with.

Inlining is not generally a valid solution as shaders are compiled separately and pipeline creation should not consume excessive time for recompilation. It can also become problematic if there are many call sites that need inlining of the same shader(s).


Regarding a strict [NoReorder], agreed that we shouldn't allow the app to force a 1:1 thread mapping.

The driver can't always inline everything (consider a pipeline with many shaders and many TraceRay calls in nested CHS/Miss, and a large recursion limit, or early compilation of collections), and without inlining such a guarantee would put an unreasonable burden on the implementation.
Note that this issue is not just about the set of active threads immediately before/after a TraceRay, but also about reconvergence with other threads that did not execute the TraceRay, see the earlier example by @Flakebi.

However, a [PreferNoReorder] hint on reorder points that tells the driver it shouldn't spend any additional effort on reordering for coherence would be fine in that regard.

Author


The Reorder points section has been updated with a more detailed account of the expected reordering behavior. Let me know if this aligns with our discussion or if there is still a need for attributes.

Collaborator

@derlemsft derlemsft Nov 22, 2024


In last week's meeting we talked about how the default behavior is no reorder, but implementations can allow reordering if explicitly specified by the user. HitObject::TraceRay followed by HitObject::Invoke is a prefer-no-reorder, because no ReorderThread is called directly. No attributes or parameters needed.

Are we ok to close this comment? @rasmusnv @rdrabins

- The UAV writer must issue a `DeviceMemoryBarrier` between the write and the
reorder point.

## Separation of ReorderThread and HitObject::Invoke


Actually, we would like to have additional functions (also see my comment on TraceRay); one example would be:

```C++
template<payload_t>
static HitObject HitObject::TraceRayReorderInvoke()
```

that would be equivalent to:

```C++
HitObject hit = [PreferNoReorder]HitObject::TraceRay( ..., payload );
ReorderThread( hit );
HitObject::Invoke( hit, payload );
```

It'd be back to the original DXR 1.0 TraceRay spirit; this time we'd know that, at least on implementations that support reordering, it should bring reordering here as well.

Motivations:

  • Some vendors can go directly to executing a shader without returning to the calling shader.
    This has some advantages, e.g. the whole wave doesn't wait for all rays to finish. This is getting increasingly important as traversal steps start to vary more and more in complex scenes.
  • We'd like to encourage writing it as a single instruction instead of relying on the compiler to fuse three separate operations together. For example:

    ```C++
    ...
    HitObject hit = [PreferNoReorder]HitObject::TraceRay( ..., payload );
    color = tex.Sample(sampler, UV);
    ReorderThread(hit);
    HitObject::Invoke( hit, payload );
    ```

    In this case, the app decided to put the .Sample() between TraceRay and ReorderThread to hide the latency of the TraceRay call. Our compiler will have a hard time deciding if it's ok to move the .Sample() after the reorder or not. It's much simpler if this code is written as:

    ```C++
    HitObject::TraceRayReorderInvoke(payload);
    color = tex.Sample(sampler, UV);
    ```

Author


TraceRayReorderInvoke is similar in spirit to the attributes.

The first point is not clear to us. It is already the case that the whole wave doesn't have to wait for all rays to finish at any reorder point.

We are not quite sure why the compiler is not able to move the Sample call after ReorderThread. There are no side effects in ReorderThread and TraceRay was already a reorder point, meaning that restrictions on memory visibility apply already. A sampled texture is also read-only and is not affected by side-effects.

Our stance is that it should be easy for an implementation to merge 2-3 calls into one if it is deemed beneficial, without adding further API surface. Any shader code in between the calls is typically not affected by any side effects that may occur in TraceRay or Invoke. The only exception would be RW UAVs that are globally coherent and are accessed in between the calls. However, it is always possible to move code across ReorderThread since it does not cause or observe side effects.

Collaborator


In last week's meeting we agreed to no monolithic API, but would make adjustments to the spec for clarity.
Options going forward:

  • In the spec we could say calling the three functions is semantically the same as calling TraceRay
  • In the spec explicitly call out that the compiler and driver might merge these functions, and that putting anything in between the function calls might prevent that.
  • Have example code as a helper function: in the example code that is TraceRay, reorder, invoke, the implementation does those three in a row, and it's just sample code for the case where the app wants that.
  • Add a helper library to the compiler, and ship some helper functions as an HLSL header

@rdrabins given the above options, which would intel prefer?


@Anteru Anteru Nov 22, 2024


From the options above, the last one seems most preferable - a function we can reliably pattern match and which is "officially sanctioned".


---

#### HitObject::Invoke


As in the comment for TraceRay, we are interested in adding more communication about reordering.
We'd consider a set of values:

  1. PreferNoReorder - (same as for TraceRay) the default value, with behavior according to the current spirit. The application doesn't need the reordering to happen, but the call is still a reorder point and the implementation can decide to reorder. Note: this could also simply be spelled out in the spec as the default when Reorder calls are present.
  2. Reorder - with this value, the [Reorder]HitObject::Invoke( hit, payload ) call would be equivalent to:
     ReorderThread( hit );
     HitObject::Invoke( hit, payload );
     The value should be limited to shader stages supporting ReorderThread(...).
  3. Reorder(hint,bits) - calling [Reorder(hint,bits)]HitObject::Invoke( hit, payload ) would be equivalent to:
     ReorderThread( hit, hint, bits );
     [PreferNoReorder]HitObject::Invoke( hit, payload );
     As above, the value should be limited to shader stages supporting ReorderThread(...).

We don't propose NoReorder here. In the case of TraceRay, implementations already have inline raytracing in place, so they can probably use it to implement that.

Depending on the implementation, the shader invocation may necessarily involve reordering as a byproduct of how function calls are implemented.
Such an implementation has to deal with difficult choices in compilation, e.g.:

```C++
ReorderThread(hit, hint, bits);
color = tex.Sample(sampler, UV);
HitObject::Invoke( hit, payload );
```

  • Does the Sample benefit from the reordering before it or not?
  • Should it reorder twice (at ReorderThread and at Invoke), or treat ReorderThread as a no-op and reorder according to the hint while invoking the shader?

When the separation of ReorderThread isn't an important part of the algorithm, it's easier if the application writes it like this:

```C++
color = tex.Sample(sampler, UV);
[Reorder(hint, bits)]HitObject::Invoke( hit, payload );
```

Author


Again, we do not see a real need for this. If attributes are present at these calls, there is a risk that the user would simply specify [Reorder] on both TraceRay and Invoke, which does not actually mean what we want.

For questions like if Sample benefits from a particular location w.r.t. reordering, the assumption should be that it has an impact and that the application performs the access in the coherence scope where it makes the most sense. However, it does not necessarily hinder fusion. If it is without dependencies, it can still be moved before or after the fused call.

Collaborator


Same as above, generally agreed no attributes.

Member

@pow2clk pow2clk left a comment


I'll disclose that I'm not a DXR expert. A lot of my feedback is copy editing, trying to make the document easier to read. I feel a bit more strongly about my thoughts on testability and DXIL op feedback.

State value getters return scalar, vector or matrix values dependent on the provided opcode.
Scalar getters use the `hitobject_StateScalar` dxil intrinsic and the return type is the state value type.
Vector getters use the `hitobject_StateVector` dxil intrinsic and the return type is the vector element type.
Matrix getters use the `hitobject_StateMatrix` dxil intrinsic and the return type is the matrix element type.
Member


I think it might be clearer to leave these out of the table above and give them their own special section. It already looks like a new topic because of the table. I think dropping them to the bottom instead of in the middle of the list of intrinsics would be less confusing.

Author


I moved them last, but I think I disagree about leaving them out of the initial opcode table. That table is supposed to list all opcodes and their values. The later table doesn't have the values.

Comment on lines 976 to 987
### Memory coherence and visibility

Due to the existence of non-coherent caches on most modern GPUs, an
application must take special care when communicating information across
reorder points through memory (UAVs).

Specifically, if UAV stores or atomics are performed on one side of a reorder
point, and on the other side the data is read via non-atomic UAV reads, the
following is required:
- The UAV must be declared `[globallycoherent]`.
- The UAV writer must issue a `DeviceMemoryBarrier` between the write and the
reorder point.
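As an illustration, the requirement could look like this in shader code (the resource name and the surrounding function are made up for the example; `[globallycoherent]` and `DeviceMemoryBarrier` are existing HLSL):

```C++
// Illustrative sketch: communicating a value across a reorder point via a UAV.
[globallycoherent] RWStructuredBuffer<uint> Scratch : register(u0);

void WriteAcrossReorderPoint(uint idx, uint value, HitObject hit)
{
    Scratch[idx] = value;   // UAV store on one side of the reorder point
    DeviceMemoryBarrier();  // required barrier between the write and the reorder point
    ReorderThread(hit);     // reorder point; the data can now be read non-atomically
}
```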

Requiring global memory coherence can be a large hit to performance. Especially since a driver can decide not to sort if the workload is coherent (or sort in a more local manner).
Using globallycoherent and DeviceMemoryBarrier makes it impossible for a driver to omit the synchronization work when it’s not needed.

I propose that we add a new UAV attribute and barrier scope that relates the synchronization to reordering. So,

  • The UAV must be declared `[reordercoherent]`
  • The UAV writer must issue a `Barrier(UAV_MEMORY, THREAD_REORDER_SCOPE)` between the write and the
    reorder point.

That way, when a driver decides not to reorder, the barrier can be a no-op.
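A sketch of the proposed usage (`[reordercoherent]` and `THREAD_REORDER_SCOPE` are names suggested in this comment, not existing HLSL; the resource name is illustrative):

```C++
// Illustrative sketch of the proposed reorder-scoped coherence.
[reordercoherent] RWStructuredBuffer<uint> Scratch : register(u0);

void WriteAcrossReorderPoint(uint idx, uint value, HitObject hit)
{
    Scratch[idx] = value;                       // UAV store before the reorder point
    Barrier(UAV_MEMORY, THREAD_REORDER_SCOPE);  // proposed reorder-scope barrier
    ReorderThread(hit);                         // if the driver decides not to reorder,
                                                // the barrier can be a no-op
}
```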

Author


This seems useful at first glance, but expanding the memory model may overly broaden the scope of this proposal. It might be better suited as a separate proposal.

It is also unclear what use case [reordercoherent] is addressing. A UAV marked as [reordercoherent] can only communicate within the same DispatchRaysIndex, across any reordering. This essentially makes it a local scratch pad, which could be more efficiently handled using live state in the shader program (or as payload values if communicated with other shaders).


> This seems useful at first glance, but expanding the memory model may overly broaden the scope of this proposal. It might be better suited as a separate proposal.

It needs one more enum value in SEMANTIC_FLAG and (optionally? see below) a [reordercoherent] marker on UAVs.

Vulkan also has a separate scope ShaderCallKHR for this.

Requiring global coherence is surely fine for some implementations but may incur a performance hit in other implementations.

> It is also unclear what use case [reordercoherent] is addressing. A UAV marked as [reordercoherent] can only communicate within the same DispatchRaysIndex, across any reordering.

The marker was meant to connect the barrier to certain UAVs. This allows UAVs that are not explicitly marked to skip synchronization.
I suppose an alternative is to not require marking UAVs but to expect all bound UAVs to be synchronized with a reorder-scope barrier.
That may be expensive if reordering happens globally, though; then this would be equivalent to marking all UAVs as [globallycoherent].

Author


> The marker was meant to connect the barrier to certain UAVs. This allows UAVs that are not explicitly marked to skip synchronization.

This part I understand. What I would like to understand is the use case for a reorder-scope barrier. Since it can only be used to synchronize within the same DispatchRaysIndex, the data that is written/read to/from the reorder-scope coherent UAV can instead be stored as live state or in payload during intermediate steps and then be written once at the end (if needed). This seems more efficient overall.

Collaborator


From last week's meeting: AMD's proposal for a reorder coherence domain is to just add new enums for the UAV attribute and the Barrier() enum parameter.

@rasmusnv @Flakebi can we confirm the decision here?

@rasmusnv
Author

@microsoft-github-policy-service agree company="NVIDIA"

Member

@damyanp damyanp left a comment


It looks like there's a large number of unresolved conversations on this PR - these either need to be moved into issues or addressed in the spec before we can merge it.

@damyanp damyanp added this to the Shader Model 6.9 milestone Oct 16, 2024
@rasmusnv
Author

I have picked proposal number 0025. All pending changes have been made. If anything is left to discuss or change I suggest we open separate issues after the merge.

Member

@damyanp damyanp left a comment


> I have picked proposal number 0025. All pending changes have been made. If anything is left to discuss or change I suggest we open separate issues after the merge.

This seems like a good approach to me, it's getting quite hard to keep track of everything in this PR! Thank you to everyone for all the detailed comments and responses!

I'm going to mark this as approved, but I think we'll still need to get all the conversations marked as resolved before the PR can be merged, and this repo is set to require two approvals as well.

For resolving the conversations we should follow up with the original authors and/or file issues (and link them in the conversation before resolving it) if necessary.

Comment on lines +113 to +114
The `HitObject` has value semantics, so modifying one `HitObject` will not
impact any other `HitObject` in the shader. A shader can have any number of


Is there a reason for HitObject objects having value semantics while RayQuery objects have reference semantics?

Author


HitObject is lighter weight than RayQuery (no traversal state, only hit information), and RayQuery mutates during traversal. There's also a lifetime problem with references: for example, if you allocate a RayQuery within a loop but assign it to a variable outside, it will appear to go out of scope. Because of the value semantics of HitObject, this will just work.


A `HitObject` is default-initialized to encode a NOP-HitObject (see `HitObject::MakeNop`).
A NOP-HitObject can be used with `HitObject::Invoke` and `ReorderThread` but
does not call shaders or provide additional information for reordering.
Collaborator


> does not call shaders or provide additional information for reordering.

Should it be specified that both operations do nothing (ReorderThread causes no reordering)? Or is it valid for an implementation to do thread reordering here when HitObject is a NOP-HitObject?


I read it as: it is valid to do reordering, though no hint or any other info is available. So, likely ReorderThread does no reordering. But we don’t want to guarantee that it does nothing, for the same reason that we don’t want to guarantee reconvergence with a [NoReorder] attribute.

Author


It is valid to reorder even with a NOP-HitObject. At the very least this makes sense if not all threads in a wave have a NOP-HitObject. Hints can still be used as well.

similarity of subsequent work being performed by threads. The resolution of
the hint is implementation-specific. If an implementation cannot resolve all
values of `CoherenceHint`, it is free to ignore an arbitrary number of least
significant bits. The thread ordering resulting from this call may be


We'd like to have a clearer description of how the hint maps to coherence.
Could we add a sentence stating that higher bits of the hint have more impact on coherence than lower ones?

e.g. consider 3 hints, and choose the most coherent pair of hints:
A = 0b100000
B = 0b111111
C = 0b011111

For example, three methods for extracting coherence by defining the opposite divergence metric:

  1. If a higher bit carries more coherence than all lower bits after it, the A,B pair is most coherent. Divergence is more or less xor(...).
  2. If it's the arithmetic difference, then A and C. Divergence is more or less abs(subtract(...)).
  3. It could also be divergence = bitcount(xor(...)), which tries to find the most similar bits; then B and C is most coherent.

Option 3 doesn't cope well with "it is free to ignore an arbitrary number of least significant bits", but 1 and 2 fit well into the definition.

I'm mentioning this because we discuss this feature only as reordering and forget that, based on the ordering, the system needs to extract waves to execute.
Without understanding the actual "distance" between hint codes, that is hard. It would be good to have that understanding on both sides of the GPU.
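The three candidate metrics are easy to compare side by side. A small host-side C++ sketch (the function names are ours, not part of the proposal):

```cpp
#include <bitset>
#include <cstdint>

// The three hint values from the example above.
constexpr uint32_t A = 0b100000; // 32
constexpr uint32_t B = 0b111111; // 63
constexpr uint32_t C = 0b011111; // 31

// Metric 1: numerical value of the XOR (the first differing bit dominates).
constexpr uint32_t xorDist(uint32_t x, uint32_t y) { return x ^ y; }

// Metric 2: absolute arithmetic difference.
constexpr uint32_t arithDist(uint32_t x, uint32_t y) {
    return x > y ? x - y : y - x;
}

// Metric 3: number of differing bits.
inline uint32_t bitcountDist(uint32_t x, uint32_t y) {
    return static_cast<uint32_t>(std::bitset<32>(x ^ y).count());
}
```

With these definitions the closest pair is A&B under metric 1 (distance 31, vs. 32 and 63), A&C under metric 2 (distance 1), and B&C under metric 3 (a single differing bit), matching the three outcomes listed above.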

Author

I believe arithmetic difference is the most flexible and intuitive definition for users. This approach allows a hint to incorporate both hierarchical aspects (with more important elements in MSBs) and ranges of bits that represent closeness along some axis. I agree it should be spelled out.

@jasilvanus jasilvanus Nov 21, 2024

Agreed that it makes sense to add some details on what we mean by a hierarchical hint, and to give an example.

However, I think the numerical value of the XOR makes more sense than arithmetic difference. We could also mention (numerical) sorting, even if the implementation doesn't explicitly sort. It coincides with lexicographical sorting by individual bits, which nicely matches the hierarchical interpretation.

Consider:

hint1 = 0b0000
hint2 = 0b0111
hint3 = 0b1000

If the bits in the hint are intended to be used hierarchically (as per my understanding), then hint2 is closer to hint1 than to hint3. That is reflected in the numerical value of the XOR: hint2 ^ hint1 = 0b111, while hint2 ^ hint3 = 0b1111. However, using arithmetic difference, hint2 would be closer to hint3.

Edit: Fixed the example.

Author

XOR prioritizes the strictly hierarchical use case. The problem with XOR is when a bit range represents similarity along some linear axis and we want to group based on closeness.

Take for example the following X,Y hint pairs along some linear axis of similarity:

| X | Y | Arith | XOR |
|---|---|-------|-----|
| 5 | 0 | 5     | 5   |
| 5 | 1 | 4     | 4   |
| 5 | 2 | 3     | 7   |
| 5 | 3 | 2     | 6   |
| 5 | 4 | 1     | 1   |
| 5 | 5 | 0     | 0   |
| 5 | 6 | 1     | 3   |
| 5 | 7 | 2     | 2   |
| 5 | 8 | 3     | 13  |
| 5 | 9 | 4     | 12  |

In this use case, arithmetic difference produces a desirable metric but not XOR.


I was interpreting the hints to have hierarchical semantics, but checking the spec again I couldn't find any explicit mention of that, so fine with me.

Again, good point that we should explain this.


I view it like this: in a hint one encodes closeness in two kinds of metrics:

  1. discrete space: any difference on the given bits means you are different, on this or that property, e.g.
    • you will use resources with this and that indices
    • the shader will enter some if-branch for some particular effect
  2. some numeric, arithmetic value based on which you can calculate distance. The only example that comes to my mind, as @rasmusnv mentions, is linear things. I think the example most likely to be used is parts of Morton-coded directions and origins.

Now, the problem with arithmetic difference is that it can see two hints with all bits different as the closest, so it totally breaks case 1.

On the other hand, XOR is successfully used for checking Morton builders. Even more crudely, the closer code could be considered the one whose first differing bit is lower (maybe I should put that on the list too, actually). So XOR should work great for case 1 and well enough for case 2, IMHO.

Author

I agree that there are some interesting aspects to specifying XOR between the two, but I think we need to consider something like this in the context of the entire system to see if it's reasonable.

Let's say a sort is performed that will put the hint values in descending order. Chunks of this sorted work are distributed to various execution units for dispatch. This should work for both arithmetic distance (it's an arithmetic sort) and XOR distance (the MSB has the highest weight in the order). The possible approximation with respect to the distance metric is at chunk boundaries.

As mentioned, this still leaves the question of how to prioritize dispatch groups within each execution unit when values are not a perfect match. This assumes that we are doing something sophisticated, rather than simply grabbing the next N sorted items.

In the earlier example, the three values would be ordered as:

B = 0b111111
A = 0b100000
C = 0b011111

Let's assume that we only have to group 2 items. We either pick A&B or A&C, depending on whether it's an XOR or arithmetic distance metric. Another choice is a strict order metric, which is actually the simplest in this case. That would indicate that we start with A&B.

Then consider the case where we have to group more work, say 8. Let's say that we have some duplicates (x-factor indicates the number of duplicates):

B = 0b111111 x 4
A = 0b100000 x 1
C = 0b011111 x 4

Would we be better served grouping B&C, B&A, A&C? I think this is not clear. What I'm trying to say is that the exact distance metric for dispatch grouping might be best left unspecified.

Maybe a suitable formulation would be that the system should in general try to dispatch hint values in arithmetic order (NOTE: not distance), but the exact mapping to waves is left unspecified.

Thoughts?
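As a sketch of that formulation, assuming a simple "sort descending, then chunk" model (the function name and chunk size are invented for illustration; how work is grouped into waves within a chunk stays unspecified):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Sort the hint values in descending arithmetic order, then carve the
// sorted sequence into fixed-size chunks for the execution units.
std::vector<std::vector<uint32_t>>
groupForDispatch(std::vector<uint32_t> hints, std::size_t chunkSize) {
    std::sort(hints.begin(), hints.end(), std::greater<uint32_t>());
    std::vector<std::vector<uint32_t>> chunks;
    for (std::size_t i = 0; i < hints.size(); i += chunkSize) {
        const std::size_t end = std::min(i + chunkSize, hints.size());
        chunks.emplace_back(hints.begin() + i, hints.begin() + end);
    }
    return chunks;
}
```

For the three example hints, descending order gives B (0b111111), A (0b100000), C (0b011111), so with a chunk size of 2 the first group is B&A, matching the strict-order choice above.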


I would prefer if the expectation was that we'd group A&B in your second example, based on the XOR distance. Given that the grouping is allowed to throw away lower-order bits, it already implies higher-order bits are more important, and ISVs will likely use them to set "feature bits" anyway, because "arithmetic distance" between shaders is not really something you can calculate. But you can say "those two shaders are going to do X" and we want to group by similarity, so the more similar bits are set, the more likely they are similar, with higher bits used to indicate "more expensive" things.

Undefined means it'll be defined by implementations, after all.


```C++
// Primary ray wants perfect shadows and does not need to reorder as it is
// coherent enough.
```
@rdrabins rdrabins Nov 18, 2024

We'd appreciate language that doesn't suggest that there is no reordering at TraceRay.
The expectation should be that the implementation only minimizes it, for instance:

// Primary ray wants perfect shadows and does not need
// to reorder more than required and best performant for given implementation
// The domain of shadow rays is expected to be coherent enough.

Author

The latest commit includes clarifications like this.

Comment on lines +994 to +1002
- The UAV must be declared `[reordercoherent]`.
- The UAV writer must issue a `Barrier(UAV_MEMORY, REORDER_SCOPE)` between
the write and the reorder point.

When communicating both between threads and across reorder points, global coherency
can be utilized:
- The UAV must be declared `[globallycoherent]`.
- The UAV writer must issue a `DeviceMemoryBarrier` between the write and the
reorder point.
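A minimal usage sketch of the reordercoherent path (HLSL-style pseudocode in the style of the proposal; `Accum` and the two helper functions are invented, while `[reordercoherent]`, `Barrier(UAV_MEMORY, REORDER_SCOPE)`, and `ReorderThread` are the constructs under discussion):

```C++
[reordercoherent] RWStructuredBuffer<uint> Accum : register(u0);

[shader("raygeneration")]
void RayGen()
{
    uint idx = DispatchRaysIndex().x;

    // Write side: store, then make the store visible across the
    // upcoming reorder point with a reorder-scoped barrier.
    Accum[idx] = ComputeValue(idx);        // hypothetical helper
    Barrier(UAV_MEMORY, REORDER_SCOPE);

    // Reorder point: the work may continue on a different thread.
    ReorderThread(ComputeHint(idx), 16);   // hypothetical hint helper

    // Read side: no further barrier is required by the rule above.
    uint value = Accum[idx];
}
```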


Should we also require a DeviceMemoryBarrier between a reorder point (including shader entry) and the read?

When not requiring it, the compiler would have to conservatively assume an implicit barrier after every reorder point that is followed by a reordercoherent/globallycoherent UAV read, even in cases where the app does not require one. Whether this implicit barrier may cost performance depends on the implementation.

I think this is related to the general discussion on the DXR memory model and how globallycoherent should be correctly used for inter-thread communication even in non-reorder cases.

Author

From my point of view, the DeviceMemoryBarrier call makes sure that all writes have completed before the reorder point gets to move the work to a different thread. When reading after the reorder point, there's no ordering problem. The only risk is that stale data is read from a local cache. That is mitigated by the globallycoherent attribute, causing the local cache to be bypassed.


The only risk is that stale data is read from a local cache. That is mitigated by the globallycoherent attribute, causing the local cache to be bypassed.

That depends on how the implementation implements globallycoherent. For instance, I think it is possible to implement globallycoherent by allowing such loads and stores to be served by local caches, and only invalidating/flushing caches at explicit memory barriers. Such an implementation would be unnecessarily forced to invalidate the local cache on shader entry in cases where such a UAV is accessed, but the shader doesn't actually require the synchronization.

Also, from a formal perspective, I find it a bit problematic to define abstract language semantics in terms of "local caches" of the implementation. It would be better to say that the barrier between the write and the reorder point serves as a release operation, and the suggested corresponding barrier before the read as an acquire operation. HLSL doesn't (yet?) support release/acquire, but this would be closer to it and could more easily be transformed to use release/acquire semantics in the future. With a single barrier, that's not possible.

Author

I agree in principle, but we are not trying to extend the memory model beyond compute here (besides the new barrier type). The reorder point is simply seen as a change of thread. In a compute shader, a barrier on the write side would surely be enough to ensure write ordering?
D3D11_3_FunctionalSpec, section 7.14.4, indeed mentions local caches.
Also see "Producer - consumer dataflow through UAVs" in Work Graphs.

@jasilvanus jasilvanus Nov 21, 2024

Let's separate the reordercoherent and globallycoherent discussions. I think globallycoherent semantics in DXR are a bit problematic because of the synchronization limitations, but that's not the focus of this change.

For reordercoherent, are you saying that you agree that not requiring a barrier between the reorder point and the load does penalize some implementations, but we accept that in favour of a simpler API that is closer to what e.g. WG require?

Author

What I'm trying to say is that we are not trying to add acquire and release semantics to HLSL with this spec proposal.

What you are suggesting is calling Barrier both before and after the reorder point. First of all, note that you would have to have a barrier after each and every reorder point that wants to read the data after a single write, since each reorder point will be a new thread. So a single write can result in many required barriers, which may be difficult for the user. Second, there is no distinction between acquire and release with Barrier, which means that all possible implementations are harmed (e.g., ordering + cache flush on both sides). Finally, it is not clear that there is any case where a high-frequency cache flush would be better than adopting the globallycoherent semantics of bypassing local caches, which all implementations already support.


What I'm trying to say is that we are not trying to add acquire and release semantics to HLSL with this spec proposal.

Agreed.

What you are suggesting is calling Barrier both before and after the reorder point. First of all, note that you would have to have a barrier after each and every reorder point that wants to read the data after a single write since each reorder point will be a new thread.

Nit: Let's not mix the thread terminology (as in the globallycoherent discussion) with the reorder terminology. We can say that at reorder points, reordercoherent synchronization is necessary. If we want to give it a name, maybe "execution locus" is a better term instead of "thread"?

So a single write can result in a many required barriers, which may be difficult for the user.

Agreed.

Second, there is no distinction between acquire and release with Barrier which means that all possible implementations are harmed (e.g., ordering + cache flush at both sides).

In the worst case, putting a reordercoherent barrier immediately after every reorder point (CallShader/TraceRay/ReportHit/HitObject ops / shader entry) should be equivalent to the current proposed behavior without the barrier on the reading side. Because it immediately follows a reorder point, it is clear that only reads can be affected, and if the implementation uses the bypass strategy nothing needs to be done.
An implementation that doesn't use cache bypass would have to add exactly all these barriers, and the explicit one just makes that explicit.

Finally, it is not clear that there is any case where a high-frequency cache flush would be better than adopting the globallycoherent semantics of bypassing local caches, which all implementations already supports.

Agreed that this is totally unclear in general, but I think when designing a specification that is supposed to be used for an extended lifetime, including on not-yet-existing hardware, one should be careful not to unnecessarily restrict the freedom of possible implementations.

To be clear, I am not demanding to change it to require the barrier on the reading side, I'm rather pointing out a potential issue. If there is consensus that the single-barrier solution is preferable, I'm totally fine with that.


We re-discussed internally and are fine with the proposed solution of only requiring a barrier on the write side. It may penalize some implementations, but it's really a very special corner case where it makes a difference, and it's questionable whether it's worth the effort optimizing for that.

Comment on lines +989 to +992
This proposal includes a new coherence scope for reorder points. The scope
is limited to communication within the same dispatch index. Specifically,
if UAV stores or atomics are performed on one side of a reorder point, and
on the other side the data is read via non-atomic UAV reads, the following


In general, there are two cases of UAV accesses across reorder points:

  • within the same shader (e.g. RGS) across e.g. a TraceRay
  • across shaders, e.g. a store in RGS and a load in CHS

We should explicitly mention both cases and the required behavior.
My current understanding is that we require reordercoherent in both cases, which seems to differ from Vulkan, which AFAIK doesn't require explicit synchronization within a shader.
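Spelling out both cases in HLSL-style pseudocode might help (shader and resource names are invented; the write-side barrier rule is the one proposed in this change):

```C++
[reordercoherent] RWStructuredBuffer<uint> Shared : register(u0);

struct [raypayload] Payload
{
    uint value : read(caller) : write(caller);
};

[shader("raygeneration")]
void RayGen()
{
    uint idx = DispatchRaysIndex().x;

    // Case 1: communication within the same shader across a reorder point.
    Shared[idx] = 1;
    Barrier(UAV_MEMORY, REORDER_SCOPE);
    HitObject hit = HitObject::TraceRay( /* ... */ );  // reorder point
    uint v = Shared[idx];  // must observe the store above
}

// Case 2: communication across shaders. The shader transition is itself
// a reorder point, so the same write-side barrier rule applies before it.
[shader("closesthit")]
void ClosestHit(inout Payload p, BuiltInTriangleIntersectionAttributes attr)
{
    uint v = Shared[DispatchRaysIndex().x];
}
```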

Author

It is mentioned that a shader transition is another reorder point so the same rules apply. Does that cover it? Do you have a relevant reference for Vulkan?


I agree that the current wording implicitly means that reordercoherent is required in both cases, but I think we should explicitly spell that out, particularly if it differs in a subtle way from what Vulkan does.

Check the ShaderCallKHR scope in https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#shaders-scope-shadercall. I interpret it as requiring explicit synchronization across shader call boundaries, but not across reorder points within a shader.
