feat: Restructure how we do sync / waits #153
Merged
The big issue we have right now is that we're limited by the number of "slots" a counter has. Either users are stuck with the "base" 4 slots, or they have to dynamically allocate to get more, which is also fragile because the user has to know how many they need beforehand.
The core realization of this change is that any time a fiber is waiting (i.e. it needs to be in the counter's queue), it's "asleep", and we can guarantee its stack memory is valid. So instead of the counter allocating memory, we allocate the memory for the wait on the waiting fiber's stack, and use a linked list to store the "queue" of waiting fibers.
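A minimal sketch of the idea, assuming a generic fiber scheduler (all names here are illustrative, not the actual API): the wait node is a stack local in the waiting fiber, so the queue is an intrusive linked list with no heap allocation and no fixed slot count.

```cpp
#include <atomic>
#include <mutex>

// Scheduler hooks, assumed to exist in some form (hypothetical names):
void SuspendCurrentFiber();    // put the current fiber to sleep
void ResumeFiber(void *fiber); // make a sleeping fiber runnable again

struct WaitNode {
	void *Fiber;    // handle of the fiber to resume
	WaitNode *Next; // intrusive singly linked list
};

struct Counter {
	std::atomic<int> Value{0};
	std::mutex QueueLock;
	WaitNode *WaitHead = nullptr; // head of the list of sleeping fibers

	// Called from the waiting fiber. The node is a stack local: the fiber is
	// suspended until it's resumed, so its stack (and the node) stays valid.
	void Wait(void *currentFiber) {
		WaitNode node{currentFiber, nullptr};
		{
			std::lock_guard<std::mutex> guard(QueueLock);
			if (Value.load() == 0) {
				return; // already finished, nothing to wait for
			}
			node.Next = WaitHead;
			WaitHead = &node;
		}
		// A real implementation must also make sure the fiber has fully
		// switched away before it can be resumed; that handshake is elided.
		SuspendCurrentFiber();
	}

	void Decrement() {
		if (Value.fetch_sub(1) != 1) {
			return; // not the last task
		}
		WaitNode *head;
		{
			std::lock_guard<std::mutex> guard(QueueLock);
			head = WaitHead;
			WaitHead = nullptr;
		}
		for (WaitNode *n = head; n != nullptr; n = n->Next) {
			ResumeFiber(n->Fiber); // wake every sleeping waiter
		}
	}
};
```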
BREAKING CHANGE: This removes AtomicCounter, TaskCounter, and AtomicFlag and replaces them with WaitGroup. WaitGroup functions very similarly to TaskCounter, but users no longer need to worry about how many "waiting fiber slots" they need. Fibtex is also restructured; it's no longer possible to configure the "lock behavior".
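For illustration, usage after the change looks roughly like the sketch below. The exact headers and signatures are assumptions and may differ from the real API; the point is that WaitGroup is used where a TaskCounter was, with no slot count to configure.

```cpp
#include "ftl/task_scheduler.h"
#include "ftl/wait_group.h"

void RunAndWait(ftl::TaskScheduler *scheduler, ftl::Task *tasks, unsigned count) {
	// Before: a TaskCounter with a fixed (or pre-sized) number of waiting
	// fiber slots. After: WaitGroup tracks waiters on their own stacks.
	ftl::WaitGroup wg(scheduler);
	scheduler->AddTasks(count, tasks, ftl::TaskPriority::Normal, &wg);
	wg.Wait(); // this fiber sleeps in the WaitGroup's intrusive queue
}
```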