
Allocator: apply violence to free up some bits in the chunk header #332

Open
nwf opened this issue Nov 1, 2024 · 0 comments

The current allocator is a dlmalloc-like design with a header fronting every allocated object or contiguous span of unallocated space. Specifically, the header is defined as follows:

```cpp
struct __packed __aligned(MallocAlignment) MChunkHeader
{
	/**
	 * Each chunk has a 16-bit metadata field that is used to store a small
	 * bitfield and the owner ID in the remaining bits. This is the space not
	 * consumed by the metadata. It must be reduced if additional bits are
	 * stolen for other fields.
	 */
	static constexpr size_t OwnerIDWidth = 13;
	/**
	 * Compressed size of the predecessor chunk. See cell_prev().
	 */
	SmallSize prevSize;
	/**
	 * Compressed size of this chunk. See cell_next().
	 */
	SmallSize currSize;
	/// The unique identifier of the allocator.
	uint16_t ownerID : OwnerIDWidth;
	/**
	 * Is this a sealed object? If so, it should be exempted from free in
	 * `heap_free_all` because deallocation requires consensus between the
	 * holder of the allocator capability and the holder of the sealing
	 * capability.
	 */
	bool isSealedObject : 1;
	/// Is the predecessor chunk in use?
	bool isPrevInUse : 1;
	/// Is this chunk in use?
	bool isCurrInUse : 1;
	/// Head of a linked list of claims on this allocation
	uint16_t claims;
	// ...
};
```

Of note, 16 bits of every header are dedicated to finding the previous header. However, the only operations that traverse the header linked list backwards are:

- the `ok_in_use_chunk` integrity check function, having already checked that the prior chunk was not in use, and
- `mspace_free_internal` when doing coalescing, gated, again, on the prior chunk not being in use.
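To make the gating concrete, here is a minimal, hypothetical model of the current scheme (field widths, size compression, and all other members elided): backward traversal steps back by the recorded predecessor size, and is only consulted when the predecessor is free.

```cpp
#include <cassert>
#include <cstdint>
#include <new>

// Simplified stand-in for MChunkHeader; names mirror the real fields but
// everything else here is illustrative.
struct Header
{
	uint16_t prevSize;    // stand-in for the compressed SmallSize prevSize
	uint16_t currSize;    // stand-in for the compressed SmallSize currSize
	bool     isPrevInUse; // stand-in for the 1-bit flag
	bool     isCurrInUse; // stand-in for the 1-bit flag

	// Analogue of MChunkHeader::cell_prev(): step back by prevSize bytes.
	Header *cell_prev()
	{
		return reinterpret_cast<Header *>(
		  reinterpret_cast<char *>(this) - prevSize);
	}
};

// Coalescing-style use, mirroring mspace_free_internal's gating: prevSize
// is consulted only when the prior chunk is not in use.
Header *coalesce_target(Header *h)
{
	return h->isPrevInUse ? h : h->cell_prev();
}
```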

There's a wonderful thing about chunks not in use: their bodies are free for our use. That's where we put the `MChunk` and `TChunk` free-object metadata. For the small price of increasing our minimum allocation size (including the chunk header) from 16 bytes (the header and two 32-bit addresses, `sizeof(MChunk)`) to 24 (since headers must be 8-byte aligned), we could put more stuff therein. Notably, we can steal a trick from mallocs past and stuff a pointer (or an address, or even a relative offset, as desired) at the foot of a free span, pointing back to its header. `!MChunkHeader::isPrevInUse` then indicates that this footer is present and valid.
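A minimal sketch of the write side of this boundary-tag trick, with hypothetical names throughout: when a chunk is freed, the last word of its span records where its header lives, for a successor whose `isPrevInUse` bit is clear to read back.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical footer type: here an absolute address, but it could equally
// be a relative offset back to the header.
using Footer = uintptr_t;

// Write a footer at the foot of a free span of `size` bytes beginning at
// `header` (`size` includes the header itself).  memcpy avoids any
// alignment assumptions about the foot of the span.
void write_footer(void *header, size_t size)
{
	Footer f = reinterpret_cast<Footer>(header);
	std::memcpy(static_cast<char *>(header) + size - sizeof(Footer),
	            &f, sizeof(Footer));
}
```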

This would let us reclaim the 16 bits in object headers for other purposes: larger relative displacements for the next and claims pointers, more owner bits, or even moving owners into heap objects, like claims. Nor would the refactoring, I think, be that much work, since it's really just a change to the definition of `MChunkHeader::cell_prev`, as that determines both the location and the encoding of the pointer to the previous chunk.
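The read side of that change might look like the following hypothetical sketch of a refactored `cell_prev`: rather than decoding a `prevSize` field, it reads the footer the free predecessor left in the word immediately below this header. This is only valid when `isPrevInUse` is clear, which is exactly the case in the two traversals identified above.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <new>

// Simplified, hypothetical header: prevSize is gone; the predecessor is
// found via the footer its free span leaves just below this header.
struct Hdr
{
	uint16_t currSize;
	bool     isPrevInUse;

	// Refactored analogue of cell_prev(): read the absolute address (or,
	// in a real implementation, a compressed offset) from the footer.
	Hdr *cell_prev()
	{
		uintptr_t addr;
		std::memcpy(&addr,
		            reinterpret_cast<char *>(this) - sizeof(addr),
		            sizeof(addr));
		return reinterpret_cast<Hdr *>(addr);
	}
};
```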
