This repository has been archived by the owner on Aug 16, 2021. It is now read-only.
- As latency or throughput performance becomes more critical than memory use, there comes a point where one has to break out memory pools.
- reference: http://www.javacodegeeks.com/2015/12/native-speed-file-backed-large-data-storage-pure-java.html
Solution Snippet
- Pool-backed segmented storage.
- Segmented storage is built on a chunk pool, but it allows allocation of storage containers larger than a single pool chunk.
- A segmented storage region is not necessarily made up of contiguous chunks; indeed, this is its most important feature.
- It is made up of equal-sized chunks from a backing pool; the chunks are scattered across the virtual address space and might not even be in order.
- The whole idea is to have a storage container (say 1 GB) made up of a selection of chunks (say 1 MB each).
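A minimal sketch of the idea, using heap ByteBuffers to stand in for the pooled chunks (the class and method names here are illustrative assumptions, not from the referenced article): a container pulls chunks from a free pool and translates a long element index into a (chunk, offset-within-chunk) pair.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SegmentedStorage {
    static final int CHUNK_BYTES = 1 << 20; // 1 MB per chunk (assumed size)

    private final List<ByteBuffer> chunks = new ArrayList<>();

    // Build a container by pulling enough chunks from the free pool;
    // the chunks need not be contiguous or in address order.
    SegmentedStorage(Deque<ByteBuffer> freePool, long totalBytes) {
        long needed = (totalBytes + CHUNK_BYTES - 1) / CHUNK_BYTES;
        for (long i = 0; i < needed; i++) {
            chunks.add(freePool.pop());
        }
    }

    // Translate a long element index into (chunk, offset-within-chunk).
    double getDouble(long index) {
        long off = index * Double.BYTES;
        return chunks.get((int) (off / CHUNK_BYTES))
                     .getDouble((int) (off % CHUNK_BYTES));
    }

    void putDouble(long index, double value) {
        long off = index * Double.BYTES;
        chunks.get((int) (off / CHUNK_BYTES))
              .putDouble((int) (off % CHUNK_BYTES), value);
    }

    // Freeing the container returns its chunks to the pool for reuse.
    void free(Deque<ByteBuffer> freePool) {
        chunks.forEach(freePool::push);
        chunks.clear();
    }

    public static void main(String[] args) {
        Deque<ByteBuffer> pool = new ArrayDeque<>();
        for (int i = 0; i < 4; i++) pool.push(ByteBuffer.allocate(CHUNK_BYTES));

        SegmentedStorage s = new SegmentedStorage(pool, 2L * CHUNK_BYTES);
        s.putDouble(150_000, 42.0); // byte offset 1,200,000 lands in the second chunk
        System.out.println(s.getDouble(150_000)); // 42.0
        s.free(pool); // all 4 chunks are back in the pool
    }
}
```

Because the index arithmetic uses a long, the container's logical capacity is not bound by the 2^31 limit of any single backing buffer.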
Solution Difficulties
- In current JVMs the maximum addressable array or ByteBuffer contains only 2^31 elements.
- Segmented storage can address much greater quantities of memory, backing that memory with memory-mapped files if required.
- Suppose we need 20 billion doubles: we cannot allocate them in a single array or ByteBuffer, but with segmented memory we can achieve that goal.
- Use cases where we want to handle more memory than RAM:
  - we need memory-mapped files rather than just anonymous swap space, so that the JVM does not compete with other programs for swap space
  - GC distributes object access across the heap, which performs poorly over anonymous virtual memory
- Without JNI, the largest memory-mapped file 'chunk' that can be managed in Java is 2^31 bytes, but one may need to manage over 100 GB of memory on a 16 GB machine.
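Since a single mapping cannot exceed 2^31 bytes, a larger file has to be covered by several MappedByteBuffer regions at different offsets. A hedged sketch (region sizes are kept tiny for illustration; in practice each region would sit near the 2 GB ceiling, and the class name is hypothetical):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MultiRegionMap {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("segments", ".bin");
        long regionBytes = 1 << 20; // 1 MB regions here; each can be at most 2^31 - 1 bytes
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Two mappings at different file offsets; scaled up, together they
            // cover a span no single array or ByteBuffer could address.
            MappedByteBuffer r0 = ch.map(FileChannel.MapMode.READ_WRITE, 0, regionBytes);
            MappedByteBuffer r1 = ch.map(FileChannel.MapMode.READ_WRITE, regionBytes, regionBytes);
            r0.putDouble(0, 1.0);
            r1.putDouble(0, 2.0);
            System.out.println(r0.getDouble(0) + " " + r1.getDouble(0)); // 1.0 2.0
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

Mapping in READ_WRITE mode extends the backing file as needed, and the OS pages the regions in and out, which is what lets total mapped storage exceed physical RAM.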
Solution Implementation
- We need mapped byte buffers; each buffer is a chunk in a pool of free chunks.
- To build a storage container, take some of the byte buffer chunks out of the free pool; when the container is freed, the chunks are returned to the pool.
- The mapped byte buffers here are java.nio.DirectByteBuffer objects with file-backed memory.
- The data is not stored on the heap; it stays at the same virtual address throughout the object's lifetime, so we can keep using the address and avoid object lookups.
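A small sketch of these two properties (class name is illustrative): the buffer returned by FileChannel.map is a direct buffer whose storage lives outside the heap, and writes through it are visible in the backing file.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileBackedChunk {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("chunk", ".bin");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer chunk = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);

            // The mapping is a direct buffer: its storage is off-heap,
            // at a stable virtual address for the buffer's lifetime.
            System.out.println(chunk.isDirect()); // true

            chunk.putDouble(0, 3.5);
            chunk.force(); // flush the mapped pages to the backing file

            // The write is visible through the file itself, not just the buffer.
            byte[] raw = Files.readAllBytes(file);
            System.out.println(ByteBuffer.wrap(raw).getDouble(0)); // 3.5
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```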
Intrinsics in the JVM
- When a method is intrinsic, the JVM JIT compiler replaces it with prepared machine code rather than compiling the method's bytecode.
- The Unsafe version is also intrinsic and compiles down to machine code comparable to what gcc would produce: it is fast, with no object dereferences or bounds checks.
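A rough sketch of the Unsafe path (sun.misc.Unsafe is HotSpot-specific and not a supported public API; obtaining it reflectively as below is a common but unofficial idiom). The off-heap put/get calls are recognized as intrinsics by the JIT and become plain loads and stores with no bounds checks.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeDemo {
    public static void main(String[] args) throws Exception {
        // Obtain the Unsafe instance reflectively (unofficial, HotSpot-specific).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Allocate 8 bytes of off-heap memory and access it by raw address.
        long addr = unsafe.allocateMemory(8);
        try {
            unsafe.putDouble(addr, 42.0);      // intrinsified: a plain store
            double v = unsafe.getDouble(addr); // intrinsified: a plain load, no bounds check
            System.out.println(v); // 42.0
        } finally {
            unsafe.freeMemory(addr); // off-heap memory is not GC-managed
        }
    }
}
```

Since the address is stable for the allocation's lifetime, it can be cached and reused directly, which is exactly the "avoid object lookups" point above.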