Fix all typos I could find (#93)
* Fix all typos I could find

* Fix typos in README.md
Kleinmarb authored Jul 29, 2024
1 parent 82f1489 commit 14bafdd
Showing 6 changed files with 11 additions and 10 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -78,7 +78,7 @@ The `hybrid` feature flag enables a hybrid implementation of GxHash. This is dis
## Benchmarks

[![Benchmark](https://github.com/ogxd/gxhash/actions/workflows/bench.yml/badge.svg)](https://github.com/ogxd/gxhash/actions/workflows/bench.yml)
GxHash is continuously benchmarked on X86 and ARM Github runners.
GxHash is continuously benchmarked on X86 and ARM GitHub runners.

To run the benchmarks locally use one of the following:
```bash
@@ -96,7 +96,7 @@ Throughput is measured as the number of bytes hashed per second.

*Some prefer talking **latency** (time for generating a hash) or **hashrate** (the number of hashes generated per second) for measuring hash function performance, but those are all equivalent in the end as they all boil down to measuring the time it takes to hash some input and then apply different scalar transformation. For instance, if latency for a `4 bytes` hash is `1 ms`, then the throughput is `1 / 0.001 * 4 = 4000 bytes per second`. Throughput allows us to conveniently compare the performance of a hash function for any input size on a single graph.*

**Lastest Benchmark Results:**
**Latest Benchmark Results:**
![aarch64](./benches/throughput/aarch64.svg)
![x86_64](./benches/throughput/x86_64.svg)
![x86_64-hybrid](./benches/throughput/x86_64-hybrid.svg)
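The latency-to-throughput conversion from the benchmarks discussion above can be sketched as follows (a standalone illustration of the arithmetic, not part of the crate):

```rust
fn main() {
    // From the README's example: a 4-byte input hashed with 1 ms latency.
    let input_size_bytes = 4.0_f64;
    let latency_seconds = 0.001_f64;

    // Throughput: bytes hashed per second.
    let throughput = input_size_bytes / latency_seconds;
    assert_eq!(throughput, 4000.0);

    // Hashrate: hashes produced per second, an equivalent view.
    let hashrate = 1.0 / latency_seconds;
    assert_eq!(hashrate, 1000.0);

    println!("throughput: {throughput} B/s, hashrate: {hashrate} hashes/s");
}
```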
2 changes: 1 addition & 1 deletion src/gxhash/mod.rs
@@ -137,7 +137,7 @@ unsafe fn compress_many(mut ptr: *const State, end: usize, hash_vector: State, l
let remaining_bytes = remaining_bytes - unrollable_blocks_count * VECTOR_SIZE;
let end_address = ptr.add(remaining_bytes / VECTOR_SIZE) as usize;

// Process first individual blocks until we have an whole number of 8 blocks
// Process first individual blocks until we have a whole number of 8 blocks
let mut hash_vector = hash_vector;
while (ptr as usize) < end_address {
load_unaligned!(ptr, v0);
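The comment in this hunk describes peeling off leading blocks one at a time until the remaining count is a multiple of 8. The block-count arithmetic behind that strategy can be sketched like so (a simplified standalone model; the function name is illustrative, not from the crate):

```rust
/// Illustrative split: how many blocks to process one-by-one first so the
/// remainder can be consumed by an 8-way unrolled loop.
fn split_for_unrolling(total_blocks: usize) -> (usize, usize) {
    let head = total_blocks % 8;        // processed individually first
    let unrolled = total_blocks - head; // processed 8 blocks per iteration
    (head, unrolled)
}

fn main() {
    assert_eq!(split_for_unrolling(27), (3, 24));
    assert_eq!(split_for_unrolling(16), (0, 16));
    assert_eq!(split_for_unrolling(5), (5, 0));
}
```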
2 changes: 1 addition & 1 deletion src/gxhash/platform/arm.rs
@@ -77,7 +77,7 @@ pub unsafe fn compress_8(mut ptr: *const State, end_address: usize, hash_vector:
let mut t2: State = create_empty();

// Hash is processed in two separate 128-bit parallel lanes
// This allows the same processing to be applied using 256-bit V-AES instrinsics
// This allows the same processing to be applied using 256-bit V-AES intrinsics
// so that hashes are stable in both cases.
let mut lane1 = hash_vector;
let mut lane2 = hash_vector;
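The two-lane layout described in the comment above can be illustrated with a scalar stand-in: the multiplier mix below is a placeholder for the real AES rounds, but it shows why splitting blocks across two independent lanes keeps the result stable whether the lanes are advanced one at a time (128-bit path) or together (256-bit V-AES path):

```rust
const MIX: u64 = 0x9E37_79B9_7F4A_7C15; // placeholder mixing constant

// Scalar stand-in for the AES-based compression: even-indexed blocks feed
// lane 1, odd-indexed blocks feed lane 2, and the lanes are merged at the
// end. A 256-bit implementation could advance both lanes in a single
// instruction and still produce the same final value.
fn compress_two_lanes(blocks: &[u64], seed: u64) -> u64 {
    let (mut lane1, mut lane2) = (seed, seed);
    for (i, &b) in blocks.iter().enumerate() {
        if i % 2 == 0 {
            lane1 = lane1.wrapping_mul(MIX) ^ b;
        } else {
            lane2 = lane2.wrapping_mul(MIX) ^ b;
        }
    }
    lane1 ^ lane2 // merge the lanes into one state
}

fn main() {
    let h = compress_two_lanes(&[1, 2, 3, 4], 42);
    assert_eq!(h, compress_two_lanes(&[1, 2, 3, 4], 42)); // deterministic
    assert_eq!(compress_two_lanes(&[], 7), 0); // empty input: lanes cancel
}
```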
4 changes: 2 additions & 2 deletions src/gxhash/platform/mod.rs
@@ -11,7 +11,7 @@ pub use platform::*;
use core::mem::size_of;

pub(crate) const VECTOR_SIZE: usize = size_of::<State>();
// 4KiB is the default page size for most systems, and conservative for other systems such as MacOS ARM (16KiB)
// 4KiB is the default page size for most systems, and conservative for other systems such as macOS ARM (16KiB)
const PAGE_SIZE: usize = 0x1000;

#[inline(always)]
@@ -29,7 +29,7 @@ unsafe fn check_same_page(ptr: *const State) -> bool {
let address = ptr as usize;
// Mask to keep only the last 12 bits
let offset_within_page = address & (PAGE_SIZE - 1);
// Check if the 16nd byte from the current offset exceeds the page boundary
// Check if the 16th byte from the current offset exceeds the page boundary
offset_within_page < PAGE_SIZE - VECTOR_SIZE
}

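The page-boundary check shown in this hunk can be modeled in safe code (the constants mirror the source; the address values below are illustrative):

```rust
const PAGE_SIZE: usize = 0x1000; // 4 KiB
const VECTOR_SIZE: usize = 16;   // bytes in a 128-bit State

// Mirrors the check above: keep the low 12 bits of the address and verify
// that a full vector read starting there cannot reach into the next page.
fn check_same_page(address: usize) -> bool {
    let offset_within_page = address & (PAGE_SIZE - 1);
    offset_within_page < PAGE_SIZE - VECTOR_SIZE
}

fn main() {
    assert!(check_same_page(0x2000));  // offset 0: whole vector fits
    assert!(check_same_page(0x2FEF));  // offset 4079: still allowed
    assert!(!check_same_page(0x2FF8)); // offset 4088: read could cross
}
```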
2 changes: 1 addition & 1 deletion src/gxhash/platform/x86.rs
@@ -74,7 +74,7 @@ pub unsafe fn compress_8(mut ptr: *const State, end_address: usize, hash_vector:
let mut t2: State = create_empty();

// Hash is processed in two separate 128-bit parallel lanes
// This allows the same processing to be applied using 256-bit V-AES instrinsics
// This allows the same processing to be applied using 256-bit V-AES intrinsics
// so that hashes are stable in both cases.
let mut lane1 = hash_vector;
let mut lane2 = hash_vector;
7 changes: 4 additions & 3 deletions src/hasher.rs
@@ -6,10 +6,11 @@ use crate::gxhash::*;
/// A `Hasher` for hashing an arbitrary stream of bytes.
/// # Features
/// - The fastest [`Hasher`] of its class<sup>1</sup>, for all input sizes
/// - Highly collision resitant
/// - Highly collision resistant
/// - DOS resistance thanks to seed randomization when using [`GxHasher::default()`]
///
/// *<sup>1</sup>There might me faster alternatives, such as `fxhash` for very small input sizes, but that usually have low quality properties.*
/// *<sup>1</sup>There might be faster alternatives, such as `fxhash` for very small input sizes,
/// but that usually have low quality properties.*
#[derive(Clone, Debug)]
pub struct GxHasher {
state: State,
@@ -76,7 +76,7 @@ impl GxHasher {
GxHasher::with_state(unsafe { create_seed(seed) })
}

/// Finish this hasher and return the hashed value as a 128 bit
/// Finish this hasher and return the hashed value as a 128-bit
/// unsigned integer.
#[inline]
pub fn finish_u128(&self) -> u128 {
