chore: clean up typos, enforce typo check
LeoDog896 committed Oct 17, 2024
1 parent f83c5d7 commit 0280a97
Showing 9 changed files with 33 additions and 12 deletions.
19 changes: 19 additions & 0 deletions .github/workflows/typos.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,19 @@
# Copied from https://github.com/rerun-io/rerun_template

# https://github.com/crate-ci/typos
# Add exceptions to `.typos.toml`
# install and run locally: cargo install typos-cli && typos

name: Spell Check
on: [pull_request]

jobs:
  run:
    name: Spell Check
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Actions Repository
        uses: actions/checkout@v4

      - name: Check spelling of entire workspace
        uses: crate-ci/typos@master
4 changes: 2 additions & 2 deletions CHANGELOG.md
@@ -58,7 +58,7 @@ depending of the size of the first write call. This increases
compression ratio and speed for use cases where the data is larger than
64kb.
```
-- Add fluent API style contruction for FrameInfo [#99](https://github.com/PSeitz/lz4_flex/pull/99) (thanks @CosmicHorrorDev)
+- Add fluent API style construction for FrameInfo [#99](https://github.com/PSeitz/lz4_flex/pull/99) (thanks @CosmicHorrorDev)
```
This adds in fluent API style construction for FrameInfo. Now you can do
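The concrete example after "Now you can do" is collapsed in this diff view. As a rough illustration only, fluent construction chains setters that consume and return `self`; the sketch below uses a stand-in `FrameInfo` with hypothetical fields (`block_size_kb`, `content_checksum`), not necessarily lz4_flex's actual API:

```rust
// Stand-in type for illustration; field and method names are
// hypothetical, not lz4_flex's exact `FrameInfo` API.
#[derive(Debug, Default)]
struct FrameInfo {
    block_size_kb: u32,
    content_checksum: bool,
}

impl FrameInfo {
    fn new() -> Self {
        Self::default()
    }
    // Each setter takes `self` by value and returns it, so calls chain.
    fn block_size_kb(mut self, kb: u32) -> Self {
        self.block_size_kb = kb;
        self
    }
    fn content_checksum(mut self, enabled: bool) -> Self {
        self.content_checksum = enabled;
        self
    }
}

fn main() {
    // Fluent, single-expression construction:
    let info = FrameInfo::new().block_size_kb(64).content_checksum(true);
    assert_eq!(info.block_size_kb, 64);
    assert!(info.content_checksum);
}
```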
@@ -186,7 +186,7 @@ Fix no_std support for safe-decode

0.9.0 (2021-09-25)
==================
-Fix unsoundness in the the api in regards to unitialized data. (thanks to @arthurprs)
+Fix unsoundness in the the api in regards to uninitialized data. (thanks to @arthurprs)
* https://github.com/PSeitz/lz4_flex/pull/22

0.8.0 (2021-05-17)
2 changes: 1 addition & 1 deletion README.md
@@ -110,7 +110,7 @@ Tested on AMD Ryzen 7 5900HX, rustc 1.69.0 (84c898d65 2023-04-16), Manjaro, CPU
This fuzz target generates corrupted data for the decompressor.
`cargo +nightly fuzz run fuzz_decomp_corrupt_block` and `cargo +nightly fuzz run fuzz_decomp_corrupt_frame`

-This fuzz target asserts that a compression and decompression rountrip returns the original input.
+This fuzz target asserts that a compression and decompression roundtrip returns the original input.
`cargo +nightly fuzz run fuzz_roundtrip` and `cargo +nightly fuzz run fuzz_roundtrip_frame`

This fuzz target asserts compression with cpp and decompression with lz4_flex returns the original input.
2 changes: 2 additions & 0 deletions _typos.toml
@@ -0,0 +1,2 @@
[files]
extend-exclude = ["benches/*.txt", "benches/*.json", "benches/*.xml", "tests/tests.rs"]
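The two lines above are the entire new `_typos.toml`: it only excludes benchmark and test fixtures from the check. Besides path exclusions, typos also supports per-word exceptions; a hypothetical extension (the `flate` entry is an invented example, not one this repo uses) could look like:

```toml
[files]
extend-exclude = ["benches/*.txt", "benches/*.json", "benches/*.xml", "tests/tests.rs"]

# Hypothetical addition: accept a word the checker would otherwise flag.
[default.extend-words]
flate = "flate"
```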
8 changes: 4 additions & 4 deletions src/block/compress.rs
@@ -1,7 +1,7 @@
//! The compression algorithm.
//!
//! We make use of hash tables to find duplicates. This gives a reasonable compression ratio with a
-//! high performance. It has fixed memory usage, which contrary to other approachs, makes it less
+//! high performance. It has fixed memory usage, which contrary to other approaches, makes it less
//! memory hungry.
use crate::block::hashtable::HashTable;
@@ -929,7 +929,7 @@ mod tests {
// and no literal, so a block of 12 bytes can be compressed.
let aaas: &[u8] = b"aaaaaaaaaaaaaaa";

-// uncompressible
+// incompressible
let out = compress(&aaas[..12]);
assert_gt!(out.len(), 12);
// compressible
@@ -940,12 +940,12 @@
let out = compress(&aaas[..15]);
assert_le!(out.len(), 15);

-// dict uncompressible
+// dict incompressible
let out = compress_with_dict(&aaas[..11], aaas);
assert_gt!(out.len(), 11);
// compressible
let out = compress_with_dict(&aaas[..12], aaas);
-// According to the spec this _could_ compres, but it doesn't in this lib
+// According to the spec this _could_ compress, but it doesn't in this lib
// as it aborts compression for any input len < LZ4_MIN_LENGTH
assert_gt!(out.len(), 12);
let out = compress_with_dict(&aaas[..13], aaas);
2 changes: 1 addition & 1 deletion src/block/decompress.rs
@@ -311,7 +311,7 @@ pub(crate) fn decompress_internal<const USE_DICT: bool, S: Sink>(
// to enable an optimized copy of 18 bytes.
if offset >= match_length {
unsafe {
-// _copy_, not copy_non_overlapping, as it may overlap.
+// _copy_, not copy_non_overlapping, as it may overlap.
// Compiles to the same assembly on x68_64.
core::ptr::copy(start_ptr, output_ptr, 18);
output_ptr = output_ptr.add(match_length);
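The comment being fixed above points at a real constraint: the fixed-width 18-byte copy can reach into bytes just written whenever the match offset is under 18, so it must use `ptr::copy` (memmove semantics) rather than `copy_nonoverlapping`. A self-contained sketch of the difference, assuming nothing about lz4_flex's own code:

```rust
use core::ptr;

/// Copies `count` bytes within `buf` from `src_idx` to `dst_idx`,
/// allowing the two ranges to overlap (memmove semantics).
fn copy_within_raw(buf: &mut [u8], src_idx: usize, dst_idx: usize, count: usize) {
    assert!(src_idx + count <= buf.len() && dst_idx + count <= buf.len());
    unsafe {
        // `ptr::copy` permits overlapping ranges; calling
        // `copy_nonoverlapping` here would be undefined behavior
        // whenever the ranges intersect.
        ptr::copy(buf.as_ptr().add(src_idx), buf.as_mut_ptr().add(dst_idx), count);
    }
}

fn main() {
    let mut buf = *b"abcdef--------";
    // Ranges [2, 10) and [6, 14) overlap.
    copy_within_raw(&mut buf, 2, 6, 8);
    // The destination receives the bytes the source held *before* the
    // call, as if staged through a temporary buffer.
    assert_eq!(&buf, b"abcdefcdef----");
}
```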
2 changes: 1 addition & 1 deletion src/block/decompress_safe.rs
@@ -329,7 +329,7 @@ pub fn decompress_into_with_dict(
}

/// Decompress all bytes of `input` into a new vec. The first 4 bytes are the uncompressed size in
-/// litte endian. Can be used in conjunction with `compress_prepend_size`
+/// little endian. Can be used in conjunction with `compress_prepend_size`
#[inline]
pub fn decompress_size_prepended(input: &[u8]) -> Result<Vec<u8>, DecompressError> {
let (uncompressed_size, input) = super::uncompressed_size(input)?;
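The doc comment above describes a simple container layout: a 4-byte little-endian length prefix followed by the compressed payload. A hedged sketch of just the framing (standalone helper functions, not lz4_flex's `compress_prepend_size` itself):

```rust
/// Prepends the uncompressed size as a little-endian u32, matching
/// the layout described in the doc comment.
fn prepend_size(uncompressed_size: u32, payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&uncompressed_size.to_le_bytes());
    out.extend_from_slice(payload);
    out
}

/// Splits off the 4-byte little-endian size header.
fn read_size(input: &[u8]) -> (u32, &[u8]) {
    let size = u32::from_le_bytes(input[..4].try_into().unwrap());
    (size, &input[4..])
}

fn main() {
    let framed = prepend_size(300, b"compressed-bytes-would-go-here");
    // 300 = 0x012C, stored least-significant byte first.
    assert_eq!(framed[..4], [0x2C, 0x01, 0x00, 0x00]);
    let (size, rest) = read_size(&framed);
    assert_eq!(size, 300);
    assert_eq!(rest, b"compressed-bytes-would-go-here");
}
```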
2 changes: 1 addition & 1 deletion src/block/hashtable.rs
@@ -62,7 +62,7 @@ impl HashTable4KU16 {
#[inline]
pub fn new() -> Self {
// This generates more efficient assembly in contrast to Box::new(slice), because of an
-// optmized call alloc_zeroed, vs. alloc + memset
+// optimized call alloc_zeroed, vs. alloc + memset
// try_into is optimized away
let dict = alloc::vec![0; HASHTABLE_SIZE_4K]
.into_boxed_slice()
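For context on the comment above: allocating through `vec![0; N]` lets the compiler emit a single `alloc_zeroed` call, and the boxed slice can then be converted into a boxed array. A minimal standalone sketch mirroring that pattern (not the crate's exact code):

```rust
const HASHTABLE_SIZE_4K: usize = 4096;

/// Allocates a zeroed hash table as a boxed array. `vec![0; N]` lowers
/// to `alloc_zeroed`, so the allocator can return pre-zeroed memory
/// instead of alloc followed by memset.
fn new_table() -> Box<[u16; HASHTABLE_SIZE_4K]> {
    vec![0u16; HASHTABLE_SIZE_4K]
        .into_boxed_slice()
        .try_into() // length check; optimized away for a constant size
        .expect("length matches")
}

fn main() {
    let dict = new_table();
    assert_eq!(dict.len(), HASHTABLE_SIZE_4K);
    assert!(dict.iter().all(|&x| x == 0));
}
```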
4 changes: 2 additions & 2 deletions src/sink.rs
@@ -3,7 +3,7 @@ use alloc::vec::Vec;

use crate::fastcpy::slice_copy;

-/// Returns a Sink implementation appropriate for outputing up to `required_capacity`
+/// Returns a Sink implementation appropriate for outputting up to `required_capacity`
/// bytes at `vec[offset..offset+required_capacity]`.
/// It can be either a `SliceSink` (pre-filling the vec with zeroes if necessary)
/// when the `safe-decode` feature is enabled, or `VecSink` otherwise.
@@ -22,7 +22,7 @@ pub fn vec_sink_for_compression(
}
}

-/// Returns a Sink implementation appropriate for outputing up to `required_capacity`
+/// Returns a Sink implementation appropriate for outputting up to `required_capacity`
/// bytes at `vec[offset..offset+required_capacity]`.
/// It can be either a `SliceSink` (pre-filling the vec with zeroes if necessary)
/// when the `safe-decode` feature is enabled, or `VecSink` otherwise.
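A rough sketch of the zero-filling `SliceSink` path this doc comment describes (the `safe-decode` case): the vec is extended with zeroes so the requested range is fully initialized before a plain slice is handed out. Names mirror the doc comment, but the bodies are illustrative, not lz4_flex's implementation:

```rust
/// Illustrative stand-in for the crate's `SliceSink`.
struct SliceSink<'a> {
    out: &'a mut [u8],
}

impl SliceSink<'_> {
    fn capacity(&self) -> usize {
        self.out.len()
    }
}

/// Hands out `vec[offset..offset + required_capacity]` as a sink,
/// zero-filling the vec first so the slice is fully initialized.
fn vec_sink(vec: &mut Vec<u8>, offset: usize, required_capacity: usize) -> SliceSink<'_> {
    let end = offset + required_capacity;
    if vec.len() < end {
        vec.resize(end, 0); // pre-fill with zeroes, as the doc comment notes
    }
    SliceSink { out: &mut vec[offset..end] }
}

fn main() {
    let mut v = Vec::new();
    let sink = vec_sink(&mut v, 4, 16);
    assert_eq!(sink.capacity(), 16);
    assert_eq!(v.len(), 20);
}
```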
