Merge pull request #48 from ayasyrev:pre-commit
pre-commit, fixes from ruff etc
ayasyrev authored Jul 20, 2024
2 parents dbe9e2b + b9bd02c commit 94aab98
Showing 10 changed files with 180 additions and 139 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/deploy_docs.yml
@@ -11,6 +11,6 @@ jobs:
      - uses: actions/setup-python@main
        with:
          python-version: 3.x
      - run: pip install mkdocs-material
      - run: pip install pymdown-extensions
      - run: mkdocs gh-deploy --force
2 changes: 1 addition & 1 deletion .gitignore
@@ -113,4 +113,4 @@ venv.bak/
.vscode/settings.json

# nox
.nox
74 changes: 72 additions & 2 deletions .pre-commit-config.yaml
@@ -1,7 +1,77 @@
repos:
  - repo: https://github.com/ayasyrev/nbmetaclean
-   rev: 0.0.7
+   rev: 0.0.8
    hooks:
      - id: nbclean
        name: nbclean
        entry: nbclean

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-added-large-files
      - id: check-ast
      - id: check-builtin-literals
      - id: check-case-conflict
      - id: check-docstring-first
      - id: check-executables-have-shebangs
      - id: check-shebang-scripts-are-executable
      - id: check-symlinks
      - id: check-toml
      - id: check-xml
      - id: detect-private-key
      - id: forbid-new-submodules
      - id: forbid-submodules
      - id: mixed-line-ending
      - id: destroyed-symlinks
      - id: fix-byte-order-marker
      - id: check-json
      - id: debug-statements
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: requirements-txt-fixer

  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.5.3
    hooks:
      # Run the linter.
      - id: ruff
        exclude: '__pycache__/'
        args: [ --fix ]
      # Run the formatter.
      - id: ruff-format

  - repo: https://github.com/pre-commit/pygrep-hooks
    rev: v1.10.0
    hooks:
      - id: python-check-mock-methods
      - id: python-use-type-annotations
      - id: python-check-blanket-noqa
      - id: python-use-type-annotations
      - id: text-unicode-replacement-char

  - repo: https://github.com/codespell-project/codespell
    rev: v2.3.0
    hooks:
      - id: codespell
        additional_dependencies: ["tomli"]

  # - repo: https://github.com/igorshubovych/markdownlint-cli
  #   rev: v0.41.0
  #   hooks:
  #     - id: markdownlint

  - repo: https://github.com/tox-dev/pyproject-fmt
    rev: "2.1.4"
    hooks:
      - id: pyproject-fmt

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.1
    hooks:
      - id: mypy
        files: ^albumentations/
        additional_dependencies: [ types-PyYAML, types-setuptools, pydantic>=2.7]
        args:
          [ --config-file=pyproject.toml ]
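With this config in place, the hooks run on every commit once installed. A minimal sketch of running them by hand, wrapping the standard pre-commit CLI from Python (it assumes `pre-commit` itself is installed, e.g. via `pip install pre-commit`; the same two commands are usually typed directly in a shell):

```python
# Sketch: install the git hook, then run all configured hooks once
# across the whole repository. These calls wrap the pre-commit CLI.
import subprocess

subprocess.run(["pre-commit", "install"], check=True)
subprocess.run(["pre-commit", "run", "--all-files"], check=True)
```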
2 changes: 1 addition & 1 deletion Makefile
@@ -9,4 +9,4 @@ dist: clean
	python setup.py sdist bdist_wheel

clean:
	rm -rf dist
40 changes: 20 additions & 20 deletions README.md
@@ -9,14 +9,14 @@ hide:
Utils for benchmarking: a wrapper over Python's timeit.
<!-- cell -->
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/benchmark-utils)](https://pypi.org/project/benchmark-utils/)
[![PyPI Status](https://badge.fury.io/py/benchmark-utils.svg)](https://badge.fury.io/py/benchmark-utils)
[![Tests](https://github.com/ayasyrev/benchmark_utils/workflows/Tests/badge.svg)](https://github.com/ayasyrev/benchmark_utils/actions?workflow=Tests) [![Codecov](https://codecov.io/gh/ayasyrev/benchmark_utils/branch/main/graph/badge.svg)](https://codecov.io/gh/ayasyrev/benchmark_utils)
<!-- cell -->
Tested on Python 3.8 - 3.12.
<!-- cell -->
## Install
<!-- cell -->
Install from PyPI:

`pip install benchmark_utils`

@@ -28,7 +28,7 @@ Or install from the GitHub repo:
<!-- cell -->
Let's benchmark some (dummy) functions.
<!-- cell -->
<details open> <summary>output</summary>
```python
from time import sleep

@@ -43,18 +43,18 @@ def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    sleep(sleep_time * mult)
```
</details>
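The collapsed hunk above hides part of this cell. A self-contained sketch of both test functions: the signature and body of `func_to_test_2` are taken from the diff, while the body of `func_to_test_1` is an assumption mirroring it:

```python
from time import sleep


def func_to_test_1(sleep_time: float = 0.1) -> None:
    """simple 'sleep' func for test (assumed: mirrors func_to_test_2)"""
    sleep(sleep_time)


def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)
```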

<!-- cell -->
<details open> <summary>output</summary>

Let's create a benchmark.
</details>

<!-- cell -->
<details open> <summary>output</summary>
```python
from benchmark_utils import Benchmark
```
</details>

<!-- cell -->
<details open> <summary>output</summary>
```python
bench = Benchmark(
[func_to_test_1, func_to_test_2],
@@ -65,7 +65,7 @@ bench = Benchmark(
```python
bench
```
<details open> <summary>output</summary>
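The constructor call above is cut off by the collapsed hunk. A minimal complete version, assuming no arguments beyond the function list:

```python
from benchmark_utils import Benchmark

# Benchmark accepts a list of callables (or functools.partial objects).
bench = Benchmark(
    [func_to_test_1, func_to_test_2],
)
```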



@@ -79,7 +79,7 @@ Now we can benchmark these functions.
# we can run bench.run() or just:
bench()
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -114,7 +114,7 @@ We can run it again: all functions or only some of them, with some excluded, and with a different number of repeats.
```python
bench.run(num_repeats=10)
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -149,7 +149,7 @@ After a run, we can print results - sorted or not, reversed, compare results with
```python
bench.print_results(reverse=True)
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"> Func name | Sec <span style="color: #800080; text-decoration-color: #800080">/</span> run
@@ -170,7 +170,7 @@ bench.print_results(reverse=True)
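The results table above is truncated by the collapse. `reverse=True` is the only argument the README itself demonstrates; assuming all arguments are optional, a bare call should print the default (non-reversed) view:

```python
# Default view of the same results (assumed: all arguments optional).
bench.print_results()
```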
<!-- cell -->
We can add functions to the benchmark as a list of functions (or partials) or as a dictionary: `{"name": function}`.
<!-- cell -->
<details open> <summary>output</summary>
```python
bench = Benchmark(
[
@@ -185,7 +185,7 @@ bench = Benchmark(
```python
bench
```
<details open> <summary>output</summary>



@@ -196,7 +196,7 @@ bench
```python
bench.run()
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -232,7 +232,7 @@ bench.run()
</pre></details>
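As the intro says, the package is a wrapper over Python's timeit. A rough sketch of the underlying idea, not the library's actual implementation:

```python
from timeit import repeat
from typing import Callable


def time_func(func: Callable[[], None], num_repeats: int = 5) -> float:
    """Best time for a single call of func over num_repeats measurements."""
    # number=1 makes each measurement a single call; the minimum over the
    # repeats is the usual low-noise estimate, like "Sec / run" above.
    return min(repeat(func, number=1, repeat=num_repeats))


# Example: time_func(func_to_test_1)  # -> seconds per run
```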

<!-- cell -->
<details open> <summary>output</summary>
```python
bench = Benchmark(
{
@@ -246,7 +246,7 @@ bench = Benchmark(
```python
bench
```
<details open> <summary>output</summary>



Expand All @@ -262,7 +262,7 @@ When we run benchmark script in terminal, we got pretty progress thanks to rich.
<!-- cell -->
With BenchmarkIter we can benchmark functions over iterables, for example reading a list of files or running functions with different arguments.
<!-- cell -->
<details open> <summary>output</summary>
```python
def func_to_test_1(x: int) -> None:
"""simple 'sleep' func for test"""
@@ -278,7 +278,7 @@ dummy_params = list(range(10))
```
</details>
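This cell is also collapsed in the diff. A self-contained sketch; the function bodies here are assumptions patterned on the earlier dummy functions:

```python
from time import sleep


def func_to_test_1(x: int) -> None:
    """simple 'sleep' func for test (assumed body)"""
    sleep(0.01)


def func_to_test_2(x: int) -> None:
    """simple 'sleep' func for test (assumed body)"""
    sleep(0.01)


dummy_params = list(range(10))
```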

<!-- cell -->
<details open> <summary>output</summary>
```python
from benchmark_utils import BenchmarkIter

@@ -292,7 +292,7 @@ bench = BenchmarkIter(
```python
bench()
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>
@@ -328,7 +328,7 @@ And we can limit the number of items with the `num_samples` argument:
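The example itself is collapsed in the diff; a minimal sketch, assuming `num_samples` is passed to `run` as the text suggests:

```python
# Process only the first 5 items of the iterable (assumed usage).
bench.run(num_samples=5)
```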
<!-- cell -->
## Multiprocessing
<!-- cell -->
By default we run functions in one thread.
But we can use multiprocessing with the `multiprocessing=True` argument:
`bench.run(multiprocessing=True)`
It will use all available CPU cores.
@@ -338,7 +338,7 @@ And we can use the `num_workers` argument to limit the number of CPU cores used:
```python
bench.run(multiprocessing=True, num_workers=2)
```
<details open> <summary>output</summary>


<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"></pre>