From 78f99ac7e8560e29eaa619368aeb4a3f88010794 Mon Sep 17 00:00:00 2001
From: Youssef Benhachem
Date: Tue, 19 Sep 2023 18:52:40 +0200
Subject: [PATCH] =?UTF-8?q?Fix=20typos=20=F0=9F=9A=80?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index c2f55eaa..9f72875b 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,7 @@ The experiment management and tracking is handled using [hydra](https://hydra.cc
 
 Everything else is either optional or inferred from the model's name or path.
 
-### Supported Backends/Dvices
+### Supported Backends/Devices
 
 - [x] Pytorch backend for CPU
 - [x] Pytorch backend for CUDA
@@ -42,7 +42,7 @@ Everything else is either optional or inferred from the model's name or path.
 - [x] Energy and carbon emissions (`benchmark.energy=true`).
 - [x] Warm up runs before inference (`benchmark.warmup_runs=20`).
 - [x] Warm up steps during training (`benchmark.warmup_steps=20`).
-- [x] Inputs shapes control (e.g. `benchamrk.input_shapes.sequence_length=128`).
+- [x] Inputs shapes control (e.g. `benchmark.input_shapes.sequence_length=128`).
 - [x] Dataset shapes control (e.g. `benchmark.dataset_shapes.dataset_size=1000`).
 - [x] Forward and Generation pass control (e.g. for an LLM `benchmark.generate.max_new_tokens=100`, for a diffusion model `benchmark.forward.num_images_per_prompt=4`).
 
@@ -54,7 +54,7 @@ Everything else is either optional or inferred from the model's name or path.
 - [x] Onnxruntime Optimization and AutoOptimization (`backend.optimization=true` or `backend.auto_optimization=O4`, etc).
 - [x] PEFT training (`backend.peft_strategy=lora`, `backend.peft_config.task_type=CAUSAL_LM`, etc).
 - [x] DDP training (`backend.use_ddp=true`, `backend.ddp_config.nproc_per_node=2`, etc).
-- [x] BitsAndBytes quantization scheme (`backend.quantization_scheme=bnb`, ``backend.quantization_config.load_in_4bit`, etc).
+- [x] BitsAndBytes quantization scheme (`backend.quantization_scheme=bnb`, `backend.quantization_config.load_in_4bit`, etc).
 - [x] GPTQ quantization scheme (`backend.quantization_scheme=gptq`, `backend.quantization_config.bits=4`, etc).
 - [x] Optimum's BetterTransformer (`backend.bettertransformer=true`).
 - [x] Automatic Mixed Precision (`backend.amp_autocast=true`).