Rename package from zeus-ml to zeus (#151)
jaywonchung authored Jan 26, 2025
1 parent a501e56 commit bda7eac
Showing 10 changed files with 13 additions and 13 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/publish_pypi.yaml
@@ -6,9 +6,11 @@ on:
       - zeus-v*
 
 jobs:
-  publish:
+  pypi-publish:
     runs-on: ubuntu-latest
     if: github.repository_owner == 'ml-energy'
+    permissions:
+      id-token: write
     steps:
       - name: Checkout repository
         uses: actions/checkout@v3
@@ -21,5 +23,3 @@ jobs:
         run: pip install build && python -m build
       - name: Publish to PyPI
         uses: pypa/gh-action-pypi-publish@release/v1
-        with:
-          password: ${{ secrets.PYPI_API_KEY }}
2 changes: 1 addition & 1 deletion docs/getting_started/index.md
@@ -10,7 +10,7 @@ Some optimizers or examples may require some extra setup steps, which are descri
Install the Zeus Python package simply with:

```sh
-pip install zeus-ml
+pip install zeus
```

### From source for development
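For context on the install-command change above: PyPI normalizes distribution names (PEP 503), so `zeus-ml`, `zeus_ml`, and `zeus.ml` all refer to the same project, while `zeus` is a distinct name — the rename points users at a different PyPI project rather than an alias of the old one. A minimal sketch of the normalization rule:

```python
import re

def normalize(name: str) -> str:
    """PEP 503 name normalization: collapse runs of -, _, . into a single '-', then lowercase."""
    return re.sub(r"[-_.]+", "-", name).lower()

# "Zeus-ML", "zeus_ml", and "zeus.ml" all normalize to "zeus-ml",
# but "zeus" normalizes to itself, so it is a separate project name.
print(normalize("Zeus-ML"))  # zeus-ml
print(normalize("zeus_ml"))  # zeus-ml
print(normalize("zeus"))     # zeus
```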
2 changes: 1 addition & 1 deletion docs/optimize/batch_size_optimizer.md
@@ -164,7 +164,7 @@ In order for your recurring training job to communicate with the BSO server, you
1. Install the Zeus package, including dependencies needed for the batch size optimizer.

```sh
-pip install zeus-ml[bso]
+pip install zeus[bso]
```

2. Integrate [`BatchSizeOptimizer`][zeus.optimizer.batch_size.client.BatchSizeOptimizer] to your training script.
2 changes: 1 addition & 1 deletion docs/optimize/pipeline_frequency_optimizer.md
@@ -97,7 +97,7 @@ As another example, in Megatron-LM, users can pass in their custom `forward_step

### Integrate `PipelineFrequencyOptimizer`

-1. Add `zeus-ml[pfo]` to your dependencies.
+1. Add `zeus[pfo]` to your dependencies.
1. Instantiate the [`PipelineFrequencyOptimizer`][zeus.optimizer.pipeline_frequency.optimizer.PipelineFrequencyOptimizer] somewhere before actual training runs. Let's call the object `opt`.
1. Surround one training step with `opt.on_step_begin()` and `opt.on_step_end()`.
1. Wrap the forward pass region with `opt.on_instruction_begin("forward")` and `opt.on_instruction_end("forward")`.
2 changes: 1 addition & 1 deletion examples/carbon_emission_monitor/requirements.txt
@@ -1,4 +1,4 @@
-zeus-ml
+zeus
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
2 changes: 1 addition & 1 deletion examples/huggingface/README.md
@@ -15,7 +15,7 @@ To run the `SFTTrainer` integration script (`run_gemma_sft_qlora.py`):
```sh
pip install -r requirements-qlora.txt
```
-Note that you may have to tweak `requirements-qlora.txt` depending on your setup. The current requirements file assumes that you are using CUDA 11, and installs `nvidia-cusparse-cu11` for `bitsandbytes`. Basically, you want to get a setup where training runs, and just add `pip install zeus-ml` on top of it.
+Note that you may have to tweak `requirements-qlora.txt` depending on your setup. The current requirements file assumes that you are using CUDA 11, and installs `nvidia-cusparse-cu11` for `bitsandbytes`. Basically, you want to get a setup where training runs, and just add `pip install zeus` on top of it.

## `ZeusMonitor` and `HFGlobalPowerLimitOptimizer`

2 changes: 1 addition & 1 deletion examples/huggingface/requirements-qlora.txt
@@ -1,4 +1,4 @@
-zeus-ml
+zeus
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
2 changes: 1 addition & 1 deletion examples/huggingface/requirements.txt
@@ -1,4 +1,4 @@
-zeus-ml
+zeus
accelerate >= 0.12.0
torch >= 1.3
datasets >= 1.8.0
2 changes: 1 addition & 1 deletion examples/jax/requirements.txt
@@ -1,2 +1,2 @@
-zeus-ml
+zeus
jax[cuda12]==0.4.30
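Every requirements file above receives the same one-word change. A hypothetical migration helper (the function name and regex are illustrative, not part of this repository) that applies the rename safely — only when `zeus-ml` is the whole requirement name, optionally followed by extras or a version specifier:

```python
import re

def rename_requirement(line: str) -> str:
    # Rewrite "zeus-ml" to "zeus" only when it is the requirement name
    # itself (start of line, followed by extras, a specifier, whitespace,
    # or end of line), never as a substring of another package name.
    return re.sub(r"^zeus-ml(?=$|[\[\s<>=!~;])", "zeus", line)

print(rename_requirement("zeus-ml"))         # zeus
print(rename_requirement("zeus-ml[bso]"))    # zeus[bso]
print(rename_requirement("zeus-ml >= 0.1"))  # zeus >= 0.1
print(rename_requirement("not-zeus-ml"))     # not-zeus-ml (unchanged)
```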
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -3,7 +3,7 @@ requires = ["setuptools>=61.0.0", "wheel"]
build-backend = "setuptools.build_meta"

[project]
-name = "zeus-ml"
+name = "zeus"
description = "A framework for deep learning energy measurement and optimization."
readme = "README.md"
authors = [
@@ -52,7 +52,7 @@ lint = ["ruff", "black==22.6.0", "pyright", "pandas-stubs", "transformers"]
test = ["fastapi[standard]", "sqlalchemy", "pydantic<2", "pytest==7.3.2", "pytest-mock==3.10.0", "pytest-xdist==3.3.1", "anyio==3.7.1", "aiosqlite==0.20.0", "numpy<2"]
docs = ["mkdocs-material[imaging]==9.5.19", "mkdocstrings[python]==0.25.0", "mkdocs-gen-files==0.5.0", "mkdocs-literate-nav==0.6.1", "mkdocs-section-index==0.3.9", "mkdocs-redirects==1.2.1", "urllib3<2", "black"]
# greenlet is for supporting apple mac silicon for sqlalchemy(https://docs.sqlalchemy.org/en/20/faq/installation.html)
-dev = ["zeus-ml[pfo-server,bso,bso-server,migration,prometheus,lint,test]", "greenlet"]
+dev = ["zeus[pfo-server,bso,bso-server,migration,prometheus,lint,test]", "greenlet"]

[tool.setuptools.packages.find]
where = ["."]
