rapidFlow is a project that aims to accelerate micro research projects by adding richer functionality on top of the well-known hyperparameter optimization library optuna. The optuna code itself is not modified; it is incorporated into rapidFlow to provide richer evaluation and easy parallel processing.
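For context, the sketch below shows a minimal optuna study of the kind rapidFlow builds on. The objective function and parameter names are illustrative only and do not reflect rapidFlow's own API:

```python
# Minimal optuna sketch (illustrative only, not rapidFlow's API):
# optimize a toy objective; n_jobs > 1 runs trials in parallel.
import optuna

def objective(trial):
    # Sample one hyperparameter and return the value to minimize.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50, n_jobs=2)
print(study.best_params)
```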
- Python >= 3.7
- PyTorch
rapidFlow is built upon PyTorch, so make sure you have PyTorch installed.
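To verify the PyTorch dependency (and whether a GPU is visible), a generic check like the following can help; this is plain PyTorch, not rapidFlow-specific:

```python
# Generic PyTorch check: print the installed version and CUDA availability.
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```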
Install the package from PyPI:

```bash
pip install rapidflow
```
Or install from a cloned repository:

```bash
pip install -e /src
```
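A quick smoke test can confirm the installation; the import name `rapidflow` is assumed from the PyPI package name:

```python
# Smoke test: import name `rapidflow` assumed from the PyPI package name.
import rapidflow

print("rapidflow imported successfully")
```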
Branch | Purpose |
---|---|
main | production state |
feature | a new feature |
hotfix | hotfixes only; there are no separate bugfix branches, as everything is created from master |
The desired workflow is GitHub Flow, meaning that:

* we can deploy from master at any time
* nothing gets deployed without a PR and its review
* there are no releases or release branches
This way we maintain:

* fast responses to features and bugs through continuous delivery
* an easy workflow
* fast developer feedback
more to come!
PR:
- a PR deploys into develop and performs integration tests
- if two PRs are created at the same time, their tests and deployments are scheduled over runners in GitHub or Jenkins
- once one PR is merged, any dependency between the two shows up as a merge conflict, which is resolved by a new commit in the second PR; that commit triggers the pipeline again
- move the experiment library to another repo
- run experiments in a Docker container with GPU support (or Singularity)
- test on multiple GPUs
- testing and proper documentation
- significance testing
Feel free to contribute. If you use this repository, please cite it as:
```bibtex
@misc{rapidFlow_geb,
  author = {Gebauer, Michael},
  title = {rapidFlow},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/gebauerm/model_storage}},
}
```