This repository contains tools, templates, and information for assembling, debugging, testing, and running your custom inference models, custom tasks, and custom notebook environments with DataRobot.
The ./task_templates and ./model_templates folders provide reference examples to help users learn how to create custom tasks and custom inference models. The templates there are simple and well documented, and can be used as tutorials. They are also kept up to date with API and other changes.
For further examples, provided as-is and often containing more complex logic, see the community examples repo: https://github.com/datarobot-community/custom-models. Note that these examples may not stay up to date with the latest API or best practices.
The ./public_dropin_notebook_environments folder contains template examples (a sample Dockerfile and build context) showing how to create custom images for use as DataRobot Notebooks environments.
For further documentation on this and all other features, visit the comprehensive DataRobot documentation: https://docs.datarobot.com/
DataRobot has two mechanisms for bringing custom ML code:

- Custom task: an ML algorithm (for example, XGBoost or one-hot encoding) that can be used as a step in an ML pipeline (blueprint) inside DataRobot.
- Custom inference model: a pre-trained model or user code prepared for inference. An inference model can have a predefined input/output schema or be unstructured. Learn more here.
Materials for getting started:
- Demo video
- Code examples:
  - Custom task templates
  - Environment templates
  - Building blueprints programmatically from tasks, like Lego blocks
- Quick walk-through
- Detailed documentation
Other resources:
- There is a chance that the task you are looking for has already been implemented. Check the custom tasks community GitHub for off-the-shelf examples.
- Note: The community repo above is NOT the place to start learning the basic concepts. The examples tend to have more complex logic and are meant to be used as-is rather than as a reference.
- This repo is the appropriate place to start with tutorial examples.
Materials for getting started:
- Walk-through to create, test, and deploy custom inference models
- Code examples:
- References for defining a custom inference model:
Other resources:
- There is a chance that the model you are looking for has already been implemented. Check the custom inference models community GitHub for off-the-shelf examples.
Note: Only reference this section if you plan to work with DRUM.
To build DRUM, the following packages are required: make, Java 11, Maven, Docker, and R.
For example, on Ubuntu 18.04:

```shell
apt-get install build-essential openjdk-11-jdk openjdk-11-jre maven python3-dev docker apt-utils curl gpg-agent software-properties-common dirmngr libssl-dev ca-certificates locales libcurl4-openssl-dev libxml2-dev libgomp1 gcc libc6-dev pandoc
```
To install R on Ubuntu 18.04:

```shell
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/'
apt-get install r-cran-littler r-base r-base-dev
Rscript -e "install.packages(c('devtools', 'tidyverse', 'caret', 'recipes', 'glmnet', 'plumber', 'Rook', 'rjson', 'e1071'), Ncpus=4)"
Rscript -e 'library(caret); install.packages(unique(modelLookup()[modelLookup()$forReg, c(1)]), Ncpus=4)'
Rscript -e 'library(caret); install.packages(unique(modelLookup()[modelLookup()$forClass, c(1)]), Ncpus=4)'
```
- Create a virtual environment with Python >= 3.9.
- Install dependencies:

  ```shell
  pip install -r ./custom_model_runner/requirements.txt -r requirements_test_unit.txt -r requirements_test.txt -r requirements_lint.txt
  ```

- To install DRUM in editable mode:

  ```shell
  pip install -e custom_model_runner/
  ```

- Pytest to your heart's content.
- If you plan to run functional tests, build DRUM, or work on the Java predictor, Java 11 is required. To install Java on Ubuntu:

  ```shell
  sudo apt install openjdk-11-jdk openjdk-11-jre
  ```
To get more information, search for "custom models" and "datarobot user models" in DataRobot Confluence.
- Ask repository admin for write access.
- Develop your contribution in a separate branch, run tests, and push to the repository.
- Create a pull request.
The create-drum-dev-image.sh script builds and saves an image containing your latest local changes to the DRUM codebase. To test new DRUM changes in the DataRobot app, run the script with an argument specifying which drop-in environment to modify, then upload the resulting image as an execution environment.
To contribute to the project, use a regular GitHub process: fork the repo and create a pull request to the original repository.
Artifacts used in tests are located here: ./tests/fixtures/drop_in_model_artifacts.
The code to generate those artifacts (*.ipynb, Pytorch.py, Rmodel.R, and similar files) is also there; check for generate* scripts in ./tests/fixtures/drop_in_model_artifacts and ./tests/fixtures/artifacts.py.
Model examples in ./model_templates are also used in functional testing. In most cases, artifacts for those models are the same as those in ./tests/fixtures/drop_in_model_artifacts and can simply be copied over. If a model template's artifact is not in ./tests/fixtures/drop_in_model_artifacts, check the template's README for instructions.
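The generate* scripts essentially train a model and serialize it to disk. The sketch below shows the shape of that workflow with a trivial stand-in model (the class name and the round-trip check are illustrative; the real scripts train actual frameworks such as scikit-learn, PyTorch, or R models):

```python
# Illustrative only: a trivial stand-in "model" is pickled the same way
# a drop-in test artifact would be. Real generate* scripts live in
# ./tests/fixtures/drop_in_model_artifacts.
import os
import pickle
import tempfile


class MeanRegressor:
    """Toy model that predicts the mean of the training targets."""

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_] * len(X)


model = MeanRegressor().fit([[1], [2], [3]], [10.0, 20.0, 30.0])
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# Round-trip the artifact to verify it deserializes and still predicts.
with open(path, "rb") as f:
    reloaded = pickle.load(f)
print(reloaded.predict([[4]]))  # -> [20.0]
```

Regenerating an artifact this way (with the real framework) is only necessary when the stored fixture no longer matches the framework version under test.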
Some places to ask for help are:
- Open an issue through the GitHub board.