From ML model to production API endpoint with a few lines of code
BentoML makes it easy to serve and deploy machine learning models in the cloud.
It is an open-source framework for building cloud-native model serving services. BentoML supports most popular ML training frameworks and deployment platforms, including major cloud providers and Docker/Kubernetes.
👉 Join the BentoML Slack community to hear about the latest development updates.
Installing BentoML with pip:
pip install bentoml
Creating a prediction service with BentoML:
import bentoml
from bentoml.handlers import DataframeHandler
from bentoml.artifact import SklearnModelArtifact

@bentoml.env(pip_dependencies=["scikit-learn"])  # defining pip/conda dependencies to be packed
@bentoml.artifacts([SklearnModelArtifact('model')])  # defining required artifacts, typically trained models
class IrisClassifier(bentoml.BentoService):

    @bentoml.api(DataframeHandler)  # defining prediction service endpoint and expected input format
    def predict(self, df):
        # Pre-processing logic and access to trained model artifacts in API function
        return self.artifacts.model.predict(df)
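The DataframeHandler deserializes the HTTP request body into a pandas DataFrame before calling predict. A minimal sketch of the input the API function receives, assuming a JSON body with one row of the four Iris features:

import pandas as pd

# A JSON request body like '[[5.1, 3.5, 1.4, 0.2]]' arrives in predict()
# as a DataFrame with one row per sample and one column per feature:
df = pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])
print(df.shape)  # (1, 4)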
Train a classifier model with the default Iris dataset and pack the trained model with the IrisClassifier BentoService defined above:
from sklearn import svm
from sklearn import datasets

if __name__ == "__main__":
    clf = svm.SVC(gamma='scale')
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    clf.fit(X, y)

    # Create an IrisClassifier service instance
    iris_classifier_service = IrisClassifier()

    # Pack it with the newly trained model artifact
    iris_classifier_service.pack('model', clf)

    # Save the prediction service to a BentoService bundle
    saved_path = iris_classifier_service.save()
You've just created a BentoService SavedBundle: a versioned file archive that is ready for production deployment. It contains the BentoService you defined, the packed trained model artifacts, pre-processing code, dependencies, and other configurations in a single directory.
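Before deploying, you can load the SavedBundle back into Python and test the service locally. A minimal sketch, assuming the saved_path returned by save() in the training script above:

import bentoml
import pandas as pd

# Load the BentoService from the saved bundle directory
svc = bentoml.load(saved_path)

# Call the API function directly with a sample input
print(svc.predict(pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])))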
From a BentoService SavedBundle, you can start a REST API server by providing the file path to the saved bundle:
bentoml serve {saved_path}
The REST API server provides a web UI for testing and debugging the server. If you are running this command on your local machine, visit http://127.0.0.1:5000 in your browser and try sending an API request to the server. You can also send a prediction request with curl from the command line:
curl -i \
--header "Content-Type: application/json" \
--request POST \
--data '[[5.1, 3.5, 1.4, 0.2]]' \
http://localhost:5000/predict
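The same request can be sent from Python with the requests library (a sketch; assumes the API server is running locally on port 5000 as above):

import requests

response = requests.post(
    "http://localhost:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],  # one row of Iris features
)
print(response.text)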
The BentoService SavedBundle directory is structured to work as a Docker build context, so it can be used directly to build an API server container image:
docker build -t my_api_server {saved_path}
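Once built, the image runs like any other container; for example, mapping the API server's port to the host (assuming the default port 5000 used by the development server above):

docker run -p 5000:5000 my_api_server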
You can also deploy your BentoService to cloud services such as AWS Lambda with the bentoml command. The deployment gives you an API endpoint hosting your model that is ready for production use:
bentoml deployment create my-iris-classifier --bento IrisClassifier:{VERSION} --platform=aws-lambda
More detailed code and a walkthrough of this example can be found in the BentoML Quickstart Guide.
Full documentation and API references: https://docs.bentoml.org/
Visit the bentoml/gallery repository for more examples and tutorials.
- Pet Image Classification - Google Colab | nbviewer | source
- Salary Range Prediction - Google Colab | nbviewer | source
- Sentiment Analysis - Google Colab | nbviewer | source
- Fashion MNIST - Google Colab | nbviewer | source
- CIFAR-10 Image Classification - Google Colab | nbviewer | source
- Fashion MNIST - Google Colab | nbviewer | source
- Text Classification - Google Colab | nbviewer | source
- Toxic Comment Classifier - Google Colab | nbviewer | source
- tf.Function model - Google Colab | nbviewer | source
- Titanic Survival Prediction - Google Colab | nbviewer | source
- League of Legend win Prediction - Google Colab | nbviewer | source
- Titanic Survival Prediction - Google Colab | nbviewer | source
- Loan Default Prediction - Google Colab | nbviewer | source
- Prostate Cancer Prediction - Google Colab | nbviewer | source
- Automated end-to-end deployment workflow with BentoML
- Clipper Deployment
- Manual Deployment
Have questions or feedback? Post a new GitHub issue or discuss in our Slack channel.
Want to help build BentoML? Check out our contributing guide and the development guide.
BentoML is under active development and evolving rapidly. It is currently a beta release, and we may change APIs in future releases.
Read more about the latest features and changes in BentoML from the releases page.
BentoML by default collects anonymous usage data using Amplitude. It only collects the BentoML library's own actions and parameters; no user or model data is collected. Here is the code that does it.
This helps the BentoML team understand how the community is using this tool and what to build next. You can easily opt out of usage tracking from either the terminal or Python:
# From terminal:
bentoml config set usage_tracking=false
# From Python:
import bentoml
bentoml.config().set('core', 'usage_tracking', 'False')