Notes about Machine Learning, Data Science and Analytics Engineering

BentoML – Faster Machine Learning Prototype

In this post, I’ll show you how to create a working prototype of a web application with a working machine learning model in about 50 lines of Python code. Imagine you have a cool project idea. Now you need to implement an MVP (minimum viable product) and show it to your manager / partner / investor, or just show off to your friends.

We will be using BentoML. It is a flexible, high-performance platform that is ideal for building an MVP.

BentoML features:

  • supports multiple machine learning frameworks, including TensorFlow, PyTorch, Keras, XGBoost, and more
  • cloud deployment with Docker, Kubernetes, AWS, Azure, and many more
  • high-performance online serving via API
  • web dashboards and APIs for managing the model registry and deployments

Create MVP

First, download the HTML page with its CSS and images from the repository. Now let’s write runner.py. We will build a demo for the classic iris classification problem. Let’s start by importing the libraries:

from sklearn import svm
from sklearn import datasets
from bentoml.adapters import DataframeInput
from bentoml.frameworks.sklearn import SklearnModelArtifact

Next, let’s import all the necessary Bento elements: the environment, artifacts, API, the base service class, and static content.

from bentoml import (env,  # environment
                     artifacts,  # artifacts
                     api,  # API
                     BentoService,  # service for the base model
                     web_static_content)  # static content

Now let’s write our classifier class. The decorators declare the environment, the model artifact, and the static content; the API method receives data in batches and returns predictions:

@env(infer_pip_packages=True)
@artifacts([SklearnModelArtifact('model')])
@web_static_content('./static')
class Classifier(BentoService):
    @api(input=DataframeInput(), batch=True)
    def test(self, df):
        return self.artifacts.model.predict(df)
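To make the API contract concrete: `DataframeInput` hands the method a pandas DataFrame, and with `batch=True` several rows may arrive in one call. A sketch of what such a batch might look like (the two sample rows are illustrative values, not from the post):

```python
import pandas as pd

# DataframeInput converts the incoming JSON into a pandas DataFrame.
# With batch=True, several iris samples may arrive in a single call:
df = pd.DataFrame(
    [[5.1, 3.5, 1.4, 0.2],   # sepal length, sepal width, petal length, petal width
     [6.7, 3.0, 5.2, 2.3]],
)
print(df.shape)  # one row per sample, four feature columns
```

The service method simply forwards this DataFrame to `self.artifacts.model.predict`.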

And finally, let’s write the launch of our MVP itself:

if __name__ == "__main__":
    # Load the dataset and train a simple model
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    clf = svm.SVC(gamma='scale')
    clf.fit(X, y)
    # Initialize the classifier service
    iris_classifier_service = Classifier()
    # Pack the model into the artifact
    iris_classifier_service.pack('model', clf)
    # Save the service
    saved_path = iris_classifier_service.save()
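Before packing the model into a service, it can be useful to sanity-check it locally. A minimal sketch using only scikit-learn (the train/test split is my addition, not part of the runner.py above):

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Hold out part of the iris dataset to check the model before serving it
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

clf = svm.SVC(gamma='scale')
clf.fit(X_train, y_train)

# Accuracy on the held-out samples
acc = clf.score(X_test, y_test)
print(acc)
```

If the accuracy looks reasonable, the model is ready to be packed and saved as shown above.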

Then the fun begins. Let’s run our runner.py:

python runner.py

A Docker-like bundle will be created; the latest version of it can then be served with:

bentoml serve Classifier:latest

Now our MVP is available at http://127.0.0.1:5000, where predictions can be made by choosing iris parameters in the web form.
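The served API can also be called programmatically. A sketch of building the JSON payload for the `test` endpoint (the exact URL path matches the API method name in our service; the sample values are illustrative):

```python
import json

# A single iris sample: sepal length, sepal width, petal length, petal width
sample = [[5.1, 3.5, 1.4, 0.2]]
payload = json.dumps(sample)
print(payload)

# With the server running, this payload could be sent with the requests library:
# requests.post("http://127.0.0.1:5000/test",
#               data=payload,
#               headers={"content-type": "application/json"})
```

The response would contain the predicted iris class for each row in the batch.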


Docker

A ready-to-deploy Docker container image can now be created with just one command:

bentoml containerize Classifier:latest -t my_prediction_service:v1
docker run -p 5000:5000 my_prediction_service:v1 --workers 2

Conclusion

In this post, we looked at a very simple tool that is great for building and demonstrating an MVP.

Additional material

Link to the repository with the code from the note

Share it

If you liked the article, subscribe to my Telegram channel https://t.me/renat_alimbekov, or support me: Become a Patron!
