Seldon Core

These demos assume you have a running Seldon Deploy installation with the relevant permissions to create and update deployments.

They mostly use pre-built models, except for the Kubeflow demo, which includes steps to build a model and push it to MinIO.

Models can be pushed to MinIO or other object stores for use with the pre-packaged model servers, or packaged as Docker containers using the language wrappers. See the Seldon Core docs for the building and hosting stages.
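
For the language-wrapper route, a minimal sketch of the Python model class convention from the Seldon Core docs is shown below; the file name, class name and artifact path are illustrative, and the loading logic will depend on your model.

```python
# MyModel.py -- illustrative name; the Seldon Core Python wrapper expects a class
# exposing a predict(self, X, features_names) method.
import joblib


class MyModel:
    def __init__(self):
        # Load the trained model artifact baked into the image (path is an assumption).
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # X arrives as an array of feature rows; return predictions in the same order.
        return self.model.predict(X)
```

The class is then built into a container with the s2i builder images described in the Seldon Core docs and referenced from a SeldonDeployment; for the pre-packaged servers, the model artifact is instead uploaded to an object store and referenced by its model URI.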

However you create your SeldonDeployments, Seldon Deploy should see them in any namespace visible to it. These demos use the Deploy UI for ease.


Canary Promotion

Launch an SKLearn model and update it via a canary rollout.
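
The demo drives this through the Deploy UI, but underneath a canary is a second predictor on the SeldonDeployment with a traffic split. A rough sketch of creating such a resource with the Kubernetes Python client follows; the deployment name, namespace and canary model URI are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1",
    "kind": "SeldonDeployment",
    "metadata": {"name": "iris", "namespace": "seldon"},  # placeholder name/namespace
    "spec": {
        "predictors": [
            {   # main predictor keeps most of the traffic
                "name": "default",
                "traffic": 75,
                "replicas": 1,
                "graph": {
                    "name": "classifier",
                    "implementation": "SKLEARN_SERVER",
                    "modelUri": "gs://seldon-models/sklearn/iris",
                },
            },
            {   # canary predictor gets a small share while it is evaluated
                "name": "canary",
                "traffic": 25,
                "replicas": 1,
                "graph": {
                    "name": "classifier",
                    "implementation": "SKLEARN_SERVER",
                    "modelUri": "gs://seldon-models/sklearn/iris-v2",  # placeholder URI
                },
            },
        ]
    },
}

api.create_namespaced_custom_object(
    group="machinelearning.seldon.io",
    version="v1",
    namespace="seldon",
    plural="seldondeployments",
    body=seldon_deployment,
)
```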

NVIDIA Triton

Run TensorRT, ONNX, PyTorch and TensorFlow models on GPUs with NVIDIA Triton.

Model Explanations with Anchor Tabular

Launch an income prediction model and get explanations on tabular data.

Model Explanations with Anchor Text

Launch a movie sentiment prediction model and get explanations on text data.

Model Explanations with Anchor Images

Launch an image classifier model and get explanations on image data.

Drift Detection with CIFAR10 Image Classifier

Launch an image classifier model and detect drift.

Outlier Detection with CIFAR10 Image Classifier

Launch an image classifier model and detect outlier prediction requests.

Registering Models and Editing Metadata

Register a Model in the Model Catalog and edit its metadata.

Batch

Launch a batch workflow.

Kubeflow Example

Run a Kubeflow Pipeline that interacts with Seldon Deploy.

Model Accuracy Metrics with Iris Classifier

Launch an iris classifier model and monitor accuracy metrics.
