
Serving

Serving of ML models in Kubeflow

Istio Integration (for TF Serving)

Using Istio for TF Serving

Seldon Serving

Model serving using Seldon

NVIDIA TensorRT Inference Server

Model serving using the NVIDIA TensorRT Inference Server

TensorFlow Serving

Serving TensorFlow models

TensorFlow Batch Predict

Batch prediction for TensorFlow models

PyTorch Serving

Instructions for serving a PyTorch model with Seldon