Argo
This page provides steps for installing Argo.
Installation of Argo
We suggest installing Argo following the Argo project's own instructions. At the time of writing these are:
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
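To confirm the install succeeded before moving on, you can wait for the Argo deployments to roll out (a quick sanity check; the deployment names workflow-controller and argo-server match the stable install manifest, but may differ in other Argo versions):

```shell
# Wait for the Argo controller and server deployments to become ready.
argo_ns=argo
kubectl rollout status -n ${argo_ns} deployment/workflow-controller
kubectl rollout status -n ${argo_ns} deployment/argo-server
# All pods in the argo namespace should reach Running status.
kubectl get pods -n ${argo_ns}
```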
Per Namespace Setup
If you intend to use batch jobs in the namespace then you'll need to create a service account for this:
namespace=seldon
kubectl apply -n ${namespace} -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workflow
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - "*"
- apiGroups:
  - "apps"
  resources:
  - deployments
  verbs:
  - "*"
- apiGroups:
  - ""
  resources:
  - pods/log
  verbs:
  - "*"
- apiGroups:
  - machinelearning.seldon.io
  resources:
  - "*"
  verbs:
  - "*"
EOF
kubectl create -n ${namespace} serviceaccount workflow
kubectl create rolebinding -n ${namespace} workflow --role=workflow --serviceaccount=${namespace}:workflow
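Optionally, you can verify the role binding took effect by impersonating the new service account and checking a couple of the permissions granted by the Role above:

```shell
# Sanity-check the workflow service account's permissions via impersonation.
namespace=seldon
kubectl auth can-i create pods -n ${namespace} \
  --as=system:serviceaccount:${namespace}:workflow
kubectl auth can-i get pods/log -n ${namespace} \
  --as=system:serviceaccount:${namespace}:workflow
```

Both commands should print `yes`.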
You will also need to configure a secret so that batch jobs can communicate with S3 or MinIO (here assumed to be installed in the minio-system namespace). The batch job processor in Seldon Deploy uses a Storage Initializer mechanism similar to the one used by the pre-packaged model servers.
Rclone-based storage initializer (default)
Leaving the default helm value for batchjobs.storageInitializer:
batchjobs:
storageInitializer:
image: seldonio/rclone-storage-initializer:1.8.0-dev
will result in the rclone-based storage initializer being used. Rclone offers compatibility with over 40 different cloud storage products, which makes it the default choice.
For the MinIO installation above, the secret is as follows:
MINIOUSER=minioadmin
MINIOPASSWORD=minioadmin
namespace=seldon
kubectl apply -n ${namespace} -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: seldon-job-secret
type: Opaque
stringData:
  RCLONE_CONFIG_S3_TYPE: s3
  RCLONE_CONFIG_S3_PROVIDER: minio
  RCLONE_CONFIG_S3_ENV_AUTH: "false"
  RCLONE_CONFIG_S3_ACCESS_KEY_ID: ${MINIOUSER}
  RCLONE_CONFIG_S3_SECRET_ACCESS_KEY: ${MINIOPASSWORD}
  RCLONE_CONFIG_S3_ENDPOINT: http://minio.minio-system.svc.cluster.local:9000
EOF
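You can check that the secret gives working access to MinIO by running rclone in a throwaway pod that loads the secret as environment variables. This is a sketch: the image tag rclone/rclone:1.57.0 is an assumed example, and rclone reads the RCLONE_CONFIG_S3_* variables to define a remote named s3, so `rclone lsd s3:` should list the buckets:

```shell
# Run a one-off pod with the secret injected via envFrom; it lists the
# buckets in minio and is deleted when it exits.
namespace=seldon
kubectl run rclone-check -n ${namespace} --rm -it --restart=Never \
  --image=rclone/rclone:1.57.0 \
  --overrides='{"spec":{"containers":[{"name":"rclone-check","image":"rclone/rclone:1.57.0","args":["lsd","s3:"],"envFrom":[{"secretRef":{"name":"seldon-job-secret"}}]}]}}'
```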
Using a custom storage initializer
If you would like to use a different storage initializer, e.g. the kfserving storage initializer, you can set this by modifying the aforementioned deploy-values.yaml:
batchjobs:
storageInitializer:
image: gcr.io/kfserving/storage-initializer:v0.4.0
The corresponding secret would also need to be modified:
MINIOUSER=minioadmin
MINIOPASSWORD=minioadmin
namespace=seldon
kubectl apply -n ${namespace} -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: seldon-job-secret
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: ${MINIOUSER}
  AWS_SECRET_ACCESS_KEY: ${MINIOPASSWORD}
  AWS_ENDPOINT_URL: http://minio.minio-system.svc.cluster.local:9000
  USE_SSL: "false"
EOF
Verification and Debugging
You can check the Argo install by going to the Argo UI. First, port-forward the argo-server service:
kubectl port-forward -n argo svc/argo-server 2746
Then go to http://localhost:2746/ in the browser.
If Argo is set up correctly then you should be able to run the batch demo.
To see running jobs, you can use the Argo UI or its CLI, if you have installed it. You can list jobs in the namespace with argo list -n <namespace>. Running argo get on a workflow tells you the pod names of its steps.
To see the logs for a running job, go to the relevant pod. If you don't have the argo CLI, you can work out the pod name: there should be a pod in the namespace with Running status and a name similar to the model name.
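The pod lookup can also be done with plain kubectl, since Argo labels every step pod with the name of its workflow and runs user code in the "main" container (a sketch; assumes at least one workflow pod exists in the namespace):

```shell
# List pods created by Argo workflows in the namespace.
namespace=seldon
kubectl get pods -n ${namespace} -l workflows.argoproj.io/workflow
# Grab the first matching pod and tail the logs of its main container.
pod=$(kubectl get pods -n ${namespace} -l workflows.argoproj.io/workflow \
  -o jsonpath='{.items[0].metadata.name}')
kubectl logs -n ${namespace} ${pod} -c main
```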