Knative Installation

Install and configure Knative

This section walks through the installation of Knative for Seldon Deploy.

Knative Eventing and Serving are used for request logging and for post-predict detector components (outlier, drift, metrics). See the request logging documentation for more.

Knative Serving is also needed for KFServing.

Knative can be installed in other ways. If you have an existing install, then just apply the steps below that customize it.


Install Knative Serving

KNATIVE_SERVING_URL=https://github.com/knative/serving/releases/download
SERVING_VERSION=v0.18.1
SERVING_BASE_VERSION=v0.18.0

kubectl apply -f ${KNATIVE_SERVING_URL}/${SERVING_VERSION}/serving-crds.yaml
kubectl apply -f ${KNATIVE_SERVING_URL}/${SERVING_VERSION}/serving-core.yaml

kubectl apply -f https://github.com/knative-sandbox/net-istio/releases/download/${SERVING_BASE_VERSION}/release.yaml

If you are using Seldon Core Analytics for Prometheus (recommended), then add these annotations so that the Knative metrics are scraped:

kubectl annotate -n knative-serving service autoscaler prometheus.io/scrape=true
kubectl annotate -n knative-serving service autoscaler prometheus.io/port=9090

kubectl annotate -n knative-serving service activator-service prometheus.io/scrape=true
kubectl annotate -n knative-serving service activator-service prometheus.io/port=9090
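
To confirm the annotations were applied, you can inspect the services (an optional sanity check):

kubectl get service autoscaler -n knative-serving -o jsonpath='{.metadata.annotations}'
kubectl get service activator-service -n knative-serving -o jsonpath='{.metadata.annotations}'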

Configure Cluster Local Gateway

Knative requires a Cluster Local Gateway to work properly. This can be added to an existing Istio 1.6.x installation by generating the required manifests:

cat << EOF > ./local-cluster-gateway.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: empty
  components:
    ingressGateways:
      - name: cluster-local-gateway
        enabled: true
        label:
          istio: cluster-local-gateway
          app: cluster-local-gateway
        k8s:
          service:
            type: ClusterIP
            ports:
            - port: 15020
              name: status-port
            - port: 80
              name: http2
            - port: 443
              name: https
  values:
    gateways:
      istio-ingressgateway:
        debug: error
EOF

istioctl manifest generate -f local-cluster-gateway.yaml > manifest.yaml

Note the profile: empty line. This ensures that the generated manifest only contains gateway-related resources.

Once the manifest is generated, inspect it, and then use kubectl to apply it:

kubectl apply -f manifest.yaml

The above manifest and istioctl command no longer work with istioctl 1.7. For 1.7 we suggest installing Istio with:

cat << EOF > ./local-cluster-gateway.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        autoInject: disabled
      useMCP: false
    gateways:
      istio-ingressgateway:
        debug: error
  addonComponents:
    pilot:
      enabled: true
    prometheus:
      enabled: false
  components:
    ingressGateways:
      - name: cluster-local-gateway
        enabled: true
        label:
          istio: cluster-local-gateway
          app: cluster-local-gateway
        k8s:
          service:
            type: ClusterIP
            ports:
            - port: 15020
              name: status-port
            - port: 80
              targetPort: 8080
              name: http2
            - port: 443
              targetPort: 8443
              name: https
EOF

./istioctl install -f local-cluster-gateway.yaml

Read more about gateway configuration in Istio’s documentation.
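
Whichever Istio version you used, you can confirm the gateway is running by checking for its service and pods (this assumes the gateway was installed into the usual istio-system namespace):

kubectl get service cluster-local-gateway -n istio-system
kubectl get pods -n istio-system -l istio=cluster-local-gateway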

Test Knative Serving

To check the installed version of Knative Serving:

kubectl get namespace knative-serving -o 'go-template={{index .metadata.labels "serving.knative.dev/release"}}'

To verify the install, run kubectl apply -f on a file containing the below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"

If this is applied in the default namespace, then do a pod watch in another window (kubectl get pod -n default -w) and curl it with the command below (otherwise change default to your chosen namespace):

kubectl run --quiet=true -it --rm curl --image=radial/busyboxplus:curl --restart=Never --  \
curl -v -X GET "http://helloworld-go.default.svc.cluster.local"

You should get a successful response and a pod should come up in the default namespace. If you don't, see the note below on private registries before turning to resources such as the Seldon and Knative Slack channels.
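
You can also check the Knative Service directly; it should report READY as True (the name and namespace below match the sample manifest above):

kubectl get ksvc helloworld-go -n default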

Clean up with kubectl delete -f on the same file as before.

Install Knative Eventing

KNATIVE_EVENTING_URL=https://github.com/knative/eventing/releases/download
EVENTING_VERSION=v0.18.3

kubectl apply --filename ${KNATIVE_EVENTING_URL}/${EVENTING_VERSION}/eventing-crds.yaml
kubectl apply --filename ${KNATIVE_EVENTING_URL}/${EVENTING_VERSION}/eventing-core.yaml

kubectl apply --filename ${KNATIVE_EVENTING_URL}/${EVENTING_VERSION}/in-memory-channel.yaml
kubectl apply --filename ${KNATIVE_EVENTING_URL}/${EVENTING_VERSION}/mt-channel-broker.yaml

kubectl rollout status -n knative-eventing deployment/imc-controller
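
You can wait for the remaining eventing deployments in the same way. The deployment names below are the ones shipped in the upstream v0.18 manifests; adjust them if your versions differ:

kubectl rollout status -n knative-eventing deployment/eventing-controller
kubectl rollout status -n knative-eventing deployment/mt-broker-controller
kubectl rollout status -n knative-eventing deployment/mt-broker-filter
kubectl rollout status -n knative-eventing deployment/mt-broker-ingress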

Configure Knative Eventing Broker

Create a Knative Eventing broker that will handle the request logging:

kubectl create -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: seldon-logs
EOF
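
Check that the broker reports READY as True before relying on it:

kubectl get broker -n seldon-logs default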

Test KNative Eventing

To check the installed version of Knative Eventing:

kubectl get namespace knative-eventing -o 'go-template={{index .metadata.labels "eventing.knative.dev/release"}}'

To test Knative Eventing it is easiest to have Seldon fully installed with request logging and a model running.

Make a prediction to a model following one of the Seldon Core demos. You should see entries under Requests.
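
As a sketch, a prediction against a typical Seldon Core REST endpoint looks like the following; the ingress host, namespace and model name are placeholders to replace with your own:

curl -s -X POST "http://<ingress-host>/seldon/<namespace>/<model-name>/api/v1.0/predictions" \
    -H "Content-Type: application/json" \
    -d '{"data": {"ndarray": [[1.0, 2.0, 3.0, 4.0]]}}'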

If you see entries under requests, you are all good.

If you don’t see entries under requests, first find the request logger pod in the seldon-logs namespace. Tail its logs (kubectl logs -n seldon-logs <pod-name> -f) and make a request again. Do you see output?

If this doesn’t work then find the pod for the model representing the SeldonDeployment. Tail the logs of the seldon-container-engine container and make a prediction again.

If the predictions aren't being sent, then it could be a problem with the broker URL (executor.requestLogger.defaultEndpoint in helm get values -n seldon-system seldon-core) or the broker (kubectl get broker -n seldon-logs).

If there are no requests and no obvious problem with the broker transmission, then it could be the trigger stage.

First try kubectl get trigger -n seldon-logs to check the trigger status.
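
If the trigger is not ready, describing it will show its conditions and any recent events:

kubectl describe trigger -n seldon-logs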

If that looks healthy then we need to debug the knative trigger process.

Do a kubectl apply -f to the default namespace on a file containing the below (or change references to default for a different namespace):

# event-display app deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-display
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels: &labels
      app: event-display
  template:
    metadata:
      labels: *labels
    spec:
      containers:
        - name: helloworld-python
          image: gcr.io/knative-releases/github.com/knative/eventing-sources/cmd/event_display
---
# Service that exposes event-display app.
# This will be the subscriber for the Trigger
kind: Service
apiVersion: v1
metadata:
  name: event-display
  namespace: default
spec:
  selector:
    app: event-display
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
# Trigger to send events to service above
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: event-display
  namespace: seldon-logs
spec:
  broker: default
  subscriber:
    uri: http://event-display.default:80

Now find the event-display pod and tail its logs (kubectl get pod -n default and kubectl logs <pod_name>). Make a prediction to a model following one of the seldon core demos.

You should see something in the event-display logs. Even an event decoding error message is good.

To eliminate any Seldon components, we can send an event directly to the broker. There is an example in the Knative docs.
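
As a sketch, the multi-tenant channel broker installed above typically exposes the broker at http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker-name> - check kubectl get broker -n seldon-logs for the exact URL. A manual CloudEvent can then be posted from inside the cluster along these lines (the event id, type and source values are arbitrary):

kubectl run --quiet=true -it --rm curl --image=radial/busyboxplus:curl --restart=Never --  \
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/seldon-logs/default" \
    -X POST \
    -H "Ce-Id: manual-test-1" \
    -H "Ce-Specversion: 1.0" \
    -H "Ce-Type: test.manual" \
    -H "Ce-Source: manual-test" \
    -H "Content-Type: application/json" \
    -d '{"msg": "hello"}'

A 2xx response means the broker accepted the event, and it should then appear in the event-display logs.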

What we've done now corresponds to the Knative Eventing hello-world. Nothing at all in the event-display pod means Knative Eventing is not working.

Occasionally you see a RevisionMissing status on the ksvc and a ContainerCreating message on its Revision. If this happens, check the Deployment and, if there are no issues, delete and try again.

Hopefully you've got things working by this point. If not, check the pods in the knative-eventing namespace. If that doesn't help find the problem, then the Knative Slack and/or Seldon Slack can help with further debugging.

Knative with a private registry

By default Knative assumes the image registries used will be public. If you use a company-internal private registry, then you have to configure that.

There is a property called queueSidecarImage in the config-deployment configmap in the knative-serving namespace. This needs to be edited to point to your registry.
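
For example, with kubectl patch (the registry and image path below are purely illustrative):

kubectl patch configmap config-deployment -n knative-serving \
    --type merge -p '{"data":{"queueSidecarImage":"my-registry.example.com/knative-releases/queue:v0.18.1"}}'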

Tag resolution configuration can also be required. We suggest trying the above first. The Knative docs provide more details on tag resolution.
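
If you do need it, tag resolution can typically be skipped for specific registries via the registriesSkippingTagResolving key in the same config-deployment configmap; treat the registry host below as illustrative and confirm the key against the Knative docs for your version:

kubectl patch configmap config-deployment -n knative-serving \
    --type merge -p '{"data":{"registriesSkippingTagResolving":"my-registry.example.com"}}'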
