Request Logging

Request Logging with Elasticsearch

Install Knative Eventing

Knative Eventing is used for request logging. See the Knative section for installation steps.

Knative Eventing is recommended, but see Running without Knative below if this represents a major barrier for you.

Install EFK Stack

We suggest using the OpenDistro flavour of Elasticsearch for a trial and provide instructions for it.

Other flavours are also available.

Fluentd is used to collect application logs. The request logger component stores the bodies of HTTP requests to models, along with related metadata, as Elasticsearch documents.

Seldon Core and Deploy Configuration

For Seldon Core, add the following options to your core-values.yaml:

executor:
  requestLogger:
    defaultEndpoint: "http://broker-ingress.knative-eventing.svc.cluster.local/seldon-logs/default"
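
The defaultEndpoint above points at a Knative Broker named default in the seldon-logs namespace (the broker-ingress URL format is /&lt;namespace&gt;/&lt;broker-name&gt;). If that broker does not already exist, a minimal sketch for creating it, assuming Knative Eventing is installed, is:

# Knative Broker targeted by the defaultEndpoint above.
# Assumes Knative Eventing is installed and the seldon-logs namespace exists.
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: seldon-logs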

For Seldon Deploy, add the following options to your deploy-values.yaml:

requestLogger:
  create: true

If using OpenDistro, it should also have auth enabled, in line with the instructions in the section referenced above.

Using Deploy Metadata - Auth from Request Logger to Deploy

The request logger can optionally obtain a prediction schema from Deploy’s metadata service, if one is provided. This can be used to enrich requests (e.g. adding the category name for categorical features) for better logging and monitoring.

To use this feature, it is necessary to:

1) Add a prediction schema to a model in the metadata or model registry UI.
2) Have an identity provider that supports the password grant flow and enable this on a client (if this is a problem, please contact Seldon).
3) Have a user that can be used by the request logger.
4) Ensure metadata.pg.enabled is true.
5) Ensure requestLogger.deployHost and requestLogger.authSecret are set (see the sketch after this list). Default values should be fine.
6) Create an auth secret. See below.
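
Putting points 4 and 5 together, the relevant parts of deploy-values.yaml could look like the sketch below; the deployHost address is an assumption and may differ in your environment:

# Sketch of deploy-values.yaml for metadata-enriched request logging.
requestLogger:
  create: true
  # Hypothetical in-cluster address of the Deploy metadata service:
  deployHost: "http://seldon-deploy.seldon-system.svc.cluster.local"
  authSecret: "request-logger-auth"  # the secret created in the step below
metadata:
  pg:
    enabled: true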

In the provided trial installation, a Keycloak client called ‘sd-api’ is configured and a user account is also created. The scripts are under ./prerequisites-setup/keycloak.

The trial installation fully configures the request logger and metadata automatically, but if you are installing components yourself then you need to set this up explicitly.

Setting up a separate client is optional, as the main Deploy client could be used. Setting up a user is currently required.

Create a secret for use by the request logger as below, setting the parameters as per your environment:

kubectl create secret generic request-logger-auth -n seldon-logs \
  --from-literal=OIDC_PROVIDER="${OIDC_PROVIDER}" \
  --from-literal=CLIENT_ID="${CLIENT_ID}" \
  --from-literal=CLIENT_SECRET="${CLIENT_SECRET}" \
  --from-literal=OIDC_SCOPES="${OIDC_SCOPES}" \
  --from-literal=OIDC_USERNAME="${OIDC_USERNAME}" \
  --from-literal=OIDC_PASSWORD="${OIDC_PASSWORD}" \
  --dry-run=client -o yaml | kubectl apply -f -
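
For illustration, the parameters might be set as follows; every value below is a placeholder (CLIENT_ID matches the trial Keycloak client mentioned above) and must be adapted to your identity provider:

# Placeholder values only; substitute your own identity provider details.
OIDC_PROVIDER="https://<keycloak-host>/auth/realms/<realm>"
CLIENT_ID="sd-api"
CLIENT_SECRET="<client-secret>"
OIDC_SCOPES="openid"                  # assumed scope; adjust as needed
OIDC_USERNAME="<request-logger-user>"
OIDC_PASSWORD="<password>"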

Custom Request Logger

It’s possible to add your own custom request logger, containing any custom logic you’d like. To do this, you need to make sure each Seldon Deployment points to the endpoint of your custom request logger.

For this, you will need to set the Seldon Operator environment variable for the logger. Prior to v1.2.x this was "EXECUTOR_REQUEST_LOGGER_DEFAULT_ENDPOINT_PREFIX"; it then became "EXECUTOR_REQUEST_LOGGER_DEFAULT_ENDPOINT". It can be set through the core helm chart’s executor.requestLogger.defaultEndpoint.
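
As a sketch, on a Helm-managed install this could be set like so; the release name, chart reference and namespace are assumptions to adapt to your setup:

# Assumed release/chart names; adjust to your installation.
helm upgrade seldon-core seldonio/seldon-core-operator \
  --namespace seldon-system \
  --reuse-values \
  --set executor.requestLogger.defaultEndpoint=http://seldon-request-logger.seldon-logs.svc.cluster.local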

Below we show an example of how you would do this for our non-Knative default request logger.

Running without Knative

It’s also possible to set up request logging without the Knative dependency. For this, you will have to run a non-Knative request logger, which you can create by applying the configuration below.

Make sure that you edit the ELASTICSEARCH_HOST variable below to point to the correct Elasticsearch service address.

Important: for a normal install, use the helm chart to install the logger. That, however, uses Knative Eventing. If you don’t want that, you can refer to the helm chart to create your own logger spec like the one below.

To do this, first disable the request logger that is installed by the helm charts by setting the following in deploy-values.yaml:

requestLogger:
  create: false

then create the deployment with your custom logger:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: seldon-request-logger
  namespace: seldon-logs
  labels:
    app: seldon-request-logger
spec:
  replicas: 2
  selector:
    matchLabels:
      app: seldon-request-logger
  template:
    metadata:
      labels:
        app: seldon-request-logger
    spec:
      containers:
        - name: user-container
          image: docker.io/seldonio/seldon-request-logger:1.7.0
          imagePullPolicy: Always
          env:
            - name: ELASTICSEARCH_HOST
              value: "elasticsearch-opendistro-es-client-service.seldon-logs.svc.cluster.local"
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_PROTOCOL
              value: "https"
            - name: ELASTICSEARCH_USER
              valueFrom:
                secretKeyRef:
                  name: elastic-credentials
                  key: username
            - name: ELASTICSEARCH_PASS
              valueFrom:
                secretKeyRef:
                  name: elastic-credentials
                  key: password
---
apiVersion: v1
kind: Service
metadata:
  name: seldon-request-logger
  namespace: seldon-logs
spec:
  selector:
    app: seldon-request-logger
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

For a secured Elasticsearch you need to set ELASTICSEARCH_PROTOCOL, ELASTICSEARCH_USER and ELASTICSEARCH_PASS, which in the above example are sourced from the elastic-credentials secret.

Configure Seldon Core to use that endpoint

To make sure the Seldon Deployments send their requests to that endpoint, you need to provide the request logger prefix. In this case, add the following extra attributes to the Seldon Core values.yaml:

executor:
  requestLogger:
    defaultEndpoint: "http://seldon-request-logger.seldon-logs.svc.cluster.local"

It’s important to make sure the value is in the format http://&lt;LOGGER_SERVICE&gt;.&lt;LOGGER_NAMESPACE&gt;.svc.cluster.local.

If you prefer to have one request logger per Kubernetes namespace, set defaultEndpoint: http://&lt;LOGGER_SERVICE&gt;. (note the trailing dot) - the Seldon Service Orchestrator will then append the namespace in which the Seldon Deployment is running as a suffix.
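
For example:

executor:
  requestLogger:
    # Trailing dot: the namespace of each SeldonDeployment is appended.
    defaultEndpoint: "http://seldon-request-logger."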

Overriding the Request Logger Endpoint for a Specific Seldon Deployment

Once you have created the request logger, you have to make sure your deployments point to the correct custom request logger. You can set the custom request logger address by adding the following configuration to each Seldon Core SeldonDeployment file:

      logger:
        mode: all
        url: http://seldon-request-logger.default

The mode configuration can be set to request, response or all.

The url is the address to which requests should be logged. There’s a similar example for KFServing.
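
For context, here is a minimal sketch of where the logger block sits in a SeldonDeployment; the deployment name, model name and image are hypothetical:

# Sketch only; names and image are placeholders.
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: example-model
spec:
  predictors:
    - name: default
      replicas: 1
      graph:
        name: classifier
        type: MODEL
        logger:
          mode: all
          url: http://seldon-request-logger.default
      componentSpecs:
        - spec:
            containers:
              - name: classifier
                image: seldonio/mock_classifier:1.5.0  # hypothetical tag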

Authentication on Elasticsearch

The Seldon Deploy helm values file has two options for connecting to a secured Elasticsearch.

One is token-based authentication. Use this if you have an auth token; this is the method used for the OpenShift cluster logging flavour of Elasticsearch.

The other option is basic authentication.

Elasticsearch can be configured with basic auth. Note this requires an X-Pack feature.
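
On a stock Elasticsearch deployment, basic auth is typically switched on in elasticsearch.yml, for example:

# elasticsearch.yml - enables security (basic auth) via X-Pack.
xpack.security.enabled: true

OpenDistro ships with its own security plugin enabled by default, so this step applies to the standard Elasticsearch distribution.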

A similar configuration then needs to be applied for Kibana (note the environment variables are not quite the same) and for Fluentd.

For Deploy, this needs secrets in both the seldon-logs namespace (containing Elasticsearch and the request logger) and the seldon-system namespace (containing Deploy), as Deploy needs to speak to Elasticsearch using the secret.

This could look like:

ELASTIC_USER=admin
ELASTIC_PASSWORD=admin

kubectl create secret generic elastic-credentials -n seldon-logs \
  --from-literal=username="${ELASTIC_USER}" \
  --from-literal=password="${ELASTIC_PASSWORD}" \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret generic elastic-credentials -n seldon-system \
  --from-literal=username="${ELASTIC_USER}" \
  --from-literal=password="${ELASTIC_PASSWORD}" \
  --dry-run=client -o yaml | kubectl apply -f -
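
To sanity-check the credentials from inside the cluster, something like the following can be run (the service address matches the OpenDistro example earlier; -k skips TLS verification for self-signed certificates):

# One-off pod that tests basic auth against the Elasticsearch service.
kubectl run curl-test -n seldon-logs --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -k -u "${ELASTIC_USER}:${ELASTIC_PASSWORD}" \
  "https://elasticsearch-opendistro-es-client-service.seldon-logs.svc.cluster.local:9200/_cluster/health"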

Debugging

Issues with request logging often turn out to be Knative or Elasticsearch issues. Start by checking Knative.
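
Some starting points, assuming the default names used on this page:

# Check the Knative broker and triggers that route logging events.
kubectl get broker,trigger -n seldon-logs

# Check the request logger pods and their logs.
kubectl get pods -n seldon-logs
kubectl logs -n seldon-logs deploy/seldon-request-logger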
