Service Mesh Install

OpenShift Container Platform 3.11

OpenShift Container Platform 3.11 Service Mesh Installation Guide

Red Hat OpenShift Documentation Team

Abstract

Getting started with Service Mesh installation on OpenShift

Chapter 1. Installing Red Hat OpenShift Service Mesh

1.1. Product overview

1.1.1. Red Hat OpenShift Service Mesh overview

Important

This release of Red Hat OpenShift Service Mesh is a Technology Preview release only. Technology Preview releases are not supported with Red Hat production service-level agreements (SLAs), might not be functionally complete, and are not recommended for production use. Using Red Hat OpenShift Service Mesh on a cluster renders the whole OpenShift cluster a technology preview, that is, an unsupported state. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

A service mesh is the network of microservices that make up applications in a distributed microservice architecture, together with the interactions between those microservices. As a service mesh grows in size and complexity, it can become harder to understand and manage.

Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the service mesh using the control plane features.

Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide:

  • Discovery
  • Load balancing
  • Service-to-service authentication
  • Failure recovery
  • Metrics
  • Monitoring

Service Mesh also provides more complex operational functions including:

  • A/B testing
  • Canary releases
  • Rate limiting
  • Access control
  • End-to-end authentication

1.1.2. Red Hat OpenShift Service Mesh product architecture

Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane:

The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub.

  • Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry.

  • Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, request tracing) and collects telemetry data from the Envoy proxy and other services.
  • Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers).
  • Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel.
  • Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform.

Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster.

1.1.3. Supported configurations

The following are the only supported configurations for Red Hat OpenShift Service Mesh 0.12.TechPreview:

  • Red Hat OpenShift Container Platform version 3.11.
  • Red Hat OpenShift Container Platform version 4.1.
Note

OpenShift Online and OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh 0.12.TechPreview.

  • The deployment must be contained to a single OpenShift Container Platform cluster that is not federated.
  • This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64.
  • Red Hat OpenShift Service Mesh is only suited for OpenShift Container Platform Software Defined Networking (SDN) configured as a flat network with no external providers.
  • This release supports only configurations where all Service Mesh components are contained in the OpenShift cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario.
  • The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.

For more information about support for this technology preview, see this Red Hat Knowledge Base article.

1.1.4. Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations

An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift.

The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways:

1.1.4.1. Multi-tenant installations

Note

Single-tenant control plane installations are known to cause issues with OpenShift Container Platform restarts and upgrades. Multi-tenant control plane installations are the default configuration starting with Red Hat OpenShift Service Mesh 0.12.TechPreview.

Red Hat OpenShift Service Mesh allows you to configure multi-tenant control plane installations, specify the namespaces that can access its Service Mesh, and isolate the Service Mesh from other control plane instances.

1.1.4.2. Automatic injection

The upstream Istio community installation automatically injects the sidecar to namespaces you have labeled.

Red Hat OpenShift Service Mesh does not automatically inject the sidecar to any namespaces, but requires you to specify the sidecar.istio.io/inject annotation as illustrated in the Automatic sidecar injection section.

1.1.4.3. Role Based Access Control features

Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. You can identify subjects by username or by specifying a set of properties and apply access controls accordingly.

The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix.

Upstream Istio community matching request headers example

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.headers[<header>]: "value"

Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression.

Red Hat OpenShift Service Mesh matching request headers by using regular expressions

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: httpbin-client-binding
  namespace: httpbin
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[<header>]: "<regular expression>"
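
For example, a hypothetical binding that only matches requests whose User-Agent header begins with Mozilla might specify the following (the header name and expression here are illustrative):

spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
    properties:
      request.regex.headers[User-Agent]: "^Mozilla.*"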

1.1.4.4. Automatic route creation

Warning

Automatic route creation is currently incompatible with multi-tenant Service Mesh installations. Ensure that it is disabled in your ServiceMeshControlPlane if you plan to attempt a multi-tenant installation.

Red Hat OpenShift Service Mesh automatically manages OpenShift routes for Istio gateways. When an Istio gateway is created, updated, or deleted in the Service Mesh, a matching OpenShift route is created, updated, or deleted.

If the following gateway is created:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - www.bookinfo.com
    - bookinfo.example.com

The following OpenShift routes are automatically created:

$ oc -n istio-system get routes
NAME              HOST/PORT                                            PATH      SERVICES               PORT      TERMINATION   WILDCARD
gateway1-lvlfn    bookinfo.example.com                                           istio-ingressgateway   <all>                   None
gateway1-scqhv    www.bookinfo.com                                               istio-ingressgateway   <all>                   None

If this gateway is deleted, Red Hat OpenShift Service Mesh will delete the routes.

Note

Manually created routes are not managed by the Service Mesh.

1.1.4.4.1. Catch-all domains

Red Hat OpenShift Service Mesh does not support catch-all or wildcard domains. If Service Mesh finds a catch-all domain in the gateway definition, Red Hat OpenShift Service Mesh will create the route but relies on OpenShift to create a default hostname. The route that Service Mesh creates will not be a catch-all route and will have a hostname with a <route-name>[-<namespace>].<suffix> structure.
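
For example, if a gateway defines the catch-all host *, a route named gateway1-h76fw created in the istio-system namespace on a cluster whose default route suffix is apps.example.com would receive a hostname similar to the following (all values here are illustrative):

gateway1-h76fw-istio-system.apps.example.com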

1.1.4.4.2. Subdomains

Subdomains are supported, but they are not enabled by default in OpenShift. Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only work after you enable subdomains in OpenShift. See the OpenShift documentation on Wildcard Routes for more information.
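
For example, on OpenShift Container Platform 3.11 you typically enable wildcard routes by setting an environment variable on the router; this sketch assumes the default router runs as the router deployment configuration in the default namespace:

$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true -n default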

1.1.4.4.3. TLS

OpenShift routes are configured to support TLS.

Note

All OpenShift routes created by Red Hat OpenShift Service Mesh are in the istio-system namespace.

1.1.4.5. OpenSSL

Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying UBI8 operating system.

1.1.4.6. Container Network Interface (CNI)

Red Hat OpenShift Service Mesh includes CNI, which provides you with an alternate way to configure application pod networking. When you enable CNI, it replaces the init-container network configuration, eliminating the need to grant service accounts and namespaces additional privileges by modifying their Security Context Constraints (SCCs).

1.1.5. Red Hat OpenShift Service Mesh installation overview

The Red Hat OpenShift Service Mesh installation process creates four different projects (namespaces):

  • istio-operator project (1 pod)
  • istio-system project (17 pods)
  • kiali-operator project (1 pod)
  • observability project (1 pod)

You first install a Kubernetes Operator. This Operator defines and monitors a custom resource that manages the deployment, updating, and deletion of the Service Mesh components.

Depending on how you define the custom resource file, you can install one or more of the following components when you install the Service Mesh:

  • Istio - based on the open source Istio project, lets you connect, secure, control, and observe the microservices that make up your applications.
  • Jaeger - based on the open source Jaeger project, lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems.
  • Kiali - based on the open source Kiali project, provides observability for your service mesh. Using Kiali lets you view configurations, monitor traffic, and view and analyze traces in a single console.

1.2. Prerequisites

1.2.1. Red Hat OpenShift Service Mesh installation prerequisites

Before you can install Red Hat OpenShift Service Mesh, you must meet the following prerequisites:

  • Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.
  • Install OpenShift Container Platform version 3.11 or higher.

  • Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path.
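
    For example, you can confirm that the client version matches the server version by running:

    $ oc version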

1.2.1.1. Preparing the OpenShift Container Platform installation

Before you can install the Service Mesh into an OpenShift Container Platform installation, you must modify the master configuration and each of the schedulable nodes. These changes enable the features that are required in the Service Mesh and also ensure that Elasticsearch features function correctly.

1.2.1.2. Updating the node configuration

Note

Updating the node configuration is not necessary if you are running OpenShift Container Platform 4.1.

To run the Elasticsearch application, you must make a change to the kernel configuration on each node. This change is handled through the sysctl service.

Make these changes on each node within your OpenShift Container Platform installation:

  1. Create a file named /etc/sysctl.d/99-elasticsearch.conf with the following contents:

    vm.max_map_count = 262144
  2. Execute the following command:

    $ sysctl vm.max_map_count=262144
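
    You can verify that the setting is active by querying the current value:

    $ sysctl vm.max_map_count
    vm.max_map_count = 262144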

1.2.1.3. Updating the container registry

Note

If you are running OpenShift Container Platform 3.11 on-premise, follow these steps to configure access to registry.redhat.io.

To pull the Red Hat OpenShift Service Mesh images for the installation process, configure access to the private registry.redhat.io from OpenShift Container Platform 3.11:

  1. Run the following command:

    $ docker login registry.redhat.io
  2. Enter your Red Hat username and password when prompted.
  3. When you successfully log in, the ~/.docker/config.json file is created with the following contents:

    {
         "auths": {
             "registry.redhat.io": {
                 "auth": "XXXXXXXXXXXXXXXXXX"
             }
         }
    }
  4. On each node, create a /var/lib/origin/.docker directory.
  5. On each node, copy the ~/.docker/config.json file to the /var/lib/origin/.docker directory.
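
    If you have SSH access to your nodes, a loop similar to this sketch can distribute the file to each node (the node names and SSH access method are assumptions; adapt them to your environment):

    $ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
          ssh $node 'mkdir -p /var/lib/origin/.docker'
          scp ~/.docker/config.json $node:/var/lib/origin/.docker/config.json
      done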

1.3. Installing Service Mesh

1.3.1. Installing the Red Hat OpenShift Service Mesh

Installing the Service Mesh involves installing the Operator, and then creating and managing a custom resource definition file to deploy the control plane.

Note

Starting with Red Hat OpenShift Service Mesh 0.9.TechPreview, Mixer’s policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement.

Note

Single-tenant control plane installations are known to cause issues with OpenShift Container Platform restarts and upgrades. Multi-tenant control plane installations are the default configuration starting with Red Hat OpenShift Service Mesh 0.12.TechPreview.

1.3.1.1. Installing the Operators

The Service Mesh installation process introduces an Operator to manage the installation of the control plane within the istio-operator namespace. This Operator defines and monitors a custom resource related to the deployment, update, and deletion of the control plane.

Starting with Red Hat OpenShift Service Mesh 0.12.TechPreview, you must install the Jaeger Operator and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane.

1.3.1.1.1. Installing the Jaeger Operator

You must install the Jaeger Operator for the Red Hat OpenShift Service Mesh Operator to install the control plane.

  1. Log into the OpenShift Container Platform as a cluster administrator.
  2. Run the following commands to install the Jaeger Operator:

    $ oc new-project observability # create the project for the jaeger operator
    $ oc create -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/crds/jaegertracing_v1_jaeger_crd.yaml
    $ oc create -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/service_account.yaml
    $ oc create -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/role.yaml
    $ oc create -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/role_binding.yaml
    $ oc create -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/operator.yaml
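
    You can verify that the Jaeger Operator deployed correctly before proceeding; the jaeger-operator pod in the observability project should reach the Running state:

    $ oc get pods -n observability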
1.3.1.1.2. Installing the Kiali Operator

You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the control plane.

  1. Log into the OpenShift Container Platform as a cluster administrator.
  2. Run the following command to install the Kiali Operator:

    $ bash <(curl -L https://git.io/getLatestKialiOperator) --operator-image-version v1.0.0 --operator-watch-namespace '**' --accessible-namespaces '**' --operator-install-kiali false
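
    You can verify that the Kiali Operator deployed correctly before proceeding; the operator pod in the kiali-operator project should reach the Running state:

    $ oc get pods -n kiali-operator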
1.3.1.1.3. Installing the Red Hat OpenShift Service Mesh Operator
Note

You must install the Jaeger Operator and the Kiali Operator before you install the Red Hat OpenShift Service Mesh Operator.

  1. Log into the OpenShift Container Platform as a cluster administrator.
  2. If the istio-operator and istio-system namespaces do not exist, run these commands to create the namespaces:

    $ oc new-project istio-operator
    $ oc new-project istio-system
  3. Run this command to install the Red Hat OpenShift Service Mesh Operator. You can run it from any host with access to the cluster.

    $ oc apply -n istio-operator -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.12/deploy/servicemesh-operator.yaml

1.3.1.2. Verifying the Operator installation

  1. Log into the OpenShift Container Platform as a cluster administrator.
  2. Run this command to verify that the Operator is installed correctly.

    $ oc get pods -n istio-operator
  3. When the Operator pod reaches the Running state, the Operator is installed correctly.

    NAME                              READY     STATUS    RESTARTS   AGE
    istio-operator-5cd6bcf645-fvb57   1/1       Running   0          1h

1.3.1.3. Creating a custom resource file

Note

The istio-system namespace is used as an example throughout the Service Mesh documentation, but you can use other namespaces as necessary.

To deploy the Service Mesh control plane, you must deploy a custom resource. A custom resource allows you to introduce your own API into a Kubernetes project or cluster. You create a custom resource yaml file that defines the project parameters and creates the object. This example custom resource yaml file contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 0.12.TechPreview images based on Red Hat Enterprise Linux (RHEL).

Important

The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account (SaaS or On-Premises).

Full example istio-installation.yaml

  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  metadata:
    name: basic-install
  spec:

    threeScale:
      enabled: false

    istio:
      global:
        proxy:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 128Mi
        multitenant: true

      gateways:
        istio-egressgateway:
          autoscaleEnabled: false
        istio-ingressgateway:
          autoscaleEnabled: false
          ior_enabled: false

      mixer:
        policy:
          autoscaleEnabled: false

        telemetry:
          autoscaleEnabled: false
          resources:
            requests:
              cpu: 100m
              memory: 1G
            limits:
              cpu: 500m
              memory: 4G

      pilot:
        autoscaleEnabled: false
        traceSampling: 100.0

      kiali:
        dashboard:
          user: admin
          passphrase: admin
      tracing:
        enabled: true

1.3.1.4. Custom resource parameters

The following examples illustrate use of the supported custom resource parameters for Red Hat OpenShift Service Mesh and the tables provide additional information about supported parameters.

Important

The resources you configure for Red Hat OpenShift Service Mesh with these custom resource parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift cluster. Configure these parameters based on the available resources in your current cluster configuration.

1.3.1.4.1. Istio global example
Note

In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false.

  istio:
    global:
      hub: `maistra/` or `registry.redhat.io/openshift-istio-tech-preview/`
      tag: 0.12.0
      proxy:
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi
      mtls:
        enabled: false
      disablePolicyChecks: true
      policyCheckFailOpen: false
      imagePullSecrets:
        - MyPullSecret
Note

See the OpenShift documentation on Compute Resources for additional details on specifying CPU and memory resources for the containers in your pod.

Table 1.1. Global parameters

Parameter | Description | Values | Default value
disablePolicyChecks | This boolean indicates whether to enable policy checks | true/false | true
policyCheckFailOpen | This boolean indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached | true/false | false
tag | The tag that the Operator uses to pull the Istio images | A valid container image tag | 0.12.0
hub | The hub that the Operator uses to pull Istio images | A valid image repository | maistra/ or registry.redhat.io/openshift-istio-tech-preview/
mtls | This controls whether to enable Mutual Transport Layer Security (mTLS) between services by default | true/false | false
imagePullSecrets | If access to the registry providing the Istio images is secure, list an imagePullSecret here | redhat-registry-pullsecret OR quay-pullsecret | None

Table 1.2. Proxy parameters

Type | Parameter | Description | Values | Default value
Resources | cpu | The amount of CPU resources requested for Envoy proxy | CPU resources in millicores based on your environment’s configuration | 100m
Resources | memory | The amount of memory requested for Envoy proxy | Available memory in bytes based on your environment’s configuration | 128Mi
Limits | cpu | The maximum amount of CPU resources Envoy proxy is permitted to use | CPU resources in millicores based on your environment’s configuration | 2000m
Limits | memory | The maximum amount of memory Envoy proxy is permitted to use | Available memory in bytes based on your environment’s configuration | 128Mi

1.3.1.4.2. Container Network Interface (CNI) example
Warning

If Container Network Interface (CNI) is enabled, manual sidecar injection will work, but pods will not be able to communicate with the control plane unless they are a part of the ServiceMeshMemberRoll resource.

  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  metadata:
    name: basic-install
  spec:

    istio:
      istio_cni:
        enabled: true

Table 1.3. CNI parameter

Type | Parameter | Description | Values | Default value
1.3.1.4.3. Istio gateway example
Warning

Automatic route creation does not currently work with multi-tenancy. Set ior_enabled to false for multi-tenant installations.

  gateways:
    istio-egressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
    istio-ingressgateway:
      autoscaleEnabled: false
      autoscaleMin: 1
      autoscaleMax: 5
      ior_enabled: false

Table 1.4. Istio Gateway parameters

Type | Parameter | Description | Values | Default value
istio-egressgateway | autoscaleEnabled | This parameter enables autoscaling | true/false | true
istio-egressgateway | autoscaleMin | The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 1
istio-egressgateway | autoscaleMax | The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 5
istio-ingressgateway | autoscaleEnabled | This parameter enables autoscaling | true/false | true
istio-ingressgateway | autoscaleMin | The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 1
istio-ingressgateway | autoscaleMax | The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 5
istio-ingressgateway | ior_enabled | This parameter controls whether Istio routes are automatically configured in OpenShift | true/false | true

1.3.1.4.4. Istio Mixer example
  mixer:
    enabled: true
    policy:
      autoscaleEnabled: false

    telemetry:
      autoscaleEnabled: false
      resources:
        requests:
          cpu: 100m
          memory: 1G
        limits:
          cpu: 500m
          memory: 4G

Table 1.5. Istio Mixer policy parameters

Parameter | Description | Values | Default value
enabled | This enables Mixer | true/false | true
autoscaleEnabled | This controls whether to enable autoscaling. Disable this for small environments. | true/false | true
autoscaleMin | The minimum number of pods to deploy based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 1
autoscaleMax | The maximum number of pods to deploy based on the autoscaleEnabled setting | A valid number of allocatable pods based on your environment’s configuration | 5

Table 1.6. Istio Mixer telemetry parameters

Type | Parameter | Description | Values | Default value
Resources | cpu | The amount of CPU resources requested for Mixer telemetry | CPU resources in millicores based on your environment’s configuration | 1000m
Resources | memory | The amount of memory requested for Mixer telemetry | Available memory in bytes based on your environment’s configuration | 1G
Limits | cpu | The maximum amount of CPU resources Mixer telemetry is permitted to use | CPU resources in millicores based on your environment’s configuration | 4800m
Limits | memory | The maximum amount of memory Mixer telemetry is permitted to use | Available memory in bytes based on your environment’s configuration | 4G

1.3.1.4.5. Istio Pilot example
  pilot:
    resources:
      requests:
        cpu: 100m
    autoscaleEnabled: false
    traceSampling: 100.0

Table 1.7. Istio Pilot parameters

Parameter | Description | Values | Default value
cpu | The amount of CPU resources requested for Pilot | CPU resources in millicores based on your environment’s configuration | 500m
memory | The amount of memory requested for Pilot | Available memory in bytes based on your environment’s configuration | 2048Mi
traceSampling | This value controls how often random sampling occurs. Note: increase for development or testing. | A valid number | 1.0

1.3.1.4.6. Tracing and Jaeger example
  tracing:
      enabled: false
      jaeger:
        tag: 1.13.1
        template: all-in-one
        agentStrategy: DaemonSet

Table 1.8. Tracing and Jaeger parameters

Parameter | Description | Values | Default value
enabled | This enables tracing in the environment | true/false | true
hub | The hub that the Operator uses to pull Jaeger images | A valid image repository | jaegertracing/ or registry.redhat.io/openshift-istio-tech-preview/
tag | The tag that the Operator uses to pull the Jaeger images | A valid container image tag | 1.13.1
template | The deployment template to use for Jaeger | The name of a template type | all-in-one / production-elasticsearch
agentStrategy | Deploy the Jaeger Agent to each compute node | DaemonSet if required | None

1.3.1.4.7. Kiali example
Note

Kiali supports OAuth authentication and dashboard users. By default, Kiali uses OpenShift OAuth, but you can enable a dashboard user by specifying a dashboard user and passphrase.

  kiali:
     enabled: true
     hub: kiali/
     tag: v1.0.0
     dashboard:
       user: admin
       passphrase: admin

Table 1.9. Kiali parameters

Parameter | Description | Values | Default value
enabled | This enables or disables Kiali in Service Mesh. Kiali is installed by default. If you do not want to install Kiali, change the enabled value to false. | true/false | true
hub | The hub that the Operator uses to pull Kiali images | A valid image repository | kiali/ or registry.redhat.io/openshift-istio-tech-preview/
tag | The tag that the Operator uses to pull the Kiali images | A valid container image tag | 1.0.0
user | The username to access the Kiali console. Note: This is not related to any OpenShift account. | A valid Kiali dashboard username | None
passphrase | The password used to access the Kiali console. Note: This is not related to any OpenShift account. | A valid Kiali dashboard passphrase | None

1.3.1.4.8. 3scale example
  threeScale:
      enabled: false
      PARAM_THREESCALE_LISTEN_ADDR: 3333
      PARAM_THREESCALE_LOG_LEVEL: info
      PARAM_THREESCALE_LOG_JSON: true
      PARAM_THREESCALE_LOG_GRPC: false
      PARAM_THREESCALE_REPORT_METRICS: true
      PARAM_THREESCALE_METRICS_PORT: 8080
      PARAM_THREESCALE_CACHE_TTL_SECONDS: 300
      PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180
      PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000
      PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1
      PARAM_THREESCALE_ALLOW_INSECURE_CONN: false
      PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10
      PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60

Table 1.10. 3scale parameters

Parameter | Description | Values | Default
enabled | Whether to use the 3scale adapter | true/false | false
PARAM_THREESCALE_LISTEN_ADDR | Sets the listen address for the gRPC server | Valid port number | 3333
PARAM_THREESCALE_LOG_LEVEL | Sets the minimum log output level | debug, info, warn, error, or none | info
PARAM_THREESCALE_LOG_JSON | Controls whether the log is formatted as JSON | true/false | true
PARAM_THREESCALE_LOG_GRPC | Controls whether the log contains gRPC info | true/false | false
PARAM_THREESCALE_REPORT_METRICS | Controls whether 3scale system and backend metrics are collected and reported to Prometheus | true/false | true
PARAM_THREESCALE_METRICS_PORT | Sets the port that the 3scale /metrics endpoint can be scraped from | Valid port number | 8080
PARAM_THREESCALE_CACHE_TTL_SECONDS | Time period, in seconds, to wait before purging expired items from the cache | Time period in seconds | 300
PARAM_THREESCALE_CACHE_REFRESH_SECONDS | Time period before expiry when cache elements are attempted to be refreshed | Time period in seconds | 180
PARAM_THREESCALE_CACHE_ENTRIES_MAX | Maximum number of items that can be stored in the cache at any time. Set to 0 to disable caching. | Valid number | 1000
PARAM_THREESCALE_CACHE_REFRESH_RETRIES | The number of times unreachable hosts are retried during a cache update loop | Valid number | 1
PARAM_THREESCALE_ALLOW_INSECURE_CONN | Allows you to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. | true/false | false
PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS | Sets the number of seconds to wait before terminating requests to 3scale System and Backend | Time period in seconds | 10
PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS | Sets the maximum number of seconds (+/- 10% jitter) a connection may exist before it is closed | Time period in seconds | 60

1.3.1.5. Configuring multi-tenant installations

See the Multi-tenant Red Hat OpenShift Service Mesh install chapter for instructions on installing and configuring a Service Mesh instance.

1.3.1.6. Update Mixer policy enforcement

In previous versions of Red Hat OpenShift Service Mesh, Mixer’s policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks.

  1. Run this command to check the current Mixer policy enforcement status:

    $ oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks
  2. If disablePolicyChecks: true, edit the Service Mesh ConfigMap:

    $ oc edit cm -n istio-system istio
  3. Locate disablePolicyChecks: true within the ConfigMap and change the value to false.
  4. Save the configuration and exit the editor.
  5. Re-check the Mixer policy enforcement status to ensure it is set to false.
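
    Alternatively, you can make the same change non-interactively; this one-line sketch assumes the value appears in the mesh configuration exactly as disablePolicyChecks: true:

    $ oc get cm -n istio-system istio -o yaml | sed 's/disablePolicyChecks: true/disablePolicyChecks: false/' | oc replace -f -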

1.3.1.7. Deploying the control plane

With the introduction of OpenShift Container Platform 4.1, the network capabilities of the host are now based on nftables rather than iptables. This change impacts the initialization of the Service Mesh application components. Service Mesh needs to know what host operating system OpenShift is running on to correctly initialize Service Mesh networking components.

Note

You do not need to make these changes to your custom resource if you are using OpenShift Container Platform 4.1.

If the OpenShift installation is deployed on a Red Hat Enterprise Linux (RHEL) 7 host, then the custom resource must explicitly request the RHEL 7 proxy-init container image by including the following:

Enabling the proxy-init container for RHEL 7 hosts

  apiVersion: maistra.io/v1
  kind: ServiceMeshControlPlane
  spec:
    istio:
      global:
        proxy_init:
          image: proxy-init

Use the custom resource definition file you created to deploy the Service Mesh control plane.

  1. Create a custom resource definition file named istio-installation.yaml.
  2. Run this command to deploy the control plane:

    $ oc create -n istio-system -f istio-installation.yaml
  3. Run this command to watch the progress of the pods during the installation process:

    $ oc get pods -n istio-system -w

1.4. Multi-tenant Service Mesh Installation

1.4.1. Multi-tenant Red Hat OpenShift Service Mesh installation

The Red Hat OpenShift Service Mesh Operator provides support for multi-tenant control plane installations. A multi-tenant control plane is configured so that only specified namespaces can be joined into its Service Mesh, isolating the mesh from other installations.

Note
  • You cannot use multi-tenant control plane installations in conjunction with a cluster-wide control plane installation. Red Hat OpenShift Service Mesh installations must either be multi-tenant or single, cluster-wide installations.
  • Single-tenant control plane installations are known to cause issues with OpenShift Container Platform restarts and upgrades. Multi-tenant control plane installations are the default configuration starting with Red Hat OpenShift Service Mesh 0.12.TechPreview.

1.4.1.1. Known issues with multi-tenant Red Hat OpenShift Service Mesh installations

Warning

Automatic route creation is currently incompatible with multi-tenant Service Mesh installations. Ensure that it is disabled by setting ior_enabled to false in your ServiceMeshControlPlane if you plan to attempt a multi-tenant installation.

  • MeshPolicy is still a cluster-scoped resource and applies to all control planes installed in OpenShift. This can prevent the installation of multiple control planes or cause unknown behavior if one control plane is deleted.
  • The Jaeger agent runs as a DaemonSet, therefore tracing may only be enabled for a single ServiceMeshControlPlane instance.
  • If you delete the project that contains the control plane before you delete the ServiceMeshControlPlane resource, some parts of the installation may not be removed:

    • Service accounts added to the SecurityContextConstraints may not be removed.
    • OAuthClient resources associated with Kiali may not be removed, or its list of redirectURIs may not be accurate.

1.4.1.2. Differences between multi-tenant and cluster-wide installations

The main difference between a multi-tenant installation and a cluster-wide installation is the scope of privileges used by the control plane deployments, for example, Galley and Pilot. The components no longer use cluster-scoped Role Based Access Control (RBAC) ClusterRoleBinding, but rely on namespace-scoped RBAC RoleBinding.

Every namespace in the members list will have a RoleBinding for each service account associated with a control plane deployment and each control plane deployment will only watch those member namespaces. Each member namespace has a maistra.io/member-of label added to it, where the member-of value is the namespace containing the control plane installation.

1.4.1.3. Configuring namespaces in multi-tenant installations

A multi-tenant control plane installation only affects namespaces configured as part of the Service Mesh. You must specify the namespaces associated with the Service Mesh in a ServiceMeshMemberRoll resource that is located in the same namespace as the ServiceMeshControlPlane resource and named default.

Warning

If Container Network Interface (CNI) is enabled, manual sidecar injection will work, but pods will not be able to communicate with the control plane unless they are a part of the ServiceMeshMemberRoll resource.

Note

The member namespaces are only updated if the control plane installation is successful.

You can add any number of namespaces, but a namespace can only belong to one ServiceMeshMemberRoll.

  • ServiceMeshMemberRoll resources are reconciled in response to the following events:

    • The ServiceMeshMemberRoll is created, updated, or deleted
    • The ServiceMeshControlPlane resource in the namespace containing the ServiceMeshMemberRoll is created or updated
    • A namespace listed in the ServiceMeshMemberRoll is created or deleted

The ServiceMeshMemberRoll is deleted when its corresponding ServiceMeshControlPlane resource is deleted.

Here is an example that joins the bookinfo namespace into the Service Mesh:

  1. Create a custom resource file named ServiceMeshMemberRoll in the same namespace as the ServiceMeshControlPlane custom resource.
  2. Name the resource default.
  3. Add the namespaces to the member list in the ServiceMeshMemberRoll. The bookinfo namespace is joined to the Service Mesh in this example.

      apiVersion: maistra.io/v1
      kind: ServiceMeshMemberRoll
      metadata:
        name: default
      spec:
        members:
        # a list of namespaces joined into the service mesh
        - bookinfo
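
    After the control plane processes the member roll, you can confirm that the namespace was joined by checking for the maistra.io/member-of label described earlier:

    $ oc get namespace bookinfo -o jsonpath='{.metadata.labels}'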

1.5. Post installation tasks

1.5.1. Verifying the control plane installation

Note

The name of the resource is istio-installation.

  1. Run this command to determine if the Operator finished deploying the control plane:

    $ oc get servicemeshcontrolplane/basic-install -n istio-system --template='{{range .status.conditions}}{{printf "%s=%s, reason=%s, message=%s\n\n" .type .status .reason .message}}{{end}}'

    When the control plane installation is finished, the output is similar to the following:

    Installed=True, reason=InstallSuccessful, message=%!s(<nil>)
  2. After the control plane is deployed, run this command to check the status of the pods:

    $ oc get pods -n istio-system
  3. Verify that the pods are in a state similar to this:

    Note

    The results returned when you run this verification step vary depending on your configuration including the number of nodes in the cluster, and whether you are using 3scale, Jaeger, Kiali, or Prometheus.

    NAME                                          READY     STATUS      RESTARTS   AGE
    3scale-istio-adapter-67b96f97b5-cwvgt         1/1       Running     0          99s
    grafana-75f4cbbc6-xw99s                       1/1       Running     0          54m
    istio-citadel-8489b8bb96-ttqfd                1/1       Running     0          54m
    istio-egressgateway-5ccd4d5ddd-wtp2h          1/1       Running     0          52m
    istio-galley-58ff8db57c-jrpkz                 1/1       Running     0          54m
    istio-ingressgateway-698674848f-bk57s         1/1       Running     0          52m
    istio-node-2d764                              1/1       Running     0          54m
    istio-node-4h926                              1/1       Running     0          54m
    istio-node-6qxcj                              1/1       Running     0          54m
    istio-node-8fxqz                              1/1       Running     0          54m
    istio-node-gzg5v                              1/1       Running     0          54m
    istio-node-vxx5p                              1/1       Running     0          54m
    istio-pilot-764966cf69-9nlhp                  2/2       Running     0          19m
    istio-policy-7c856f7d5f-4fjk4                 2/2       Running     2          53m
    istio-sidecar-injector-757b8ccdbf-znggc       1/1       Running     0          49m
    istio-telemetry-65d8b47c98-jrp9h              2/2       Running     2          53m
    jaeger-f775b79f8-cmbb2                        2/2       Running     0          54m
    kiali-7646d796cd-kfx29                        1/1       Running     0          20m
    prometheus-56cb9c859b-dlqmn                   2/2       Running     0          54m

1.6. Application requirements

1.6.1. Requirements for deploying applications on Red Hat OpenShift Service Mesh

When you deploy an application into the Service Mesh, there are several differences between the behavior of the upstream community version of Istio and the behavior within a Red Hat OpenShift Service Mesh installation.

1.6.1.1. Configuring security constraints for application service accounts

Note

The relaxing of security constraints is only necessary during the Red Hat OpenShift Service Mesh Technology Preview.

When you deploy an application into a Service Mesh running in an OpenShift environment, it is necessary to relax the security constraints placed on the application by its service account to ensure the application can function correctly. Each service account must be granted permissions with the anyuid and privileged Security Context Constraints (SCC) to enable the sidecars to run correctly.

The privileged SCC is required to ensure that changes to the pod’s networking configuration are applied successfully by the istio-init initialization container, and the anyuid SCC is required to enable the sidecar container to run with its required user ID of 1337.

To configure the correct permissions, it is necessary to identify the service accounts being used by your application’s pods. For most applications, this is the default service account; however, your Deployment or DeploymentConfig may override this within the pod specification by providing the serviceAccountName.

For each identified service account, you must update the cluster configuration to ensure that the service accounts are granted access to the anyuid and privileged SCCs by executing the following commands from an account with cluster admin privileges.

  1. Identify the service account(s) that require SCC changes.

    Note

    Replace <service account> and <namespace> with values specific to your application.

  2. Run this command for each service account that requires the anyuid SCC for its associated sidecar container:

    $ oc adm policy add-scc-to-user anyuid -z <service account> -n <namespace>
  3. Run this command for each service account that requires the privileged SCC to allow successful updates to its pod’s networking configuration:

    $ oc adm policy add-scc-to-user privileged -z <service account> -n <namespace>
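
    You can confirm that the grants took effect by inspecting the users list of each SCC; the service account appears as system:serviceaccount:<namespace>:<service account>:

    $ oc get scc anyuid -o jsonpath='{.users}'
    $ oc get scc privileged -o jsonpath='{.users}'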

1.6.1.2. Updating the master configuration

Note

Master configuration updates are not necessary if you are running OpenShift Container Platform 4.1.

Service Mesh relies on the existence of a proxy sidecar within the application’s pod to provide service mesh capabilities to the application. You can enable automatic sidecar injection or manage it manually. We recommend automatic injection by using the annotation, with no need to label namespaces, to ensure your application contains the appropriate configuration for your service mesh upon deployment. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods.

Note

The upstream version of Istio injects the sidecar by default if you have labeled the namespace. You are not required to label the namespace with Red Hat OpenShift Service Mesh. However, Red Hat OpenShift Service Mesh requires you to opt in to having the sidecar automatically injected to a deployment. This avoids injecting a sidecar where it is not wanted (for example, build or deploy pods). The webhook checks the configuration of pods deploying into all namespaces to see if they are opting in to injection with the appropriate annotation.

To enable the automatic injection of the Service Mesh sidecar you must first modify the master configuration on each master to include support for webhooks and signing of Certificate Signing Requests (CSRs).

Make the following changes on each master within your OpenShift Container Platform installation:

  1. Change to the directory containing the master configuration file (for example, /etc/origin/master/master-config.yaml).
  2. Create a file named master-config.patch with the following contents:

    admissionConfig:
      pluginConfig:
        MutatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
        ValidatingAdmissionWebhook:
          configuration:
            apiVersion: apiserver.config.k8s.io/v1alpha1
            kubeConfigFile: /dev/null
            kind: WebhookAdmission
  3. In the same directory, issue the following commands to apply the patch to the master-config.yaml file:

    $ cp -p master-config.yaml master-config.yaml.prepatch
    $ oc ex config patch master-config.yaml.prepatch -p "$(cat master-config.patch)" > master-config.yaml
    $ /usr/local/bin/master-restart api && /usr/local/bin/master-restart controllers
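
    You can confirm that the patch was applied by checking the resulting file for the webhook admission plug-ins:

    $ grep -A 6 MutatingAdmissionWebhook master-config.yaml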
1.6.1.2.1. Automatic sidecar injection

When deploying an application into the Red Hat OpenShift Service Mesh you must opt in to injection by specifying the sidecar.istio.io/inject annotation with a value of true. Opting in ensures the sidecar injection does not interfere with other OpenShift features such as builder pods used by numerous frameworks within the OpenShift ecosystem.

  1. Open the application’s configuration yaml file in an editor.
  2. Add sidecar.istio.io/inject to the configuration yaml with a value of true as illustrated here:

    Sleep test application example

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
          labels:
            app: sleep
        spec:
          containers:
          - name: sleep
            image: tutum/curl
            command: ["/bin/sleep","infinity"]
            imagePullPolicy: IfNotPresent

  3. Save the configuration file.
1.6.1.2.2. Manual sidecar injection

Manual injection of the sidecar is supported by using the upstream istioctl command.

Note

When you use manual sidecar injection, ensure you have access to a running cluster so the correct configuration can be obtained from the istio-sidecar-injector configmap within the istio-system namespace.

To obtain the executable and deploy an application with manual injection:

  1. Download the appropriate installation for your OS.
  2. Add the istioctl binary to the bin directory in your PATH.
  3. Run this command to inject the sidecar into your application and pipe the configuration to the oc command to create deployments:

    $ istioctl kube-inject -f app.yaml | oc create -f -
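
    If you prefer to review the injected configuration before creating any objects, you can write the output to a file first and inspect it:

    $ istioctl kube-inject -f app.yaml > app-injected.yaml
    $ oc create -f app-injected.yaml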

1.7. Tutorials

There are several tutorials to help you learn more about the Service Mesh.

1.7.1. Bookinfo tutorial

The upstream Istio project has an example tutorial called bookinfo, which is composed of four separate microservices used to demonstrate various Istio features. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages and other information), and book reviews.

The Bookinfo application consists of four separate microservices:

  • The productpage microservice calls the details and reviews microservices to populate the page.
  • The details microservice contains book information.
  • The reviews microservice contains book reviews. It also calls the ratings microservice.
  • The ratings microservice contains book ranking information that accompanies a book review.

There are three versions of the reviews microservice:

  • Version v1 does not call the ratings service.
  • Version v2 calls the ratings service and displays each rating as one to five black stars.
  • Version v3 calls the ratings service and displays each rating as one to five red stars.

1.7.1.1. Installing the Bookinfo application

The following steps describe deploying and running the Bookinfo tutorial on OpenShift Container Platform with Service Mesh 0.12.TechPreview.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
Note

Red Hat OpenShift Service Mesh implements auto-injection differently than the upstream Istio project, therefore this procedure uses a version of the bookinfo.yaml file annotated to enable automatic injection of the Istio sidecar.

  1. Create a project for the Bookinfo application.

    $ oc new-project myproject
  2. Update the Security Context Constraints (SCC) by adding the service account used by Bookinfo to the anyuid and privileged SCCs in the "myproject" namespace:

    $ oc adm policy add-scc-to-user anyuid -z default -n myproject
    $ oc adm policy add-scc-to-user privileged -z default -n myproject
  3. Deploy the Bookinfo application in the "myproject" namespace by applying the bookinfo.yaml file:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo.yaml
  4. Create the ingress gateway for Bookinfo by applying the bookinfo-gateway.yaml file:

      $ oc apply -n myproject -f https://raw.githubusercontent.com/Maistra/bookinfo/master/bookinfo-gateway.yaml
  5. Set the value for the GATEWAY_URL parameter:

    $ export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}')

1.7.1.2. Verifying the Bookinfo installation

To confirm that the application is successfully deployed, run this command:

$ curl -o /dev/null -s -w "%{http_code}\n" http://$GATEWAY_URL/productpage

Alternatively, you can open http://$GATEWAY_URL/productpage in your browser.

1.7.1.3. Add default destination rules

  1. If you did not enable mutual TLS:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/destination-rule-all.yaml
  2. If you enabled mutual TLS:

    $ oc apply -n myproject -f https://raw.githubusercontent.com/istio/istio/release-1.1/samples/bookinfo/networking/destination-rule-all-mtls.yaml
  3. To list all available destination rules:

    $ oc get destinationrules -o yaml

1.7.1.4. Removing the Bookinfo application

When you finish with the Bookinfo application, you can remove it by running the cleanup script.

Tip

Several of the other tutorials in this document also use the Bookinfo application. Do not run the cleanup script if you plan to continue with the other tutorials.

  1. Download the cleanup script:

    $ curl -o cleanup.sh https://raw.githubusercontent.com/Maistra/bookinfo/master/cleanup.sh && chmod +x ./cleanup.sh
  2. Delete the Bookinfo virtualservice, gateway, and terminate the pods by running the cleanup script:

    $ ./cleanup.sh
    namespace ? [default] myproject
  3. Confirm shutdown by running these commands:

    $ oc get virtualservices -n myproject
    No resources found.
    $ oc get gateway -n myproject
    No resources found.
    $ oc get pods -n myproject
    No resources found.

1.7.2. Distributed tracing tutorial

Jaeger is an open source distributed tracing system. You use Jaeger for monitoring and troubleshooting microservices-based distributed systems. Using Jaeger you can perform a trace, which follows the path of a request through various microservices that make up an application. Jaeger is installed by default as part of the Service Mesh.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can perform a trace using the Jaeger component of Red Hat OpenShift Service Mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
  • Bookinfo demonstration application installed.

1.7.2.1. Generating traces and analyzing trace data

  1. After you have deployed the Bookinfo application, generate some activity by accessing http://$GATEWAY_URL/productpage and refreshing the page a few times.
  2. A route to access the Jaeger dashboard already exists. Query for details of the route:

    $ export JAEGER_URL=$(oc get route -n istio-system jaeger-query -o jsonpath='{.spec.host}')
  3. Launch a browser and navigate to https://${JAEGER_URL}.
  4. In the left pane of the Jaeger dashboard, from the Service menu, select "productpage" and click the Find Traces button at the bottom of the pane. A list of traces is displayed, as shown in the following image:

    [Image: Jaeger dashboard main screen showing the list of traces]
  5. Click one of the traces in the list to open a detailed view of that trace. If you click on the top (most recent) trace, you see the details that correspond to the latest refresh of the /productpage.

    [Image: Jaeger trace detail showing nested spans]

    The trace in the previous figure consists of a few nested spans, each corresponding to a Bookinfo service call, all performed in response to a /productpage request. Overall processing time was 2.62s, with the details service taking 3.56ms, the reviews service taking 2.6s, and the ratings service taking 5.32ms. Each of the calls to remote services is represented by a client-side and server-side span. For example, the details client-side span is labeled productpage details.myproject.svc.cluster.local:9080. The span nested underneath it, labeled details details.myproject.svc.cluster.local:9080, corresponds to the server-side processing of the request. The trace also shows calls to istio-policy, which reflect authorization checks made by Istio.

1.7.2.2. Removing the tracing tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Tracing tutorial.

1.7.3. Prometheus tutorial

Prometheus is an open source system and service monitoring toolkit. Prometheus collects metrics from configured targets at specified intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Grafana or other API consumers can be used to visualize the collected data.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can query for metrics using Prometheus.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
  • Bookinfo demonstration application installed.

1.7.3.1. Querying metrics

  1. Verify that the prometheus service is running in your cluster by executing the following command:

    $ oc get svc prometheus -n istio-system

    You will see something like the following:

    NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    prometheus   10.59.241.54   <none>        9090/TCP   2m
  2. Generate network traffic by accessing the Bookinfo application:

    $ curl -o /dev/null http://$GATEWAY_URL/productpage
  3. A route to access the Prometheus user interface already exists. Query for details of the route:

    $ export PROMETHEUS_URL=$(oc get route -n istio-system prometheus -o jsonpath='{.spec.host}')
  4. Launch a browser and navigate to http://${PROMETHEUS_URL}. You will see the Prometheus home screen, similar to the following figure:

    Figure: Prometheus home screen
  5. In the Expression field, enter istio_request_duration_seconds_count, and click the Execute button. You will see a screen similar to the following figure:

    Figure: Prometheus metrics
  6. You can narrow down queries by using selectors. For example, istio_request_duration_seconds_count{destination_workload="reviews-v2"} shows only counters with the matching destination_workload label. For more information about using queries, see the Prometheus documentation. A sketch for running the same queries from the command line follows this procedure.
  7. To list all available Prometheus metrics, run the following command:

    $ oc get prometheus -n istio-system -o jsonpath='{.items[*].spec.metrics[*].name}'
    requests_total request_duration_seconds request_bytes response_bytes tcp_sent_bytes_total tcp_received_bytes_total

Note that returned metric names must be prepended with istio_ when used in queries, for example, requests_total is istio_requests_total.
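
If you prefer to run queries from the command line rather than the Prometheus console, the following sketch queries the standard Prometheus HTTP API through the route, assuming PROMETHEUS_URL is still exported from step 3:

$ curl -sG "http://${PROMETHEUS_URL}/api/v1/query" \
  --data-urlencode 'query=sum(rate(istio_requests_total{destination_workload="reviews-v2"}[5m]))'

The response is JSON; the same expression can also be pasted into the Expression field of the console.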

1.7.3.2. Removing the Prometheus tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Prometheus tutorial.

1.7.4. Kiali tutorial

Kiali works with Istio to visualize your Service Mesh topology to provide visibility into features like circuit breakers, request rates, and more. Kiali offers insights about the mesh components at different levels, from abstract Applications to Services and Workloads. Kiali provides an interactive graph view of your namespace in real time. It can display the interactions at several levels, including Applications, Versions, and Workloads, with contextual information and charts on the selected graph node or edge.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can use the Kiali console to view the topology and health of your service mesh.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
  • Kiali parameters specified in the custom resource file.
  • Bookinfo demonstration application installed.

1.7.4.1. Accessing the Kiali console

  1. A route to access the Kiali console already exists. Run the following command to obtain the route and Kiali URL:

    $ oc get routes

    While your exact environment may be different, you should see a result that’s something like this:

    NAME                   HOST/PORT                                                PATH      SERVICES               PORT              TERMINATION   WILDCARD
    grafana                grafana-istio-system.127.0.0.1.nip.io                          grafana                http                            None
    istio-ingress          istio-ingress-istio-system.127.0.0.1.nip.io                    istio-ingress          http                            None
    istio-ingressgateway   istio-ingressgateway-istio-system.127.0.0.1.nip.io             istio-ingressgateway   http                            None
    jaeger-query           jaeger-query-istio-system.127.0.0.1.nip.io                     jaeger-query           jaeger-query      edge          None
    kiali                  kiali-istio-system.127.0.0.1.nip.io                            kiali                  <all>                           None
    prometheus             prometheus-istio-system.127.0.0.1.nip.io                       prometheus             http-prometheus                 None
    tracing                tracing-istio-system.127.0.0.1.nip.io                          tracing                tracing           edge          None
  2. Launch a browser and navigate to https://${KIALI_URL} (in the output above, this is kiali-istio-system.127.0.0.1.nip.io). A sketch for exporting KIALI_URL follows this procedure. You should see the Kiali console login screen.
  3. Log in to the Kiali console using the user name and password that you specified in the custom resource file during installation.
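
To capture the Kiali host in the KIALI_URL environment variable, mirroring the pattern used for the Jaeger and Prometheus routes earlier in this guide, run:

$ export KIALI_URL=$(oc get route -n istio-system kiali -o jsonpath='{.spec.host}')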

1.7.4.2. Overview page

When you first log in you see the Overview page, which provides a quick overview of the health of the various namespaces that are part of your Service Mesh. You can choose to view the health of your applications, workloads, or services.

Figure: Overview page

1.7.4.3. Graph page

The Graph page shows a graph of microservices, which are connected by the requests going through them. On this page, you can see how applications, workloads, or services interact with each other.

  1. Click Graph in the left navigation.

    Figure: Kiali graph

  2. If necessary, select bookinfo from the Namespace menu. The graph displays the applications in the Bookinfo application.
  3. Click the question mark (?) under the Namespace menu to take the Graph Help Tour.
  4. Click Done to close the Help Tour.
  5. Click Legend in the lower left corner. Kiali displays the graph legend.

    Figure: Kiali legend

  6. Close the Graph Legend.
  7. Hover over the productpage node. Note how the graph highlights only the incoming and outgoing traffic from the node.
  8. Click the productpage node. Note how the details on the right side of the page change to display the productpage details.

1.7.4.4. Applications page

The Applications page lets you search for and view applications, their health, and other details.

  1. Click Applications in the left navigation.
  2. If necessary, select bookinfo from the Namespace menu. The page displays the applications in the selected namespace and their health.
  3. Hover over the Health icon to view additional health details.
  4. Click the reviews service to view the details for that application.

    Figure: Kiali Applications details

  5. On the Applications Details page you can view more detailed health information, and drill down for further details about the three versions of the reviews service.
  6. From the Application Details page you can also click tabs to view Traffic and Inbound and Outbound Metrics for the application.

1.7.4.5. Workloads page

The Workloads page lets you search for and view workloads, their health, and other details.

  1. Click Workloads in the left navigation.
  2. If necessary, select bookinfo from the Namespace menu. The page displays the workloads in the selected namespace, their health, and labels.
  3. Click the reviews-v1 workload to view the details for that workload.
  4. On the Workload Details page you can view an overview of pods and services associated with the workload.

    Figure: Kiali Workloads details

  5. From the Workload Details page you can also click tabs to view Traffic, Logs, and Inbound and Outbound Metrics for the workload.

1.7.4.6. Services page

The Services page lets you search for and view services, their health, and other details.

  1. Click Services in the left navigation.
  2. If necessary, select bookinfo from the Namespace menu. The page displays a listing of all the services that are running in the selected namespace and additional information about them, such as health status.
  3. Hover over the health icon for any of the services to view health information about the service. A service is considered healthy when it is online and responding to requests without errors.
  4. Click the Reviews service to view its details. Note that there are three different versions of this service.

    Figure: Kiali Services details

  5. On the Services Details page you can view an overview of workloads, virtual services, and destination rules associated with the service.
  6. From the Services Details page you can also click tabs to view Traffic, Inbound Metrics, and Traces for the service.
  7. Click the Actions menu. From here you can perform the following actions:

    • Create Weighted Routing
    • Create Matching Routing
    • Suspend Traffic
    • Delete ALL Traffic Routing
  8. Click the name of one of the services to view additional details about that specific version of the service.

1.7.4.7. Istio Config page

The Istio Config page lets you view all of the configurations currently applied to your Service Mesh, such as Circuit Breakers, Destination Rules, Fault Injection, Gateways, Routes, Route Rules, and Virtual Services.

  1. Click Istio Config in the left navigation.
  2. If necessary, select bookinfo from the Namespace menu. The page displays a listing of configurations running in the selected namespace and validation status.

    Figure: Istio configuration

  3. Click one of the configurations to view additional information about the configuration file.

    Figure: Istio configuration YAML

1.7.4.8. Distributed Tracing page

Click the Distributed Tracing link in the left navigation. On this page you can see tracing data as provided by Jaeger.

1.7.4.9. Removing the Kiali tutorial

The procedure for removing the Kiali tutorial is the same as removing the Bookinfo tutorial.

1.7.5. Grafana tutorial

Grafana is an open source tool for creating monitoring, metric analytics, and alerting dashboards. You can use Grafana to query, visualize, and alert on your metrics no matter where they are stored: Graphite, Elasticsearch, OpenTSDB, Prometheus, or InfluxDB. Istio includes monitoring via Prometheus and Grafana.

This tutorial uses Service Mesh and the bookinfo tutorial to demonstrate how you can use the Istio Dashboard to monitor mesh traffic. As part of this task, you use the web-based Grafana interface to view service mesh traffic data.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
  • Bookinfo demonstration application installed.

1.7.5.1. Accessing the Grafana dashboard

  1. A route to access the Grafana dashboard already exists. Query for details of the route:

    $ export GRAFANA_URL=$(oc get route -n istio-system grafana -o jsonpath='{.spec.host}')
  2. Launch a browser and navigate to http://${GRAFANA_URL}. You see Grafana’s home screen, as shown in the following figure:

    Figure: Grafana home screen
  3. From the menu in the upper-left corner, select Istio Mesh Dashboard to see Istio mesh metrics.

    Figure: Grafana mesh dashboard, no traffic
  4. Generate some traffic by accessing the Bookinfo application (to keep the dashboards populated over time, see the loop sketch after this procedure):

    $ curl -o /dev/null http://$GATEWAY_URL/productpage

    The dashboard reflects the traffic through the mesh, similar to the following figure:

    Figure: Grafana mesh dashboard with traffic
  5. To see detailed metrics for a service, click on a service name in the Service column. The service dashboard resembles the following figure:

    Figure: Grafana services dashboard

    Note that the TCP Bandwidth metrics are empty, because Bookinfo uses only HTTP-based services. The dashboard also displays metrics for client workloads (workloads that call this service) and for service workloads (workloads that process requests for this service). You can switch to a different service, or filter metrics by client and service workloads, by using the menus at the top of the dashboard.

  6. To switch to the workloads dashboard, click Istio Workload Dashboard on the menu in the upper-left corner. You will see a screen resembling the following figure:

    Figure: Grafana workloads dashboard

    This dashboard shows workload metrics and metrics for client (inbound) and service (outbound) workloads. You can switch to a different workload, or filter metrics by inbound or outbound workloads, by using the menus at the top of the dashboard.
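
If the dashboards stop updating, traffic through the mesh has probably stopped. The following sketch generates continuous traffic, assuming GATEWAY_URL is still exported from the Bookinfo tutorial; stop it with Ctrl+C when you are done:

$ while true; do curl -s -o /dev/null "http://${GATEWAY_URL}/productpage"; sleep 1; done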

1.7.5.2. Removing the Grafana tutorial

Follow the procedure for removing the Bookinfo tutorial to remove the Grafana tutorial.

1.7.6. Red Hat OpenShift Application Runtime Missions

In addition to the Bookinfo-based tutorials, there are four Service Mesh-specific tutorials (missions) and sample applications (boosters) that you can use with the Fabric8 integrated development platform launcher to explore some of the Istio features. Each of these boosters and missions is available for four different application runtimes. For more information about Fabric8, see the Fabric8 documentation.

Prerequisites:

  • OpenShift Container Platform 3.11 or higher installed.
  • Red Hat OpenShift Service Mesh 0.12.TechPreview installed.
  • Launcher parameters specified in the custom resource file.

Table 1.11. RHOAR Tutorials

Each entry lists the runtime, the mission, and a description of the scenario:

  • Spring Boot, Istio Circuit Breaker mission: This scenario showcases how Istio can be used to implement the Circuit Breaker architectural pattern.
  • Spring Boot, Istio Distributed Tracing mission: This scenario showcases the interaction of the Distributed Tracing capabilities of Jaeger and properly instrumented microservices running in the Service Mesh.
  • Spring Boot, Security mission: This scenario showcases Istio security concepts, whereby access to services is controlled by the platform rather than independently by constituent applications.
  • Spring Boot, Routing mission: This scenario showcases Istio’s dynamic traffic routing capabilities with a set of example applications designed to simulate a real-world rollout scenario.
  • Thorntail (formerly WildFly Swarm), Istio Circuit Breaker mission: This scenario showcases how Istio can be used to implement the Circuit Breaker architectural pattern.
  • Thorntail, Istio Distributed Tracing mission: This scenario showcases the interaction of the Distributed Tracing capabilities of Jaeger and properly instrumented microservices running in the Service Mesh.
  • Thorntail, Security mission: This scenario showcases Istio security concepts, whereby access to services is controlled by the platform rather than independently by constituent applications.
  • Thorntail, Routing mission: This scenario showcases Istio’s dynamic traffic routing capabilities with a set of example applications designed to simulate a real-world rollout scenario.
  • Vert.x, Istio Circuit Breaker mission: This scenario showcases how Istio can be used to implement the Circuit Breaker pattern with minimally instrumented Eclipse Vert.x microservices.
  • Vert.x, Istio Distributed Tracing mission: This scenario showcases the interaction of the Distributed Tracing capabilities of Jaeger and a minimally instrumented set of Eclipse Vert.x applications.
  • Vert.x, Security mission: This scenario showcases Istio Transport Layer Security (TLS) and Access Control Lists (ACL) via a set of Eclipse Vert.x applications.
  • Vert.x, Routing mission: This scenario showcases Istio’s dynamic traffic routing capabilities with a minimal set of example applications.
  • Node.js, Istio Circuit Breaker mission: This scenario showcases how Istio can be used to implement the Circuit Breaker architectural pattern in Node.js applications.
  • Node.js, Istio Distributed Tracing mission: This scenario showcases the interaction of the Distributed Tracing capabilities of Jaeger and properly instrumented Node.js applications running in the Service Mesh.
  • Node.js, Security mission: This scenario showcases Istio Transport Layer Security (TLS) and Access Control Lists (ACL) with Node.js applications.
  • Node.js, Routing mission: This scenario showcases Istio’s dynamic traffic routing capabilities with a set of example applications designed to simulate a real-world rollout scenario.

1.8. Removing Red Hat OpenShift Service Mesh

1.8.1. Removing Red Hat OpenShift Service Mesh

Follow this process to remove the Service Mesh from an existing OpenShift Container Platform instance. Log in as a cluster admin and run these commands from any host with access to the cluster.

1.8.1.1. Remove the control plane

Note

Removing the servicemeshcontrolplane triggers the Service Mesh Operator to uninstall the resources it created.

Note

You can use the shortened smcp alias in place of servicemeshcontrolplanes. A one-line removal sketch using this alias follows the procedure below.

  1. Retrieve the name of the installed custom resource with this command:

    $ oc get servicemeshcontrolplanes -n istio-system
  2. Replace <name_of_custom_resource> with the output from the previous command, and run this command to remove the custom resource:

    $ oc delete servicemeshcontrolplanes -n istio-system <name_of_custom_resource>
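
As a convenience, the two steps above can be combined using the smcp alias. This sketch assumes that you want to remove every ServiceMeshControlPlane resource in the istio-system project:

$ oc delete smcp --all -n istio-system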

1.8.1.2. Remove the Operators

You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Jaeger Operator and the Kiali Operator.

1.8.1.2.1. Remove the Red Hat OpenShift Service Mesh Operator
  • Run this command to remove the Red Hat OpenShift Service Mesh Operator:

    $ oc delete -n istio-operator -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.12/deploy/servicemesh-operator.yaml
1.8.1.2.2. Remove the Jaeger Operator
  • Run the following commands to remove the Jaeger Operator:

    $ oc delete -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/operator.yaml
    $ oc delete -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/role_binding.yaml
    $ oc delete -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/role.yaml
    $ oc delete -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/service_account.yaml
    $ oc delete -n observability -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/v1.13.1/deploy/crds/jaegertracing_v1_jaeger_crd.yaml
1.8.1.2.3. Remove the Kiali Operator
  • Run the following command to remove the Kiali Operator:

    $ bash <(curl -L https://git.io/getLatestKialiOperator) --uninstall-mode true --operator-watch-namespace '**'

1.8.1.3. Remove the projects

  1. Run this command to remove the istio-system project:

    $ oc delete project istio-system
  2. Run this command to remove the istio-operator project:

    $ oc delete project istio-operator
  3. Run this command to remove the kiali-operator project:

    $ oc delete project kiali-operator
  4. Run this command to remove the observability project:

    $ oc delete project observability

1.9. Upgrading Red Hat OpenShift Service Mesh

1.9.1. Upgrading Red Hat OpenShift Service Mesh

While Red Hat OpenShift Service Mesh is a Technology Preview, there is no upgrade path. If you have an existing Service Mesh installation (for example, the developer preview), you must remove that installation before installing a new version of Red Hat OpenShift Service Mesh.

1.10. 3scale Istio Adapter

The 3scale Istio Adapter allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution.

Prerequisites:

Note

To configure the 3scale Istio Adapter, refer to Installing Service Mesh for instructions on adding adapter parameters to the custom resource file.

1.10.1. Integrate the adapter with Red Hat OpenShift Service Mesh

You can use these examples to configure requests to your services using the 3scale Istio Adapter.

Note

Pay particular attention to the kind: handler resource. You must update this with your 3scale credentials and the service ID of the API you want to manage.

  1. Modify the handler configuration with your 3scale configuration.

      apiVersion: "config.istio.io/v1alpha2"
      kind: handler
      metadata:
       name: threescale
      spec:
       adapter: threescale
       params:
         service_id: "<SERVICE_ID>"
         system_url: "https://<organization>-admin.3scale.net/"
         access_token: "<ACCESS_TOKEN>"
       connection:
         address: "threescale-istio-adapter:3333"
  2. Modify the rule configuration with your 3scale configuration, then create both resources in the control plane namespace, as sketched after this step.

      # rule to dispatch to handler threescale.handler
      apiVersion: "config.istio.io/v1alpha2"
      kind: rule
      metadata:
        name: threescale
      spec:
        match: destination.labels["service-mesh.3scale.net"] == "true"
        actions:
          - handler: threescale.handler
            instances:
              - threescale-authorization.instance
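
To create the modified resources, the following sketch assumes you saved them to the hypothetical files threescale-handler.yaml and threescale-rule.yaml and that the adapter runs in the istio-system project:

$ oc create -f threescale-handler.yaml -n istio-system
$ oc create -f threescale-rule.yaml -n istio-system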

1.10.1.1. Generating custom resources

The adapter includes a tool that allows you to generate the handler, instance, and rule custom resources.

Table 1.12. Usage

Each entry lists the option, its description, whether it is required, and its default value, if any:

  • -h, --help: Produces help output for available options. Required: No.
  • --name: Unique name for this URL, token pair. Required: Yes.
  • -n, --namespace: Namespace to generate templates in. Required: No. Default: istio-system.
  • -t, --token: 3scale access token. Required: Yes.
  • -u, --url: 3scale Admin Portal URL. Required: Yes.
  • -s, --service: 3scale API/Service ID. Required: Yes.
  • --auth: 3scale authentication pattern to specify (1=API Key, 2=App ID/App Key, 3=OIDC). Required: No. Default: Hybrid.
  • -o, --output: File to save produced manifests to. Required: No. Default: standard output.
  • -v: Outputs the CLI version and exits immediately. Required: No.

1.10.1.1.1. Generate templates from URL examples

This example generates generic templates, allowing the token and URL pair to be shared by multiple services as a single handler:

$ 3scale-gen-config --name=admin-credentials --url="https://<organization>-admin.3scale.net:443" --token="[redacted]"

This example generates the templates with the service ID embedded in the handler:

$ 3scale-gen-config --url="https://<organization>-admin.3scale.net" --name="my-unique-id" --service="123456789" --token="[redacted]"

1.10.1.2. Generating manifests from a deployed adapter

To generate these manifests from a deployed adapter, assuming it is deployed in the istio-system namespace, run the following:

$ export NS="istio-system" URL="https://<replaceme>-admin.3scale.net:443" NAME="name" TOKEN="token"
oc exec -n ${NS} $(oc get po -n ${NS} -o jsonpath='{.items[?(@.metadata.labels.app=="3scale-istio-adapter")].metadata.name}') \
-it -- ./3scale-config-gen \
--url ${URL} --name ${NAME} --token ${TOKEN} -n ${NS}

This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command.

When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. This can be provided in one of two ways:

  • As a label on the workload (recommended)
  • Hardcoded in the handler as service_id

Update the workload with the required labels:

Note

You only need to update the service ID in the example below if it was not previously embedded in the handler. The setting in the handler takes precedence.

$ export CREDENTIALS_NAME="replace-me"
export SERVICE_ID="replace-me"
export DEPLOYMENT="replace-me"
patch="$(oc get deployment "${DEPLOYMENT}" --template='{"spec":{"template":{"metadata":{"labels":{ {{ range $k,$v := .spec.template.metadata.labels }}"{{ $k }}":"{{ $v }}",{{ end }}"service-mesh.3scale.net/service-id":"'"${SERVICE_ID}"'","service-mesh.3scale.net/credentials":"'"${CREDENTIALS_NAME}"'"}}}}}' )"
oc patch deployment "${DEPLOYMENT}" --patch "${patch}"
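
Because a strategic merge patch merges the label map rather than replacing it, a simpler alternative to the template-based script above is possible. This sketch uses the same CREDENTIALS_NAME, SERVICE_ID, and DEPLOYMENT variables:

$ oc patch deployment "${DEPLOYMENT}" --patch \
  '{"spec":{"template":{"metadata":{"labels":{"service-mesh.3scale.net/service-id":"'"${SERVICE_ID}"'","service-mesh.3scale.net/credentials":"'"${CREDENTIALS_NAME}"'"}}}}}'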

1.10.1.3. Routing service traffic through the adapter

To drive traffic for your service through the 3scale adapter, you need to match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the kind: rule resource.

Integration of a service requires that the above label be added to PodTemplateSpec on the Deployment of the target workload. The value, threescale, refers to the name of the generated handler. This handler will store the access token required to call 3scale.

Add the following label to the workload to pass the service ID to the adapter via the instance at request time:

destination.labels["service-mesh.3scale.net/service-id"] == "replace-me"

Your 3scale administrator should be able to provide you with both the required credentials name and the service ID.

1.10.2. Configure the integration settings in 3scale

Note

For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program.

1.10.2.1. Integration settings

  1. Navigate to [your_API_name] > Integration > Configuration.
  2. On the Integration page, click edit integration settings in the top right corner.
  3. Under the Service Mesh heading, click the Istio option.
  4. Scroll to the bottom of the page and click Update Service.

1.10.3. Caching behavior

Responses from 3scale System APIs are cached by default within the adapter. Entries are purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries is attempted a number of seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value.

Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value.

Because of the refresh process, cached values whose hosts become unreachable are retried before eventually being purged once past their expiry.

1.10.4. Authenticating requests

This Technology Preview release supports the following authentication methods:

  • Standard API Keys: single randomized strings or hashes acting as an identifier and a secret token.
  • Application identifier and key pairs: immutable identifier and mutable secret key strings, referred to as AppID.
  • OpenID authentication method: client ID string parsed from the JSON Web Token, referred to as OpenID Connect (OIDC).

1.10.4.1. Applying authentication patterns

Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. You can accept the authentication credentials from:

  • Request headers
  • Request parameters
  • Both request headers and query parameters

Note

When specifying values from headers they must be lower case. For example, if you want to send a header as X-User-Key, this must be referenced in the configuration as request.headers["x-user-key"].

1.10.4.1.1. API key authentication method

Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option.

In this example Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the x-user-key header.

API key authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["x-user-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named “key”, change request.query_params["user_key"] to request.query_params["key"].
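
With the example instance above in place, a request can present the key in either location. A request sketch, assuming GATEWAY_URL points at your mesh gateway and <API_KEY> is a hypothetical key issued by 3scale:

$ curl "http://${GATEWAY_URL}/productpage?user_key=<API_KEY>"
$ curl -H "x-user-key: <API_KEY>" "http://${GATEWAY_URL}/productpage"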

1.10.4.1.2. Application ID and application key pair authentication method

Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option.

In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed.

Application ID and application key pair authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
  namespace: istio-system
spec:
  template: authorization
  params:
    subject:
        app_id: request.query_params["app_id"] | request.headers["x-app-id"] | ""
        app_key: request.query_params["app_key"] | request.headers["x-app-key"] | ""
    action:
      path: request.url_path
      method: request.method | "get"

If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification, change request.query_params["app_id"] to request.query_params["identification"].
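
A matching request sketch for this method, passing the pair in request headers, assuming GATEWAY_URL points at your mesh gateway and the <APP_ID> and <APP_KEY> values are hypothetical credentials issued by 3scale:

$ curl -H "x-app-id: <APP_ID>" -H "x-app-key: <APP_KEY>" "http://${GATEWAY_URL}/productpage"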

1.10.4.1.3. OpenID authentication method

To use the OpenID Connect (OIDC) authentication method, use the properties value on the subject field to set client_id, and optionally app_key.

You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp. You can modify this as needed.

OpenID authentication method example

  apiVersion: "config.istio.io/v1alpha2"
  kind: instance
  metadata:
    name: threescale-authorization
  spec:
    template: threescale-authorization
    params:
      Subject:
  properties:
          app_key: request.query_params["app_key"] | request.headers["x-app-key"] | ""
          client_id: request.auth.claims["azp"] | ""
      action:
        path: request.url_path
        method: request.method | "get"
          service: destination.labels["service-mesh.3scale.net/service-id"] | ""

For this integration to work correctly, OIDC integration must still be configured in 3scale so that the client is created in the identity provider (IdP). You should create end-user authentication for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request.

In the sample Policy defined below, replace issuer and jwksUri as appropriate.

OpenID Policy example

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: jwt-example
  namespace: bookinfo
spec:
  origins:
    - jwt:
        issuer: http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak
        jwksUri: http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs
  principalBinding: USE_ORIGIN
  targets:
    - name: productpage
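
Once the Policy is in place, requests must carry a JWT issued by the configured identity provider. A minimal request sketch, assuming you have already obtained a token from the Keycloak realm above and that GATEWAY_URL points at your mesh gateway:

$ curl -H "Authorization: Bearer <JWT>" "http://${GATEWAY_URL}/productpage"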

1.10.4.1.4. Hybrid authentication method

You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key.

In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers.

Hybrid authentication method example

apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: threescale-authorization
spec:
  template: authorization
  params:
    subject:
      user: request.query_params["user_key"] | request.headers["x-user-key"] | ""
      properties:
        app_id: request.query_params["app_id"] | request.headers["x-app-id"] | ""
        app_key: request.query_params["app_key"] | request.headers["x-app-key"] | ""
        client_id: request.auth.claims["azp"] | ""
    action:
      path: request.url_path
      method: request.method | "get"
      service: destination.labels["service-mesh.3scale.net/service-id"] | ""

1.10.5. Adapter metrics

By default, the adapter reports various Prometheus metrics, which are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled so that it is automatically discovered and scraped by Prometheus.
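
To inspect the metrics directly, the following sketch port-forwards to the adapter pod, reusing the same label selector as the manifest-generation command earlier in this chapter, and fetches the endpoint:

$ oc port-forward -n istio-system \
  $(oc get po -n istio-system -o jsonpath='{.items[?(@.metadata.labels.app=="3scale-istio-adapter")].metadata.name}') 8080:8080 &
$ curl -s http://localhost:8080/metrics | head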

Legal Notice

Copyright © 2017 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Modified versions must remove all Red Hat trademarks.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat. Licensed under the Apache License 2.0.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.