Configure REST Proxy

This page outlines the configuration values for the REST Proxy deployed via Axual Streaming Helm chart.

About REST Proxy

For detailed information on the REST Proxy, please refer to the REST Proxy documentation.

REST Proxy Configuration

This section provides basic configuration and examples you can use to build your own values.yaml file for deploying REST Proxy. Refer to the REST Proxy 1.14.1 Helm Readme for more details.

Basic REST Proxy Configuration

REST Proxy is configured and deployed by the Axual Streaming Charts. The configuration example below contains every required field:

Click to open rest-proxy-values.yaml
global:
  rest-proxy:
    enabled: true

rest-proxy:
  image:
    # tag: "1.5.4" # Optional: override the default image tag
    registry: "registry.axual.io"
  keystoreProvider:
    image:
      registry: "registry.axual.io"
  # Enable podDisruptionBudget
  podDisruptionBudget:
    enabled: true
  # Enable Reloader
  # -- Optional: deployment-specific annotations.
  # These are not required for the Axual Platform itself,
  # but can be useful depending on your cluster setup.
  podAnnotations:
    # Enables automatic pod reload when ConfigMaps/Secrets change (Stakater Reloader)
    "reloader.stakater.com/auto": "true"
    # Specifies Fluent Bit parser for structured log collection
    fluentbit.io/parser: json

  config:
    # The REST Proxy component is installed for a specific tenant, instance, and cluster.
    # To add REST Proxy to another instance, install it as a separate chart release.
    axual:
      tenant: "<tenant-short-name>" # The target tenant
      instance: "<instance-short-name>" # The shortname of the target instance
      configMode: "static"
      static-configuration:
        tenant: "<tenant-short-name>" # The target tenant
        instance: "<instance-short-name>" # The shortname of the target instance
        cluster: "<cluster-name>" # The name of the target cluster
        # If REST Proxy runs in the same Kubernetes cluster, the internal Kafka service URL can be used instead.
        bootstrapServers: "bootstrap-kafka.<domain>:443"
        schemaRegistryUrl: "https://apicurio.<domain>"
        enable.value.headers: "false"
        groupIdResolver: "io.axual.common.resolver.GroupPatternResolver"
        groupIdPattern: "{tenant}-{instance}-{environment}-{group}" # Must match the group pattern used in the cluster configuration
        topicResolver: "io.axual.common.resolver.TopicPatternResolver"
        topicPattern: "{tenant}-{instance}-{environment}-{topic}" # Must match the topic pattern used in the topic configuration
        transactionalIdResolver: "io.axual.common.resolver.TransactionalIdPatternResolver"
        transactionalIdPattern: "{tenant}-{instance}-{environment}-{app.id}-{transactional.id}" # Must match the transactional id pattern used in the transaction configuration
        # We use AdvancedAclPrincipalBuilder because we have chained principals. For a principal without a chain, use BasicAclPrincipalBuilder
        principalBuilderClass: io.axual.security.principal.AdvancedAclPrincipalBuilder
    spring:
      security:
        oauth2:
          resourceserver:
            jwt:
              issuer-uri: https://sts.windows.net/dcbcc3be-2c54-4328-86ee-2589d4da46de/  # Must match the iss claim of the token
      application:
        name: axual-rest-proxy
  logbackConfig: |
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
      </appender>

      <root level="INFO">
        <appender-ref ref="console" />
      </root>
    </configuration>
  tls:
    clientEnabled: true
    serverEnabled: true
    clientAuth: need
    automatedKeystores: true
    createServerKeypairSecret: false
    serverKeypairSecretName: server-cert-secret
    createClientKeypairSecret: false
    clientKeypairSecretName: rest-proxy-cert-secret
    createTruststoreCaSecret: false
    truststoreCaSecretName: my-external-ca
  service:
    type: ClusterIP
  ingress:
    enabled: true
    # -- The name of the IngressClass cluster resource.
    className: nginx
    # -- Annotations to add to the Ingress resource.
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
      external-dns.alpha.kubernetes.io/hostname: "restproxy.<domain>"
      axual.com/service.class: "aks-np-ams"
      external-dns.alpha.kubernetes.io/ttl: "60"
    hosts:
      - # -- The fully qualified domain name of a network host.
        host: restproxy.<domain>
        paths:
          - # -- Matched against the path of an incoming request.
            path: /
            # -- Determines the interpretation of the Path matching.
            # Can be one of the following values: `Exact`, `Prefix`, `ImplementationSpecific`.
            pathType: ImplementationSpecific
    # -- TLS configuration for this Ingress.
    tls:
      # Use secretName to update the certificates of the ingress
      - secretName: server-cert-secret
        hosts:
          - restproxy.<domain>

  kafkaInitContainer:
    # -- Kafka bootstrap servers to initialize
    # If REST Proxy runs in the same Kubernetes cluster, the internal Kafka service URL can be used instead.
    bootstrapServers: "SSL://bootstrap-kafka.<domain>:443"
    # -- Principal common name to give access to (should match tls.clientKeypairSecretName)
    # principal: "[0] CN=Axual Root CA 2018, [1] CN=Axual Intermediate CA 2018 1, [2] CN=REST Proxy Client, O=Axual B.V.,L=Utrecht,ST=Utrecht,C=NL"
    # If there is a certificate-related error in the Kafka logs, check the CN reported there.
    # In this example the certificates are not chained, so the value is simply "CN=REST Proxy Client"
    principal: "CN=REST Proxy Client"
    # These two should be set based on the topic and group patterns; in this example they are just the instance short name
    # -- Group prefix to give access to (usually {tenant}-{instance})
    groupPattern: "<instance-short-name>"
    # -- Topic prefix to give access to (usually {tenant}-{instance})
    topicPattern: "<instance-short-name>"
    tls:
      # -- Existing Keypair secret name
      keypairSecretName: "server-cert-secret"
      # -- Existing Keypair key name
      keypairSecretKeyName: "tls.key"
      # -- Existing Keypair certificate name
      keypairSecretCertName: "tls.crt"
      # -- Existing Truststore secret name
      truststoreCaSecretName: "my-external-ca"
      # -- Existing Truststore certificate name
      truststoreCaSecretCertName: "ca.crt"
  serviceMonitor:
    enabled: true

REST Proxy must use a client certificate whose identity matches the DNS name in the server certificate.
Ensure that restproxy.<domain> presents a certificate signed by the correct CA, for example with openssl s_client -showcerts -verify 5 -connect 192.168.99.120:443 -servername restproxy.<domain> < /dev/null
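To determine the exact common name to use later for kafkaInitContainer.principal, you can inspect the subject of the client certificate directly. The snippet below is a self-contained sketch: it generates a throwaway certificate with the CN used on this page only to demonstrate the output; against a real deployment, run the same openssl x509 command on the tls.crt from the secret referenced by tls.clientKeypairSecretName.

```shell
#!/bin/bash
# Demonstration only: create a throwaway keypair with the CN used on this
# page so the subject extraction can be shown end to end.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=REST Proxy Client" \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Print the certificate subject; the CN shown here is the value to use
# in kafkaInitContainer.principal (output format varies per OpenSSL version)
openssl x509 -in /tmp/demo.crt -noout -subject
```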

To test the REST Proxy component, the following script can be used to produce a message to Kafka. Note that the topic and the application must be created in Self-Service before they can be used for testing.

Click to open rest-proxy-message.sh
#!/bin/bash
# https://<rest-proxy-ingress>
REST_HOST="https://restproxy.<domain>"
# Short name of the target environment
ENVIRONMENT=""
# Name of the target topic
STREAM=""
# The uuid of the target application. Can be found in URL of the application details in UI
UUID=""
# The id of the target application
APP_ID=""
JSON_HEADER="Content-Type: application/json"
PRODUCE_RECORD='{
   "keyMessage":{
      "type":"STRING",
      "message":"Random key: '$RANDOM'"
   },
   "valueMessage":{
      "type":"STRING",
      "message":"Random value: '$RANDOM'"
   }
}'
echo "Sending"
curl -vk --request POST \
  --url "${REST_HOST}/stream/${ENVIRONMENT}/${STREAM}" \
  --header "axual-application-id: $APP_ID" \
  --header 'axual-application-version: 1.0' \
  --header "axual-producer-uuid: $UUID" \
  --header "$JSON_HEADER" \
  --key <path-to-certificate-key> \
  --cert <path-to-certificate-crt> \
  --data "$PRODUCE_RECORD"
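For automated checks it can be useful to capture only the HTTP status code of the produce call instead of curl's verbose output. The sketch below reuses the variables from the script above; KEY_FILE and CERT_FILE are hypothetical variables standing in for the certificate path placeholders, and a 200 response is assumed to indicate success.

```shell
#!/bin/bash
# Sketch: the same produce request as above, but returning only the HTTP
# status code. Assumes REST_HOST, ENVIRONMENT, STREAM, UUID, APP_ID,
# KEY_FILE and CERT_FILE are set; KEY_FILE/CERT_FILE replace the
# <path-to-certificate-*> placeholders from the script above.
produce_status() {
  curl -sk -o /dev/null -w '%{http_code}' --request POST \
    --url "${REST_HOST}/stream/${ENVIRONMENT}/${STREAM}" \
    --header "axual-application-id: ${APP_ID}" \
    --header 'axual-application-version: 1.0' \
    --header "axual-producer-uuid: ${UUID}" \
    --header 'Content-Type: application/json' \
    --key "${KEY_FILE}" --cert "${CERT_FILE}" \
    --data "$1"
}

# Example usage, once the variables above are filled in:
# STATUS=$(produce_status "$PRODUCE_RECORD")
# [ "$STATUS" = "200" ] || { echo "produce failed: HTTP $STATUS" >&2; exit 1; }
```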

The following sections provide additional configuration options and more detail on specific parts of the configuration above.

REST Proxy repository

First, you need to add some configuration to specify where to pull the REST Proxy image from. You can do this in the following way:

values.yaml
rest-proxy:

  image:
    registry: "registry.axual.io"
    tag: "1.12.0"
  imagePullSecrets:
    - name: docker-credentials

Kafka init container

REST Proxy requires an init container (running a Kafka image) to create the ACLs in the Kafka cluster. We need to specify:

  • bootstrapServers of the Kafka cluster where we want to apply the ACLs

  • principal to whom we want to grant the ACLs. Depending on how the Kafka installation is configured, this is either the full SSL principal chain or just the CN.

  • groupPattern which is the group prefix to give access to (typically {tenant}-{instance}- depending on cluster group pattern)

  • topicPattern which is the topic prefix to give access to (typically {tenant}-{instance}- depending on cluster topic pattern)

  • tls Secrets needed to connect to the Kafka cluster

If Kafka is configured to validate ACLs over the full principal chain, provide the principal chain as in this example: [0] CN=Root CA, [1] CN=Intermediate CA, [2] CN=schema-registry. Otherwise, provide the common name prefixed with CN=.
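To see which subjects make up a chain before assembling the principal string, you can print every certificate in a PEM bundle. The snippet below is a sketch that first builds a two-certificate demo bundle; against a real installation, run the final pipeline on your actual client certificate bundle instead of /tmp/chain.pem.

```shell
#!/bin/bash
# Demonstration: build a two-certificate PEM bundle, then print the
# subject and issuer of each entry. The subjects printed are the CNs
# you assemble into the principal chain string.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Root CA" -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=schema-registry" -keyout /tmp/leaf.key -out /tmp/leaf.crt 2>/dev/null
cat /tmp/ca.crt /tmp/leaf.crt > /tmp/chain.pem

# Print subject/issuer for every certificate in the bundle
openssl crl2pkcs7 -nocrl -certfile /tmp/chain.pem | \
  openssl pkcs7 -print_certs -noout
```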

You can configure it in the following way:

values.yaml
rest-proxy:
  kafkaInitContainer:
    bootstrapServers: ""
    principal: ""
    groupPattern: ""
    topicPattern: ""
    tls:
      keypairSecretName: ""
      keypairSecretKeyName: ""
      keypairSecretCertName: ""
      truststoreCaSecretName: ""
      truststoreCaSecretCertName: ""

Logback Configuration

It is possible to define a ConfigMap containing the logback configuration used by the REST Proxy application. You can configure:

  • pattern: Defines the exact pattern for log statements

  • rootLoglevel: Sets the base logging level

  • loggers: Configures specific loggers with their own levels, overriding the root level for those loggers.

Here is an example:

values.yaml
rest-proxy:

  logging:
    pattern: '%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}'
    rootLoglevel: debug
    loggers:
      io.axual: info
      io.axual.proxy.rest: debug
      org.apache.kafka.clients.admin.AdminClientConfig: info
      org.apache.kafka.clients.producer.ProducerConfig: info
      org.apache.kafka.clients.consumer.ConsumerConfig: info
      org.springframework.boot.web: debug

TLS Configuration

If needed, you can specify secrets containing the PEM certificates for keystore generation:

  • Server keypair

  • Client keypair

  • Truststore

Here is an example of how you can configure it.

values.yaml
rest-proxy:

  tls:
    # -- Creates server keypair from PEM
    createServerKeypairSecret: true
    # -- PEM used to generate the server keypair if `createServerKeypairSecret` is true
    serverCertificatePem: <server-certificate>
    # -- PEM used to generate the server keypair if `createServerKeypairSecret` is true
    serverKeyPem: <server-key>

    # -- Creates client keypair from PEM
    createClientKeypairSecret: true
    # -- PEM used to generate the client keypair if `createClientKeypairSecret` is true
    clientCertificatePem: <client-certificate>
    # -- PEM used to generate the client keypair if `createClientKeypairSecret` is true
    clientKeyPem: <client-key>

    # -- Creates truststore from PEMs
    createTruststoreCaSecret: true
    # -- Set of PEMs used to generate the truststore if `createTruststoreCaSecret` is true
    caCerts:
      ca_one.crt:  <first-cert>
      ca_two.crt: <second-cert>

For more information on the secrets defined above, refer to TLS secrets.

Application Configuration

REST Proxy is a Spring Boot application, and Spring Boot applications can be configured with application.yml files. Whatever is present under config in the values file is injected into a ConfigMap and mounted as an application.yml file.

values.yaml
rest-proxy:
  config:

The sections below present the most important REST Proxy configuration options.

Logback and server Configuration

The logback.xml file is the one defined earlier in the Logback Configuration section. The server part configures the HTTP server: the port, the SSL ciphers that can be used, and the Tomcat access log, which is disabled by default.

values.yaml
rest-proxy:
  config:
    logging.config: /logging/logback.xml

    server:
      port: 18111
      ssl.ciphers: 'TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA'
      tomcat:
        accesslog:
          #  defaults to disabled
          enabled: false
          pattern: '{"host": "%h", "timestamp":"%{yyyy-MM-dd HH:mm:ss.SSS}t", "thread": "%I", "request_line": "%r", "response_status_code":"%s", "bytes_sent":  "%b", "request_process_time":"%D","user_agent": "%{user-agent}i"}'
          directory: "/dev"
          prefix: "stdout"
          buffered: false
          suffix: ""
          fileDateFormat: ""

REST Proxy client Configuration

You can configure the Kafka clients instantiated by the REST Proxy as shown in the following example.

values.yaml
rest-proxy:

  # -- Configuration passed to the container.
  # Contents get injected to a ConfigMap, which gets mounted as an `application.yml` file.
  config:

    axual:
      tenant: axual
      instance: local
      applicationId: rest-proxy
      applicationVersion: 1.12.0

      sslProtocol: "SSL"
      sslEnableHostnameVerification: false
      acl:
        cacheTtlMs: 30000
        retrySleep: 100
        useCache: false
      producer:
        config:
          # Overrides kafka producer configuration
          metadata-max-age-ms: 180000
          connections-max-idle-ms: 180000
          request-timeout-ms: 120000
          retries: 3
          max-block-ms: 60000
          acks: all
          batch-size: 10
          linger-ms: 1
          max-in-flight-requests-per-connection: 5
          send-buffer-bytes: 10000
          receive-buffer-bytes: 10000
      consumer:
        numberOfThreads: 10
        config:
          # Overrides kafka consumer configuration
          metadata-max-age-ms: 180000
          connections-max-idle-ms: 180000
      avro:
        maxSchemasPerSubject: 100
        basicAuthCredentialsSource: ""

REST Proxy static configuration

This section contains the configuration needed to connect to the Kafka cluster and to perform group.id and topic resolution. An example follows:

values.yaml
rest-proxy:
  config:
    axual:
      static-configuration:
        tenant: "axual"
        instance: "test"
        cluster: "ams01"
        bootstrapServers: "bootstrap.ams01.cloud.axual.com:9094"
        schemaRegistryUrl: "schema-registry-slave.cloud.axual.com"
        groupIdResolver: "io.axual.common.resolver.GroupPatternResolver"
        groupIdPattern: "{tenant}-{instance}-{environment}-{group}"
        topicResolver: "io.axual.common.resolver.TopicPatternResolver"
        topicPattern: "{tenant}-{instance}-{environment}-{topic}"
        transactionalIdResolver: "io.axual.common.resolver.TransactionalIdPatternResolver"
        transactionalIdPattern: "{tenant}-{instance}-{environment}-{transactional.id}"
        principalBuilderClass: io.axual.security.principal.AdvancedAclPrincipalBuilder
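The resolvers above perform plain pattern substitution. Purely as an illustration (this is not the actual Axual resolver implementation), resolving a topic name from the static configuration in this example, with a hypothetical environment dev and topic orders, works like this:

```shell
#!/bin/bash
# Illustration of {tenant}-{instance}-{environment}-{topic} resolution,
# using the tenant/instance from the example above and a hypothetical
# environment "dev" and topic "orders".
TENANT=axual INSTANCE=test ENVIRONMENT=dev TOPIC=orders
PATTERN='{tenant}-{instance}-{environment}-{topic}'
RESOLVED=${PATTERN//'{tenant}'/$TENANT}
RESOLVED=${RESOLVED//'{instance}'/$INSTANCE}
RESOLVED=${RESOLVED//'{environment}'/$ENVIRONMENT}
RESOLVED=${RESOLVED//'{topic}'/$TOPIC}
echo "$RESOLVED"   # axual-test-dev-orders
```

The same substitution applies to the group and transactional id patterns, which is why they must match the patterns configured on the cluster.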