Distributor Helm Readme

Distributor

Installing the Chart

The distributor Helm charts are stored in a public registry maintained by Axual. You need an account to access the Helm chart and the images.

Contact the Axual support team to request access.

To install the Axual Distributor Chart with a single values.yaml:

helm registry login -u [your-user] registry.axual.io/axual-charts
helm upgrade --install local-distributor oci://registry.axual.io/axual-charts/distributor --version 5.3.4 -f values.yaml
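
To verify the login and registry access before installing, the chart metadata can be fetched; this sketch assumes Helm 3.8 or newer with OCI support:

helm show chart oci://registry.axual.io/axual-charts/distributor --version 5.3.4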

Distributor Support

Prerequisite: the following services should be up and running before starting the distributor:

  • Strimzi Operator

  • Kafka Brokers

Also, distributor topics with proper ACLs have to be created before starting the distributor. An initialisation function is available to allow the Helm charts to do this. A secret containing a super user certificate and trusted CA certificates is needed for this to work.
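
As a minimal sketch, the keypair secret referenced by the init function (see the init section below) could be created from local PEM files; the file names and namespace are placeholders:

kubectl create secret tls axual-remote-2-super-user-cert \
  --cert=super-user.crt \
  --key=super-user.key \
  -n <your-namespace>

kubectl create secret tls stores the certificate and key under the keys tls.crt and tls.key, which matches the keypairSecretCertName and keypairSecretKeyName values used in the example below. The trusted CA certificates can usually be taken from the cluster CA secret that Strimzi already creates (for example cluster01-cluster-ca-cert).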

Deploying Kafka Connect with Distributor plugins

These charts can install a Kafka Connect cluster with the distributor plugins. The Kafka Connect cluster is dedicated to a single Axual tenant and instance, because Strimzi Kafka Connect reuses the same authentication settings for all connectors.

Prerequisites (a quick verification sketch follows the list):

  • A namespace to install the Kafka Connect cluster resources in

  • A Strimzi Cluster Operator that watches the namespace

  • The Strimzi Cluster Operator is set up to read from the registry.axual.io repository

  • A known bootstrap server endpoint for the source Kafka cluster

  • A secret containing a client certificate for a Kafka user that can create Kafka topics and access control lists
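
Whether these prerequisites are met can be checked with a few kubectl commands; the namespaces below are placeholders and the resource names are taken from the example that follows:

# Strimzi Cluster Operator pods (deployed with the label name=strimzi-cluster-operator)
kubectl get pods -A -l name=strimzi-cluster-operator

# Bootstrap service of the source Kafka cluster (here a Strimzi cluster named cluster01)
kubectl get service cluster01-kafka-bootstrap -n <kafka-namespace>

# Secret holding the client certificate used by Kafka Connect (see the example below)
kubectl get secret axual-cluster01-distributor -n <connect-namespace>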

The following example configuration starts a Connect cluster with three replicas for the tenant axual and instance dta. It connects to a Kafka cluster named cluster01 using a client certificate.

# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
 # The number of connect replicas to spin up
 replicas: 3

 # Should connector resources (KafkaConnector CRDs) be used; this needs to be true
 useConnectorResources: true

 # Use Strimzi Rack Awareness
 rack:
  enabled: true
  topologyKey: topology.kubernetes.io/zone

 # The bootstrap URL used to connect to the Kafka cluster
 bootstrapServers: "cluster01-kafka-bootstrap:9093"
 # The consumer group identifier used by this Kafka Connect cluster
 groupId: "_rbsoft-multi-distributor-group"
 # Contains the internal connect topics settings
 topics:
  # Set this to the replication factor to use on the source Kafka cluster
  replicationFactor: 1
  config:
   # Name of the Kafka Connect config topic
   name: "_axual-dta-distributor-config"
  status:
   # Name of the Kafka Connect status topic
   name: "_axual-dta-distributor-status"
   # Set the number of partitions used for the status topic
   partitions: 2
  offset:
   # Name of the Kafka Connect offset topic
   name: "_axual-dta-distributor-offset"
   # Set the number of partitions used for the offset topic
   partitions: 25

 # Set the rest of the Kafka Connect configuration
 config:
  key.converter: org.apache.kafka.connect.json.JsonConverter
  value.converter: org.apache.kafka.connect.json.JsonConverter

  # Use this if you need SASL authentication for the source cluster
  #  sasl:
  #      # Set to true to enable a SASL connection
  #     enabled: false
  #     type: "PLAIN" # only PLAIN and SCRAM-SHA-512 are supported
  #     username: ""
  #     password: ""

  # Contains the TLS settings for connecting to the kafka cluster
  tls:
    enabled: true
    createCaCertsSecret: false
    # if createCaCertsSecret is true, set the CA certs below
    #    caCerts:
    #      one_ca.crt: |
    #        -----BEGIN CERTIFICATE-----
    #      other_ca.crt: |
    #        -----BEGIN CERTIFICATE-----

    # if createCaCertsSecret is false, provide the CA certs via an existing
    # secret: set the secret name and the key of the certificate inside that
    # secret under caSecret
    # caSecret:
    #  secretName: your_custom_ca_secret
    #  keyForCertificate: your_custom_ca_cert
    caSecret:
     secretName: "cluster01-cluster-ca-cert"
     keyForCertificate: "ca.crt"

    # Configure authentication using a client certificate
    authentication:
     enabled: true
     createTlsClientSecret: false
     #           clientCert: |
     #              -----BEGIN CERTIFICATE-----
     #              -----END CERTIFICATE-----
     #           clientKey: |
     #              -----BEGIN PRIVATE KEY-----
     #              -----END PRIVATE KEY-----

     # if a TLS secret already exists with the client credentials, provide the name here
     secretName: "axual-cluster01-distributor"

  # Set logging rules; the different types and formats can be found in the Strimzi documentation
  logging:
   type: inline
   loggers:
    log4j.rootLogger: "INFO"
    log4j.logger.io.axual.distributor.common: "INFO"
    log4j.logger.io.axual.distributor.message: "INFO"
    log4j.logger.io.axual.distributor.offset: "INFO"
    log4j.logger.io.axual.distributor.schema: "INFO"
    log4j.logger.org.apache.kafka.connect.runtime.rest: "WARN"

# Used for creating the required Kafka ACLs and topics on the Kafka cluster
init:
 enabled: true
 # A list of Kafka principal names that should be used in the ACLs
 principals:
  - "User:CN=rbsoft distributor client for multi instance,OU=Development,O=Axual,L=Utrecht,ST=Utrecht,C=NL"

 # Secrets containing the TLS certificates used for applying topics and ACLs to Kafka
 tls:
  # -- Existing Keypair secret name
  keypairSecretName: "axual-remote-2-super-user-cert"
  # -- Existing Keypair key name
  keypairSecretKeyName: "tls.key"
  # -- Existing Keypair certificate name
  keypairSecretCertName: "tls.crt"
  # -- Existing Truststore secret name
  truststoreCaSecretName: "cluster01-cluster-ca-cert"
  # -- Existing Truststore certificate name
  truststoreCaSecretCertName: "ca.crt"
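
With the configuration above saved as values.yaml, the chart can be installed as shown in the installation section, and the resulting Connect cluster can be checked; the release name and namespace below are placeholders:

helm upgrade --install dta-distributor oci://registry.axual.io/axual-charts/distributor --version 5.3.4 -f values.yaml -n <connect-namespace>

# Wait for Strimzi to report the Connect cluster and its pods as ready
kubectl get kafkaconnect -n <connect-namespace>
kubectl get pods -n <connect-namespace> -l strimzi.io/kind=KafkaConnect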

Metrics and Alerts

The distributor can be configured to expose metrics, which can be scraped using a PodMonitor. Add the following configuration to enable metrics:

# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
 metrics:
  enabled: true

 # Settings for the PodMonitor
 podMonitor:
  enabled: true
  interval: 60s
  scrapeTimeout: 12s

 # Use this to enable default alerts and add custom alerts
 prometheusRule:
  enabled: true
  rules:
   # Alert definition
    - alert: MyCustomAlertRuleName
      annotations:
       message: '{{ "{{ $labels.connector }}" }} send rate has dropped to 0'
      expr: sum by (connector) ( kafka_connect_sink_task_sink_record_send_rate{connector=~".*-message-distributor-.*"}) == 0
      for: 5m
      labels:
       severity: high
       callout: "false"
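
To verify that the Connect pods actually expose these metrics before wiring up Prometheus, the endpoint can be queried directly; this sketch assumes the Strimzi default Prometheus port 9404 and uses placeholder pod and namespace names:

kubectl port-forward pod/<connect-pod-name> 9404:9404 -n <connect-namespace>

# In another terminal: the metric used by the example alert should be listed
curl -s http://localhost:9404/metrics | grep kafka_connect_sink_task_sink_record_send_rate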

Exposing and securing the Connect REST API

Strimzi restricts access to the REST API to prevent other applications in Kubernetes from modifying connector settings.

This restriction can be removed by instructing Strimzi to NOT manage connectors. Since this leaves the API open to all applications, a custom network policy can be enabled.

The following example allows a specific ingress controller in the namespace local-ingress to connect:

# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
  # Turns off Strimzi Managed connectors. KafkaConnector Resources cannot be used
  useConnectorResources: false

  # Settings for the custom network policy
  networkPolicy:
    # The REST API will not be reachable if network policy is not enabled, AND useConnectorResources is true
    enabled: true
    from:
      # A pod needs to meet ALL the requirements of AT LEAST ONE entry in this list to be able to access the Connect REST API.
      - podSelector:
          matchLabels:
            # The pod must be called ingress-nginx and be part of instance ingress.
            # This is a setup to match the labels of the nginx ingress controller
            app.kubernetes.io/instance: ingress
            app.kubernetes.io/name: ingress-nginx

        namespaceSelector:
          matchLabels:
            # The pod must run in the namespace local-ingress
            kubernetes.io/metadata.name: local-ingress
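
Whether the policy behaves as intended can be checked by calling the Connect REST API from a pod that matches the selectors; the service and namespace names below are placeholders:

# From a pod carrying the allowed labels in the local-ingress namespace, the API should answer
kubectl run policy-check -n local-ingress --restart=Never --rm -it \
  --labels="app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/instance=ingress" \
  --image=curlimages/curl --command -- \
  curl -s http://<connect-rest-api-service>.<connect-namespace>:8083/connectors

# The same command without the labels (or from another namespace) should be blocked by the policy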

Defining an ingress for the Connect REST API

Strimzi does not provide an Ingress resource to reach the Connect REST API, as the API should not be used directly when using managed connector resources (KafkaConnector resources).

A custom Ingress can be defined to expose the service. The following example uses nginx for ingress.

# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
  # Turns off Strimzi Managed connectors. KafkaConnector Resources cannot be used
  # If set to true, then the network policy must be configured for the ingress to reach the service
  useConnectorResources: false

  # Settings for the custom Ingress
  ingress:
    enabled: true
    className: nginx
    annotations:
      # The example rewrites the target URL using nginx configuration
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    hosts:
      - host: examples.dev
        paths:
          # Use the distributor path to be able to call http://examples.dev/distributor/xyz
          # The service is called with path /xyz because of the pattern and rewrite annotation
          - path: /distributor(/|$)(.*)
            pathType: ImplementationSpecific
    tls:
      - hosts:
          - examples.dev
        secretName: examples-dev-server-cert-secret
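
With this Ingress in place, the Connect REST API should be reachable on the distributor path; a quick check, assuming DNS and the TLS certificate for examples.dev are in place:

# The rewrite annotation strips the /distributor prefix, so this maps to GET /connectors on the Connect REST API
curl https://examples.dev/distributor/connectors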

Changing Strimzi Kafka Connect settings

The distributor Helm charts allow the following Strimzi configurations for Kafka Connect. See the Strimzi documentation for details:

connect:
 resources:
  limits:
   cpu: 500m
   memory: 750Mi

 jmxOptions:
 # JMX Options

 jvmOptions:
 # Java VM Options

 livenessProbe:
 # Strimzi Connect liveness probe options

 readinessProbe:
 # Strimzi Connect readiness probe options

 tracing:
 # Strimzi Connect tracing options

 templateOverride:
 # Strimzi Connect template options

 externalConfiguration:
 # Strimzi Connect external configuration options

Changing Image Registry, Repository or Tag

To change the globally used registry, change the global config. This is useful when you use a custom registry to mirror the content of the Axual registry.

global:
  # -- Globally override the registry to pull images from.
  imageRegistry: "local.image.registry:5000"
  # -- Globally override the list of ImagePullSecrets provided.
  imagePullSecrets:
    - name: clusterDockerSecret1
    - name: clusterDockerSecret2

It is also possible to set the registry, repository and tag for Connect and Init directly:

connect:
 # Contains the image to use for connect
 image:
  # Set the specific registry to use for the connect containers
  registry: "local.image.registry:5000"
  # Set the specific repository to use for the connect containers
  repository: "axual/distributor"
  # Set the specific tag of the repository to use for the connect containers
  tag: "5.3.4-0.43.0"
  # Specify the docker secrets that can be used to pull the image from the registry
  pullSecrets:
   - name: clusterDockerSecret3
   - name: clusterDockerSecret4

init:
 # Which image and registry should be used for the init containers
 # If not specified, the connect image settings will be used
 image:
  # Set the specific registry to use for the init containers
  registry: "local.image.registry:5000"
  # Set the specific repository to use for the init containers
  repository: "axual/distributor"
  # Set the specific tag of the repository to use for the init containers
  tag: "5.3.4-0.43.0"
  # Specify the docker secrets that can be used to pull the image from the registry
  pullSecrets:
   - name: clusterDockerSecret3
   - name: clusterDockerSecret4

After installing the distributor with the desired message/schema distributor settings, you can check the connectors with the instructions below.

  1. Use port forwarding or the ingress to get access to the distributor application (for example on port 8083).

  2. To see the list of connectors, use http://localhost:{port}/connectors/, for example http://localhost:8083/connectors/

  3. To reach one of your schema or message distributors and its configuration, use the listed connector name: http://localhost:8083/connectors/{listed-connector-name}

  4. Check its status with http://localhost:8083/connectors/{your-connector-name}/status

  5. Check its configuration with http://localhost:8083/connectors/{your-connector-name}/config

  6. Check its tasks with http://localhost:8083/connectors/{your-connector-name}/tasks
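
As a concrete sketch of these steps, using port forwarding against the Connect REST API service (the service and namespace names are placeholders; look them up with kubectl get svc):

kubectl port-forward svc/<connect-rest-api-service> 8083:8083 -n <connect-namespace>

# In another terminal: list all connectors
curl http://localhost:8083/connectors/

# Inspect a single message or schema distributor connector
curl http://localhost:8083/connectors/<listed-connector-name>/status
curl http://localhost:8083/connectors/<listed-connector-name>/config
curl http://localhost:8083/connectors/<listed-connector-name>/tasks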