# Distributor Helm Readme

## Installing the Chart
The distributor Helm charts are stored in a public registry maintained by Axual. An account is required to access the Helm chart and the images. Contact the Axual support team to request access.
To install the Axual Distributor chart with a single values.yaml:

```shell
helm registry login -u [your-user] registry.axual.io/axual-charts
helm upgrade --install local-distributor oci://registry.axual.io/axual-charts/distributor --version 5.5.2 -f values.yaml
```
## Distributor Support
Prerequisite: the following services must be up and running before starting the distributor:

- Strimzi Operator
- Kafka Brokers

Distributor topics with the proper ACLs also have to be created before starting the distributor. An initialisation function is available to let the Helm charts do this. A secret containing a super user certificate and the trusted CA certificates is needed for this to work.
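For example, such a secret can be created from an existing keypair (a minimal sketch; the secret name, file names, and namespace are assumptions, and the init configuration later in this README shows how such a secret is referenced):

```shell
# create a TLS secret holding the super user keypair (names are examples)
kubectl create secret tls distributor-super-user-cert \
  --cert=tls.crt --key=tls.key -n kafka
```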
## Strimzi Compatibility
Distributor is deployed by the Strimzi Cluster Operator and is available for several Strimzi releases. The tables below list each distributor version and the Strimzi operator versions it supports. The `connect.image.tag` configuration property can be set to pin a specific Connect image tag; see the notes below the tables.
| Distributor | Strimzi |
|-------------|---------|
| 5.3.0\* | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0 |
| 5.3.1\* | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0 |
| 5.3.2 | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0, 0.40, 0.41.0, 0.42.0, 0.43.0 |
| 5.3.3 | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0, 0.40, 0.41.0, 0.42.0, 0.43.0 |
| 5.3.4 | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0, 0.40, 0.41.0, 0.42.0, 0.43.0 |
| 5.3.5 | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0, 0.40, 0.41.0, 0.42.0, 0.43.0 |
| 5.3.6 | 0.35.1, 0.36.0, 0.36.1, 0.37.0, 0.38.0, 0.39.0, 0.40, 0.41.0, 0.42.0, 0.43.0 |
| 5.4.0 | 0.35.1\*\*, 0.36.0\*\*, 0.36.1\*\*, 0.37.0\*\*, 0.38.0\*\*, 0.39.0\*\*, 0.40, 0.41.0, 0.42.0, 0.43.0, 0.44.0, 0.45.0 |
| 5.4.1 | 0.35.1\*\*, 0.36.0\*\*, 0.36.1\*\*, 0.37.0\*\*, 0.38.0\*\*, 0.39.0\*\*, 0.40, 0.41.0, 0.42.0, 0.43.0, 0.44.0, 0.45.0 |
| 5.4.2 | 0.35.1\*\*, 0.36.0\*\*, 0.36.1\*\*, 0.37.0\*\*, 0.38.0\*\*, 0.39.0\*\*, 0.40, 0.41.0, 0.42.0, 0.43.0, 0.44.0, 0.45.0 |

\* 5.3.0 and 5.3.1 do not have Strimzi-specific images

\*\* Deprecated, support will be removed in a future release
| Distributor | Strimzi |
|-------------|---------|
| 5.5.2 | 0.40\*\*\*, 0.41.0\*\*\*, 0.42.0\*\*\*, 0.43.0\*\*\*, 0.44.0, 0.45.0, 0.46.1, 0.47.0, 0.48.0, 0.49.0, 0.49.1 |
| 5.5.1 | 0.40\*\*\*, 0.41.0\*\*\*, 0.42.0\*\*\*, 0.43.0\*\*\*, 0.44.0, 0.45.0, 0.46.1, 0.47.0, 0.48.0 |
| 5.5.0 | 0.40\*\*\*, 0.41.0\*\*\*, 0.42.0\*\*\*, 0.43.0\*\*\*, 0.44.0, 0.45.0, 0.46.1, 0.47.0 |

\*\*\* Deprecated, support will be removed in a future release

Distributor 5.5.0+ determines the correct image tag from the configuration `strimzi.version`. Setting `connect.image.tag` will override the automatic tag selection based on `strimzi.version`.
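For example, on Distributor 5.5.x the image tag can be derived from the operator version alone (a minimal values.yaml sketch; the version shown is one of the supported releases above):

```yaml
strimzi:
  version: "0.48.0"   # the chart derives an image tag such as 5.5.2-0.48.0
# connect:
#   image:
#     tag: "5.5.2-0.48.0"   # uncomment to pin an explicit tag instead
```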
## Upgrading to Distributor 5.5.2 with Strimzi 0.49.x

### TLS Certificate Chain Configuration
When upgrading to Distributor 5.5.2 with Strimzi 0.49.0/0.49.1 (Kafka 4.1.1), you may encounter TLS handshake failures:

```
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
```
Cause: Strimzi 0.49.x requires the complete certificate chain (root + intermediate CA) in the caCerts configuration. Previous versions (e.g., Distributor 5.5.1 with Strimzi 0.48.0) worked with only the root certificate.
Fix: Add the intermediate CA certificate to `distribution.sourceCluster.tls.caCerts` and `distribution.clusters.<target>.tls.caCerts`. Retrieve the full chain from your cluster CA secret:

```shell
kubectl get secret <cluster>-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d
```

This returns both the intermediate and root certificates. Include both in your `caCerts` configuration.
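A hedged sketch of the resulting configuration (the certificate key names are illustrative; the `caCerts` map format matches the chart values shown later in this README):

```yaml
distribution:
  sourceCluster:
    tls:
      caCerts:
        root_ca.crt: |
          -----BEGIN CERTIFICATE-----
          ...root CA certificate...
          -----END CERTIFICATE-----
        intermediate_ca.crt: |
          -----BEGIN CERTIFICATE-----
          ...intermediate CA certificate...
          -----END CERTIFICATE-----
```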
### API Deprecation Warnings
When deploying, you will see warnings about deprecated Strimzi API versions:

```
W1217 14:39:07.651758 29765 warnings.go:70] Version v1beta2 of the KafkaConnect API is deprecated. Please use the v1 version instead.
W1217 14:39:07.653912 29765 warnings.go:70] Version v1beta2 of the KafkaConnector API is deprecated. Please use the v1 version instead.
```
These warnings are expected and intentional. The chart uses v1beta2 API versions to maintain backward compatibility with Strimzi versions older than 0.44. Upgrading to v1 would break the migration path for users on older Strimzi installations.
## Upgrading to Distributor 5.5.0 and Strimzi 0.46.1 or newer from earlier versions
Distributor 5.5.0 contains several changes to support the Kafka 4-based Strimzi images of 0.46.0 and newer. This guide assumes the new Strimzi operator is not yet installed and that you are on a Strimzi version supported by Distributor 5.5.0.
Follow this guide to upgrade:

- Update the chart dependency to 5.5.0
- Update the values.yaml (see the sketch after this list):
  - Add a new configuration property `strimzi.version` and set it to a value matching your Strimzi operator
  - Remove the value `connect.image.tag` if set. Keeping this value would force the chart to load that specific image.
  - Remove the value `init.image.tag` if set. Keeping this value would force the chart to load that specific image for running the init scripts.
- Apply the update
- Verify the updated distributor. The update should result in the 5.5.0 images being used, with no changes in logging.
- Update to the new Strimzi operator after checking distributor compatibility
- Update the values.yaml:
  - Set the configuration property `strimzi.version` to match your Strimzi operator version
  - Update the content of `connect.logging` to Log4J2 format if specified. If not specified, the default inline logging will be used.
- Apply the update
- Verify that the image used is for the correct Distributor and Strimzi version and that the output logging is working as expected
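A minimal sketch of the first values.yaml update (the operator version is an example; use the value matching your installed operator):

```yaml
# values.yaml after the first update
strimzi:
  version: "0.45.0"   # must match the Strimzi operator that is currently installed
# connect.image.tag and init.image.tag are removed, so the chart
# derives the image tags from strimzi.version automatically
```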
### Converting inline loggers from Log4J to Log4J2
Kafka 4.0.0 has removed support for Log4J and uses Log4J2 instead. The configuration field names have changed, and the logger name and level are configured separately.
The following code blocks contain the same logger configurations in both formats.
Log4J example inline loggers (Strimzi < 0.46.1):

```yaml
log4j.rootLogger: "INFO"
log4j.logger.io.axual.distributor.common: "INFO"
log4j.logger.io.axual.distributor.message: "INFO"
log4j.logger.io.axual.distributor.offset: "INFO"
log4j.logger.io.axual.distributor.schema: "INFO"
log4j.logger.org.apache.kafka.clients.consumer: "WARN"
log4j.logger.org.apache.kafka.clients.producer: "WARN"
log4j.logger.org.apache.kafka.clients.admin: "WARN"
```
Log4J2 example inline loggers (Strimzi >= 0.46.1):

```yaml
rootLogger.level: "INFO"
logger.distributorCommon.name: "io.axual.distributor.common"
logger.distributorCommon.level: "INFO"
logger.distributorMessage.name: "io.axual.distributor.message"
logger.distributorMessage.level: "INFO"
logger.distributorOffset.name: "io.axual.distributor.offset"
logger.distributorOffset.level: "INFO"
logger.distributorSchema.name: "io.axual.distributor.schema"
logger.distributorSchema.level: "INFO"
logger.kafkaConsumer.name: "org.apache.kafka.clients.consumer"
logger.kafkaConsumer.level: "WARN"
logger.kafkaProducer.name: "org.apache.kafka.clients.producer"
logger.kafkaProducer.level: "WARN"
logger.kafkaAdmin.name: "org.apache.kafka.clients.admin"
logger.kafkaAdmin.level: "WARN"
```
## Deploying Kafka Connect with Distributor plugins

These charts can install a Kafka Connect cluster with distributor plugins. The Kafka Connect cluster is dedicated to a single Axual tenant and instance, because Strimzi Kafka Connect reuses the authentication settings for all connectors.

Prerequisites:
- A namespace to install the Kafka Connect cluster resources in
- A Strimzi Cluster Operator that watches the namespace
- The Strimzi Cluster Operator is set up to read from the registry.axual.io repository
- A known bootstrap server endpoint for the source Kafka cluster
- A secret containing a client certificate for a Kafka user that can create Kafka topics and access control lists
The following example configuration starts a Connect cluster with three replicas for the tenant `axual` and instance `dta`. It connects to a Kafka cluster named `cluster01` using a client certificate.

```yaml
global:
  # Globally override the registry to pull images from.
  imageRegistry: "registry.axual.io"
  # Globally override the list of ImagePullSecrets provided, used for init containers
  imagePullSecrets: []

# Configure the Strimzi Cluster Operator version installed. Used to determine
# the correct image tag and default logging to use
strimzi:
  version: "0.48.0"

connect:
  # Contains the image to use for connect
  image:
    repository: "axual/distributor"
    # Specify the exact tag to override version deduction from the strimzi.version value
    # tag: "5.5.2-0.48.0"
  # The number of connect replicas to spin up
  replicas: 3
  # Whether connector resources (KafkaConnector CRDs) should be used; needs to be true
  useConnectorResources: true
  # Use Strimzi rack awareness
  rack:
    enabled: true
    topologyKey: topology.kubernetes.io/zone
  # The bootstrap URL used to connect to the Kafka cluster
  bootstrapServers: "cluster01-kafka-bootstrap:9093"
  # The consumer group identifier used by this Kafka Connect cluster
  groupId: "_rbsoft-multi-distributor-group"
  # Contains the internal connect topics settings
  topics:
    # Set this to the replication factor to use on the source Kafka cluster
    replicationFactor: 1
    config:
      # Name of the Kafka Connect config topic
      name: "_axual-dta-distributor-config"
    status:
      # Name of the Kafka Connect status topic
      name: "_axual-dta-distributor-status"
      # Set the number of partitions used for the status topic
      partitions: 2
    offset:
      # Name of the Kafka Connect offset topic
      name: "_axual-dta-distributor-offset"
      # Set the number of partitions used for the offset topic
      partitions: 25
  # Set the rest of the Kafka Connect configuration
  config:
    key.converter: JsonConverter
    value.converter: JsonConverter
  # Use this if you need SASL authentication for the source cluster
  # sasl:
  #   # Set to true to enable a SASL connection
  #   enabled: false
  #   type: "PLAIN" # only PLAIN and SCRAM-SHA-512 are supported
  #   username: ""
  #   password: ""
  # Contains the TLS settings for connecting to the Kafka cluster
  tls:
    enabled: true
    createCaCertsSecret: false
    # if createCaCertsSecret is true, set the CA certs below
    # caCerts:
    #   one_ca.crt: |
    #     -----BEGIN CERTIFICATE-----
    #   other_ca.crt: |
    #     -----BEGIN CERTIFICATE-----
    # if createCaCertsSecret is false, the CA certs need to be set with an
    # existing secret (name) and the name of the cert inside the secret
    # caSecret:
    #   secretName: your_custom_ca_secret
    #   keyForCertificate: your_custom_ca_cert
    caSecret:
      secretName: "cluster01-cluster-ca-cert"
      keyForCertificate: "ca.crt"
  # Configure authentication using a client certificate
  authentication:
    enabled: true
    createTlsClientSecret: false
    # clientCert: |
    #   -----BEGIN CERTIFICATE-----
    #   -----END CERTIFICATE-----
    # clientKey: |
    #   -----BEGIN PRIVATE KEY-----
    #   -----END PRIVATE KEY-----
    # if a TLS secret already exists with the client credentials, provide the name here
    secretName: "axual-cluster01-distributor"
  # Set logging rules; the different types and formats can be found in the Strimzi
  # documentation. If logging is not set, a default version will be deduced from
  # the value of strimzi.version
  logging:
    type: inline
    loggers:
      rootLogger.level: "INFO"
      logger.distributorCommon.name: "io.axual.distributor.common"
      logger.distributorCommon.level: "INFO"
      logger.distributorMessage.name: "io.axual.distributor.message"
      logger.distributorMessage.level: "INFO"
      logger.distributorOffset.name: "io.axual.distributor.offset"
      logger.distributorOffset.level: "INFO"
      logger.distributorSchema.name: "io.axual.distributor.schema"
      logger.distributorSchema.level: "INFO"
      logger.kafkaConsumer.name: "org.apache.kafka.clients.consumer"
      logger.kafkaConsumer.level: "WARN"
      logger.kafkaProducer.name: "org.apache.kafka.clients.producer"
      logger.kafkaProducer.level: "WARN"
      logger.kafkaAdmin.name: "org.apache.kafka.clients.admin"
      logger.kafkaAdmin.level: "WARN"

# Used for creating the required Kafka ACLs and topics on the Kafka cluster
init:
  enabled: true
  # A list of Kafka principal names that should be used in the ACLs
  principals:
    - "User:CN=rbsoft distributor client for multi instance,OU=Development,O=Axual,L=Utrecht,ST=Utrecht,C=NL"
  # Secrets containing the TLS certificates for applying topics and ACLs to Kafka
  tls:
    # -- Existing keypair secret name
    keypairSecretName: "axual-remote-2-super-user-cert"
    # -- Existing keypair key name
    keypairSecretKeyName: "tls.key"
    # -- Existing keypair certificate name
    keypairSecretCertName: "tls.crt"
    # -- Existing truststore secret name
    truststoreCaSecretName: "cluster01-cluster-ca-cert"
    # -- Existing truststore certificate name
    truststoreCaSecretCertName: "ca.crt"
```
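Applying this configuration uses the same install command as in the installation section (release name and namespace are placeholders):

```shell
helm upgrade --install local-distributor oci://registry.axual.io/axual-charts/distributor \
  --version 5.5.2 -f values.yaml --namespace <namespace>
```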
## Metrics and Alerts

The distributor can be configured to expose metrics, which can be scraped using a PodMonitor. Add the following configuration to enable metrics:
```yaml
# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
  metrics:
    enabled: true
  # Settings for the PodMonitor
  podMonitor:
    enabled: true
    interval: 60s
    scrapeTimeout: 12s
  # Use this to enable default alerts and add custom alerts
  prometheusRule:
    enabled: true
    rules:
      # Alert definition
      - alert: MyCustomAlertRuleName
        annotations:
          message: '{{ "{{ $labels.connector }}" }} send rate has dropped to 0'
        expr: sum by (connector) (kafka_connect_sink_task_sink_record_send_rate{connector=~".*-message-distributor-.*"}) == 0
        for: 5m
        labels:
          severity: high
          callout: "false"
```
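Note that the `message` annotation is wrapped in Helm template escaping, so the rendered PrometheusRule contains the literal `{{ $labels.connector }}` placeholder, which is expanded when the alert fires.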
## Exposing and securing the Connect REST API

Strimzi restricts access to the REST API to prevent other applications in Kubernetes from modifying connector settings.

This restriction can be removed by instructing Strimzi to NOT manage connectors. Since this leaves the API open to all applications, a custom network policy can be enabled.
The following example allows a specific ingress controller in the namespace local-ingress to connect:

```yaml
# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
  # Turns off Strimzi managed connectors. KafkaConnector resources cannot be used
  useConnectorResources: false
  # Settings for the custom network policy
  networkPolicy:
    # The REST API will not be reachable if the network policy is not enabled AND useConnectorResources is true
    enabled: true
    from:
      # A pod needs to meet ALL the requirements of AT LEAST ONE entry in this list
      # to be able to access the Connect REST API.
      - podSelector:
          matchLabels:
            # The pod must be called ingress-nginx and be part of instance ingress.
            # This setup matches the labels of the nginx ingress controller
            app.kubernetes.io/instance: ingress
            app.kubernetes.io/name: ingress-nginx
        namespaceSelector:
          matchLabels:
            # The pod must run in the namespace local-ingress
            kubernetes.io/metadata.name: local-ingress
```
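To verify the policy, a request from a pod that matches the selectors above should succeed while other pods are blocked (the Connect REST API listens on port 8083; the service name is a placeholder for your installation):

```shell
# run from a pod matching the podSelector/namespaceSelector above
curl http://<connect-api-service>:8083/connectors
```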
## Defining an ingress for the Connect REST API

Strimzi does not provide an Ingress resource for reaching the Connect REST API, as the API should not be used when connectors are managed through KafkaConnector resources.

A custom Ingress can be defined to expose the service. The following example uses nginx for ingress.
```yaml
# Axual registration data, should match short names in Self Service
tenantShortName: "axual"
instanceShortName: "dta"

connect:
  # Turns off Strimzi managed connectors. KafkaConnector resources cannot be used.
  # If set to true, then the network policy must be configured for the ingress to reach the service
  useConnectorResources: false
  # Settings for the custom Ingress
  ingress:
    enabled: true
    className: nginx
    annotations:
      # The example rewrites the target URL using nginx configuration
      nginx.ingress.kubernetes.io/use-regex: "true"
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    hosts:
      - host: examples.dev
        paths:
          # Use the distributor path to be able to call http://examples.dev/distributor/xyz
          # The service is called with path /xyz because of the pattern and rewrite annotation
          - path: /distributor(/|$)(.*)
            pathType: ImplementationSpecific
    tls:
      - hosts:
          - examples.dev
        secretName: examples-dev-server-cert-secret
```
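With this Ingress in place, the REST API can be reached through the ingress controller; the `/distributor` prefix is stripped by the rewrite annotation before the request reaches the service:

```shell
# nginx rewrites /distributor/connectors to /connectors on the Connect service
curl https://examples.dev/distributor/connectors
```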
## Changing Strimzi Kafka Connect settings

The distributor Helm charts allow the following Strimzi configurations for Kafka Connect. See the Strimzi documentation for details.
```yaml
connect:
  resources:
    limits:
      cpu: 500m
      memory: 750Mi
  jmxOptions:
    # JMX options
  jvmOptions:
    # Java VM options
  livenessProbe:
    # Strimzi Connect liveness probe options
  readinessProbe:
    # Strimzi Connect readiness probe options
  tracing:
    # Strimzi Connect tracing options
  templateOverride:
    # Strimzi Connect template options
  externalConfiguration:
    # Strimzi Connect external configuration options
```
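For example, to bound the JVM heap of the Connect pods (a minimal sketch using Strimzi's jvmOptions fields; the sizes are illustrative):

```yaml
connect:
  jvmOptions:
    # initial and maximum heap size for the Kafka Connect JVM
    "-Xms": "512m"
    "-Xmx": "1g"
```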
## Changing Image Registry, Repository or Tag

To change the globally used registry, change the global config. This is useful when you use a custom registry to mirror the content of the Axual registry:
```yaml
global:
  # -- Globally override the registry to pull images from.
  imageRegistry: "local.image.registry:5000"
  # -- Globally override the list of ImagePullSecrets provided.
  imagePullSecrets:
    - name: clusterDockerSecret1
    - name: clusterDockerSecret2
```
It is also possible to set the registry, repository and tag for Connect and Init directly:

```yaml
connect:
  # Contains the image to use for connect
  image:
    # Set the specific registry to use for the connect containers
    registry: "local.image.registry:5000"
    # Set the specific repository to use for the connect containers
    repository: "axual/distributor"
    # Override the specific tag, else the chart's appVersion combined with the Strimzi version is used
    tag: "5.5.2-0.48.0"
    # Specify the docker secrets that can be used to pull the image from the registry
    pullSecrets:
      - name: clusterDockerSecret3
      - name: clusterDockerSecret4
```
## Useful Links

After installing the distributor with the desired message/schema distributor settings, you can check the connectors with the instructions below.
- Use port forwarding or the ingress to access the distributor application (for example on port 8083)
- To list the connectors, use http://localhost:{port}/connectors/ (for example http://localhost:8083/connectors/)
- To reach one of your schema or message distributors and its configuration, use the listed name: http://localhost:8083/connectors/{listed-connector-name}
- To check its status: http://localhost:8083/connectors/{your-connector-name}/status
- To check its config: http://localhost:8083/connectors/{your-connector-name}/config
- To check its tasks: http://localhost:8083/connectors/{your-connector-name}/tasks
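A quick way to run these checks with port forwarding (a sketch; the Connect API service name and namespace depend on your installation):

```shell
# forward the Connect REST API to localhost (service name is a placeholder)
kubectl port-forward service/<connect-api-service> 8083:8083 -n <namespace>

# list the installed connectors
curl http://localhost:8083/connectors/

# inspect the status of a single connector
curl http://localhost:8083/connectors/<connector-name>/status
```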