Configure Strimzi
This page outlines the configuration values for Strimzi deployed via the Axual Streaming Helm chart.
About Strimzi Operator
Strimzi is an open-source Kubernetes operator that deploys and monitors a Kafka cluster (Zookeeper nodes and Kafka brokers).
Strimzi Configuration
Here you can find some basic configuration and examples that you can use to build your own values.yaml file to deploy Strimzi.
For more details and advanced configuration, please refer to the Strimzi Documentation.
Deployment name
The deployment name affects the names that Strimzi gives to the resources (pods, secrets, etc.) it deploys for you. By default, the deployment name is the name of the Helm chart (e.g. axual-streaming).
You have two options to change the deployment name:
- With fullnameOverride, the deployment name will be completely overridden. In the following values.yaml example, it will just be local.

kafka:
  fullnameOverride: "local"
  nameOverride: ""
- With nameOverride, the deployment name will have the form <chart-name>-<nameOverride>. In the previous example, it would have been axual-streaming-local (see the sketch below).
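For example, a minimal sketch using nameOverride (the value local is only illustrative):

kafka:
  # Illustrative value; with the axual-streaming chart this results in the name axual-streaming-local
  nameOverride: "local"
  fullnameOverride: ""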
Enable Kafka and Kafka version
You can enable Kafka and use a specific Kafka version in the following way. Be sure that the Kafka version you are trying to run is supported by the Strimzi version you are running. You can check compatibility on the Strimzi supported versions page.
kafka:
  kafka:
    enabled: true
    version: 3.4.0
Rack topology key
Rack awareness is a feature that allows Kafka to spread replicas of a partition across different racks or zones to ensure durability and availability.
You can enable this feature by setting rack.enabled to true. After doing that, you can specify the key that will be used to determine the rack configuration with rack.topologyKey. Please refer to the Strimzi documentation for more details.
kafka:
  kafka:
    rack:
      enabled: true
      topologyKey: topology.kubernetes.io/zone
Listeners
Listeners define how Kafka clients connect to the Kafka brokers. Kafka listeners are essentially the network endpoints that Kafka brokers use to listen for incoming connections from Kafka clients.
Each listener is configured to use a specific network protocol and port, along with other settings.
There are internal listeners, which can be used to connect to Kafka from inside the same Kubernetes cluster, and external listeners, which can be used to connect to Kafka from outside the Kubernetes cluster.
Internal Listener
The Axual Streaming chart has an internal listener on port 9093.
TLS on that listener can be enabled using the value kafka.internalListenerTlsEnabled (default false).
The internal listener authentication is defined by kafka.internalListenerAuthenticationType.
If left empty, no authentication is enabled.
Supported values are tls, scram-sha-512, oauth and custom.
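For example, a minimal sketch that enables TLS and TLS client authentication on the internal listener, using only the two values described above:

kafka:
  kafka:
    # Enable TLS on the internal listener (port 9093) and require TLS client authentication
    internalListenerTlsEnabled: "true"
    internalListenerAuthenticationType: "tls"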
External listeners
To make the Kafka cluster accessible from outside the Kubernetes cluster, you need to configure the externalListener.
- externalListenerType defines the type of external listener. You can use NodePort or LoadBalancer.
- externalListenerTlsEnabled is the TLS feature toggle.
- externalListenerAuthenticationType defines the type of authentication wanted. If left empty, no authentication is enabled. Supported values are tls, scram-sha-512, oauth, custom.
- In externalListenerConfiguration you need to define:
  - bootstrap: the external bootstrap service that clients use to initially connect to the Kafka cluster
  - the list of brokers: for each of them, the advertisedHost, advertisedPort, nodePort and annotations
- You can have different external listener types, such as NodePort, LoadBalancer, Ingress and Route. You can refer to the Strimzi documentation to check the differences between the listener types.
Below you can find an example with one Kafka node, an internal listener using TLS, and an external listener of type NodePort, also with TLS enabled.
kafka:
  kafka:
    replicas: 1
    internalListenerTlsEnabled: "true"
    internalListenerAuthenticationType: "tls"
    externalListenerTlsEnabled: "true"
    externalListenerAuthenticationType: "tls"
    externalListenerType: nodeport
    externalListenerConfiguration:
      bootstrap:
        annotations: {}
        alternativeNames:
          - <alternative-name-1>
      brokers:
        - broker: 0
          advertisedHost: <advertised-host-broker-0>
          advertisedPort: <advertised-port-broker-0>
          nodePort: <node-port-broker-0>
          annotations: <annotation-broker-0>
Defining storage
You can configure and define how data is stored for the Kafka cluster. You can set parameters such as the type of storage, size, and whether to delete the storage claim when the cluster is removed. You can find an example of a Kafka storage configuration below. For further information about Kafka storage, please refer to the Strimzi documentation.
kafka:
  kafka:
    storageType: jbod # acceptable values are jbod, ephemeral, persistent
    volumes:
      - id: 0
        type: persistent-claim
        size: 1Gi
        deleteClaim: false
Deploying broker replicas
You can change the number of replicas of your Kafka cluster by changing the value of replicas.
When changing the number of replicas, remember to add the new brokers in externalListenerConfiguration.
For the storage, instead, you don't need to add new volumes.
In the following code snippet, we see an example with 3 replicas.
kafka:
  kafka:
    replicas: 3
    externalListenerType: nodeport
    externalListenerConfiguration:
      bootstrap:
        annotations: {}
        alternativeNames:
          - <alternative-name-1>
      brokers:
        - broker: 0
          advertisedHost: <advertised-host-broker-0>
          advertisedPort: <advertised-port-broker-0>
          nodePort: <node-port-broker-0>
          annotations: <annotation-broker-0>
        - broker: 1
          advertisedHost: <advertised-host-broker-1>
          advertisedPort: <advertised-port-broker-1>
          nodePort: <node-port-broker-1>
          annotations: <annotation-broker-1>
        - broker: 2
          advertisedHost: <advertised-host-broker-2>
          advertisedPort: <advertised-port-broker-2>
          nodePort: <node-port-broker-2>
          annotations: <annotation-broker-2>
    storageType: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 1Gi
        deleteClaim: false
Other listeners
You can define other listeners in the following way. Examples of other listeners are interClusterListener, scramsha512listener and oauthListener. Please refer to the Strimzi documentation for more details.
kafka:
  kafka:
    scramsha512listener:
      enabled: true
      listenerType: nodeport
      listenerConfiguration:
        bootstrap:
          annotations: {}
          alternativeNames:
            - <alternative-name>
        brokers:
          - broker: 0
            advertisedHost: <advertised host>
            advertisedPort: <advertised port>
            nodePort: <node port>
            annotations: <annotations>
Kafka configs
You can define Kafka configuration in the following way. There are a couple of interesting points in the following example:
- We are setting default.replication.factor to 3 and min.insync.replicas to 2; these values make sense for the previous deployment with 3 replicas.
- We are setting principal.builder.class to io.axual.security.auth.SslPrincipalBuilder. This setting is possible because we are using Axual Kafka images; for further information, you can refer to Why should I enable principal chain based ACL authentication?.
kafka:
  kafka:
    config:
      inter.broker.protocol.version: "3.4"
      # Enable custom principal builder class
      principal.builder.class: io.axual.security.auth.SslPrincipalBuilder
      unclean.leader.election.enable: false
      background.threads: 16
      num.replica.fetchers: 4
      replica.lag.time.max.ms: 20000
      message.max.bytes: 1000012
      replica.fetch.max.bytes: 1048576
      replica.socket.receive.buffer.bytes: 65536
      offsets.retention.minutes: 20160
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      transaction.state.log.num.partitions: 3
      default.replication.factor: 3
      min.insync.replicas: 2
Superusers definition
Superusers are user principals that have the ability to perform any action regardless of the permission grants, such as creating and deleting topics, viewing and modifying consumer group offsets, etc.
You can define the superusers in the following way. In this case, we are using the SSL chain to identify the principal; if you don't want to use the standard Kafka way to identify a principal, you should use only the CN.
The certificates below must be issued by the trusted CA (Certificate Authority) that the Kafka brokers are configured to trust.
kafka:
  kafka:
    authorization:
      superUsers:
        - "[0] CN=Root CA, [1] CN=Intermediate CA, [2] CN=Demo Superuser,O=Axual B.V.,L=Utrecht,ST=Utrecht,C=NL"
        - "[0] CN=Root CA, [1] CN=Intermediate CA, [2] CN=local-kafka,O=io.strimzi"
KafkaExporter
kafkaExporter is an optional component that, when enabled, exports various metrics about Kafka, such as consumer group lag, topic and partition sizes, and more, to a monitoring system. It can be enabled in the following way.
kafka:
  kafka:
    kafkaExporter:
      enabled: true
generateCertificateAuthority option for Strimzi
The generateCertificateAuthority configuration determines whether Strimzi will generate its own CA certificates. If you set this to true, Strimzi will create a new Certificate Authority when you deploy a Kafka cluster. This CA will then sign the certificates used for internal communication between Kafka brokers, as well as for communication between clients and the Kafka cluster. It can be enabled in the following way:
kafka:
  kafka:
    generateCertificateAuthority: true
Security
The security section specifies the Certificate Authority (CA) details (private key and certificate, in PEM format) that are used for handling TLS encryption and authentication within the Kafka cluster.
- clientsCaCert: The public certificate of the CA that issues client certificates. Kafka brokers use this to validate the certificates presented by clients and ensure they are signed by a trusted CA.
- clientsCa: The private key for the CA that issues client certificates. It's used by the CA to sign certificates for clients.
- clusterCaCert: The public certificate of the CA that issues certificates for the Kafka brokers (the cluster CA). This is used by clients and other brokers to verify the identity of a broker.
- clusterCa: The private key for the cluster CA, used to sign the certificates for the Kafka brokers.
It can be configured in the following way:
kafka:
  kafka:
    security:
      clientsCaCert: <clients-ca-cert>
      clientsCa: <clients-ca>
      clusterCaCert: <cluster-ca-certificate>
      clusterCa: <cluster-certificate>
      extraCaCerts: {}
      clientsCaCertGeneration: "0"
      clientsCaGeneration: "0"
      clusterCaCertGeneration: "0"
      clusterCaGeneration: "0"
Kafka PodMonitoring
If you are using Prometheus, Kafka PodMonitoring can be enabled by adding the following in your values.yaml file.
kafka:
  kafka:
    metrics: true
If you want to change the scrapeTimeout and the interval, you can add the following.
kafka:
  kafka:
    podMonitor:
      scrapeTimeout: "20s"
      interval: "30s"
      labels: {}
Zookeeper configuration
The zookeeper configuration is simpler than the kafka one.
You can decide how many replicas to deploy, and you can configure the storage.
For the storage, in particular, you can define the size and the deleteClaim. For Zookeeper, the storage type is not configurable and is set to persistent-claim.
For further information about Zookeeper storage, please refer to the Strimzi documentation.
You can find an example of the zookeeper configuration below.
kafka:
  zookeeper:
    replicas: 3
    storage:
      size: 1Gi
      deleteClaim: false
Alerting
The deployment includes a PrometheusRule, which provides alerts for Kafka and Zookeeper.
Set of alerts
Kafka:
- KafkaRunningOutOfSpace (Kafka is running out of free disk space)
- UnderReplicatedPartitions (Kafka has under-replicated partitions)
- AbnormalControllerState (Kafka has abnormal controller state)
- OfflinePartitions (Kafka has offline partitions)
- UnderMinIsrPartitionCount (Kafka has under min ISR partitions)
- OfflineLogDirectoryCount (Kafka offline log directories)
- ScrapeProblem (Prometheus unable to scrape metrics from cluster)
- ClusterOperatorContainerDown (Cluster Operator down)
- KafkaBrokerContainersDown (All kafka containers down or in CrashLoopBackOff status)
- KafkaContainerRestartedInTheLast5Minutes (One or more Kafka containers restarted too often)
Zookeeper:
- AvgRequestLatency (Zookeeper average request latency)
- OutstandingRequests (Zookeeper outstanding requests)
- ZookeeperRunningOutOfSpace (Zookeeper is running out of free disk space)
- ZookeeperContainerRestartedInTheLast5Minutes (One or more Zookeeper containers were restarted too often)
- ZookeeperContainersDown (All zookeeper containers in the Zookeeper pods down or in CrashLoopBackOff status)
Configuration
To enable alerting, the property kafka.strimziAlerts.enabled should be set to true.
kafka:
  strimziAlerts:
    enabled: true
    labels: {}
You can provide custom labels via kafka.strimziAlerts.labels; they will be included in each provided alert.
Those labels can be used to add custom logic. For example, a severity: LEVEL label can be used to filter alerts and direct them to different channels of your choice.
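For example, a minimal sketch that adds a severity label to every provided alert (the label value critical is just illustrative):

kafka:
  strimziAlerts:
    enabled: true
    labels:
      # Illustrative label used to route alerts to a specific channel
      severity: critical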
Additionally, you can add labels to the PrometheusRule resource based on the configuration provided in kafka.prometheusRule.labels.
kafka:
  prometheusRule:
    labels: {}