Cluster Stack Migration
| Before you can start migrating the Cluster Stack, you must have migrated all Self-Service instances. | 
| This Cluster Stack Migration needs to be executed for each Kafka cluster defined in your Axual installation. | 
Start a standalone Cluster-API
Objective
Create a new values.yaml file to start a standalone Cluster-API with axual-helm-charts alongside the existing Kafka deployment.
Execution
| To avoid downtime or customer impact, perform these steps outside office hours. | 
Standalone Cluster-API
- Define a new Chart.yaml that declares the platform:0.17.11 chart as a dependency.

```yaml
# Chart.yaml
apiVersion: v2
appVersion: "2024.1.3"
description: Cluster API with Axual Helm Charts
name: just-cluster-api
type: application
version: 0.1.0
dependencies:
  - name: "platform"
    version: "0.17.11"
    repository: "https://dev.axual.io/nexus/repository/axual-helm-stable"
```
- Create a new values.yaml, using the existing Cluster's values.yaml for the Axual Helm Charts as a reference. First, disable the components that are not the Cluster-API.

```yaml
# values.yaml
global:
  cluster:
    enabled: true
    name: [existing-cluster-name]
  strimzi:
    enabled: false
  clusterbrowse:
    enabled: false
  instance:
    enabled: false
  mgmt:
    enabled: false
```
- In the core.clusterapi section, provide the same configuration used by the existing Cluster's values.yaml.

```yaml
# values.yaml
platform:
  core:
    clusterapi:
      [existing-configuration]
```
| You will not be able to deploy it yet, because the resources from the existing Cluster stack deployment still exist. | 
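You can still validate that the new chart renders correctly without deploying it. A minimal sketch using helm template, assuming the chart lives in a local directory named just-cluster-api:

```bash
# Fetch the platform chart dependency, then render the manifests locally
# (no resources are created on the cluster)
helm dependency update ./just-cluster-api
helm template just-cluster-api ./just-cluster-api --values values.yaml
```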
Existing Cluster-API
- In the existing Cluster stack deployment, disable the Cluster-API.

```yaml
# values.yaml
global:
  cluster:
    clusterapi:
      enabled: false
```

- Upgrade the existing Cluster stack deployment to disable the Cluster-API.
- Start the new standalone Cluster-API deployment to replace the disabled Cluster-API (see the command sketch below the notes).
| Perform the two steps above in quick succession, to minimize the window in which a restart of Discovery-API or Schema-Registry could fail due to the missing Cluster-API. | 
| The Self-Service functionalities are not affected, since all clusters have already been migrated to no longer require the Cluster-API. | 
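A minimal sketch of the two commands, assuming both deployments are managed with plain Helm; the release names, chart paths, and namespace are placeholders to replace with your own:

```bash
# 1. Upgrade the existing Cluster stack with the Cluster-API disabled
#    (release name and chart path are assumptions; use your own)
helm upgrade [existing-release-name] ./[existing-cluster-chart] \
  --namespace [existing-namespace] \
  --values values.yaml

# 2. Immediately afterwards, start the standalone Cluster-API
helm upgrade --install just-cluster-api ./just-cluster-api \
  --namespace [existing-namespace] \
  --values values.yaml
```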
Migrate Axual Operator to the Strimzi Operator
Objective
Replace the Operator in the Kubernetes cluster that deploys the Kafka and Zookeeper pods.
| Be sure that the Strimzi Operator version you install matches the version used by your existing Axual Operator. | 
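To check which operator version is currently running, you can inspect the image tag of the operator deployment. A sketch, where the deployment name and namespace are assumptions to adjust to your installation:

```bash
# Print the image (and thus the version tag) of the running operator
# (deployment name and namespace are placeholders)
kubectl get deployment [existing-operator-deployment] \
  --namespace [existing-namespace] \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```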
Execution
Depending on how you deploy the Axual Operator, there are two ways to perform this migration.
Helm Upgrade Command
- Add the public Strimzi charts Helm repository.

```bash
helm repo add strimzi https://strimzi.io/charts/
```
- Upgrade the existing Axual Operator with the new values.

```bash
helm upgrade --install [existing-release-name] strimzi/strimzi-kafka-operator \
  --version=0.34.0 \
  --namespace [existing-namespace] \
  --set watchAnyNamespace=true \
  --set kafka.image.registry=registry.axual.io \
  --set kafka.image.repository=axual/streaming/strimzi \
  --set image.imagePullSecrets=[existing-docker-secret-name]
```
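Afterwards, you can confirm the new operator pod is up. A quick check, assuming the deployment keeps the Strimzi default name strimzi-cluster-operator:

```bash
# Watch the Strimzi operator pod roll out
# (the name label is the Strimzi chart default; adjust if yours differs)
kubectl get pods --namespace [existing-namespace] \
  -l name=strimzi-cluster-operator --watch
```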
Chart.yaml and Values.yaml
- Update the existing Chart.yaml to use the strimzi-kafka-operator chart as a dependency.

```yaml
# Chart.yaml
apiVersion: v2
name: "strimzi-operator"
type: "application"
version: "0.34.0"
appVersion: "2024.1.3"
description: Strimzi Operator
dependencies:
  - name: "strimzi-kafka-operator"
    version: "0.34.0"
    repository: "https://strimzi.io/charts/"
```
- Update the existing values.yaml to pull a different Kafka image from the Axual Registry.

```yaml
# values.yaml
strimzi-kafka-operator:
  watchAnyNamespace: true
  createGlobalResources: true
  # Adjust to your needs
  resources:
    limits:
      memory: 512Mi
    requests:
      memory: 512Mi
      cpu: 200m
  image:
    imagePullSecrets: [existing-docker-secret-name]
  kafka:
    image:
      registry: "registry.axual.io"
      repository: "axual/streaming/strimzi"
```
| This will cause rolling restarts of Zookeeper and Kafka pods. | 
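To follow the rolling restart, you can watch the pods of the Kafka cluster. A sketch, assuming the Strimzi-standard strimzi.io/cluster label and a placeholder namespace:

```bash
# Watch the Zookeeper and Kafka pods restart one by one
kubectl get pods --namespace [existing-namespace] \
  -l strimzi.io/cluster=[existing-cluster-name] --watch
```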
Verification
In this step, we are going to verify that the new Zookeeper and Kafka pods deployed with the Strimzi Operator work the same as the old Zookeeper and Kafka pods deployed with the Axual Operator.
You can verify this in any of the following ways:
- Confirming that your producer/consumer applications are running fine
- Logging into the Self-Service and performing a topic deployment
- Logging into the Self-Service and browsing a topic deployment
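You can also check the cluster state directly. A sketch, assuming kubectl access to the Kafka namespace and that the Kafka custom resource carries the existing cluster name:

```bash
# All Kafka and Zookeeper pods should be Running and Ready
kubectl get pods --namespace [existing-namespace] \
  -l strimzi.io/cluster=[existing-cluster-name]

# The Kafka custom resource should report Ready
kubectl get kafka [existing-cluster-name] --namespace [existing-namespace]
```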
If all checks are successful, you can proceed to the next steps.
Migrate Kafka
Execution
| Before upgrading the Kafka deployment, check the diffs with ArgoCD, with helm diff upgrade --install, or in whatever way the tool you use to deploy the charts supports. | 
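For example, with the helm-diff plugin (release name, chart path, and namespace are placeholders):

```bash
# Preview the changes the upgrade would apply; ideally the output shows
# no differences before you actually run the upgrade
helm diff upgrade --install [existing-release-name] ./[existing-cluster-chart] \
  --namespace [existing-namespace] \
  --values values.yaml
```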
- Replace the dependency in the Cluster Chart.yaml file.

```yaml
# Chart.yaml
apiVersion: v2
appVersion: "2024.1.3"
description: Kafka Stack with Streaming Charts
name: [existing-cluster-name]
type: application
version: 0.1.0
dependencies:
  - name: "axual-streaming"
    version: "0.3.4"
    repository: "https://dev.axual.io/nexus/repository/axual-helm-stable"
```
- Copy the existing platform.core.kafka section from the existing values.yaml file so that you can replace some keys.
- Disable the components except kafka.

```yaml
# values.yaml
global:
  rest-proxy:
    enabled: false
  apicurio:
    enabled: false
  axual-schema-registry:
    enabled: false
```
- Add the Cluster name to the axual-streaming.kafka.fullnameOverride key. Be sure to use the right cluster name to avoid any Kafka restart.

```yaml
# values.yaml
axual-streaming:
  kafka:
    fullnameOverride: [existing-cluster-name]
```
- Add the Internal Listeners Configuration to the axual-streaming.kafka.kafka key.

```yaml
# values.yaml
axual-streaming:
  kafka:
    kafka:
      # Kafka internal listener configuration
      internalListenerTlsEnabled: "true"
      internalListenerAuthenticationType: tls
```
- Replace the platform.core.kafka key with the axual-streaming.kafka key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      [existing-content]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    [existing-content]
```
- Replace the global.cluster.kafka.nodes key with the axual-streaming.kafka.kafka.replicas key.

```yaml
# existing_values.yaml
global:
  cluster:
    kafka:
      nodes: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    kafka:
      replicas: [existing-value]
```
- Replace the global.cluster.zookeeper.nodes key with the axual-streaming.kafka.zookeeper.replicas key.

```yaml
# existing_values.yaml
global:
  cluster:
    zookeeper:
      nodes: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    zookeeper:
      replicas: [existing-value]
```
- Replace the platform.core.kafka.kafka.rackEnabled key with the axual-streaming.kafka.kafka.rack.enabled key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      kafka:
        rackEnabled: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    kafka:
      rack:
        enabled: [existing-value]
```
- Replace the platform.core.kafka.kafka.rackTopologyKey key with the axual-streaming.kafka.kafka.rack.topologyKey key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      kafka:
        rackTopologyKey: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    kafka:
      rack:
        topologyKey: [existing-value]
```
- Likewise, map the platform.core.kafka.kafka.rackEnabled value to the axual-streaming.kafka.zookeeper.rack.enabled key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      kafka:
        rackEnabled: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    zookeeper:
      rack:
        enabled: [existing-value]
```
- Likewise, map the platform.core.kafka.kafka.rackTopologyKey value to the axual-streaming.kafka.zookeeper.rack.topologyKey key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      kafka:
        rackTopologyKey: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    zookeeper:
      rack:
        topologyKey: [existing-value]
```
- Replace the platform.core.kafka.kafka.superUsers key with the axual-streaming.kafka.kafka.authorization.superUsers key.

```yaml
# existing_values.yaml
platform:
  core:
    kafka:
      kafka:
        superUsers: [existing-value]
```

```yaml
# new_values.yaml
axual-streaming:
  kafka:
    kafka:
      authorization:
        superUsers: [existing-value]
```
- In the Kafka security section, provide the correct values for clientsCaCertGeneration, clientsCaGeneration, clusterCaCertGeneration, and clusterCaGeneration (see the lookup sketch after this list).

```yaml
# values.yaml
axual-streaming:
  kafka:
    kafka:
      security:
        clientsCaCertGeneration: "0"
        clientsCaGeneration: "0"
        clusterCaCertGeneration: "0"
        clusterCaGeneration: "0"
```
- If the configuration fully matches, there will be no restart of the cluster. If there are differences, keep adapting the values.yaml until all the diffs are gone.
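The CA generation values must match what is currently stored on the cluster. A sketch of how you might look them up, assuming the CA secrets follow the standard Strimzi naming convention; verify the secret and annotation names in your own namespace:

```bash
# Read the current generation annotations from the Strimzi CA secrets
kubectl get secret [existing-cluster-name]-cluster-ca-cert \
  --namespace [existing-namespace] \
  -o jsonpath='{.metadata.annotations.strimzi\.io/ca-cert-generation}'

kubectl get secret [existing-cluster-name]-clients-ca-cert \
  --namespace [existing-namespace] \
  -o jsonpath='{.metadata.annotations.strimzi\.io/ca-cert-generation}'
```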
Verification
In this step, we are going to verify that the existing Zookeeper and Kafka pods deployed with the Strimzi Operator and the Streaming Charts work as expected.
You can verify this in any of the following ways:
- Confirming that your producer/consumer applications are running fine
- Logging into the Self-Service and performing a topic deployment
- Logging into the Self-Service and browsing a topic deployment
If all checks are successful, you can proceed to the next steps.