Performing the upgrade using Helm charts
Typical upgrade steps
Most of the upgrade steps can be performed without impact on your users. The deployment or upgrade of components is split into two actions:
- Configuration changes, such as added or changed configuration parameters, including the new component’s version
- Deployment of the upgraded component, using the helm upgrade command.
Step 1 - Upgrade Axual Operator to 0.6.3
- Update the Axual helm repository to download the latest charts available:
  helm repo update
- Verify the Kafka version is set to 2.8.1 in core.kafka.kafka.version
- Upgrade the Axual Operator:
  helm upgrade --install strimzi --set watchAnyNamespace=true axual-stable/axual-operator --version=0.6.3 -n kafka
This command will restart the strimzi-cluster-operator pod.
Verify the upgrade by checking the pods and their images:
- The strimzi-cluster-operator pod uses image docker.axual.io/axual/strimzi/operator:0.27.1
Once restarted, verify everything is running fine before moving to the next step.
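The image check can also be done from the command line; a sketch using kubectl, assuming the operator runs in the kafka namespace and carries the standard Strimzi label name=strimzi-cluster-operator (adjust to your installation):

```shell
# Print the image used by the Strimzi cluster operator pod
kubectl get pods -n kafka -l name=strimzi-cluster-operator \
  -o jsonpath='{.items[*].spec.containers[*].image}{"\n"}'
# The output should show docker.axual.io/axual/strimzi/operator:0.27.1
```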
Step 2 - Upgrade Kafka to 3.0.0
After Axual Operator has been upgraded, Kafka can be upgraded to version 3.0.0. The upgrade of Kafka is executed in two steps: upgrading the Kafka binaries and optionally upgrading the inter.broker.protocol.version and log.message.format.version. For both steps, a rolling restart of all the brokers is executed.
- Modify the Kafka version in your values.yaml file. Example:
  platform:
    core:
      kafka:
        kafka:
          version: 3.0.0
          ...
          config:
            inter.broker.protocol.version: "2.8"
            log.message.format.version: "2.8"
            ...
            [your-existing-config]
- Upgrade the Axual platform using the above modified values.yaml:
  helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
  Verify:
  - Zookeeper will perform a rolling restart with a new docker image (0.27.1-kafka-3.0.0).
  - Kafka brokers will perform a rolling restart with a new docker image (0.27.1-kafka-3.0.0).
  - Verify everything is running fine.
- Edit your values.yaml again and update the broker config inter.broker.protocol.version and log.message.format.version to 3.0. Changing inter.broker.protocol.version and log.message.format.version is an optional step. Example:
  platform:
    core:
      kafka:
        kafka:
          version: 3.0.0
          ...
          config:
            inter.broker.protocol.version: "3.0"
            log.message.format.version: "3.0"
            ...
            [your-existing-config]
  Apply the above changes:
  helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
  Verify:
  - Kafka brokers will perform a rolling restart with a new docker image (0.27.1-kafka-3.0.0). Once restarted, verify everything is running fine.
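To double-check which protocol and message format versions the brokers picked up, you can inspect the generated broker configuration inside a broker pod; a sketch, assuming a broker pod named kafka-0 in the kafka namespace and the Strimzi-generated properties file at /tmp/strimzi.properties (names and paths may differ in your installation):

```shell
# Print the protocol/format versions from the broker's effective configuration
kubectl exec -n kafka kafka-0 -- \
  grep -E 'inter.broker.protocol.version|log.message.format.version' \
  /tmp/strimzi.properties
```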
Step 3 - Upgrade Axual Operator to 0.7.0
- Update the Axual helm repository to download the latest charts available:
  helm repo update
- Verify the Kafka version is set to 3.0.0 in core.kafka.kafka.version
- Upgrade the Axual Operator:
  helm upgrade --install strimzi --set watchAnyNamespace=true axual-stable/axual-operator --version=0.7.0 -n kafka
This command will restart the strimzi-cluster-operator, zookeeper and kafka pods.
Verify the upgrade by checking the pods and their images:
- The strimzi-cluster-operator pod uses image docker.axual.io/axual/strimzi/operator:0.29.0
- The zookeeper pod uses image docker.axual.io/axual/strimzi/kafka:0.29.0-kafka-3.0.0
- The kafka pod uses image docker.axual.io/axual/strimzi/kafka:0.29.0-kafka-3.0.0
Once restarted, verify everything is running fine before moving to the next step.
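To check all three images in one go, you can print the pod names and images side by side; a sketch, assuming everything runs in the kafka namespace:

```shell
# Print each pod name together with the image(s) it runs
kubectl get pods -n kafka \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```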
Step 4 - Upgrade Kafka to 3.2.0
After Axual Operator has been upgraded, Kafka can be upgraded to version 3.2.0. The upgrade of Kafka is executed in two steps: upgrading the Kafka binaries and upgrading the inter.broker.protocol.version and log.message.format.version. For both steps, a rolling restart of all the brokers is executed.
| From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and does not need to be set. The log.message.format.version property for brokers and the message.format.version property for topics are deprecated and will be removed in a future release of Kafka. |
- Modify the Kafka version in your values.yaml file. Example:
  platform:
    core:
      kafka:
        kafka:
          version: 3.2.0
          ...
          config:
            ...
            [your-existing-config]
- Upgrade the Axual platform using the above modified values.yaml:
  helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
  Verify:
  - Zookeeper will perform a rolling restart with a new docker image (0.29.0-kafka-3.2.0).
  - Kafka brokers will perform a rolling restart with a new docker image (0.29.0-kafka-3.2.0).
  - Verify everything is running fine.
- Edit your values.yaml again and update the broker config inter.broker.protocol.version to 3.2. Changing inter.broker.protocol.version is an optional step. Example:
  platform:
    core:
      kafka:
        kafka:
          version: 3.2.0
          ...
          config:
            inter.broker.protocol.version: "3.2"
            ...
            [your-existing-config]
  Apply the above changes:
  helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
  Verify:
  - Kafka brokers will perform a rolling restart with a new docker image (0.29.0-kafka-3.2.0). Once restarted, verify everything is running fine.
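You can follow the rolling restart as it happens; a sketch, assuming your Kafka cluster resource is named kafka in the kafka namespace (the strimzi.io/cluster label is applied by the operator to all pods it manages):

```shell
# Watch the broker pods cycle through the rolling restart
kubectl get pods -n kafka -l strimzi.io/cluster=kafka -w
```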
Step 5 - Upgrade to Keycloak 18.0.1
| During the Keycloak upgrade the Management Service Portal will not be accessible, but applications will still be able to consume and produce messages. |
In this step we are going to upgrade Keycloak to version 18.0.1.
Step 5.1 - Run MariaDB Scripts (Optional)
If you’re running a MariaDB Server version prior to 10.3.11, you’ll need to run the following script before starting the Keycloak 18 container/pod:
-- Disable foreign key check
SET FOREIGN_KEY_CHECKS=0;
-- KC 12 - 12.1.0-add-realm-localization-table
CREATE TABLE REALM_LOCALIZATIONS
(
    REALM_ID varchar(255) not null,
    LOCALE   varchar(255) not null,
    TEXTS    longtext     not null,
    primary key (REALM_ID, LOCALE)
);
SELECT max(ORDEREXECUTED) INTO @max_order_executed FROM DATABASECHANGELOG;
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED,
  EXECTYPE, MD5SUM, DESCRIPTION, COMMENTS, TAG, LIQUIBASE, CONTEXTS, LABELS, DEPLOYMENT_ID)
VALUES ('12.1.0-add-realm-localization-table', 'keycloak', 'META-INF/jpa-changelog-12.0.0.xml', now(), @max_order_executed + 1, 'EXECUTED',
  '8:babadb686aab7b56562817e60bf0abd0', 'createTable tableName=REALM_LOCALIZATIONS; addPrimaryKey tableName=REALM_LOCALIZATIONS','', null, '4.8.0', null, null, '5988138086');
-- KC 13 - 13.0.0-increase-column-size-federated
ALTER TABLE CLIENT_SCOPE_CLIENT MODIFY COLUMN CLIENT_ID VARCHAR(255);
ALTER TABLE CLIENT_SCOPE_CLIENT MODIFY COLUMN SCOPE_ID VARCHAR(255);
SELECT max(ORDEREXECUTED) INTO @max_order_executed FROM DATABASECHANGELOG;
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED,
  EXECTYPE, MD5SUM, DESCRIPTION, COMMENTS, TAG, LIQUIBASE, CONTEXTS, LABELS, DEPLOYMENT_ID)
VALUES ('13.0.0-increase-column-size-federated', 'keycloak', 'META-INF/jpa-changelog-13.0.0.xml', now(), @max_order_executed + 1, 'EXECUTED',
  '8:9d11b619db2ae27c25853b8a37cd0dea', 'modifyDataType columnName=CLIENT_ID, tableName=CLIENT_SCOPE_CLIENT; modifyDataType columnName=SCOPE_ID, tableName=CLIENT_SCOPE_CLIENT', '', null, '4.8.0', null, null, '5988138086');
-- KC 13 - json-string-accomodation-fixed
ALTER TABLE REALM_ATTRIBUTE ADD VALUE_NEW LONGTEXT;
UPDATE REALM_ATTRIBUTE SET VALUE_NEW = VALUE;
ALTER TABLE REALM_ATTRIBUTE DROP COLUMN VALUE;
ALTER TABLE REALM_ATTRIBUTE CHANGE COLUMN VALUE_NEW VALUE LONGTEXT;
SELECT max(ORDEREXECUTED) INTO @max_order_executed FROM DATABASECHANGELOG;
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED,
  EXECTYPE, MD5SUM, DESCRIPTION, COMMENTS, TAG, LIQUIBASE, CONTEXTS, LABELS, DEPLOYMENT_ID)
VALUES ('json-string-accomodation-fixed', 'keycloak', 'META-INF/jpa-changelog-13.0.0.xml', now(), @max_order_executed + 1, 'EXECUTED',
  '8:dfbee0d6237a23ef4ccbb7a4e063c163', 'addColumn tableName=REALM_ATTRIBUTE; update tableName=REALM_ATTRIBUTE; dropColumn columnName=VALUE, tableName=REALM_ATTRIBUTE; renameColumn newColumnName=VALUE, oldColumnName=VALUE_NEW, tableName=REALM_ATTRIBUTE', '', null, '4.8.0', null, null, '5988138086');
-- KC 15 - 15.0.0-KEYCLOAK-18467
ALTER TABLE REALM_LOCALIZATIONS ADD TEXTS_NEW LONGTEXT;
UPDATE REALM_LOCALIZATIONS SET TEXTS_NEW = TEXTS;
ALTER TABLE REALM_LOCALIZATIONS DROP COLUMN TEXTS;
ALTER TABLE REALM_LOCALIZATIONS CHANGE COLUMN TEXTS_NEW TEXTS LONGTEXT NOT NULL;
SELECT max(ORDEREXECUTED) INTO @max_order_executed FROM DATABASECHANGELOG;
INSERT INTO DATABASECHANGELOG (ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED,
  EXECTYPE, MD5SUM, DESCRIPTION, COMMENTS, TAG, LIQUIBASE, CONTEXTS, LABELS, DEPLOYMENT_ID)
VALUES ('15.0.0-KEYCLOAK-18467', 'keycloak', 'META-INF/jpa-changelog-15.0.0.xml', now(), @max_order_executed + 1, 'EXECUTED',
  '8:ba8ee3b694d043f2bfc1a1079d0760d7', 'addColumn tableName=REALM_LOCALIZATIONS; update tableName=REALM_LOCALIZATIONS; dropColumn columnName=TEXTS, tableName=REALM_LOCALIZATIONS; renameColumn newColumnName=TEXTS, oldColumnName=TEXTS_NEW, tableName=REALM_LOCALIZATIONS; addNotNullConstrai...', '', null, '4.8.0', null, null, '5988138086');
-- Enable foreign key check
SET FOREIGN_KEY_CHECKS=1;
Step 5.2 - Upgrade Keycloak
| By default, the following environment variables are set:
  - name: KC_HTTPS_CERTIFICATE_FILE
    value: "/etc/x509/https/tls.crt"
  - name: KC_HTTPS_CERTIFICATE_KEY_FILE
    value: "/etc/x509/https/tls.key"
Check the reference server documentation to know which variables are available to be configured. You can also have variables that are not intended for the Keycloak setup itself. Should you need a different TLS setup, check the configuring TLS reference. |
helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
Step 5.3 - Verify Keycloak 18
After the Keycloak pod state is Running, make sure Keycloak is up and running by:
- checking the logs
- accessing the Admin Console
- accessing the Management Service Portal
Should you need more information about the Keycloak upgrade, please refer to its Upgrading Guide.
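The first check can be done from the command line; a sketch, assuming the Keycloak pod carries the label app=keycloak in the kafka namespace and is reachable at a hypothetical <keycloak-host> (labels, namespace, and host will differ per installation):

```shell
# Tail the Keycloak logs and look for a clean start
kubectl logs -n kafka -l app=keycloak --tail=100 | grep -i 'started'
# Smoke-test the Admin Console over HTTPS (certificate validation skipped)
curl -sk -o /dev/null -w '%{http_code}\n' https://<keycloak-host>/admin/
```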
Step 6 - Enable Metrics Exposer Deployment
Since 2022.2 we have introduced a new component to provide insight into your applications or streams.
Step 6.1 - Fulfill prerequisite
- Create a new client-scope in the Keycloak Admin console for your realm(s), named metrics-exposer
- Add the metrics-exposer client scope to the self-service client. This client scope is used by Metrics Exposer to specify what access privileges are being requested for the issued JWT token.
- Metrics Exposer requires its own Prometheus server, and relies on the Prometheus Operator to deploy this server. Prometheus Operator must be present in the Kubernetes cluster before Metrics Exposer can be installed. You can follow the Public Prometheus documentation for installation on a production environment.
- To install Prometheus Operator on a local environment, you can execute the following helm commands.
  - Add the Helm repository:
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && helm repo update
  - Create a monitoring namespace:
    kubectl create namespace monitoring
  - Unfortunately, there is no Helm chart that only installs the Prometheus Operator. The commonly used Helm chart to install Prometheus Operator is the kube-prometheus-stack chart. However, this chart installs not just Prometheus Operator but a full monitoring stack. It includes Prometheus, AlertManager, Grafana, and an array of metric exporters. If you want to install just the Prometheus Operator, you must disable all of these other components. The following Helm command installs just the Prometheus Operator using the kube-prometheus-stack chart version 38.0.2. Newer versions may contain additional components which need to be disabled.
    helm upgrade --install --namespace monitoring kube-prometheus-stack prometheus-community/kube-prometheus-stack \
      --set defaultRules.create=false \
      --set alertmanager.enabled=false \
      --set grafana.enabled=false \
      --set kubeApiServer.enabled=false \
      --set kubelet.enabled=false \
      --set kubeControllerManager.enabled=false \
      --set coreDns.enabled=false \
      --set kubeEtcd.enabled=false \
      --set kubeScheduler.enabled=false \
      --set kubeProxy.enabled=false \
      --set kubeStateMetrics.enabled=false \
      --set nodeExporter.enabled=false \
      --set prometheus.enabled=false
 
- Enable broker metrics (if it’s not done already) by overriding values.yaml:
  # Enable Broker Metrics
  core:
    kafka:
      kafka:
        metrics: true
  Perform the upgrade with the helm upgrade command, as follows:
  helm upgrade --install platform axual-stable/platform -f values.yaml --version=0.9.0 -n kafka
  This command will issue a rolling restart of the kafka brokers.
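Before moving on, you can confirm the prerequisites are in place; a sketch (prometheuses.monitoring.coreos.com is the standard CRD registered by Prometheus Operator):

```shell
# The Prometheus Operator pod is running in the monitoring namespace
kubectl get pods -n monitoring
# Its CRDs are registered in the cluster
kubectl get crd prometheuses.monitoring.coreos.com
```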
Step 6.2 - Configure and Enable Metrics Exposer Deployment
| Confirm that Management API 6.12.0 or higher is running before continuing |
- Gather an accessible Prometheus URL; you can even use the service-name of the Prometheus Stack. In our case we are providing values for a local installation.
- Configure Metrics Exposer with the prometheusUrls and managementHost by overriding values.yaml:
  mgmt:
    # Metrics Exposer configuration
    metricsexposer:
      axual:
        # Default Prometheus Stack URL
        prometheusUrls:
          default: http://<global.mgmt.name>-metricsexposer-prometheus:9090
        # The Base URL where the Management Stack is hosted.
        managementIngressUrl: https://<global.mgmt.hostname>
      prometheus:
        persistentVolume:
          # Storage Class Name
          storageClassName: <your-storage-class>
- Enable the Metrics Exposer deployment:
  global:
    mgmt:
      # Toggle for Metrics Exposer Deployment
      metricsexposer:
        enabled: true
- Verify that Metrics Exposer has been successfully deployed by opening the OpenAPI Spec. The above link is valid for a local deployment; if you have deployed Metrics Exposer on a different environment, the OpenAPI Spec will be available at [mgmt-host]/api/metrics/api-docs.yaml
- Confirm that Prometheus is reachable and that it is scraping the kafka brokers.
If installed locally through our suggested helm commands, Prometheus will not be reachable outside the k8s cluster; you will have to port-forward the prometheus-stack service.
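A sketch of that port-forward, assuming the Prometheus service name from the configuration shown above (adjust the namespace and service name to your release):

```shell
# Expose the Metrics Exposer Prometheus on localhost:9090
kubectl port-forward -n kafka svc/<global.mgmt.name>-metricsexposer-prometheus 9090:9090
# Then open http://localhost:9090/targets to confirm the brokers are being scraped
```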
Having problems after the upgrade?
Make sure to check the troubleshooting page, and as a last resort, the rollback instructions.
That’s it!
No other steps are required to upgrade or configure the platform components. You can read through the release blog to find out what has changed since the last release and forward it to your colleagues.