Preparing for the upgrade

General prerequisites

Before you upgrade to 2020.2, please make sure:

  • you are running Axual version 2020.1. Use axual.sh status to check the current version, as shown in the example below this list.

  • you are performing the deployment with platform-deploy 2020.2.
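
For example, a quick check from the platform-deploy directory; the exact output format is an assumption and may differ per installation:

    # Check the currently deployed Axual version
    ./axual.sh status
    # The reported version should be 2020.1 before you proceed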

Below you will find specific prerequisites for services that are upgraded or introduced in release 2020.2.

Preparing for deployment of Connect

For deploying Connect, host names and port numbers need to be configured. Moreover, certificates are required for inter-service communication and for securing the Connect endpoint(s).

Determining Deployment Considerations for Connect

How you plan to deploy Connect determines the host names and port numbers used in the configuration. Connect is typically deployed in a cluster of multiple (e.g. 3) nodes or VMs. The sizing of the nodes depends largely on the number of connectors that will be running simultaneously and how much load they put on the VM.

Sizing requirements

Find below the advised sizing requirements for Connect worker nodes.

                  Non-production   Production
CPU               2                4
RAM               4 GB             8 GB
Heap allocation   1.5 GB           3 GB
Cluster size      1                3

The above requirements are sufficient to run 2 connectors. As more connectors are added, memory requirements will grow first, followed by CPU. Keep an eye on the monitoring dashboards to determine when you need to scale up or out.
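
Assuming Connect is started through the standard Kafka scripts, the heap allocation from the table above can be applied via the KAFKA_HEAP_OPTS environment variable; a minimal sketch for a production worker:

    # Production sizing: 3 GB heap for the Connect worker (use 1.5 GB for non-production)
    export KAFKA_HEAP_OPTS="-Xms3g -Xmx3g"
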
Networking requirements

On every node where Connect is deployed, it listens on multiple ports, shown as the CONNECT_ configurations below. These are used later in the upgrade/deployment process.

Port configuration              Used for
CONNECT_HOST_HTTP_PORT          The port used for non-TLS REST API calls (debugging purposes only)
CONNECT_HOST_HTTPS_PORT         The port used for TLS REST API calls
CONNECT_PROMETHEUS_AGENT_PORT   The port on which the Prometheus agent exposes its metrics
CONNECT_HOST_JMX_PORT           The port used for incoming JMX connections

When determining the ports, please make sure there are no active services already listening on those ports on the VMs where you want to run Connect.
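
As a sketch, the port assignments could look as follows; the port numbers are placeholders, not prescribed defaults. The loop afterwards verifies that nothing is listening on them yet:

    # Placeholder port assignments for a Connect node
    CONNECT_HOST_HTTP_PORT=8083
    CONNECT_HOST_HTTPS_PORT=8084
    CONNECT_PROMETHEUS_AGENT_PORT=9404
    CONNECT_HOST_JMX_PORT=9999

    # Verify that none of the chosen ports is already in use on the VM
    for port in 8083 8084 9404 9999; do
      ss -tln | grep -q ":$port " && echo "port $port is already in use"
    done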

Preparing certificates for Connect

Create two keystores per instance for Connect: one for the HTTPS REST endpoint and one for the client connection to the brokers.

Use the following names:

  • <tenant>-<instance>-axual-connect.server.keystore.jks

  • <tenant>-<instance>-axual-connect.client.keystore.jks

Safely store the keystores and remember the key and keystore passwords for the next step(s).
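
A minimal sketch using keytool, assuming a tenant "demo" and instance "dta"; the distinguished name, passwords, and validity are placeholders, and your organization's certificate signing process may differ:

    # Create the server keystore (placeholder values)
    keytool -genkeypair -alias connect-server \
      -keyalg RSA -keysize 2048 -validity 365 \
      -dname "CN=connect.demo.example.com" \
      -keystore demo-dta-axual-connect.server.keystore.jks \
      -storepass changeit -keypass changeit
    # Repeat for demo-dta-axual-connect.client.keystore.jks, used for the broker client connection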

Preparing for deployment of Vault

Vault is a new service that is used by Management API to store, and by Connect to retrieve, a connector’s private key.

Determining deployment considerations for Vault

How Vault is deployed largely depends on the environment in which it is used. In a development environment we suggest a single-node setup, whereas in production a 3-node setup is advised.

On every node where Vault is deployed, it listens on multiple ports, shown as the VAULT_ configurations below. These are used later in the upgrade/deployment process.

Port configuration   Default   Used for
VAULT_API_PORT       8200      Port over which client-to-Vault communication happens. Non-leader nodes redirect to this port on the leader node.
VAULT_CLUSTER_PORT   8201      Port over which Vault-to-Vault communication happens within a cluster.

Vault can be deployed alongside other services on an existing VM. When determining the ports, please make sure there are no active services listening on the same port(s).
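
Assuming the defaults from the table above, a quick check that both Vault ports are free on a node:

    # Default Vault port configuration
    VAULT_API_PORT=8200
    VAULT_CLUSTER_PORT=8201
    # Verify neither port is already in use
    ss -tln | grep -E ':(8200|8201) ' && echo "a Vault port is already in use"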

Preparing certificates for Vault

Vault needs the following preparation with regard to security:

  1. Create a server certificate with the load balancer address and the Vault cluster IPs as SANs. Keep the following files for configuration later:

    1. mgmt-vault.server.certificate.key: Private key

    2. mgmt-vault.server.certificate.crt: Complete Chain in PEM

    3. mgmt-vault-ca.cer: Intermediate and Root CA of mgmt-api (if an intermediate is applicable), as well as the Root CA of the certificate used for Vault itself

  2. Request an intelligent load balancer that has the Vault nodes as members, using the same load balancer address as in the certificate above.

    1. The external load balancer should poll the sys/health endpoint to detect the active node and always route traffic to it.

    2. To do so, the load balancer should be configured to make an HTTPS request to the following URL on each node in the cluster: https://[Vault Node URL]:[VAULT_API_PORT]/v1/sys/health (see the sketch below this list).

    3. The active Vault node will respond with a 200, while standby nodes will return a 4xx response.
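
A sketch of the health probe described above, using a placeholder node address and the default API port:

    # Probe a Vault node; the active node returns 200, standby nodes a 4xx (429 by default)
    curl -sk -o /dev/null -w "%{http_code}\n" \
      https://vault-node-1.example.com:8200/v1/sys/health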

Please keep the file names as close to the suggested names as possible; this will help ensure a smooth upgrade process.
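
A minimal sketch of generating the private key and a CSR with the suggested file names; the subject and SAN entries are placeholders, -addext requires OpenSSL 1.1.1 or later, and the signing step depends on your CA:

    # Generate the key and a CSR with the LB address and cluster IPs as SANs (placeholder values)
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout mgmt-vault.server.certificate.key \
      -out mgmt-vault.server.csr \
      -subj "/CN=vault.example.com" \
      -addext "subjectAltName=DNS:vault.example.com,IP:10.0.0.11,IP:10.0.0.12,IP:10.0.0.13"
    # After signing by your CA, store the complete chain as mgmt-vault.server.certificate.crt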

Preparing for deployment of Operation Manager

Operation Manager is a service used to proxy requests from Management API to Connect.

Determining deployment considerations for Operation Manager

It is advised to deploy Operation Manager on the same machine as Management API. It is typically deployed on a single node.

On the node where Operation Manager is deployed, it listens on a single port, as indicated below. This port is used later in the upgrade/deployment process.

Port configuration       Used for
OPERATION_MANAGER_PORT   General API communication (HTTPS)

Operation Manager can be deployed alongside other services on an existing VM. When determining the ports, please make sure there are no active services listening on the same port(s).

Preparing certificates for Operation Manager

  1. Create a keystore for Operation Manager for the HTTPS REST endpoint.

    • Use the following name: operation-manager.server.keystore.jks

  2. Request a load balancer that has the Operation Manager nodes as members, pointing to Operation Manager running on port [OPERATION_MANAGER_PORT]. The load balancer should perform a TCP health check on the Operation Manager service.

Make sure to add the load balancer DNS name to the SAN of the server keystore certificate, as in the sketch below.
Safely store the keystore and remember the key and keystore passwords for the next step(s).
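
A sketch for creating the Operation Manager keystore with the load balancer DNS in the SAN; the host names and passwords are placeholders:

    # Create the Operation Manager server keystore (placeholder values)
    keytool -genkeypair -alias operation-manager \
      -keyalg RSA -keysize 2048 -validity 365 \
      -dname "CN=operation-manager.example.com" \
      -ext "SAN=dns:opman-lb.example.com" \
      -keystore operation-manager.server.keystore.jks \
      -storepass changeit -keypass changeit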

Next step: Performing the upgrade

This concludes the preparation for the 2020.2 upgrade. You may now continue with Performing the Upgrade.