Infrastructure Requirements
Kubernetes
The Axual Platform started out on Virtual Machines, but moved to Kubernetes/OpenShift for the many operational benefits that platform offers when running mission-critical applications and services.
For more information, see the official Kubernetes documentation and the Axual Resilience section.
OpenShift
The Axual Kafka platform is fully compatible and runs mission-critical workloads on OpenShift. The Axual Platform is available on the Red Hat Ecosystem Catalog.
Most examples in the Axual documentation are written for Kubernetes, but they work similarly on OpenShift.
Replace the `kubectl` command in the examples with `oc`.
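For example, listing the platform pods looks like this on Kubernetes and OpenShift respectively (the `axual` namespace name is an assumption for illustration):

```shell
# Kubernetes: list pods in the (hypothetical) "axual" namespace
kubectl get pods --namespace axual

# OpenShift: the same operation with the oc CLI
oc get pods --namespace axual
```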
Requirements
These requirements should be considered for the infrastructure design of a new installation, to be refined further in collaboration with the responsible infrastructure team.
Kubernetes infrastructure requirements
| Category | Requirement | Remarks |
| --- | --- | --- |
| Kubernetes Version | > 1.24 | EKS, AKS, GKE, Rancher and OpenShift are supported. |
| Kubernetes Nodes | For a Production environment: 3 dedicated nodes for Kafka brokers and ZooKeeper. | For a POC, 4 nodes of 4 CPU, 16 GB should be enough. Further reading on Kafka cluster setups: Kafka Architecture DC Setups & Availability. |
| Persistent Volumes | | |
| Kubernetes Permissions | | |
| Network Connectivity | | |
| DNS | | |
| Load Balancers or Ingress | | Nginx Ingress Controller and OpenShift Routes are supported. |
| Certificates | | |
| MySQL Database Service | In total, 2 databases (schemas) are used per cluster: the Self-Service DB and the Keycloak DB. A 3rd is required when the Apicurio schema registry is used. | For a POC, MariaDB charts can be deployed automatically as part of the platform charts deployment. |
| (Optional) HashiCorp Vault | If a HashiCorp Vault is present in the infrastructure, it can be used; alternatively, one can be deployed as part of the Axual Platform. Two logical Vaults are used: 1) Kafka streaming-layer credentials and 2) Connector credentials for Kafka Connect. At least one physical Vault is required. | If no Vault is available, HashiCorp Vault should be deployed on the cluster, separately from the Axual Platform charts. |
| Identity Provider | An Identity Provider, for example Azure Active Directory, depending on the infra/cloud solution available, should be integrated with the Axual Platform. | |
| Helm Chart Repository | The Axual Platform is distributed via Helm charts. The Helm chart repository should be reachable from the Kubernetes cluster or from a deployment tool. | |
| Image Registry | | |
| (Optional) GitOps Facilities | Axual prefers to work in a GitOps way, where all infrastructure and configuration is stored in Git and applied using tools like ArgoCD or Terraform. Git repositories for the installation configurations are a minimum requirement; deployment tooling is great to have. | |
| (Optional) Sensitive Configuration Storage | The platform configuration contains many sensitive values (private keys, DB passwords, keystore passwords, etc.). All configuration is stored in a Git repository, so these values need to be encrypted or stored in another location. Helm Secrets + Mozilla SOPS is supported, as well as Sealed Secrets, 1Password Secrets, etc. | Can be skipped for a POC. |
| (Optional) Monitoring & Alerting | The Axual Platform exposes metrics that are ready for Prometheus to scrape. Integrating these metrics into the central Prometheus, Grafana and Alertmanager stack of the operations team is preferred for all parties involved. ServiceMonitors, PodMonitors and PrometheusRules (for alerting) are readily available. | If no Prometheus stack is available, integration with the customer's alerting solution will be required. |
| (Optional) Centralized Logging & Tracing | Integration with any centralized logging or distributed tracing solution would greatly benefit the observability of the platform. All components are OTEL-compliant and can write logs in JSON format. | |
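A quick pre-flight check against some of these requirements can be sketched as follows; the chart repository URL below is a placeholder, not the real Axual repository:

```shell
# Check the Kubernetes server version (must be > 1.24)
kubectl version

# Verify the Helm chart repository is reachable from the deployment host
# (the example.com URL is a placeholder for the actual Axual chart repository)
helm repo add axual https://charts.example.com
helm repo update
helm search repo axual
```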
DNS Names
The DNS names are indicative only. Please change according to your requirements.
| Component | Layer | DNS Name | IP Address (TBA) | Port | Protocol | Exposed By |
| --- | --- | --- | --- | --- | --- | --- |
| Broker 1 | Streaming | esp-broker-0.company.org | `<Custom LB IP>` | 9094, 9095 | mTLS+TCP, SASL | Loadbalancer |
| Broker 2 | Streaming | esp-broker-1.company.org | `<Custom LB IP>` | 9094, 9095 | mTLS+TCP, SASL | Loadbalancer |
| Broker 3 | Streaming | esp-broker-2.company.org | `<Custom LB IP>` | 9094, 9095 | mTLS+TCP, SASL | Loadbalancer |
| Broker bootstrap | Streaming | esp-broker-bootstrap.company.org | `<Custom LB IP>` | 9094, 9095 | mTLS+TCP, SASL | Loadbalancer |
| API Gateway | Governance | esp-gateway.company.org | `<Ingress LB IP>` | 443 | HTTPS | Ingress |
| Schema Registry | Streaming | esp-schemas.company.org | `<Ingress LB IP>` | 443 | HTTPS | Ingress |
| Rest Proxy | Streaming | esp-restproxy.company.org | `<Custom LB IP>` | 443 | mTLS+HTTPS | Loadbalancer |
| (Optional) Broker 1 Internal | Streaming | esp-broker-0-internal.company.org | `<Custom LB IP>` | 9096, 9095 | mTLS+TCP, SASL | Loadbalancer |
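Once the load balancers and DNS records are in place, resolution and TLS reachability can be verified from a client network; the hostname below follows the indicative names above:

```shell
# Resolve the indicative bootstrap DNS name to its load balancer IP
dig +short esp-broker-bootstrap.company.org

# Check that the mTLS listener answers on port 9094
openssl s_client -connect esp-broker-bootstrap.company.org:9094 </dev/null
```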
Certificates and Private Key Infrastructure
When using a Service Mesh, internal PKI and internal certificates are no longer required, as components can work without their own mTLS solutions and still be secure.
| PKI | Purpose |
| --- | --- |
| Enterprise PKI | Trusted by the platform for external interface components. It is used to sign server and client certificates (see below) for external-facing components, as well as application certificates which will connect to ESP. |
| PKIESP (Internal) | A custom CA for internal certificates, preferably facilitated via cert-manager. Used to sign the private listener certificates of the Kafka brokers (ports 9091, 9093). The private key of this CA is also installed in the Operator. Not exposed to externally connecting applications. Included in the platform package. |
Certificates
Certificates are required for service-to-service mTLS communication within the Kubernetes cluster and for external communication from clients (producers and consumers) to externally exposed components like Kafka, Schema Registry and Rest Proxy.
Use cert-manager to automate internal certificates; any company-wide cert-manager should be integrated if possible.
In the table below, replace `<prefix>` with either:

- `<tenant-short-name>-<instance-name>`, for example `customer-dta-platform-manager`
  - Replace `<tenant-short-name>` with the value configured in `.Values.global.tenant.shortName`
  - Replace `<instance-name>` with the value configured in `.Values.global.instance.name`
- the name of the Chart, for example "axual" becomes `axual-platform-manager`
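As an illustration, the first form of the prefix is simply the two configured values joined with a dash; the values below are hypothetical, matching the example above:

```shell
# Hypothetical values, as configured in .Values.global.tenant.shortName
# and .Values.global.instance.name
tenant_short_name="customer"
instance_name="dta"
component="platform-manager"

# First form of the prefix: <tenant-short-name>-<instance-name>
prefix="${tenant_short_name}-${instance_name}"
echo "${prefix}-${component}"   # → customer-dta-platform-manager
```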
| Subject | Issuer | Component | Subject Alternative Names | Certificate type |
| --- | --- | --- | --- | --- |
| CN=internal-server-only | PKIESP | | | Server |
| CN=esp-company-server | Enterprise PKI | | | Server |
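The role of an internal CA such as PKIESP, signing server certificates that internal components then trust, can be sketched with openssl; all file names and key sizes here are illustrative, and in practice cert-manager would automate these steps:

```shell
# Create an illustrative internal CA (stands in for PKIESP)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=PKIESP-Internal-CA" \
  -keyout ca.key -out ca.crt

# Create a key and CSR for an internal server certificate
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=internal-server-only" \
  -keyout server.key -out server.csr

# Sign the server certificate with the internal CA
openssl x509 -req -in server.csr -days 365 \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt

# A component trusting ca.crt can now verify the server certificate
openssl verify -CAfile ca.crt server.crt   # → server.crt: OK
```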