Deploying Axual Platform
Introduction
Axual Platform consists of multiple components wrapped in Docker containers. As with any Docker container, deployment involves some complexity: passing the correct environment variables and setting up networking and persistence (port mapping, volume mapping).
To overcome this complexity, we created a tool called Axual Deploy that bootstraps the configuration and deploys the complete platform locally.
The instructions below are intended for a local deployment of Axual Platform. If you want to deploy the platform in a different deployment scenario, please continue reading here.
Local deployment
Prerequisites
Before starting, a few things need to be set up:
- Make sure you assign at least 3 GB of memory to your Docker daemon (a quick way to verify this is shown after this list).
- Log in to Axual’s Docker repository with docker login docker.axual.io. Use the credentials provided by Axual.
- Download the example setup of the Axual platform for local deployment. Use the same Docker credentials.
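To verify how much memory the Docker daemon actually has available, you can query it with docker info; this prints the daemon’s total memory in bytes (a minimal check, assuming a standard Docker CLI):
# Should report at least 3 GB (3221225472 bytes)
docker info --format '{{.MemTotal}}'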
Configuring your local machine
The example setup of Axual Deploy contains the following:
- Deployment scripts (scripts that start/stop the platform services)
- Platform configuration (cluster, instance and tenant configuration)
- A security folder with dummy keystores for components, to enable SSL communication
- The initial-setup.sh script
- The axual.sh script
Before running the platform for the first time, you must run the initial-setup.sh script to prepare your machine.
You will be prompted for the sudo password.
./initial-setup.sh -run
You should see the following output (or similar):
Adding alias 192.168.99.100 to interface lo0
Writing the host file at /Users/yourname/host.sh
Creating local data directory at /Users/yourname/axual_local_data
Creating an SQL file with initial statements to be inserted in the local database...
Setup complete.
###############
Start the platform with './axual.sh [-v] start' (optional argument '-v' stands for verbose)
You will be prompted to insert the DB root password.
After the platform starts, login into the self-service portal using the following credentials:
URL: https://192.168.99.100:8095/
Username: 'demo@axualdemo.nl'
Password: 'password'
Use the following configuration for your application to connect to this Axual platform deployment:
ENDPOINT=https://192.168.99.100:29000
ENVIRONMENT=local
TENANT=demo
KEYSTORE_LOCATION=/Users/yourname/Downloads/axual-platform-local-5.1/security/application/axual-local-application.keystore.jks
TRUSTSTORE_LOCATION=/Users/yourname/Downloads/axual-platform-local-5.1/security/application/axual-local-application.keystore.jks
All keystore, truststore and key passwords are 'notsecret'
The application ID and version are chosen by the developer.
You can run './initial-setup.sh -run' again to display this screen
Customisation is possible; run the script with -help instead of -run for more information.
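As a quick sanity check after the setup completes, you can verify that the loopback alias was actually created (this assumes the macOS lo0 interface mentioned in the output above):
# The alias added by initial-setup.sh should appear on the loopback interface
ifconfig lo0 | grep 192.168.99.100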
Running the platform
To start the platform, run the following command:
./axual.sh start
If everything went well, you should see output similar to the following in the console:
Configuring cluster services for node localhost in cluster LOCAL
Preparing exhibitor: Done
Starting exhibitor: Done
Waiting for exhibitor on 192.168.99.100
Connected to exhibitor on 192.168.99.100
Preparing broker: Done
Starting broker: Done
Waiting for broker on 192.168.99.100
Connected to broker on 192.168.99.100
Preparing cluster-api: Done
Starting cluster-api: Done
Preparing distributor: Done
Starting distributor: Done
Deploying topic _company-local-environments: Done
Deploying topic _company-local-schemas: Done
Configuring instance services for company-local in cluster LOCAL
Stopping distributors with prefix company-local-message-distributor-from-: Done
Stopping distributors with prefix company-local-offset-distributor-from-: Done
Stopping distributors with prefix company-local-schema-distributor-: Done
Preparing company-local-sr-master: Done
Starting company-local-sr-master: Done
Preparing company-local-sr-slave: Done
Starting company-local-sr-slave: Done
Running copy-config-company-local-discovery-api: Done
Preparing company-local-discovery-api: Done
Done
Starting company-local-discovery-api: Done
Preparing company-local-instance-api: Done
Cluster servers are https://platform.local:9080
Done
Starting company-local-instance-api: Done
Configuring mgmt services for node localhost in cluster LOCAL
Running clean-config-prometheus: /config/prometheus
Done
Generating prometheus targets...
Generating prometheus configuration...
Running create-config-prometheus: Done
Running copy-config-prometheus: Done
Starting prometheus: Done
Provisioning grafana dashboards...
Running copy-config-grafana: Done
To stop the platform, run the following command:
./axual.sh stop
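After changing the platform configuration, you can typically apply it by stopping and starting the platform again:
./axual.sh stop && ./axual.sh start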
Troubleshooting
Troubleshooting the platform deployment can be done at multiple levels. The following scenarios can be referenced when attempting to troubleshoot your own deployment:
Axual Deploy scripts completed with errors
In this case, adding the -v flag results in verbose output on the console, which makes the debugging process easier. The command will look like:
./axual.sh -v start
Axual Deploy scripts completed successfully but some components are unreachable
Run the following command to check if all the applications have started and the port mappings are correct:
docker ps
If the platform started correctly, you should see output similar to the following:
CONTAINER ID   IMAGE                              COMMAND                  CREATED             STATUS              PORTS                                                                                                                       NAMES
a7c7b082b7fa   axual/clusterapi:1.0.1             "/home/kafka/start-j…"   About a minute ago  Up About a minute   192.168.99.100:9080->9080/tcp, 192.168.99.100:4001->8081/tcp                                                               cluster-api
046e11d87426   axual/instance-application:1.0.0   "/home/kafka/start-j…"   3 minutes ago       Up 3 minutes        192.168.99.100:9181->9181/tcp, 192.168.99.100:31000->31000/tcp                                                             axual-local-instance-api
711704a1c458   axual/discovery-api:2.0.2          "/home/kafka/start-j…"   17 minutes ago      Up 17 minutes       192.168.99.100:29000->8080/tcp, 192.168.99.100:30000->8081/tcp, 192.168.99.100:443->8443/tcp                               axual-local-discovery-api
6a7443c81309   axual/schemaregistry:3.5.0         "/home/kafka/start-s…"   18 minutes ago      Up 17 minutes       192.168.99.100:24000->24000/tcp, 192.168.99.100:25000->25000/tcp, 192.168.99.100:27000->27000/tcp, 192.168.99.100:26000->8088/tcp   axual-local-sr-slave
106eb3fc17e2   axual/schemaregistry:3.5.0         "/home/kafka/start-s…"   18 minutes ago      Up 18 minutes       127.0.0.1:20000->20000/tcp, 127.0.0.1:21000->21000/tcp, 192.168.99.100:23000->23000/tcp, 192.168.99.100:22000->8088/tcp    axual-local-sr-master
a6bb2f788cdd   axual/broker:5.0.2                 "/home/kafka/start-b…"   22 minutes ago      Up 22 minutes       192.168.99.100:9001->9001/tcp, 192.168.99.100:9092->9092/tcp, 192.168.99.100:9094-9096->9094-9096/tcp, 192.168.99.100:4003->8088/tcp   broker
3c015ffebbc6   axual/exhibitor:2.0.1              "/home/kafka/start-e…"   23 minutes ago      Up 23 minutes       192.168.99.100:2181->2181/tcp, 192.168.99.100:2888->2888/tcp, 192.168.99.100:3888->3888/tcp, 192.168.99.100:8082->8082/tcp   exhibitor
Please pay attention to the following columns:
- STATUS - all components should have an Up status (e.g. Up 10 minutes)
- PORTS - all components should be listening on IP 192.168.99.100 on the assigned ports
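To focus on just these columns, you can narrow the docker ps output with a Go-template format string (a standard Docker CLI option):
# Show only the name, status and port mappings of each container
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'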
If all components are in a correct state but are still not accessible, make sure to run the ./initial-setup.sh -run script before running ./axual.sh start.
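If a component is running but still misbehaving, its container logs are usually the next place to look; for example, for the discovery API container listed above:
# Follow the most recent log lines of a single component
docker logs --tail 100 -f axual-local-discovery-api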
If there are any issues or questions regarding Axual Deploy, you can always contact the support team using the Support Page.
Monitoring
By default, monitoring is disabled in the local deployment, as it isn’t very insightful in that particular setup: the cluster consists of a single node and carries little or no load at all times.
The platform components export metrics at all times. If you do want to enable monitoring in your local cluster setup, change the local-config/clusters/local/nodes.sh file as follows (see the sketch after this list):
- Append the following at the end of the NODE1_MGMT_SERVICES=… line: ,prometheus,grafana
- On Linux machines only, append prometheus-node-exporter at the end of the MONITORING_SERVICES=… line. This enables exporting of host-machine metrics, like CPU, RAM and IO usage.
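As a sketch, the relevant lines in nodes.sh would end up looking similar to the following, assuming both lists use the same comma-separated format (the … stands for the existing entries, which are left untouched):
NODE1_MGMT_SERVICES=…,prometheus,grafana
MONITORING_SERVICES=…,prometheus-node-exporter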
By default, Grafana is available at http://localhost:3000 and the credentials are admin:admin, but they can be customised in platform-local/clusters/local/mgmt-grafana.sh.
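Once monitoring is enabled and the platform has started, a quick way to check that Grafana is up is its health endpoint (a standard Grafana API path):
# Should return a small JSON document containing "database": "ok"
curl -s http://localhost:3000/api/health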