fluentd
Each Axual service generates log files. Axual Connect generates one system log file and another for each connector application running inside of it.
Axual logs can be viewed either by opening the log files directly or by using a log collector such as fluentd or Logstash to export the logs to a logging database/viewing tool such as Elasticsearch/Kibana. This article explores the use of fluentd to forward log files. The rest of this article assumes that you are forwarding logs to Elasticsearch, although there are alternatives such as Datadog and Splunk.
We have documented some example Elasticsearch installations/setups.
Fluentd is the service used to move logs from each service to a destination such as Elasticsearch. We have two types of fluentd installations:
- We use Docker images to move log files from the file system to Elasticsearch/Kibana. The Docker images are installed on the same server as the applications generating the log files. Please view Docker/fluentd for more information on the Docker fluentd installation.
- In Kubernetes, Axual internally uses the Kokuwa Helm chart to install a fluentd DaemonSet on our Kubernetes cluster. A DaemonSet is a setup whereby one pod runs on each Kubernetes node; that pod has access to the logs generated on its node. Be careful though: the fluentd installation will transfer all log files generated in the standard Kubernetes manner to Elasticsearch. Please view the install directions at Kokuwa for more detailed information on the Helm chart installation.
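For reference, a minimal sketch of installing the Kokuwa chart directly with Helm could look like the following. The repository URL, release name and namespace are assumptions here; Axual's own setup installs the chart through helmfile, as shown later in this article.

helm repo add kokuwa https://kokuwaio.github.io/helm-charts
helm repo update
helm install fluentd kokuwa/fluentd-elasticsearch --namespace monitoring --create-namespace -f values.yaml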
Docker containers in Kubernetes write logs to the standard output (stdout) and standard error (stderr) streams. Docker redirects these streams to a logging driver, which Kubernetes configures to write to a file in JSON format. The Docker logs are written to a standard Docker location on the disk of the host on which Docker runs. Kubernetes exposes the log files to users via the kubectl logs command. Users can also get logs from a previous instantiation of a container by setting the --previous flag of this command to true, so they can retrieve container logs even if the container crashed and was restarted.
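For example, for a hypothetical pod connect-xyz in the kafka namespace, the current and previous container logs can be retrieved as follows:

kubectl logs connect-xyz -n kafka
kubectl logs connect-xyz -n kafka --previous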
The fluentd Helm chart installation moves these Docker log files to Elasticsearch. Again, take care to also push over any extra log information that is not logged in the standard way.
Although fluentd is normally used in conjunction with Elasticsearch, the logs can be sent to any type of data store. Kafka, for example, is another data store to which fluentd can transfer the logs.
Axual uses Logback in its applications. Logback can also be configured to transfer log output to other locations such as Kafka.
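As a sketch of the fluentd-to-Kafka option: assuming the fluent-plugin-kafka output plugin is installed, and with placeholder broker and topic names, a match section could look like this:

<match kubernetes.**>
  @type kafka2
  # Placeholder broker list and topic; adjust to your Kafka cluster
  brokers {kafka-broker}:9092
  default_topic {log-topic}
  <format>
    @type json
  </format>
  <buffer topic>
    flush_interval 5s
  </buffer>
</match>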
Configure Elastic Store for FluentD
Docker Image Installation
Each Docker install needs the following environment variables to set up the Elasticsearch integration:
"OUTPUT_HOST={hostname}", "OUTPUT_PORT={port}", "ELASTICSEARCH_SCHEME=https", "ELASTICSEARCH_USERNAME={fluentd-elastic-username}", "ELASTICSEARCH_PASSWORD={fluentd-elastic-password}", "FLUENTD_PATH=/appl/kafka/fluentd", "LOGS_PATH=/appl/logs", "LOGSTASH_PREFIX=legacy-infrastructure-production" OutputHost:
OutputPort:
OUTPUT_HOST = the Elasticsearch server host.
OUTPUT_PORT = the port on which to communicate with Elasticsearch.
ELASTICSEARCH_SCHEME = https or http.
ELASTICSEARCH_USERNAME = the Elasticsearch user for authentication.
ELASTICSEARCH_PASSWORD = the Elasticsearch password for authentication.
FLUENTD_PATH = the location of the fluentd configuration. If you have time, view the fluentd.conf file inside this directory.
LOGS_PATH = send all files under this directory to Elasticsearch.
LOGSTASH_PREFIX = the Elasticsearch index to load the log files into.
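For illustration, a container could be started with these variables roughly as follows. The image name, hostname, credentials and volume mounts below are placeholders; the exact run command depends on your environment.

docker run -d --name fluentd \
  -e "OUTPUT_HOST=elastic.example.com" \
  -e "OUTPUT_PORT=9200" \
  -e "ELASTICSEARCH_SCHEME=https" \
  -e "ELASTICSEARCH_USERNAME=fluentd-elastic-user" \
  -e "ELASTICSEARCH_PASSWORD=********" \
  -e "FLUENTD_PATH=/appl/kafka/fluentd" \
  -e "LOGS_PATH=/appl/logs" \
  -e "LOGSTASH_PREFIX=legacy-infrastructure-production" \
  -v /appl/logs:/appl/logs:ro \
  -v /appl/kafka/fluentd:/appl/kafka/fluentd \
  {fluentd-image}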
Helm Charts / Kubernetes Installation
We use the Kokuwa Fluentd Helm chart. As explained above, this installs a fluentd log exporter on each Kubernetes node.
Configuration:
Below is an example Kokuwa configuration:
elasticsearch:
  auth:
    enabled: true
  hosts:
    - {hostname}:{port}
  logstash:
    prefix: {index-name}
  scheme: https
  suppressTypeName: true
<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key time
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>
elasticsearch.hosts - the Elasticsearch server name and port.
elasticsearch.logstash.prefix - the Elasticsearch index (prefix) to upload to.
source → path - the logs to upload. By default Docker sends all standard output logs to /var/log/containers/*.log.
The Elasticsearch login information is found in the secrets file:
elasticsearch:
  auth:
    user: {username}
    password: {password}
Here you should place the encrypted username and password.
There are more settings in this file; please read the entire file for more information.
The fluentd version can be found in the helmfile.yaml inside of the base directory:
- name: fluentd
  namespace: monitoring
  createNamespace: true
  chart: kokuwa/fluentd-elasticsearch
  atomic: false
  labels:
    app: fluentd
  version: 11.9.0
  values:
    - apps/fluentd/values-{{ .Environment.Name }}.yaml
  secrets:
    - apps/fluentd/secrets-{{ .Environment.Name }}.yaml

At the time of writing the latest chart version is 13.5.0, so this can be upgraded.
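To check whether a newer chart version is available, something like the following can be used (assuming the Kokuwa repository has already been added to Helm under the alias kokuwa):

helm repo update
helm search repo kokuwa/fluentd-elasticsearch --versions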
Problems
Fluentd sometimes stops moving the logs for the Connect applications to Elasticsearch. When this happens, please restart fluentd on the node that has stopped. On Docker, just restart the container:
docker restart fluentd

On Kubernetes, please kill the pod:
- Find the node that the connector pod is running on:
kubectl get pod connect-XXX -n kafka -o wide
- Find the corresponding fluentd pod running on that node:
kubectl get pods -n monitoring -o wide
- Kill the malfunctioning fluentd daemon pod:
kubectl delete pod XXXXX -n monitoring
It is a good idea to monitor fluentd using a monitoring tool.
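For example, fluentd's built-in monitor_agent input can expose basic plugin and buffer metrics over HTTP for a monitoring tool to scrape; the port below is the plugin's default and can be changed:

<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>

The metrics are then available as JSON at http://{node}:24220/api/plugins.json.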