Axual Connect

About Axual Connect

On this page we describe how to maintain Connect, check performance, debug Connect issues, examine Connect settings, and install new plugins.

An Axual Connect instance is needed for each tenant and instance: at least one Connect instance is installed on each server for each tenant/instance combination.

Where is Connect Configured? In our CLI version, the standard configuration is found in the directory

<Configuration Base>/<TENANT NAME>/instances/<INSTANCE NAME>/axual_connect.sh.

In our Helm charts, Connect is configured inside the Axual Connect Helm chart (axual-stable/axual-connect).

The most important configurations are:

The Connect host port:

CONNECT_HOST_HTTPS_PORT=host_name:37060

The keystore passwords are defined inside this file. When updating a customer certificate, remember to update the passwords if needed.

CONNECT_SSL_CLIENT_KEYSTORE_PASSWORD=notsecret
CONNECT_SSL_CLIENT_KEY_PASSWORD=notsecret
CONNECT_SSL_SERVER_KEYSTORE_PASSWORD=notsecret
CONNECT_SSL_SERVER_KEY_PASSWORD=notsecret

The Vault role and secret:

CONNECT_VAULT_ROLE_ID='207bae5e-f10d-4507-bdab-b60bdb4047cf'
CONNECT_VAULT_SECRET_ID='0edf7fb8-cb46-e94c-ff04-486bea1d83a4'
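As a quick sanity check, this role/secret pair can be exchanged for a token against Vault's AppRole login endpoint. A minimal sketch; the Vault address is an assumption, and the parsing helper below just extracts the token from a login response:

```shell
# Hypothetical Vault address; role and secret IDs are the values configured above.
VAULT_ADDR="${VAULT_ADDR:-https://vault.example.internal:8200}"
ROLE_ID='207bae5e-f10d-4507-bdab-b60bdb4047cf'
SECRET_ID='0edf7fb8-cb46-e94c-ff04-486bea1d83a4'

# Extract the client token from an AppRole login response body.
token_from_response() { jq -r '.auth.client_token'; }

# Perform the login (uncomment on a host that can reach Vault):
# curl -sk --request POST \
#   --data "{\"role_id\":\"$ROLE_ID\",\"secret_id\":\"$SECRET_ID\"}" \
#   "$VAULT_ADDR/v1/auth/approle/login" | token_from_response
```

A valid pair returns a client token; a 403 usually means the role/secret configured for Connect is stale.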

Plugins Location:

HELM

It is possible to download the plugins from a URL location on service restart, or to place them in a Kubernetes Persistent Volume Claim.

Persistent Volume Claim

The Persistent Volume claim is defined in the Helm charts as connect-plugin-pv.

Downloadable Plugins
downloadPlugins:
  enabled: true
  image: docker.axual.io/axual/connect
  tag: 2.2.4
  artificateBaseUrl: "http://artifacts.axual.cloud.s3-website.eu-central-1.amazonaws.com"
  connectPluginsFile: "Axual-Connect/axual-connect-plugins-1.0.0.tgz"
  commonResourcesFile: "Axual-Connect/axual-connect-common-resources-1.0.0.tgz"

Store the plugins under the artificateBaseUrl location. Each Connect instance downloads plugins from this location on startup. The directory structure is one directory per plugin type, containing the jar files that plugin needs.
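The layout described above can be sketched as follows; the plugin directory and jar names are illustrative, not prescribed:

```shell
# One directory per plugin type; each directory holds the jars that plugin needs.
mkdir -p plugins/snowflake plugins/jdbc
touch plugins/snowflake/snowflake-kafka-connector-1.7.1.jar  # illustrative jar names
touch plugins/jdbc/kafka-connect-jdbc-5.0.4.jar
find plugins -type f | sort
```

Publishing this tree under the artificateBaseUrl location gives each worker a predictable path per plugin type.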

CLI
  • When running platform-deploy locally, start the Vault with the vault.sh script. This will update your role and secret for Connect.

To set up the Vault, please refer to the documentation: Enable Axual-Connect / Deploying Axual-Connect via Helm.

The plugin location is configured with the environment variable:

CONNECT_PLUGINS_DIR_PATH

General

Please note the Connect Host Port as you will need it for other actions below.
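To confirm you have the right port, you can hit the worker's REST root endpoint, which answers with its version and Kafka cluster id. A sketch, assuming connectAdmin credentials and a self-signed certificate (hence -k); the helper just parses the response:

```shell
# Parse the worker version out of the REST root response.
worker_version() { jq -r '.version'; }

# Against a live worker (host:port from CONNECT_HOST_HTTPS_PORT; uncomment to run):
# curl -sk -u connectAdmin:<password> https://host:port/ | worker_version
```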

Connect Instances must be registered in the Management API to make them accessible. You can do that on the Instances → Edit page of Self Service.

How To Install A New Connector (Platform Deploy/Docker)

At times your users will request a new version of a connector. Install the connector first into a test environment and then into production.

  1. Download the Connector jar from Maven. (If possible, download a jar with dependencies.)

  2. In Maven, check the dependencies and download each one.

  3. Some Connectors such as JDBC connectors have optional dependencies. For a JDBC driver, you will need to download your database-specific driver.

  4. Upload the jar files to the Connect server(s). (In Helm, upload them to the shared volume configured in the chart.)

  5. Copy the files to the staging plugins directory of the client. You need to know the instance, the tenant, and the plugin name (for example, snowflake). Delete older jar files that are no longer needed: if you upgrade from snowflake 1.5.1 to 1.7.1, you can delete the 1.5.1 jar file.

    Example

    cp /tmp/myjarfiles.jar /<connect-config-folder>/<tenant>-<instance>/plugins/snowflake/.
  6. Restart the connector. Do this one machine at a time, and remember to check that each restart was successful (there is a success check below).

    /<cloud-config>/platform-deploy/axual.sh restart client rabo-ota axual-connect

Of course, this can also be done inside ArgoCD if that is configured.

  7. Check for success. (First wait a few minutes.) Run two commands: check that the new jar was loaded, and check that the connectors have started.

    To run these commands, you need to know the port the service runs on; see CONNECT_HOST_HTTPS_PORT at the top of this article (you only need the port number). Run the commands against the server you are working on.

    Check if the connector was loaded:

    curl -k -u connectAdmin:<<CONNECT ADMIN PASSWORD>> \
    https://host:port/connector-plugins -s | jq -S

    Output example

    [
      {
        "class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "type": "sink",
        "version": "1.7.1"
      },
      {
        "class": "io.axual.connect.plugins.http.HttpSinkConnector",
        "type": "sink",
        "version": "1.0.0"
      },
      {
        "class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "type": "sink",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "type": "source",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.s3.S3SinkConnector",
        "type": "sink",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
        "type": "source",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.axual.utils.LogSinkConnector",
        "type": "sink",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.axual.utils.LogSourceConnector",
        "type": "source",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "type": "sink",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "type": "source",
        "version": "2.3.1"
      }
    ]

    Check that the version of your uploaded connector matches the version in the output of this command.

    The second check is that the connector is running after the restart. In the following command, replace <connector-application-name> with the name of your connector application (for example, snowflake_sink_2-test):

    curl -k -u connectAdmin:<<connectAdmin password>> \
    https://host:port/connectors/<connector-application-name>/status -s | jq -S

    Example output

    {
        "connector": {
            "state": "RUNNING",
            "worker_id": "vision-clients-1:37060"
        },
        "name": "connector-application-name",
        "tasks": [
            {
                "id": 0,
                "state": "RUNNING",
                "worker_id": "vision-clients-1:37060"
            }
        ],
        "type": "sink"
    }
  8. Clean up any files that you uploaded to temp directories. You are now finished.

  9. Repeat for each Connect Server.
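The jar-loaded check above (the /connector-plugins call) can be scripted rather than scanned by eye. A hedged helper using jq; the class-name pattern is just an example:

```shell
# Print the version of every plugin whose class name matches a pattern.
plugin_version() { jq -r --arg pat "$1" '.[] | select(.class | test($pat)) | .version'; }

# Against a live worker (credentials and host:port as above; uncomment to run):
# curl -k -u connectAdmin:<password> https://host:port/connector-plugins -s | plugin_version Snowflake
```

Comparing the printed version against the jar you uploaded replaces the manual scan of the JSON array.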

How To Install A New Connector (Helm)

At times a customer will request a new version of a connector. Install the connector first into the client’s test environment and then into production.

  1. Download the Connector jar from Maven. (If possible, download a jar with dependencies.)

  2. In Maven check the dependencies. These will also need to be downloaded.

  3. Some Connectors such as JDBC connectors have optional dependencies. For a JDBC driver, you will need to download your database-specific driver.

  4. Upload (scp) the jar files to the downloadPlugins.artificateBaseUrl location. For example, to install the Snowflake sink version 1.7.1, create a Snowflake directory at the artificateBaseUrl location and place the files inside it.

    scp myjarfile chris@myserver:.

    Delete older jar files that are no longer needed: if you upgrade from snowflake 1.5.1 to 1.7.1, you can delete the 1.5.1 jar file.

  5. Restart the connector by deleting its pod. Do this one pod at a time, and remember to check that your work was successful.

    kubectl delete pod <connect pod> -n kafka
  6. Check for success. (It may take a few minutes for connect to start.) Wait until Connect successfully restarts. Run two commands. Check that the new jar was loaded. Check that the connectors have started.

    To run these commands, you need to know the port the service runs on; see CONNECT_HOST_HTTPS_PORT at the top of this article (you only need the port number). Run the commands against the server you are working on.

    Check if the connector was loaded:

    curl -k https://<server name>:<server port>/connector-plugins -s | jq -S

    Output example

    [
      {
        "class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "type": "sink",
        "version": "1.7.1"
      },
      {
        "class": "io.axual.connect.plugins.http.HttpSinkConnector",
        "type": "sink",
        "version": "1.0.0"
      },
      {
        "class": "io.confluent.connect.jdbc.JdbcSinkConnector",
        "type": "sink",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "type": "source",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.s3.S3SinkConnector",
        "type": "sink",
        "version": "5.0.4"
      },
      {
        "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
        "type": "source",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.axual.utils.LogSinkConnector",
        "type": "sink",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.axual.utils.LogSourceConnector",
        "type": "source",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "type": "sink",
        "version": "2.3.1"
      },
      {
        "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "type": "source",
        "version": "2.3.1"
      }
    ]

    Check that the version of your uploaded connector matches the version in the output of this command.

    The second check is that the connector is running after the restart. In the following command, replace <connector-application-name> with the name of your connector application:

    curl -k https://<server name>:<port>/connectors/<connector-application-name>/status -s | jq -S

    Example output

    {
        "connector": {
            "state": "RUNNING",
            "worker_id": "vision-clients-1:37060"
        },
        "name": "connector-application-name",
        "tasks": [
            {
                "id": 0,
                "state": "RUNNING",
                "worker_id": "vision-clients-1:37060"
            }
        ],
        "type": "sink"
    }
  7. Clean up the files that you uploaded to the temp directories. You are now finished.
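After deleting the pod in step 5, you can wait for the replacement to become Ready before running the checks. A sketch that builds the command; the label selector is an assumption, so adjust it to your chart:

```shell
NS=kafka
SELECTOR="app.kubernetes.io/name=axual-connect"  # assumed label; check your chart
CMD="kubectl wait --for=condition=Ready pod -l $SELECTOR -n $NS --timeout=300s"
echo "$CMD"   # run this on a machine with cluster access
```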

Debugging

Viewing the logging is described on a separate page: debugging/elasticKibana.adoc

The connectors can be debugged using JMX (one of the few places where JMX is useful). Please read the JMX section of the following article: debugging/logLevels.adoc

The CLI log files can be found in the /appl/logs/<environment> directory. System.log shows the Connect application log, and the connector logs are found inside the connectors subdirectory. In a Kubernetes environment, the container log files are stored under /var/log/containers on each node on which a Connect application is running.

If an application seems stuck, then you can examine thread dumps with the JMX debugger (debugging/logLevels.adoc).

Memory leaks can be found by taking a heap dump of the Java process. Use this step only as a last resort. (Inside the Docker image, use jps to find the Java process id and jmap to create a heap dump; analyze the dump with the Eclipse Memory Analyzer.)
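A sketch of the heap-dump steps; jps and jmap ship with the JDK (not a bare JRE), and the dump path and pid below are placeholders:

```shell
# Build the jmap invocation for a live heap dump; run it inside the container.
DUMP_FILE=/tmp/connect-heap.hprof
jmap_cmd() { printf 'jmap -dump:live,format=b,file=%s %s\n' "$DUMP_FILE" "$1"; }

# 1) find the Connect java pid with:  jps -l
# 2) run the command this prints, substituting the real pid:
jmap_cmd "<pid>"
# 3) copy the .hprof file out and open it in the Eclipse Memory Analyzer
```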

Process Flow (Find Problem Source)

The process flow of starting and stopping Connect tasks by the customer can be found in the image below:

Process Flow

Notice that:

The Customer Application and its configuration are stored in the MariaDB. For UI display problems, check the MariaDB.

The Connect Instances must be registered in the UI; this registration is also stored in the MariaDB. Check its contents if there are strange display errors.

Start/Stop commands are sent from the UI to the Management API, then to the Axual Operator, and finally to the Connect Instances. If there is an issue, it is most probably inside Connect, but check that the Management API and the Operator are also running. For connection issues, check the Axual Operator logs.

If you cannot upload a new certificate to the UI, then check that the Vault is active. If you see any certificate errors in the logs, then:

  • Try to delete and then re-add the certificate. If this fails, check that the Vault is running and that the secret and role are set up correctly (see the setup above).

  • The certificate must be saved with Linux/Unix line endings (LF), not Windows line endings (CRLF). Do not alter the certificate with standard Windows tools, as this can corrupt it.
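Line-ending problems can be detected and fixed from a shell before uploading. A minimal sketch; the demo file below stands in for a real certificate:

```shell
# Demo certificate file written with Windows (CRLF) line endings.
printf -- '-----BEGIN CERTIFICATE-----\r\nMIIB...\r\n-----END CERTIFICATE-----\r\n' > demo-cert.pem

# If the file contains carriage returns, strip them in place.
if grep -q "$(printf '\r')" demo-cert.pem; then
  tr -d '\r' < demo-cert.pem > demo-cert.pem.unix && mv demo-cert.pem.unix demo-cert.pem
fi
```

After the fix, the grep finds no carriage returns and the certificate can be uploaded safely.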

Helpful API commands

Check connectors installed:

curl -k https://<server>:<port>/connector-plugins -s | jq -S

Example above.

Check the status of a connect Application:

curl -k https://<server>:<port>/connectors/snowflake_sink_2-test/status -s | jq -S

Example above.

List Connectors

curl -k https://<server>:<port>/connectors -s | jq -S

Example Output:

[
  "app9329_liander_p5_source_2-prd",
  "app7674_p5_snowflakesink-prd"
]

List and Status

It is possible to combine the list and status commands. The following command lists all connectors together with their status:

curl -k https://<server>:<port>/connectors?expand=status -s | jq -S

Example:

{
    "app7674_p5_snowflakesink-prd": {
        "status": {
            "connector": {
                "state": "RUNNING",
                "worker_id": "vision-clients-2:37031"
            },
            "name": "connector-application-name",
            "tasks": [
            {
                "id": 0,
                "state": "RUNNING",
                "worker_id": "vision-clients-2:37031"
            }
            ],
            "type": "sink"
        }
    },
    "app9329_liander_p5_source_2-prd": {
        "status": {
            "connector": {
                "state": "RUNNING",
                "worker_id": "vision-clients-3:37031"
            },
            "name": "second-connector-application-name",
            "tasks": [
            {
                "id": 0,
                "state": "RUNNING",
                "worker_id": "vision-clients-3:37031"
            }
            ],
            "type": "source"
        }
    }
}
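With the ?expand=status output, jq can flag anything that is not RUNNING. A hedged helper; the field paths match the example output above:

```shell
# List the names of connectors whose connector state is not RUNNING.
failed_connectors() {
  jq -r 'to_entries[] | select(.value.status.connector.state != "RUNNING") | .key'
}

# Against a live worker (uncomment to run):
# curl -k https://<server>:<port>/connectors?expand=status -s | failed_connectors
```

An empty result means every connector is healthy; any printed name needs attention.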

Common Issues

  • I cannot create a Connect Application in Self Service: check that Connect is running and configured.

  • I have started Connect, but when I try to upload a certificate I always get an error: check that the Vault has been started.