Installing Connect Plugins

Installing plugins

Connect-plugins are currently provided to the Helm installation of the platform as follows:

  • The connect-plugins are hosted on a webserver

  • An init-job downloads the plugins and common JARs

  • The plugins are stored in a volume

  • Axual-Connect pods mount the volume

The plugins and common JARs are downloaded on every restart of the pod.
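
If you want to confirm that the init-job downloaded the expected files, you can list the contents of the mounted plugin volume inside a running Connect pod. The label selector and mount path below are assumptions; they depend on your chart values:

    # Example only: the pod label and the plugin mount path depend on your installation
    kubectl get pods -n kafka -l app.kubernetes.io/name=axual-connect
    kubectl exec -n kafka <connect-pod> -- ls -l <plugin-mount-path>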

To install a new connect-plugin:

  1. Add the new Connect-Plugin JARs to the FileServer
    There is one archive file for connect-plugins, and one for the common-jars.

    1. Ensure the plugins are in a .tgz file with all plugins directly in the base directory (see the packaging sketch after this procedure).

plugins.tgz
  couchbase-kafka-connect-couchbase-4.1.7/
  debezium-connector-mongodb/
  kafka-connect-cassandra-3.0.1-2.5.0-all.jar
  kafka-connect-cosmos-1.14.2-jar-with-dependencies.jar
  README.md
  2. Confirm that Axual Connect has the correct configuration to download Connect-Plugins from your FileServer:

    downloadPlugins:
      artifactsBaseUrl: "[URL_OF_YOUR_FILE_SERVER]"
      connectPluginsFile: "[PATH_TO_YOUR_PLUGINS_TARBALL]"
      commonResourcesFile: "[PATH_TO_YOUR_COMMON_RESOURCES_TARBALL]"
  3. Issue a helm upgrade command for Axual Connect:

    helm upgrade --install -n kafka axual-connect [AXUAL_CONNECT_CHART] -f [YOUR_CUSTOM_VALUES]
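
As an illustration of step 1, a plugins.tgz with all plugins at the base of the archive can be built and published like this. The plugin names are taken from the listing above; the upload command is only an example, since publishing depends on your FileServer:

    # Build the archive so that plugin directories and JARs sit at the archive root
    cd /path/to/plugins
    tar -czf plugins.tgz couchbase-kafka-connect-couchbase-4.1.7/ debezium-connector-mongodb/ kafka-connect-cassandra-3.0.1-2.5.0-all.jar
    tar -tzf plugins.tgz   # verify the contents
    # Publish it at the location referenced by your downloadPlugins configuration (example: HTTP PUT)
    curl -T plugins.tgz "[URL_OF_YOUR_FILE_SERVER]/[PATH_TO_YOUR_PLUGINS_TARBALL]"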

Make the new Connect-Plugins available in the Self-Service portal

After Axual Connect has been restarted, confirm that the plugins are available by checking the /connector-plugins endpoint of any Connect-Node’s API.
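
For example, you can port-forward one of the Connect pods and query the endpoint. The pod name is a placeholder, and the port assumes the default Kafka Connect REST API port of 8083:

    kubectl port-forward -n kafka <connect-pod> 8083:8083
    curl -s localhost:8083/connector-plugins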

Although the plugins are available in the Connect-Cluster, they may not yet be available in the Self-Service portal.
You have two options to make the new Connect-Plugins available:

  1. Wait until the reconciliation.connect.plugin job has executed

    1. You can see how often the reconciliation connect plugin job runs by checking selfservice-api’s SCHEDULER_RECONCILIATION_CONNECT_PLUGINS_CRON config value.

    2. You can change the frequency by configuring the SCHEDULER_RECONCILIATION_CONNECT_PLUGINS_CRON value; see Connect Reconciliation Jobs in Platform Manager. An example override follows this list.

  2. Manually trigger a refresh of the Connector Plugins by editing the Instance associated with the Axual Connect installation in the Self-Service Portal

    1. Log in to the Self-Service Portal as a TENANT_ADMIN

    2. Go to the Instances page

    3. Select the Instance associated with the Axual Connect instance you updated the plugins for

    4. Press the edit button

    5. Save the Instance as-is, without changing anything. This will trigger the reconciliation plugin job to run immediately
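
As an example of option 1, the schedule can be overridden in the selfservice-api configuration. How the value is supplied depends on how you deploy selfservice-api; the expression below assumes a Spring-style six-field cron (seconds first):

    # Hypothetical override: run the connect plugin reconciliation every 15 minutes
    SCHEDULER_RECONCILIATION_CONNECT_PLUGINS_CRON: "0 0/15 * * * *"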

When upgrading connect-plugin versions, existing connectors may start failing due to missing mandatory configurations.
We advise you to collect a list of Connect-Applications using a certain Connect-Plugin before performing any maintenance on it.
Make sure that the Connect-Application owners are aware of the implications of your maintenance operation.

You can search by Application Class in the Self-Service portal.

Create a plugin download location

It may be convenient to create a Connector plugin download location directly inside the namespace that hosts the Connect framework. The following manifests provide a simple way to achieve this.

# Use when Axual Connect needs to find Connector plugins internally:
# 1) Deploy these 3 Kubernetes resources to the namespace that will run Connect:
#    "kubectl apply -f connectorstore.yaml"
# 2) Copy the connector packages from your local machine into the /usr/share/nginx/html directory, where they will be exposed by nginx and persisted in the volume:
#    "kubectl cp connector-package.tgz <pod_name>:/usr/share/nginx/html"
# 3) Verify with "kubectl exec <pod_name> -- ls -larth /usr/share/nginx/html" and make sure the axual-connect-common-resources archive exists!
# 4) (Optional) Verify by port-forwarding the Service and checking the content: "kubectl port-forward pods/<pod_name> 8000:80"
#    Check via browser or download via curl: "curl localhost:8000/connector-package.tgz > check.tgz"
# 5) Configure Connect to download the packages from the Service.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: connectorstore-volume
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard # use a StorageClass name that exists in your cluster
  volumeMode: Filesystem

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: connectorstore
spec:
  replicas: 1
  selector:
    matchLabels:
      app: connectorstore
  template:
    metadata:
      labels:
        app: connectorstore
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: connectors
              mountPath: /usr/share/nginx/html
      volumes:
        - name: connectors
          persistentVolumeClaim:
            claimName: connectorstore-volume

---
apiVersion: v1
kind: Service
metadata:
  name: connectorstore
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: connectorstore
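
With the connectorstore Service in place, the Axual Connect downloadPlugins configuration from the earlier step can point at the in-cluster address. The namespace and archive file names below are examples; use the namespace the Service runs in and the names of the archives you copied into the pod:

    downloadPlugins:
      artifactsBaseUrl: "http://connectorstore.<namespace>.svc.cluster.local"
      connectPluginsFile: "plugins.tgz"
      commonResourcesFile: "axual-connect-common-resources.tgz"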