Debezium - MySQL source-connector

You’ll find this documentation very similar to the Debezium SQL Server documentation, because the only differences between the two connectors are three properties and the DB backup configuration.

Debezium - MySQL

If you already have a publicly accessible deployment of MySQL, then you can already configure and install a connector using the Axual Self-Service UI. Use the configuration instructions below.

If you don’t have one available, follow the next section to deploy a publicly available instance of a MySQL server.

Deploying a MySQL instance

We’ll deploy a MySQL instance using Google Cloud.

If you already have a Google account, you can sign up for a free trial of Google Cloud here.

You can read more about MySQL on Google Cloud here.

Let’s get started.

  1. Create a new MySQL instance here.

  2. The instance ID is irrelevant for Connect, but it will be displayed in your Google overview. You will also set the password for the database root user here. Remember it; you can also reset it later.

  3. Select database "MySQL 8.0" (likely the default option).

  4. The region is irrelevant, too, but usually you would select a region geographically closest to the Connect-cluster.

  5. Zone availability: "Single" is enough. We’ll aim to have a very lightweight deployment.

  6. Customize your instance: Click "Show configuration options"

    • Machine type - Click the dropdown menu and select "Lightweight": 1 vCPU is enough.

    • Storage: go for the least amount of storage.

    • Connections: Leave only "Public IP" selected. Under "Authorized networks", click "Add network". Use any name and "0.0.0.0/0" as the CIDR notation, then click "Done". This opens the database to the entire internet. That’s ok for this walkthrough; we’ll delete the instance shortly anyway.

    • Backups: Enable backups and point-in-time-recovery.

  7. Click "Create instance". Wait for the operation to complete.

  8. While the database server is getting deployed, let’s create a bucket.

    • Give it any name (e.g. "mysql-init-axual-connect"). Click Continue.

    • Where to store your data: Region (we’re using a single region). Choose any geographic location and click Continue.

    • Default storage class: Standard. Click Continue.

    • Control access to objects: Uniform. Click Continue.

    • Protect data: None

  9. Click "Create" to create the bucket.

  10. The bucket page will open.

  11. Save this file locally, and then click "Upload files" to upload it into the bucket:

USE mysql_database_name;

-- Create the table whose changes the connector will capture
CREATE TABLE HOTEL
(
  hotel_id       INT          NOT NULL AUTO_INCREMENT,
  hotel_name     VARCHAR(100) NULL,
  hotel_address  VARCHAR(250) NULL,
  hotel_checkin  TIME         NULL,
  hotel_checkout TIME         NULL,
  PRIMARY KEY (hotel_id)
);

-- Seed three sample rows
INSERT INTO `HOTEL` VALUES
(1, 'iris', null, null, null),
(2, 'altair', null, null, null),
(3, 'turbulence', null, null, null);
  12. You can close the buckets page. Let’s go back to our SQL instances. Select your MySQL instance to view it. Note down the public IP address. This is the database.hostname you’ll use in your configuration.

  13. [OPTIONAL] On the left-side menu, click "Users". Click the dots next to the root user. You can change its password. We’ll use these credentials to connect to the database.

  14. On the left-side menu, click "Databases". Click "Create Database". Use mysql_database_name as the name, as this is the name used throughout this example document. We referenced it in the SQL snippet above, and we’ll reference it again in the connector configuration. Leave the rest of the configuration as it is and click "Create".

  15. On the left-side menu, click "Overview". Click the "Import" tab.

    • Source: click "Browse". Select the SQL file we saved earlier.

    • File format: SQL

    • Destination: mysql_database_name

    • Click "Import"
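Once the import finishes, you can sanity-check the seeded data from any MySQL client connected to the instance (a sketch; the database and table names are the example values from this document):

```sql
-- Verify the import: all three seeded hotels should be returned
USE mysql_database_name;
SELECT hotel_id, hotel_name FROM HOTEL;
-- Expect three rows: iris, altair, turbulence
```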

Configuring a new connector

Summary of steps. You will need to perform the following:

  1. Determine the values used for topic and consumer-group name-resolving logic.

  2. Create the associated Streams in the UI

  3. Provide default connector configuration

  4. Create an additional application (workaround for bug AXPD-5547, internal link)

  5. Authorize applications

  6. Provide advanced (custom) security configuration to the connector

1. Determine name-resolving properties

Some properties require you to be aware of Axual’s custom name-resolving mechanism. Consult an Axual-platform operator to acquire the following information:

Resolving pattern for topic-names

Example pattern:

<tenant short-name>-<instance short-name>-<environment short-name>-<topic-name>

Resolving pattern for consumer-groups

Example pattern:

<tenant short-name>-<instance short-name>-<environment short-name>-<consumer-group-name>

Tenant short-name

Example name:

tenant

Instance short-name

Example name:

instance

Environment short-name

Example name:

environment

Connect-Application short-name

You will create this application shortly, and you can choose any valid name. Example name:

my_hotels_app
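The name-resolving logic above can be sketched as follows; the pattern and short-names are the example values from this section, so substitute the ones your platform operator gave you:

```python
# Sketch of Axual's name-resolving logic, using the example pattern and
# short-names from this section (assumed values; yours may differ).
PATTERN = "{tenant}-{instance}-{environment}-{name}"

def resolve(name: str,
            tenant: str = "tenant",
            instance: str = "instance",
            environment: str = "environment") -> str:
    """Resolve a topic or consumer-group name against the pattern."""
    return PATTERN.format(tenant=tenant, instance=instance,
                          environment=environment, name=name)

# The schema-history topic used later in the connector configuration:
print(resolve("mysql_nickname._schema-changes"))
# tenant-instance-environment-mysql_nickname._schema-changes
```

The same function resolves consumer-group names, e.g. `resolve("mysql_nickname-dbhistory")` for the auxiliary consumer created later.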

2. Create the associated streams in the UI

Follow the stream-management documentation (2022.1@axual::self-service/stream-management.html.adoc#creating-streams) in order to create 3 (or more) streams. All streams will have a Key/Value type of String (configured at stream creation time).

All the streams will be deployed onto the same environment, and only this environment will be used throughout the entire document. When you deploy the streams onto the environment, you will use the following configuration:

  • a retention time and segment.ms of at least 157680000000 ms (5 years).

  • a single (one) partition.

Streams to create and deploy:

  1. The first stream will be named after a <nickname> for the database. You can choose any value for this (it’s for internal use only), but it has to be unique within Axual-Connect. In our example, both the DB server nickname and the stream name will be mysql_nickname. This stream contains schema-change events that describe schema changes applied to captured tables in the database.

  2. The second stream’s name will be the composition <nickname>.<Database name>.<table name>, for every table we intend to watch. In our example, we’ll only watch a single table, so we’ll create a single stream, named mysql_nickname.mysql_database_name.HOTEL. This naming pattern is enforced by the connect-plugin. These streams will contain the events that happen on each table, including (but not limited to) insertions, updates, and deletions.

  3. The third and final stream will store the database schema history. It can have any name, so we’ll call it <nickname>._schema-changes, for consistency: mysql_nickname._schema-changes.

3. Provide default connector configuration

Follow the Configuring Connector-Applications documentation to set up a new Connect-Application. Let’s call it my_hotels_app. The plugin name is "io.debezium.connector.mysql.MySqlConnector". Configure the security certificate as instructed. The values you will need to supply as configuration are listed in this section.

For advanced configuration, see the official connector documentation.

database.hostname

Example value:

123.123.123.123

database.port

3306

database.user

Example value:

root

database.password

Example value:

1234abcdEFG

database.server.name

This is the <nickname> we used as a prefix for all the streams we created.

mysql_nickname

database.server.id

This has to be a numeric ID that is unique within the MySQL cluster; the connector joins the cluster as a replica using this ID. The exact value isn’t relevant for our example use-case.

184054

database.include.list

The name of the database you created

Example value:

mysql_database_name

table.include.list

The name of the table you created, prefixed by the DB name

Example value:

mysql_database_name.HOTEL

database.history.kafka.bootstrap.servers

Example values:

  • For local development (helm platform deployment):

    platform.local:31757

  • For Axual-Cloud, rokin cluster:

    bootstrap-c1-ams01-azure.axual.cloud:9094

  • For SAAS trial:

    oberon-kafka-bootstrap.axual.cloud:9094

database.history.kafka.topic

This is the 3rd topic we created, using the following naming convention:

<nickname>._schema-changes

This topic name has to be resolved! Information required for this step was acquired at Step #1.

tenant-instance-environment-mysql_nickname._schema-changes
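Pulled together, the default configuration from this section looks roughly like the following. This is a sketch using the example values from above (hostname, credentials, server ID, and the local-development bootstrap servers are all placeholders); replace them with your own:

```json
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.hostname": "123.123.123.123",
  "database.port": "3306",
  "database.user": "root",
  "database.password": "1234abcdEFG",
  "database.server.name": "mysql_nickname",
  "database.server.id": "184054",
  "database.include.list": "mysql_database_name",
  "table.include.list": "mysql_database_name.HOTEL",
  "database.history.kafka.bootstrap.servers": "platform.local:31757",
  "database.history.kafka.topic": "tenant-instance-environment-mysql_nickname._schema-changes"
}
```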

4. Create an additional application

Compared to other connectors, the Debezium - MySQL connector has an additional requirement: it needs to read back the stream it normally produces to. The Self-Service UI doesn’t allow this yet (bug tracked here, internal). To work around this limitation, we’ll create a consumer application using the same certificate as the connector, and authorize it to consume the stream instead.

Use the following configuration:

  • application ID: mysql_nickname-dbhistory (<nickname>-dbhistory)

  • name: my_hotels_app_aux_consumer (example)

  • short_name: my_hotels_app_aux_consumer (example)

  • Owner: same as the Connect-Application

  • Type: Custom

  • Application type: any value.

  • Visibility: private

When deploying the application on the same environment as the Connector-Application, supply the same certificate you used for the Connector-Application.

5. Authorize applications

Authorize the my_hotels_app_aux_consumer Custom-Application to consume the mysql_nickname._schema-changes stream.

Authorize the my_hotels_app Connector-Application to produce to all 3 streams we just created:

  • mysql_nickname

  • mysql_nickname.mysql_database_name.HOTEL

  • mysql_nickname._schema-changes

6. Provide advanced (custom) configuration to the connector

Edit the Connector-Application configuration (my_hotels_app). We need to supply additional (custom) properties, with keys which are not automatically available in the self-service UI.

Many configuration values are of the form:

  • keyvault:connectors/tenant/instance short-name/environment short-name/application short-name:property.name

  • Resolved example: keyvault:connectors/tenant/instance/environment/my_hotels_app:property.name

Please update the values which are of this form as per your findings during Step #1. The dollar sign and the curly brackets are part of the value; do not remove them!

auto.create.topics.enable

false

database.history.consumer.group.id

This is the application ID used for the my_hotels_app_aux_consumer Custom-Application

ATTENTION! This consumer-group name has to be resolved! Information required for this step was acquired at Step #1

tenant-instance-environment-mysql_nickname-dbhistory

database.history.consumer.security.protocol

SSL

database.history.consumer.ssl.key.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.key.password}

database.history.consumer.ssl.keystore.location

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.location}

database.history.consumer.ssl.keystore.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.password}

database.history.consumer.ssl.truststore.location

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.location}

database.history.consumer.ssl.truststore.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.password}

database.history.producer.security.protocol

SSL

database.history.producer.ssl.key.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.key.password}

database.history.producer.ssl.keystore.location

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.location}

database.history.producer.ssl.keystore.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.password}

database.history.producer.ssl.truststore.location

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.location}

database.history.producer.ssl.truststore.password

${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.password}
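For reference, the custom properties above form a block like the following. This is a sketch: the keyvault paths use the example tenant/instance/environment/application short-names from Step #1 and must be adjusted to match yours.

```properties
auto.create.topics.enable=false
database.history.consumer.group.id=tenant-instance-environment-mysql_nickname-dbhistory
database.history.consumer.security.protocol=SSL
database.history.consumer.ssl.key.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.key.password}
database.history.consumer.ssl.keystore.location=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.location}
database.history.consumer.ssl.keystore.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.password}
database.history.consumer.ssl.truststore.location=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.location}
database.history.consumer.ssl.truststore.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.password}
database.history.producer.security.protocol=SSL
database.history.producer.ssl.key.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.key.password}
database.history.producer.ssl.keystore.location=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.location}
database.history.producer.ssl.keystore.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.keystore.password}
database.history.producer.ssl.truststore.location=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.location}
database.history.producer.ssl.truststore.password=${keyvault:connectors/tenant/instance/environment/my_hotels_app:ssl.truststore.password}
```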

You can now start the Connector-Application.

Cleanup

Once you are done, stop the connector application and clean up the unused Axual resources.

In case you deployed your resources via Google Cloud, don’t forget to delete your bucket and your database once you are done.