SQL Server source Connector, version 2.6.1.Final

SQL Server

If you already have a publicly accessible deployment of SQL Server, you can skip ahead to Configure and install a connector using the Axual Self-Service UI, following the configuration instructions below.
If you don’t have one available, the following section explains how to deploy a publicly accessible instance of SQL Server.

Deploying a SQL Server instance

Without TLS

We’ll deploy a SQL Server instance using Google Cloud Services.
If you have a Google account, you can sign up for a free trial of Google Cloud.

You can read more about SQL Server on Google Cloud here.

Do not spread this guide over multiple days: SQL Server deployed via Google Cloud may incur a licensing fee that greatly exceeds the cost of the server itself. Go through this use case in a single day, and finish by decommissioning all your resources at the end.

Let’s get started.

  1. Create a new SQL Server instance.

    • instance ID: my-axual-sqlserver-connector-test
      The instance ID is irrelevant for our example: It will only be displayed in your Google overview.

    • Password: You can set the password for the sqlserver user of the database. We’ll use this password when configuring the Connector-Application later.
      You can reset this password later.

    • Database: Select "SQL Server 2019 Standard" (likely the default option).

    • Choose a configuration to start with: Development

    • Region: europe-west1
      The region isn’t very relevant for our example, but usually you would select the region geographically closest to the Connect-Cluster.

    • Zonal availability: Single zone

    • Customize your instance: Click "Show configuration options"

      • Machine type: Click the dropdown menu and select Lightweight. 1vCPU is enough.

      • Storage: go for the least amount of storage.

        • If available, use HDDs.

        • If given the option, do not enable automatic storage increases.

      • Connections: Leave only "Public IP" selected.

        • Authorized networks. Click "Add network". Use any name and 0.0.0.0/0 as the CIDR notation (and click "Done").
          This will open the database to the entire internet. That’s ok, we’ll delete it shortly anyway.

      • Authentication: Skip this config if it’s present

      • Data protection:

        • Enable backups. Under "Advanced options", select (single) Region (europe-west1)

        • If available, enable point-in-time recovery

        • Disable deletion protection

      • Maintenance: Skip this config

      • Flags and parameters: Skip this config

      • Labels: Skip this config

    • Click "Create instance".

  2. While the database server is getting deployed, let’s create a bucket.

    • Name your bucket: sqlserver-init-axual-connect (or something different, in case the name is already taken).
      Skip labels.
      Click "Continue".
      The bucket name is irrelevant for our example: It will only be displayed in your Google overview.

    • Choose where to store your data:

      • Region: europe-west1 (you don’t need Multi or dual region. Single region is enough)
        The region isn’t very relevant for our example, but usually you would select a region geographically closest to the Connect-Cluster.

    • Choose a default storage class for your data: Standard.
      Click "Continue".

    • Choose how to control access to objects: Uniform.
      Click "Continue".

    • Choose how to protect object data: None

      • Data encryption: Skip this config

  3. Click "Create" to create the bucket.
    If you get a pop-up with "Public access will be prevented", click "Confirm".

  4. The bucket page will open. Click "Upload files"

  5. Save the following text to a file on your system, and then click "Upload files" to upload it into the bucket:

    CREATE SCHEMA demo
    GO
    
    CREATE TABLE demo.categories (
    	category_id   INT IDENTITY  PRIMARY KEY,
    	category_name VARCHAR (255) NOT NULL
    );
    
    SET IDENTITY_INSERT demo.categories ON;
    INSERT INTO demo.categories (category_id, category_name)
    VALUES (1,'Names'), (2,'Places'), (3,'Hobbies');
    
    EXEC msdb.dbo.gcloudsql_cdc_enable_db 'database_name';

    The last line in the snippet above enables "Change Data Capture" on the database. You can read more about it here.

  6. You can close the buckets page. Let’s go back to our SQL instance.
    Select your instance to open it. Note down the public IP address.
    We’ll use it when configuring the Connector-Application later.

  7. [Optional] Change the sqlserver user’s password:

    • On the left-side menu, click "Users".

    • Click the dots next to the sqlserver user.

    • Click "Change password".

    • Type a new password and click "OK".

  8. On the left-side menu, click "Databases".

    • Click "Create Database".
      Use database_name as the name, since this is the name used in this example instance-setup (we referenced it in the SQL snippet above, and we’ll do it again in the connector configuration).

      • Click "Create".

  9. On the left-side menu, click "Overview". Click the "Import" button.

    • Source: click "Browse".

      • Double-click your bucket and select the SQL file we saved earlier.

      • Click "SELECT" at the bottom of the page.

    • File format: SQL

    • Destination: database_name

    • Click "Import"

With TLS

  • To require SSL/TLS, perform the following extra steps in addition to the ones above:

    • In the Google Cloud console, go to the Cloud SQL Instances page.

    • To open the Overview page of an instance, click the instance name.

    • Click Connections from the SQL navigation menu.

    • Select the Security tab.

      • Select one of the following:

        1. Allow unencrypted network traffic (not recommended)

        2. Allow only SSL connections. This option only allows connections using SSL/TLS encryption. Certificates aren’t validated.

        3. Require trusted client certificates. This option only allows connections from clients that use a valid client certificate and are SSL encrypted.

Please note that the client-side certificate must be placed in the Connect configuration in order to establish trust when working with the Debezium connector. Please reach out to the Axual Cloud Operators to arrange this.

Configuring a new source Connector

Determine name-resolving properties

Some properties require you to be aware of Axual’s custom name-resolving mechanism.
Consult an axual-platform operator to acquire the following information:

Tenant shortName

Example value:
axual

Instance shortName

Example value:
local

Environment shortName

Example value:
default

Connect-Application shortName

This value is already provided. We’ll use:
my_categories_app

Resolving pattern for topic-names

Example pattern:
${tenant-shortName}-${instance-shortName}-${environment-shortName}-${topic-name}
Example resolved topic name:
axual-local-default-transactions

Resolving pattern for consumer-groups

Example pattern:
${tenant-shortName}-${instance-shortName}-${environment-shortName}-${consumerGroup-name}
Example resolved consumer group:
axual-local-default-com.axual.transactionProcessor
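The resolving patterns above are plain string substitutions. The following is a minimal sketch (the `resolve` helper is hypothetical, not part of Axual), using the example short-names from this section:

```python
def resolve(pattern: str, values: dict) -> str:
    """Substitute ${...} placeholders in an Axual name-resolving pattern."""
    for key, value in values.items():
        pattern = pattern.replace("${" + key + "}", value)
    return pattern

shortnames = {
    "tenant-shortName": "axual",
    "instance-shortName": "local",
    "environment-shortName": "default",
}

topic = resolve(
    "${tenant-shortName}-${instance-shortName}-${environment-shortName}-${topic-name}",
    {**shortnames, "topic-name": "transactions"},
)
group = resolve(
    "${tenant-shortName}-${instance-shortName}-${environment-shortName}-${consumerGroup-name}",
    {**shortnames, "consumerGroup-name": "com.axual.transactionProcessor"},
)
print(topic)  # axual-local-default-transactions
print(group)  # axual-local-default-com.axual.transactionProcessor
```

You will need these resolved names later, when filling in the schema-history topic and consumer group in the connector configuration.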

Create the associated streams in the UI

  1. Follow the Creating streams documentation in order to create 3 (or more) streams, and deploy them onto an environment.

    • All streams will have a Key/Value of type String/String (configured at stream creation time).

    • All streams will be deployed onto the same environment. This is the only environment we’ll use throughout the entire process.

    • When you deploy the streams onto the environment, use the following stream-deployment configuration:

      • Partition count: 1

      • Retention Time: 157680000000 (5 years)

      • (Optional advanced property) segment.ms of at least 157680000000 (5 years).

  2. Create and deploy the following streams:

  1. my_sqlserver_name
    This stream contains schema-change events that describe schema changes applied to captured tables in the database.
    This stream name is a <nickname> for the database. You can choose any value for this stream name: it’s for internal use only, but it has to be unique within the Connect-Cluster.

  2. my_sqlserver_name._schema-changes
    This stream stores the database schema history.
    This stream name is also completely flexible and independent of the other stream names, but we chose to call it <nickname>._schema-changes for consistency.

  3. my_sqlserver_name.demo.categories
    This stream will contain DB events that happen for the categories table, including (but not limited to) insertions, updates and deletions.
    This stream name is the composition of <nickname>.<schema name>.<table name>. This naming pattern is enforced by the Connect-Plugin.
    You would need to create one such stream for every DB table you intend to watch. In our example, we’ll only watch a single table, so we created a single stream.
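The full set of stream names can be derived from the database <nickname> plus the list of watched tables. A sketch (the `stream_names` helper is hypothetical; the names are the example values from this section):

```python
def stream_names(nickname: str, tables: list[tuple[str, str]]) -> list[str]:
    """Derive the streams the connector needs from the database nickname
    and the watched (schema, table) pairs."""
    names = [nickname]                            # schema-change events
    names.append(f"{nickname}._schema-changes")   # database schema history
    for schema, table in tables:                  # one stream per watched table
        names.append(f"{nickname}.{schema}.{table}")
    return names

print(stream_names("my_sqlserver_name", [("demo", "categories")]))
```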

Create the Connect-Application, configure the Connector-Application

  1. Follow the Configure and install a connector documentation to set up a new Connector-Application.
    Let’s call it my_categories_app.
    The plugin name is io.debezium.connector.sqlserver.SqlServerConnector.
    If a plugin isn’t available, ask a platform operator to install plugins.

    Use the following application ID: my_sqlserver_name-dbhistory.
    This value isn’t enforced. We chose to use <nickname>-dbhistory for consistency with other naming schemes.

    For advanced configuration, see the official source connector documentation.

  2. Provide the following configuration to set up connectivity to the previously configured SQL Server instance.

    database.hostname
    You noted down this IP address after creating the SQL Server instance.

    Example value:
    123.123.123.123

    database.port

    1433

    database.user

    sqlserver

    database.password
    You noted down this password while creating the SQL Server instance

    Example value:
    1234abcdEFG

    database.server.name
    This is the <nickname> we used as a prefix for all the streams we created.

    my_sqlserver_name

    database.names

    database_name

    database.dbname

    database_name

    table.include.list The name of the table we created, prefixed by the schema name

    Example value:
    demo.categories

    schema.history.internal.kafka.bootstrap.servers

    Example values:

    • For local development (helm platform deployment): platform.local:31757

    • For Axual-Cloud, rokin cluster: bootstrap-c1-ams01-azure.axual.cloud:9094

    • For SAAS trial: oberon-kafka-bootstrap.axual.cloud:9094

    schema.history.internal.kafka.topic This is the topic we created using the following naming convention:
    <nickname>._schema-changes

    This topic name has to be resolved! Information required for this step was acquired at "Determine name-resolving properties"

    Example resolved value:
    axual-local-default-my_sqlserver_name._schema-changes
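Put together, the connectivity settings above correspond to a configuration map like the sketch below. All values are the example values from this section (the bootstrap server is the local-development one); substitute your own IP address, password, and resolved topic name:

```python
# Example Debezium SQL Server connector settings, assembled from the
# values documented above (example values only).
connector_config = {
    "database.hostname": "123.123.123.123",
    "database.port": "1433",
    "database.user": "sqlserver",
    "database.password": "1234abcdEFG",
    "database.server.name": "my_sqlserver_name",
    "database.names": "database_name",
    "database.dbname": "database_name",
    "table.include.list": "demo.categories",
    "schema.history.internal.kafka.bootstrap.servers": "platform.local:31757",
    # Resolved as <tenant>-<instance>-<environment>-<nickname>._schema-changes
    "schema.history.internal.kafka.topic":
        "axual-local-default-my_sqlserver_name._schema-changes",
}
```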

Provide advanced (custom) configuration to the connector

Edit the Connector-Application configuration (my_categories_app). We need to supply additional (custom) properties whose keys are not automatically available in the Self-Service UI.

Many configuration values are of the form:
keyvault:connectors/${tenant}/${instance-shortName}/${environment-shortName}/${application-shortName}:${property.name}

Resolved example:
keyvault:connectors/axual/local/default/my_categories_app:property.name

Please update the values of this form in the right column of the table, using the values you previously determined. The dollar sign and the curly brackets are part of the value; do not remove them!
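A keyvault reference can be built by substituting the four short-names and the property name into the pattern. A sketch (the `keyvault_ref` helper is hypothetical), using the example values from "Determine name-resolving properties":

```python
def keyvault_ref(tenant: str, instance: str, environment: str,
                 application: str, prop: str) -> str:
    """Build a ${keyvault:...} reference for a connector property.
    The literal ${ and } wrappers are part of the value."""
    return (f"${{keyvault:connectors/{tenant}/{instance}/"
            f"{environment}/{application}:{prop}}}")

ref = keyvault_ref("axual", "local", "default", "my_categories_app",
                   "ssl.keystore.password")
print(ref)
# ${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.password}
```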

Without TLS:

auto.create.topics.enable

false

schema.history.internal.consumer.group.id This is the application ID used for the my_categories_app Connect-Application

ATTENTION! This consumer-group name has to be resolved! Information required for this step was previously determined

Example resolved value:
axual-local-default-my_sqlserver_name-dbhistory

schema.history.internal.consumer.security.protocol

SSL

schema.history.internal.consumer.ssl.key.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.key.password}

schema.history.internal.consumer.ssl.keystore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.location}

schema.history.internal.consumer.ssl.keystore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.password}

schema.history.internal.consumer.ssl.truststore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.location}

schema.history.internal.consumer.ssl.truststore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.password}

schema.history.internal.producer.security.protocol

SSL

schema.history.internal.producer.ssl.key.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.key.password}

schema.history.internal.producer.ssl.keystore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.location}

schema.history.internal.producer.ssl.keystore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.password}

schema.history.internal.producer.ssl.truststore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.location}

schema.history.internal.producer.ssl.truststore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.password}

With TLS:

  • These are the additional values to be added:

schema.history.internal.kafka.security.protocol

SSL

schema.history.internal.kafka.recovery.attempts

100

schema.history.internal.kafka.recovery.poll.interval.ms

100

schema.history.internal.kafka.query.timeout.ms

3000

database.trustServerCertificate

true

Authorize the Connector-Application and start producing

  1. Authorize the my_categories_app source Connector-Application to produce to and consume from all 3 streams we just created:

    • my_sqlserver_name

    • my_sqlserver_name._schema-changes

    • my_sqlserver_name.demo.categories

  2. You can now start the Connector-Application.

Cleanup

Once you are done, stop the Connector-Application and clean up the unused Axual resources.
If you deployed your resources via Google Cloud, don’t forget to delete your bucket and your SQL instance as well.