Debezium SQL Server CDC Source Connector

For previous versions you can check the Official Debezium Documentation.

Type

source

Class

io.debezium.connector.sqlserver.SqlServerConnector

Target System

SQL Server

Maintainer

Debezium / Red Hat

License

Apache License 2.0

Project

github.com/debezium/debezium

Download

Debezium Releases

This page documents version 3.0.8.Final. Newer versions should be compatible unless there are breaking changes, but field names or default values may differ. If you notice discrepancies, please contact Axual Support.

Description

The Debezium SQL Server CDC Source Connector captures change events from SQL Server databases using SQL Server’s native CDC feature and publishes them as records to Kafka topics.

It is maintained by the Debezium community as part of the open-source Debezium project (github.com/debezium/debezium).

The connector reads SQL Server’s built-in CDC change tables, which must be enabled on the target database. Schema history is stored in a dedicated Kafka topic so events are decoded correctly even after schema changes.
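Debezium can only stream tables for which a CDC capture instance exists. As a quick sanity check (a sketch, assuming the database_name database and the demo.categories table used in the Getting Started guide below), you can inspect SQL Server's CDC flags directly:

```sql
-- Is CDC enabled on the database? (1 = enabled)
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE name = 'database_name';

-- Is the table tracked by a capture instance? (1 = tracked)
SELECT s.name AS schema_name, t.name AS table_name, t.is_tracked_by_cdc
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
WHERE s.name = 'demo' AND t.name = 'categories';
```

If the table is not yet tracked, a capture instance can be created with the standard sys.sp_cdc_enable_table procedure; note that on Google Cloud SQL the database-level step uses msdb.dbo.gcloudsql_cdc_enable_db, as shown in the Getting Started guide below.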

Features

  • Real-time Change Data Capture (CDC) from SQL Server tables

  • Captures inserts, updates, and deletes

  • Schema history stored in a dedicated Kafka topic

  • Axual-specific SSL key material managed via keyvault references

  • Stream names follow the convention <topic.prefix>.<schema>.<table>

  • Supports optional TLS client certificate authentication

When to Use

  • You need real-time CDC from SQL Server tables into Kafka.

  • You want to capture all DML events with millisecond latency.

When NOT to Use

  • SQL Server deployed via Google Cloud may incur a licensing fee that greatly exceeds the cost of the server itself. If you follow the Getting Started guide below, complete it within a single day and decommission all resources when done.

Installation

The connector is available from the Debezium Releases.

  1. Navigate to the releases page and select the version matching your Kafka Connect installation.

  2. Download the JAR file.

For installation steps, see Installing Connector Plugins.

Configuration

For the complete configuration reference, see the official source connector documentation.

To configure a connector in Axual Self-Service, see Starting Connectors.
To speed up your deployment, use the Terraform Boilerplate or the Management API Boilerplate.

Axual Platform requirements

The Debezium SQL Server connector maintains a schema history by writing to and reading from a dedicated Kafka topic (schema.history.internal.kafka.topic). This means the connector opens an internal Kafka producer and consumer — separate from the main Connect worker — to persist schema changes. On the Axual Platform, Kafka requires SSL authentication, so this internal connection must also be configured with the SSL credentials of the connector application. The SSL values are read from Axual’s keyvault at runtime using the ${keyvault:…} reference syntax.

Supply the following additional properties when running this connector on the Axual Platform.

The schema.history.internal.kafka.topic value must be resolved using Axual’s name-resolving mechanism. Consult an Axual platform operator to acquire the name-resolving values (tenant, instance, and environment short names).

auto.create.topics.enable

false

schema.history.internal.consumer.group.id

This consumer-group name must be resolved!

Example resolved value:
axual-local-default-my_sqlserver_name-dbhistory

schema.history.internal.consumer.security.protocol

SSL

schema.history.internal.consumer.ssl.key.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.key.password}

schema.history.internal.consumer.ssl.keystore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.location}

schema.history.internal.consumer.ssl.keystore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.password}

schema.history.internal.consumer.ssl.truststore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.location}

schema.history.internal.consumer.ssl.truststore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.password}

schema.history.internal.producer.security.protocol

SSL

schema.history.internal.producer.ssl.key.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.key.password}

schema.history.internal.producer.ssl.keystore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.location}

schema.history.internal.producer.ssl.keystore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.keystore.password}

schema.history.internal.producer.ssl.truststore.location

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.location}

schema.history.internal.producer.ssl.truststore.password

${keyvault:connectors/axual/local/default/my_categories_app:ssl.truststore.password}

If TLS client certificates are used, additionally provide:

schema.history.internal.kafka.security.protocol

SSL

schema.history.internal.kafka.recovery.attempts

100

schema.history.internal.kafka.recovery.poll.interval.ms

100

schema.history.internal.kafka.query.timeout.ms

3000

database.trustServerCertificate

true

Getting Started

Prerequisites

SQL Server instance

We’ll deploy a SQL Server instance using Google Cloud Services.
If you have a Google account, you can sign up for a free trial of Google Cloud.

You can read more about SQL Server on Google Cloud here.

  1. Create a new SQL Server instance.

    • instance ID: my-axual-sqlserver-connector-test
      The instance ID is irrelevant for our example: It will only be displayed in your Google overview.

    • Password: You can set the password for the sqlserver user of the database. We’ll use this password when configuring the Connector-Application later.
      You can reset this password later.

    • Database: Select "SQL Server 2019 Standard" (likely the default option).

    • Choose a configuration to start with: Development

    • Region: europe-west1
      The region isn’t very relevant for our example, but usually you would select a region geographically closest to the Connect-Cluster.

    • Zonal availability: Single zone

    • Customize your instance: Click "Show configuration options"

      • Machine type: Click the dropdown menu and select Lightweight. 1 vCPU is enough.

      • Storage: go for the least amount of storage.

        • If available, use HDDs.

        • If given the option, do not enable automatic storage increases.

      • Connections: Leave only "Public IP" selected.

        • Authorized networks: Click "Add network". Use any name and 0.0.0.0/0 as the network (in CIDR notation), then click "Done".
          This will open the database to the entire internet. That’s ok, we’ll delete it shortly anyway.

      • Authentication: Skip this config if it’s present

      • Data protection:

        • Enable backups. Under "Advanced options", select (single) Region (europe-west1)

        • If available, enable point-in-time recovery

        • Disable deletion protection

      • Maintenance: Skip this config

      • Flags and parameters: Skip this config

      • Labels: Skip this config

    • Click "Create instance".

  2. While the database server is getting deployed, let’s create a bucket.

    • Name your bucket: sqlserver-init-axual-connect (or something different, in case the name is already taken).
      Skip labels.
      Click "Continue".
      The bucket name is irrelevant for our example: It will only be displayed in your Google overview.

    • Choose where to store your data:

      • Region: europe-west1 (you don’t need Multi or dual region. Single region is enough)
        The region isn’t very relevant for our example, but usually you would select a region geographically closest to the Connect-Cluster.

    • Choose a default storage class for your data: Standard.
      Click "Continue".

    • Choose how to control access to objects: Uniform.
      Click "Continue".

    • Choose how to protect object data: None

      • Data encryption: Skip this config

  3. Click "Create" to create the bucket.
    If you get a pop-up with "Public access will be prevented", click "Confirm".

  4. The bucket page will open. Click "Upload files"

  5. Save the following text to a file on your system, and then click "Upload files" to upload it into the bucket:

    Create the test schema and table, and enable CDC:

    CREATE SCHEMA demo
    GO
    
    CREATE TABLE demo.categories (
        category_id   INT IDENTITY PRIMARY KEY,
        category_name VARCHAR(255) NOT NULL
    );
    
    SET IDENTITY_INSERT demo.categories ON;
    INSERT INTO demo.categories (category_id, category_name)
    VALUES (1,'Names'), (2,'Places'), (3,'Hobbies');
    SET IDENTITY_INSERT demo.categories OFF;
    
    EXEC msdb.dbo.gcloudsql_cdc_enable_db 'database_name';

    The last line enables CDC on the database. You can read more about it here.

  6. You can close the buckets page. Let’s go back to our SQL instance.
    Select your instance to open it. Note down the public IP address.
    We’ll use it when configuring the Connector-Application later.

  7. [Optional] Change the sqlserver user’s password:

    • On the left-side menu, click "Users".

    • Click the dots next to the sqlserver user.

    • Click "Change password".

    • Type a new password and click "OK".

  8. On the left-side menu, click "Databases".

    • Click "Create Database".
      Use database_name as the name, since this is the name used in this example instance-setup (we referenced it in the SQL snippet above, and we’ll do it again in the connector configuration).

      • Click "Create".

  9. On the left-side menu, click "Overview". Click the "Import" button.

    • Source: click "Browse".

      • Double-click your bucket and select the SQL file we saved earlier.

      • Click "SELECT" at the bottom of the page.

    • File format: SQL

    • Destination: database_name

    • Click "Import"
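Once the import finishes, it is worth confirming that the schema, the seed data, and CDC all landed as expected. A minimal check, assuming the SQL snippet above was imported unchanged (run it against database_name, for example with sqlcmd using the public IP and the sqlserver user):

```sql
USE database_name;

-- Should return the three seeded rows: Names, Places, Hobbies
SELECT category_id, category_name FROM demo.categories;

-- Lists the CDC capture instances; fails if CDC is not enabled on the database
EXEC sys.sp_cdc_help_change_data_capture;
```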

Optional: Enabling TLS

To require SSL/TLS on your SQL Server instance:

  1. In the Google Cloud console, go to the Cloud SQL Instances page.

  2. Click the instance name, then open the Connections page and select the Security tab.

  3. Select your desired SSL mode (e.g. Allow only SSL connections or Require trusted client certificates).

If using client certificates, the client-side certificate must be placed in the Connect configuration. Reach out to Axual Cloud Operators to set this up.

Axual name-resolving properties

Some properties require Axual’s custom name-resolving mechanism.
Consult an Axual platform operator to acquire the following information:

Tenant shortName

Example value:
axual

Instance shortName

Example value:
local

Environment shortName

Example value:
default

Connect-Application shortName

This value is already provided. We’ll use:
my_categories_app

Resolving pattern for topic-names

Example pattern:
${tenant-shortName}-${instance-shortName}-${environment-shortName}-${topic-name}
Example resolved topic name:
axual-local-default-transactions

Resolving pattern for consumer-groups

Example pattern:
${tenant-shortName}-${instance-shortName}-${environment-shortName}-${consumerGroup-name}
Example resolved consumer group:
axual-local-default-com.axual.transactionProcessor

Axual streams

Follow the Creating streams documentation to create and deploy the following streams:

  • my_sqlserver_name._schema-changes — Stores the database schema history.

  • my_sqlserver_name.demo.categories — Contains DB events for the categories table.
    Stream names follow the pattern <topic.prefix>.<schema>.<table>. Create one stream per watched table.

All streams should use Key/Value type String/String, Partition count 1, Retention Time 157680000000 (5 years).

Steps

Step 1 — Create a connector application

  1. Follow the Configure and install a connector documentation to set up a new Connector-Application.
    Let’s call it my_categories_app.
    The plugin name is io.debezium.connector.sqlserver.SqlServerConnector.
    If a plugin isn’t available, ask a platform operator to install plugins.

    Use the following application ID: my_sqlserver_name-dbhistory.

Step 2 — Configure the connector

  1. Provide the following configuration:

    database.hostname

    Example value:
    123.123.123.123

    database.port

    1433

    database.user

    sqlserver

    database.password

    Example value:
    1234abcdEFG

    topic.prefix

    my_sqlserver_name

    database.names

    database_name

    table.include.list

    Example value:
    demo.categories

    schema.history.internal.kafka.bootstrap.servers

    Example values:
    - Local (helm platform): platform.local:31757
    - Axual-Cloud rokin cluster: bootstrap-c1-ams01-azure.axual.cloud:9094
    - SAAS trial: oberon-kafka-bootstrap.axual.cloud:9094

    schema.history.internal.kafka.topic

    This topic name must be resolved! Use the name-resolving values acquired above.

    Example resolved value:
    axual-local-default-my_sqlserver_name._schema-changes

  2. Supply the Axual Platform-required properties described in the Axual Platform requirements section above.

Step 3 — Start the connector

Authorize the my_categories_app source connector application to produce to and consume from all streams:

  • my_sqlserver_name._schema-changes

  • my_sqlserver_name.demo.categories

Then start the connector application from Axual Self-Service.

Step 4 — Verify

In Axual Self-Service, use stream-browse on my_sqlserver_name.demo.categories to confirm CDC events are arriving when rows are inserted or updated in SQL Server.
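To generate events to browse, run some DML against the watched table (assuming the demo.categories table from the Getting Started setup):

```sql
INSERT INTO demo.categories (category_name) VALUES ('Sports');
UPDATE demo.categories SET category_name = 'Travel' WHERE category_name = 'Sports';
DELETE FROM demo.categories WHERE category_name = 'Travel';
```

Each statement should yield a change record on the stream. SQL Server's CDC capture job reads the transaction log asynchronously, so events may take a few seconds to appear.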

Cleanup

When you are done:

  1. Stop the connector application in Axual Self-Service.

  2. Remove stream access for the application if no longer needed.

  3. Delete your Cloud Storage bucket and your SQL instance if they were created only for testing.

Known limitations

  • SQL Server CDC must be enabled on the target database before starting the connector.

  • The schema.history.internal.kafka.topic value must be resolved using Axual’s name-resolving mechanism before the connector can be started.

  • SSL configuration values must use the keyvault: prefix format specific to Axual.

  • SQL Server licensing costs when deployed via Google Cloud may be significant — decommission resources within the same day.

Examples

Full configuration on Axual Platform

Derived from the Terraform Boilerplate.

{
  "name": "my-debezium-sqlserver-source",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "tasks.max": "1",
    "database.hostname": "sqlserver.example.com",
    "database.port": "1433",
    "database.user": "debezium_user",
    "database.password": "Deb3zium@MssqlPass",
    "database.names": "database_name",
    "database.encrypt": "true",
    "database.trustServerCertificate": "true",
    "topic.prefix": "my_sqlserver_name",
    "topic.creation.enable": "false",
    "table.include.list": "demo.categories",
    "snapshot.mode": "initial",
    "include.schema.changes": "true",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "schema.history.internal.kafka.bootstrap.servers": "kafka.axual.example.com:9093",
    "schema.history.internal.kafka.topic": "axual-myinstance-exampleenvironment-schema-history-sqlserver",
    "schema.history.internal.consumer.group.id": "my-debezium-sqlserver-app",
    "schema.history.internal.consumer.security.protocol": "SSL",
    "schema.history.internal.consumer.ssl.keystore.location": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.keystore.location}",
    "schema.history.internal.consumer.ssl.keystore.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.keystore.password}",
    "schema.history.internal.consumer.ssl.key.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.key.password}",
    "schema.history.internal.consumer.ssl.truststore.location": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.truststore.location}",
    "schema.history.internal.consumer.ssl.truststore.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.truststore.password}",
    "schema.history.internal.producer.security.protocol": "SSL",
    "schema.history.internal.producer.ssl.keystore.location": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.keystore.location}",
    "schema.history.internal.producer.ssl.keystore.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.keystore.password}",
    "schema.history.internal.producer.ssl.key.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.key.password}",
    "schema.history.internal.producer.ssl.truststore.location": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.truststore.location}",
    "schema.history.internal.producer.ssl.truststore.password": "${keyvault:connectors/<tenant>/<instance>/<environment>/<application>:ssl.truststore.password}",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "errors.tolerance": "none"
  }
}

License

The Debezium SQL Server source connector is licensed under the Apache License, Version 2.0.