Debezium - SQL Server source-connector
You’ll find this documentation very similar to the Debezium MySQL one, because the only differences between them are three properties and the DB backup configuration.
Debezium - SQL Server
If you already have a publicly accessible deployment of SQL Server, then you can already configure and install a connector using the Axual Self-Service UI. Use the configuration instructions below.
If you don’t have one available, follow the next section to deploy a publicly available instance of SQL Server.
Deploying a SQL Server instance
We’ll deploy a SQL Server instance using Google Cloud services.
If you already have a Google account, you can sign up for a free trial of Google Cloud here.
You can read more about SQL Server on Google Cloud here.
Let’s get started.
- Create a new SQL Server instance here.
- The instance ID and password are irrelevant for Connect. Use whatever you wish.
- Select database version "SQL Server 2019 Standard" (likely the default option).
- The region is irrelevant too, but usually you would select a region geographically closest to the Connect-cluster.
- Zone availability: "Single" is enough. We aim for a very lightweight deployment.
- Customize your instance: click "Show configuration options".
- Machine type: click the dropdown menu and select "Lightweight"; 1 vCPU is enough.
- Storage: go for the least amount of storage.
- Connections: leave only "Public IP" selected.
- Connections, Authorized networks: click "Add network". Use any name and "0.0.0.0/0" as the CIDR notation (and click "Done"). This will open the database to the entire internet. That's OK; we'll delete it shortly anyway.
- Backups: disable backups by unchecking the "Automate backups" tickbox.
- No changes are needed for "Maintenance", "Flags […]" and "Labels".
- Click "Create instance". Wait for the operation to complete.
- While the database server is getting deployed, let's create a bucket.
- Give it any name (e.g. "sqlserver-init-axual-connect"). Click Continue.
- Where to store your data: Region (we're using a single region). Choose any geographic location and click Continue.
- Default storage class: Standard. Click Continue.
- Control access to objects: Uniform. Click Continue.
- Protect data: None.
- Click "Create" to create the bucket.
- The bucket page will open. Click "Upload files".
- Save this file locally, and then click "Upload files" to upload it into the bucket:
```sql
CREATE SCHEMA demo;
GO
CREATE SCHEMA test;
GO

CREATE TABLE demo.categories (
    category_id   INT IDENTITY (1, 1) PRIMARY KEY,
    category_name VARCHAR (255) NOT NULL
);

SET IDENTITY_INSERT demo.categories ON;

INSERT INTO demo.categories(category_id, category_name) VALUES (1, 'Archers');
INSERT INTO demo.categories(category_id, category_name) VALUES (2, 'Warriors');
INSERT INTO demo.categories(category_id, category_name) VALUES (3, 'Medics');

EXEC msdb.dbo.gcloudsql_cdc_enable_db 'database_name';
```
- You can close the buckets page. Let's go back to our SQL instances. Select your SQL Server to view it. Note down the public IP address: this is the database.hostname you'll use in your configuration.
- On the left-side menu, click "Users". Click the dots next to the sqlserver user and change its password. We'll use these credentials to connect to the database.
- On the left-side menu, click "Databases". Click "Create Database". Use database_name as the name, as this is the name we referenced in the db initialization SQL above, when enabling Change Data Capture (you can read about it here). We'll also reference it again in the connector configuration.
- On the left-side menu, click "Overview". Click the "Import" button at the top.
- Source: click "Browse". Select the SQL file we saved earlier.
- File format: SQL
- Destination: database_name
- Click "Import"
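One caveat worth noting: the script above enables CDC at the database level (via msdb.dbo.gcloudsql_cdc_enable_db), but Debezium also requires CDC to be enabled on each captured table. The statement below is not part of the original script; it is the standard SQL Server procedure for enabling capture on the demo.categories table, shown here as a hedged fallback in case no change events appear later:

```sql
-- Assumption: CDC is already enabled on the database itself.
-- Standard SQL Server procedure (not from the original init script):
USE database_name;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'demo',
    @source_name   = N'categories',
    @role_name     = NULL;  -- NULL: no gating role restricts access to the change data
GO
```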
Configuring a new connector
You will need to perform the following:
- Determine the values used for topic and consumer-group name-resolving logic.
- Create the associated Streams in the UI
- Provide default connector configuration
- Create an additional application (workaround for bug AXPD-5547 - internal link)
- Authorize applications
- Provide advanced (custom) security configuration to the connector
1. Determine name-resolving properties
Some properties require you to be aware of Axual's custom name-resolving mechanism. Consult an axual-platform operator to acquire the following information:
- Resolving pattern for topic-names
- Resolving pattern for consumer-groups
- Tenant short-name
- Instance short-name
- Environment short-name
- Connect-Application short-name (you will create this application shortly, and you can choose any valid name)
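For illustration only, here is what such name resolving might look like. The pattern and the short-names below are hypothetical examples, not your platform's actual values; always use the values your platform operator gives you:

```
# Hypothetical topic pattern:   {tenant}-{instance}-{environment}-{topic}
# Hypothetical resolved topic:  demotenant-demoinstance-dev-my_sqlserver_name.demo.categories
# Hypothetical group pattern:   {tenant}-{instance}-{environment}-{group.id}
# Hypothetical resolved group:  demotenant-demoinstance-dev-my_categories_app
```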
2. Create the associated streams in the UI
Follow the 2022.1@axual::self-service/stream-management.html.adoc#creating-streams documentation in order to create 3 (or more) streams.
All the streams will be deployed onto the same environment, and only this environment will be used throughout the entire document. When you deploy the streams onto the environment, you will use the following configuration:
- Key/Value types will both be STRING
- retention time and segment.ms of at least 157680000000 (5 years)
- a single (one) partition
Streams to create and deploy:
- The first stream will be a nickname (<nickname>) for the database. You can choose any value for this (it's for internal use only), but it has to be unique within Axual-Connect. In our example, the DB server and stream name will be my_sqlserver_name. This stream contains schema-change events that describe schema changes applied to captured tables in the database.
- The second stream's name will be the composition of <nickname>.<Database name>.<table name>, for every table we intend to watch. In our example, we'll only watch a single table, so we'll create a single stream, named my_sqlserver_name.demo.categories. This naming pattern is enforced by the connect-plugin. These streams will contain events that happen for every table, including (but not limited to) insertions, updates, and deletions.
- The third and final stream will store the database schema history. It can have any name, so we'll call it <nickname>._schema-changes for consistency: my_sqlserver_name._schema-changes.
3. Provide default connector configuration
Follow the Configuring Connector-Applications documentation to set up a new Connect-Application. Let's call it my_categories_app. The plugin name is "io.debezium.connector.sqlserver.SqlServerConnector". Configure the security certificate as instructed. The values you will need to supply as configuration are listed in this section.
For advanced configuration, see the official connector documentation.
(The property table for this section did not survive conversion to this format; only fragments of its example-value labels remain. Its last entry is the schema-history topic, the 3rd topic we created, using the <nickname>._schema-changes naming convention.)
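Since the exact values are environment-specific, here is a non-authoritative sketch of a typical Debezium SQL Server source configuration for this walkthrough. The property names follow the Debezium connector documentation; the host, password, and bootstrap servers are placeholders you must replace with your own values:

```
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
# The public IP address you noted down from the Cloud SQL instance page
database.hostname=<public IP of the SQL Server instance>
database.port=1433
database.user=sqlserver
database.password=<the password you set for the sqlserver user>
database.dbname=database_name
# Logical server name: the <nickname> used in the stream names
database.server.name=my_sqlserver_name
table.include.list=demo.categories
# The 3rd stream we created
database.history.kafka.topic=my_sqlserver_name._schema-changes
database.history.kafka.bootstrap.servers=<your Kafka bootstrap servers>
```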
4. Create an additional application
Compared to other connectors, the Debezium - SQL Server connector has an additional requirement: it needs to read the stream it normally produces to. The self-service UI doesn't allow this yet (bug tracked here, internal). To work around this limitation, we'll create a consumer application using the same certificate as the Debezium connector and authorize it to consume the stream instead.
Follow the Creating a Custom Application documentation section (2022.1@axual::self-service/application-management.html.adoc#creating-a-custom-application).
Use the following configuration:
- application ID: my_sqlserver_name-dbhistory (<nickname>-dbhistory)
- name: my_categories_app_aux_consumer (example)
- short_name: my_categories_app_aux_consumer (example)
- Owner: same as the Connect-Application
- Type: Custom
- Application type: any value.
- Visibility: private
When deploying the application on the same environment as the Connector-Application, supply the same certificate you used for the Connector-Application.
5. Authorize applications
Authorize the my_categories_app_aux_consumer Custom-Application to consume the my_sqlserver_name._schema-changes stream.
Authorize the my_categories_app Connector-Application to produce to all 3 streams we just created:
- my_sqlserver_name
- my_sqlserver_name.demo.categories
- my_sqlserver_name._schema-changes
6. Provide advanced (custom) configuration to the connector
Edit the Connector-Application configuration. We need to supply additional (custom) properties, with keys which are not automatically available in the self-service UI.
Many configuration values are of the form:
- Pattern: ${keyvault:connectors/<tenant>/<instance short-name>/<environment short-name>/<application short-name>:property.name}
- Resolved example: ${keyvault:connectors/tenant/instance/environment/my_categories_app:property.name}
Please update the values which are of this form, in the right column of the table, as per your findings during Step #1. The dollar sign and the brackets are part of the value; do not remove them!
(The table of custom property keys and their keyvault-based values did not survive conversion to this format.)
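As a rough, non-authoritative sketch only: the database.history.producer.* and database.history.consumer.* prefixes are Debezium's pass-through settings for the schema-history Kafka clients, but the specific ssl.* keys and keyvault paths below are assumptions; your platform operator provides the actual keys and values. A typical shape would be:

```
database.history.producer.security.protocol=SSL
database.history.producer.ssl.keystore.location=${keyvault:connectors/tenant/instance/environment/my_categories_app:ssl.keystore.location}
database.history.producer.ssl.keystore.password=${keyvault:connectors/tenant/instance/environment/my_categories_app:ssl.keystore.password}
database.history.producer.ssl.key.password=${keyvault:connectors/tenant/instance/environment/my_categories_app:ssl.key.password}
database.history.producer.ssl.truststore.location=${keyvault:connectors/tenant/instance/environment/my_categories_app:ssl.truststore.location}
database.history.producer.ssl.truststore.password=${keyvault:connectors/tenant/instance/environment/my_categories_app:ssl.truststore.password}
database.history.consumer.security.protocol=SSL
# ...and the same ssl.* keys repeated with the database.history.consumer. prefix
```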
You can now start the Connector-Application.
Cleanup
Once you are done, stop the connector application and clean up the unused Axual resources.
In case you deployed your resources via Google Cloud, don’t forget to delete your bucket and your database once you are done.