JDBC Source Connector — SQL Server
| Type | Source |
| Class | io.aiven.connect.jdbc.JdbcSourceConnector |
| Target System | Database (SQL Server) |
| Maintainer | Aiven |
| License | Apache License 2.0 |
| Project | github.com/aiven/jdbc-connector-for-apache-kafka |
| Download | Maven Central |

| This page documents version 6.12.0. Newer versions should be compatible unless there are breaking changes, but field names or default values may differ. If you notice discrepancies, please contact Axual Support. |
Description
The JDBC Source Connector for SQL Server periodically polls SQL Server tables and publishes new or changed rows as records to Kafka topics.
It is maintained by Aiven as part of the open-source project github.com/aiven/jdbc-connector-for-apache-kafka.
Features
- Poll SQL Server tables and publish rows to Kafka topics
- Multiple polling modes: incrementing, timestamp, timestamp+incrementing, and bulk
- Configurable table whitelist and topic prefix for flexible topic naming
- Schema-aware: works with JSON (with schema) and Avro converters
- Offset tracking ensures rows are not re-published across connector restarts
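The incrementing, timestamp, and timestamp+incrementing modes rely on suitable columns in the source table. As a sketch, a table meant for timestamp+incrementing mode could pair an auto-incrementing key with a last-modified timestamp (the demo.products table and updated_at column below are hypothetical, used only for illustration):

-- Hypothetical table suited to timestamp+incrementing mode:
-- product_id would back incrementing.column.name,
-- updated_at would back timestamp.column.name.
CREATE TABLE demo.products (
    product_id    INT IDENTITY PRIMARY KEY,
    product_name  VARCHAR(255) NOT NULL,
    updated_at    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

In this mode the connector uses the timestamp column to detect new and updated rows, and the incrementing column to disambiguate rows that share the same timestamp.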
When to Use
- You need to periodically ingest rows from a SQL Server table into Kafka.
- Your use case tolerates polling latency: near-real-time is sufficient and millisecond CDC latency is not required.
- You want a simple database integration without the operational overhead of CDC.
When NOT to Use
- You need real-time change data capture; use Debezium SQL Server instead.
- Your database schema changes frequently; this connector does not support schema evolution and requires manual reconfiguration on schema changes.
| SQL Server deployed via Google Cloud may incur a licensing fee that greatly exceeds the cost of the server itself. Complete this guide within a single day and decommission all resources when done. |
Installation
The connector is available from Maven Central.
- Navigate to the releases page and select the version matching your Kafka Connect installation.
- Download the JAR file.
For installation steps, see Installing Connector Plugins.
Configuration
For the complete configuration reference, see the official source connector documentation.
| To configure a connector in Axual Self-Service, see Starting Connectors. TIP: To speed up your deployment, use the Terraform Boilerplate or the Management API Boilerplate. |
Getting Started
This section walks through setting up the JDBC Source Connector using a SQL Server instance on Google Cloud.
Prerequisites
SQL Server instance
You need a running SQL Server instance reachable from the Kafka Connect cluster.
We’ll deploy a SQL Server instance using Google Cloud SQL.
If you have a Google account, you can sign up for a free trial of Google Cloud.
You can read more about SQL Server on Google Cloud here.
- Create a new SQL Server instance.
  - Instance ID: my-axual-sqlserver-connector-test
    The instance ID is irrelevant for our example: it will only be displayed in your Google overview.
  - Password: You can set the password for the sqlserver user of the database. We’ll use this password when configuring the Connector-Application later. You can reset this password later.
  - Database: Select "SQL Server" (likely the default option).
  - Choose a configuration to start with: Development
  - Region: europe-west1
    The region isn’t very relevant for our example, but usually you would select a region geographically closest to the Connect-Cluster.
  - Zonal availability: Single zone
  - Customize your instance: Click "Show configuration options".
    - Machine type: Click the dropdown menu and select Lightweight. 1 vCPU is enough.
    - Storage: Go for the least amount of storage.
      - If available, use HDDs.
      - If given the option, do not enable automatic storage increases.
    - Connections: Leave only "Public IP" selected.
      - Authorized networks: Click "Add network". Use any name and 0.0.0.0/0 as the CIDR notation (and click "Done").
        This will open the database to the entire internet. That’s OK; we’ll delete it shortly anyway.
    - Authentication: Skip this config if it’s present.
    - Data protection:
      - Enable backups. Under "Advanced options", select (single) Region (europe-west1).
      - If available, enable point-in-time recovery.
      - Disable deletion protection.
    - Maintenance: Skip this config.
    - Flags and parameters: Skip this config.
    - Labels: Skip this config.
  - Click "Create instance".
- While the database server is getting deployed, let’s create a bucket.
  - Name your bucket: sqlserver-init-axual-connect (or something different, in case the name is already taken). Skip labels. Click "Continue".
    The bucket name is irrelevant for our example: it will only be displayed in your Google overview.
  - Choose where to store your data:
    - Region: europe-west1 (you don’t need multi- or dual-region; a single region is enough).
      The region isn’t very relevant for our example, but usually you would select a region geographically closest to the Connect-Cluster.
  - Choose a default storage class for your data: Standard. Click "Continue".
  - Choose how to control access to objects: Uniform. Click "Continue".
  - Choose how to protect object data: None
    - Data encryption: Skip this config.
  - Click "Create" to create the bucket.
    If you get a pop-up with "Public access will be prevented", click "Confirm".
  - The bucket page will open. Click "Upload files".
  - Save the following SQL to a file on your system, then click "Upload files" to upload it into the bucket. It creates the test schema, table, and seed data:
CREATE SCHEMA demo
GO

CREATE TABLE demo.categories (
    category_id   INT IDENTITY PRIMARY KEY,
    category_name VARCHAR(255) NOT NULL
);

SET IDENTITY_INSERT demo.categories ON;

INSERT INTO demo.categories (category_id, category_name)
VALUES
    (1, 'Names'),
    (2, 'Places'),
    (3, 'Hobbies');

SET IDENTITY_INSERT demo.categories OFF;
- You can close the bucket page. Let’s go back to our SQL instance.
  Select your instance to open it. Note down the public IP address. We’ll use it when configuring the Connector-Application later.
- [Optional] Change the sqlserver user’s password:
  - On the left-side menu, click "Users".
  - Click the dots next to the sqlserver user.
  - Click "Change password".
  - Type a new password and click "OK".
- On the left-side menu, click "Databases".
  - Click "Create Database". Use database_name as the name, since this is the name used throughout this example setup (we’ll reference it again in the connector configuration).
  - Click "Create".
- On the left-side menu, click "Overview". Click the "Import" button.
  - Source: Click "Browse".
    - Double-click your bucket and select the SQL file we saved earlier.
    - Click "SELECT" at the bottom of the page.
  - File format: SQL
  - Destination: database_name
  - Click "Import".
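Once the import finishes, you can optionally sanity-check the seed data from any SQL client that can reach the instance. A minimal check, assuming the demo schema created by the script above:

-- Verify the seed data; expect three rows: Names, Places, Hobbies.
SELECT category_id, category_name
FROM demo.categories;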
Axual stream
The stream where the connector will produce events must already exist in Axual Self-Service. See Creating streams if you need to create it.
Steps
Step 1 — Create a connector application
- Follow the Creating streams documentation in order to create one stream and deploy it onto an environment.
  The name of the stream will be sqlserver_demo.categories.
  The key/value types will be JSON/JSON.
- Follow the Configure and install a connector documentation to set up a new Connector-Application.
  Let’s call it my.sqlserver.source.
  The plugin name is io.aiven.connect.jdbc.JdbcSourceConnector.
  If the plugin isn’t available, ask a platform operator to install plugins.
Step 2 — Configure the connector
- Provide the following minimal configuration:
  - connector.class: io.aiven.connect.jdbc.JdbcSourceConnector
  - connection.url: jdbc:sqlserver://PASTE_THE_IP_ADDRESS:1433;databaseName=database_name;
    Use the IP address noted when creating the SQL Server instance.
  - connection.user: sqlserver
  - connection.password: Your database password
  - mode: incrementing
  - incrementing.column.name: category_id (the primary key column)
  - table.whitelist: categories (case-sensitive)
  - topic.prefix: sqlserver_
  - value.converter: org.apache.kafka.connect.json.JsonConverter
  - value.converter.schemas.enable: true
    Include the schema envelope so a sink connector can parse each event.

For advanced options, see the official source connector documentation.
Step 3 — Start the connector
Start the connector application from Axual Self-Service.
Once running, new rows inserted into the categories table will be published to the sqlserver_demo.categories stream.
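To see the incrementing mode pick up new data, insert a test row into the source table. A minimal sketch, assuming the demo.categories table created during setup (the 'Foods' value is just an example):

-- The IDENTITY column assigns the next category_id automatically,
-- so the connector detects the new row on its next poll.
INSERT INTO demo.categories (category_name)
VALUES ('Foods');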
Step 4 — Verify
In Axual Self-Service, use stream-browse on sqlserver_demo.categories to confirm events are arriving.
An example event:
{
"key": null,
"value": {
"schema": {
"name": "categories",
"type": "struct",
"optional": false,
"fields": [
{ "field": "category_id", "type": "int32", "optional": false },
{ "field": "category_name", "type": "string", "optional": false }
]
},
"payload": {
"category_id": 1,
"category_name": "Names"
}
}
}
The schema field is present because value.converter.schemas.enable=true. The JDBC Sink Connector uses this schema to construct INSERT statements.
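For illustration only, a JDBC sink consuming this event could derive a statement along these lines (the exact SQL depends on the sink’s own configuration, such as target table name and insert mode):

-- Hypothetical statement a JDBC sink might generate from the event above.
INSERT INTO categories (category_id, category_name)
VALUES (1, 'Names');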
Cleanup
When you are done:
- Stop the connector application in Axual Self-Service.
- Remove stream access for the application if no longer needed.
- Delete your Cloud Storage bucket and your SQL instance if they were created only for testing.
Known limitations
- This connector uses polling; it does not provide real-time change detection. Use Debezium SQL Server for CDC.
- table.whitelist values are case-sensitive; the table name must match exactly as it appears in SQL Server.
- Schema evolution is not supported; changes to the source table schema require manual reconfiguration.
Examples
Minimal configuration
{
"name": "my-jdbc-sqlserver-source",
"config": {
"connector.class": "io.aiven.connect.jdbc.JdbcSourceConnector",
"connection.url": "jdbc:sqlserver://123.123.123.123:1433;databaseName=database_name;",
"connection.user": "sqlserver",
"connection.password": "<your-database-password>",
"mode": "incrementing",
"incrementing.column.name": "category_id",
"table.whitelist": "categories",
"topic.prefix": "sqlserver_",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "true"
}
}
License
The SQL Server JDBC Connector is licensed under the Apache License, Version 2.0.