Azure Cosmos DB Sink Connector
Type | sink
Class | com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector
Target System | Azure Cosmos DB
Maintainer | Microsoft
License | MIT License
Project | github.com/microsoft/kafka-connect-cosmosdb
Download | GitHub Releases
This page documents version 1.19.0. Newer versions should be compatible unless there are breaking changes, but field names or default values may differ. If you notice discrepancies, please contact Axual Support.
Description
The Azure Cosmos DB Sink Connector consumes records from Kafka topics and writes them into an Azure Cosmos DB container.
It is maintained by Microsoft as part of the open-source github.com/microsoft/kafka-connect-cosmosdb project.
Features
- Write Kafka records into Azure Cosmos DB containers
- Topic-to-container mapping via a configurable topicmap
- Supports JSON record formats
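As an illustration of the topicmap feature, the hypothetical fragment below maps two topics to two containers in the same database using the topic#containername syntax. The topic and container names are placeholders, and it assumes the connector accepts a comma-separated list of such pairs in connect.cosmos.containers.topicmap.
{
  "topics": "hotels,reviews",
  "connect.cosmos.databasename": "cosmosdbtesting",
  "connect.cosmos.containers.topicmap": "hotels#hotels-container,reviews#reviews-container"
}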
When to Use
- You need to write Kafka records into an Azure Cosmos DB container as part of a data pipeline.
- You want to materialise a Kafka topic as a Cosmos DB document collection.
When NOT to Use
- You need to read data from Cosmos DB into Kafka; use the Azure Cosmos DB Source Connector instead.
- Your Kafka records are not in JSON format.
Installation
The connector is available from the project's GitHub Releases page.
- Navigate to the releases page and select the version matching your Kafka Connect installation.
- Download the JAR file.
For installation steps, see Installing Connector Plugins.
Configuration
For the complete configuration reference, see the official connector documentation.
To configure a connector in Axual Self-Service, see Starting Connectors.
TIP: For Infrastructure-as-Code deployment, see the Axual Kafka Connect Boilerplates for Terraform and Management API boilerplates.
Getting Started
Prerequisites
Steps
Step 1 — Create a connector application
- Follow the Creating streams documentation to create one stream and deploy it onto an environment.
  The name of the stream will be hotels.
  The key/value types will be JSON/JSON; a sample record value is shown after these steps.
- Follow the Configure and install a connector documentation to set up a new Connector-Application.
  Let's call it my-custom-cosmosdb-instance.
  The plugin name is com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector.
  If the plugin isn't available, ask a platform operator to install it.
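Since the hotels stream uses JSON values, a record value could look like the hypothetical document below. All field names and values are placeholders; the id field is included because Cosmos DB items require an id property, and how the connector derives it can depend on its ID strategy settings.
{
  "id": "hotel-001",
  "name": "Grand Example Hotel",
  "city": "Amsterdam",
  "rating": 4
}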
Step 2 — Configure the connector
- Provide the following minimal configuration:
  key.converter | org.apache.kafka.connect.json.JsonConverter
  topics | hotels
  value.converter | org.apache.kafka.connect.json.JsonConverter
  connect.cosmos.connection.endpoint | Insert Azure Cosmos DB account endpoint. Example: https://my-cosmos-instance.documents.azure.com:443/
  connect.cosmos.master.key | Insert Azure Cosmos DB Master Key
  connect.cosmos.containers.topicmap | Format is topic#containername. Example: hotels#cosmosdbtesting
  connect.cosmos.databasename | Insert name of database. Example: cosmosdbtesting
  For advanced options, see the official connector documentation.
- Authorize the my-custom-cosmosdb-instance sink Connector-Application to consume the hotels stream.
Examples
Minimal configuration
{
"name": "my-cosmosdb-sink",
"config": {
"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"topics": "hotels",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connect.cosmos.connection.endpoint": "https://my-cosmos-instance.documents.azure.com:443/",
"connect.cosmos.master.key": "<your-cosmos-master-key>",
"connect.cosmos.containers.topicmap": "hotels#cosmosdbtesting",
"connect.cosmos.databasename": "cosmosdbtesting"
}
}
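Schemaless JSON variant
If your topic carries plain (schemaless) JSON rather than Kafka Connect's JSON envelope with embedded schema and payload fields, the JsonConverter typically needs schemas disabled. The sketch below is a variant of the minimal configuration under that assumption; the endpoint, key, database, and container names remain placeholders.
{
  "name": "my-cosmosdb-sink",
  "config": {
    "connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "topics": "hotels",
    "connect.cosmos.connection.endpoint": "https://my-cosmos-instance.documents.azure.com:443/",
    "connect.cosmos.master.key": "<your-cosmos-master-key>",
    "connect.cosmos.containers.topicmap": "hotels#cosmosdbtesting",
    "connect.cosmos.databasename": "cosmosdbtesting"
  }
}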
License
The Azure Cosmos DB Sink Connector is licensed under the MIT License.