PostgreSQL Source Connector, version 1.9.6.Final

PostgreSQL

If you already have a publicly accessible deployment of PostgreSQL, you can configure and install a connector using the Axual Self-Service UI right away. Use the configuration instructions below.
If you don’t have one available, follow the next section to deploy a publicly available instance of PostgreSQL.

Deploying a PostgreSQL instance

We’ll deploy a PostgreSQL instance using Google Cloud Services.
If you have a Google account, you can sign up for a free trial of Google Cloud.

You can read more about PostgreSQL on Google Cloud here.

Let’s get started.

  1. Create a new PostgreSQL instance.

    • instance ID: my-axual-postgresql-connector-test
      The instance ID is irrelevant for our example; it will only be displayed in your Google overview.

    • Password: You can set the password for the postgres user of the database. We’ll use this password as the database.password property when configuring the Connector-Application later.
      You can reset this password later.

    • Database version: Select "PostgreSQL 14" (likely the default option).

    • Choose a configuration to start with: Development

    • Region: europe-west1
      The region is irrelevant for our example, but usually you would select the region geographically closest to the Connect-Cluster.

    • Zonal availability: Single zone

    • Customize your instance: Click "Show configuration options"

      • Machine type: Click the dropdown menu and select "Lightweight"; 1 vCPU is enough.

      • Storage: choose the smallest amount of storage offered.

        • If available, use HDDs.

        • If given the option, do not enable automatic storage increases.

      • Connections: Leave only "Public IP" selected.

        • Authorized networks: Click "Add network". Use any name and 0.0.0.0/0 as the CIDR range, then click "Done".
          This opens the database to the entire internet. That's acceptable for this walkthrough; we'll delete the instance shortly anyway.

      • Authentication: Skip this config if it’s present

      • Data protection:

        • Enable backups. Under "Advanced options", select (single) Region (europe-west1)

        • If available, enable point-in-time recovery

        • Disable deletion protection

      • Maintenance: Skip this config

      • Flags and parameters: cloudsql.logical_decoding : on
        The effect of this flag is to set "wal_level" to "logical", which the Debezium connector requires for logical replication.

      • Labels: Skip this config

    • Click "Create instance".

  2. While the database server is being deployed, let's create a storage bucket.

    • Name your bucket: postgresql-init-axual-connect.
      Skip labels.
      Click "Continue".
      The bucket name is irrelevant for our example; it will only be displayed in your Google overview.

    • Choose where to store your data:

      • Region: europe-west1
        The region is irrelevant for our example, but usually you would select the region geographically closest to the Connect-Cluster.

    • Choose a default storage class for your data: Standard.
      Click "Continue".

    • Choose how to control access to objects: Uniform.
      Click "Continue".

    • Choose how to protect object data: None

      • Data encryption: Skip this config

  3. Click "Create" to create the bucket.
    If you get a pop-up with "Public access will be prevented", click "Confirm".

  4. The bucket page will open. Click "Upload files"

  5. Save the following text to a file on your system, and then click "Upload files" to upload it into the bucket:

    -- Grant the postgres user the replication privilege Debezium needs
    ALTER USER postgres REPLICATION;
    
    -- The table the connector will watch for changes
    CREATE TABLE CUSTOMERS
    (
      user_id        serial        PRIMARY KEY,
      customer_name  VARCHAR(100)  NOT NULL,
      email          VARCHAR(250)
    );
    
    -- Seed a few rows so the connector has data for its initial snapshot
    INSERT INTO CUSTOMERS (customer_name, email) VALUES
    ('alin', null),
    ('frenk', null),
    ('doxxed', null);
  6. You can close the buckets page. Let’s go back to our SQL instance.
    Select your instance to open it. Note down the public IP address.
    We’ll use this as the database.hostname property when configuring the Connector-Application later.

  7. [Optional] Change the postgres user’s password:

    • On the left-side menu, click "Users".

    • Click the dots next to the postgres user.

    • Click "Change password".

    • Type a new password and click "OK".

  8. On the left-side menu, click "Databases".

    • Click "Create Database".
      Use inventory as the name, as this example setup assumes it (we'll import the SQL file into it below, and reference it again in the connector configuration).

      • Click "Create".

  9. On the left-side menu, click "Overview". Click the "Import" tab.

    • Source: click "Browse".

      • Double-click your bucket and select the SQL file we saved earlier.

      • Click "SELECT" at the bottom of the page.

    • File format: SQL

    • Destination: inventory

    • Click "Import"

Configuring a new source Connector

  1. Follow the Creating streams documentation to create one stream and deploy it onto an environment.
    The name of the stream will be postgresql_nickname.inventory.customers.
    The key/value types will be String/String.

    This stream name is the composition of <nickname>.<database name>.<table name>. This naming pattern is enforced by the Connect-Plugin.
    You would need to create one such stream for every DB table you intend to watch. In our example, we’ll only watch a single table, so we created a single stream.
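The composition rule above can be sketched as a tiny Python helper (hypothetical, for illustration only; the actual name is enforced by the Connect-Plugin):

```python
# Compose the stream name following the enforced pattern:
# <nickname>.<database name>.<table name>
def stream_name(nickname: str, database: str, table: str) -> str:
    return f"{nickname}.{database}.{table}"

print(stream_name("postgresql_nickname", "inventory", "customers"))
# -> postgresql_nickname.inventory.customers
```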

  2. Follow the Configure and install a connector documentation to set up a new Connector-Application.
    Let’s call it my_customers_app.
    The plugin name is io.debezium.connector.postgresql.PostgresConnector.

    For advanced configuration, see the official source connector documentation.

  3. Provide the following minimal configuration in order to connect to the previously configured PostgreSQL instance.

    database.hostname
    You noted down this IP address after creating the PostgreSQL instance.

    Example value:
    123.123.123.123

    database.port

    5432

    database.user

    postgres

    database.password
    You noted down this password while creating the PostgreSQL instance.

    Example value:
    1234abcdEFG

    database.server.name
    This is the <nickname> we used as a prefix for the stream we created.

    postgresql_nickname

    database.dbname

    inventory

    table.include.list

    public.customers

    We need to supply additional (custom) properties whose keys are not automatically available in the Self-Service UI. These disable automatic topic creation. The first property alone should be enough; the fact that the following two properties are also required is a bug within the connect-plugin itself.

    topic.creation.enable

    false

    topic.creation.default.partitions

    1

    topic.creation.default.replication.factor

    3
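Taken together, the minimal configuration above can be sketched as JSON with a small Python snippet (the hostname and password are the placeholder example values from this walkthrough; substitute your own):

```python
import json

# Minimal configuration for the PostgreSQL source connector, assembled
# from the values above. Hostname and password are placeholders.
config = {
    "database.hostname": "123.123.123.123",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "1234abcdEFG",
    "database.server.name": "postgresql_nickname",
    "database.dbname": "inventory",
    "table.include.list": "public.customers",
    # Custom properties: disable automatic topic creation.
    "topic.creation.enable": "false",
    "topic.creation.default.partitions": "1",
    "topic.creation.default.replication.factor": "3",
}

print(json.dumps(config, indent=2))
```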

  4. Authorize the my_customers_app source Connector-Application to produce to the postgresql_nickname.inventory.customers stream.

  5. You can now start the source Connector-Application.

  6. You can now check the postgresql_nickname.inventory.customers stream to see the events published by the Connector.
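The published events follow Debezium's change-event envelope. A minimal sketch of inspecting one such event (the sample payload below is hand-written for illustration, not captured from a real stream; real events carry additional schema and source metadata):

```python
import json

# Hand-written sample of a Debezium-style change event for one row.
sample = """
{
  "payload": {
    "before": null,
    "after": {"user_id": 1, "customer_name": "alin", "email": null},
    "op": "r"
  }
}
"""

payload = json.loads(sample)["payload"]
# "op" is "r" for snapshot reads, "c" for creates, "u" for updates, "d" for deletes.
if payload["op"] in ("r", "c"):
    print("new row:", payload["after"]["customer_name"])
# -> new row: alin
```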

Cleanup

Once you are done, stop the Connector-Application and clean up the unused Axual resources.
In case you deployed your resources via Google Cloud, don’t forget to delete your bucket and your SQL instance once you are done.