In this article, we are going to walk through, step by step, how to build a Conduit connector.
Conduit connectors communicate with Conduit by writing records into the pipeline (a source connector), by reading records out of the pipeline and delivering them elsewhere (a destination connector), or both.
For this example, we are going to build an Algolia destination connector. The goal of this connector is to let users send data to Algolia; in the context of search engines, this is called indexing. Since Conduit is a generic tool for moving data between data infrastructure, this new connector lets us index data from any Conduit source (PostgreSQL, Kafka, etc.).
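Before we get into the connector itself, it helps to see what "indexing" means in practice. The sketch below uses Algolia's JavaScript client to upsert a few records into an index; conceptually, this is the operation our destination connector performs for every record it receives from the pipeline. The app ID, API key, and index name are placeholders, not real values.

```typescript
import algoliasearch from "algoliasearch";

// Placeholder credentials and index name -- replace with your own.
const client = algoliasearch("YOUR_APP_ID", "YOUR_ADMIN_API_KEY");
const index = client.initIndex("products");

// Indexing a batch of records: this is what the destination connector
// does with the records that arrive from the pipeline.
async function indexRecords(records: Array<Record<string, unknown>>) {
  // saveObjects upserts objects; Algolia uses objectID as the unique key.
  const { objectIDs } = await index.saveObjects(
    records.map((r, i) => ({ ...r, objectID: String(i) }))
  );
  console.log(`Indexed ${objectIDs.length} records`);
}

indexRecords([{ name: "Conduit", type: "data tool" }]);
```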
The Conduit Kafka Connect Wrapper is a special connector that allows you to use Kafka Connect connectors with Conduit. Conduit doesn't come bundled with Kafka Connect connectors, but this wrapper lets you bring any Kafka Connect connector to Conduit.
This connector gives you the ability to:
Easily migrate from Kafka Connect to Conduit.
Remove Kafka as a dependency for moving data between data infrastructure.
Use a datastore even if Conduit doesn't have a native connector for it.
The Conduit Kafka Connect Wrapper itself is written in Java, while most of Conduit's connectors are written in Go, so it also serves as a good example of the flexibility of the Conduit Plugin SDK.
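To give a concrete feel for how the wrapper is used, here is a sketch of creating a connector through Conduit's API that points at the wrapper plugin. The plugin path, the wrapper.connector.class setting key, and the JDBC settings are illustrative assumptions; consult the wrapper's README for the exact configuration it expects.

```typescript
import axios from "axios";

// A sketch of creating a connector backed by the Kafka Connect wrapper.
// Plugin path, setting keys, and pipeline ID are illustrative; check the
// wrapper's README for the configuration your version expects.
async function createJdbcSource(pipelineId: string) {
  const response = await axios.post("http://localhost:8080/v1/connectors", {
    type: "TYPE_SOURCE",
    plugin: "/path/to/conduit-kafka-connect-wrapper",
    pipeline_id: pipelineId,
    config: {
      name: "jdbc-source",
      settings: {
        // The wrapper forwards these settings to the Kafka Connect connector.
        "wrapper.connector.class": "io.aiven.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://localhost:5432/mydb",
        "tables": "my_table",
      },
    },
  });
  return response.data;
}
```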
By default, Conduit ships with a REST API that allows you to automate the creation of data pipelines and connectors. To make it easy to get started with the API, we have provided a Swagger UI to visualize and interact with Conduit without having to write any code...yet 😉.
After you start Conduit, if you navigate to http://localhost:8080/openapi/, you will see a page that looks like this:
Then, after you test the API, you can write code to make the equivalent request. For example, here is how you would make a request using the axios Node.js library.
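The sketch below creates a pipeline via POST /v1/pipelines. The payload shape mirrors what the Swagger UI documents, so double-check it against the version of Conduit you are running.

```typescript
import axios from "axios";

// Create a pipeline through Conduit's REST API. The endpoint and payload
// follow what the Swagger UI shows for POST /v1/pipelines; verify against
// your Conduit version.
async function createPipeline() {
  const response = await axios.post("http://localhost:8080/v1/pipelines", {
    config: {
      name: "my-first-pipeline",
      description: "Created via the Conduit API",
    },
  });
  console.log("Created pipeline:", response.data);
  return response.data;
}

createPipeline();
```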
To open the Swagger UI, navigate to http://localhost:8080/openapi in your browser. This UI allows you to interact with the API and create connectors. It also serves as a reference for the API.
In this guide, we will build a data pipeline that moves data between files. This example is a great way to get started with Conduit on a local machine, but it's also the foundation of use cases such as log aggregation.
Every time data is appended to src.log, it will be moved in real time to dest.log.
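If you prefer to script this setup, the file connectors can be created through the same API. The sketch below assumes the built-in file plugin is addressed as "builtin:file" and accepts a "path" setting; verify both in the Swagger UI for your version.

```typescript
import axios from "axios";

const API = "http://localhost:8080/v1";

// Attach a file connector to an existing pipeline. The plugin name and
// the "path" setting are assumptions based on Conduit's built-in file
// connector; confirm them in the Swagger UI.
async function createFileConnector(
  pipelineId: string,
  type: "TYPE_SOURCE" | "TYPE_DESTINATION",
  path: string
) {
  const response = await axios.post(`${API}/connectors`, {
    type,
    plugin: "builtin:file",
    pipeline_id: pipelineId,
    config: {
      name: type === "TYPE_SOURCE" ? "file-source" : "file-destination",
      settings: { path },
    },
  });
  return response.data;
}

async function main(pipelineId: string) {
  await createFileConnector(pipelineId, "TYPE_SOURCE", "./src.log");
  await createFileConnector(pipelineId, "TYPE_DESTINATION", "./dest.log");
  // Start the pipeline so records begin flowing from src.log to dest.log.
  await axios.post(`${API}/pipelines/${pipelineId}/start`);
}
```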
In this guide, we will build a data pipeline that moves data between a Kafka topic and a Postgres table. We will also use Docker to run local instances of Apache Kafka and Postgres.
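The connectors for this pipeline can be created with the same POST /v1/connectors call shown earlier; only the plugin names and settings change. The setting keys below (servers, topic, url, table) are illustrative of Conduit's built-in Kafka and Postgres connectors, so confirm the exact names in the Swagger UI before using them.

```typescript
// Illustrative connector configurations for this guide's pipeline.
// Plugin names and setting keys are assumptions based on Conduit's
// built-in connectors; confirm them in the Swagger UI or connector docs.
const kafkaSource = {
  type: "TYPE_SOURCE",
  plugin: "builtin:kafka",
  pipeline_id: "<your-pipeline-id>",
  config: {
    name: "kafka-source",
    settings: { servers: "localhost:9092", topic: "my_topic" },
  },
};

const postgresDestination = {
  type: "TYPE_DESTINATION",
  plugin: "builtin:postgres",
  pipeline_id: "<your-pipeline-id>",
  config: {
    name: "postgres-destination",
    settings: {
      url: "postgres://user:password@localhost:5432/mydb",
      table: "my_table",
    },
  },
};
```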