Producer
Producer<K, V> (Interface)
Package: io.axual.client.producer
The axualClient.buildProducer() call returns an implementation of the Producer interface.
AxualClient client = ...                                            (1)
ProducerConfig producerConfig = ProducerConfig.builder()
        .setKeySerializer(...)
        ...
        .setSslConfig(...)                                          (2)
        .build();                                                   (3)
Producer<K, V> producer = client.buildProducer(producerConfig);     (4)
ProducerMessage<K, V> message = ProducerMessage.<K, V>newBuilder()
        .setStream(...)
        ...
        .build();                                                   (5)
producer.produce(...);                                              (6)
(1) Build an AxualClient object.
(2) Build an SslConfig object.
(3) Build a ProducerConfig object.
(4) Build a Producer object.
(5) Build a ProducerMessage object.
(6) Start producing.
Methods
Future<ProducedMessage<K, V>> produce(ProducerMessage<K, V> message)
This method accepts a ProducerMessage object and returns a Future which, once the produce completes, yields a ProducedMessage object.
Future<ProducedMessage<K, V>> produce(ProducerMessage<K, V> message, ProduceCallback<K, V> callback)
This overload additionally accepts a ProduceCallback object, which is invoked after the produce succeeds or fails.
ProducerMessage<K, V>
A wrapper object containing the key, value and stream among other parameters of the message to be produced.
ProducerMessage<K, V> message = ProducerMessage.<K, V>newBuilder()
        .setStream(...)
        .setKey(...)
        .setValue(...)
        .build();

// produce message
producer.produce(message);
ProducedMessage<K, V>
A wrapper object returned after a successful produce() call. It carries metadata such as the topic partition, the offset of the message and the timestamp at which it was produced.
It also contains the name of the underlying Axual Kafka cluster and the logical instance to which the producer is connected.
To access the ProducedMessage
object, call the get()
method on the Future object.
Future<ProducedMessage<K, V>> future = producer.produce(...);
ProducedMessage<K, V> message = future.get();
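Because produce() returns a standard java.util.concurrent.Future, the wait can also be bounded with the timed get(timeout, unit) variant, which throws TimeoutException if the produce has not completed in time. A self-contained sketch of that pattern, using CompletableFuture as a stand-in for the produce result:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class FutureTimeoutDemo {

    // Stand-in for the Future<ProducedMessage<K, V>> returned by produce().
    // get() blocks until the result is available; the timed variant throws
    // TimeoutException if no result arrives within the given bound.
    public static String awaitResult() throws Exception {
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(() -> "produced");
        return future.get(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(awaitResult());
    }
}
```

Bounding the wait is advisable in request-handling threads, where an unbounded get() could block indefinitely if the cluster is unreachable.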
ProduceCallback<K, V>
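The exact ProduceCallback contract is not shown in this section. As a sketch of the general pattern only, the interface and a simple implementation might look like the following; the method names onComplete and onError, and the stand-in ProducedMessage type, are assumptions made so the example is self-contained, and may differ from the real io.axual.client.producer types.

```java
// Hypothetical stand-in so this sketch compiles on its own; the real
// ProducedMessage lives in io.axual.client.producer.
class ProducedMessage<K, V> {
    final K key;
    final V value;
    ProducedMessage(K key, V value) { this.key = key; this.value = value; }
}

// Hypothetical callback contract: one method per outcome (names assumed).
interface ProduceCallback<K, V> {
    void onComplete(ProducedMessage<K, V> message);
    void onError(Throwable cause);
}

// Example implementation: record success metadata, surface failures.
class LoggingCallback implements ProduceCallback<String, String> {
    String lastEvent;

    @Override
    public void onComplete(ProducedMessage<String, String> message) {
        lastEvent = "produced key=" + message.key;
    }

    @Override
    public void onError(Throwable cause) {
        lastEvent = "failed: " + cause.getMessage();
    }
}
```

A callback like this would be passed to the two-argument produce(message, callback) overload described above, letting the application react to the result without blocking on the Future.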
DeliveryStrategy
An enum used by producers to control the delivery and retry policy.
AT_MOST_ONCE
This strategy tries to deliver each message at most once and continues processing in all failure cases; if delivery fails, the message is lost. For producers this means each message is sent to Kafka only once.
AT_LEAST_ONCE
This strategy retries delivery upon technical failures, which can lead to message duplication in certain situations. Applications that adopt this strategy consider duplication the lesser evil; make sure your application handles messages idempotently. For producers this strategy ensures successful delivery to Kafka, with the message replicated to other nodes in the cluster.
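The behavioural difference between the two strategies can be illustrated with a minimal, self-contained retry sketch. This is not the Axual client's internal implementation, only the semantics: a simulated broker fails the first send, so a single attempt loses the message while a retrying sender eventually delivers it (and, if an acknowledgement were lost, a resend could duplicate it).

```java
import java.util.ArrayList;
import java.util.List;

public class DeliveryStrategyDemo {

    // Simulated broker that fails the first attempt, then succeeds.
    public static class FlakyBroker {
        int attempts = 0;
        public final List<String> delivered = new ArrayList<>();

        boolean send(String message) {
            attempts++;
            if (attempts == 1) {
                return false; // transient failure on the first attempt
            }
            delivered.add(message);
            return true;
        }
    }

    // AT_MOST_ONCE: a single attempt; on failure the message is lost.
    public static boolean atMostOnce(FlakyBroker broker, String message) {
        return broker.send(message);
    }

    // AT_LEAST_ONCE: retry until acknowledged (or retries are exhausted);
    // a lost acknowledgement would trigger a resend and hence a possible
    // duplicate on the broker side.
    public static boolean atLeastOnce(FlakyBroker broker, String message,
                                      int maxRetries) {
        for (int i = 0; i <= maxRetries; i++) {
            if (broker.send(message)) {
                return true;
            }
        }
        return false;
    }
}
```

Against the flaky broker above, atMostOnce() returns false and delivers nothing, while atLeastOnce() succeeds on the second attempt, which is exactly the trade-off the two enum values describe.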