- Watch out when importing the JSON serializer.
- TDD has died
- Synchronous producing has been used here (see the producer sketch after this list).
- You may need to split/combine consumer groups (for Conduktor and Spring).
- UI tools don't consume messages: the default value of auto.offset.reset is 'latest'. When the app starts from scratch and the consumer group has no committed offset, consumption starts from the offset of the latest message (see the consumer sketch below).
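The fix for that note in code, as a minimal sketch assuming a plain KafkaConsumer against a local broker; the topic and group names are made up. Setting auto.offset.reset to 'earliest' makes a group with no committed offset start from the beginning instead of the latest message:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EarliestOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Default is "latest": a brand-new group starts at the end and sees nothing old.
        // "earliest" makes a group with no committed offset read from the beginning.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```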
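And the synchronous producing mentioned a few items up, as a sketch under the same assumptions (local broker, hypothetical topic). send() itself is asynchronous, so blocking on the returned Future is what makes it synchronous:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class SyncProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the Future until the broker acknowledges
            // is what "synchronous producing" means here.
            RecordMetadata meta = producer.send(
                    new ProducerRecord<>("demo-topic", "key", "hello")).get();
            System.out.printf("partition=%d offset=%d%n", meta.partition(), meta.offset());
        }
    }
}
```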
- $${\color{green} done}$$ Sending strings butchered the message, ffff, but it works :P. Edit: it may be fixable through the Serializer interface, but that is rather complicated; use a package (JSON, Avro (the best), or Jackson/Gson) for now. Edit 2: Avro has been used with Schema Registry (see the Avro sketch below).
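A sketch of the Avro route from Edit 2, assuming Confluent's KafkaAvroSerializer and a local Schema Registry; the topic name and the schema itself are made up for illustration:

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Confluent's Avro serializer registers/looks up the schema in Schema Registry.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");            // assumption

        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Message\"," +
                "\"fields\":[{\"name\":\"text\",\"type\":\"string\"}]}"); // hypothetical schema

        GenericRecord value = new GenericData.Record(schema);
        value.put("text", "no more butchered strings");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", value)); // hypothetical topic
            producer.flush();
        }
    }
}
```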
- $${\color{green} done}$$ IsCompacted says NO. How, why? It pertains to 'compact' in cleanup.policy, i.e. no bug here (see the compacted-topic sketch below).
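Since it came down to cleanup.policy, here is a sketch of creating a compacted topic with the AdminClient; the topic name and the partition/replication counts are assumptions:

```java
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (AdminClient admin = AdminClient.create(props)) {
            // cleanup.policy=compact keeps the latest record per key instead of
            // deleting by age/size; this is what IsCompacted reports on.
            NewTopic topic = new NewTopic("compacted-demo", 1, (short) 1) // hypothetical topic
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                    TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}
```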
- Read metadata such as the offset along with the record headers.
- What if you have multiple partitions?
- What if you have multiple brokers?
- Asynchronous producing, handling errors via a callback (see the sketch after this list).
- Read some of his articles that seem useful (https://medium.com/@pravvich). NAH, non-satisfactory.
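For the asynchronous-producing item above, a minimal sketch (same local-broker assumption, hypothetical topic): pass a Callback to send() and handle the error there instead of blocking; the callback's RecordMetadata also carries the partition and offset:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AsyncProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "hello"), // hypothetical topic
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Handle the failure here: log, retry, dead-letter, etc.
                            exception.printStackTrace();
                        } else {
                            System.out.printf("acked partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        }
                    });
            producer.flush(); // make sure the callback fires before the sketch exits
        }
    }
}
```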
- delivery.timeout.ms in the producer config depends on whether producing is synchronous or asynchronous, I gather. BUT the suggested approach is to set it to the maximum time you are willing to wait for a request to complete, with effectively infinite retries (see the config sketch after this list).
- compression.type
- enable.idempotence is true by default, but what happens if it is set to false?
- I see a host of tests.
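A sketch tying the three producer settings above together (delivery.timeout.ms, compression.type, enable.idempotence); the values are illustrative, not recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuningSketch {
    static Properties tunedProducerProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        // Upper bound on the whole send (including retries); with retries left at
        // Integer.MAX_VALUE, this single timeout governs when a send is reported failed.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        // Compress batches on the producer side: "gzip", "snappy", "lz4", "zstd", or "none".
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        // true by default since Kafka 3.0; setting it false allows duplicates on retry,
        // because the broker can no longer de-duplicate by producer id + sequence number.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        return props;
    }
}
```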
later on
- cleanup.policy
- The default commit and subscribe mechanics have been used; play with them.
- auto.commit.interval.ms: disable auto-commit; manual commit is the key (see the sketch after this list).
- client.dns.lookup
- kafka connect (later on, along with Debezium)
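For the manual-commit item in this list, a minimal sketch assuming a plain KafkaConsumer: disable auto-commit (auto.commit.interval.ms then becomes irrelevant) and commit only after the batch has been processed:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-commit-group");     // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Turn auto-commit off so offsets are committed only when we say so.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // your handling logic
                }
                // Commit only after processing succeeded: at-least-once semantics.
                consumer.commitSync();
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}
```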
- kafka streams w/ streams dsl
- kafka streams w/ processor api
- windowing
- consumer groups
- offset management
- retention policy
- exactly once
- transactional consuming
- leader election
- reassignment
- schema registry
- ksqlDB