This code runs multiple microservices that absorb transactions either via Kafka or via CSV files stored in AWS S3. data-consumer consumes transactions from data-via-kafka and data-via-csv. data-consumer also produces its own transactions, tagged as CONSUMER, whereas data coming from the other sources is tagged as KAFKA or CSV. Success and Failed transactions are streamed live; 10% of transactions fail (see the sketch after the service list below).
- data-consumer (consumes from self, Kafka, and S3; persists data to MongoDB; streams Success and Failed transactions)
- data-via-kafka (produces data and publishes it to Kafka)
- data-via-csv (produces data in CSV format; periodically stores it to AWS S3)
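The tagging and failure behavior described above can be pictured with a small sketch. The TransactionSource enum, Transaction record, and the 10% failure draw below are illustrative assumptions, not the repository's actual classes:

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical model: each transaction carries the source it was absorbed from.
enum TransactionSource { CONSUMER, KAFKA, CSV }

record Transaction(String id, TransactionSource source, boolean failed) {

    // Roughly 10% of transactions are marked as failed, matching the rate above.
    static Transaction create(TransactionSource source) {
        boolean failed = ThreadLocalRandom.current().nextInt(100) < 10;
        return new Transaction(UUID.randomUUID().toString(), source, failed);
    }
}
```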
Follow the steps mentioned in Commands to run this code locally.
Start Kafka by running the docker-compose.yml under the folder data-via-kafka/kafka-setup, or run it directly with Docker:
docker run --name kafka -p "9092:9092" --volume ./data:/tmp/kafka-logs -d apache/kafka:3.7.0
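With the broker listening on localhost:9092, data-via-kafka can publish transactions with a plain Kafka producer. This is a minimal sketch, assuming a topic named transactions and a JSON string payload; the repository's actual topic name and serializers may differ:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker started above
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // "transactions" is an assumed topic name; the payload is a stand-in JSON body.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("transactions",
                    "txn-1", "{\"source\":\"KAFKA\",\"failed\":false}"));
        }
    }
}
```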
Recipes are added for the license header and static imports. Refer to OpenRewrite.
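As a rough sketch of such a setup, a declarative rewrite.yml could combine the two recipes as below. The recipe name, license text, and method pattern are placeholder assumptions; the repository's actual configuration may differ:

```yaml
type: specs.openrewrite.org/v1beta/recipe
name: com.example.LicenseAndStaticImports   # placeholder recipe name
displayName: Apply license header and static imports
recipeList:
  - org.openrewrite.java.AddLicenseHeader:
      licenseText: Copyright Example Inc.   # placeholder license text
  - org.openrewrite.java.UseStaticImport:
      methodPattern: org.junit.jupiter.api.Assertions assert*(..)  # example pattern
```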
Note: Running locally via Kubernetes needs a higher memory footprint. It is recommended to run data-ai using IntelliJ.
- Refer to the video on Ollama
- Library: https://github.com/ollama/ollama
- Llama 2 7B (3.8GB): ollama run llama2
- Llama 2 13B (7.3GB): ollama run llama2:13b
- REST API (a minimal client sketch follows this list)
- https://ollama.com/
- https://useanything.com/
- Ollama Discord
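For the REST API mentioned above, Ollama listens on localhost:11434 by default. This is a minimal client sketch, assuming the llama2 model has already been pulled (ollama run llama2); the prompt is just an example:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaClient {
    public static void main(String[] args) throws Exception {
        // Ollama's local generate endpoint; stream=false returns one JSON document.
        String body = """
                {"model": "llama2", "prompt": "Summarize this transaction log.", "stream": false}
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON containing the generated "response" field
    }
}
```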