hth/mercury

This repository runs multiple microservices that ingest transactions either via Kafka or via CSV files from AWS S3.

data-consumer consumes transactions from data-via-kafka and data-via-csv. It also produces its own transactions, tagged CONSUMER, while transactions from the other sources are tagged KAFKA and CSV. Successful and failed transactions are streamed live; roughly 10% of transactions are marked as failed.
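As a sketch of the tagging and failure-marking described above (the `Transaction`, `Source`, and `Status` names below are illustrative assumptions, not the repository's actual types):

```java
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative sketch only; class and field names are assumptions,
// not the repository's actual code.
public class TaggingSketch {

    enum Source { CONSUMER, KAFKA, CSV }
    enum Status { SUCCESS, FAILED }

    record Transaction(String id, Source source, Status status) { }

    static final double FAILURE_RATE = 0.10; // ~10% of transactions fail

    // Tag a transaction with its origin and mark roughly 10% as failed.
    static Transaction tag(String id, Source source, Random rng) {
        Status status = rng.nextDouble() < FAILURE_RATE ? Status.FAILED : Status.SUCCESS;
        return new Transaction(id, source, status);
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for a reproducible demo
        List<Transaction> batch = IntStream.range(0, 1000)
                .mapToObj(i -> tag("txn-" + i, Source.KAFKA, rng))
                .collect(Collectors.toList());
        long failed = batch.stream()
                .filter(t -> t.status() == Status.FAILED)
                .count();
        System.out.println("failed: " + failed + " / " + batch.size());
    }
}
```

Over a large batch the failed count converges toward the configured 10%.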

Navigate code

  • data-consumer (consumes from itself, Kafka, and S3; persists data to Mongo; streams successful and failed transactions)
  • data-via-kafka (produces data and publishes it to Kafka)
  • data-csv (produces data in CSV format; periodically stores it in AWS S3)

Solution to design

Design Problem

Running Locally

Follow the steps in Commands to run this code locally.

Standalone Kafka

Run Kafka

Start Kafka by running the docker-compose.yml under data-via-kafka/kafka-setup

OR

docker run --name kafka -p "9092:9092" --volume ./data:/tmp/kafka-logs -d apache/kafka:3.7.0
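Equivalently, a minimal docker-compose.yml mirroring the docker run command above might look like this (a sketch, not necessarily the file shipped in data-via-kafka/kafka-setup):

```yaml
# Minimal single-broker Kafka; mirrors the `docker run` command above.
services:
  kafka:
    image: apache/kafka:3.7.0
    container_name: kafka
    ports:
      - "9092:9092"
    volumes:
      - ./data:/tmp/kafka-logs
```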

OpenRewrite

Recipes are added for license headers and static imports. Refer to OpenRewrite
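As a hedged sketch of what such a rewrite.yml could look like, combining a license-header recipe with static-import cleanup (the recipe name, license text, and method pattern below are assumptions, not the repository's actual configuration):

```yaml
# Illustrative sketch; the repository's actual recipe may differ.
type: specs.openrewrite.org/v1beta/recipe
name: com.example.LicenseAndStaticImports   # hypothetical recipe name
displayName: Add license header and use static imports
recipeList:
  - org.openrewrite.java.AddLicenseHeader:
      licenseText: Copyright ...            # placeholder license text
  - org.openrewrite.java.UseStaticImport:
      methodPattern: org.junit.jupiter.api.Assertions assert*(..)
```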

Spring AI

Note: Running locally via Kubernetes needs a higher memory footprint; it is recommended to run data-ai from IntelliJ instead.

For more, refer to Spring AI.
