# Change Log

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/).

## [Embed.xyz Assessment] - October 13, 2022

Here we write the release notes for this assessment. It's a team effort to make them as straightforward as possible.

### Added

- **Initial API & Project Structure** Initial project structure: FastAPI & Mongo deployment with Docker-compose and the basic requirements (see the entry-point sketch after this list).
- **ELK Application Logging** #1 Here we deploy and configure the ELK Stack to store, search, analyze, and visualize data from our Python FastAPI application. Besides the ELK stack itself, we set up log shipping from the app with python-logstash: since the API already has a logging structure in place, we basically added the logstash handler to inject the data into Elasticsearch, making the logs available in Kibana (see the logging sketch after this list).
- **Authentication Module (Login/Register with JWT)** #2 Added the structure needed to secure routes via Bearer Token (JWT). We can easily protect a route with the `user_id: str = Depends(require_user)` dependency. We use the `fastapi_jwt_auth` library to handle the token workflow. We also developed the auth router and its routes: Register, Login, and MyProfile (see the auth sketch after this list).
- **Posts Module (Create, Search by String-match)** #3 Here we consolidate the API's readiness to receive new modules with the Posts module. For now, we created the full set of schemas and serializers for the default request/response API patterns. In practice, this means a user can create a new post (following the max-character specification) and search their existing posts (we return a list of objects, in this case posts). We already use the logging structure, so we can track what is going on in this module (see the posts sketch after this list).
- **Monitoring Infrastructure (Prometheus, Node Exporter and Grafana)** #4 To improve monitoring/observability and visualize important metrics such as CPU and memory, we deployed a few tools: Node Exporter exposes metrics from our nodes on an HTTP endpoint; Prometheus scrapes the data exposed by the exporter; and Prometheus is provisioned automatically as a datasource in Grafana by the Docker-compose setup, so we can apply the Node Exporter Dashboard 1860 and get a nice UI for our metrics (see the metrics sketch after this list).
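
Entry-point sketch: a minimal view of what the initial structure implies, assuming the app talks to Mongo through `motor`; the module path, URI, and routes are illustrative, not the repository's actual names.

```python
# app/main.py -- hypothetical entry point for the FastAPI & Mongo setup.
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient

# In the Docker-compose deployment the URI would come from an env variable;
# "mongo" is the assumed service name.
MONGO_URI = "mongodb://mongo:27017"

app = FastAPI(title="Embed.xyz Assessment API")

@app.on_event("startup")
async def connect_to_mongo():
    # Keep the client on app.state so routers can reach the database.
    app.state.mongo = AsyncIOMotorClient(MONGO_URI)

@app.on_event("shutdown")
async def close_mongo():
    app.state.mongo.close()

@app.get("/health")
async def health():
    return {"status": "ok"}
```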
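
Logging sketch: a minimal view of the log shipping described in #1, assuming python-logstash with a TCP input on the Logstash side; the host, port, and logger name are illustrative.

```python
import logging

import logstash

# Assumed service name/port of the Logstash container in docker-compose.
LOGSTASH_HOST = "logstash"
LOGSTASH_PORT = 5000

logger = logging.getLogger("api")
logger.setLevel(logging.INFO)
# version=1 emits the logstash v1 event format; extra fields become
# searchable fields in Elasticsearch/Kibana.
logger.addHandler(logstash.TCPLogstashHandler(LOGSTASH_HOST, LOGSTASH_PORT, version=1))

logger.info("post created", extra={"user_id": "42", "module": "posts"})
```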
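
Auth sketch: one way the `require_user` dependency could sit on top of `fastapi_jwt_auth` (#2); the secret key and the `/me` route are placeholders, and the project's real implementation may differ.

```python
from fastapi import Depends, FastAPI, Request
from fastapi.responses import JSONResponse
from fastapi_jwt_auth import AuthJWT
from fastapi_jwt_auth.exceptions import AuthJWTException
from pydantic import BaseModel

class Settings(BaseModel):
    authjwt_secret_key: str = "change-me"  # illustrative; load from env in practice

@AuthJWT.load_config
def get_config():
    return Settings()

app = FastAPI()

@app.exception_handler(AuthJWTException)
def authjwt_exception_handler(request: Request, exc: AuthJWTException):
    # Map auth failures to proper HTTP errors instead of generic 500s.
    return JSONResponse(status_code=exc.status_code, content={"detail": exc.message})

def require_user(authorize: AuthJWT = Depends()) -> str:
    authorize.jwt_required()  # rejects missing or invalid Bearer tokens
    return authorize.get_jwt_subject()

@app.get("/me")
def my_profile(user_id: str = Depends(require_user)):
    return {"user_id": user_id}
```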
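
Posts sketch: the shape of the schemas and routes from #3, with an in-memory list standing in for the Mongo collection; `MAX_POST_LENGTH` and the field names are assumptions.

```python
from typing import List

from fastapi import APIRouter
from pydantic import BaseModel, Field

MAX_POST_LENGTH = 280  # placeholder for the real max-character spec

class PostCreate(BaseModel):
    # Field(max_length=...) enforces the character limit at validation time.
    text: str = Field(..., max_length=MAX_POST_LENGTH)

class PostResponse(BaseModel):
    id: int
    text: str

router = APIRouter(prefix="/posts", tags=["posts"])
_posts: List[PostResponse] = []  # stand-in for the Mongo collection

@router.post("/", response_model=PostResponse)
def create_post(payload: PostCreate):
    post = PostResponse(id=len(_posts) + 1, text=payload.text)
    _posts.append(post)
    return post

@router.get("/", response_model=List[PostResponse])
def search_posts(q: str = ""):
    # Plain string-match search, as described in the entry above.
    return [p for p in _posts if q in p.text]
```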
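
Metrics sketch: the node-level metrics in #4 come from Node Exporter, but the same Prometheus could also scrape the app itself. As a complementary sketch, assuming `prometheus_client` is installed, the API could expose a `/metrics` endpoint like this (the counter name is illustrative).

```python
from fastapi import FastAPI, Request
from prometheus_client import Counter, make_asgi_app

app = FastAPI()

# Application-level counter; CPU, memory, and other node metrics still come
# from Node Exporter as described above.
REQUESTS = Counter("api_requests_total", "Total API requests", ["path"])

@app.middleware("http")
async def count_requests(request: Request, call_next):
    REQUESTS.labels(path=request.url.path).inc()
    return await call_next(request)

# Mount /metrics so Prometheus can scrape this app alongside Node Exporter.
app.mount("/metrics", make_asgi_app())
```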

### Changed

### Fixed