This project runs a demo stack of Elasticsearch and Kibana on a Kind-based Kubernetes cluster. It was originally developed on macOS 11.5.1 and has only been tested on that OS. Dependencies are listed below.
- Make sure you have all dependencies installed and Docker is running.
- Run `make`. This will:
  - Create a Kind cluster
  - Create a local Docker registry
  - Lint and build the Docker images, then push them to the local registry
  - Deploy the Kubernetes manifests
  - Bootstrap the Elasticsearch cluster
  - Forward ports to Kibana and print access info
Other commands:
- Delete the cluster and its dependencies: `make clean`
- Show available commands: `make help`
- Forward Kibana ports: `make port-forward`
- Lint a Docker image: `make docker-lint/<dir_with_Dockerfile>`
- Build a Docker image: `make docker-build/<dir_with_Dockerfile>`
- Push a Docker image: `make docker-push/<dir_with_Dockerfile>`
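The per-image targets above follow a `target/<dir>` pattern, which in Make is naturally expressed as pattern rules. A minimal sketch of how such rules can be written — this is hypothetical, not the project's actual Makefile; the linter `hadolint`, the `REGISTRY` address, and the `latest` tag are all assumptions:

```make
# Hypothetical pattern rules -- the project's real Makefile may differ.
REGISTRY ?= localhost:5000

docker-lint/%:
	hadolint $*/Dockerfile

docker-build/%:
	docker build -t $(REGISTRY)/$*:latest $*

docker-push/%: docker-build/%
	docker push $(REGISTRY)/$*:latest
```

With rules like these, `make docker-build/kibana` would build the image from the `kibana/` directory and tag it for the local registry.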
Check Kubernetes resources:

```shell
kubectl get all -n staging
```
Check container logs:

```shell
kubectl get po -n staging
# ...
kubectl logs -n staging -f <pod_name>
```
Restart containers:

```shell
kubectl rollout restart deployment -n staging staging-kibana
kubectl rollout restart statefulset -n staging staging-elasticsearch
```
The bootstrap logic is only a draft: it does not work in all scenarios and does not take existing data into account.
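One direction for hardening the bootstrap would be to block until the cluster actually reports green health before doing anything else. A hedged sketch as a Kubernetes Job — the service name `staging-elasticsearch` matches the resources above, but the Job itself, the image, and the timings are assumptions:

```yaml
# Sketch only: wait for Elasticsearch to report green health before bootstrapping.
apiVersion: batch/v1
kind: Job
metadata:
  name: es-bootstrap
  namespace: staging
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: wait-for-green
          image: curlimages/curl:7.79.1
          command:
            - sh
            - -c
            # Retry until the health API returns green (30s server-side wait per attempt).
            - |
              until curl -fsS "http://staging-elasticsearch:9200/_cluster/health?wait_for_status=green&timeout=30s"; do
                sleep 5
              done
```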
No backup is implemented. One possible approach is a scheduled CronJob that triggers Elasticsearch snapshots to an independent storage backend.
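Such a CronJob could simply call the Elasticsearch snapshot API on a schedule. A sketch under stated assumptions — the snapshot repository `nightly` does not exist in this project and would first have to be registered against some independent backend; image and schedule are illustrative:

```yaml
# Sketch only: nightly snapshot trigger; assumes a snapshot repository named "nightly" exists.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: es-snapshot
  namespace: staging
spec:
  schedule: "0 2 * * *"   # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: snapshot
              image: curlimages/curl:7.79.1
              command:
                - sh
                - -c
                # Create a snapshot named after the current date and wait for it to finish.
                - >
                  curl -fsS -XPUT
                  "http://staging-elasticsearch:9200/_snapshot/nightly/snap-$(date +%Y%m%d)?wait_for_completion=true"
```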
The current healthchecks only verify that the ports are listening, without checking whether the application is actually healthy. This needs improvement for real scenarios.
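For Elasticsearch, a stronger check could query the cluster health API instead of just the port. A hedged probe fragment for the container spec — the timings are illustrative, not tuned values from this project:

```yaml
# Fragment of the Elasticsearch container spec -- values are illustrative.
readinessProbe:
  httpGet:
    path: /_cluster/health?local=true
    port: 9200
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
```

`?local=true` makes the node answer from its own view of the cluster, so a probe failure reflects that node rather than a slow master.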
One issue encountered while running the setup on macOS: Elasticsearch sees the full disk allocated to Docker Desktop, so when that space fills up it trips its flood-stage disk watermark. Make sure the space allocated to Docker Desktop is not full, or you will see warnings like:
```json
{"type": "rolling", "timestamp": "2021-10-11T08:53:32,066Z", "level": "WARN", "component":
"o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "elasticsearch-cluster", "node.name":
"staging-elasticsearch-0", "message": "flood stage disk watermark [95%] exceeded on
[DDU-VjFIShSr7kvn9_l4wA][staging-elasticsearch-0][/usr/share/elasticsearch/data/nodes/0] free:
2.8gb[4.9%], all indices on this node will be marked read-only", "cluster.uuid":
"eXW5ZhpxSFCShrucDIwJ8g", "node.id": "DDU-VjFIShSr7kvn9_l4wA" }
```
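Note that on older Elasticsearch versions, freeing disk space alone does not recover from this state: the read-only block stays on the indices until it is cleared explicitly (newer versions release it automatically once usage drops below the high watermark). If the block persists after freeing space, it can be cleared with a `PUT` to `_all/_settings` carrying:

```json
{ "index.blocks.read_only_allow_delete": null }
```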