The resource limits applied in that test are (for the CouchDB pods):
```yaml
containers:
  - name: cht-couchdb-1
    image: public.ecr.aws/medic/cht-couchdb:4.2.0
    resources:
      requests:
        cpu: "1800m"   # About 23% of node CPU
        memory: "7Gi"  # About 24% of node memory
      limits:
        cpu: "2500m"   # About 32% of node CPU
        memory: "9Gi"  # About 31% of node memory
```
It probably makes sense to make these values configurable via values.yaml, since the right numbers will vary from machine to machine and users may want to raise or lower them based on their own needs; a sketch of how that could look follows below.
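As a rough sketch (the `couchdb.resources` key and the template path are assumptions, not the chart's current structure), the values could be exposed in `values.yaml`:

```yaml
# values.yaml -- hypothetical keys; the real chart layout may differ
couchdb:
  resources:
    requests:
      cpu: "1800m"
      memory: "7Gi"
    limits:
      cpu: "2500m"
      memory: "9Gi"
```

and the CouchDB deployment template would render whatever the user provides:

```yaml
# templates/couchdb-deployment.yaml (sketch)
containers:
  - name: cht-couchdb-1
    image: public.ecr.aws/medic/cht-couchdb:4.2.0
    resources:
      {{- toYaml .Values.couchdb.resources | nindent 6 }}
```

That way sensible defaults can ship in the chart while users override them per deployment with `--set` or a custom values file.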
Right now the pods don't have resource requests or limits specified.
As a result, we run into issues like these from time to time, where pods get evicted.
All pods need to specify sensible resource requests and resource limits.
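Until the chart exposes per-pod values, one possible stopgap (not something the chart currently ships, just a standard Kubernetes mechanism) would be a namespace-level `LimitRange`, which applies default requests and limits to any container that omits them; the name, namespace, and numbers below are placeholders, not recommendations:

```yaml
# Hypothetical LimitRange: containers created in this namespace without
# explicit resources get these defaults instead of running unbounded.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources
  namespace: cht        # placeholder namespace
spec:
  limits:
    - type: Container
      defaultRequest:   # used when a container omits resources.requests
        cpu: "500m"
        memory: "1Gi"
      default:          # used when a container omits resources.limits
        cpu: "1"
        memory: "2Gi"
```

Explicit per-pod values in the chart are still preferable, since the different pods have very different footprints.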