Replies: 2 comments
-
Hi @jonnermut, we just clarified what we meant by clustering support in the roadmap item https://github.com/openfga/roadmap/issues/14. You can safely scale out your service by deploying multiple nodes of the OpenFGA service targeting a single database.
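As a sketch of that scale-out pattern, here is a minimal Kubernetes Deployment running several stateless OpenFGA replicas against one shared Postgres database. The resource names, image tag, and secret are illustrative assumptions, not values from this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openfga                # illustrative name
spec:
  replicas: 3                  # stateless servers: any replica count works
  selector:
    matchLabels:
      app: openfga
  template:
    metadata:
      labels:
        app: openfga
    spec:
      containers:
        - name: openfga
          image: openfga/openfga:latest   # pin a specific version in practice
          args: ["run"]
          env:
            - name: OPENFGA_DATASTORE_ENGINE
              value: postgres
            - name: OPENFGA_DATASTORE_URI # all replicas target one database
              valueFrom:
                secretKeyRef:
                  name: openfga-db        # assumed secret holding the DSN
                  key: uri
```

Because every replica reads and writes the same datastore, rolling upgrades that briefly run old and new pods side by side are safe.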
-
@jonnermut you can run as many replicas of OpenFGA as you would like and autoscale your workload according to your usage. OpenFGA servers are stateless and can be run standalone.
This is a tough question to answer because so many variables are involved. At the most basic level, memory and CPU are the limiting factors, and OpenFGA workloads tend to be more memory bound than CPU bound. That said, the number of Checks and other API operations can affect scalability quite drastically, especially if you are issuing costly Checks (e.g., Checks with deep or broad resolution depth), which will load that OpenFGA node more heavily than the other nodes in the deployment. This is one of the reasons we're pursuing openfga/roadmap#14, to help distribute the load of costly queries across a cluster of OpenFGA replicas.
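Since the workload tends to be memory bound rather than CPU bound, one hedged way to autoscale the replicas is a HorizontalPodAutoscaler targeting memory utilization. The names and thresholds below are assumptions to tune for your own usage:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: openfga
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: openfga            # assumed name of the OpenFGA Deployment
  minReplicas: 2             # keep headroom for rolling upgrades
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory         # scale on memory, the typical bottleneck
        target:
          type: Utilization
          averageUtilization: 70
```

Note that a memory-based HPA requires resource requests to be set on the OpenFGA container, and it cannot prevent a single deep Check from loading one node; it only adds capacity as aggregate usage grows.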
-
I note clustering support is on the roadmap.
Does that mean that OpenFGA can only be run as a single node at the moment?
Would bad things happen if two nodes are up at once, for instance while Kubernetes upgrades a container?
And is there any indication of what a single node could scale to, and how reliable it would be in production?