Any possible issues/gotchas with load balancing across multiple instances? #2953
-
For our dev/test instances we are happy running in single-server mode (and it is working well), but we have an AWS setup for production that deploys in high availability, with instances across availability zones and a load balancer. I just wanted to check that there are no cache-coherence, session-stickiness, or similar problems with running the service in this configuration. The docs don't say much about how production setups can be configured.
-
Hello,
Good questions. I think we can fix this gap in the documentation at some point.
The simple answer is that we run all of our production deployments on a single (sometimes beefy) instance.
In fact, processing commands/events currently requires a single instance to do that processing. This is not configurable (e.g. by designating a single master server), so REMS should be run on only one server. In theory the queries in the API are mostly stateless, but command processing is inherently stateful, and it is synchronous (not eventually consistent). It would have to be reworked so that at least different applications could be handled independently. As REMS is currently implemented, session stickiness is also required, because REMS stores the user identity and, for example, the CSRF token in the session.
What sort of practical needs do you actually have for the number of users or for high availability?
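As an aside, if a load balancer does sit in front of the single REMS instance, stickiness can also be enforced at the load balancer itself. A minimal sketch with boto3, assuming an AWS Application Load Balancer target group (the ARN below is a placeholder, not anything REMS-specific):

```python
# Sketch: enable cookie-based session stickiness on an ALB target group,
# so every request from a browser session reaches the same REMS instance.
# Requires boto3 and AWS credentials in the environment.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    # Placeholder ARN; substitute the target group your stack created.
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
                   "targetgroup/rems/0123456789abcdef",
    Attributes=[
        # Pin each client to one target using the load balancer's own cookie.
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        # Keep the pinning for 8 hours (value is in seconds); adjust to taste.
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "28800"},
    ],
)
```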
-
No specific needs for high availability; we just had the stack set up in AWS with a load balancer (which out of the box will run across multiple AWS availability zones), so running it with 2+ instances is as trivial as changing the instance count. But before I actually did that, I thought I had better check. There's no problem with a single-server setup; we don't yet have a lot of users or use cases for high availability.
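For reference, pinning the Auto Scaling group to exactly one instance is just as small a change. A sketch with boto3, assuming a group named rems-asg (a placeholder for whatever the stack actually created):

```python
# Sketch: cap the Auto Scaling group at a single REMS instance, per the
# advice above. Requires boto3 and AWS credentials in the environment.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="rems-asg",  # placeholder group name
    MinSize=1,
    MaxSize=1,          # hard cap at one instance: REMS must not scale out
    DesiredCapacity=1,
)
```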