-
Yes, Crossbar.io (using router-to-router links, plus proxy and router workers) scales with the number of CPU cores. A number I use for orientation: roughly 50k fully routed WAMP RPCs (each consisting of 4 WAMP messages on the wire) per CPU core, using 2-8 GB RAM per CPU core.
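To make the "scales with CPU cores" point concrete, here is a rough sketch of a node config with one backend router worker and one frontend proxy worker; you would add one proxy worker block per CPU core, all listening on the same shared port. The specifics (ports, realm and role names, exact keys) are illustrative assumptions, so check the Crossbar.io documentation for your release before reusing any of it.

```json
{
    "version": 2,
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "realm1",
                    "roles": [
                        {
                            "name": "anonymous",
                            "permissions": [
                                {
                                    "uri": "",
                                    "match": "prefix",
                                    "allow": {"call": true, "register": true, "publish": true, "subscribe": true}
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "rawsocket",
                    "endpoint": {"type": "tcp", "port": 8081, "backlog": 1024}
                }
            ]
        },
        {
            "type": "proxy",
            "connections": {
                "backend1": {
                    "transport": {
                        "type": "rawsocket",
                        "endpoint": {"type": "tcp", "host": "127.0.0.1", "port": 8081},
                        "url": "rs://127.0.0.1:8081"
                    }
                }
            },
            "routes": {
                "realm1": {"anonymous": "backend1"}
            },
            "transports": [
                {
                    "type": "web",
                    "endpoint": {"type": "tcp", "port": 8080, "shared": true},
                    "paths": {"ws": {"type": "websocket"}}
                }
            ]
        }
    ]
}
```

The idea is that the shared frontend endpoint lets several proxy workers accept WebSocket connections on the same port, spreading client load across cores, while the router worker behind them does the actual routing.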
-
lemme comment, just quickly ..
Yes indeed, there has been quite some development and fixing around rlinks, router-to-router connections and such in Crossbar.io since v21.1.1, based on a pretty advanced and extensive setup for a customer project used to test and verify it. So I am confident to say: if you want to use this stuff, you should use the very latest tagged Crossbar.io release, which is currently v23.1.2 (I still need to push a new release summing up the last months' changes). You don't need to, and should not, build Crossbar.io yourself; you can just use the official Docker images published, which include everything and run on PyPy.
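For reference, a minimal sketch of running the official image (the "latest" tag, the /node mount point and port 8080 are assumptions here; check the image documentation for your release):

```sh
# Run the official Crossbar.io image with a local node directory mounted in.
# The /node/.crossbar path is an assumption about the image's node directory.
docker run --rm -it \
    -p 8080:8080 \
    -v "$(pwd)/.crossbar:/node/.crossbar" \
    crossbario/crossbar:latest
```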
This seems like a process problem: your team should not be required to "make" or build anything just to follow Crossbar.io stable releases. Maybe update one line somewhere which says which release version you follow, and then run "make pull / update" to trigger the rollout.
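As a sketch of that "one line to update" idea (the file name, compose setup and tag value below are made-up assumptions, not an official Crossbar.io workflow):

```sh
# CROSSBAR_TAG is a one-line file holding the release your team follows,
# e.g. "v23.1.2". Bumping that single line is the whole upgrade procedure.
CROSSBAR_TAG="$(cat CROSSBAR_TAG)"
docker pull "crossbario/crossbar:${CROSSBAR_TAG}"
# Roll out however you normally deploy; a compose-based restart is just one option.
docker compose up -d --force-recreate
```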
-
Hey! I'm taking over a legacy application at my company that uses Crossbar.io. We have a handful of publisher apps that publish data over WAMP, and a bunch of users who selectively subscribe to that data from their browsers via our WebSocket router.
We've been experiencing performance issues, and someone on the ops team identified a bottleneck: it looks like a single thread publishes data to all of our clients. He recommended increasing the thread count if possible. I've been going through the documentation here but haven't found any configuration for that. Is there any way to achieve this in Crossbar?
Also worth mentioning that we are on v21.1.1.
Thanks in advance!