Best Practices for deploying scalable instances on aws #4
Of course, option 3 is the most scalable one: at the very least, you can scale every service vertically and independently. But it can be a bit more complex to manage, and in that sense it is a bit less maintainable.
The OpenVidu team is working on an elastic architecture for OpenVidu. We are focusing first on AWS, but we plan to add more cloud providers in the future.
How do you plan to deploy OpenVidu Server? On premises? On bare metal? Using some sort of private cloud like OpenStack, or a container orchestrator like Kubernetes?
Kubernetes for me. I'd love to see some deployment templates for Kurento, OpenVidu, or both.
Any news here? When will the scale-out feature be available?
It is the next point on our roadmap. We will start on it in the coming weeks.
Any updates on scaling on AWS?
As you probably know, we have launched OpenVidu Pro, a commercial version of OpenVidu with extended features. OpenVidu Pro is in active development right now. It will include scalability and elasticity features.
So to confirm: as of right now you don't have any scalability features in place, and you don't plan to add them to the open-source version.
We don't plan to add scalability features to the OpenVidu open-source version in the mid term. If you want them, you can implement a layer on top of it to manage several OpenVidu servers.
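Such a layer is not part of OpenVidu itself, but its core job is simple: keep a registry of standalone OpenVidu servers and route each new session to one of them. A minimal sketch of that idea, assuming a least-loaded placement policy (the class, server URLs, and session bookkeeping here are all hypothetical, not an OpenVidu API):

```python
class OpenViduCluster:
    """Hypothetical routing layer over several standalone OpenVidu servers.

    Tracks which sessions live on which server and places each new
    session on the least-loaded one. A real implementation would call
    the OpenVidu REST API on the chosen server; this sketch only models
    the placement decision.
    """

    def __init__(self, server_urls):
        # session IDs currently hosted on each server
        self.sessions = {url: set() for url in server_urls}

    def create_session(self, session_id):
        # pick the server currently hosting the fewest sessions
        url = min(self.sessions, key=lambda u: len(self.sessions[u]))
        self.sessions[url].add(session_id)
        return url

    def close_session(self, session_id):
        # forget the session wherever it was hosted
        for ids in self.sessions.values():
            ids.discard(session_id)


# Usage: two sessions end up on different servers.
cluster = OpenViduCluster(["https://ov1.example.com", "https://ov2.example.com"])
first = cluster.create_session("room-a")
second = cluster.create_session("room-b")
```

Clients would then be handed the URL returned for their session, so all participants of one session land on the same server while overall load spreads across the cluster.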
Can anyone list the best practices for deploying OpenVidu servers on AWS, keeping scalability and performance in mind? I was reading the documentation here (https://openvidu.io/docs/deployment/deploying-ubuntu/) and here (https://openvidu.io/docs/deployment/deploying-aws/) but didn't find satisfactory answers.
I saw a diagram in the documentation that shows three ways to deploy the OpenVidu servers. Which of them is the most scalable, performant, and maintainable?