[action] [PR:16164] [dualtor] Improve mux_simulator #16369

Merged
merged 1 commit into from
Jan 7, 2025

Conversation

mssonicbld
Collaborator

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework(new/improvement)
  • Test case(new/improvement)

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405

Approach

What is the motivation for this PR?

The dualtor nightly suffers from mux simulator timeouts, and HTTP timeout failures are routinely observed.
This PR improves the mux simulator's performance in two ways:

  1. improve the performance of toggling all mux ports.
  2. improve the mux simulator's read/write throughput.

PR 1522 addressed the issue, but only as a temporary quick fix.

How did you do it?

  1. Run the mux simulator with gunicorn instead of Flask's built-in HTTP server.
    The mux simulator runs on Flask's built-in HTTP server. It previously ran in single-threaded mode, which limits its performance and throughput, and it was observed getting stuck reading from dead connections. PR 1522 proposed a temporary fix by running the mux simulator in threaded mode. Throughput improved with the threaded approach, but the built-in server caps the TCP listen backlog at 128, and it is designed for development/testing and is not recommended for deployment (see Flask's deployment docs).
    So let's run the mux simulator with gunicorn instead:
  • better performance/throughput with a configurable worker count
  • a larger TCP listen backlog
  2. Use a thread pool to parallelize the toggle requests.
    The mux simulator handles a toggle-all request by toggling each mux port one by one; using a thread pool to run those toggles in parallel further reduces the response time.
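The two changes above can be sketched as follows. This is a minimal illustration, not the actual sonic-mgmt code: the port names, state values, worker counts, and the `toggle_port`/`toggle_all` helpers are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mux state for a dualtor-style testbed.
PORTS = [f"Ethernet{i}" for i in range(0, 120, 4)]
mux_state = {port: "upper_tor" for port in PORTS}

def toggle_port(port):
    # Flip a single mux port to the other ToR.
    mux_state[port] = "lower_tor" if mux_state[port] == "upper_tor" else "upper_tor"
    return port

def toggle_all():
    # Instead of toggling ports one by one, fan the toggles out
    # across a thread pool so a toggle-all request returns sooner.
    with ThreadPoolExecutor(max_workers=16) as pool:
        return list(pool.map(toggle_port, PORTS))

# The simulator itself would then be served by gunicorn rather than
# Flask's built-in server, e.g. (worker count, backlog, port, and the
# module:app name are illustrative):
#   gunicorn --workers 4 --backlog 2048 --bind 0.0.0.0:8080 mux_simulator:app
```

Serving the app through gunicorn decouples the simulator from Flask's development server, while the thread pool turns the toggle-all path from a sequential loop into a parallel fan-out.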

How did you verify/test it?

Run the following benchmarks on a dualtor-120 testbed and compare the performance of:

  • A: the original mux simulator, with the Flask built-in server in single-threaded mode.
  • B: the mux simulator with the Flask built-in server in threaded mode.
  • C: the mux simulator with this PR.

  1. Toggle the mux status of all mux ports (one request per mux port):
  • 20 concurrent users, repeated 2000 times

| mux simulator version | A   | B   | C   |
| --------------------- | --- | --- | --- |
| elapsed time          | 96s | 37s | 36s |

  2. Toggle the mux status of all mux ports (one request toggles all mux ports):
  • 1 user, repeated 1 time

| mux simulator version | A   | B   | C   |
| --------------------- | --- | --- | --- |
| elapsed time          | 16s | 16s | 7s  |

To summarize, the mux simulator with this PR has the best toggle performance.
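A load pattern like benchmark 1 (20 concurrent users repeating a per-port toggle request) could be driven with a small harness along these lines. This is a hypothetical sketch, not the benchmark actually used; `toggle_once` stands in for the HTTP call a real driver would make to the simulator's per-port toggle endpoint.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def toggle_once(i):
    # Placeholder for an HTTP POST to the mux simulator's per-port
    # toggle endpoint; the real driver would issue the request here.
    return i

def run_benchmark(users=20, repeats=2000):
    # Issue `repeats` toggle requests from a pool of `users`
    # concurrent workers and measure the total elapsed time.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(toggle_once, range(repeats)))
    return time.monotonic() - start, len(results)

elapsed, count = run_benchmark()
```

Comparing the elapsed time across server configurations (A, B, C above) under the same user/repeat settings is what yields the tables in this section.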

Any platform specific information?

Supported testbed topology if it's a new test case?


Signed-off-by: Longxiang Lyu <[email protected]>
@mssonicbld
Collaborator Author

/azp run

@mssonicbld
Collaborator Author

Original PR: #16164


Azure Pipelines successfully started running 1 pipeline(s).

@mssonicbld mssonicbld merged commit 8c14bdd into sonic-net:202405 Jan 7, 2025
13 checks passed