This repository has been archived by the owner on Jun 14, 2022. It is now read-only.

Parallel execution for scenario replay #56

Open
4 of 9 tasks
Mythra opened this issue Feb 8, 2018 · 2 comments
Labels
discussion M-T: An issue where more input is needed to reach a decision

Comments


Mythra commented Feb 8, 2018

Description

While looking through the steno docs, I see you can "start" and "stop" through a web API to gather all the requests that were made. My slackbot is written in Rust, which by default runs tests in parallel (and speeds up our test execution too!). I'd really hate to turn that off, so I'm curious: how do we play back multiple scenarios at a time?
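
To make the question concrete, here is roughly the kind of test I have in mind. This is only a sketch: the control port and the /start and /stop paths are placeholders I'm guessing at, not steno's documented API.

```rust
// Sketch only: port 4000 and the /start, /stop paths are placeholders.
// Requires reqwest = { version = "0.11", features = ["blocking"] }.
use reqwest::blocking::Client;

#[test]
fn posts_message_on_mention() -> Result<(), reqwest::Error> {
    let steno = Client::new();

    // Ask steno to begin replaying a recorded scenario (placeholder endpoint).
    steno
        .post("http://localhost:4000/start")
        .query(&[("name", "posts_message_on_mention")])
        .send()?
        .error_for_status()?;

    // ... exercise the bot here; its Slack API calls go through steno ...

    // Stop replay and fetch the request/response history (placeholder endpoint).
    let history = steno.post("http://localhost:4000/stop").send()?.text()?;
    assert!(history.contains("chat.postMessage"));
    Ok(())
}
```

With tests running in parallel, two of these start/stop windows overlap, which is exactly where things fall apart today.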

What type of issue is this? (place an x in one of the [ ])

  • bug
  • enhancement (feature request)
  • question
  • documentation related
  • testing related
  • discussion

Requirements (place an x in each of the [ ])

  • I've read and understood the Contributing guidelines and have done my best effort to follow them.
  • I've read and agree to the Code of Conduct.
  • I've searched for any related issues and avoided creating a duplicate issue.
aoberoi added the discussion M-T: An issue where more input is needed to reach a decision label on May 11, 2018
Contributor

aoberoi commented May 11, 2018

you raise a really great point. the current design does not account for parallel tests, and that's a shame since it would be a great performance win.

let's brainstorm some ideas about how we may build this into a future version.

the main problem I see is that there's no way for your app or Slack to signal (on a per-request basis) which scenario a specific interaction is associated with. imagine running scenario-a and scenario-b in parallel. they both make a request to http://localhost:3000/api/chat.postMessage. how would steno know which scenario to replay that request against? similarly, how would your app know which test case to route an incoming request from steno into?

one idea is to use a special header in the HTTP request, let's call it X-Steno-Scenario-Name. this would be fairly easy for steno to automatically add when making a request to your app, but then your test code has the burden of parsing that data and performing routing based on the value. similarly, your app would also have the burden of assigning that header value on all outgoing requests, so that steno can find the right response.
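
to make that burden concrete, here's a rough sketch of what the app side might look like in a Rust bot. the header name is the hypothetical one above, and the proxy URL is just the example from earlier; none of this is an existing steno API.

```rust
// Sketch of propagating the hypothetical scenario header from incoming
// requests onto outgoing Slack calls.
// Requires reqwest = { version = "0.11", features = ["blocking", "json"] }
// and serde_json.
use reqwest::blocking::Client;

const SCENARIO_HEADER: &str = "X-Steno-Scenario-Name";

/// Copy the scenario name steno attached to the incoming request onto the
/// outgoing Slack call, so steno can pick the matching recorded response.
fn post_message(client: &Client, scenario: &str, text: &str) -> Result<(), reqwest::Error> {
    client
        .post("http://localhost:3000/api/chat.postMessage") // steno's outgoing proxy
        .header(SCENARIO_HEADER, scenario)
        .json(&serde_json::json!({ "channel": "C123", "text": text }))
        .send()?
        .error_for_status()?;
    Ok(())
}
```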

any other ideas?

aoberoi changed the title from "Usage with test engines in parallel" to "Parallel execution for scenario replay" on May 11, 2018
Author

Mythra commented May 14, 2018

Hey @aoberoi,

Thanks for the great write-up! The solution I was thinking of is quite similar to yours. Ideally, I was hoping for some way to tag all my requests with a special header (I'll use X-Steno-Scenario-Name) and then retrieve just those with a later call.

For example, I'd call start with X-Steno-Scenario-Name: Test, and for everything happening in that particular test I would also add X-Steno-Scenario-Name: Test. Then when I called stop to validate, I would call http://localhost:300/stop?scenario_name=Test and get back only the request/response pairs for that particular scenario name.
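
A rough sketch of that flow from a test's point of view (the port, paths, and the scenario_name parameter are all part of this proposal, not an existing steno endpoint):

```rust
// Sketch of the proposed per-scenario flow; endpoints are hypothetical.
// Requires reqwest = { version = "0.11", features = ["blocking"] }.
use reqwest::blocking::Client;

#[test]
fn replies_to_greeting() -> Result<(), reqwest::Error> {
    let steno = Client::new();
    let scenario = "Test";

    // Every request this test makes carries the scenario header, so other
    // tests can hit steno at the same time without mixing histories.
    steno
        .post("http://localhost:3000/api/chat.postMessage")
        .header("X-Steno-Scenario-Name", scenario)
        .body(r#"{"channel":"C123","text":"hello"}"#)
        .send()?;

    // Stopping with the scenario name returns only this scenario's
    // request/response pairs.
    let history = steno
        .get("http://localhost:3000/stop")
        .query(&[("scenario_name", scenario)])
        .send()?
        .text()?;
    assert!(history.contains("chat.postMessage"));
    Ok(())
}
```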
