Force test run on a single device #979

Open
nikolas-scaleforce opened this issue Nov 5, 2024 · 6 comments
Labels: enhancement (New feature or request)

@nikolas-scaleforce commented Nov 5, 2024

Is your feature request related to a problem? Please describe.
We run multiple different test batches on the same CI pipeline. Initially we start a few emulator instances and run the tests that allow parallel execution. The last batch is for sequential tests that functionally do not support being run in parallel - but they still end up running in parallel because the multiple emulator instances are already started.

Describe the solution you'd like
We need a way to force Marathon to run the tests against a single emulator/device, without having to explicitly kill (disconnect) all other running emulators/devices to make this work.

Describe alternatives you've considered
The alternative is mentioned above: go over the running emulators and kill all but one. This works, but it makes the pipeline setup more complex.

Additional context
We use the OSS runner with the Gradle plugin, and every test batch has its own annotation filter applied, so we use different commands to start the tests, e.g. ./gradlew marathonStandardDebugAndroidTest -PmarathonAnnotationFilters=Sequential. I have looked over the documentation multiple times, but I couldn't find an existing option to do this.

@nikolas-scaleforce added the enhancement label on Nov 5, 2024
@Malinskiy (Member) commented

All tests are expected to be homogeneous in terms of running strategy; this has always been an assumption of test runs in Marathon. If you have a list of tests that require different constraints, I suggest starting a separate test run and connecting as many or as few devices as needed directly via adb, or splitting your devices into groups under different adb servers and pointing Marathon at the appropriate adb server.
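
For illustration, a minimal Marathonfile sketch for such a device group, assuming the Android vendor configuration's adbServers option; the name, host, port, and output path here are illustrative, not taken from the thread:

```yaml
# Marathonfile for the run that should only see the devices
# attached to a dedicated, non-default adb server
name: "sequential-run"
outputDir: "build/reports/marathon-sequential"
vendorConfiguration:
  type: "Android"
  adbServers:
    - host: "127.0.0.1"
      port: 5038   # this server holds only the device(s) reserved for this run
```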

I'm happy to consider alternative technical proposals, but I don't see any value in overcomplicating already complex test run logic when simply starting a different test run would achieve exactly the same result.

FYI, for clarity it's probably best not to call separate test runs "batches", because in Marathon a batch is a unique set of tests.

I don't have the code for the marathonAnnotationFilters property, but I assume it dynamically populates the Marathonfile via Gradle configuration. This is the expected way to group tests for execution with the Gradle plugin. For CLI usage, a single Marathonfile plus multiple filter files is usually the preferred method: https://docs.marathonlabs.io/runner/configuration/filtering#values-file-filtering
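
A rough sketch of that filter-file approach, assuming the annotation filter type accepts a file key as described on that page (the file path and annotation name are made up for the example):

```yaml
# Marathonfile fragment: read the filter values from a file
# instead of inlining them in the configuration
filteringConfiguration:
  allowlist:
    - type: "annotation"
      file: "filters/sequential-annotations"
```

where filters/sequential-annotations would contain one fully qualified annotation name per line, e.g. com.example.test.Sequential.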

@nikolas-scaleforce (Author) commented Nov 6, 2024

You got it exactly right: the property uses a JUnit custom filter to populate the Marathon filters dynamically via Gradle configuration, for tests with different constraints in batches, suites, sets of tests, or whatever you find suitable to call them. Sorry if I didn't use the correct terminology regarding Marathon in particular :)

I totally get what you're saying, but I see a problem with the assumption about control over the devices: not everyone has their own dedicated device/emulator farm with full control over how devices are spun up and connected to the machine executing the tests. These days many people use some kind of cloud service to run application builds and tests. In our case we use Azure VM agents, where emulator setup and boot is a slow and expensive operation even for a single pipeline, let alone for separate test runs. Of course I could still spin emulators up and down depending on the current set of tests being run, but this overcomplicates and slows down the overall test job execution. Considering the improvements I get from running the tests in parallel it is kind of acceptable; I was just wondering if there is a way to target a single device for the case where a specific set of tests is constrained to run only sequentially.

Anyway, thanks for the quick reply! Apart from this "limitation" in my scenario, I want to say that this is a great tool - much better than anything else I've tried in terms of performance and flexibility, whether OSS or Google-provided. Amazing job 🚀

@Malinskiy (Member) commented

That's okay regarding the terminology. It's just sometimes hard to tell whether we're talking about the same thing, so I needed to clarify.

I understand the constraint on device control. Marathon is a large piece of the puzzle in terms of running tests at scale, but it's not a full solution; that would require tighter coupling between Marathon and the device provisioning logic, as well as all the other pieces at play here. I tried building the full solution in Marathon, but many users have very different and diverging pieces around it. That's why I've moved solving the whole problem domain into a commercial setting rather than open source: tight coupling would mean complicated expertise required to support it, as well as less flexibility between components. Marathon OSS, on the other hand, is as flexible as possible. As I've replied before, feel free to suggest a technical design for supporting unparallelizable tests.

@nikolas-scaleforce (Author) commented

Honestly, I am not involved enough in the Marathon implementation to suggest an actual technical design, but what I imagine is a property like useSingleDevice or something along those lines (sketched below). Marathon would then pick a single available device and run the tests against it, skipping the other available devices. If that device became unavailable for some reason, Marathon would pick another one of the available devices. Not really sure how feasible this is, but it kind of sounds possible to me :)
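
To make the shape of the proposal concrete, a purely hypothetical Marathonfile fragment - useSingleDevice is the option imagined in this comment, not an existing Marathon setting:

```yaml
# HYPOTHETICAL: this option does not exist in Marathon today;
# it only illustrates the proposal above
useSingleDevice: true   # run on one device, fail over if it disconnects
```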

@Malinskiy (Member) commented

After thinking about this use case, I wanted to share a specific design issue with this feature: assuming you're running Marathon on a host where Android devices are used concurrently via adb, there is no way to distinguish which devices have been picked up by which Marathon instance. If Marathon started supporting this single-device mode and you ran multiple test runs concurrently on the same host, a run would have no way to tell which device had been reserved by another run. Essentially, there is no device provider controller, since Marathon is a modular piece and its only requirement is that you connect devices to some adb server (or several).
I fear that when introducing single-device mode you'd have to run separate adb servers anyway as a simple form of device controller - which is also the easiest solution to this problem without any changes to Marathon.
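
Concretely, that "separate adb server as a simple device controller" setup could look something like the sketch below; the port, AVD name, and CLI invocation are illustrative, and it assumes a Marathonfile pointing at the dedicated server (like the adbServers fragment earlier in this thread):

```sh
# Start a dedicated adb server on a non-default port for the sequential run
adb -P 5038 start-server

# Boot exactly one emulator that registers with that server only
ANDROID_ADB_SERVER_PORT=5038 "$ANDROID_HOME/emulator/emulator" -avd sequential_avd -no-window &

# Run the sequential tests through a Marathonfile that points at port 5038;
# parallel runs keep using the default adb server on 5037 with the other emulators
marathon --marathonfile Marathonfile-sequential
```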

@nikolas-scaleforce (Author) commented Nov 19, 2024

There is a working solution, so that's fine. I just wonder, though: if what you're saying is true, what would happen if you started multiple test runs concurrently right now (without separate adb servers)? How would one Marathon instance know which device is used by another Marathon instance? I would assume it is the same scenario as the one you describe above for single-device mode.
Edit:
An additional question: how does Marathon currently know which device is running a test, so it doesn't attempt to run another test on it at the same time? Couldn't this logic be reused in some way for single-device mode?
These are just questions though, feel free to close this issue! If some day I have the time and the actual capability to look into this and propose a solution, I will 😅
