Test sharding / parallelization #439
Hey, I saw Issue #412 and it's exactly the sort of functionality I am looking for. Has there been any movement on this? Cheers. |
I definitely wanna do this, it's hard to say when... We need a way to group files, and the web server has to understand these groups. Currently the resolved files object looks something like:

```js
{
  served: [
    // File objects (metadata, where the file is stored, timestamps, etc.)
  ],
  included: [
    // File objects
  ]
}
```

After this change, it would be:

```js
{
  served: [
    // File objects (I'm thinking we might make this a map rather than an array)
  ],
  included: {
    0: [
      // files included in the group=0 browser
    ],
    1: [
      // files included in the group=1 browser
    ]
  }
}
```
|
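A minimal sketch of how the proposed `included` map could be built, dealing spec files round-robin across N browser groups. The function name and the round-robin strategy are my own illustration, not Karma's actual API:

```javascript
// Hypothetical sketch: build the proposed `included` map by dealing
// files round-robin across N browser groups. Names are illustrative,
// not part of Karma's real API.
function groupIncludedFiles(files, numGroups) {
  const included = {};
  for (let g = 0; g < numGroups; g++) {
    included[g] = [];
  }
  files.forEach((file, i) => {
    included[i % numGroups].push(file);
  });
  return included;
}

// Example: four spec files dealt into two browser groups.
const grouped = groupIncludedFiles(
  ['a.spec.js', 'b.spec.js', 'c.spec.js', 'd.spec.js'], 2
);
console.log(grouped);
// { '0': [ 'a.spec.js', 'c.spec.js' ], '1': [ 'b.spec.js', 'd.spec.js' ] }
```

Round-robin keeps group sizes balanced by file count; a real implementation might instead balance by historical run time per file.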
@shteou Would you be interested in working on this? I would definitely help you... |
Also, does it make sense? I mentioned karma-closure a couple of times; that is this plugin: https://github.com/karma-runner/karma-closure |
I'd just like to note that we (Wix) tried this out. danyshaanan created a test case for this. It didn't work out too well: many of our tests require loading a large part of our code, so the setup time and loading of all the packages for each child process was too costly. Depending on the number of child processes, it usually ended up running slower, though it did come close to running in the same amount of time. We tried it out when we were at about 2,000 tests. We now have over 3,500 tests in this project, so it might be worth revisiting this. If anyone else is working on this or has another angle on it, we are also more than happy to help. |
@EtaiG I have not started working on this, but as my project currently exceeds 3,000 tests, it's becoming something I want to invest some time into as well. |
A note about dynamic creation of groups (@vojtajina) - We should be aware of how this affects tests that happen to have side effects. Imagine tests B, C, and D, and a naive alphabetical division of tests into two groups - {B,C} and {D}. Let's say C has a devastating side effect and I'm adding test A, hence changing the grouping to {A,B} and {C,D}. Now D will fail, just because the grouping changed. Of course, tests shouldn't have side effects, but this case is bound to happen, and might be very confusing to users. |
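The regrouping hazard described above can be seen concretely: with a naive alphabetical split into two halves, adding test A shifts the group boundary so D suddenly shares a group with C. All test names here are hypothetical:

```javascript
// Illustration of the regrouping hazard: a naive alphabetical split
// into two halves moves D into C's group once A is added.
function splitInTwo(tests) {
  const sorted = [...tests].sort();
  const mid = Math.ceil(sorted.length / 2);
  return [sorted.slice(0, mid), sorted.slice(mid)];
}

console.log(splitInTwo(['B', 'C', 'D']));      // [ [ 'B', 'C' ], [ 'D' ] ]
console.log(splitInTwo(['A', 'B', 'C', 'D'])); // [ [ 'A', 'B' ], [ 'C', 'D' ] ]
// D now shares a group with C, so C's side effect can suddenly break D.
```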
I think we can ignore this case and let people who encounter it deal with it.
|
+1 We have a large number of tests at work, and sharding would be very beneficial. As you said, there shouldn't be side effects between tests, and for anyone who doesn't want to remove the side effects, I'd say they just don't get to run their tests in parallel :) As long as the sharding is opt-in I think the confusion should be manageable. |
Hey @EtaiG and @danyshaanan, very interested in this experiment you mentioned. Is this code accessible somewhere? I'd very much like to experiment with this a bit - maybe your work could give me a headstart! |
@LFDM We have nothing to share at the moment, but I'm just about to rewrite a smarter version of it in the coming couple of weeks. I'll try to do so in a way that I'll be able to share. Feel free to ping me about this in a week or so if I haven't posted anything by then. |
👍 Sounds like a great idea to me; Would love to see any progress updates on this @danyshaanan! |
👍 this would be great. |
@danyshaanan! any news? |
@aaichlmayr : |
Thanks for the info |
Totally agree. Think it's fine to launch as an experimental feature with this requirement. Would love to see this land and would be happy to help bug-hunt, etc. |
Hi, has the feature been shipped already? |
👍 anyone making headway on this? |
So is this a feature yet? |
I don't get it. Did I do something wrong? What did I miss? Why the thumbs down? |
If you reply to an issue, all the subscribed people get an email and a notification. If you just want to add a +1 to the issue, do so by adding a thumbs-up reaction to the first post (or to the one with the most upvotes); that way you don't flood the whole list of subscribers. |
@dignifiedquire Could you lock this one like #1320 with a help:wanted label? Thanks! |
True, but it's a hack to get these systems to work in Karma in the first place, right? I'm tempted to put that consideration aside for now. I agree with the suggestions made by @vojtajina in #439 (comment) (even if they are three and a half years old :). I'm thinking:

```js
module.exports = function(config) {
  config.set({
    files: [
      'lib/lib1.js',
      'lib/lib2.js',
      {pattern: 'other/**/*.js', included: false},
      {pattern: 'app/**/*.js', grouped: true},
      {pattern: 'more-app/**/*.js', grouped: true},
    ],
  });
}
```

And then the resolved object would be:

```js
{
  served: [
    'other/file1.js',
    'other/files/file2.js'
  ],
  included: {
    common: [
      'lib/lib1.js',
      'lib/lib2.js'
    ],
    groups: {
      0: [
        'app/file1.js',
        'app/file2.js'
      ],
      1: [
        'app/file3.js',
        'more-app/file.js'
      ]
    }
  }
}
```

We could reuse We'd probably want a |
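A hedged sketch of how the server might build a resolved object of this shape: non-grouped included files go under `common`, and files matching `grouped: true` patterns are distributed across a fixed number of browser groups. The field names mirror the proposal above; the function name and distribution strategy are my own illustration:

```javascript
// Sketch (not Karma's actual resolver): split included files into
// `common` libs, loaded by every browser group, and per-group specs.
function resolveGroups(resolvedFiles, numGroups) {
  const included = { common: [], groups: {} };
  for (let g = 0; g < numGroups; g++) included.groups[g] = [];
  let next = 0;
  for (const file of resolvedFiles) {
    if (!file.included) continue;          // served-only files stay out
    if (file.grouped) {
      included.groups[next % numGroups].push(file.path);
      next++;
    } else {
      included.common.push(file.path);     // libs loaded in every group
    }
  }
  return included;
}

const resolved = resolveGroups([
  { path: 'lib/lib1.js', included: true },
  { path: 'other/file1.js', included: false },
  { path: 'app/file1.js', included: true, grouped: true },
  { path: 'app/file2.js', included: true, grouped: true },
], 2);
// resolved.common is ['lib/lib1.js'];
// resolved.groups is { 0: ['app/file1.js'], 1: ['app/file2.js'] }
```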
I'd hate to have to group tests by hand. What if we had the system use the regular configuration as a starting point, and have it build up and refine an optimal parallel test plan over time? Along the way it might be able to discover any dependency chains (that shouldn't be there, but might be). It could flag those as "todo" items for developers, but work around them should it discover them. Whatever it does, it'd be good for it to deal with changes in the test code gracefully, so it would not have to recompute the whole thing all over again when a single test is added (or removed). I'm sure the computational complexity would be enormous for getting at the very best configuration, but maybe some rough heuristics would get us reasonably close. |
really, just copy what jest does. it's fine |
Not sure if I understand this right, but I wasn't suggesting that.
I suggest a |
Is this dead in the water? |
Hello - Here is my proposal for running tests in parallel. This is a very simple sharding strategy, but it should provide a speedup just by using multiple processors on the machine. This is meant mostly for local development and not so much for CI runs (where remote CI setup costs far outweigh the speed gains of parallelization). karma.js changes:
Jasmine (or Mocha, etc.) adapter changes:
Karma server changes:
Chrome (or other launcher) changes:
Reporter changes: (I need to flesh this out more. Any ideas welcome here)
|
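One simple way to realize the adapter-side part of a proposal like this is modulo sharding: each browser is told its shard index and total, and the framework adapter skips every spec whose running counter doesn't match its shard. This is a minimal sketch under those assumptions; `shardIndex`/`shardTotal` would come from the Karma client config, and all names are illustrative:

```javascript
// Hypothetical adapter-side sharding: wrap the framework's `it` so
// each browser instance only runs every shardTotal-th spec.
function makeShardedIt(originalIt, shardIndex, shardTotal) {
  let counter = 0;
  return function shardedIt(name, fn) {
    const mine = counter % shardTotal === shardIndex;
    counter++;
    return mine ? originalIt(name, fn) : undefined; // other shards skip it
  };
}

// Simulate shard 0 of 2 over four specs with a stand-in `it`.
const ran = [];
const fakeIt = (name) => ran.push(name);
const it0 = makeShardedIt(fakeIt, 0, 2);
['s1', 's2', 's3', 's4'].forEach((name) => it0(name, () => {}));
console.log(ran); // [ 's1', 's3' ]
```

Counter-based sharding only partitions correctly if every browser registers the specs in the same order, which is why the adapter (rather than the server) is a natural place for it.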
I'm going to try to get this done this weekend. I don't know anything about this project or the codebase, but I think a ton of people would be saved a ton of time if I can figure something out. Maybe different tabs running on different ports? Could get hairy... |
@brandonros rock on! I'm in a similar boat--haven't contributed to the project before but would also be interested if I can help with a sharding feature. Would you be willing to create a fork to get the ball rolling? I don't know--do people usually create "WIP - xyz feature" pull request to help rally effort? |
So, here's what happens at a high level:
I actually think it would make more sense to inject into Jasmine, because I was able to boil down exactly where the tests are executed. However, I ran into an issue where it didn't like that I was trying to execute different suites at the same time. So, I came up with multiple tabs:
at https://github.com/brandonros/karma/commit/40cf8eb79be7af9892448a16da5b6578cd3dd983 It is still very early. All it does is allow you to chunk the test files (I hardcoded that they have to contain Spec) across multiple tabs. I'll try to update if I make any worthwhile progress on a more complete package.
Edit: I just tested it and it doesn't really work. Karma kind of falls apart as soon as you start serving different tabs different tests. I'm not sure the architecture of Karma really supports parallel/concurrent testing. Even if I was able to work through the bugs and make the multi-tab approach work, I'd need an event and logic to go with it to wait until all browsers are idle.
Edit 2: Something already existed for gulp, but I am still stuck on Grunt, so I came up with this. Hopefully it will help somebody as a rough draft for their Gruntfile.js. The improvement really isn't that great because loading all of the files in 2, 3, 4, or 8 tabs isn't the best. I am going to try WebWorkers next.
|
I created a plugin that automatically shards tests across the listed browsers (e.g. if you want two sets, you list two browsers: browsers: ['ChromeHeadless', 'ChromeHeadless']). It doesn't address one of the concerns listed in this thread, running tests at the same time: it forces concurrency to 1. It does, however, fix the memory problems of having too many specs loading in a single browser, and it correctly works with coverage reporting. UPDATE: Version 4.0.0 of karma-sharding supports parallel browser execution and no longer forces concurrency to 1. |
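Based on the description above, a karma-sharding config might look roughly like the following. This is a sketch, not taken from the plugin's docs: the `frameworks` entry name and any other plugin-specific options should be checked against the karma-sharding README before use.

```javascript
// karma.conf.js sketch (assumed option names; verify against the
// karma-sharding README): the spec set is split across as many
// browser instances as are listed in `browsers`.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', 'sharding'],
    files: ['src/**/*.spec.js'],
    // Two entries -> specs are split into two sets, one per browser.
    browsers: ['ChromeHeadless', 'ChromeHeadless'],
    singleRun: true,
  });
};
```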
I have also created a plugin, karma-parallel, similar to the one @rschuft made but a bit different. It supports running specs in different browser instances by splitting up the commonly used |
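The suite-splitting idea can be sketched as follows. This is my own simplified illustration of the approach, not karma-parallel's actual code: hash each top-level suite name so that every browser instance independently agrees on which suites it owns.

```javascript
// Sketch of deterministic suite-splitting (illustrative, not the
// plugin's real implementation): map a suite name to a shard index.
function suiteShard(suiteName, shardTotal) {
  let hash = 0;
  for (const ch of suiteName) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hash % shardTotal;
}

// Wrap the framework's `describe` so only the owning shard runs a suite.
function makeShardedDescribe(originalDescribe, shardIndex, shardTotal) {
  return function (name, fn) {
    if (suiteShard(name, shardTotal) === shardIndex) {
      return originalDescribe(name, fn); // this instance runs the suite
    }
    // other instances silently drop it
  };
}
```

Hashing by name needs no coordination between browsers, but it can produce unbalanced shards when suite sizes vary widely.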
@joeljeske have you tested it with an Angular project using ng-cli? We have one, and your plugin seems interesting to us; we'll give it a try. |
I do not use an ng-cli project on a regular basis, but I have done basic testing on an ng-cli project and it works just fine. Please log any issues if you run into something. One note: it is not yet tested with coverage reporting. It would likely be best to disable coverage reporting when using karma-parallel. Coverage reporting is a future feature of the plugin. |
The karma-sharding plugin doesn't play well with the ng-cli because of the way webpack bundles the specs together before the middleware engages with it. Hopefully @joeljeske can bypass that limitation with his approach. |
@guilhermejcgois, the latest release of karma-parallel is tested and compliant with ng-cli projects. Code coverage support was just introduced. Would love to hear feedback on your experience with it. |
My results from karma-parallel on MBP 15" Early 2013 (8 cores): joeljeske/karma-parallel#1 (comment)
Was expecting at least a 2x perf improvement, but it seems not to really make a difference. Is anyone doing sharding/parallelism and actually seeing positive results? Would be nice to see. |
I'm having an issue where, when one shard disconnects, it simply stops that group of tests and moves on to the next – then, it returns exit code 0. Any thoughts on what may be causing this? Karma ^3.1.4 |
I’m not sure. It sounds like it is not really collecting the failures at all. Which browser are you firing off?
|
I’ve tried Chrome and ChromeHeadless |
@joeljeske We use karma-parallel and we're getting false positives because one of the shards isn't running all the tests. There are test problems, but karma-parallel hides them because it doesn't fail the tests. After a browser disconnects, typically a shard is restarted from the beginning. However, sometimes they restart and run just one of the tests. It definitely has to do with the size of the project. We're running 3,500 tests, where over 200 of them are ng component DOM tests. I've created an issue here: joeljeske/karma-parallel#42 |
Allow splitting a single test suite into multiple chunks, so that it can be executed in multiple browsers and therefore executed on multiple machines or at least using multiple cores.