Adding k6-browser support to runner image #289

Open · wants to merge 1 commit into main
Conversation

javaducky (Contributor)

Add support for distributed k6-browser scripts.

javaducky requested a review from yorugac on September 18, 2023 at 19:54
javaducky (Contributor, Author) commented on Sep 18, 2023

@yorugac, building on our Slack convo today, I successfully ran k6-browser tests using the operator. There remains a separate issue where only parallelism: 1 is supported, because parsing of the browser-specific options causes k6 inspect to report maxVUs=1.

This PR is more of an FYI (experiment)... I'm not sure if we truly want to merge this just yet.
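For context, running a browser test through the operator comes down to pointing the K6 resource at a browser-capable runner image. A minimal sketch, assuming a hypothetical browser-enabled image tag (grafana/k6:master-with-browser) and a ConfigMap named k6-browser-test holding the script; the resource name and namespace below are taken from the logs later in this thread, everything else is illustrative:

```yaml
# Sketch only: the image tag and ConfigMap layout are assumptions, not part of this PR.
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-browser-test
  namespace: k6-demo
spec:
  parallelism: 1                  # currently the only value that works; see the maxVUs issue above
  script:
    configMap:
      name: k6-browser-test
      file: test.js
  runner:
    image: grafana/k6:master-with-browser   # assumption: a browser-enabled runner image
```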

yorugac (Collaborator) commented on Sep 20, 2023

Related PR for Docker k6-browser image: grafana/k6#3340

@javaducky, IIUC, the browser image worked for you even without this no-sandbox option? I had the impression it wasn't supposed to work without it 😅 Well, it's hard-coded now anyway, but it's a curious situation.

I've checked the case of maxVUs > 1: it seems parallelism is working correctly with browser as well 👍

I think from the k6-operator perspective, just knowing of this second image with browser support is sufficient: it's too large to make the default runner image. IOW, it'd be good if we could add a remark on browser images to the docs at some point.
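For reference, if no-sandbox were not baked into the image, the equivalent setting could be passed to the runner through the K6_BROWSER_ARGS environment variable that the k6 browser module reads. A sketch against the runner spec from the example above:

```yaml
# Sketch: only relevant if the image does not already hard-code the flag.
spec:
  runner:
    env:
      - name: K6_BROWSER_ARGS
        value: no-sandbox         # the Chromium flag discussed above
```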

javaducky (Contributor, Author) commented:

> the browser image worked for you even without this no-sandbox option?

Yes. I didn't seem to have any issues related to this. 🤷

> I've checked the case of maxVUs > 1: it seems parallelism is working correctly with browser as well 👍

Here is my scenario. My test script uses the constant-vus executor with vus: 4.
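A sketch of what such a script might look like, embedded in the ConfigMap the K6 resource points at. The import path and scenario options follow the k6 v0.46-era browser API; the scenario name and target URL are placeholders, while vus: 4 and the 10s duration match the k6 inspect output in the log below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: k6-browser-test           # referenced by the K6 resource's script.configMap
  namespace: k6-demo
data:
  test.js: |
    import { browser } from 'k6/experimental/browser';

    export const options = {
      scenarios: {
        ui: {
          executor: 'constant-vus',
          vus: 4,                 // four VUs requested...
          duration: '10s',        // ...matching TotalDuration:10s in the log
          options: {
            browser: { type: 'chromium' },   // required by the browser module
          },
        },
      },
    };

    export default async function () {
      const page = browser.newPage();
      try {
        await page.goto('https://test.k6.io/');   // placeholder target
      } finally {
        page.close();
      }
    }
```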

My K6 resource executes successfully with parallelism: 1, but fails if I try parallelism: 2, with the following output in the manager log (lines are truncated because they were captured from the k9s display):

2023-09-20T13:51:20Z    INFO    controllers.K6    Waiting for initializing pod to finish    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "6
2023-09-20T13:51:20Z    INFO    controllers.K6    Reconcile(); stage = initialization    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "50f6
2023-09-20T13:51:20Z    INFO    controllers.K6    k6 inspect: {External:{Loadimpact:{Name: ProjectID:0}} TotalDuration:10s MaxVUs:1 Thresholds:map[checks:0xc000a
2023-09-20T13:51:20Z    ERROR    controllers.K6    Parallelism argument cannot be larger than maximum VUs in the script    {"namespace": "k6-demo", "name": "k6-b
github.com/grafana/k6-operator/controllers.RunValidations
    /workspace/controllers/k6_initialize.go:69
github.com/grafana/k6-operator/controllers.(*K6Reconciler).Reconcile
    /workspace/controllers/k6_controller.go:121
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:323
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:274
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
2023-09-20T13:51:20Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "87bb5821-7853
2023-09-20T13:51:21Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "2a325800-3f65
2023-09-20T13:51:21Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "c3215c34-0448
2023-09-20T13:51:21Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "fd1fe89c-37cc
2023-09-20T13:51:21Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "904cca7e-40c0
2023-09-20T13:51:25Z    INFO    controllers.K6    Reconcile(); stage = error    {"namespace": "k6-demo", "name": "k6-browser-test", "reconcileID": "bc4f700e-571f
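Reading the trace: the validation in controllers/k6_initialize.go (RunValidations, line 69) compares parallelism against the MaxVUs value that k6 inspect derives from the script, and since inspect reports MaxVUs:1 for the browser script despite vus: 4, anything above parallelism: 1 is rejected. The only change needed to reproduce the failure against the resource sketched earlier:

```yaml
# Same K6 resource as sketched above; only parallelism changes.
spec:
  parallelism: 2   # rejected: "Parallelism argument cannot be larger than maximum VUs in the script"
```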

1saeedsalehi commented:

Any update on this one?
