
Feature request: Store results of performance tests somehow and somewhere for the future usage (e.g. performance regression testing on CI) #24

Open
hinok opened this issue Apr 17, 2020 · 7 comments

@hinok

hinok commented Apr 17, 2020

Hey 👋
What a great idea for an addon! It reminds me of how Stripe was doing performance testing of their components several years ago. They found that testing the performance of specific components has much more value than testing whole pages with tons of components.

I know that Storybook addons live only in the browser, so the idea I want to propose goes beyond the browser's boundaries, but it's at least worth proposing, and in general I'd like to start a discussion about it.

Idea

Store performance results "somehow" and "somewhere" to enable automatic regression testing that could run on CI.

Could the results be stored via an API? Maybe with tools like Graphite? They could even be saved like Jest's snapshot tests: locally, as a file with data in a specific format.

Do you plan to work on something like that in the future in storybook-addon-performance or as a separate project? What do you think?

@juanferreras

This project is great – and this idea will make it even better!

I've been reviewing the codebase and from my understanding:

  1. storybook-addon-performance does let you save results (manually using Storybook UI, and for a single individual story) to a JSON file.
  2. storybook-addon-performance-cli (note it's not even documented yet, use at your own risk!) is an upcoming CLI that will let you compare results (and potentially use in a CI environment).
Please input two directories – one containing the current test results,
and one containing the baseline to compare it against.

Usage
    $ sb-perf -c <current-results-path> -b <baseline-path>
Arguments
    -c        Directory of performance test results for the current state
    -b        Directory of baseline test results
Example
    $ sb-perf -c current -b baseline
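To illustrate the kind of comparison a tool like this could perform, here is a small hand-rolled sketch in Node. Note this is not sb-perf's actual implementation; the `{ taskName: durationMs }` result shape, the threshold, and the skip-missing-task behaviour are all assumptions made for illustration.

```javascript
// Illustrative only: flag tasks whose duration regressed beyond a threshold.
// The { taskName: durationMs } shape is an assumed, simplified format,
// not the addon's real output schema.
function diffResults(baseline, current, thresholdPct = 10) {
  const regressions = [];
  for (const [task, baseMs] of Object.entries(baseline)) {
    const currMs = current[task];
    if (currMs === undefined) continue; // task absent from the current run
    const changePct = ((currMs - baseMs) / baseMs) * 100;
    if (changePct > thresholdPct) {
      regressions.push({ task, baseMs, currMs, changePct });
    }
  }
  return regressions;
}

// Example: "render" slowed from 10ms to 15ms (+50%), "hydrate" is unchanged.
const baseline = { render: 10, hydrate: 5 };
const current = { render: 15, hydrate: 5 };
console.log(diffResults(baseline, current));
// → [ { task: 'render', baseMs: 10, currMs: 15, changePct: 50 } ]
```

A CI job could fail the build whenever the returned list is non-empty, which is the regression-testing workflow this issue proposes.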

As far as I understand, there isn't yet a way to create a full baseline from a CI environment (e.g. on git push to develop, run X command to save baseline snapshots).

Is that planned as a future step OR am I misunderstanding the overall intention/workflow?

Thanks!

@DarkPurple141
Contributor

Hi @juanferreras, yes, the CLI is definitely a WIP. Even though we've cut a 1.0.0, I expect we'll build it out further.

Given we're still tightly bound to Storybook, you're right that creating a baseline in a CI environment requires a headless browser and a driver to run the stories and collect the metrics. Internally at @atlassian we do have an implementation of some of this functionality, but it's quite opinionated towards our pipelines. This is something we're definitely exploring, although I can't give you any timelines.

For the time being, we'll look to improve the CLI's documentation to make it simpler to get good use out of it.

@DarkPurple141
Contributor

We also need to update the documentation around the file/save functionality, which is very useful for regression testing.

For anyone watching this issue: as of ^0.14.0, the library does support saving performance artefacts in a basic form.

@joshacheson

joshacheson commented Nov 18, 2021

Hey y'all. I noticed that this is still undocumented. I'm happy (if not eager) to contribute, but I was curious what the current state of this effort is.

Whether y'all want help with the CLI itself or just with the docs, I may be able to pitch in.

Really appreciate all the work done so far (on the CLI and otherwise), it's great!

@joshacheson

@DarkPurple141 would love hints as to what you're doing once the storybook is open in a headless browser.

Do you have it interacting with the storybook UI to run tests, save results, etc.? Are you exposing ways of running that stuff programmatically once the storybook is open?

@DarkPurple141
Contributor

Hey @joshacheson I'm going to work on the documentation for the storage/cli this weekend. Thanks for the nudge!

I'll provide some insight into how Atlassian does the CI setup (which is still pretty rough), in the docs.

I'm certainly open to thinking about better ways to run these tests with test runners, or perhaps even in a Node environment, as an indicative approach! Let me write it up first, though.

@hinok
Author

hinok commented Jan 28, 2024

@DarkPurple141 Is there anything I can do to help "make it happen"? I know it's been more than 2 years since the last activity here, but I'm eager to help 🙇
