I love the idea of A/B testing. I'd like to explore how we'd set this up from an engineering point of view and how we'd track metrics around that testing. We haven't done that on AMO, and we totally should.
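To make the engineering side a bit more concrete, here is a minimal sketch of what I have in mind. Every name in it is made up, and it doesn't assume any particular metrics pipeline; the idea is just to deterministically bucket each client into a variant and log an event we can aggregate later.

```python
# Minimal sketch (hypothetical names): deterministically assign a client to a
# variant and record an event we can aggregate later.
import hashlib
import json
import time

EXPERIMENT = "disco-pane-install-button"
VARIANTS = ["on-off-toggle", "add-to-firefox", "install-now", "free"]

def assign_variant(client_id: str) -> str:
    """Hash the client id so the same client always sees the same variant."""
    digest = hashlib.sha256(f"{EXPERIMENT}:{client_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def record_event(client_id: str, event: str) -> None:
    """Stand-in for whatever metrics pipeline we end up using (here: a log line)."""
    payload = {
        "experiment": EXPERIMENT,
        "variant": assign_variant(client_id),
        "event": event,          # e.g. "impression" or "install"
        "timestamp": time.time(),
    }
    print(json.dumps(payload))

# Example: one client sees the pane, then installs an add-on.
record_event("client-1234", "impression")
record_event("client-1234", "install")
```

Deterministic hashing keeps a client in the same variant across sessions without storing any extra state, which seems like the simplest place to start.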
I’d love to discuss more things that we can test.
Here are my initial test scenarios, just for the disco pane:
Install models: an on/off toggle (proposed) vs. “Add to Firefox” vs. “Install now” vs. “Free”, and so on. I think the on/off toggle makes the most sense, but I’d like data to confirm it.
How many curated add-ons is too many to show in the pane?
Showing vs. not showing alternative add-ons.
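Purely as a strawman, the scenarios above could be expressed as experiment definitions along these lines (every name, value, and metric here is hypothetical):

```python
# Strawman definitions for the disco pane scenarios above (hypothetical names/values).
DISCO_PANE_EXPERIMENTS = {
    "install-button-style": {
        "variants": ["on-off-toggle", "add-to-firefox", "install-now", "free"],
        "primary_metric": "install_rate",
    },
    "curated-addon-count": {
        "variants": [3, 5, 7, 10],          # how many curated add-ons to show
        "primary_metric": "install_rate",
    },
    "show-alternative-addons": {
        "variants": [True, False],
        "primary_metric": "install_rate",
    },
}
```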
I’m sure there are heaps of things we can test. Do you have more ideas?
Hi Bram, I am a strong believer in showing star ratings and the number of raters in a product listing. I propose we test these as well. See #143 for one example of where to place the stars in the add-on summary. Here is a link that shows one way to run the test (a lower star rating backed by many reviewers vs. a higher rating backed by only 3, 4, or 5 reviewers): http://baymard.com/blog/user-perception-of-product-ratings
On issue #143, andymckay wrote:
> I’d love to discuss more things that we can test.
> Here are my initial test scenarios, just for the disco pane:
> I’m sure there are heaps of things we can test. Do you have more ideas?