{"pageProps":{"frontmatter":{"title":"LMSYS Kaggle Competition – Predicting Human Preference with $100,000 in Prizes","author":"LMSYS Arena Team","date":"May 2, 2024","previewImg":"/images/blog/kaggle_competition/thumb_4x.png"},"content":"\nLMSYS and Kaggle are launching a human preference prediction competition! You are challenged to predict which responses users will prefer in head-to-head battles between Large Language Models (LLMs). You'll work with a dataset from the [Chatbot Arena](https://chat.lmsys.org), containing conversations and user preferences across various LLMs. By developing a model that accurately predicts human preferences, you'll contribute to improving chatbot performance and alignment with user expectations. The training dataset includes over 55,000 real-world user and LLM conversations and user preferences, with personally identifiable information removed. Your solution submission will be tested on a hidden test set of 25,000 samples.\nThe dataset includes real-world conversations with over 70 state-of-the-art LLMs, such as GPT-4, Claude 2, Llama 2, and Mistral models. [Click here to join the competition.](https://www.kaggle.com/competitions/lmsys-chatbot-arena/overview)\n\n<img src=\"/images/blog/kaggle_competition/header_4x.png\" style=\"width: 60%; max-width: 60%; margin-left: auto; margin-right: auto; margin-top: 0px; margin-bottom: 0px\"></img>\n\nCurrent LLM benchmarks often fail to capture real-world LLM usage, resulting in a discrepancy between model performance and user satisfaction. Platforms like Chatbot Arena allow users to submit questions and vote on preferred responses; however, the potential of this data has been largely untapped in developing models that predict and optimize for user preferences at scale. Predicting user preferences is essential for creating human-aligned conversational AI that delivers a satisfying user experience. Successful models could enable language models to dynamically adapt their output based on individual preferences across different contexts and use cases. Moreover, this competition aims to uncover the factors that drive user preferences beyond objective correctness. Many user questions are open-ended, and we have already found a correlation between user preference and subjective qualities like conversationality. This could also be one of the best testbeds for reward modeling in your RLHF algorithms.\n\nThe competition will run until August 5th, **with a total prize of $100,000**, featuring a $25,000 prize for 1st place, 20,000 prizes for 2nd through 4th places, and a 15,000 prize for 5th place. This is your opportunity to contribute to the advancement of human-aligned language models while gaining valuable insights into human preferences and decision-making. These insights could provide value to both the computer science and psychology communities, shedding light on the factors that shape human preferences in conversational AI.\n","slug":"2024-05-02-kaggle-competition"},"__N_SSG":true} |