- Github repo: https://github.com/openai/gpt-3
- Root Dev tools: https://www.notion.so/Getting-Started-e8b69bfedfd84efa9855d373f797f794
- https://www.greaterwrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper
- https://jalammar.github.io/how-gpt3-works-visualizations-animations/
- https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277
Tweak individual requests
item | Example/Default | Remarks |
---|---|---|
engine | davinci | Engine ID (there are 4 main engines: ada, babbage, curie, davinci) |
prompt | // | Input which will be "completed" by the system |
max_tokens | 20 | How many "tokens" (words or portions of words) to return in a completion, max of 512 |
temperature | 0.5 | 0-1; closer to 0 is more predictable, closer to 1 is more random |
top_p | // | 0-1; nucleus sampling: only tokens within this cumulative probability mass are considered. Use this OR temperature, not both |
n | // | Number of choices to create for each prompt |
stream | // | True/false; if true, partial results are streamed back as they are generated |
logprobs | // | Include the log probabilities of the n most likely tokens at each position |
stop | \n | Stop sequence; generation halts when it is produced |
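
A minimal sketch of how these parameters fit together in a raw HTTP request, using Python's `requests` library. The endpoint URL, the example prompt, and the `OPENAI_API_KEY` environment variable are assumptions here, not taken from the notes above; adjust them to match the current API docs.

```python
# Sketch of a single completion request using the parameters from the table.
# Assumes the engines completions endpoint and an API key in OPENAI_API_KEY.
import os

import requests

ENGINE = "davinci"
URL = f"https://api.openai.com/v1/engines/{ENGINE}/completions"

payload = {
    "prompt": "Once upon a time,",  # input text the model will "complete"
    "max_tokens": 20,               # how many tokens to generate
    "temperature": 0.5,             # closer to 0 = predictable, closer to 1 = random
    "n": 1,                         # number of completions per prompt
    "stop": "\n",                   # stop generating at a newline
}

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
)
response.raise_for_status()

# Each completion comes back as an entry in the "choices" list.
for choice in response.json()["choices"]:
    print(choice["text"])
```

To try a different sampling strategy, swap `temperature` for `top_p` in the payload (one or the other, not both).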