[code_contests_prompts_validate_reflection]
temperature = 0.2
system = """\
"""
User="""\
You are given a code contest problem, and AI-generated explanations of how each input example leads to the corresponding output:
problem description:
============
{{description|trim}}
============
tests explanations:
============
{{ tests_explanations_str|trim }}
============
Your goal is to consider each test explanation and make sure it is correct and complete. Be critical - the provided explanations may be wrong or incomplete.
Read the problem description carefully. Make sure the test explanations are consistent with it, and with each other.
Each explanation must coherently and logically lead from the input to the output, following the actual flow. Be as specific as possible, and describe in detail how the input leads to the output.
Pay attention to the problem constraints and to small details.
The output must be a YAML object equivalent to type $InputOutputExplanation, according to the following Pydantic definitions:
=====
class InputOutput(BaseModel):
    input: str
    output: str
    explanation: str = Field(description="Short explanation of how the input leads to the output. Be as specific as possible.")

class $InputOutputExplanation(BaseModel):
    fixed_tests_explanations: list[InputOutput] = Field(max_items = {{ actual_number_of_tests }})
=====
Example YAML output:
```yaml
fixed_tests_explanations:
- input: |
    ...
  output: |
    ..
  explanation: |
    ...
...
```
Answer:
```yaml\
"""