Merge pull request #52 from barnett-yuxiang/main
Update .gitignore, README, and configuration files
mrT23 authored May 17, 2024
2 parents af08292 + c16bdf5 commit f4e9044
Showing 3 changed files with 26 additions and 10 deletions.
10 changes: 10 additions & 0 deletions .gitignore
@@ -5,8 +5,18 @@
venv/

# codecontests datasets
# https://huggingface.co/datasets/talrid/CodeContests_valid_and_test_AlphaCodium/blob/main/codecontests_valid_and_test_processed_alpha_codium.zip
valid_and_test_processed/

# Python cache files
**/__pycache__/

# Cache files generated during documentation builds
docs/.cache/

# IDEA project-specific settings and configuration files
.idea/

alpha_codium/settings/.secrets.toml
dataset_output.json
example.log
17 changes: 11 additions & 6 deletions README.md
@@ -41,9 +41,14 @@ Many of the principles and best practices we acquired in this work, we believe,

## Installation

(1) setup a virtual environment and run: `pip install -r requirements.txt`
(1) Set up a virtual environment:
```bash
python3 -m venv venv
source ./venv/bin/activate
```
and run: `pip install -r requirements.txt`.
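Before installing, it is worth confirming that the virtual environment is actually active; a minimal, generic check (standard library only, not part of the AlphaCodium repo):

```python
import sys

# Inside an activated venv, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base interpreter.
in_venv = sys.prefix != sys.base_prefix
print(sys.executable)  # the interpreter that `pip install` will target
print(in_venv)
```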

(2) Duplicate the file `alpha_codium/settings/.secrets_template.toml`, rename it as `.secrets.toml`, and fill in your OpenAI API key:
(2) Duplicate the file `alpha_codium/settings/.secrets_template.toml`, rename it as `alpha_codium/settings/.secrets.toml`, and fill in your OpenAI API key:
```
[openai]
key = "..."
@@ -87,7 +92,7 @@ to solve the entire dataset with AlphaCodium, from the root folder run:
```
python -m alpha_codium.solve_dataset \
--dataset_name /path/to/dataset \
--split_name test
--split_name test \
--database_solution_path /path/to/output/dir/dataset_output.json
```

@@ -101,9 +106,9 @@ python -m alpha_codium.solve_dataset \

Once you generate a solution for the entire dataset (valid or test), you can evaluate it by running:
```
python -m alpha_codium.evaluate_dataset\
--dataset_name /path/to/dataset\
--split_name test\
python -m alpha_codium.evaluate_dataset \
--dataset_name /path/to/dataset \
--split_name test \
--database_solution_path /path/to/output/dir/dataset_output.json
```
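The `--database_solution_path` file uses a `.json` extension, so it can presumably be inspected with the standard `json` module; a sketch using a hypothetical one-entry record (the real schema is defined by `alpha_codium.solve_dataset` and is not shown in this diff):

```python
import json

# Hypothetical record shape for illustration only; the actual keys
# written by solve_dataset are not documented in this diff.
raw = '{"problem_0": {"solution": "print(1)"}}'
solutions = json.loads(raw)
print(len(solutions))  # → 1
```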

9 changes: 5 additions & 4 deletions alpha_codium/settings/configuration.toml
@@ -1,7 +1,8 @@
[config]
model="gpt-4-0125-preview"
#model="gpt-4-0613"
#model = "gpt-3.5-turbo-16k"
# model="gpt-4o-2024-05-13"
# model="gpt-4-0613"
# model="gpt-3.5-turbo-16k"
frequency_penalty=0.1
ai_timeout=90 # seconds
fallback_models =[]
@@ -60,5 +61,5 @@ trace_depth=4
[code_contests_tester]
stop_on_first_failure = false
timeout = 3
path_to_python_bin = "/usr/bin/python3.9"
path_to_python_lib = ["/usr/lib", "/usr/lib/python3.9"]
path_to_python_bin = "./venv/bin/python3.9"
path_to_python_lib = ["./venv/lib", "./venv/lib/python3.9"]
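The new defaults assume the venv created during installation. To find the right values for your machine, the running interpreter can report its own paths (standard library only; mapping these onto the two settings is an assumption, since the diff does not document them):

```python
import sys
import sysconfig

# Path of the running interpreter: a candidate for path_to_python_bin.
print(sys.executable)
# Standard-library directory: a candidate entry for path_to_python_lib.
print(sysconfig.get_path("stdlib"))
```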
