My first attempt at building an AI "agent". The agent will iteratively try to solve Advent of Code puzzles. It'll run as a Temporal workflow, largely so that I can go back and inspect the full process the agent takes while working through a problem, since Temporal conveniently logs workflow history. I'll also use Temporal's timeout configuration and activity scheduling to keep the agent from just pounding the AoC server with guesses.
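To make that pacing idea concrete, here's a minimal sketch of the pattern (not the actual `agent/temporal/workflow.py`), assuming the `temporalio` Python SDK. The activity, workflow class, timeout values, and retry settings below are illustrative placeholders.

```python
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy


@activity.defn
async def submit_answer(answer: str) -> bool:
    # Placeholder: a real activity would submit the guess to adventofcode.com
    # and report whether it was accepted. Network I/O belongs in activities.
    return False


@workflow.defn
class SolveDayWorkflow:
    @workflow.run
    async def run(self, candidate_answers: list[str]) -> bool:
        for answer in candidate_answers:
            accepted = await workflow.execute_activity(
                submit_answer,
                answer,
                # Hard cap on how long a single submission attempt may run.
                start_to_close_timeout=timedelta(seconds=30),
                # Flat 1-minute retry interval (no exponential growth), so a
                # flaky call never turns into a burst of outbound requests.
                retry_policy=RetryPolicy(
                    initial_interval=timedelta(minutes=1),
                    backoff_coefficient=1.0,
                    maximum_attempts=5,
                ),
            )
            if accepted:
                return True
            # Durable Temporal timer between distinct guesses; like every
            # activity attempt and retry, it is recorded in workflow history.
            await asyncio.sleep(60)
        return False
```

Because each attempt, retry, and timer shows up as an event in the workflow's history, replaying the agent's full problem-solving run afterwards comes for free.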
This agent does follow the automation guidelines on the /r/adventofcode community wiki. Specifically:
- Puzzle inputs/solutions are not tracked by git (see: `.gitignore`).
- Outbound call retries are throttled to every 1 minute in `agent/temporal/workflow.py`.
- Once inputs are downloaded, they are cached locally (see: `agent/adventofcode/scrape_problems.py`); a rough sketch of this caching follows the list.
- If you suspect a day's input is corrupted, you can manually request a fresh copy by deleting the cached `problem.html` file for that day.
- The User-Agent header in `agent/adventofcode/_HEADERS.py` is set to me since I maintain this tool :)
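Here's a rough sketch of the caching behavior described above. The cache directory layout, function signature, and User-Agent string are hypothetical stand-ins, not copied from `agent/adventofcode/scrape_problems.py` or `agent/adventofcode/_HEADERS.py`.

```python
from pathlib import Path

import requests

# Illustrative only; the real header lives in agent/adventofcode/_HEADERS.py.
_HEADERS = {"User-Agent": "github.com/<owner>/<repo> maintained by <email>"}

CACHE_DIR = Path("cache")  # hypothetical cache location


def fetch_problem_html(year: int, day: int, session_cookie: str) -> str:
    """Return a day's problem page, downloading it at most once."""
    cached = CACHE_DIR / str(year) / f"day{day}" / "problem.html"
    if cached.exists():
        # Deleting this file is what forces a fresh download on the next run.
        return cached.read_text()

    resp = requests.get(
        f"https://adventofcode.com/{year}/day/{day}",
        headers=_HEADERS,
        cookies={"session": session_cookie},
        timeout=30,
    )
    resp.raise_for_status()

    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_text(resp.text)
    return resp.text
```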
On day 4 I realized it'd be fun to keep an explicit log of how things are going.
❗ *Hypothetical Global Leaderboard Rank* ❗
I'm not a rule-breaker, so I'm intentionally not running the agent until AFTER the global leaderboard is full. But it's interesting to see what ranking I COULD'VE gotten.
Day | Part 1 | Part 2 | Global Rank* | Total Agent Workflow Time |
---|---|---|---|---|
1 | ✅ | ✅ | ? | - |
2 | ✅ | ✅ | ? | - |
3 | ✅ | ✅ | ? | - |
4 | ✅ | ✅ | #83 | |
5 | ✅ | ✅ | #31 | |
6 | ✅ | ✅ | N/A | |
7 | ✅ | ✅ | #3 | |
8 | ✅ | ❌ | N/A | |
9 | ✅ | ✅ | #22 | |
10 | ✅ | ✅ | #41 | |
11 | ✅ | ✅ | N/A | |
12 | ✅ | ❌ | N/A | |
13 | ✅ | ❌ | N/A | - |
14 | ✅ | ❌ | N/A | |
15 | ❌ | ❌ | N/A | |
16 | ✅ | ❌ | N/A | |
17 | ❌ | ❌ | N/A | |
18 | ✅ | ✅ | #8* | |
19 | ✅ | ✅ | #48 | |
20 | ❌ | ❌ | N/A | |