Original version live-coded on YouTube.

The implemented algorithm is almost exactly what was outlined (and very well explained) in this 3blue1brown video.

Please do tinker with it and see how much you can push it — there are almost certainly gains to be had! I've also left some TODOs from the 3b1b algorithm that should improve the guesses a fair bit. It'd also be really neat to add a mode that computes the first word by evaluating multiple levels of expected information (again, like 3b1b), instead of hard-coding it as we do at the moment.
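To make the "expected information" idea concrete, here is a minimal Rust sketch (not the repo's actual implementation — function names and the flat-prior assumption are mine): for a guess, count how the candidate answers split across the 3^5 = 243 possible colour patterns, then compute the entropy of that split in bits.

```rust
/// Encode the Wordle feedback for `guess` against `answer` as a number in
/// 0..243 (base 3: 0 = grey, 1 = yellow, 2 = green per letter).
fn pattern(guess: &str, answer: &str) -> usize {
    let g = guess.as_bytes();
    let a = answer.as_bytes();
    let mut marks = [0usize; 5];
    let mut used = [false; 5];
    // First pass: greens (exact position matches).
    for i in 0..5 {
        if g[i] == a[i] {
            marks[i] = 2;
            used[i] = true;
        }
    }
    // Second pass: yellows, consuming each answer letter at most once
    // so duplicated letters are handled correctly.
    for i in 0..5 {
        if marks[i] == 0 {
            for j in 0..5 {
                if !used[j] && g[i] == a[j] {
                    marks[i] = 1;
                    used[j] = true;
                    break;
                }
            }
        }
    }
    marks.iter().fold(0, |acc, &m| acc * 3 + m)
}

/// Expected information (entropy, in bits) of `guess`, assuming every
/// remaining candidate is equally likely to be the answer.
fn expected_information(guess: &str, candidates: &[&str]) -> f64 {
    let mut counts = [0usize; 243];
    for answer in candidates {
        counts[pattern(guess, answer)] += 1;
    }
    let n = candidates.len() as f64;
    counts
        .iter()
        .filter(|&&c| c > 0)
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum()
}

fn main() {
    // Three candidates that each produce a distinct pattern against "crate",
    // so the guess yields log2(3) ≈ 1.585 bits.
    let candidates = ["crate", "trace", "caret"];
    let bits = expected_information("crate", &candidates);
    println!("expected information of \"crate\": {:.3} bits", bits);
}
```

The real solver also weights candidates by word frequency rather than a flat prior, which is where the ngram dataset below comes in.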

Dataset

If you want to remake dictionary.txt yourself, first make corpus/wordle.txt by grabbing the words from the Wordle source code (that's also how you get answers.txt). Next, grab the ngram dataset by downloading these. Then run:

cd corpus
cargo r --release /path/to/1-*-of-00024.gz | tee ../dictionary.txt

License

Licensed under either of

- Apache License, Version 2.0
- MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.