documentation: what are generation failures #201

Open
wilbowma opened this issue Oct 3, 2019 · 4 comments
wilbowma (Collaborator) commented Oct 3, 2019

I have a program using redex-check that looks like

> (redex-check
    BoxyTypingL
    #:satisfying (type-infer · · e A)
    (redex-match? BoxyEvalL v (term (eval e)))
    #:attempts 1000)
redex-check: no counterexamples in 78 attempts (with 922 generation failures)

I cannot for the life of me figure out why there would be so many failures, and looking through the documentation, I don't see any references that help me make sense of it.

As I understand #:satisfying, there ought not to be any failures! It should have generated a bunch of well-typed terms...

rfindler (Member) commented Oct 3, 2019

There are a lot of reasons why there might be generation failures. Basically, it is a randomized solver that goes off searching for solutions, and sometimes it backs itself into a corner and can't get out. So it gives up and restarts the process.

One thing that can happen (and is hard to diagnose) is simply that randomly selecting the next "layer" of the derivation has a relatively low probability of success. For example, if there are 10 choices and making any one of them means you need to select a specific next choice, that's effectively a 90% chance of failure, one you'll repeat many times as you build up the tree.

That's the main issue, but there are others too: the generator might just not be smart about certain combinations of Redex features, in ways that could be fixed but that we just haven't identified yet. And, of course, there could be a bug! (I'm pretty sure that historically there were regular ole bugs that manifested as certain models always or almost always failing generation.)
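The compounding-failure effect described above can be sketched with a toy model (this is not Redex's actual algorithm, and `attempt`/`run` are hypothetical names, just a back-of-the-envelope illustration): if each derivation layer independently succeeds with probability p, a depth-d derivation completes with probability p^d, so even modest per-layer failure rates leave most attempts as restarts.

```python
import random

def attempt(depth, p_success=0.1, rng=random):
    """Toy model: build a derivation layer by layer; each layer
    succeeds only with probability p_success."""
    for _ in range(depth):
        if rng.random() >= p_success:
            return False  # backed into a corner: give up and restart
    return True

def run(attempts=1000, depth=2, p_success=0.1, seed=0):
    """Count how many of `attempts` tries produce a full derivation."""
    rng = random.Random(seed)
    successes = sum(attempt(depth, p_success, rng) for _ in range(attempts))
    return successes, attempts - successes

successes, failures = run()
# With a 10% per-layer success rate and depth 2, only about 1% of
# attempts complete, so the vast majority count as generation failures.
print(successes, failures)
```

The point is only that failure counts in the hundreds don't by themselves indicate a bug; they can fall directly out of the branching structure of the model.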

rfindler (Member) commented Oct 3, 2019

Oh, and if someone actually wants to get dirty working with the solver, I believe that @bfetscher built something to visualize the steps that the solver takes so you can try to get a feel for what's going on with a specific model in an attempt to improve things. I've not really used it myself, however.

rfindler (Member) commented Oct 3, 2019

Oh, and @bfetscher gave a nice talk about how this all works but I cannot find it online. I stole his slides and gave a version of it here: https://www.youtube.com/watch?v=SEb7FjZwRUk in case you're curious about how the solver is working (although this may already be old news, in which case I apologize).

wilbowma (Collaborator, Author) commented Oct 3, 2019

Okay, this looks like Redex is not doing what I expected, or is counting attempts differently than I expected. I was imagining that Redex would do more or less the same thing as a miniKanren program: if I request 1000 solutions, it would search backwards until it found 1000 valid solutions. It sounds like Redex instead searches until it has tried 1000 unifications (or something), some of which fail.
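The two counting semantics being contrasted here can be sketched as two loops (the `generate` stand-in and both function names are hypothetical, not Redex's or miniKanren's APIs; this is only a sketch of the difference):

```python
import random

def generate(rng):
    """Stand-in for one solver attempt: returns a term on success,
    or None on a generation failure."""
    return "term" if rng.random() < 0.1 else None

def redex_style(n_attempts, rng):
    """What #:attempts appears to mean: make n attempts total,
    reporting the generation failures among them."""
    terms, failures = [], 0
    for _ in range(n_attempts):
        t = generate(rng)
        if t is None:
            failures += 1
        else:
            terms.append(t)
    return terms, failures

def minikanren_style(n_solutions, rng):
    """What a miniKanren-style `run n` does: keep searching until
    n solutions are found; failures are invisible to the caller."""
    terms = []
    while len(terms) < n_solutions:
        t = generate(rng)
        if t is not None:
            terms.append(t)
    return terms
```

Under the first loop, 1000 attempts with a 10% success rate yield roughly 100 terms; the second loop always returns exactly the requested number of terms but may internally make many times that many attempts.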

@wilbowma wilbowma self-assigned this Oct 4, 2019