
Generalization of LowerUnivalents #57

Open · ceilican opened this issue Apr 25, 2013 · 6 comments

@ceilican (Member)
Joseph wrote:

While working on the slides for PCC's talk, I thought about a new idea to improve LUniv. I think it may remove the dependency on ordering.

What is the idea behind LU and LUniv? It is that a node with many incoming edges labeled with the same literal may be delayed until all of its resolvents have been resolved. Lowering it all the way down to the bottom is overkill and limits the nodes we can lower.

The new idea is therefore to perform a first bottom-up traversal. This traversal must make it possible, given a set of nodes, to determine the highest node that lies on all paths from the root to any node of the set. I hope this is clear enough for you to understand. Your comments are welcome.
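To make the traversal idea concrete, here is a minimal sketch of one way it could work, assuming a toy `Node` model rather than Skeptik's actual proof classes (all names here are hypothetical). On a DAG, the nodes lying on every path from the root to a node n are exactly n's dominators, and since a proof has no cycles they can be computed in a single pass in topological order from the root:

```scala
// Toy proof-node model (hypothetical, not Skeptik's API): the root is the
// final conclusion; `premises` point upwards, towards the axioms.
final class Node(val id: Int, val premises: List[Node])

// dom(n) = the set of nodes lying on EVERY path from the root to n.
// A proof is a DAG, so one pass in topological order from the root suffices.
def dominators(root: Node): Map[Node, Set[Node]] = {
  import scala.collection.mutable
  // reverse edges: for each node, the resolvents that use it as a premise
  val users = mutable.Map[Node, List[Node]]().withDefaultValue(Nil)
  val all = mutable.Set[Node]()
  def visit(n: Node): Unit = if (all.add(n))
    n.premises.foreach { p => users(p) = n :: users(p); visit(p) }
  visit(root)

  val pending = mutable.Map[Node, Int]() // unprocessed users per node
  all.foreach(n => pending(n) = users(n).size)
  val queue = mutable.Queue(root)
  val dom = mutable.Map[Node, Set[Node]]()
  while (queue.nonEmpty) {
    val n = queue.dequeue()
    val viaUsers = users(n).map(dom) // dominator sets of the nodes using n
    dom(n) = if (viaUsers.isEmpty) Set(n)
             else viaUsers.reduce(_ intersect _) + n
    n.premises.foreach { p =>
      pending(p) -= 1
      if (pending(p) == 0) queue.enqueue(p)
    }
  }
  dom.toMap
}

// The highest node on all paths from the root to any node of `ns`:
// intersect their dominator sets and take the element farthest from the
// root (the dominators of a node form a chain, so the deepest common
// dominator is the one with the largest dominator set).
def highestCommonNode(root: Node, ns: Set[Node]): Node = {
  val dom = dominators(root)
  ns.map(dom).reduce(_ intersect _).maxBy(n => dom(n).size)
}
```

In the semilattice view Jogo27 describes later in this thread, `highestCommonNode(root, N)` is exactly glb(N).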

@ceilican (Member, Author)

> What is the idea behind LU and LUniv? It is that a node with many incoming edges labeled with the same literal may be delayed until all of its resolvents have been resolved.

Yes. Another way of understanding "lowering" is as a kind of factoring, as you have already noticed.

I had briefly (and only theoretically) investigated the idea of factoring in that paper on the "...algebraic properties of resolution". However, when I invented LU, the connection between lowering and factoring was not completely clear to me; I had arrived at the idea of LU relatively independently of my previous ideas about factoring. Only recently did I fully realize that lowering is an extreme (overkill, as you said) kind of factoring.

> Lowering it all the way down to the bottom is overkill and limits the nodes we can lower.

Yes, it is like factoring an arithmetical expression. If we have "5 + (2*3 + 2*4)", we can compress it to "5 + 2*(3+4)". In this case, we lowered the "2" not to the very bottom of the expression, but only to some place in the middle.

I thought about this recently as well, as the analogies between lowering and factoring started to become clearer to me, especially thanks to your work on LUniv.

In a way, LUniv is a particular case of lowering not to the very bottom, because univalent subproofs are reintroduced above those that have already been lowered. But LUniv is still restricted to producing compressed proofs where the bottom has a particular "linear" shape.

I was wondering if we could have a more "local" kind of "LowerNodes", capable of reintroducing nodes at arbitrary places in the proof, but my ideas in this direction are not so concrete yet.

> The new idea is therefore to perform a first bottom-up traversal. This traversal must make it possible, given a set of nodes, to determine the highest node that lies on all paths from the root to any node of the set. I hope this is clear enough for you to understand. Your comments are welcome.

This paragraph was not so clear to me. Could you rephrase it?

@ceilican (Member, Author)

The vague idea I have had so far still involves a single top-down traversal:

In LUniv, as we traverse the proof, we collect univalent subproofs and do nothing with them until we finish the traversal. Only then do we start reintroducing the univalent subproofs.

In the not-yet-existing "LowerNodes", as we traverse the proof, we should collect "lowerable subproofs" and simultaneously check whether it is already time to reintroduce some of the collected lowerable subproofs.

What is not yet clear to me (because I haven't had time to think about this yet) is:

  1. what constitutes a "lowerable subproof";
  2. how we know the right moment to reintroduce the collected "lowerable subproofs".

I suspect that my previous investigation of distributivity/factoring in "...algebraic properties of resolution" might be a first tiny step towards answering these questions.

In terms of implementation, I also find it plausible that we might need some extra traversals to be able to do these checks in polynomial time.
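For what it is worth, a skeleton of such a traversal might look as follows. This is purely an assumed sketch reusing the toy `Node` model from the earlier comment, not an existing Skeptik algorithm; the two open questions above appear as placeholder predicates whose stub implementations are deliberately trivial:

```scala
import scala.collection.mutable

// Placeholder stubs: these are exactly the two open questions above.
def isLowerable(n: Node): Boolean = false                     // open question 1
def timeToReintroduce(low: Node, here: Node): Boolean = false // open question 2
// Stubs standing in for real proof reconstruction:
def rebuild(n: Node, ps: List[Node]): Node = new Node(n.id, ps)
def reintroduce(here: Node, low: Node): Node = here // would resolve `low` back in

def lowerNodes(root: Node): Node = {
  val collected = mutable.Set[Node]()
  def traverse(n: Node): Node = {
    if (isLowerable(n)) collected += n                // collect as we go
    val fixed = rebuild(n, n.premises.map(traverse))  // (DAG memoization omitted)
    val ready = collected.filter(timeToReintroduce(_, fixed))
    collected --= ready                               // reintroduce as early as
    ready.foldLeft(fixed)(reintroduce)                // possible, not only at the bottom
  }
  traverse(root)
}
```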

@Jogo27 (Member) commented Apr 27, 2013

A proof (as a DAG) can be viewed as a meet-semilattice, i.e. each set of nodes has a greatest lower bound, glb(set).

Now let's consider a subproof φ_ℓ with a set E of incoming edges labeled with the same literal ℓ. The set of all resolvents from E is denoted N, and φ_s is the subproof with glb(N) as root. What I mean is to replace φ_s by (φ_s \ E) ⊙_ℓ φ_ℓ. Let's write Γ_x and Γ_s for the conclusions of (φ_s \ E) and φ_s, respectively. The condition we have to ensure is Γ_x ⊆ Γ_s ∪ {ℓ}.
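The side condition is easy to state with clauses modeled as sets of literals; the representation below is a toy one, assumed purely for illustration:

```scala
// Toy clause representation (assumed, not Skeptik's).
case class Lit(atom: String, positive: Boolean)
type Clause = Set[Lit]

// Γ_x ⊆ Γ_s ∪ {ℓ}: removing the edges in E may add at most the lowered
// literal ℓ to the conclusion; ℓ is then resolved away again when φ_ℓ is
// reintroduced below glb(N).
def conditionHolds(gammaX: Clause, gammaS: Clause, l: Lit): Boolean =
  gammaX.subsetOf(gammaS + l)
```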

@Jogo27 (Member) commented Apr 27, 2013

For the implementation you may be right. Perhaps it would be possible to propagate sets of subproofs to be lowered down the proof. If the intersection of the left and right sets is non-empty, those subproofs would be reintroduced below the current node.
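Under the same toy model as above (and again only as an assumed sketch, reusing the `reintroduce` stub from the earlier skeleton), the propagation step at a binary node could look like this:

```scala
// At a node with two premises, combine the sets of pending subproofs
// arriving from each side; subproofs pending on BOTH sides are
// reintroduced immediately below the current node.
def propagateDown(left: Set[Node], right: Set[Node], here: Node): (Node, Set[Node]) = {
  val meetHere     = left intersect right          // pending on both branches
  val stillPending = (left union right) diff meetHere
  val fixed = meetHere.foldLeft(here)(reintroduce) // reintroduce below `here`
  (fixed, stillPending)
}
```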

@Jogo27 (Member) commented Apr 27, 2013

My second-to-last comment above is wrong. The condition must be that the conclusion of (φ_s \ E) ⊙_ℓ φ_ℓ subsumes the conclusion of φ_s. The major issue arises when some literals from the conclusion of φ_ℓ are deleted by different subproofs in each branch.
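In the toy clause representation above, the corrected condition reduces to a subset check (propositional subsumption):

```scala
// Corrected condition: the conclusion of (φ_s \ E) ⊙_ℓ φ_ℓ must subsume
// (i.e. be a subset of) the conclusion of the original φ_s.
def correctedCondition(newConclusion: Clause, oldConclusion: Clause): Boolean =
  newConclusion.subsetOf(oldConclusion)
```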

@ceilican (Member, Author)

Let's discuss this in Helsinki.

ghost assigned Jogo27 on Jul 19, 2013