\chapter{Conclusions}
In this work, we have reviewed and implemented an interior-point method for solving semidefinite programs.
In experiments on synthetic semidefinite programs, we have verified the correctness of the implementation and compared it to the state-of-the-art methods.
The results showed that our implementation is significantly slower than the state-of-the-art methods; however, an efficient semidefinite program solver was not a goal of this work.
Rather, the goal was to understand the implemented method, so that we can exploit its advantages and mitigate its disadvantages when it is applied in polynomial optimization methods.
In \refsec{SDP:sa} we have shown that typically only a few iterations of \refalg{SDP:scb:pf} are enough to obtain a good approximation of the optimal point.
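The path-following idea behind such barrier methods can be illustrated on a toy example. The sketch below is a hypothetical Python illustration, not the thesis implementation: it minimizes $\langle C, X\rangle$ over $X = \mathrm{diag}(x, 1-x) \succeq 0$, which reduces to a scalar problem, and follows the central path of the log-det barrier with damped Newton centering steps for an increasing barrier parameter $t$.

```python
import math

# Toy barrier path-following illustration (hypothetical example, not the
# thesis solver): minimize <C, X> over X = diag(x, 1-x), X >= 0,
# i.e. minimize c1*x + c2*(1-x) for x in (0, 1).
c1, c2 = 1.0, 2.0
d = c1 - c2  # derivative of the linear objective w.r.t. x

def barrier_obj(x, t):
    # t * objective - log det X, with X = diag(x, 1-x)
    return t * (c1 * x + c2 * (1 - x)) - math.log(x) - math.log(1 - x)

x = 0.5            # strictly interior starting point
t = 1.0
while t < 1e5:     # follow the central path by increasing t
    for _ in range(50):                         # damped Newton centering
        g = t * d - 1 / x + 1 / (1 - x)         # gradient
        h = 1 / x**2 + 1 / (1 - x)**2           # Hessian (always > 0)
        step = -g / h
        # backtrack so the iterate stays strictly inside (0, 1)
        while not (0 < x + step < 1) or barrier_obj(x + step, t) > barrier_obj(x, t):
            step *= 0.5
        x += step
        if abs(g) < 1e-9 * h:
            break
    t *= 10.0

objective = c1 * x + c2 * (1 - x)   # approaches min(c1, c2) = 1 as t grows
```

As $t$ grows, the minimizer of the penalized objective traces the central path towards the boundary optimum, which is why a few centering steps per value of $t$ already give a good approximation.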
Furthermore, we have focused on polynomial optimization.
We have implemented a method that uses hierarchies of semidefinite programs to solve the original non-convex problem.
The new implementation has been compared to the state-of-the-art methods on synthetically generated polynomial optimization problems.
The results show that our implementation is slower than the state-of-the-art methods, mainly because of the inefficient SDP solver, which is the most time-consuming part of the algorithm.
The main contribution of this work is the review and implementation of the moment method for solving polynomial systems.
The advantage of this method is that it finds only the real solutions of a polynomial system, which can save computation time.
We have successfully applied this method to minimal problems from the geometry of computer vision, which is a novel idea in this field.
To evaluate the performance of our implementation of the moment method on real problems, we have described two simple minimal problems from the geometry of computer vision (namely the P3P and P3.5Pf problems) and tested them on a real 3D scene.
We have found that for the P3P problem about 40\,\% of the solutions are non-real and therefore need not be computed; for the P3.5Pf problem it is about 50\,\% of all solutions.
The comparison with the algebraic solvers showed that the moment method is applicable to these minimal problems, i.e.\ the estimation errors of the camera poses and the focal lengths are comparable to the results of the state-of-the-art methods.
The main drawback of the moment method is its computation time, which is significantly higher than that of the algebraic solvers generated by the automatic generator \cite{autogen}.
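The saving from skipping non-real solutions can be illustrated on a toy univariate polynomial (a hypothetical example; the actual P3P and P3.5Pf systems are multivariate). An algebraic solver computes all complex roots as eigenvalues of a multiplication (companion) matrix and only afterwards discards the non-real ones, whereas the moment method targets the real roots directly.

```python
import numpy as np

# Toy example: p(x) = x^4 - 1 = (x^2 - 1)(x^2 + 1)
# has 4 complex roots, of which only 2 are real.
coeffs = [1.0, 0.0, 0.0, 0.0, -1.0]   # monic, highest degree first

# Companion (multiplication) matrix of p; its eigenvalues are the roots.
n = len(coeffs) - 1
companion = np.zeros((n, n))
companion[1:, :-1] = np.eye(n - 1)                     # shift structure
companion[:, -1] = [-c for c in reversed(coeffs[1:])]  # last column
roots = np.linalg.eigvals(companion)

# The algebraic approach computes all 4 eigenvalues and then discards
# the non-real ones; the moment method would return only the real roots.
real_roots = [r.real for r in roots if abs(r.imag) < 1e-8]
```

Here half of the eigenvalue computation is wasted on the roots $\pm i$, mirroring the roughly 40--50\,\% of non-real solutions observed for the P3P and P3.5Pf problems.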
\section{Future work}
We have seen that the described moment method can be applied to minimal problems from computer vision, but due to its slow computation time it is not competitive with the algebraic methods.
The performance can be improved in several ways.
Firstly, it turned out that the semidefinite programs constructed inside the moment method typically have no strictly feasible interior point.
Therefore, the interior-point algorithm we have implemented in \refcha{SDP} cannot be used to find a feasible point in the relative interior.
In this work, we have resolved this issue by solving an additional semidefinite program \refeqb{POP:sol:SDP_tau}.
A different approach would be to implement another SDP solver based on an infeasible interior-point method.
Yet another solution to this issue may be offered by a method called facial reduction \cite{facial}.
This method finds a reduced version of the original problem by solving a sequence of easier semidefinite programs.
Using it, we would be able to remove the superfluous dimensions of the SDP problem, which would make the problem solvable by an interior-point method; moreover, the size of the problem would be reduced, and therefore some computation time could be saved.
Secondly, it is possible that the idea of automatic generators \cite{autogen, larsson} could be carried over to the optimization world, i.e.\ that for a given problem we would be able to generate a parametrized solver that solves the problem efficiently for a given value of the problem parameters.
This idea is proposed in \cite{SDPstability}.
The authors suggest that if the SDP relaxation is found to be tight for a given value of the parameters, and therefore solves the problem correctly, then under some sufficient conditions the relaxation remains tight even for small perturbations of the parameters.
Moreover, the histograms in \reffig{app:P3P:histRelax} and \reffig{app:P35Pf:histRelax} show that typically one relaxation order prevails over the others.
Therefore, it would make sense not to start from the minimal possible relaxation order in \refalg{POP:sol:alg}, but to solve the problem only for one relaxation order known in advance.
Although we were unable to show that the moment method can beat the algebraic methods in computation time, there might be problems on which the moment method is faster.
Such a problem would probably have many non-real solutions and only a few real ones, so that the moment method could benefit from its advantages.
If such a problem were found, the moment method could be included amongst the state-of-the-art methods for solving polynomial systems in computer vision.
Another advantage of using optimization methods for solving polynomial systems, which we have not shown in this work, is that they can handle overconstrained systems.
An overconstrained system can be solved in exact arithmetic, but it typically has no solution when solved on real noisy data in floating-point arithmetic.
However, the constraints of such a problem can be relaxed and their errors minimized, after which the optimization techniques described in \refsec{POP:opt} can be applied.
We have not provided such an experiment in this work, but it would be interesting to find a problem to which this approach is applicable and to see how it performs compared to the state-of-the-art approaches.
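The relaxation just described can be sketched on a hypothetical toy problem (not one of the minimal problems treated in this thesis): three polynomial distance equations in two unknowns form an overconstrained system with no exact solution on noisy data, but minimizing the sum of squared residuals, here with a plain Gauss--Newton iteration, still recovers a good estimate.

```python
import numpy as np

# Overconstrained toy system (hypothetical example): locate a 2D point
# from squared distances to three anchors.  With noisy measurements the
# three polynomial equations have no common root, so we relax them and
# minimize the squared residuals instead.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_pt = np.array([0.3, 0.4])
dists = np.linalg.norm(anchors - true_pt, axis=1) + np.array([0.01, -0.01, 0.005])

def residuals(p):
    # r_i(p) = ||p - a_i||^2 - d_i^2  (one polynomial equation per anchor)
    return np.sum((p - anchors) ** 2, axis=1) - dists ** 2

def jacobian(p):
    return 2.0 * (p - anchors)       # row i: d r_i / d p

p = np.array([0.5, 0.5])             # initial guess
for _ in range(20):                   # Gauss-Newton iterations
    J, r = jacobian(p), residuals(p)
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    p = p + step
    if np.linalg.norm(step) < 1e-12:
        break
```

Gauss--Newton is used here only as the simplest local least-squares solver; in the setting of \refsec{POP:opt} one would instead minimize the same residual polynomials globally via the SDP hierarchy.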
In this work, we have provided one implementation for solving polynomial optimization problems (\refsec{POP:opt}) and a different one for solving polynomial systems (\refsec{POP:sol}).
This is a consequence of the evolution of this work.
However, since both methods are based on hierarchies of semidefinite programs, it would make sense to implement one universal algorithm able to solve both tasks.
The advantages are obvious.
We would be able to add constraints in the form of polynomial equations to the polynomial optimization problems \refeqb{POP:opt:polmin}, which currently allow only polynomial inequality constraints.
Conversely, we would be able to introduce polynomial inequalities into the systems of polynomial equations and thereby eliminate some solutions.
This would lead to smaller multiplication matrices, and therefore to faster eigenvector computations.
A typical example from minimal problems in computer vision is imposing positivity on the focal lengths.