Non-linear optimization Vs Belief propagation #18

Open
deebuls opened this issue Jun 3, 2022 · 1 comment

@deebuls (Contributor) commented Jun 3, 2022

GTSAM uses non-linear optimization (Gauss-Newton, Levenberg-Marquardt).
- For solving what?
- Why?
- https://project.inria.fr/ppniv19/files/2019/11/miniSAM.pdf

There is another method for solving (??????): Belief Propagation.

Some libraries use non-linear optimization, others use belief propagation.
We have to make a decision on which one we can use.
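As a toy illustration of the kind of problem Gauss-Newton solves, here is a minimal NumPy sketch (not GTSAM code; the model y = exp(a·t) and all names are made up for the example). GTSAM/miniSAM apply the same scheme to the stacked residuals of a factor graph.

```python
# Toy sketch (NumPy only, not GTSAM): Gauss-Newton on a 1-D nonlinear
# least-squares problem. Fit y ≈ exp(a * t) from noisy samples.
import numpy as np

def gauss_newton(t, y, a0, iters=10):
    a = a0
    for _ in range(iters):
        r = y - np.exp(a * t)        # residual vector
        J = -t * np.exp(a * t)       # Jacobian of the residuals w.r.t. a
        a -= (J @ r) / (J @ J)       # normal equations: (J^T J) da = -J^T r
    return a

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t) + 0.01 * rng.standard_normal(t.size)
print(gauss_newton(t, y, a0=0.0))    # converges to roughly 0.7
```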

@deebuls (Contributor, Author) commented Jun 3, 2022

Solving a linear system of equations Ax = b is one of the most fundamental problems in algebra, with countless applications in the mathematical sciences and engineering. In this thesis, we propose an efficient distributed iterative algorithm for solving systems of linear equations.

The problem of solving a linear system of equations is described as follows. Given an observation vector b ∈ R^m and the data matrix A ∈ R^{m×n} (m ≥ n ∈ Z), a unique solution, x = x* ∈ R^n, exists if and only if the data matrix A has full column rank. Assuming a nonsingular matrix A, the system of equations can be solved either directly or in an iterative manner. Direct matrix inversion methods, such as Gaussian elimination (LU factorization, [1]-Ch. 3) or band Cholesky factorization ([1]-Ch. 4), find the solution with a finite number of operations, typically, for a dense n × n matrix, of the order of n^3. The former is particularly effective for systems with unstructured dense data matrices, while the latter is typically used for structured dense systems.

Iterative methods [2] are inherently simpler, requiring only additions and multiplications, and have the further advantage that they can exploit the sparsity of the matrix A to reduce the computational complexity as well as the algorithmic storage requirements [3]. By comparison, for large, sparse and amorphous data matrices, the direct methods are impractical due to the need for excessive matrix reordering operations.

The main drawback of the iterative approaches is that, under certain conditions, they converge only asymptotically to the exact solution x* [2]. Thus, there is the risk that they may converge slowly, or not at all. In practice, however, it has been found that they often converge to the exact solution or a good approximation after a relatively small number of iterations.

A powerful and efficient iterative algorithm, belief propagation (BP, [4]), also known as the sum-product algorithm, has been very successfully used to solve, either exactly or approximately, inference problems in probabilistic graphical models [5].

From https://arxiv.org/pdf/0811.2518.pdf
Gaussian Belief Propagation: Theory and Application
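For reference, a minimal sketch of Gaussian belief propagation solving Ax = b, following the scalar message-passing rules described in the paper above. It assumes A is symmetric and diagonally dominant so that GaBP converges; the example matrix and the message schedule are illustrative only.

```python
# Minimal GaBP sketch: solve A x = b by message passing.
# Assumes A symmetric, diagonally dominant (sufficient for convergence).
import numpy as np

def gabp(A, b, iters=50):
    n = A.shape[0]
    P = np.zeros((n, n))   # P[i, j]: precision of the message from i to j
    mu = np.zeros((n, n))  # mu[i, j]: mean of the message from i to j
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # Aggregate node potential plus all incoming messages except j's
                P_i = A[i, i] + P[:, i].sum() - P[j, i]
                mu_i = (b[i] + (P[:, i] * mu[:, i]).sum()
                        - P[j, i] * mu[j, i]) / P_i
                # Outgoing message i -> j
                P[i, j] = -A[i, j] ** 2 / P_i
                mu[i, j] = -A[i, j] * mu_i / P[i, j]
    # Marginal means give the solution estimate
    x = np.empty(n)
    for i in range(n):
        P_i = A[i, i] + P[:, i].sum()
        x[i] = (b[i] + (P[:, i] * mu[:, i]).sum()) / P_i
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp(A, b))             # iterative GaBP estimate
print(np.linalg.solve(A, b))  # direct solution for comparison
```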
