
Inconsistent data fidelity scaling between different algorithms #43

Open
andrewwmao opened this issue Oct 6, 2022 · 2 comments

@andrewwmao
Collaborator

After seeing Issue #114 on MRIReco.jl, I noticed that certain algorithms (e.g., ADMM) scale the data fidelity term by 1/2, while others (e.g., FISTA) do not. This creates an undesirable situation where the same lambda gives different results with different algorithms. We have seen this firsthand with our own data.
I would be inclined to fix this just for FISTA by multiplying the Lipschitz constant by 2, but the package may want to settle on a consistent convention across the board.
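
For illustration, here is a minimal, self-contained ISTA sketch (not this package's implementation; the `half` flag and the toy problem are made up for illustration) showing that the two scalings give different solutions for the same λ, and that they coincide once λ is rescaled by the same factor of 2:

```julia
using LinearAlgebra, Random

# Soft-thresholding, the proximal operator of t*||x||_1.
soft(x, t) = sign.(x) .* max.(abs.(x) .- t, 0)

# Plain ISTA on L1-regularized least squares. The hypothetical `half` flag
# toggles between the two data fidelity conventions:
#   half = true:   (1/2)*||Ax - b||^2 + λ*||x||_1   (ADMM-style scaling)
#   half = false:        ||Ax - b||^2 + λ*||x||_1   (FISTA-style scaling)
function ista(A, b, λ; half = true, iters = 1000)
    c = half ? 1.0 : 2.0            # gradient of the data term is c * A' * (A*x - b)
    L = c * opnorm(A)^2             # Lipschitz constant of that gradient
    x = zeros(size(A, 2))
    for _ in 1:iters
        x = soft(x .- (c / L) .* (A' * (A * x .- b)), λ / L)
    end
    return x
end

Random.seed!(0)
A, b, λ = randn(20, 10), randn(20), 0.5

x_half  = ista(A, b, λ; half = true)        # 1/2-scaled data fidelity
x_full  = ista(A, b, λ; half = false)       # unscaled data fidelity
x_equiv = ista(A, b, λ / 2; half = true)    # unscaled problem rewritten with λ/2

println(norm(x_half - x_full))    # clearly nonzero: same λ, different solutions
println(norm(x_full - x_equiv))   # ~0: the conventions differ only by a rescaling of λ
```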

@tknopp
Member

tknopp commented Oct 6, 2022

Yes, absolutely. If you have some time to correct these, it would be much appreciated.

In my opinion, the data fidelity should always be scaled identically. It would probably be easier to remove the 1/2 from ADMM, since Kaczmarz, for instance, also does not have this pre-factor.
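
(Note that dropping the 1/2 amounts to a rescaling of the regularization: minimizing ||Ax - b||^2 + λ R(x) gives the same solution as minimizing (1/2)||Ax - b||^2 + (λ/2) R(x), so λ values tuned for the current ADMM scaling would need to be doubled to reproduce previous results.)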

@andrewwmao
Collaborator Author

I am not familiar enough with the algorithms outside of FISTA & ADMM, but I can make the change to ADMM if that's what will make things more consistent overall.
