
Merge branch 'master' into notimplementederror
MargaretDuff authored Nov 25, 2024
2 parents 2880c3a + 5d569dc commit 6728269
Showing 24 changed files with 387 additions and 190 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -155,7 +155,7 @@ jobs:
echo '${{ secrets.STFC_SSH_KEY }}' > ./key
chmod 600 ./key
ssh -o StrictHostKeyChecking=no -i ./key ${{ secrets.STFC_SSH_HOST }} \
'conda index --bz2 --zst --run-exports --channeldata --rss -n ccpi ${{ secrets.STFC_SSH_CONDA_DIR }}'
'bash -lic "conda index --bz2 --zst --run-exports --channeldata --rss -n ccpi ${{ secrets.STFC_SSH_CONDA_DIR }}"'
docs:
defaults: {run: {shell: 'bash -el {0}', working-directory: docs}}
runs-on: ubuntu-latest
10 changes: 9 additions & 1 deletion CHANGELOG.md
@@ -1,4 +1,12 @@
* 24.x.x
- Bug fixes:
- Fix bug with 'median' and 'mean' methods in Masker averaging over the wrong axes.
- Enhancements:
- Removed multiple exits from numba implementation of KullbackLeibler divergence (#1901)
- Dependencies:
- Added scikit-image to CIL-Demos conda install command as needed for new Callbacks notebook.

* 24.2.0
- New Features:
- Added SVRG and LSVRG stochastic functions (#1625)
- Added SAG and SAGA stochastic functions (#1624)
@@ -17,7 +25,7 @@
- Internal refactor: Separate framework into multiple files (#1692)
- Allow the SIRT algorithm to take `initial=None` (#1906)
- Add checks on equality method of `AcquisitionData` and `ImageData` for equality of data type and geometry (#1919)
  - Add check on equality method of `AcquisitionGeometry` for equality of dimension labels (#1919)
- Testing:
- New unit tests for operators and functions to check for in place errors and the behaviour of `out` (#1805)
- Updates in SPDHG vs PDHG unit test to reduce test time and adjustments to parameters (#1898)
2 changes: 2 additions & 0 deletions NOTICE.txt
@@ -65,8 +65,10 @@ Ashley Gillman (2024) -12
Zeljko Kereta (2024) - 5
Evgueni Ovtchinnikov (2024) -1
Georg Schramm (2024) - 13
Sam Porter (2024) - 5
Joshua Hellier (2024) - 3
Nicholas Whyatt (2024) - 1
Rasmia Kulan (2024) - 1

CIL Advisory Board:
Llion Evans - 9
4 changes: 2 additions & 2 deletions README.md
@@ -21,13 +21,13 @@ We recommend using either [`miniconda`](https://docs.conda.io/projects/miniconda
Install a new environment using:

```sh
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.1.0
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12
```

To install CIL and the additional packages and plugins needed to run the [CIL demos](https://github.com/TomographicImaging/CIL-Demos) install the environment with:

```sh
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.1.0 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets
conda create --name cil -c conda-forge -c https://software.repos.intel.com/python/conda -c ccpi cil=24.2.0 ipp=2021.12 astra-toolbox=*=cuda* tigre ccpi-regulariser tomophantom ipykernel ipywidgets scikit-image
```

where:
32 changes: 16 additions & 16 deletions Wrappers/Python/cil/optimisation/algorithms/SIRT.py
@@ -27,23 +27,22 @@


class SIRT(Algorithm):

r"""Simultaneous Iterative Reconstruction Technique, see :cite:`Kak2001`.
Simultaneous Iterative Reconstruction Technique (SIRT) solves
the following problem
.. math:: A x = b
The SIRT algorithm is
The SIRT update step for iteration :math:`k` is given by
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega * D ( A^{T} ( M * (b - Ax^{k}) ) ) ),
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega D ( A^{T} ( M (b - Ax^{k}) ) ) ),
where,
:math:`M = \frac{1}{A*\mathbb{1}}`,
:math:`M = \frac{1}{A\mathbb{1}}`,
:math:`D = \frac{1}{A^{T}\mathbb{1}}`,
:math:`\mathbb{1}` is a :code:`DataContainer` of ones,
:math:`\mathrm{prox}_{C}` is the projection over a set :math:`C`,
:math:`\mathrm{proj}_{C}` is the projection over a set :math:`C`,
and :math:`\omega` is the relaxation parameter.
Parameters
@@ -63,9 +62,6 @@ class SIRT(Algorithm):
A function with :code:`proximal` method, e.g., :class:`.IndicatorBox` function and :meth:`.IndicatorBox.proximal`,
or :class:`.TotalVariation` function and :meth:`.TotalVariation.proximal`.
kwargs:
Keyword arguments used from the base class :class:`.Algorithm`.
Note
----
If :code:`constraint` is not passed, :code:`lower` and :code:`upper` are used to create an :class:`.IndicatorBox` and apply its :code:`proximal`.
@@ -77,21 +73,22 @@ class SIRT(Algorithm):
The preconditioning arrays (weights) :code:`M` and :code:`D` used in SIRT are defined as
.. math:: M = \frac{1}{A*\mathbb{1}} = \frac{1}{\sum_{j}a_{i,j}}
.. math:: D = \frac{1}{A*\mathbb{1}} = \frac{1}{\sum_{i}a_{i,j}}
.. math:: M = \frac{1}{A\mathbb{1}}
.. math:: D = \frac{1}{A^T\mathbb{1}}
Examples
--------
.. math:: \underset{x}{\mathrm{argmin}} \frac{1}{2}\| x - d\|^{2}
.. math:: \underset{x}{\mathrm{argmin}} \frac{1}{2}\| Ax - d\|^{2}
>>> sirt = SIRT(initial = ig.allocate(0), operator = A, data = d, max_iteration = 5)
>>> sirt = SIRT(initial = ig.allocate(0), operator = A, data = d)
"""


def __init__(self, initial=None, operator=None, data=None, lower=None, upper=None, constraint=None, **kwargs):
"""Constructor of SIRT algorithm"""

super(SIRT, self).__init__(**kwargs)

@@ -140,10 +137,12 @@ def set_up(self, initial, operator, data, lower=None, upper=None, constraint=None):

@property
def relaxation_parameter(self):
"""Get the relaxation parameter :math:`\omega`"""
return self._relaxation_parameter

@property
def D(self):
"""Get the preconditioning array :math:`D`"""
return self._Dscaled / self._relaxation_parameter

def set_relaxation_parameter(self, value=1.0):
@@ -164,6 +163,7 @@ def set_relaxation_parameter(self, value=1.0):


def _set_up_weights(self):
"""Set up the preconditioning arrays M and D"""
self.M = 1./self.operator.direct(self.operator.domain_geometry().allocate(value=1.0))
self._Dscaled = 1./self.operator.adjoint(self.operator.range_geometry().allocate(value=1.0))

@@ -196,9 +196,9 @@ def _remove_nan_or_inf(self, datacontainer, replace_with=1.0):

def update(self):

r""" Performs a single iteration of the SIRT algorithm
r""" Performs a single iteration of the SIRT algorithm. The update step for iteration :math:`k` is given by
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega * D ( A^{T} ( M * (b - Ax) ) ) )
.. math:: x^{k+1} = \mathrm{proj}_{C}( x^{k} + \omega D ( A^{T} ( M (b - Ax^{k}) ) ) )
"""

Expand All @@ -218,7 +218,7 @@ def update(self):
self.x=self.constraint.proximal(self.x, tau=1)

def update_objective(self):
r"""Returns the objective
r""" Appends the current objective value to the list of previous objective values
.. math:: \frac{1}{2}\|A x - b\|^{2}
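To illustrate the update step documented in the docstring above, here is a minimal NumPy sketch of the unconstrained SIRT iteration with a hypothetical dense matrix `A` — a toy illustration only; the actual implementation operates on CIL `Operator`s and `DataContainer`s and additionally applies the projection :math:`\mathrm{proj}_{C}`:

```python
import numpy as np

# One unconstrained SIRT update: x <- x + omega * D * A^T( M * (b - A x) ),
# with M = 1 / (A 1) (inverse row sums) and D = 1 / (A^T 1) (inverse column sums).
def sirt_step(A, b, x, omega=1.0):
    M = 1.0 / (A @ np.ones(A.shape[1]))    # data-space weights
    D = 1.0 / (A.T @ np.ones(A.shape[0]))  # image-space weights
    return x + omega * D * (A.T @ (M * (b - A @ x)))

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true                 # consistent data
x = np.zeros(2)
for _ in range(200):
    x = sirt_step(A, b, x)
print(x)                       # approaches x_true
```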
44 changes: 25 additions & 19 deletions Wrappers/Python/cil/optimisation/functions/KullbackLeibler.py
@@ -343,25 +343,9 @@ def kl_gradient_mask(x, b, out, eta, mask):
 @njit(parallel=True)
 def kl_div(x, y, eta):
     accumulator = numpy.zeros(get_num_threads(), dtype=numpy.float64)
+    has_inf = 0
     for i in prange(x.size):
-        X = x.flat[i]
-        Y = y.flat[i] + eta.flat[i]
-        if X > 0 and Y > 0:
-            # out.flat[i] = X * numpy.log(X/Y) - X + Y
-            accumulator[get_thread_id()] += X * numpy.log(X/Y) - X + Y
-        elif X == 0 and Y >= 0:
-            # out.flat[i] = Y
-            accumulator[get_thread_id()] += Y
-        else:
-            # out.flat[i] = numpy.inf
-            return numpy.inf
-    return sum(accumulator)
-
-@njit(parallel=True)
-def kl_div_mask(x, y, eta, mask):
-    accumulator = numpy.zeros(get_num_threads(), dtype=numpy.float64)
-    for i in prange(x.size):
-        if mask.flat[i] > 0:
+        if has_inf == 0:
             X = x.flat[i]
             Y = y.flat[i] + eta.flat[i]
             if X > 0 and Y > 0:
@@ -372,7 +356,29 @@ def kl_div_mask(x, y, eta, mask):
                 accumulator[get_thread_id()] += Y
             else:
                 # out.flat[i] = numpy.inf
-                return numpy.inf
+                accumulator[get_thread_id()] = numpy.inf
+                has_inf = 1
     return sum(accumulator)
 
+@njit(parallel=True)
+def kl_div_mask(x, y, eta, mask):
+    accumulator = numpy.zeros(get_num_threads(), dtype=numpy.float64)
+    has_inf = 0
+    for i in prange(x.size):
+        if has_inf == 0:
+            if mask.flat[i] > 0:
+                X = x.flat[i]
+                Y = y.flat[i] + eta.flat[i]
+                if X > 0 and Y > 0:
+                    # out.flat[i] = X * numpy.log(X/Y) - X + Y
+                    accumulator[get_thread_id()] += X * numpy.log(X/Y) - X + Y
+                elif X == 0 and Y >= 0:
+                    # out.flat[i] = Y
+                    accumulator[get_thread_id()] += Y
+                else:
+                    # out.flat[i] = numpy.inf
+                    accumulator[get_thread_id()] = numpy.inf
+                    has_inf = 1
+    return sum(accumulator)
+
 # convex conjugate
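As a plain-NumPy cross-check of what the numba kernels above compute (a sketch, not part of the commit): each element either contributes a regular term, a limit term, or makes the divergence infinite. This is also why the single-exit refactor is safe — setting one accumulator slot to `inf` and raising a flag, instead of returning from inside the `prange` loop, still yields an infinite final sum.

```python
import numpy as np

# Serial reference sketch of the same Kullback-Leibler sum.
def kl_div_reference(x, y, eta):
    X = np.asarray(x, dtype=np.float64).ravel()
    Y = np.asarray(y, dtype=np.float64).ravel() + np.asarray(eta, dtype=np.float64).ravel()
    pos = (X > 0) & (Y > 0)      # regular terms: X log(X/Y) - X + Y
    zero = (X == 0) & (Y >= 0)   # limit terms: contribute Y
    if not np.all(pos | zero):   # any other combination is invalid
        return np.inf
    Xp, Yp = X[pos], Y[pos]
    return np.sum(Xp * np.log(Xp / Yp) - Xp + Yp) + np.sum(Y[zero])
```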
Wrappers/Python/cil/optimisation/functions/OperatorCompositionFunction.py
@@ -23,7 +23,7 @@

class OperatorCompositionFunction(Function):

""" Composition of a function with an operator as : :math:`(F \otimes A)(x) = F(Ax)`
""" Composition of a function with an operator as : :math:`(F \circ A)(x) = F(Ax)`
:parameter function: :code:`Function` F
:parameter operator: :code:`Operator` A
@@ -66,9 +66,9 @@ def __call__(self, x):

def gradient(self, x, out=None):

""" Return the gradient of F(Ax),
""" Return the gradient of :math:`F(Ax)`,
..math :: (F(Ax))' = A^{T}F'(Ax)
:math:`(F(Ax))' = A^{T}F'(Ax)`
"""

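A small numerical sketch of the chain rule used here, with a hypothetical `F` and dense `A` (plain NumPy, not CIL API):

```python
import numpy as np

# Chain rule for G(x) = F(Ax):  grad G(x) = A^T F'(Ax).
# Illustrated with F(y) = 0.5 * ||y - b||^2, whose gradient is F'(y) = y - b.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
x = np.array([0.5, 0.5])

grad_F = lambda y: y - b
grad_G = A.T @ grad_F(A @ x)   # = A^T (Ax - b)
print(grad_G)
```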
13 changes: 7 additions & 6 deletions Wrappers/Python/cil/optimisation/functions/SVRGFunction.py
@@ -33,8 +33,8 @@ class SVRGFunction(ApproximateGradientSumFunction):
r"""
The Stochastic Variance Reduced Gradient (SVRG) function calculates the approximate gradient of :math:`\sum_{i=0}^{n-1}f_i`. Every `snapshot_update_interval` iterations, a full gradient is calculated at a "snapshot" point :math:`\tilde{x}`. In between snapshots, the approximate gradient is built by sampling an index :math:`i_k` and evaluating the gradient of :math:`f_{i_k}` at both the current iterate and the snapshot, giving the estimate:
.. math ::
    n*\nabla f_{i_k}(x_k) - n*\nabla f_{i_k}(\tilde{x}) + \nabla \sum_{i=0}^{n-1}f_i(\tilde{x}),
where :math:`\tilde{x}` is the latest "snapshot" point and :math:`x_k` is the value at the current iteration.
@@ -86,7 +86,7 @@ def __init__(self, functions, sampler=None, snapshot_update_interval=None, store
self.snapshot = None

def gradient(self, x, out=None):
""" Selects a random function using the `sampler` and then calls the approximate gradient at :code:`x` or calculates a full gradient depending on the update frequency
r""" Selects a random function using the `sampler` and then calls the approximate gradient at :code:`x` or calculates a full gradient depending on the update frequency
Parameters
----------
@@ -115,9 +115,10 @@ def gradient(self, x, out=None):
return self.approximate_gradient(x, self.function_num, out=out)

def approximate_gradient(self, x, function_num, out=None):
""" Calculates the stochastic gradient at the point :math:`x` by using the gradient of the selected function, indexed by :math:`i_k`, the `function_number` in {0,...,len(functions)-1}, and the full gradient at the snapshot :math:`\tilde{x}`
.. math ::
n*\nabla f_{i_k}(x_k) - n*\nabla f_{i_k}(\tilde{x}) + \nabla \sum_{i=0}^{n-1}f_i(\tilde{x})
r""" Calculates the stochastic gradient at the point :math:`x` by using the gradient of the selected function, indexed by :math:`i_k`, the `function_number` in {0,...,len(functions)-1}, and the full gradient at the snapshot :math:`\tilde{x}`
.. math ::
n*\nabla f_{i_k}(x_k) - n*\nabla f_{i_k}(\tilde{x}) + \nabla \sum_{i=0}^{n-1}f_i(\tilde{x})
Note
-----
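A toy sketch of the estimator in the docstring above, with hypothetical names (the real class manages the sampler, snapshot, and full gradient internally). In expectation over a uniformly sampled :math:`i_k` the estimate equals the full gradient at :math:`x_k`, while its variance shrinks as both points approach the solution:

```python
# SVRG gradient estimate: n * grad_i(x) - n * grad_i(x_snap) + full_grad(x_snap).
# `grads` is a list of per-sample gradient callables; `full_grad_snap` is the
# full gradient precomputed at the snapshot point x_snap.
def svrg_gradient(grads, i_k, x, x_snap, full_grad_snap):
    n = len(grads)
    return n * grads[i_k](x) - n * grads[i_k](x_snap) + full_grad_snap
```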
49 changes: 42 additions & 7 deletions Wrappers/Python/cil/optimisation/operators/IdentityOperator.py
@@ -23,11 +23,17 @@

class IdentityOperator(LinearOperator):

'''IdentityOperator: Id: X -> Y, Id(x) = x\in Y
r''' `IdentityOperator`: :math:`\mathrm{Id}: X \rightarrow Y`, :math:`\mathrm{Id}(x) = x`
X : gm_domain
Y : gm_range ( Default: Y = X )
:math:`X` : domain
:math:`Y` : range ( Default: :math:`Y = X` )
Parameters
----------
domain_geometry: CIL Geometry
domain of the operator
range_geometry: CIL Geometry, optional
range of the operator, default: same as domain
'''


@@ -42,7 +48,21 @@ def __init__(self, domain_geometry, range_geometry=None):

def direct(self,x,out=None):

'''Returns Id(x)'''
r'''Returns the input data :math:`x`
Parameters
----------
x : DataContainer or BlockDataContainer
Input data
out : DataContainer or BlockDataContainer, optional
If out is not None the output of the Operator will be filled in out, otherwise a new object is instantiated and returned. The default is None.
Returns
-------
DataContainer or BlockDataContainer
:math:`\mathrm{Id}(x) = x`
'''

if out is None:
return x.copy()
@@ -51,8 +71,21 @@ def adjoint(self,x, out=None):
return out

def adjoint(self,x, out=None):

'''Returns Id(x)'''
r'''Returns the input data, :math:`x`
Parameters
----------
x : DataContainer or BlockDataContainer
Input data
out : DataContainer or BlockDataContainer, optional
If out is not None the output of the Operator will be filled in out, otherwise a new object is instantiated and returned. The default is None.
Returns
-------
DataContainer or BlockDataContainer
:math:`\mathrm{Id}^*(x)=x`
'''


if out is None:
@@ -63,7 +96,7 @@ def calculate_norm(self, **kwargs):

def calculate_norm(self, **kwargs):

'''Evaluates operator norm of IdentityOperator'''
'''Evaluates operator norm of `IdentityOperator`'''

return 1.0

@@ -85,8 +118,10 @@ def sum_abs_col(self):

def is_orthogonal(self):
'''Returns whether the operator is orthogonal
Returns
-------
`Bool`
Always returns `True` for `IdentityOperator`
'''
return True
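A quick usage sketch of the documented behaviour (assumes a working CIL install; the geometry size is arbitrary): `direct` and `adjoint` both return the input, and the operator norm is exactly 1.

```python
import numpy as np
from cil.framework import ImageGeometry
from cil.optimisation.operators import IdentityOperator

ig = ImageGeometry(voxel_num_x=4, voxel_num_y=4)
Id = IdentityOperator(ig)
x = ig.allocate('random')

assert np.array_equal(Id.direct(x).as_array(), x.as_array())
assert np.array_equal(Id.adjoint(x).as_array(), x.as_array())
assert Id.norm() == 1.0
```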
