fix imports #1736

Merged · 4 commits · Feb 29, 2024
5 changes: 2 additions & 3 deletions CHANGELOG.md
@@ -1,4 +1,3 @@

* x.x.x
- Set CMake Policy CMP0148 to OLD to avoid warnings in CMake 3.27
- AcquisitionGeometry prints the first and last 10 angles, or all angles if there are 30 or fewer, rather than the first 20
@@ -34,6 +33,7 @@
- New unit tests have been implemented for operators and functions to check for in place errors and the behaviour of `out`.
- Bug fix for missing factor of 1/2 in SIRT update objective and catch in place errors in the SIRT constraint
- Allow Masker to take integer arrays in addition to boolean
- Improved import error/warning messages

* 23.1.0
- Fix bug in IndicatorBox proximal_conjugate
@@ -48,7 +48,6 @@
- Added warmstart capability to proximal evaluation of the CIL TotalVariation function.
- Bug fix in the LinearOperator norm with an additional flag for the algorithm linearOperator.PowerMethod
- Tidied up documentation in the framework folder


* 23.0.1
- Fix bug with NikonReader requiring ROI to be set in constructor.
@@ -151,7 +150,7 @@
- Fixed PowerMethod for square/non-square, complex/float matrices with stopping criterion.
- CofR image_sharpness improved for large datasets
- Geometry alignment fix for 2D datasets
- CGLS update for sapyb to enable complex data, bugfix in use of initial
- added sapyb and deprecated axpby. All algorithms updated to use sapyb.
- Allow use of square brackets in file paths to TIFF and Nikon datasets

@@ -19,11 +19,8 @@
try:
from ccpi.filters import regularisers
from ccpi.filters.cpu_regularisers import TV_ENERGY
except ImportError as ie:
raise ImportError(ie , "\n\n",
"This plugin requires the additional package ccpi-regularisation\n" +
"Please install it via conda as ccpi-regulariser from the ccpi channel\n"+
"Minimal version is 20.04")
except ImportError as exc:
raise ImportError('Please `conda install "ccpi::ccpi-regulariser>=20.04"`') from exc
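The tightened guard above is an instance of the usual optional-dependency pattern: attempt the import, then re-raise with an install hint chained to the original error. A minimal generic sketch (the `require` helper and the module names are illustrative, not part of CIL):

```python
import importlib


def require(module_name, hint):
    # Import an optional dependency; on failure, re-raise with an
    # install hint while chaining the original ImportError via `from`.
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(hint) from exc
```

A caller such as `require("ccpi.filters", 'conda install "ccpi::ccpi-regulariser>=20.04"')` would then fail with the hint while keeping the original traceback reachable through `__cause__`.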


from cil.framework import DataOrder
@@ -35,11 +32,11 @@

class RegulariserFunction(Function):
def proximal(self, x, tau, out=None):

r""" Generic proximal method for a RegulariserFunction

.. math:: \mathrm{prox}_{\tau f}(x) := \underset{z}{\mathrm{argmin}} \, \tau f(z) + \frac{1}{2}\|z - x \|^{2}

Parameters
----------

@@ -51,8 +48,8 @@ def proximal(self, x, tau, out=None):
Output :class:`DataContainer` in which the result is placed.

Note
----

If the :class:`ImageData` contains complex data, rather than the default `float32`, the regularisation
is run independently on the real and imaginary part.
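The complex-data behaviour described in this note can be sketched generically: apply a real-valued proximal operator to the real and imaginary parts separately, then recombine. Soft-thresholding below is only a stand-in for a regulariser prox, and `prox_complex` is a hypothetical helper, not CIL API:

```python
import numpy as np


def soft_threshold(x, tau):
    # Closed-form prox of tau*||x||_1; a simple stand-in for a regulariser prox.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def prox_complex(x, tau, prox=soft_threshold):
    # For complex input, run the real-valued prox independently on the
    # real and imaginary parts, as the Note above describes.
    if np.iscomplexobj(x):
        return prox(x.real, tau) + 1j * prox(x.imag, tau)
    return prox(x, tau)
```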

Expand Down Expand Up @@ -99,16 +96,16 @@ class TV_Base(RegulariserFunction):

Parameters
----------
strong_convexity_constant : Number
Positive parameter that allows the Total variation regulariser to be strongly convex. Default = 0.

Note
----

By definition, Total variation is a convex function. Adding a strongly convex term makes it a strongly convex function.
We then say that `TV` is a :math:`\gamma>0` strongly convex function, i.e.,

.. math:: TV(u) = \alpha \|\nabla u\|_{2,1} + \frac{\gamma}{2}\|u\|^{2}

@@ -126,7 +123,7 @@ def __call__(self,x):
else:
return 0.5*EnergyValTV[0]

def convex_conjugate(self,x):
return 0.0


@@ -137,16 +134,16 @@ class FGP_TV(TV_Base):
The :class:`FGP_TV` computes the proximal operator of the Total variation regulariser

.. math:: \mathrm{prox}_{\tau (\alpha TV)}(x) = \underset{z}{\mathrm{argmin}} \,\tau\alpha\,\mathrm{TV}(z) + \frac{1}{2}\|z - x\|^{2} .

The algorithm used for the proximal operator of TV is the Fast Gradient Projection algorithm
applied to the _dual problem_ of the above problem, see :cite:`BeckTeboulle_b`, :cite:`BeckTeboulle_a`.


Parameters
----------

alpha : :obj:`Number` (positive), default = 1.0 .
Total variation regularisation parameter.

max_iteration : :obj:`int`. Default = 100 .
Maximum number of iterations for the Fast Gradient Projection algorithm.
@@ -163,19 +160,19 @@ class FGP_TV(TV_Base):

tolerance : :obj:`float`, Default = 0 .
Stopping criterion for the FGP algorithm.

.. math:: \|x^{k+1} - x^{k}\|_{2} < \mathrm{tolerance}

device : :obj:`str`, Default = 'cpu' .
FGP_TV algorithm runs on `cpu` or `gpu`.

strong_convexity_constant : :obj:`float`, default = 0
A strongly convex term weighted by the :code:`strong_convexity_constant` (:math:`\gamma`) parameter is added to the Total variation.
Now the :code:`TotalVariation` function is :math:`\gamma` - strongly convex and the proximal operator is

.. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2\tau}\|u - b\|^{2} + \mathrm{TV}(u) + \frac{\gamma}{2}\|u\|^{2} \Leftrightarrow

.. math:: \underset{u}{\mathrm{argmin}} \frac{1}{2\frac{\tau}{1+\gamma\tau}}\|u - \frac{b}{1+\gamma\tau}\|^{2} + \mathrm{TV}(u)
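The equivalence above — absorbing the strongly convex term by rescaling the input and the step size by :math:`1+\gamma\tau` — can be checked numerically with any 1D prox known in closed form. Soft-thresholding (the prox of :math:`\tau|u|`) stands in for the TV prox here; `prox_strongly_convex` is an illustrative helper, not CIL API:

```python
import numpy as np


def soft(b, t):
    # Closed-form prox of t*|u| (soft-thresholding); a stand-in for the TV prox.
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)


def prox_strongly_convex(b, tau, gamma, prox=soft):
    # Rescaling trick from the formulas above: the prox of
    # tau*(f + gamma/2 ||.||^2) equals the prox of f alone,
    # evaluated at b/(1+gamma*tau) with step tau/(1+gamma*tau).
    scale = 1.0 + gamma * tau
    return prox(b / scale, tau / scale)
```

This mirrors the division by `1 + tau * strong_convexity_constant` performed in `proximal_numpy` below.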


Examples
@@ -195,8 +192,8 @@ class FGP_TV(TV_Base):

>>> G1 = (alpha/ig.voxel_size_x) * FGP_TV(max_iteration=100, device='gpu')
>>> G2 = alpha * TotalVariation(max_iteration=100, lower=0.)


See Also
--------
:class:`~cil.optimisation.functions.TotalVariation`
@@ -206,7 +203,7 @@ class FGP_TV(TV_Base):


def __init__(self, alpha=1, max_iteration=100, tolerance=0, isotropic=True, nonnegativity=True, device='cpu', strong_convexity_constant=0):

if isotropic == True:
self.methodTV = 0
else:
@@ -221,16 +218,16 @@ def __init__(self, alpha=1, max_iteration=100, tolerance=0, isotropic=True, nonn
self.max_iteration = max_iteration
self.tolerance = tolerance
self.nonnegativity = nonnegativity
self.device = device

super(FGP_TV, self).__init__(strong_convexity_constant=strong_convexity_constant)

def _fista_on_dual_rof(self, in_arr, tau):
r""" Implements the Fast Gradient Projection algorithm on the dual problem
of the Total Variation Denoising problem (ROF).
"""

res , info = regularisers.FGP_TV(\
in_arr,\
@@ -250,18 +247,18 @@ def proximal_numpy(self, in_arr, tau):
strongly_convex_factor = (1 + tau * self.strong_convexity_constant)
in_arr /= strongly_convex_factor
tau /= strongly_convex_factor

solution = self._fista_on_dual_rof(in_arr, tau)

if self.strong_convexity_constant>0:
in_arr *= strongly_convex_factor
tau *= strongly_convex_factor

return solution

def __rmul__(self, scalar):
'''Define the multiplication with a scalar

this changes the regularisation parameter in the plugin'''
if not isinstance (scalar, Number):
raise NotImplementedError
@@ -271,11 +268,11 @@ def __rmul__(self, scalar):
def check_input(self, input):
if len(input.shape) > 3:
raise ValueError('{} cannot work on more than 3D. Got {}'.format(self.__class__.__name__, input.geometry.length))

class TGV(RegulariserFunction):

def __init__(self, alpha=1, gamma=1, max_iteration=100, tolerance=0, device='cpu' , **kwargs):
'''Creator of Total Generalised Variation Function

:param alpha: regularisation parameter
:type alpha: number, default 1
@@ -287,9 +284,9 @@ def __init__(self, alpha=1, gamma=1, max_iteration=100, tolerance=0, device='cpu
:type tolerance: float, default 0
:param device: determines if the code runs on CPU or GPU
:type device: string, default 'cpu', can be 'gpu' if GPU is installed

'''

self.alpha = alpha
self.gamma = gamma
self.max_iteration = max_iteration
Expand All @@ -299,7 +296,7 @@ def __init__(self, alpha=1, gamma=1, max_iteration=100, tolerance=0, device='cpu
if kwargs.get('iter_TGV', None) is not None:
# raise ValueError('iter_TGV parameter has been superseded by num_iter. Use that instead.')
self.num_iter = kwargs.get('iter_TGV')

def __call__(self,x):
warnings.warn("{}: the __call__ method is not implemented. Returning NaN.".format(self.__class__.__name__))
return np.nan
@@ -316,7 +313,7 @@ def alpha2(self):
@property
def alpha1(self):
return 1.

def proximal_numpy(self, in_arr, tau):
res , info = regularisers.TGV(in_arr,
self.alpha * tau,
@@ -326,19 +323,19 @@ def proximal_numpy(self, in_arr, tau):
self.LipshitzConstant,
self.tolerance,
self.device)

# info: return number of iteration and reached tolerance
# https://github.com/vais-ral/CCPi-Regularisation-Toolkit/blob/master/src/Core/regularisers_CPU/TGV_core.c#L168
# Stopping Criteria || u^k - u^(k-1) ||_{2} / || u^{k} ||_{2}
return res, info

def convex_conjugate(self, x):
warnings.warn("{}: the convex_conjugate method is not implemented. Returning NaN.".format(self.__class__.__name__))
return np.nan

def __rmul__(self, scalar):
'''Define the multiplication with a scalar

this changes the regularisation parameter in the plugin'''
if not isinstance (scalar, Number):
raise NotImplementedError
@@ -356,7 +353,7 @@ def check_input(self, input):
self.LipshitzConstant = 16 # Vaggelis to confirm
else:
raise ValueError('{} cannot work on more than 3D. Got {}'.format(self.__class__.__name__, input.geometry.length))


class FGP_dTV(RegulariserFunction):
'''Creator of FGP_dTV Function
@@ -397,7 +394,7 @@ def __init__(self, reference, alpha=1, max_iteration=100,
self.device = device # string for 'cpu' or 'gpu'
self.reference = np.asarray(reference.as_array(), dtype=np.float32)
self.eta = eta

def __call__(self,x):
warnings.warn("{}: the __call__ method is not implemented. Returning NaN.".format(self.__class__.__name__))
return np.nan
@@ -418,10 +415,10 @@ def proximal_numpy(self, in_arr, tau):
def convex_conjugate(self, x):
warnings.warn("{}: the convex_conjugate method is not implemented. Returning NaN.".format(self.__class__.__name__))
return np.nan

def __rmul__(self, scalar):
'''Define the multiplication with a scalar

this changes the regularisation parameter in the plugin'''
if not isinstance (scalar, Number):
raise NotImplementedError
@@ -434,7 +431,7 @@ def check_input(self, input):
raise ValueError('{} cannot work on more than 3D. Got {}'.format(self.__class__.__name__, input.geometry.length))

class TNV(RegulariserFunction):

def __init__(self,alpha=1, max_iteration=100, tolerance=0):
'''Creator of TNV Function

@@ -449,16 +446,16 @@ def __init__(self,alpha=1, max_iteration=100, tolerance=0):
self.alpha = alpha
self.max_iteration = max_iteration
self.tolerance = tolerance

def __call__(self,x):
warnings.warn("{}: the __call__ method is not implemented. Returning NaN.".format(self.__class__.__name__))
return np.nan

def proximal_numpy(self, in_arr, tau):
# remove any dimension of size 1
in_arr = np.squeeze(in_arr)
res = regularisers.TNV(in_arr,
self.alpha * tau,
self.max_iteration,
self.tolerance)
@@ -470,7 +467,7 @@ def convex_conjugate(self, x):

def __rmul__(self, scalar):
'''Define the multiplication with a scalar

this changes the regularisation parameter in the plugin'''
if not isinstance (scalar, Number):
raise NotImplemented
@@ -489,5 +486,3 @@ def check_input(self, input):
# discard any dimension of size 1
if sum(1 for i in input.shape if i!=1) != 3:
raise ValueError('TNV requires 3D data (with channel as first axis). Got {}'.format(input.shape))
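The squeeze-then-count rule used by TNV can be isolated as a small sketch (the `check_tnv_input` name is illustrative, not CIL API): after discarding singleton axes, exactly three dimensions must remain.

```python
def check_tnv_input(shape):
    # Count non-singleton axes; TNV needs exactly 3 (channel first),
    # matching the ValueError raised in check_input above.
    if sum(1 for s in shape if s != 1) != 3:
        raise ValueError(
            'TNV requires 3D data (with channel as first axis). Got {}'.format(shape))
```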

