
Release 0.8 #473

Merged: 44 commits, Feb 5, 2024

Commits
d886cf2
Use primary context when setting up pycuda-related tests (#468)
ptim0626 Jan 24, 2023
44e5055
CuPy backend (#469)
daurer Jan 24, 2023
df07370
bump version to 0.8
daurer Jan 24, 2023
8b416ae
Improved GitHub Actions pipeline
daurer Jan 20, 2023
14dba28
Change installation instructions for GPU (#437)
Nordicus Jan 22, 2023
5d3e4b9
Merge pull request #470 from ptycho/patch-install-and-actions
daurer Feb 7, 2023
89bb8c0
instantiate jupyter client only the first time (#478)
daurer Feb 10, 2023
a7402a7
patch to make numpy FFT C-contiguous and allow user to choose FFT typ…
daurer Feb 10, 2023
84641e9
update release notes for 0.7.1
daurer Feb 10, 2023
3f52247
Merge branch 'hotfixes' into dev
daurer Feb 10, 2023
1bb8397
non-threaded autoplotting (Jupyter) should only be on when autoplot i…
daurer Feb 15, 2023
1c9d832
SimSacn: reset diff storage to zeros (#479)
daurer Feb 15, 2023
4d46f5f
Load data during creation with SwmrLoader class (#428)
jsouter Feb 22, 2023
20d02ea
interactive plotting: move Ipython dependency into jupyter client (#482)
daurer Mar 1, 2023
6ecaaa3
Python 3.11 compatibility (#489)
daurer Jun 22, 2023
fd52d7c
Changes in numpy 1.25 (#492)
daurer Jul 19, 2023
cc0932b
Properly clean up accelerated ML engines to allow chaining (#491)
daurer Jul 19, 2023
6cad7ce
Add Euclidean noise model for ML (#486)
jfowkes Jul 19, 2023
f6a3376
Count CPU number as usable by current process but not whole system (#…
ptim0626 Jul 19, 2023
771cfb6
fix indentation for benchmarks
daurer Aug 10, 2023
94f3b83
dump numbers for benchmarks
daurer Aug 10, 2023
148239f
Update CONTRIB.rst (#508)
daurer Oct 12, 2023
c544fef
Updated docstrings which are missing choices (#507)
thomas-milburn Oct 12, 2023
4b88db0
Add new kind to save all used params into .ptyr (#501)
daurer Oct 27, 2023
d6da828
if refined positions are saved, they are also saved to dump files (#509)
kahntm Oct 27, 2023
2ab0bb4
[WIP] Make it easier to do parameter sweeps (#506)
daurer Nov 1, 2023
7f41f07
increase range for rebinning (#514)
daurer Dec 15, 2023
1ade54b
Remove legacy scipy.fftpack (#516)
ptim0626 Dec 15, 2023
ae7cff6
Starting point for implementation of multislice ePIE (#500)
daurer Dec 15, 2023
b2cf5d8
Reset parallel.loadmanager after a test to ensure same subdividing of…
ptim0626 Dec 15, 2023
82ff2b2
Cast mask to float32 to avoid precision issue (#515)
ptim0626 Dec 15, 2023
4e34111
Raise megapixel limit to 500 (#513)
daurer Dec 18, 2023
c78a84d
Retain context and device memory pool among pycuda engines (#520)
ptim0626 Jan 29, 2024
2b61f94
use latest version of checkout and setup-python (#529)
daurer Feb 1, 2024
cc929c5
Modifications to the Diamond SWMRLoader (#528)
daurer Feb 1, 2024
f8f9750
streaming loader for diamond (#502)
daurer Feb 1, 2024
e3075c1
add exit buffer to copied state (#530)
daurer Feb 1, 2024
be94818
fixed imports in threepie moonflower example (#531)
daurer Feb 1, 2024
2fa9f5a
remove NCCL capability from pycuda, use cupy engines instead (#524)
daurer Feb 2, 2024
cb888b7
Add WASP reconstruction engine (#522)
ptim0626 Feb 2, 2024
db8b26c
Add new option to provide arbitrary order of frames (#526)
daurer Feb 2, 2024
a7b635d
Changes to Cupy backend (#483)
daurer Feb 2, 2024
a0a4b7e
Release notes for 0.8 (#532)
daurer Feb 2, 2024
1798bd1
Remove old argument in WASP pycuda because of #520 (#533)
ptim0626 Feb 2, 2024
21 changes: 14 additions & 7 deletions .github/workflows/test.yml
@@ -1,4 +1,4 @@
name: ptypy tests
name: Tests

on:
# Trigger the workflow on push or pull request,
@@ -10,6 +10,7 @@ on:
branches:
- master
- dev
- hotfixes
# Also trigger on page_build, as well as release created events
page_build:
release:
@@ -20,26 +21,32 @@ jobs:
build-linux:
runs-on: ubuntu-latest
strategy:
max-parallel: 5
max-parallel: 10
fail-fast: false
matrix:
python-version: ['3.7','3.8','3.9','3.10']

python-version: ['3.8','3.9','3.10', '3.11']
name: Testing with Python ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v3
- name: Checkout
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Add conda to system path
run: |
# $CONDA is an environment variable pointing to the root of the miniconda directory
echo $CONDA/bin >> $GITHUB_PATH
conda --version
- name: Install dependencies
run: |
# replace python version in core dependencies
sed -i 's/python/python=${{ matrix.python-version }}/' dependencies_core.yml
conda env update --file dependencies_core.yml --name base
conda list
- name: Prepare ptypy
run: |
# Dry install to create ptypy/version.py
# Install ptypy
pip install .
- name: Lint with flake8
run: |
1 change: 1 addition & 0 deletions .gitignore
@@ -28,3 +28,4 @@ ghostdriver*
.DS_Store
.ipynb_checkpoints
.clang-format
pip-wheel-metadata/
30 changes: 12 additions & 18 deletions CONTRIB.rst
@@ -26,46 +26,40 @@ Please ensure you satisfy most of PEP8_ recommendations. We are not dogmatic abo
Testing
^^^^^^^

Not much testing exists at the time of writing this document, but we are aware that this is something that should change. If you want to contribute code, it would be very good practice to also submit related tests.
All tests are in the ``/test/`` folder and our CI pipeline runs these tests for every commit. Please note that tests which require GPUs are disabled in the CI pipeline. Make sure to supply tests for new code or drastic changes to the existing code base. Smaller commits or bug fixes don't require an extra test.

Branches
^^^^^^^^

We are following the `Gitflow <https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow>`_ development model, where a development branch (``dev``) is merged into the master branch for every release. Individual features are developed on topic branches from the development branch and squash-merged back into it when the feature is mature.

The important permanent branches are:
- ``master``: the current cutting-edge but functional package.
- ``stable``: the latest release, recommended for production use.
- ``target``: target for a next release. This branch should stay up-to-date with ``master``, and contain planned updates that will break compatibility with the current version.
- other thematic and temporary branches will appear and disappear as new ideas are tried out and merged in.
- ``master``: (protected) the current release plus bugfixes / hotpatches.
- ``dev``: (protected) the current branch for all development. Features are branched off this branch and merged back into it upon completion.


Development cycle
^^^^^^^^^^^^^^^^^

There has been only two releases of the code up to now, so what we can tell about the *normal development cycle* for |ptypy| is rather limited. However the plan is as follows:
- Normal development usually happens on thematic branches. These branches are merged back to master when it is clear that (1) the feature is sufficiently debugged and tested and (2) no current functionality will break.
- At regular interval admins will decide to freeze the development for a new stable release. During this period, development will be allowed only on feature branches but master will accept only bug fixes. Once the stable release is done, development will continue.
|ptypy| does not follow a rigid release schedule. Releases are prepared for major events or when a set of features has reached maturity.

- Normal development usually happens on thematic branches from the ``dev`` branch. These branches are merged back to ``dev`` when it is clear that (1) the feature is sufficiently debugged and tested and (2) no current functionality will break.
- For a release, the ``dev`` branch is merged into ``master`` and the merge commit is tagged as a release.


3. Pull requests
----------------

Most likely you are a member of the |ptypy| team, which gives you access to the full repository but no right to commit changes directly. The proper way to contribute is through *pull requests*. You can read about how this is done in GitHub's `pull requests tutorial`_.

Pull requests can be made against one of the feature branches, or against ``target`` or ``master``. In the latter cases, if your changes are deemed a bit too substantial, the first thing we will do is create a feature branch for your commits, and we will let it live for a little while, making sure that it is all fine. We will then merge it onto ``master`` (or ``target``).

In principle bug fixes can be requested on the ``stable`` branch.

3. Direct commits
-----------------

If you are one of our power-users (or power-developers), you can be given rights to commit directly to |ptypy|. This makes things much simpler of course, but with great power comes great responsibility.
Pull requests shall be made against one of the feature branches, or against ``dev`` or ``master``. For PRs against ``master`` we will only accept bug fixes or smaller changes. Every other PR should be made against ``dev``. Your PR will be reviewed and discussed amongst the core developer team. The more you touch core libraries, the more scrutiny your PR will face. However, we created two folders in the main source folder where you have more freedom to try out things. For example, if you want to provide a new reconstruction engine, place it into the ``custom/`` folder. A new ``PtyScan`` subclass that prepares data from your experiment is best placed in the ``experiment/`` folder.

To make sure that things are done cleanly, we encourage all the core developers to create thematic remote branches instead of committing always onto master. Merging these thematic branches will be done as a collective decision during one of the regular admin meetings.
If you develop a new feature on a topic branch, it is your responsibility to keep it current with the ``dev`` branch to avoid merge conflicts.
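The topic-branch workflow described above can be sketched as a short command sequence (the branch name ``feature/my-engine`` is an example, not a project convention):

```shell
# Branch a feature off dev, keep it current, then PR it back into dev.
git checkout dev
git pull origin dev
git checkout -b feature/my-engine
# ...commit your work...
# Periodically sync with dev to avoid merge conflicts later:
git fetch origin
git merge origin/dev
# When the feature is mature, open a pull request against dev.
```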


.. |ptypy| replace:: PtyPy


.. _PEP8: https://www.python.org/dev/peps/pep-0008/

.. _`pull requests tutorial`: https://help.github.com/articles/using-pull-requests/
.. _`pull requests tutorial`: https://help.github.com/articles/using-pull-requests/
1 change: 1 addition & 0 deletions archive/cuda_extension/engines/DM_gpu.py
@@ -57,6 +57,7 @@ class DMGpu(DMNpy):
default = 'linear'
type = str
help = Subpixel interpolation; 'fourier','linear' or None for no interpolation
choices = ['fourier','linear',None]

[update_object_first]
default = True
1 change: 1 addition & 0 deletions archive/cuda_extension/engines/DM_npy.py
@@ -55,6 +55,7 @@ class DMNpy(DM):
default = 'linear'
type = str
help = Subpixel interpolation; 'fourier','linear' or None for no interpolation
choices = ['fourier','linear',None]

[update_object_first]
default = True
2 changes: 1 addition & 1 deletion archive/cuda_extension/python/gpu_extension.pyx
@@ -153,7 +153,7 @@ def abs2(input):
cdef np.float32_t [:,:,::1] cout_3c
cdef np.float64_t [:,::1] cout_d2c
cdef np.float64_t [:,:,::1] cout_d3c
cdef int n = np.product(cin.shape)
cdef int n = np.prod(cin.shape)

cdef np.float32_t [:, ::1] cin_f2c
cdef np.complex64_t [:, ::1] cin_c2c
1 change: 1 addition & 0 deletions archive/engines/DM.py
@@ -55,6 +55,7 @@ class DM(PositionCorrectionEngine):
default = 'linear'
type = str
help = Subpixel interpolation; 'fourier','linear' or None for no interpolation
choices = ['fourier','linear',None]

[update_object_first]
default = True
1 change: 1 addition & 0 deletions benchmark/diamond_benchmarks/moonflower_scripts/i08.py
@@ -28,6 +28,7 @@
p.io.autoplot = u.Param(active=False)
p.io.interaction = u.Param()
p.io.interaction.server = u.Param(active=False)
p.io.benchmark = "all"

# max 200 frames (128x128px) of diffraction data
p.scans = u.Param()
1 change: 1 addition & 0 deletions benchmark/diamond_benchmarks/moonflower_scripts/i13.py
@@ -28,6 +28,7 @@
p.io.autoplot = u.Param(active=False)
p.io.interaction = u.Param()
p.io.interaction.server = u.Param(active=False)
p.io.benchmark = "all"

# max 200 frames (128x128px) of diffraction data
p.scans = u.Param()
1 change: 1 addition & 0 deletions benchmark/diamond_benchmarks/moonflower_scripts/i14_1.py
@@ -28,6 +28,7 @@
p.io.autoplot = u.Param(active=False)
p.io.interaction = u.Param()
p.io.interaction.server = u.Param(active=False)
p.io.benchmark = "all"

# max 200 frames (128x128px) of diffraction data
p.scans = u.Param()
1 change: 1 addition & 0 deletions benchmark/diamond_benchmarks/moonflower_scripts/i14_2.py
@@ -29,6 +29,7 @@
p.io.autoplot = u.Param(active=False)
p.io.interaction = u.Param()
p.io.interaction.server = u.Param(active=False)
p.io.benchmark = "all"

# max 200 frames (128x128px) of diffraction data
p.scans = u.Param()
4 changes: 2 additions & 2 deletions benchmark/mpi_allreduce_speed.py
@@ -11,7 +11,7 @@
}

def run_benchmark(shape):
megabytes = np.product(shape) * 8 / 1024 / 1024 * 2
megabytes = np.prod(shape) * 8 / 1024 / 1024 * 2

data = np.zeros(shape, dtype=np.complex64)

@@ -39,4 +39,4 @@ def run_benchmark(shape):
print('Final results for {} processes'.format(parallel.size))
print(','.join(['Name', 'Duration', 'MB', 'MB/s']))
for r in res:
print(','.join([str(x) for x in r]))
print(','.join([str(x) for x in r]))
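Several hunks in this PR (here and in ``gpu_extension.pyx``) replace ``np.product``, an alias deprecated in NumPy 1.25, with ``np.prod``; the benchmark's size arithmetic is unchanged. For illustration:

```python
import numpy as np

shape = (100, 256, 256)
n_elements = np.prod(shape)  # np.prod replaces the deprecated np.product
# complex64 = 8 bytes per element; factor 2 as in run_benchmark above
megabytes = n_elements * 8 / 1024 / 1024 * 2
print(n_elements, megabytes)  # 6553600 100.0
```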
2 changes: 1 addition & 1 deletion cufft/dependencies.yml
@@ -2,7 +2,7 @@ name: ptypy_cufft
channels:
- conda-forge
dependencies:
- python=3.9
- python
- cmake>=3.8.0
- pybind11
- compilers
2 changes: 1 addition & 1 deletion cufft/extensions.py
@@ -4,7 +4,6 @@
import os, re
import subprocess
import sysconfig
import pybind11
from distutils.unixccompiler import UnixCCompiler
from distutils.command.build_ext import build_ext

@@ -98,6 +97,7 @@ def __init__(self, *args, **kwargs):
self.LD_FLAGS = [archflag, "-lcufft_static", "-lculibos", "-ldl", "-lrt", "-lpthread", "-cudart shared"]
self.NVCC_FLAGS = ["-dc", archflag]
self.CXXFLAGS = ['"-fPIC"']
import pybind11
pybind_includes = [pybind11.get_include(), sysconfig.get_path('include')]
INCLUDES = pybind_includes + [self.CUDA['lib64'], module_dir]
self.INCLUDES = ["-I%s" % ix for ix in INCLUDES]
1 change: 1 addition & 0 deletions cufft/setup.py
@@ -39,6 +39,7 @@
description='Extension of CuFFT to include pre- and post-filters using callbacks',
packages=package_list,
ext_modules=ext_modules,
install_requires=["pybind11"],
cmdclass=cmdclass
)

4 changes: 1 addition & 3 deletions dependencies_core.yml
@@ -1,8 +1,6 @@
name: ptypy_core
channels:
- conda-forge
dependencies:
- python=3.9
- python
- numpy
- scipy
- h5py
2 changes: 1 addition & 1 deletion dependencies_dev.yml
@@ -2,7 +2,7 @@ name: ptypy_full
channels:
- conda-forge
dependencies:
- python=3.9
- python
- numpy
- scipy
- matplotlib
2 changes: 1 addition & 1 deletion dependencies_full.yml
@@ -2,7 +2,7 @@ name: ptypy_full
channels:
- conda-forge
dependencies:
- python=3.9
- python
- numpy
- scipy
- matplotlib
2 changes: 1 addition & 1 deletion doc/rst_templates/getting_started.tmp
@@ -96,7 +96,7 @@ GPUs based on our own kernels and the
Install the dependencies for this version like so.
::

$ conda env create -f accelerate/cuda_pycuda/dependencies.yml
$ conda env create -f ptypy/accelerate/cuda_pycuda/dependencies.yml
$ conda activate ptypy_pycuda
(ptypy_pycuda)$ pip install .

7 changes: 6 additions & 1 deletion ptypy/__init__.py
@@ -78,11 +78,16 @@

# Convenience loader for GPU engines
def load_gpu_engines(arch='cuda'):
if arch=='cuda':
if arch in ['cuda', 'pycuda']:
from .accelerate.cuda_pycuda.engines import projectional_pycuda
from .accelerate.cuda_pycuda.engines import projectional_pycuda_stream
from .accelerate.cuda_pycuda.engines import stochastic
from .accelerate.cuda_pycuda.engines import ML_pycuda
if arch in ['cuda', 'cupy']:
from .accelerate.cuda_cupy.engines import projectional_cupy
from .accelerate.cuda_cupy.engines import projectional_cupy_stream
from .accelerate.cuda_cupy.engines import stochastic
from .accelerate.cuda_cupy.engines import ML_cupy
if arch=='serial':
from .accelerate.base.engines import projectional_serial
from .accelerate.base.engines import projectional_serial_stream
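The new ``load_gpu_engines`` dispatch accepts both the umbrella value ``'cuda'`` and the backend-specific names. A standalone sketch of just the selection logic (the ``engines_for`` helper is illustrative, not part of ptypy's API):

```python
def engines_for(arch):
    """Mirror the membership tests in load_gpu_engines: which engine
    packages would be imported for a given arch string."""
    packages = []
    if arch in ('cuda', 'pycuda'):
        packages.append('accelerate.cuda_pycuda.engines')
    if arch in ('cuda', 'cupy'):
        packages.append('accelerate.cuda_cupy.engines')
    if arch == 'serial':
        packages.append('accelerate.base.engines')
    return packages

print(engines_for('cuda'))  # both GPU backends
print(engines_for('cupy'))  # ['accelerate.cuda_cupy.engines']
```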
1 change: 1 addition & 0 deletions ptypy/accelerate/base/engines/ML_serial.py
@@ -348,6 +348,7 @@ def engine_finalize(self):
prep = self.diff_info[d.ID]
float_intens_coeff[label] = prep.float_intens_coeff
self.ptycho.runtime["float_intens"] = parallel.gather_dict(float_intens_coeff)
super().engine_finalize()


class BaseModelSerial(BaseModel):
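The added ``super().engine_finalize()`` call is what lets accelerated ML engines be chained (#491): without it, the base class's cleanup never runs. A minimal illustration with toy classes (not ptypy's actual hierarchy):

```python
class BaseEngine:
    def __init__(self):
        self.cleaned_up = False

    def engine_finalize(self):
        # base-class cleanup that a subsequent engine relies on
        self.cleaned_up = True

class MLSerial(BaseEngine):
    def engine_finalize(self):
        self.float_intens = {}     # subclass bookkeeping
        super().engine_finalize()  # the line added in this PR

engine = MLSerial()
engine.engine_finalize()
print(engine.cleaned_up)  # True
```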