Dev -> Main for release of 1.11.0 #1342

Merged May 3, 2024 · 93 commits from dev into main

Commits
98ef067
Fix SBX functions for older tifffile versions and Windows (close mmap…
ethanbb Apr 1, 2024
a5de8bf
VERSION: 1.10.10 post-release
pgunn Apr 3, 2024
0fec0b0
Extensions starts with a dot
marberi Apr 8, 2024
4a08717
Use memmap to load only indexed parts of files
ethanbb Apr 10, 2024
163022f
Simplify by using the same loading code for all time index types
ethanbb Apr 10, 2024
20bb8ab
Merge branch 'dev' of https://github.com/flatironinstitute/CaImAn int…
ethanbb Apr 10, 2024
2647b59
Revert tifffile write to what is in dev since it works
ethanbb Apr 11, 2024
f5d5295
Merge pull request #1326 from marberi/dev
pgunn Apr 11, 2024
c75a4f8
Use del to close memmaps rather than _mmap; reduce code duplication
ethanbb Apr 12, 2024
6c836c0
Add comment about not viewing memmap and rename Ns
ethanbb Apr 12, 2024
69772e3
Merge pull request #1329 from proektlab/sbxload-update
pgunn Apr 12, 2024
149d624
Trying out a refactor of the basic demo to be less notebook-y, more a…
pgunn Oct 25, 2023
cd64806
note on single_thread
pgunn Oct 25, 2023
4592b43
demos get a readme (that caimanmanager will deliver, so it needs to b…
pgunn Oct 27, 2023
e17931b
Demos readme: format it a little more nicely
pgunn Oct 27, 2023
f30b975
Smaller cleanups, move logger setup into main(), preparing this to be…
pgunn Nov 3, 2023
fab66bc
bringing demo_pipeline and demo_caiman_basic closer together as I ref…
pgunn Nov 3, 2023
4ff7854
CLI Demos: Finish re-syncing demo_caiman_basic and demo_pipeline
pgunn Nov 3, 2023
46a15dd
Rebasing
pgunn Nov 3, 2023
9b56d85
Work on converting more CLI demos
pgunn Nov 3, 2023
1315b9a
cli_demos work: convert more, regularise more
pgunn Nov 15, 2023
2859174
cli_demos: Initial conversion of demo_pipeline_cnmfE to "app"
pgunn Nov 15, 2023
38436e1
cli demos: convert demo_pipeline_voltage_imaging to "app"
pgunn Nov 15, 2023
b882ddd
rebasing
pgunn Nov 16, 2023
21eb57f
cli demos: make amount of parallelism in the backend a CLI parameter
pgunn Nov 16, 2023
7ee20e1
demo_OnACID CLI: Use the new CNMFParams JSON API to let people specif…
pgunn Feb 28, 2024
2ffe070
CLI demo_caiman_basic.py: only set params once
pgunn Feb 28, 2024
860ea9f
demo_caiman_basic CLI demo: move to json config files, only provide o…
pgunn Feb 28, 2024
5a98259
Demos/jsonconfigs: Fix misuse of params api
pgunn Feb 29, 2024
3862d71
Revise the three CLI demos that have been refactored to do argument/f…
pgunn Mar 1, 2024
994773b
Demo revisions: remove inline parameters
pgunn Mar 1, 2024
13b2730
demos_refactor: demo_onacid file load fix
pgunn Mar 8, 2024
32a6428
demos_refactor: demo_pipeline rework to use json
pgunn Mar 8, 2024
fa065d6
demo_pipeline_NWB conversion work
pgunn Mar 12, 2024
7cfc771
cli demos json format fix whoops
pgunn Mar 12, 2024
01362a7
demos refactor: fixing demos as testing
pgunn Mar 13, 2024
d04b012
demos_refactor: params files should not try to set quality.decay_time…
pgunn Mar 13, 2024
29dcdcf
CLI demos rework: fix demo_onacid_mesoscope's visualisation defaults,…
pgunn Mar 14, 2024
2f6291e
demo_behaviour:rebasing
pgunn Apr 16, 2024
a6bfde2
demo refactor: demo_pipeline_voltage_imaging: make method a commandli…
pgunn Mar 14, 2024
c27f1a0
demos_refactor: volpy demo import tweaks
pgunn Mar 15, 2024
ad2533a
CNMF: Fix some issues from recent CNMFParams refactor
pgunn Mar 15, 2024
896389a
demo_pipeline_cnmfE.py: ready for testing
pgunn Apr 12, 2024
1dd6e2a
remove param temporal.memory_efficient which doesn't seem to do anything
pgunn Apr 12, 2024
28606d4
Params: Remove doc for temporal.memory_efficient which was documented…
pgunn Apr 12, 2024
d607a7c
CNMFParams: Remove documented but never-implemented data.mmap_C and d…
pgunn Apr 12, 2024
0c00f4c
Fix bug from 2019 with #030bc6d35
pgunn Apr 16, 2024
bc95a7f
CNMF: "is True" -> ""
pgunn Apr 16, 2024
dbd8e50
Remove spatial.block_size_spat
pgunn Apr 16, 2024
378b732
Remove merging.max_merge_area
pgunn Apr 16, 2024
f8f924b
demo_behavior: more rebase nonsense
pgunn Apr 16, 2024
8f296f6
demo_behavior: rebase nonsense
pgunn Apr 16, 2024
d87162f
After rebase, add demo_behavior back in
pgunn Apr 16, 2024
9c36719
CLI demo refactor: forgot to import os
pgunn Apr 18, 2024
126ca4a
CLI demos refactor: demo_pipeline_cnmfE: Fix a mistake in move to params
pgunn Apr 18, 2024
b607769
Fix paths for more temporary files over a run to land in caiman_data/…
pgunn Apr 18, 2024
97451ba
Adjust gating, path handling to try harder not to drop temporary file…
pgunn Apr 19, 2024
3ed88c7
Note in docs that caiman.movie.save() returns the filename. Fix a tes…
pgunn Apr 19, 2024
66ede97
Movie.save(): Always return filename (before it only did it for some …
pgunn Apr 19, 2024
f78791c
environment.yml: be smarter about alternate builds of conda, set chan…
pgunn Apr 23, 2024
7d5824d
environment.yml: can't actually set channel priority in environment.yml
pgunn Apr 24, 2024
53ea964
Try adjusting CI to use environment-minimal.yml
pgunn Apr 24, 2024
8b177d5
minimal environment needs a few more things
pgunn Apr 24, 2024
701c9af
environment-minimal hasn't seen enough testing over the years.
pgunn Apr 24, 2024
5082983
If we're to use environment-minimal for CI we need to include nose
pgunn Apr 24, 2024
b6e67c5
prevent old numpy breaking typing
pgunn Apr 24, 2024
0b81678
yaml: Prevent old versions of numpy and associated problems
pgunn Apr 24, 2024
14af8d5
CI's way of managing updating the test environment means lots of mini…
pgunn Apr 24, 2024
aabb25d
Merge pull request #1334 from flatironinstitute/dev-environment_rework
pgunn Apr 24, 2024
73d5721
Hotfix: Fix issue in demo_caiman_cnmf_3D from a past refactor to visu…
pgunn Apr 24, 2024
fd24f25
cherry-pick 73d572
pgunn Apr 24, 2024
41cd982
params: better handle if user provides ring_CNN (compensating for a m…
pgunn Apr 26, 2024
75c95b2
params: better handle if user provides ring_CNN (compensating for a m…
pgunn Apr 26, 2024
c45b9aa
Fix goof in last hotfix to CNMFParams
pgunn Apr 26, 2024
9ea5587
Fix goof in last hotfix to CNMFParams
pgunn Apr 26, 2024
b8e97f8
Remove (broken) support for the old non-hdf5 format of .mat files
pgunn Apr 26, 2024
fafffce
.mat files: give users better messages if they try to open a v1 file
pgunn Apr 26, 2024
2581815
mat file revamp: Use consistent phrasing
pgunn Apr 26, 2024
29a3a67
Update docs on mat file support
pgunn Apr 26, 2024
df5b487
Merge pull request #1337 from flatironinstitute/dev-fix_mat_support
pgunn Apr 26, 2024
7943ede
CNMFParams: remove some dead code (really just a commit to make CI ru…
pgunn Apr 30, 2024
ac1e2aa
CLI OnACID demos: correctly handle display of results
pgunn Apr 30, 2024
0fa78a3
CLI demo pipeline: get rid of qt errors
pgunn Apr 30, 2024
a5886c4
Add note to demo_behaviour that it needs a qt6 build of opencv
pgunn Apr 30, 2024
fe1ad0c
demo_behavior and caiman.behavior.behavior: fix ancient calls
pgunn Apr 30, 2024
0c80201
CLI demo: fix demo_behavior
pgunn Apr 30, 2024
862dba8
Remove caiman.base.movies.online_NMF() which is unused (and needs the…
pgunn May 1, 2024
887e99e
caiman.behavior.extract_components(): remove method_factorization of …
pgunn May 1, 2024
b45633a
Add note about commandline demos to the main README
pgunn May 1, 2024
6581329
Merge pull request #1211 from flatironinstitute/cli_demos_refactor
pgunn May 1, 2024
8e9ad9f
caimanmanager: don't copy __pycache__ dirs out of the library bundle
pgunn May 1, 2024
29743f0
Merge pull request #1341 from flatironinstitute/dev-fix_pycache_creation
pgunn May 1, 2024
63c1932
VERSION: next release will be 1.11.0
pgunn May 3, 2024
2 changes: 1 addition & 1 deletion .github/workflows/run_tests.yml
@@ -26,7 +26,7 @@ jobs:
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
environment-file: environment.yml
environment-file: environment-minimal.yml
activate-environment: caiman

- name: Install OS Dependencies
2 changes: 2 additions & 0 deletions README.md
@@ -62,6 +62,8 @@ The main use cases and notebooks are listed in the following table:

A comprehensive list of references, where you can find detailed discussion of the methods and their development, can be found [here](https://caiman.readthedocs.io/en/master/CaImAn_features_and_references.html#references).

# CLI demos
Caiman also provides commandline demos, similar to the notebooks, that demonstrate how to work with the codebase outside of Jupyter. They take their configuration primarily from json files (which you will want to modify to work with your data and its specifics) and should be reasonably easy to adapt if they don't already do what you want, in particular around saving results (a standard output format for Caiman is planned for a future release). To run them, activate your environment and find the demos in demos/general under your caiman data directory; you can run them like any other python application, or edit them in your code editor. Each demo comes with a json configuration file that you can customise, and a README in the demos directory covers some of this.

# How to get help
- [Online documentation](https://caiman.readthedocs.io/en/latest/) contains a lot of general information about Caiman, the parameters, how to interpret its outputs, and more.
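The CLI demos described in the README addition above are driven by JSON configuration. As a rough sketch of that pattern (the filename is made up, and passing the parsed JSON straight to CNMFParams is an assumption about the demos' schema, not part of this diff):

```python
import json
from caiman.source_extraction.cnmf.params import CNMFParams

# Read a demo-style JSON config and turn it into a parameter object;
# the real demos ship their own JSON files under demos/general
with open("demo_pipeline_params.json") as f:   # hypothetical filename
    cfg = json.load(f)

params = CNMFParams(params_dict=cfg)
```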
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.10.9
1.11.0
121 changes: 28 additions & 93 deletions caiman/base/movies.py
@@ -673,69 +673,6 @@ def NonnegativeMatrixFactorization(self,

return space_components, time_components

def online_NMF(self,
n_components: int = 30,
method: str = 'nnsc',
lambda1: int = 100,
iterations: int = -5,
model=None,
**kwargs) -> tuple[np.ndarray, np.ndarray]:
""" Method performing online matrix factorization and using the spams

(http://spams-devel.gforge.inria.fr/doc-python/html/index.html) package from Inria.
Implements both the nmf and nnsc methods

Args:
n_components: int

method: 'nnsc' or 'nmf' (see http://spams-devel.gforge.inria.fr/doc-python/html/index.html)

lambda1: see http://spams-devel.gforge.inria.fr/doc-python/html/index.html

iterations: see http://spams-devel.gforge.inria.fr/doc-python/html/index.html

batchsize: see http://spams-devel.gforge.inria.fr/doc-python/html/index.html

model: see http://spams-devel.gforge.inria.fr/doc-python/html/index.html

**kwargs: more arguments to be passed to nmf or nnsc

Returns:
time_comps

space_comps
"""
try:
import spams # XXX consider moving this to the head of the file
except:
logging.error("You need to install the SPAMS package")
raise

T, d1, d2 = np.shape(self)
d = d1 * d2
X = np.asfortranarray(np.reshape(self, [T, d], order='F'))

if method == 'nmf':
(time_comps, V) = spams.nmf(X, return_lasso=True, K=n_components, numThreads=4, iter=iterations, **kwargs)

elif method == 'nnsc':
(time_comps, V) = spams.nnsc(X,
return_lasso=True,
K=n_components,
lambda1=lambda1,
iter=iterations,
model=model,
**kwargs)
else:
raise Exception('Method unknown')

space_comps = []

for _, mm in enumerate(V):
space_comps.append(np.reshape(mm.todense(), (d1, d2), order='F'))

return time_comps, np.array(space_comps)

def IPCA(self, components: int = 50, batch: int = 1000) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
"""
Iterative Principal Component analysis, see sklearn.decomposition.incremental_pca
@@ -1244,7 +1181,8 @@ def load(file_name: Union[str, list[str]],
dimension of the movie along x and y if loading from a two dimensional numpy array

var_name_hdf5: str
if loading from hdf5/n5 name of the dataset inside the file to load (ignored if the file only has one dataset)
if loading from hdf5/n5 name of the dataset inside the file to load (ignored if the file only has one dataset).
This is also used for (new-style) mat files

in_memory: bool=False
This changes the behaviour of the function for npy files to be a readwrite rather than readonly memmap,
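A hedged illustration of the docstring change above (the filename and dataset name are invented):

```python
import caiman

# var_name_hdf5 names the dataset inside an hdf5/n5 container; per the
# updated docs it is also used for new-style (v7.3, HDF5-based) .mat files
m = caiman.load("session.mat", var_name_hdf5="mov")
```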
@@ -1314,17 +1252,6 @@
basename, extension = os.path.splitext(file_name)

extension = extension.lower()
if extension == '.mat':
logging.warning('Loading a *.mat file. x- and y- dimensions ' +
'might have been swapped.')
try: # scipy >= 1.8
byte_stream, file_opened = scipy.io.matlab._mio._open_file(file_name, appendmat=False)
mjv, mnv = scipy.io.matlab.miobase.get_matfile_version(byte_stream)
except: # scipy <= 1.7
byte_stream, file_opened = scipy.io.matlab.mio._open_file(file_name, appendmat=False)
mjv, mnv = scipy.io.matlab.mio.get_matfile_version(byte_stream)
if mjv == 2:
extension = '.h5'

if extension in ['.tif', '.tiff', '.btf']: # load tif file
with tifffile.TiffFile(file_name) as tffl:
@@ -1512,23 +1439,23 @@
else:
input_arr = input_arr[np.newaxis, :, :]

elif extension == '.mat': # load mat file
input_arr = scipy.io.loadmat(file_name)['data']
input_arr = np.rollaxis(input_arr, 2, -3)
if subindices is not None:
input_arr = input_arr[subindices]

elif extension == '.npz': # load movie from saved file
if subindices is not None:
raise Exception('Subindices not implemented')
with np.load(file_name) as f:
return movie(**f).astype(outtype)

elif extension in ('.hdf5', '.h5', '.nwb', 'n5', 'zarr'):
elif extension in ('.hdf5', '.h5', '.mat', '.nwb', 'n5', 'zarr'):
if extension in ('n5', 'zarr'): # Thankfully, the zarr library lines up closely with h5py past the initial open
f = zarr.open(file_name, "r")
else:
f = h5py.File(file_name, "r")
try:
f = h5py.File(file_name, "r")
except:
if extension == '.mat':
raise Exception(f"Problem loading {file_name}: Unknown format. This may be in the original version 1 (non-hdf5) mat format; please convert it first")
else:
raise Exception(f"Problem loading {file_name}: Unknown format.")
ignore_keys = ['__DATA_TYPES__'] # Known metadata that tools provide, add to this as needed. Sync with get_file_size() !!
fkeys = list(filter(lambda x: x not in ignore_keys, f.keys()))
if len(fkeys) == 1: # If the file we're parsing has only one dataset inside it,
@@ -1951,11 +1878,17 @@ def load_iter(file_name: Union[str, list[str]], subindices=None, var_name_hdf5:
yield frame # was frame[..., 0].astype(outtype)
return

elif extension in ('.hdf5', '.h5', '.nwb', '.mat', 'n5', 'zarr'):
if extension in ('n5', 'zarr'): # Thankfully, the zarr library lines up closely with h5py past the initial open
elif extension in ('.hdf5', '.h5', '.nwb', '.mat', '.n5', '.zarr'):
if extension in ('.n5', '.zarr'): # Thankfully, the zarr library lines up closely with h5py past the initial open
f = zarr.open(file_name, "r")
else:
f = h5py.File(file_name, "r")
try:
f = h5py.File(file_name, "r")
except:
if extension == '.mat':
raise Exception(f"Problem loading {file_name}: Unknown format. This may be in the original version 1 (non-hdf5) mat format; please convert it first")
else:
raise Exception(f"Problem loading {file_name}: Unknown format.")
ignore_keys = ['__DATA_TYPES__'] # Known metadata that tools provide, add to this as needed.
fkeys = list(filter(lambda x: x not in ignore_keys, f.keys()))
if len(fkeys) == 1: # If the hdf5 file we're parsing has only one dataset inside it,
@@ -2010,11 +1943,7 @@ def get_file_size(file_name, var_name_hdf5:str='mov') -> tuple[tuple, Union[int,
if os.path.exists(file_name):
_, extension = os.path.splitext(file_name)[:2]
extension = extension.lower()
if extension == '.mat':
byte_stream, file_opened = scipy.io.matlab.mio._open_file(file_name, appendmat=False)
mjv, mnv = scipy.io.matlab.mio.get_matfile_version(byte_stream)
if mjv == 2:
extension = '.h5'

if extension in ['.tif', '.tiff', '.btf']:
tffl = tifffile.TiffFile(file_name)
siz = tffl.series[0].shape
@@ -2042,12 +1971,18 @@ def get_file_size(file_name, var_name_hdf5:str='mov') -> tuple[tuple, Union[int,
filename = os.path.split(file_name)[-1]
Yr, dims, T = caiman.mmapping.load_memmap(os.path.join(
os.path.split(file_name)[0], filename))
elif extension in ('.h5', '.hdf5', '.nwb', 'n5', 'zarr'):
elif extension in ('.h5', '.hdf5', '.mat', '.nwb', 'n5', 'zarr'):
# FIXME this doesn't match the logic in load()
if extension in ('n5', 'zarr'): # Thankfully, the zarr library lines up closely with h5py past the initial open
f = zarr.open(file_name, "r")
else:
f = h5py.File(file_name, "r")
try:
f = h5py.File(file_name, "r")
except:
if extension == '.mat':
raise Exception(f"Problem loading {file_name}: Unknown format. This may be in the original version 1 (non-hdf5) mat format; please convert it first")
else:
raise Exception(f"Problem loading {file_name}: Unknown format.")
ignore_keys = ['__DATA_TYPES__'] # Known metadata that tools provide, add to this as needed. Sync with movies.py:load() !!
kk = list(filter(lambda x: x not in ignore_keys, f.keys()))
if len(kk) == 1: # TODO: Consider recursing into a group to find a dataset
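For context on the .mat changes above: MATLAB v7.3 files are HDF5 containers, so h5py opens them directly, while older formats now fail with the conversion hint. A minimal sketch of that distinction (the helper and filename are illustrative, not Caiman API):

```python
import h5py

def is_hdf5_mat(path: str) -> bool:
    # v7.3 .mat files are HDF5 under the hood; pre-7.3 files are not
    try:
        with h5py.File(path, "r"):
            return True
    except OSError:
        return False   # likely an old-style .mat file; convert it first
```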
11 changes: 11 additions & 0 deletions caiman/base/timeseries.py
@@ -147,6 +147,7 @@ def save(self,
Args:
file_name: str
name of file. Possible formats are tif, avi, npz, mmap and hdf5
If a path is not part of the filename, it will be saved into a temporary directory under caiman_data

to32: Bool
whether to transform to 32 bits
@@ -165,6 +166,9 @@
if saving as .tif, specifies the compression level
if saving as .avi or .mkv, compress=0 uses the IYUV codec, otherwise the FFV1 codec is used

Returns:
generated_filename: The full filename, path included, where the data was saved

Raises:
Exception 'Extension Unknown'

@@ -197,6 +201,8 @@ def foo(i):
if to32 and not ('float32' in str(self.dtype)):
curfr = curfr.astype(np.float32)
tif.save(curfr, compress=compress)
return file_name

elif extension == '.npz':
if to32 and not ('float32' in str(self.dtype)):
input_arr = self.astype(np.float32)
@@ -209,6 +215,8 @@
fr=self.fr,
meta_data=self.meta_data,
file_name=self.file_name)
return file_name

elif extension in ('.avi', '.mkv'):
codec = None
if compress == 0:
@@ -241,6 +249,7 @@ def foo(i):
for d in data:
vw.write(cv2.cvtColor(d, cv2.COLOR_GRAY2BGR))
vw.release()
return file_name

elif extension == '.mat':
if self.file_name[0] is not None:
@@ -271,6 +280,7 @@ def foo(i):
'meta_data': self.meta_data,
'file_name': f_name
})
return file_name

elif extension in ('.hdf5', '.h5'):
with h5py.File(file_name, "w") as f:
@@ -289,6 +299,7 @@ def foo(i):
if self.meta_data[0] is not None:
logging.debug("Metadata for saved file: " + str(self.meta_data))
dset.attrs["meta_data"] = cpk.dumps(self.meta_data)
return file_name
elif extension == '.mmap':
base_name = name

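A usage sketch of the new return value (the data and filename are invented):

```python
import numpy as np
import caiman

# save() now always returns the full path it wrote, which matters because
# pathless filenames are relocated into a temp directory under caiman_data
m = caiman.movie(np.zeros((10, 32, 32), dtype=np.float32), fr=30)
saved_path = m.save("example_movie.hdf5")
print(saved_path)
```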
27 changes: 9 additions & 18 deletions caiman/behavior/behavior.py
@@ -37,6 +37,10 @@ def select_roi(img: np.ndarray, n_rois: int = 1) -> list:
each element is a mask corresponding to one ROI
"""

# FIXME This function depends on particular builds of OpenCV
# and may be difficult to support moving forward; would be good to
# move this kind of code out of the core and find more portable ways
# to do it
masks = []
for _ in range(n_rois):
fig = plt.figure()
@@ -130,8 +134,8 @@ def extract_magnitude_and_angle_from_OF(spatial_filter_,
x, y = scipy.signal.medfilt(time_trace, kernel_size=[1, 1]).T
x = scipy.signal.savgol_filter(x.squeeze(), sav_filter_size, 1)
y = scipy.signal.savgol_filter(y.squeeze(), sav_filter_size, 1)
mag, dirct = to_polar(x - caiman.components_evaluation.mode_robust(x),
y - caiman.components_evaluation.mode_robust(y))
mag, dirct = to_polar(x - caiman.utils.stats.mode_robust(x),
y - caiman.utils.stats.mode_robust(y))
dirct = scipy.signal.medfilt(dirct.squeeze(), kernel_size=1).T

# normalize to pixel units
@@ -325,25 +329,12 @@ def extract_components(mov_tot,

if method_factorization == 'nmf':
nmf = NMF(n_components=n_components, **kwargs)

time_trace = nmf.fit_transform(newm)
spatial_filter = nmf.components_
spatial_filter = np.concatenate([np.reshape(sp, (d1, d2))[np.newaxis, :, :] for sp in spatial_filter], axis=0)

elif method_factorization == 'dict_learn':
import spams
newm = np.asfortranarray(newm, dtype=np.float32)
time_trace = spams.trainDL(newm, K=n_components, mode=0, lambda1=1, posAlpha=True, iter=max_iter_DL)

spatial_filter = spams.lasso(newm,
D=time_trace,
return_reg_path=False,
lambda1=0.01,
mode=spams.spams_wrap.PENALTY,
pos=True)

spatial_filter = np.concatenate([np.reshape(sp, (d1, d2))[np.newaxis, :, :] for sp in spatial_filter.toarray()],
axis=0)
else:
# Caiman used to support a method_factorization called dict_learn, implemented using spams.lasso
raise Exception(f"Unknown or unsupported method_factorization: {method_factorization}")

time_trace = [np.reshape(ttr, (c, T)).T for ttr in time_trace.T]

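Since extract_components now keeps only the sklearn NMF path, here is a self-contained sketch of that factorization (shapes and variable names are illustrative, not the Caiman internals):

```python
import numpy as np
from sklearn.decomposition import NMF

T, d1, d2 = 100, 16, 16
newm = np.abs(np.random.rand(T, d1 * d2))        # nonnegative flattened movie

nmf = NMF(n_components=3)
time_trace = nmf.fit_transform(newm)             # (T, n_components)
spatial_filter = nmf.components_.reshape(3, d1, d2)   # one map per component
```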
10 changes: 5 additions & 5 deletions caiman/caimanmanager.py
@@ -1,7 +1,6 @@
#!/usr/bin/env python

import argparse
import distutils.dir_util
import filecmp
import glob
import os
@@ -53,20 +52,21 @@

def do_install_to(targdir: str, inplace: bool = False, force: bool = False) -> None:
global sourcedir_base
ignore_pycache=shutil.ignore_patterns('__pycache__')
if os.path.isdir(targdir) and not force:
raise Exception(targdir + " already exists. You may move it out of the way, remove it, or use --force")
if not inplace: # In this case we rely on what setup.py put in the share directory for the module
if not force:
shutil.copytree(sourcedir_base, targdir)
shutil.copytree(sourcedir_base, targdir, ignore=ignore_pycache)
else:
distutils.dir_util.copy_tree(sourcedir_base, targdir)
shutil.copytree(sourcedir_base, targdir, ignore=ignore_pycache, dirs_exist_ok=True)
os.makedirs(os.path.join(targdir, 'temp' ), exist_ok=True)
else: # here we recreate the other logical path. Maintenance concern: Keep these reasonably in sync with what's in setup.py
for copydir in extra_dirs:
if not force:
shutil.copytree(copydir, os.path.join(targdir, copydir))
shutil.copytree(copydir, os.path.join(targdir, copydir), ignore=ignore_pycache)
else:
distutils.dir_util.copy_tree(copydir, os.path.join(targdir, copydir))
shutil.copytree(copydir, os.path.join(targdir, copydir), ignore=ignore_pycache, dirs_exist_ok=True)
os.makedirs(os.path.join(targdir, 'example_movies'), exist_ok=True)
os.makedirs(os.path.join(targdir, 'temp' ), exist_ok=True)
for stdmovie in standard_movies:
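The caimanmanager change replaces the long-deprecated distutils.dir_util.copy_tree (gone along with distutils in Python 3.12) with shutil.copytree, which since Python 3.8 supports overwriting in place via dirs_exist_ok. A minimal sketch (paths are illustrative):

```python
import shutil

ignore_pycache = shutil.ignore_patterns('__pycache__')

# Equivalent of the old distutils copy_tree, minus any __pycache__ dirs
shutil.copytree('source_dir', 'target_dir',
                ignore=ignore_pycache, dirs_exist_ok=True)
```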
4 changes: 2 additions & 2 deletions caiman/mmapping.py
@@ -405,7 +405,7 @@ def save_memmap(filenames:list[str],
recompute_each_memmap = True


if recompute_each_memmap or (remove_init>0) or (idx_xy is not None)\
if recompute_each_memmap or (remove_init > 0) or (idx_xy is not None)\
or (xy_shifts is not None) or (add_to_movie != 0) or (border_to_0>0)\
or slices is not None:

@@ -527,7 +527,7 @@ def save_memmap(filenames:list[str],
sys.stdout.flush()
Ttot = Ttot + T

fname_new = caiman.paths.fn_relocated(fname_tot + f'_frames_{Ttot}.mmap')
fname_new = os.path.join(caiman.paths.get_tempdir(), caiman.paths.fn_relocated(f'{fname_tot}_frames_{Ttot}.mmap'))
try:
# need to explicitly remove destination on windows
os.unlink(fname_new)
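With the mmapping change above, generated memmap files now land under the Caiman temp directory rather than the current working directory. A quick way to see where that is (caiman.paths.get_tempdir is the helper used in the diff):

```python
import caiman.paths

# New memmap outputs are joined onto this directory,
# typically something like ~/caiman_data/temp
print(caiman.paths.get_tempdir())
```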