
Error in 1.3. Load key dependencies StarDist_2D #249

Open · BelES123 opened this issue Mar 14, 2023 · 10 comments

@BelES123

Hello, I ran into the following problem while trying to load the key dependencies for StarDist_2D (1.3. Load key dependencies). Installation in step 1.1 (StarDist and dependencies) goes fine.

2.11.0
Tensorflow enabled.

Libraries installed
Notebook version: 1.18
Latest notebook version: 1.18
This notebook is up-to-date.

ValueError                                Traceback (most recent call last)
<ipython-input-…> in <module>
    472 # Build requirements file for local run
    473 after = [str(m) for m in sys.modules]
--> 474 build_requirements_file(before, after)

3 frames
/usr/local/lib/python3.9/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults)
   1534
   1535     if delimiter == "\n":
-> 1536         raise ValueError(
   1537             r"Specified \n as separator or delimiter. This forces the python engine "
   1538             "which does not accept a line terminator. Hence it is not allowed to use "

ValueError: Specified \n as separator or delimiter. This forces the python engine which does not accept a line terminator. Hence it is not allowed to use the line terminator as separator.
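For readers hitting this before the fix below: the failure is a pandas behaviour change, not a StarDist problem. Recent pandas versions (1.4 and later) refuse `sep="\n"` in `read_csv`, which the notebook's `build_requirements_file` apparently relied on. A minimal sketch of the failure and a pandas-free workaround; the requirements contents here are made up for illustration:

```python
# Minimal reproduction of the pandas error, plus a workaround.
# Assumes the file was parsed with pd.read_csv(..., sep="\n");
# pandas >= 1.4 rejects "\n" as a separator/delimiter.
import io
import pandas as pd

text = "numpy==1.23.5\ntensorflow==2.11.0\n"  # hypothetical requirements content

try:
    pd.read_csv(io.StringIO(text), sep="\n", header=None)
except ValueError as err:
    print("pandas refused:", err)

# Workaround: split the lines without pandas.
requirements = [line for line in text.splitlines() if line.strip()]
print(requirements)  # ['numpy==1.23.5', 'tensorflow==2.11.0']
```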

@clairepiperr

Hi,

I am having the same error, with 1.3. Load key dependencies crashing.

2.11.0
Tensorflow enabled.
site.config.json: 5.71kiB [00:00, 2.58MiB/s]
_resolve_source.py (471): Download (5705) does not have expected size (2006).
collection.json: 189kiB [00:00, 24.5MiB/s]
_resolve_source.py (471): Download (189473) does not have expected size (24320).

Libraries installed
Notebook version: 1.18
Latest notebook version: 1.18
This notebook is up-to-date.

ParserError                               Traceback (most recent call last)
<ipython-input-…> in <module>
    471 # Build requirements file for local run
    472 after = [str(m) for m in sys.modules]
--> 473 build_requirements_file(before, after)

Let me know.

[screenshot: stardist error]

@guijacquemet
Collaborator

Hi,
I posted a quick fix.
Let me know if it works on your side!

Cheers

Guillaume

@clairepiperr

Hi,

Yes, that seems to be working now.

Thanks :)

Claire

@clairepiperr

I managed to get to 3.3 fine this time, but now I am having an error when using pretrained weights.
It seems the download path isn't working, even though it was earlier when I had the other error.

[screenshot]

Any advice?

@guijacquemet
Collaborator

I cannot reproduce the error on my side. What settings are you using?

@clairepiperr

I get the same error regardless of which model I use.
[screenshot]

@guijacquemet
Collaborator

My best guess is that you need to restart your Google Colab session. It looks like a connection issue.
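If it helps to rule out connectivity, the pretrained 2D models can also be fetched directly through the StarDist API, independently of the notebook (a sketch; the notebook itself may fetch weights differently):

```python
# Sanity-check that pretrained weights can be downloaded at all.
from stardist.models import StarDist2D

StarDist2D.from_pretrained()  # with no argument, prints the registered pretrained models
model = StarDist2D.from_pretrained('2D_versatile_fluo')  # downloads and caches the weights
print(model.config)
```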

@BelES123
Author

BelES123 commented Mar 16, 2023

> Hi, I posted a quick fix. Let me know if it works on your side!
>
> Cheers
>
> Guillaume

Thank you very much, Guillaume, it works now!

I have another question regarding StarDist training.

In step 4.1. Prepare the training data and model for training, I get the following warning.

WARNING: median object size larger than field of view of the neural network.
Config2D(n_dim=2, axes='YXC', n_channel_in=1, n_channel_out=33, train_checkpoint='weights_best.h5', train_checkpoint_last='weights_last.h5', train_checkpoint_epoch='weights_now.h5', n_rays=32, grid=(2, 2), backbone='unet', n_classes=None, unet_n_depth=3, unet_kernel_size=(3, 3), unet_n_filter_base=32, unet_n_conv_per_depth=2, unet_pool=(2, 2), unet_activation='relu', unet_last_activation='relu', unet_batch_norm=False, unet_dropout=0.0, unet_prefix='', net_conv_after_unet=128, net_input_shape=(None, None, 1), net_mask_shape=(None, None, 1), train_shape_completion=False, train_completion_crop=32, train_patch_size=(1024, 1024), train_background_reg=0.0001, train_foreground_only=0.9, train_sample_cache=True, train_dist_loss='mae', train_loss_weights=(1, 0.2), train_class_weights=(1, 1), train_epochs=400, train_steps_per_epoch=100, train_learning_rate=0.0003, train_batch_size=2, train_n_val_patches=None, train_tensorboard=True, train_reduce_lr={'factor': 0.5, 'patience': 40, 'min_delta': 0}, use_gpu=False)
Number of steps: 7

Also, StarDist doesn't perform well after training. I believe this is because I am trying to recognise degradation spots that are much smaller than nuclei. Is there a way to give the model the median object size?
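(For context, the warning above is produced by a comparison like the one in the official StarDist training examples, where the median label extent is checked against the network's field of view. In this sketch, `Y` stands for the list of training masks and `model` for the configured StarDist2D model:)

```python
# How the warning is derived (adapted from the StarDist example notebooks).
import numpy as np
from stardist import calculate_extents

median_size = calculate_extents(list(Y), np.median)   # median object size per axis
fov = np.array(model._axes_tile_overlap('YX'))        # network field of view
print(f"median object size:    {median_size}")
print(f"network field of view: {fov}")
if any(median_size > fov):
    print("WARNING: median object size larger than field of view of the neural network.")
```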

@guijacquemet
Collaborator

Hi,

Not directly. In the advanced training parameters, you could try playing with:
grid_parameter: increase this number if the cells/nuclei are very large, or decrease it if they are very small. Default value: 2.

Could you post an example image?
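In the underlying StarDist API, the notebook's grid_parameter maps to the `grid` argument of `Config2D` (visible as `grid=(2, 2)` in the configuration printed above). A sketch of lowering it for small objects; the model name and base directory are placeholders:

```python
# grid controls how much the network subsamples the prediction grid;
# a smaller grid suits smaller objects.
from stardist.models import Config2D, StarDist2D

conf = Config2D(
    n_rays=32,
    n_channel_in=1,
    grid=(1, 1),  # the run above used the default (2, 2)
)
model = StarDist2D(conf, name='stardist_small_objects', basedir='models')  # hypothetical names
```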

@BelES123
Author

> Hi,
>
> Not directly. In the advanced training parameters, you could try playing with grid_parameter: increase this number if the cells/nuclei are very large, or decrease it if they are very small. Default value: 2.
>
> Could you post an example image?

Hello Guillaume, here is an original image. For StarDist training I invert it (second image), and the third image is my mask.

[image: Image_original]

[image: Image (inverted)]

[image: Mask]
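(For anyone reproducing this preprocessing: one simple way to invert an image before training is `skimage.util.invert`. This is an assumption about how the inversion shown above was done, and the file names are placeholders:)

```python
# Invert the raw image so the spots appear bright, as in the second image above.
from skimage import io, util

img = io.imread('Image_original.tif')  # hypothetical file name
inverted = util.invert(img)            # for unsigned ints: dtype max minus pixel values
io.imsave('Image_inverted.tif', inverted)
```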
