
Model used in the publication #8

Open
ZyHUpenn opened this issue Mar 22, 2023 · 27 comments

Comments

@ZyHUpenn

Hi, I'm really interested in which model you used for the voltage imaging data of the mouse cortex layers and zebrafish. I've recently been working with similar voltage imaging data, and I found some tiny irregular spikes that look like noise on my baseline. Did you use bs1 or bs3, or another model?

@ZyHUpenn
Author

[image: averaged ROI trace showing irregular spikes]
I averaged over the cell's region, and there seem to be many irregular spikes. I was wondering whether this is because the model's training set is quite different from my data, which may cause some of the noise to be missed.

@EOMMINHO
Member

EOMMINHO commented Mar 22, 2023

Hi, I am not sure what the problem is just by looking at the trace.
We used bs1 for data that does not contain structural noise, and bs3 otherwise.
It might be worth trying bs3 if the denoising does not work correctly.

Did you use the pre-trained model we uploaded, or did you train the model on your own?

@ZyHUpenn
Author

I used the pre-trained model, and I remember it was bs1.

@ZyHUpenn
Author

[image: denoised trace from the publication]
[image: denoised trace from our data with a distorted baseline]
Comparing with the denoised data in your publication, that curve is clean and smooth, but when I applied SUPPORT directly to our data, the baseline seems incorrect.

@SteveJayH
Member

Here are the model parameters that were used for the mouse and zebrafish data in Fig. 4 (Denoising population voltage imaging data) of our paper:
GDrive

Try those two parameter files. Note that differences in imaging rate, spike properties, noise level, etc. between the training and test data can hinder perfect denoising.

The best approach is to train SUPPORT on your own data. If these pre-trained models also do not work well, or if you have any difficulties with training, let us know.

@ZyHUpenn
Author

I will try them, thank you very much!

@ZyHUpenn
Author

Sorry, it seems the pre-trained model's 'in_channels' parameter is 61, while the 'in_channels' in the parameter file you shared is 16. Shall I use 61, or should I change the layer architecture?

@SteveJayH
Member

[image: screenshot of the model-loading code]

Ah, sorry for the inconvenience. That was a typo; 61 is correct.

You can load both parameter files like this (I just checked!).
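For reference, a minimal sketch of how the shared .pth files might be loaded (the import path and filename are assumptions, and we assume the checkpoint stores a plain state_dict):

    import torch
    from model.SUPPORT import SUPPORT  # import path assumed from the repo layout

    # Build the network with in_channels=61 to match the shared checkpoints;
    # other constructor arguments are left at their defaults here.
    model = SUPPORT(in_channels=61).cuda()

    # Load the shared weights (the .pth filename below is hypothetical).
    state_dict = torch.load("mouse_fig4.pth", map_location="cuda")
    model.load_state_dict(state_dict)
    model.eval()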

@ZyHUpenn
Author

Yes, I have validated the model on your zebrafish data, and it works well. It seems we need to train models on our own data in order to get good results. Thank you very much for the help!

@ZyHUpenn
Author

ZyHUpenn commented Apr 5, 2023

Hi, can I ask what raw data you used in Fig. 4a and 4d? I downloaded the mouse cortex data and found that they are all single-cell images, and the zebrafish data seem to be brain images, while Fig. 4d seems to be from the body. Also, can I ask how much data you used in training? I trained on some of my data, but the SNR improvement is not obvious. Thanks for your time!

@EOMMINHO
Member

EOMMINHO commented Apr 5, 2023

Hi, the data used in Fig. 4a can be downloaded from (https://zenodo.org/record/4515768#.ZC0_DHZByUk), and the data used in Fig. 4d from (https://figshare.com/articles/dataset/Voltage_imaging_in_zebrafish_spinal_cord_with_zArchon1/14153339).
We trained each model on a single video.

@ZyHUpenn
Author

ZyHUpenn commented Apr 5, 2023

Thank you very much! Can I ask how you trained on your video, e.g., the learning rate and other parameters? To save time, I trained on a 30000 × 200 × 96 movie (frame rate = 1000) for 100 epochs; the loss is around 0.1 and is barely decreasing. Should I increase the number of training epochs or modify some other parameters?

@EOMMINHO
Member

EOMMINHO commented Apr 5, 2023

We trained the model with the default parameters uploaded at (https://github.com/NICALab/SUPPORT/blob/main/src/utils/util.py). If you find that the SNR improvement is not obvious, I would recommend increasing "bs_size" to [3, 3] or higher.
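If it helps, a minimal sketch of what that change might look like when constructing the model (constructor signature taken from the snippet shared later in this thread; the import path is assumed from the repo layout):

    from model.SUPPORT import SUPPORT  # import path assumed

    # bs_size sets the spatial extent of the blind spot: [1, 1] (bs1) suits
    # pixel-wise independent noise, while [3, 3] (bs3) or larger helps when
    # the noise is spatially correlated (structural noise).
    model = SUPPORT(in_channels=61, bs_size=[3, 3]).cuda()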

@ZyHUpenn
Author

I used bs_size [3, 3] to train on my movie, but there does not seem to be much improvement; the training loss is around 0.10. My movie is 200 × 96 × 30000, and each frame looks like the attached image. Could the different spatial size, or the large amount of background in our movie, be causing the problem? Do you think I need to modify the network architecture, e.g., change the kernel size or add some regularization?
[image: example frame from the movie]

@SteveJayH
Member

SteveJayH commented Apr 12, 2023

  1. Is the attached frame denoised, or is it raw?

  2. What does "not much improvement" mean in detail?

  • Do you observe noise in the denoised frame as well? (spatially)
  • Or is it visually okay, but the noise was not reduced in the ROI trace? (temporally)

If noise remains in the denoised frame, we usually increase the bs_size.

P.S. Would it be possible to share the data and let us try processing it?
We cannot be sure just by looking at one frame of the data.
Based on the current information, I think your data is not much different from the data we have processed.

@ZyHUpenn
Author

Thank you very much! I've sent our data by email. Also, when I used the pre-trained model with your parameters to denoise your data, some spike signals seem to be lost. I've attached a denoised trace from a chosen ROI; do you have any idea how this could happen?

[image: raw vs. denoised ROI trace]
[image: zoomed view of lost spikes]

The red line is the denoised trace and the blue line is the raw trace.
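For anyone reproducing this comparison, a minimal sketch of overlaying raw and denoised ROI traces (assuming both stacks are loaded as (T, H, W) NumPy arrays and a boolean ROI mask; the variable names are hypothetical):

    import numpy as np
    import matplotlib.pyplot as plt

    def roi_mean_trace(stack, mask):
        """Mean fluorescence per frame of a (T, H, W) stack over a boolean ROI mask."""
        return stack[:, mask].mean(axis=1)

    raw_trace = roi_mean_trace(raw_stack, roi_mask)        # raw_stack: (T, H, W) array
    den_trace = roi_mean_trace(denoised_stack, roi_mask)   # denoised_stack: same shape

    plt.plot(raw_trace, color="blue", label="raw")
    plt.plot(den_trace, color="red", label="denoised")
    plt.xlabel("frame")
    plt.ylabel("mean ROI fluorescence")
    plt.legend()
    plt.show()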

@ZyHUpenn
Author

ZyHUpenn commented Apr 16, 2023 via email

@EOMMINHO
Member

Hi, the model we uploaded to GitHub was trained on another dataset, which I believe was the mouse calcium imaging dataset.
We uploaded that sample model just for demo purposes.
As the modality of that data is quite different from the zebrafish voltage imaging dataset, the denoising performance may not be satisfactory.

@SteveJayH
Member

We'll try SUPPORT on your data.
If we observe noise in the denoised data, we typically increase the size of the blind spot.
And if the dF/F0 of the spikes is reduced after denoising, we train for longer.

Also, we are currently unable to download the data; it seems we need permission.

These are our Gmail addresses, so please add us to your shared link! After that, we'll try your data.

Minho ([email protected])
Seungjae ([email protected])

@ZyHUpenn
Author

ZyHUpenn commented Apr 17, 2023

Thank you very much! I've added you to the movie link. Regarding the mouse calcium imaging dataset you mentioned, do you mean the mouse cortex data? I just tried the mouse cortex movie. After denoising, I flipped the data to get the trace, and the spikes seem to be eliminated. Do I need to do some preprocessing or postprocessing for negative-going-indicator data, or did I do something wrong?
[image: trace from the denoised mouse cortex movie with spikes missing]

@SteveJayH
Member

Hi, we have denoised your data and would like to share what we did.

In short, we believe that the spikes (in our view) are preserved and the noise has been reduced after denoising.

Here is a shared folder that contains (1) the denoised image, (2) mean traces from the raw and denoised data, (3) the model .pth file, and (4) the ImageJ ROIs we used for analysis:
shared GDrive folder

Please take a look and check whether the results are the same as yours, or better.
There are trace plots as PNG files in the roi_traces folder, so I recommend taking a look at those.

It would be great if you could point out the ROI or temporal region where the denoised data shows poor performance.
Additionally, we found fluctuations in both the raw and denoised data in the subthreshold region. Since the fluctuations are quite regular, we think they are not a noise component, and therefore SUPPORT did not remove them.

Below are the details. I assume most of the settings are similar to your experiment.


We trained for about 150 epochs (~26 hours on an RTX 3090 Ti GPU).
The model specification is as follows:

    model = SUPPORT(in_channels=61, mid_channels=[64, 128, 256, 512, 1024], depth=5,\
            blind_conv_channels=64, one_by_one_channels=[32, 16], last_layer_channels=[64, 32, 16], bs_size=[3, 3]).cuda()

where only the mid_channels are increased compared to the default.
This simply increases the capacity of the model.

And we used a patch_size of [61, 96, 96] and a patch_interval of [1, 48, 48], since the width of your data (96) is smaller than the default patch_size of 128.
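As a sanity check on those settings, a minimal sketch of how many patch positions the chosen patch_size/patch_interval would tile over a (30000, 200, 96) stack (the ceil-based sliding-window formula is our assumption, not taken from the repo):

    import math

    stack_shape = (30000, 200, 96)   # (T, H, W) of the shared movie
    patch_size = (61, 96, 96)
    patch_interval = (1, 48, 48)

    # Sliding-window positions per axis (ceil so that the edge is still covered).
    n_patches = [
        math.ceil((s - p) / i) + 1 if s > p else 1
        for s, p, i in zip(stack_shape, patch_size, patch_interval)
    ]
    print(n_patches)  # -> [29940, 4, 1] under these assumptions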

@ZyHUpenn
Author

Thank you very much for trying our data and helping us figure out the problem! The result is good! Can I ask a question that may be silly: did you extract the subthreshold signal? I'm slightly confused about what you mean by regular fluctuations in the subthreshold region. Can I understand it as follows: the frequency and strength of those fluctuations are consistent in the subthreshold region, so you think they are not noise, since noise should be independent?

@SteveJayH
Member

Glad to hear that the result is good!

To answer your question,

  1. Regarding the visualization, we plotted (F - F0) / F0, where F0 was calculated as the moving average of the fluorescence signal F. The size of the moving window was set to 200, and we found that this visualization removed some subthreshold signals (see the sketch after this list). We uploaded the fluorescence traces to the Google Drive folder. Please check them! They are just visualizations of the raw.csv and denoised_bs3_150.csv files.
  2. The regular fluctuations I mentioned are the parts indicated by the red arrows, which look like ripples with a period of 5 frames.
    [image: trace with red arrows marking regular subthreshold ripples]
    Yes, the frequency of each ripple is almost the same, so we thought this is not a noise component.
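A minimal sketch of that visualization, assuming the trace is a 1-D NumPy array and using a centered moving average of window 200 for F0 (implementation details beyond the window size are our assumption):

    import numpy as np

    def dff(trace, window=200):
        """(F - F0) / F0 where F0 is a centered moving average of F."""
        kernel = np.ones(window) / window
        f0 = np.convolve(trace, kernel, mode="same")  # edges are only partly averaged
        return (trace - f0) / f0

    # e.g. trace = np.loadtxt("raw.csv", delimiter=",")  # one ROI column of raw.csv
    # plt.plot(dff(trace))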

@ZyHUpenn
Author

Thank you for your explanation! Can I ask what data and model you used for Supplementary Fig. 9? I want to see how SUPPORT works on unseen data.

@EOMMINHO
Member

EOMMINHO commented May 1, 2023

We used the data uploaded at (https://zenodo.org/record/4515768#.ZE9k83ZByUl).
The name of the dataset is L1.03.35.
We have also uploaded the model we used (https://github.com/NICALab/SUPPORT/blob/main/src/GUI/trained_models/L1_generalization.pth).
Please let us know if anything goes wrong.

@ZyHUpenn
Author

Sorry, it's been a long time. We've tried SUPPORT many times, and the results on the zebrafish and mouse cortex data are brilliant. But we still could not get equally good denoising performance on our own data. We checked the movie I shared with you; there may be some regular fluctuations from the sample itself that cannot be denoised. We then used another movie that should have fewer such fluctuations, but found a similar issue. Strangely, when I used the zebrafish model on our second movie, the denoising performance was much better than with the model trained on that movie itself.
So I want to ask: does that mean we haven't trained our model well enough? Can I ask what the final loss of your zebrafish and mouse models was? The attached figure is the trace comparison. Thank you!
[image: trace comparison between the zebrafish model and our own trained model]

@ZyHUpenn
Author

And this is our second movie. I understand that your time is valuable, but if you have some spare time, please try our data. I think there must be some issue in my training process, so please let me know if you find anything. Thank you again!
https://drive.google.com/file/d/1LqoHeDXPeDmdgiM_5Z1ju9lRr7RJDyWg/view?usp=sharing
