
Tools required for conversion of KNIME image analysis workflows to Galaxy #105

Closed · 7 tasks done · rmassei opened this issue Mar 7, 2024 · 35 comments

@rmassei (Contributor) commented Mar 7, 2024:

I made a list of potential tools which could be implemented for image processing and analysis. The list is derived from a comparison of the tools present in Galaxy with some workflows developed with the KNIME Image Processing Extension. Overall, there is good coverage of tools, and it is possible to reproduce most of the workflows with few modifications.

https://galaxyproject.org/news/2024-03-08-hackathon-imaging/#workflow-translation-from-knime-to-galaxy

Binary

  • Fill Holes (fill black holes in binary images)
    SciPy (scipy.ndimage) -> binary_fill_holes

Morphological Image Operations

  • Perform morphological operations (like erode and dilate, or open and close) on images. As far as I saw, it is not possible to perform these single operations in Galaxy (see the sketch after this list):
    - Erode: shrink bright areas.
    - Dilate: grow bright areas.
    - Open: erode followed by dilate. Erases tiny bright spots.
    - Close: dilate followed by erode. Erases tiny dark holes.
    Example: https://imagej.net/ij/plugins/gray-morphology.html
    In general: https://github.com/ijpb/MorphoLibJ/
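A minimal sketch of the listed operations, assuming scikit-image and SciPy (note that `binary_fill_holes` lives in `scipy.ndimage`; the file name and footprint size are placeholders):

```python
# Minimal sketch of the operations listed above, assuming scikit-image and
# SciPy; 'mask.png' and the footprint size are placeholders.
import scipy.ndimage as ndi
import skimage.io
import skimage.morphology as morph

binary = skimage.io.imread('mask.png') > 0            # load as boolean mask

filled = ndi.binary_fill_holes(binary)                # Fill Holes

footprint = morph.disk(3)                             # structuring element
eroded = morph.binary_erosion(binary, footprint)      # Erode: shrink bright areas
dilated = morph.binary_dilation(binary, footprint)    # Dilate: grow bright areas
opened = morph.binary_opening(binary, footprint)      # Open: erase tiny bright spots
closed = morph.binary_closing(binary, footprint)      # Close: erase tiny dark holes
```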

Process

Labelling

Quantification

  • Extract features from the Voronoi-based segmentation

Visualization

  • Overlay the colorized label map on the original image, keeping the same color palette
@kostrykin (Member) commented:

Thanks for your suggestions, @rmassei! This looks very doable and will surely be very useful.

@kostrykin (Member) commented:

The tool added in #106 should cover all use cases you described under Binary, @rmassei. The tool added in #107 covers the functionality of the https://imagej.net/ij/plugins/inverter.html plugin, which you listed under Inverter.

The tool added in #108 performs a Voronoi tessellation, which, judging by the example you gave, is what you mean by Voronoi segmentation. However, I do not see what an input table should be used for. Can you elaborate on that?

The ImageJ plugin suite OrientationJ is a series of multiple plugins. Can you maybe narrow down a bit which functionality exactly is required? Then it will be easier to assess whether writing a Python tool that mimics that functionality is feasible, or whether it might be wiser to aim for a wrapper of the original ImageJ plugin.

@rmassei (Contributor, Author) commented Mar 11, 2024:

Sorry @kostrykin, I just copy-pasted the description from the KNIME node and the table refers to the image input 😄

Voronoi looks good. There is another option in the node to input an image with the "seed regions" before applying the Voronoi segmentation, but I do not know if it makes sense to implement this in Galaxy or whether it can be achieved with other steps:

[image]

Regarding OrientationJ, I have experience using the orientation parameter, which reports the orientation property of the image:

[Screenshot from 2024-03-11 11:21:17]

The orientation can then be used to straighten the image (in this case a -9.101 degree rotation):

[Screenshot from 2024-03-11 11:22:00]
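For reference, a minimal sketch of that straightening step, assuming SciPy and scikit-image (the file name is a placeholder, and the angle would come from the measured orientation):

```python
# Minimal sketch of straightening an image by its measured orientation,
# assuming SciPy and scikit-image; 'input.tif' is a placeholder.
import scipy.ndimage as ndi
import skimage.io

image = skimage.io.imread('input.tif')
angle = -9.101  # degrees, the orientation measured in the example above
straightened = ndi.rotate(image, angle=angle, reshape=True)
```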

I found a bit of theoretical background here:
https://bigwww.epfl.ch/demo/orientation/theoretical-background.pdf
https://bigwww.epfl.ch/demo/orientation/

@kostrykin (Member) commented Mar 11, 2024:

Thanks @rmassei!

So what we now have in voronoi_tessellation is basically your Voronoi segmentation with seeds, yet without the "Image to work on". I can't make much sense of what "Image to work on" could mean in that context, because a Voronoi tessellation is purely about the geometric relations of the seeds. Can you maybe provide some more info?

Moreover, looking at the example images, I'd say there could be a threshold for the Voronoi tessellation, like a maximum distance (or a threshold on the distance transform), but this is just a rough suspicion and nothing I've ever heard of. Can you confirm that, or do you have any further info?

Regarding OrientationJ, thanks for the info. I will look into this and see how feasible it is.

@rmassei (Contributor, Author) commented Mar 11, 2024:

@kostrykin, you are completely right and I overlooked it, sorry for this. There is a background threshold that needs to be set for the Voronoi segmentation. Pixels below that value are considered not to be part of any cell. Moreover, it is also possible to add a "fill holes" post-processing step.

@kostrykin (Member) commented Mar 11, 2024:

Thanks for the clarification! I think we can (almost) imitate this behavior already with the tools currently on board (edit: since #109 it should be exactly imitable).

The first step would be to compute the Voronoi tessellation from your labels. For this you can use the new Voronoi Tessellation tool. The next step would be to compute the 0/1-mask of the foreground (only pixels within this mask will be considered part of a cell). In a third step, we would use the new Process images using arithmetic expressions tool to multiply the Voronoi tessellation with the foreground mask. The result should be what you have been looking for.

For the computation of the foreground mask using intensity thresholding, we have the Threshold Image tool. However, this tool currently only supports automatically determined thresholds (e.g., Otsu). It is a no-brainer to extend that tool to also allow custom threshold values. However, note that this tool labels the foreground with 255, not with 1. So when it comes to the image arithmetic described above, you will also need to divide the mask by a factor of 255 (because a 0/1 mask is what you want).
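Outside Galaxy, the same three steps could be sketched roughly as follows, assuming scikit-image and SciPy (file names are placeholders; the Galaxy tools wrap comparable functionality):

```python
# Rough sketch of the three steps described above, assuming scikit-image
# and SciPy; file names are placeholders.
import numpy as np
import scipy.ndimage as ndi
import skimage.filters
import skimage.io

image = skimage.io.imread('input.tif')   # intensity image
seeds = skimage.io.imread('seeds.tif')   # label map of the seeds (0 = background)

# Step 1: Voronoi tessellation -- assign each pixel the label of its
# nearest seed via the indices of the Euclidean distance transform.
_, indices = ndi.distance_transform_edt(seeds == 0, return_indices=True)
voronoi = seeds[tuple(indices)]

# Step 2: 0/1 foreground mask via intensity thresholding (Otsu here; a
# custom threshold value would work the same way). A tool that labels the
# foreground with 255 would require dividing by 255 afterwards.
mask = (image > skimage.filters.threshold_otsu(image)).astype(np.uint8)

# Step 3: multiply tessellation and mask, so that pixels below the
# threshold are not considered part of any cell.
segmentation = voronoi * mask
```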

Let me know if you agree/disagree! @rmassei

@kostrykin (Member) commented:

> Regarding OrientationJ, I have experience using the orientation parameter, which reports the orientation property of the image

@rmassei We have added OrientationPy, which is the successor of OrientationJ from the same authors, in #110. I think everything is done now?

@rmassei (Contributor, Author) commented Mar 12, 2024:

Hi @kostrykin, thanks a lot for this!
I am actually not able to reproduce the Voronoi segmentation by following the aforementioned steps. I am sure I am overlooking some step or making a mistake somewhere; maybe you have a solution: https://usegalaxy.eu/u/rmassei88/h/voronoitest-1
Was OrientationPy already added to the tools?

@kostrykin (Member) commented:

> I am actually not able to reproduce the Voronoi segmentation by following the aforementioned steps. I am sure I am overlooking some step or making a mistake somewhere; maybe you have a solution: https://usegalaxy.eu/u/rmassei88/h/voronoitest-1

I think the problem is that you piped the filtered image into Convert binary image into label map, not a binary image. Besides, you used the factor 225 in your expression `input1 * (input2) / 225` for the Voronoi tessellation tool. This is supposed to be 255.

> Was OrientationPy already added to the tools?

#109 and #110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :)

@bgruening (Collaborator) commented:

> #109 and #110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :)

Both updates should be available now.

@kostrykin (Member) commented:

@bgruening Hm, Galaxy is complaining that the Python script from #109 is missing:

python: can't open file '/opt/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/imgteam/2d_auto_threshold/7db4fc31dbee/2d_auto_threshold/auto_threshold.py': [Errno 2] No such file or directory

@bgruening (Collaborator) commented:

That happens if I do a manual installation ;)

I forgot a sync. This will work in 5 min.

@rmassei (Contributor, Author) commented Mar 13, 2024:


Hi @kostrykin, I tried to change the workflow according to your suggestions but still cannot achieve a good segmentation; the output is basically the same as the thresholded image (step 7).

@kostrykin (Member) commented:

I've built an example of a Voronoi segmentation workflow based on your explanations:
https://usegalaxy.eu/u/e26918e6b1264c81874871c01e988195/w/voronoi-segmentation

Required inputs:

  • Input image
  • Seeds (a binary image)
  • Threshold
[Screenshot from 2024-03-13 12:06:54]

Indeed, to achieve good segmentation performance, the choice of the seeds is crucial.

And here is an example invocation for your input image, for which I have created the seeds by hand:
https://usegalaxy.eu/u/e26918e6b1264c81874871c01e988195/h/voronoi-segmentation

[Screenshot from 2024-03-13 12:36:49]

I think this looks very much like what you had posted above.

Another pitfall I have noticed while experimenting with this is that our Filter 2D image tool changes the range of image intensity values. This is probably because the data type is changed (from uint8 to something float-like, which actually makes sense). We will have to look into this at some point to see whether this can be made more user-friendly and/or transparent.
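The pitfall can be reproduced directly in scikit-image, which these tools are based on; this is a minimal illustration, not the tool's actual code:

```python
# Minimal illustration of the intensity-range pitfall: filtering a uint8
# image yields a float image rescaled to [0, 1], so thresholds tuned for
# the 0..255 range no longer apply.
import numpy as np
import skimage.filters

img_uint8 = np.array([[0, 128, 255]], dtype=np.uint8)
print(img_uint8.dtype, img_uint8.max())   # uint8 255

smoothed = skimage.filters.gaussian(img_uint8, sigma=1)
print(smoothed.dtype, smoothed.max())     # float64, at most 1.0
```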

@rmassei (Contributor, Author) commented Mar 13, 2024:

Hi!
I tried with the original seeds and the results look pretty neat:
[image]
The small differences are just related to some threshold parameters which need to be tuned... but, overall, it seems possible to reproduce the same behavior!

@kostrykin (Member) commented:

Glad to hear that! What will be the next steps?

I hope to be able to improve the coloring of the Colorize labels tool soon, so that there will be fewer cases of hard-to-distinguish colors like adjacent red and pink or green and teal.

@rmassei (Contributor, Author) commented Mar 14, 2024:

I am now going to put together the whole workflow and find a way to quantitatively compare the two outputs!
I'll keep you posted.

kostrykin changed the title from "Potential tools and feature to implement" to "Tools required for conversion of KNIME image analysis workflows to Galaxy" on Mar 14, 2024
@rmassei (Contributor, Author) commented Apr 23, 2024:

Hi @kostrykin!
Back here after a month :)
I finally managed to rebuild the KNIME workflows in Galaxy with some modularization:

  1. Generic nuclei segmentation plus feature extraction
  2. Voronoi Segmentation plus feature extraction
  3. Full combined workflow

Additionally, I created a workflow to test different manual threshold levels in batch:
4) Testing Manual Thresholds
This was particularly useful to test different threshold levels before performing the Voronoi segmentation.

While workflow 1 is working pretty well, I am still having some problems with workflow 2.

  • After colorizing the labels, I cannot find a good solution for extracting 2D features from the label map and plotting them on the original image. At present, I am doing the following:
    [Screenshot from 2024-04-23 12:24:51]
    but unfortunately the output and the counting do not look so good...
    [image]

It would be nice to have an output like this, if possible:
[Screenshot from 2024-04-23 12:28:15]

@kostrykin (Member) commented:

Thanks @rmassei!

I'm not sure whether I can follow. I understand that you are having issues with Workflow 2 (Voronoi segmentation plus feature extraction). Please clarify:

  1. The first image that you posted, is this the result of the "Overlay images" step in Workflow 2?
  2. Your first issue is that you want it to look like the second image you posted?
  3. Your second issue is that you need to extract image features from the segmented image regions?
  4. What do you mean by "plot them on the original image"? Is this somehow related to my first question?

@rmassei (Contributor, Author) commented Apr 24, 2024:

Sorry, although I was not clear in the explanation, you got all my issues :D

1 and 4) Yes, step 14. I guess the "connected components" approach is not really the best option.
2) Yes, it would be nice to be able to plot the colorized image on the original one, but I am failing at this step.
3) Exactly.

@kostrykin (Member) commented:

Ok, I now have a clearer picture, but can you please explain:

Issue 1: To me, the expected image (the lower one) looks like a blending of the colorized Voronoi segmentation and the original image. Do you have any extra info on how the blending of the Voronoi segmentation and the original image works? I think it looks like a linear combination of the two, but I'm not entirely sure.

Issue 3: Am I right that you intend to plot the extracted image features into the blended image? If so, can you please elaborate on what you mean by plotting, i.e., what kind of plots do you need?

@rmassei (Contributor, Author) commented Apr 24, 2024:

Issue 1: Unfortunately, I do not have further info. I tried to perform a linear blending of the color map and the original image using the overlay tool, but the problem is that the color map is RGB Color and I cannot find a way to convert it to 8-bit RGB before blending.

Issue 3: Sorry, I explained myself badly. The issue is extracting 2D features from the Voronoi segmentation. I tried to run the tool after the arithmetic expression, but I received the following fatal error:

Matplotlib is building the font cache; this may take a moment.
Traceback (most recent call last):
  File "/opt/galaxy/shed_tools/toolshed.g2.bx.psu.edu/repos/imgteam/2d_feature_extraction/2436a8807ad1/2d_feature_extraction/2d_feature_extraction.py", line 62, in <module>
    raw_label_image = skimage.io.imread(label_file)
  File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/_io.py", line 62, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/manage_plugins.py", line 214, in call_plugin
    return func(*args, **kwargs)
  File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/skimage/io/_plugins/pil_plugin.py", line 36, in imread
    im = Image.open(f)
  File "/usr/local/tools/_conda/envs/mulled-v1-87cd3d652f9000594ed4460deda0c803a677af5435a69dd3a0d9f10a265c901f/lib/python3.7/site-packages/PIL/Image.py", line 3148, in open
    "cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file <_io.BufferedReader name='/data/dnb09/galaxy_db/files/5/0/8/dataset_50856838-1c7c-4d8f-aff1-c05fa77bcb4b.dat'>

@kostrykin (Member) commented:

OK, I think I mostly got it now.

Can you please provide a history of an example execution of Workflow 2, shared via a link? Or, alternatively, a full set of inputs (input files and values). I will then look into it.

@rmassei (Contributor, Author) commented Apr 24, 2024:

Here is the execution:
https://usegalaxy.eu/u/rmassei88/h/voronoitest

@kostrykin (Member) commented:

Thanks!

A few quick notes regarding Issue 3:

  • Extract image features version 0.14.2+galaxy0 is based on a very old version of scikit-image and has been updated to a newer version just this week (will be available on Galaxy EU presumably by Monday). 8772c79
    • Testing locally, version 0.14 of scikit-image fails to load the input label image.
    • Testing locally, the new version 0.18 successfully loads the input label image.
  • I think that the Intensity image used for Extract image features should be dataset 13, not dataset 1?

I will look into Issue 1 later.

@kostrykin (Member) commented:

Another issue you have:

[screenshot]

This is why your 18: Colorize label map on data 17 looks strange.

@kostrykin (Member) commented Apr 25, 2024:

Regarding Issue 1:

  • Your TIFF image has two frames, but for the overlay you only want to use frame 0. You already used the Convert image format tool to extract frame 1 from the image for the analysis. For the overlay, you need to use the tool again, but this time you will want to extract frame 0.
  • You were right that one problem is that this will be a grayscale image, while 18: Colorize label map on data 17 is an RGB image. The current version of the Overlay images tool expects the number of axes to be the same. You were also right that there currently is no tool for converting grayscale images into RGB — so here comes Add "Convert single-channel to multi-channel image" tool #120 🎉 (see the sketch after this list)
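A minimal sketch of these two steps, assuming NumPy and scikit-image (file names and the blending weight are placeholders; the Galaxy tools wrap comparable functionality):

```python
# Minimal sketch of converting a grayscale frame to RGB and blending it
# with the colorized label map; file names and alpha are placeholders,
# and both images are assumed to have the same height and width.
import numpy as np
import skimage.io
import skimage.util

gray = skimage.io.imread('frame0.tif')                  # grayscale original (frame 0)
labels_rgb = skimage.io.imread('colorized_labels.png')  # RGB colorized label map

# Single-channel to multi-channel: stack the grayscale image three times.
gray_rgb = np.stack([gray] * 3, axis=-1)

# Linear blending of the two RGB images.
alpha = 0.5
blended = skimage.util.img_as_ubyte(
    alpha * skimage.util.img_as_float(labels_rgb)
    + (1 - alpha) * skimage.util.img_as_float(gray_rgb)
)
```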

@rmassei Let me know if you need help plugging it all together.

@rmassei (Contributor, Author) commented Apr 25, 2024:

I cannot find the convert tool, is it already there? :D

@kostrykin (Member) commented Apr 26, 2024:

New tools and tool versions usually become available on Mondays.

Edit: It's there now: usegalaxy-eu/usegalaxy-eu-tools#712

@rmassei (Contributor, Author) commented Apr 29, 2024:

Just received this error:
[Screenshot from 2024-04-29 13:26:59]

@bgruening (Collaborator) commented:

Can you please try again?

@rmassei (Contributor, Author) commented Apr 29, 2024:

Everything worked out:

[image: linear_blending]

@bgruening (Collaborator) commented:

Looks like modern art to me ... is everything working, like everything? :)

@rmassei (Contributor, Author) commented Apr 30, 2024:

Yep, the feature extraction also worked out perfectly 👍
Overall, a background removal step is still missing; it does not seem necessary to improve the image quality here, but maybe it can be useful for other processing workflows?

https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/18_image_filtering/03_background_removal.html

I guess a similar effect can be achieved using the arithmetic expression tool, or am I wrong?

@kostrykin (Member) commented Apr 30, 2024:

> I guess a similar effect can be achieved using the arithmetic expression tool, or am I wrong?

Yes, background removal using a Gaussian filter (see the link you posted) can already be achieved using the tools on board, as we have Gaussian filters and we have division using arithmetic expressions. More specialized techniques like top-hat, rolling ball, or rank filters can be provided without too much effort, so let me know if this is needed.
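For reference, a minimal sketch of the Gaussian-filter approach from the linked notebook, assuming scikit-image (the file name and sigma are placeholders):

```python
# Minimal sketch of background removal by dividing out a Gaussian-smoothed
# background estimate; 'input.tif' and sigma are placeholders.
import skimage.filters
import skimage.io
import skimage.util

image = skimage.util.img_as_float(skimage.io.imread('input.tif'))

background = skimage.filters.gaussian(image, sigma=50)  # coarse background estimate
corrected = image / (background + 1e-8)                 # epsilon avoids division by zero
```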
