Tools required for conversion of KNIME image analysis workflows to Galaxy #105
Thanks for your suggestions, @rmassei! This looks very doable and will surely be very useful.
The tool added in #106 should cover all use cases you described under Binary, @rmassei. The tool added in #107 covers the functionality of the https://imagej.net/ij/plugins/inverter.html plugin which you listed under Inverter. The tool added in #108 performs a Voronoi tessellation, which, judging by the example you gave, is what you mean by Voronoi segmentation. However, I do not see what an input table should be used for. Can you elaborate on that? The ImageJ plugin suite OrientationJ is a series of multiple plugins. Can you maybe narrow down which functionality exactly is required? Then it will be easier to think about whether writing a Python tool which mimics that functionality is feasible, or whether it might be wiser to aim for a wrapper of the original ImageJ plugin.
Sorry @kostrykin, I just copy-pasted the description from the KNIME node, and the table refers to the image input 😄 Voronoi looks good. There is another option in the node to input an image with the "seed regions" before applying the Voronoi segmentation, but I do not know whether it makes sense to implement this in Galaxy or whether it can be achieved with other steps: Regarding OrientationJ, I have experience using the orientation parameter, which reports the orientation property of the image: The orientation can then be used to straighten the image (in this case a -9.101 degree rotation): I found a bit of theoretical background here:
Thanks @rmassei! Moreover, when looking at the example images, I'd say that there could be a threshold for the Voronoi tessellation, like a maximum distance (or thresholding the distance transform), but this is just a rough suspicion and nothing I've ever heard of. Can you maybe confirm that, or do you have any further info? Regarding OrientationJ, thanks for the info, I will look into this and see how feasible it is.
@kostrykin, you are completely right and I overlooked it, sorry for this. There is a background threshold that needs to be set for the Voronoi segmentation. Pixels below that value are considered not to be part of any cell. Moreover, it is also possible to add a "fill holes" post-processing step.
Thanks for the clarification! I think we can put this together from existing tools. The first step would be to compute the Voronoi tessellation from your labels. For this you can use the new Voronoi Tessellation tool. The next step would be to compute the 0/1 mask of the foreground (only pixels within this mask will be considered part of a cell). In a third step, we would use the new Process images using arithmetic expressions tool to multiply the Voronoi tessellation with the foreground mask. The result should be what you have been looking for. For the computation of the foreground mask using intensity thresholding, we have the Threshold Image tool. However, this tool currently only supports thresholding using automatically determined thresholds (e.g., Otsu). It is a no-brainer to extend that tool to also allow custom threshold values. However, note that this tool labels the foreground with 255, not with 1. So when it comes to the image arithmetic described above, you would also need to divide the mask by a factor of 255 (because a 0/1 mask is what you want). Let me know if you agree/disagree! @rmassei
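The three steps described above can be sketched offline with SciPy and scikit-image. This is only a rough illustration of the logic, not the Galaxy tools themselves; the image, seed positions, and Otsu thresholding are made up for the example:

```python
import numpy as np
import scipy.ndimage as ndi
from skimage.filters import threshold_otsu

# Hypothetical inputs: a smooth grayscale image and a label map with two seeds.
rng = np.random.default_rng(0)
image = ndi.gaussian_filter(rng.random((64, 64)), sigma=3)
seeds = np.zeros(image.shape, dtype=int)
seeds[16, 16] = 1
seeds[48, 48] = 2

# Step 1: Voronoi tessellation of the seeds. Each pixel takes the label of
# its nearest seed, obtained via the indices of the distance transform.
_, (iy, ix) = ndi.distance_transform_edt(seeds == 0, return_indices=True)
voronoi = seeds[iy, ix]

# Step 2: 0/1 foreground mask via intensity thresholding (Otsu here; a
# custom threshold value would work the same way).
mask = (image > threshold_otsu(image)).astype(int)

# Step 3: multiply tessellation and mask (the arithmetic-expression step).
# Background pixels are removed from every Voronoi cell.
segmentation = voronoi * mask
```

If the mask comes in as 0/255 rather than 0/1, dividing it by 255 first (as noted above) gives the same result.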
Hi @kostrykin thanks a lot for this! |
I think the problem is that you piped the filtered image into Convert binary image into label map, not a binary image. Besides, you used the factor 225 in your expression instead of 255.
#109 and #110 will be available on Galaxy EU presumably by Monday (maybe earlier if we're lucky). I will then create a small example of Voronoi segmentation for you :) |
@bgruening Hm Galaxy is complaining that the Python script from #109 is missing:
|
that happens if I do manual installation ;) I forgot a sync. This will work in 5min. |
Hi @kostrykin, I tried to change the workflow according to your suggestion but still cannot achieve a good segmentation; the output is basically the same as the thresholded image (step 7).
I've built an example of a Voronoi segmentation workflow based on your explanations: Required inputs:
Indeed, to achieve good segmentation performance, the choice of the seeds is crucial. And here is an example invocation for your input image, for which I have created the seeds by hand: I think this looks very much like what you had posted above. Another pitfall I have noticed while experimenting with this is that our Filter 2D image tool changes the range of image intensity values. This is probably because the data type is changed (from uint8 to something floatish, which actually makes sense). We will have to look into this at some point to see whether this can be made more user-friendly and/or transparent.
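The intensity-range pitfall mentioned above can be reproduced with scikit-image's Gaussian filter, which converts uint8 input to float and rescales it to [0, 1]. This is an assumed analogue of the Filter 2D image tool's behavior, not the tool itself:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.util import img_as_ubyte

# A uint8 image with values spanning 0..255.
image = np.linspace(0, 255, 64, dtype=np.uint8).reshape(8, 8)

# The filter converts to float and rescales intensities to [0, 1],
# so the value range changes even for a tiny sigma.
filtered = gaussian(image, sigma=0.5)

# Converting back restores the familiar 0..255 uint8 range.
restored = img_as_ubyte(filtered.clip(0, 1))
```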
Glad to hear that! What will be the next steps? I hope to be able to improve the coloring of the Colorize labels tool soon, so that there will be fewer cases of hard-to-distinguish colors, like adjacent red and pink or green and teal.
I am now going to try to put together the whole workflow and to find a way to quantitatively compare the two outputs!
Hi @kostrykin!
Additionally, I created a workflow to test different manual threshold levels in batch: While 1 is working pretty well, I am still having some problems with 2.
Thanks @rmassei! I'm not sure whether I can follow. I understand that you are having issues with Workflow 2 (Voronoi segmentation plus feature extraction). Please clarify:
|
Sorry, although I was not clear in the explanation, you got all my issues :D 1 and 4) Yes, step 14. I guess "connected components" is not really the best option
Ok, I now have a clearer picture, but can you please explain: Issue 1: To me, the expected image (the lower one) looks like a blending of the colorized Voronoi segmentation and the original image. Do you have any extra info on how the blending of the Voronoi segmentation and the original image works? I think it looks like a linear combination of the two, but I'm not entirely sure. Issue 3: Am I right that you intend to plot the extracted image features into the blended image? If so, can you please elaborate what you mean by plotting, like what kind of plots do you need? |
Issue 1: Unfortunately, I do not have further info. I tried to perform a linear blending of the color map and the original image using the overlay tool, but the problem is that the color map is RGB color and I cannot find a way to convert it to 8-bit RGB before blending. Issue 3: Sorry, I explained myself badly. The issue is extracting 2D features from the Voronoi segmentation. I tried to run the tool after the arithmetic expression but I received the following fatal error:
|
OK, I think I mostly got it now. Can you please provide a history of an example execution of Workflow 2, shared via a link? Or, alternatively, a full set of inputs (input files and values). I will then look into it. |
Here is the execution:
Thanks! A few quick notes regarding Issue 3:
I will look into Issue 1 later. |
Another issue you have:
This is why your 18: Colorize label map on data 17 looks strange. |
Regarding Issue 1:
@rmassei Let me know if you need help plugging it all together. |
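For Issue 1 (blending the colorized segmentation with the original image), the suspected linear combination and the float-to-8-bit conversion can be sketched in plain NumPy. The array shapes, values, and the weight `alpha` are all made up for the example:

```python
import numpy as np

# Hypothetical inputs: a colorized label map (float RGB in [0, 1]) and a
# grayscale original image (uint8).
colorized = np.random.default_rng(1).random((32, 32, 3))
original = np.random.default_rng(2).integers(0, 256, (32, 32), dtype=np.uint8)

# Bring both images to float RGB in [0, 1] before blending.
original_rgb = np.repeat(original[..., None] / 255.0, 3, axis=-1)

# Linear blending: a convex combination of the two images.
alpha = 0.5
blended = alpha * colorized + (1 - alpha) * original_rgb

# Convert back to 8-bit RGB for display or export.
blended_u8 = (blended * 255).round().astype(np.uint8)
```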
Cannot find the convert tool, is it already there? :D
New tools and tool versions usually become available on Mondays. Edit: It's there now usegalaxy-eu/usegalaxy-eu-tools#712
Can you please try again? |
Looks like modern art to me ... is everything working, like everything? :) |
Yep, also feature extraction worked out perfectly 👍 I guess a similar effect can be achieved by using the arithmetic expression tool, or am I wrong?
Yes, background removal using a Gaussian filter (see the link you posted) can already be achieved using the tools on-board, as we have Gaussian filters and we have division using arithmetic expressions. More specialized techniques like top-hat, rolling ball, or rank filters can be provided without too much effort, so let me know if this is needed. |
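The Gaussian background-removal idea discussed above can be sketched with SciPy: estimate the background with a wide Gaussian, then divide it out, which mirrors chaining a Gaussian filter with a division in the arithmetic-expression tool. The synthetic image, sigma, and illumination model are arbitrary:

```python
import numpy as np
import scipy.ndimage as ndi

# Synthetic image: sparse bright spots on a smooth, uneven background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
background = 0.5 + 0.4 * (xx / 128)                      # illumination gradient
image = background + (rng.random((128, 128)) > 0.995) * 0.5

# Estimate the background with a wide Gaussian and divide it out;
# the epsilon guards against division by zero.
estimate = ndi.gaussian_filter(image, sigma=20)
corrected = image / np.maximum(estimate, 1e-6)
```

After the division, the illumination gradient is largely flattened while the bright spots remain.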
I made a list of potential tools which could be implemented for image processing and analysis. The list is derived from a comparison of the tools present in Galaxy with some workflows developed with the KNIME Image Processing Extension. Overall, there is good coverage of tools, and it is possible to reproduce most of the workflows with few modifications.
https://galaxyproject.org/news/2024-03-08-hackathon-imaging/#workflow-translation-from-knime-to-galaxy
Binary
scipy.ndimage -> binary_fill_holes
Morphological Image Operations
- Erode: Shrink bright areas.
- Dilate: Grow bright areas.
- Open: Erode followed by Dilate. Erases tiny bright spots.
- Close: Dilate followed by Erode. Erases tiny dark holes.
Example: https://imagej.net/ij/plugins/gray-morphology.html
In General: https://github.com/ijpb/MorphoLibJ/
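The four operations listed above can be illustrated with scikit-image on a small synthetic binary image (the footprint size is arbitrary):

```python
import numpy as np
from skimage.morphology import (binary_erosion, binary_dilation,
                                binary_opening, binary_closing, disk)

# A binary image: one large blob, one tiny dark hole, one tiny bright spot.
image = np.zeros((32, 32), dtype=bool)
image[8:24, 8:24] = True       # large bright area
image[15, 15] = False          # tiny dark hole inside it
image[2, 2] = True             # tiny bright spot

footprint = disk(2)
eroded  = binary_erosion(image, footprint)   # shrinks bright areas
dilated = binary_dilation(image, footprint)  # grows bright areas
opened  = binary_opening(image, footprint)   # erases the tiny bright spot
closed  = binary_closing(image, footprint)   # erases the tiny dark hole
```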
Process
Example: https://imagej.net/ij/plugins/inverter.html
Example: https://github.com/Biomedical-Imaging-Group/OrientationJ
Add orientationpy tool #110
Labelling
Examples: https://bioimagebook.github.io/chapters/2-processing/6-transforms/imagej.html
Quantification
Visualization