
Multi-scale Large Image Volume workflow #92

Open · droumis opened this issue Mar 6, 2024 · 13 comments

droumis (Collaborator) commented Mar 6, 2024

⚠️ This issue is related to #87. There, the focus was on handling multi-scale data in the time dimension. In contrast, this issue focuses on multi-scale handling of volumetric images (x, y, z).

Problem:

See #87

Description/Solution/Goals:

See #87 for general motivation. In contrast, the goal of this issue is to focus on multi-scale large image volumes, rather than downscaling in the time dimension.

Potential Methods and Tools to Leverage:

See #87
Also:

  • ipyvolume
  • VTK.js; also see Panel's VTK components: VTK and VTKVolume
  • Neuroglancer + CloudVolume + Igneous stack
    • neuroglancer: WebGL-based viewer for volumetric data
      • works with several data sources; see their documentation on working with Zarr and with in-memory Python arrays.
    • cloudvolume: Python interface to the Neuroglancer Precomputed data format (see the read sketch just below this list).
    • Igneous: Python pipeline for scalable meshing, skeletonizing, downsampling, and management of large 3D images, focused on the Neuroglancer Precomputed format.
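
For orientation, here is a minimal read sketch for the stack above (editor's addition; it assumes the cloud-volume package, anonymous HTTPS access, and the public Kasthuri precomputed layer that also appears in the demo URL later in this thread):

from cloudvolume import CloudVolume

# Open a multi-scale Neuroglancer Precomputed volume; mip=0 is full resolution,
# higher mip values select progressively downsampled scales.
vol = CloudVolume(
    'precomputed://gs://neuroglancer-public-data/kasthuri2011/image',
    mip=0,
    use_https=True,  # anonymous access without GCS credentials
)

# Fetch a small (x, y, z) cutout; the result behaves like a NumPy array
# with a trailing channel axis.
cutout = vol[2048:2304, 2048:2304, 1000:1016]
print(cutout.shape, vol.resolution)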

Tasks:

  1. Evaluate and determine whether to adopt/adapt any aspects of the Neuroglancer + CloudVolume + Igneous stack.
  2. Build a POC example visualizing a medium-sized (multi-GB) multi-scale image volume from local storage.
  3. Build a POC example visualizing a multi-scale image volume from cloud storage.

Use-Cases, Starter Viz Code, and Datasets:

Electron Microscopy (EM):

droumis (Collaborator, Author) commented Mar 6, 2024

Quoted from #87 (comment):

I have a lot of data like this, and I would love to be able to browse it from a jupyter notebook. In fact, there is (to my knowledge) no python solution for browsing this data in an acceptable way. I'd be super excited to try anything you make on some of our datasets.

Hi @d-v-b, we are looking into your EM/large-volume use case and considering what might be possible given constraints. We'd love to hear more about what an 'acceptable' solution entails. Briefly, in your opinion, what are the necessary features of a viewer? What might be missing from neuroglancer? Would something like neuroglancer, but viewable inside a Jupyter notebook, be sufficient?

Also, assuming the FIB-SEM fly dataset that you linked is a good/representative place to start, what other datasets would you recommend looking into? For data that you are working with now, what is the typical size/resolution range of a single 2D slice? 3D volume?

d-v-b commented Mar 10, 2024

about our data

Also, assuming the FIB-SEM fly dataset that you linked is a good/representative place to start, what other datasets would you recommend looking into? For data that you are working with now, what is the typical size/resolution range of a single 2D slice? 3D volume?

The group I work for routinely releases 3D ~isotropic images with ~10k samples in each dimension (so an image size like (15000, 15000, 15000) would not be weird). In addition to the grayscale EM data, we also release segmentation images that have the same dimensions, but use dtypes like uint32 or uint64. The images have grid spacing (resolution) in the range of 2-8 nanometers. We publish these datasets on www.openorganelle.org, in case you want to look at more of them.
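
(For scale, a back-of-envelope sketch added by the editor, not part of the comment above: the uncompressed size of one such volume.)

voxels = 15000 ** 3                                     # ~3.4e12 samples per volume
print(voxels * 1 / 1e12, "TB uncompressed as uint8")    # ≈ 3.4 TB for the grayscale EM data
print(voxels * 8 / 1e12, "TB uncompressed as uint64")   # ≈ 27 TB for a segmentation volume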

desired viewer features

  • off-axis reslicing: because the images are isotropic, we like to view the data off-axis (the biological structures don't care how the sample was oriented in the microscope 🤷 ).
  • shareable views: neuroglancer stores the current state of the viewer in its URL. this means you can share your current view of some data with a collaborator by sending them a link. It also means that you can programmatically create neuroglancer views and send them to people without actually opening neuroglancer. We use both of these features a lot, so I think "viewer can round-trip its state to a JSON / string representation" should be a mandatory feature of any viewing tool for big images.
  • web-based: because neuroglancer is a web page, anyone with a web browser can use it. this means our imaging data is really easy to share, because we don't have to tell people to use conda or pip to install some python program, and then use the CLI, to look at an image. I can see how this would be hard if you are developing a jupyter-embeddable solution, but ideally that solution would use a JS-only component that could be embedded in a stand-alone site.
  • good performance for remote data. neuroglancer has really good performance. it caches and prefetches aggressively, which makes it smooth when data comes from high-latency storage like s3.
  • sane coordinate space manipulation. neuroglancer knows about named axes, and units, and makes it really easy to change the scale / translation of an individual axis of the data. Contrast this with some popular viewers in the python ecosystem that don't show a scale bar and don't support named axes.

honestly I would start by copying the design decisions neuroglancer made, and deviate from that when necessary. it's a really good tool, and I wish more tools in bioimaging copied it!
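
A minimal sketch of the state round-trip described in the "shareable views" bullet above, using functions from the neuroglancer Python package (editor's sketch; the viewer here is empty, whereas a real session would have layers loaded):

import neuroglancer

viewer = neuroglancer.Viewer()             # local, token-served viewer instance
state_dict = viewer.state.to_json()        # viewer state as a plain JSON-able dict
url = neuroglancer.to_url(viewer.state)    # encode the state into a shareable URL
restored = neuroglancer.parse_url(url)     # decode a URL back into a ViewerState
viewer.set_state(restored)                 # apply the restored state to the viewer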

droumis (Collaborator, Author) commented Mar 12, 2024

These are really great points; thanks a lot for the response! Based on your suggestions, we will next evaluate what approach might work best given our constraints and go from there.

droumis (Collaborator, Author) commented Mar 19, 2024

There's plenty left to figure out (notably the bidirectional link with the served viewer state), but in principle, we should be able to leverage Panel to use Neuroglancer in a Jupyter Notebook. Here is a POC:

Code
import panel as pn
import neuroglancer

pn.extension()

class NeuroglancerViewerApp(pn.viewable.Viewer):
    def __init__(self, **params):
        super().__init__(**params)
        self.url_input = pn.widgets.TextInput(placeholder="Enter Neuroglancer URL and click Load", width=650)
        self.load_button = pn.widgets.Button(name="Load", button_type="primary")
        self.load_button.on_click(self.update_view)
        self.load_demo_button = pn.widgets.Button(name="Demo", button_type="warning")
        self.demo_url = 'https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22y%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22z%22:%5B3.0000000000000004e-8%2C%22m%22%5D%7D%2C%22position%22:%5B5029.42333984375%2C6217.5849609375%2C1182.5%5D%2C%22crossSectionScale%22:3.7621853549999242%2C%22projectionOrientation%22:%5B-0.05179581791162491%2C-0.8017329573631287%2C0.0831851214170456%2C-0.5895944833755493%5D%2C%22projectionScale%22:4699.372698097029%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image%22%2C%22tab%22:%22source%22%2C%22name%22:%22original-image%22%7D%2C%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22tab%22:%22source%22%2C%22name%22:%22corrected-image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth%22%2C%22tab%22:%22source%22%2C%22selectedAlpha%22:0.63%2C%22notSelectedAlpha%22:0.14%2C%22segments%22:%5B%223208%22%2C%224901%22%2C%2213%22%2C%224965%22%2C%224651%22%2C%222282%22%2C%223189%22%2C%223758%22%2C%2215%22%2C%224027%22%2C%223228%22%2C%22444%22%2C%223207%22%2C%223224%22%2C%223710%22%5D%2C%22name%22:%22ground_truth%22%7D%5D%2C%22layout%22:%224panel%22%7D'
        self.load_demo_button.on_click(self.load_demo)
        self.iframe = pn.pane.HTML(sizing_mode='stretch_width')
        self.json_pane = pn.pane.JSON({}, name='Parsed URL', height=600, width=400)

        input_layout = pn.Row(self.url_input, self.load_button, self.load_demo_button)

        self.layout = pn.Column(
                        pn.Row(input_layout),
                        pn.Row(self.iframe, self.json_pane),
                        sizing_mode='stretch_both'
        )
        
    def load_demo(self, event):
        self.url_input.value = self.demo_url
        self.load_button.clicks += 1  # programmatically trigger the Load button's click handler

    def update_view(self, event):
        self.iframe.object = f'<iframe src="{self.url_input.value}" width="1000" height="1000"></iframe>'
        self.update_json_pane()

    def update_json_pane(self):
        try:
            parsed_url = neuroglancer.parse_url(self.url_input.value).to_json()
            self.json_pane.object = parsed_url
        except Exception as e:
            self.json_pane.object = {"error": str(e)}
        
    def __panel__(self):
        return self.layout

app = NeuroglancerViewerApp()
app.layout.servable()
[Video attachment: Screen.Recording.2024-03-19.at.2.35.22.PM.mov]

d-v-b commented Mar 19, 2024

that's super cool! is the source code for that demo available? I think lots of people would use this

droumis (Collaborator, Author) commented Mar 19, 2024

Nice, yep the code for this quick demo is in the dropdown above the video.

We welcome any and all feedback. I think getting the JSON panel on the right to stay synchronized with the neuroglancer iframe state is my next priority. Right now it's just parsing the original URL.

droumis (Collaborator, Author) commented Mar 19, 2024

Regarding your comment about being 'web-based': if the use-case were solely limited to someone visiting a website and interacting with a web app, we could probably make things work without the user having to install any Python, via Pyodide. However, neuroglancer seems to have addressed that use-case itself. Given that the unaddressed use-case is use within a Jupyter notebook, I'm thinking that it makes sense to expect our users to be comfortable with Python installation.

droumis (Collaborator, Author) commented Mar 22, 2024

Made some updates. I'm now starting a new viewer instance from Python, so the state of the embedded neuroglancer app can now be kept in sync with other components! For instance, you can see the properties on the right remain updated as I pan the viewer position.

I think this is a pretty promising approach since it allows for two primary workflows. First, it allows anyone with an existing neuroglancer URL to just drop it into the input field and, voila, you have your own viewer based on that URL. Alternatively, someone could start by creating an empty viewer with this app and then programmatically build it up however they want using the app's viewer (e.g. app.viewer; see the usage sketch after the code below).

Code:
import panel as pn
import neuroglancer

pn.extension()

class NeuroglancerViewerApp(pn.viewable.Viewer):
    DEMO_URL = 'https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22y%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22z%22:%5B3.0000000000000004e-8%2C%22m%22%5D%7D%2C%22position%22:%5B5029.42333984375%2C6217.5849609375%2C1182.5%5D%2C%22crossSectionScale%22:3.7621853549999242%2C%22projectionOrientation%22:%5B-0.05179581791162491%2C-0.8017329573631287%2C0.0831851214170456%2C-0.5895944833755493%5D%2C%22projectionScale%22:4699.372698097029%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image%22%2C%22tab%22:%22source%22%2C%22name%22:%22original-image%22%7D%2C%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22tab%22:%22source%22%2C%22name%22:%22corrected-image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth%22%2C%22tab%22:%22source%22%2C%22selectedAlpha%22:0.63%2C%22notSelectedAlpha%22:0.14%2C%22segments%22:%5B%223208%22%2C%224901%22%2C%2213%22%2C%224965%22%2C%224651%22%2C%222282%22%2C%223189%22%2C%223758%22%2C%2215%22%2C%224027%22%2C%223228%22%2C%22444%22%2C%223207%22%2C%223224%22%2C%223710%22%5D%2C%22name%22:%22ground_truth%22%7D%5D%2C%22layout%22:%224panel%22%7D'

    def __init__(self, **params):
        super().__init__(**params)
        self.viewer = neuroglancer.Viewer()
        self._setup_ui_components()
        self._configure_viewer()
        self._setup_callbacks()

    def _setup_ui_components(self):
        self.url_input = pn.widgets.TextInput(
            placeholder="Enter a Neuroglancer URL and click Load", name='Input URL', width=700
        )
        self.load_button = pn.widgets.Button(name="Load", button_type="primary", width=75)
        self.demo_button = pn.widgets.Button(name="Demo", button_type="warning", width=75)
        self.json_pane = pn.pane.JSON({}, theme='light', depth=2, name='Viewer State', height=600, width=400)
        self.shareable_url_pane = pn.pane.Markdown("**Shareable URL:**")
        self.local_url_pane = pn.pane.Markdown("**Local URL:**")
        self.iframe = pn.pane.HTML(sizing_mode='stretch_both', min_height=700, min_width=700)

    def _configure_viewer(self):
        self.update_local_url()
        self.update_iframe_with_local_url()

    def _setup_callbacks(self):
        self.load_button.on_click(self._on_load_button_clicked)
        self.demo_button.on_click(self._on_demo_button_clicked)
        self.viewer.shared_state.add_changed_callback(self._on_viewer_state_changed)

    def _on_demo_button_clicked(self, event):
        self.url_input.value = self.DEMO_URL
        self._load_neuroglancer_state_from_url(self.url_input.value)

    def _on_load_button_clicked(self, event):
        self._load_neuroglancer_state_from_url(self.url_input.value)

    def _load_neuroglancer_state_from_url(self, url):
        try:
            new_state = neuroglancer.parse_url(url)
            self.viewer.set_state(new_state)
        except Exception as e:
            print(f"Error loading Neuroglancer state: {e}")

    def _on_viewer_state_changed(self):
        self.update_shareable_url()
        self.update_json_pane()

    def update_shareable_url(self):
        shareable_url = neuroglancer.to_url(self.viewer.state)
        self.shareable_url_pane.object = self._generate_details_markup("Shareable URL", shareable_url)

    def update_local_url(self):
        self.local_url_pane.object = self._generate_details_markup("Local URL", self.viewer.get_viewer_url())

    def update_iframe_with_local_url(self):
        self.iframe.object = f'<iframe src="{self.viewer.get_viewer_url()}" width="700" height="700"></iframe>'

    def update_json_pane(self):
        self.json_pane.object = self.viewer.state.to_json()

    def _generate_details_markup(self, title, url):
        return f"""
            <details>
                <summary><b>{title}:</b></summary>
                <a href="{url}" target="_blank">{url}</a>
            </details>
        """

    def __panel__(self):
        controls_layout = pn.Column(
            pn.Row(self.demo_button, self.load_button),
            pn.Row(self.url_input))
        links_layout = pn.Column(self.local_url_pane, self.shareable_url_pane)
        return pn.Column(
            controls_layout,
            links_layout,
            pn.Row(self.iframe, self.json_pane))

app = NeuroglancerViewerApp()
app.servable()
[Video attachment: neuroglancer_app.mov]
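
As a usage sketch of the second workflow described above (programmatically building up an initially empty viewer via app.viewer), here is an editor's sketch; the layer source is the same public Kasthuri dataset used in the demo URL:

import neuroglancer

app = NeuroglancerViewerApp()    # starts with an empty embedded viewer
with app.viewer.txn() as s:      # mutate the underlying neuroglancer.Viewer state
    s.layers['em'] = neuroglancer.ImageLayer(
        source='precomputed://gs://neuroglancer-public-data/kasthuri2011/image'
    )
app  # display in the notebook; the iframe and JSON pane reflect the new layer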

Next steps:

  • Make the sizing of the iframe responsive; currently hardcoded.
  • Make the JSON visibility toggleable.
  • Allow for a preexisting local viewer instance as input.

droumis (Collaborator, Author) commented Mar 26, 2024

Here is the revised class with the following updates:

  • Made the sizing of the iframe responsive.
  • Made the JSON visibility toggleable.
  • Allowed for a preexisting local viewer instance as input.

Code
import panel as pn
import neuroglancer

pn.extension()

class NeuroglancerViewerApp(pn.viewable.Viewer):
    """
    A HoloViz Panel app for visualizing and interacting with Neuroglancer viewers
    within a Jupyter Notebook.

    This app supports loading from a parameterized Neuroglancer URL or an existing
    `neuroglancer.viewer.Viewer` instance.
    """
    
    DEMO_URL = 'https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22y%22:%5B6.000000000000001e-9%2C%22m%22%5D%2C%22z%22:%5B3.0000000000000004e-8%2C%22m%22%5D%7D%2C%22position%22:%5B5029.42333984375%2C6217.5849609375%2C1182.5%5D%2C%22crossSectionScale%22:3.7621853549999242%2C%22projectionOrientation%22:%5B-0.05179581791162491%2C-0.8017329573631287%2C0.0831851214170456%2C-0.5895944833755493%5D%2C%22projectionScale%22:4699.372698097029%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image%22%2C%22tab%22:%22source%22%2C%22name%22:%22original-image%22%7D%2C%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected%22%2C%22tab%22:%22source%22%2C%22name%22:%22corrected-image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth%22%2C%22tab%22:%22source%22%2C%22selectedAlpha%22:0.63%2C%22notSelectedAlpha%22:0.14%2C%22segments%22:%5B%223208%22%2C%224901%22%2C%2213%22%2C%224965%22%2C%224651%22%2C%222282%22%2C%223189%22%2C%223758%22%2C%2215%22%2C%224027%22%2C%223228%22%2C%22444%22%2C%223207%22%2C%223224%22%2C%223710%22%5D%2C%22name%22:%22ground_truth%22%7D%5D%2C%22layout%22:%224panel%22%7D'

    def __init__(self, source=None, aspect_ratio=1.5, **params):
        """
        Args:
            source (str or neuroglancer.viewer.Viewer, optional): Source for the initial state of the viewer,
                which can be a URL string or an existing neuroglancer.viewer.Viewer instance.
                If None, a new viewer will be initialized without a predefined state.
            aspect_ratio (float, optional): The width to height ratio for the window-responsive Neuroglancer viewer.
                Default is 1.5.
        """
        super().__init__(**params)

        self.viewer = source if isinstance(source, neuroglancer.viewer.Viewer) else neuroglancer.Viewer()
        self._setup_ui_components(aspect_ratio=aspect_ratio)    
        self._configure_viewer()
        self._setup_callbacks()
        
        # If source is provided and not a Viewer, assume it's a URL
        if source and not isinstance(source, neuroglancer.viewer.Viewer):
            self._initialize_viewer_from_url(source)

    def _initialize_viewer_from_url(self, source: str):
        # load URL state into viewer
        assert isinstance(source, str), "Source must be a URL string"
        self.url_input.value = source
        self._load_state_from_url(source)

    def _setup_ui_components(self, aspect_ratio):
        self.url_input = pn.widgets.TextInput(placeholder="Enter a Neuroglancer URL and click Load", name='Input URL', width=700)
        self.load_button = pn.widgets.Button(name="Load", button_type="primary", width=75)
        self.demo_button = pn.widgets.Button(name="Demo", button_type="warning", width=75)
        self.json_pane = pn.pane.JSON({}, theme='light', depth=2, name='Viewer State', height=600, width=400)
        self.shareable_url_pane = pn.pane.Markdown("**Shareable URL:**")
        self.local_url_pane = pn.pane.Markdown("**Local URL:**")
        self.iframe = pn.pane.HTML(sizing_mode='stretch_both', aspect_ratio=aspect_ratio)

    def _configure_viewer(self):
        self._update_local_url()
        self._update_iframe_with_local_url()

    def _setup_callbacks(self):
        self.load_button.on_click(self._on_load_button_clicked)
        self.demo_button.on_click(self._on_demo_button_clicked)
        self.viewer.shared_state.add_changed_callback(self._on_viewer_state_changed)

    def _on_demo_button_clicked(self, event):
        self.url_input.value = self.DEMO_URL
        self._load_state_from_url(self.url_input.value)

    def _on_load_button_clicked(self, event):
        self._load_state_from_url(self.url_input.value)

    def _load_state_from_url(self, url):
        try:
            new_state = self._parse_state_from_url(url)
            self.viewer.set_state(new_state)
        except Exception as e:
            print(f"Error loading Neuroglancer state: {e}")

    def _parse_state_from_url(self, url):
        return neuroglancer.parse_url(url)

    def _on_viewer_state_changed(self):
        self._update_shareable_url()
        self._update_json_pane()

    def _update_shareable_url(self):
        shareable_url = neuroglancer.to_url(self.viewer.state)
        self.shareable_url_pane.object = self._generate_dropdown_markup("Shareable URL", shareable_url)

    def _update_local_url(self):
        self.local_url_pane.object = self._generate_dropdown_markup("Local URL", self.viewer.get_viewer_url())

    def _update_iframe_with_local_url(self):
        iframe_style = 'frameborder="0" scrolling="no" marginheight="0" marginwidth="0" style="width:100%; height:100%; min-width:500px; min-height:500px;"'
        self.iframe.object = f'<iframe src="{self.viewer.get_viewer_url()}" {iframe_style}></iframe>'

    def _update_json_pane(self):
        self.json_pane.object = self.viewer.state.to_json()

    def _generate_dropdown_markup(self, title, url):
        return f"""
            <details>
                <summary><b>{title}:</b></summary>
                <a href="{url}" target="_blank">{url}</a>
            </details>
        """

    def __panel__(self):
        controls_layout = pn.Column(
            pn.Row(self.demo_button, self.load_button),
            pn.Row(self.url_input))
        links_layout = pn.Column(self.local_url_pane, self.shareable_url_pane)
        return pn.Column(
            controls_layout,
            links_layout,
            pn.FlexBox(self.iframe, pn.Card(self.json_pane, title='State', collapsed=True)))

app = NeuroglancerViewerApp()
app
[Video attachment: Screen.Recording.2024-03-26.at.3.16.59.PM.mov]
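
A usage sketch of the new source parameter accepting a preexisting viewer instance (editor's sketch, again using the public Kasthuri layer as a stand-in source):

import neuroglancer

viewer = neuroglancer.Viewer()
with viewer.txn() as s:
    s.layers['image'] = neuroglancer.ImageLayer(
        source='precomputed://gs://neuroglancer-public-data/kasthuri2011/image'
    )

app = NeuroglancerViewerApp(source=viewer)  # wrap the existing viewer in the Panel app
app  # display in the notebook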

Next steps:

  • I think I'm basically done with the proof of concept until I receive any further requests, so I'll soon go ahead and make a PR on this repo to formalize a workflow.
  • I may also make a PR on the neuroglancer repo. For instance, it would be nice if a user could just run viewer.app or viewer.display and get this app in the notebook without running anything else.

d-v-b commented Mar 31, 2024

We welcome any and all feedback. I think getting the JSON panel on the right to stay synchronized with the neuroglancer iframe state is my next priority. Right now it's just parsing the original URL

I haven't tried this yet but i'm super excited to, and thanks for putting this demo together! When I have feedback I will post it here.

d-v-b commented Dec 3, 2024

@droumis this is great, and I would love to package this work in a stand-alone panel extension. what's the best way to do that?

I created this repo https://github.com/d-v-b/panel-neuroglancer with the appropriate name; is there an org I can transfer ownership to so that things are centralized?

After we have some basic python packaging in place, would you be willing to submit the code contained in https://github.com/holoviz-topics/neuro/blob/main/workflows/neuroglancer_notebook/neuroglancer-nb-workflow.ipynb in a PR against panel-neuroglancer? I would like to ensure that you get credit for this :)

droumis (Collaborator, Author) commented Dec 3, 2024

Hi @d-v-b, thanks for getting that started! Sounds good, let's get it transferred to the panel-extensions org. I'll send you an invite, and then I'll open a PR to submit the code.

I was actually just about to wrap up a HoloViz example PR with a slightly updated version of this code. I agree that having it as an easily installable package makes sense (as does @philippjfr) and I will update the HoloViz example accordingly once panel-neuroglancer is released.

d-v-b commented Dec 3, 2024

thanks for the invite, the repo is now part of panel-extensions
