NGIO is a Python library to streamline OME-Zarr image analysis workflows.
Main Goals:
To get started, check out the Getting Started guide.
"},{"location":"#ngio-is-under-active-development","title":"\ud83d\udea7 Ngio is Under active Development \ud83d\udea7","text":""},{"location":"#roadmap","title":"Roadmap","text":"Feature Status ETA Description Metadata Handling \u2705 Read, Write, Validate OME-Zarr Metadata (0.4 supported, 0.5 ready) OME-Zarr Validation \u2705 Validate OME-Zarr files for compliance with the OME-Zarr Specification + Compliance between Metadata and Data Base Image Handling \u2705 Load data from OME-Zarr files, retrieve basic metadata, and write data ROI Handling \u2705 Common ROI models Label Handling \u2705 Mid-September Based on Image Handling Table Validation \u2705 Mid-September Validate Table fractal V1 + Compliance between Metadata and Data Table Handling \u2705 Mid-September Read, Write ROI, Features, and Masked Tables Basic Iterators Ongoing End-September Read and Write Iterators for common access patterns Base Documentation \u2705 End-September API Documentation and Examples Beta Ready Testing \u2705 End-September Beta Testing; Library is ready for testing, but the API is not stable Streaming from Fractal ONGOING December Ngio can stream ome-zarr from fractal Mask Iterators Ongoing Early 2025 Iterators over Masked Tables Advanced Iterators Not started mid-2025 Iterators for advanced access patterns Parallel Iterators Not started mid-2025 Concurrent Iterators for parallel read and write Full Documentation Not started 2025 Complete Documentation Release 1.0 (Commitment to API) Not started 2025 API is stable; breaking changes will be avoided"},{"location":"#contributors","title":"Contributors","text":"ngio
is developed at the BioVisionCenter at the University of Zurich. The main contributors are: @lorenzocerrone, @jluethi.
ngio
is released according to the BSD-3-Clause License. See LICENSE
Warning
The library is still in development and is not yet stable. The API is subject to change, and bugs and breaking changes are expected.
Warning
The documentation is still under development. It is not yet complete and may contain errors and inaccuracies.
"},{"location":"getting-started/#installation","title":"Installation","text":"The library can be installed from PyPI using pip:
pip install \"ngio[core]\"\n
The core extra installs the zarr-python dependency. As of now, zarr-python is required to be installed separately (via this extra), due to the transition to the new zarr-v3 library.
ngio
API Overview","text":"ngio
implements an abstract object base API for handling OME-Zarr files. The three main objects are NgffImage
, Image
(Label
), and ROITables
.
NgffImage
is the main entry point to the library. It is used to open an OME-Zarr image and manage its metadata. This object cannot be used to access the data directly, but it can be used to access and create the Image, Label, and Tables objects. Moreover, it can be used to derive new Ngff images based on the current one. Image and Label are used to access \"ImageLike\" objects. They are the main objects used to access the data in the OME-Zarr file, manage the metadata, and write data. ROITables can be used to access specific regions of interest in the image. They are tightly integrated with the Image and Label objects. Currently, the library is not yet stable. However, you can see some example usage in our demo notebooks.
ngio.core
","text":"Core classes for the ngio library.
"},{"location":"api/core/#ngffimage","title":"NGFFImage","text":""},{"location":"api/core/#ngio.core.NgffImage","title":"ngio.core.NgffImage
","text":"A class to handle OME-NGFF images.
Source code in ngio/core/ngff_image.py
class NgffImage:\n \"\"\"A class to handle OME-NGFF images.\"\"\"\n\n def __init__(\n self, store: StoreLike, cache: bool = False, mode: AccessModeLiteral = \"r+\"\n ) -> None:\n \"\"\"Initialize the NGFFImage in read mode.\"\"\"\n self.store = store\n self._mode = mode\n self._group = open_group_wrapper(store=store, mode=self._mode)\n\n if self._group.read_only:\n self._mode = \"r\"\n\n self._image_meta = get_ngff_image_meta_handler(\n self._group, meta_mode=\"image\", cache=cache\n )\n self._metadata_cache = cache\n self.tables = TableGroup(self._group, mode=self._mode)\n self.labels = LabelGroup(\n self._group, image_ref=self.get_image(), mode=self._mode\n )\n\n ngio_logger.info(f\"Opened image located in store: {store}\")\n ngio_logger.info(f\"- Image number of levels: {self.num_levels}\")\n\n def __repr__(self) -> str:\n \"\"\"Get the string representation of the image.\"\"\"\n name = \"NGFFImage(\"\n len_name = len(name)\n return (\n f\"{name}\"\n f\"group_path={self.group_path}, \\n\"\n f\"{' ':>{len_name}}paths={self.levels_paths}, \\n\"\n f\"{' ':>{len_name}}labels={self.labels.list()}, \\n\"\n f\"{' ':>{len_name}}tables={self.tables.list()}, \\n\"\n \")\"\n )\n\n @property\n def group(self) -> zarr.Group:\n \"\"\"Get the group of the image.\"\"\"\n return self._group\n\n @property\n def root_path(self) -> str:\n \"\"\"Get the root path of the image.\"\"\"\n return str(self._group.store.path)\n\n @property\n def group_path(self) -> str:\n \"\"\"Get the path of the group.\"\"\"\n root = self.root_path\n if root.endswith(\"/\"):\n root = root[:-1]\n return f\"{root}/{self._group.path}\"\n\n @property\n def image_meta(self) -> ImageMeta:\n \"\"\"Get the image metadata.\"\"\"\n meta = self._image_meta.load_meta()\n assert isinstance(meta, ImageMeta)\n return meta\n\n @property\n def num_levels(self) -> int:\n \"\"\"Get the number of levels in the image.\"\"\"\n return self.image_meta.num_levels\n\n @property\n def levels_paths(self) -> list[str]:\n \"\"\"Get 
the paths of the levels in the image.\"\"\"\n return self.image_meta.levels_paths\n\n def get_image(\n self,\n *,\n path: str | None = None,\n pixel_size: PixelSize | None = None,\n highest_resolution: bool = True,\n ) -> Image:\n \"\"\"Get an image handler for the given level.\n\n Args:\n path (str | None, optional): The path to the level.\n pixel_size (tuple[float, ...] | list[float] | None, optional): The pixel\n size of the level.\n highest_resolution (bool, optional): Whether to get the highest\n resolution level\n\n Returns:\n ImageHandler: The image handler.\n \"\"\"\n if path is not None or pixel_size is not None:\n highest_resolution = False\n\n image = Image(\n store=self._group,\n path=path,\n pixel_size=pixel_size,\n highest_resolution=highest_resolution,\n label_group=LabelGroup(self._group, image_ref=None, mode=self._mode),\n cache=self._metadata_cache,\n mode=self._mode,\n )\n ngio_logger.info(f\"Opened image at path: {image.path}\")\n ngio_logger.info(f\"- {image.dimensions}\")\n ngio_logger.info(f\"- {image.pixel_size}\")\n return image\n\n def _compute_percentiles(\n self, start_percentile: float, end_percentile: float\n ) -> tuple[list[float], list[float]]:\n \"\"\"Compute the percentiles for the window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n\n \"\"\"\n meta = self.image_meta\n\n lowest_res_image = self.get_image(highest_resolution=True)\n lowest_res_shape = lowest_res_image.shape\n for path in self.levels_paths:\n image = self.get_image(path=path)\n if np.prod(image.shape) < np.prod(lowest_res_shape):\n lowest_res_shape = image.shape\n lowest_res_image = image\n\n num_c = lowest_res_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. 
\" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n starts, ends = [], []\n for c in range(num_c):\n data = lowest_res_image.get_array(c=c, mode=\"dask\").ravel()\n _start_percentile, _end_percentile = da.percentile(\n data, [start_percentile, end_percentile], method=\"nearest\"\n ).compute()\n\n starts.append(_start_percentile)\n ends.append(_end_percentile)\n\n return starts, ends\n\n def lazy_init_omero(\n self,\n labels: list[str] | int | None = None,\n wavelength_ids: list[str] | None = None,\n colors: list[str] | None = None,\n active: list[bool] | None = None,\n start_percentile: float | None = 1,\n end_percentile: float | None = 99,\n data_type: Any = np.uint16,\n consolidate: bool = True,\n ) -> None:\n \"\"\"Set the OMERO metadata for the image.\n\n Args:\n labels (list[str] | int | None): The labels of the channels.\n wavelength_ids (list[str] | None): The wavelengths of the channels.\n colors (list[str] | None): The colors of the channels.\n active (list[bool] | None): Whether the channels are active.\n start_percentile (float | None): The start percentile for computing the data\n range. If None, the start is the same as the min value of the data type.\n end_percentile (float | None): The end percentile for for computing the data\n range. 
If None, the start is the same as the max value of the data type.\n data_type (Any): The data type of the image.\n consolidate (bool): Whether to consolidate the metadata.\n \"\"\"\n if labels is None:\n ref = self.get_image()\n labels = ref.num_channels\n\n if start_percentile is not None and end_percentile is not None:\n start, end = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n elif start_percentile is None and end_percentile is None:\n raise ValueError(\"Both start and end percentiles cannot be None.\")\n elif end_percentile is None and start_percentile is not None:\n raise ValueError(\n \"End percentile cannot be None if start percentile is not.\"\n )\n else:\n start, end = None, None\n\n self.image_meta.lazy_init_omero(\n labels=labels,\n wavelength_ids=wavelength_ids,\n colors=colors,\n start=start,\n end=end,\n active=active,\n data_type=data_type,\n )\n\n if consolidate:\n self._image_meta.write_meta(self.image_meta)\n\n def update_omero_window(\n self,\n start_percentile: int = 1,\n end_percentile: int = 99,\n min_value: int | float | None = None,\n max_value: int | float | None = None,\n ) -> None:\n \"\"\"Update the OMERO window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n min_value (int | float | None): The minimum value of the window.\n max_value (int | float | None): The maximum value of the window.\n\n \"\"\"\n start, ends = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n meta = self.image_meta\n ref_image = self.get_image()\n\n for func in [np.iinfo, np.finfo]:\n try:\n type_max = func(ref_image.on_disk_array.dtype).max\n type_min = func(ref_image.on_disk_array.dtype).min\n break\n except ValueError:\n continue\n else:\n raise ValueError(\"Data type not recognized.\")\n\n if min_value is None:\n min_value = 
type_min\n if max_value is None:\n max_value = type_max\n\n num_c = ref_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. \" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n if len(channel_list) != len(start):\n raise ValueError(\"The number of channels does not match the image.\")\n\n for c, (channel, s, e) in enumerate(\n zip(channel_list, start, ends, strict=True)\n ):\n channel.channel_visualisation.start = s\n channel.channel_visualisation.end = e\n channel.channel_visualisation.min = min_value\n channel.channel_visualisation.max = max_value\n\n ngio_logger.info(\n f\"Updated window for channel {channel.label}. \"\n f\"Start: {start_percentile}, End: {end_percentile}\"\n )\n meta.omero.channels[c] = channel\n\n self._image_meta.write_meta(meta)\n\n def derive_new_image(\n self,\n store: StoreLike,\n name: str,\n overwrite: bool = True,\n **kwargs: dict,\n ) -> \"NgffImage\":\n \"\"\"Derive a new image from the current image.\n\n Args:\n store (StoreLike): The store to create the new image in.\n name (str): The name of the new image.\n overwrite (bool): Whether to overwrite the image if it exists\n **kwargs: Additional keyword arguments.\n Follow the same signature as `create_empty_ome_zarr_image`.\n\n Returns:\n NgffImage: The new image.\n \"\"\"\n image_0 = self.get_image(highest_resolution=True)\n\n # Get the channel information if it exists\n omero = self.image_meta.omero\n if omero is not None:\n channels = omero.channels\n omero_kwargs = omero.extra_fields\n else:\n channels = []\n omero_kwargs = {}\n\n default_kwargs = {\n \"store\": store,\n \"on_disk_shape\": image_0.on_disk_shape,\n \"chunks\": image_0.on_disk_array.chunks,\n \"dtype\": image_0.on_disk_array.dtype,\n \"on_disk_axis\": image_0.dataset.on_disk_axes_names,\n \"pixel_sizes\": 
image_0.pixel_size,\n \"xy_scaling_factor\": self.image_meta.xy_scaling_factor,\n \"z_scaling_factor\": self.image_meta.z_scaling_factor,\n \"time_spacing\": image_0.dataset.time_spacing,\n \"time_units\": image_0.dataset.time_axis_unit,\n \"levels\": self.num_levels,\n \"name\": name,\n \"channel_labels\": image_0.channel_labels,\n \"channel_wavelengths\": [ch.wavelength_id for ch in channels],\n \"channel_visualization\": [ch.channel_visualisation for ch in channels],\n \"omero_kwargs\": omero_kwargs,\n \"overwrite\": overwrite,\n \"version\": self.image_meta.version,\n }\n\n default_kwargs.update(kwargs)\n\n create_empty_ome_zarr_image(\n **default_kwargs,\n )\n return NgffImage(store=store)\n
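The dtype-range lookup inside update_omero_window above (try np.iinfo for integer dtypes, then fall back to np.finfo for floats) is a reusable pattern; a standalone sketch, not the ngio code itself:

```python
import numpy as np

def dtype_range(dtype):
    """Return (min, max) for an integer or floating dtype.

    Mirrors the iinfo/finfo fallback loop used in update_omero_window:
    np.iinfo raises ValueError for float dtypes, np.finfo for int dtypes.
    """
    for func in (np.iinfo, np.finfo):
        try:
            info = func(dtype)
            return info.min, info.max
        except ValueError:
            continue
    raise ValueError("Data type not recognized.")
```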
"},{"location":"api/core/#ngio.core.NgffImage.group","title":"group: zarr.Group
property
","text":"Get the group of the image.
"},{"location":"api/core/#ngio.core.NgffImage.group_path","title":"group_path: str
property
","text":"Get the path of the group.
"},{"location":"api/core/#ngio.core.NgffImage.image_meta","title":"image_meta: ImageMeta
property
","text":"Get the image metadata.
"},{"location":"api/core/#ngio.core.NgffImage.levels_paths","title":"levels_paths: list[str]
property
","text":"Get the paths of the levels in the image.
"},{"location":"api/core/#ngio.core.NgffImage.num_levels","title":"num_levels: int
property
","text":"Get the number of levels in the image.
"},{"location":"api/core/#ngio.core.NgffImage.root_path","title":"root_path: str
property
","text":"Get the root path of the image.
"},{"location":"api/core/#ngio.core.NgffImage.__init__","title":"__init__(store: StoreLike, cache: bool = False, mode: AccessModeLiteral = 'r+') -> None
","text":"Initialize the NGFFImage in read mode.
Source code in ngio/core/ngff_image.py
def __init__(\n self, store: StoreLike, cache: bool = False, mode: AccessModeLiteral = \"r+\"\n) -> None:\n \"\"\"Initialize the NGFFImage in read mode.\"\"\"\n self.store = store\n self._mode = mode\n self._group = open_group_wrapper(store=store, mode=self._mode)\n\n if self._group.read_only:\n self._mode = \"r\"\n\n self._image_meta = get_ngff_image_meta_handler(\n self._group, meta_mode=\"image\", cache=cache\n )\n self._metadata_cache = cache\n self.tables = TableGroup(self._group, mode=self._mode)\n self.labels = LabelGroup(\n self._group, image_ref=self.get_image(), mode=self._mode\n )\n\n ngio_logger.info(f\"Opened image located in store: {store}\")\n ngio_logger.info(f\"- Image number of levels: {self.num_levels}\")\n
"},{"location":"api/core/#ngio.core.NgffImage.__repr__","title":"__repr__() -> str
","text":"Get the string representation of the image.
Source code in ngio/core/ngff_image.py
def __repr__(self) -> str:\n \"\"\"Get the string representation of the image.\"\"\"\n name = \"NGFFImage(\"\n len_name = len(name)\n return (\n f\"{name}\"\n f\"group_path={self.group_path}, \\n\"\n f\"{' ':>{len_name}}paths={self.levels_paths}, \\n\"\n f\"{' ':>{len_name}}labels={self.labels.list()}, \\n\"\n f\"{' ':>{len_name}}tables={self.tables.list()}, \\n\"\n \")\"\n )\n
"},{"location":"api/core/#ngio.core.NgffImage.derive_new_image","title":"derive_new_image(store: StoreLike, name: str, overwrite: bool = True, **kwargs: dict) -> NgffImage
","text":"Derive a new image from the current image.
Parameters:
store
(StoreLike
) \u2013 The store to create the new image in.
name
(str
) \u2013 The name of the new image.
overwrite
(bool
, default: True
) \u2013 Whether to overwrite the image if it exists
**kwargs
(dict
, default: {}
) \u2013 Additional keyword arguments. Follow the same signature as create_empty_ome_zarr_image
.
Returns:
NgffImage
( NgffImage
) \u2013 The new image.
ngio/core/ngff_image.py
def derive_new_image(\n self,\n store: StoreLike,\n name: str,\n overwrite: bool = True,\n **kwargs: dict,\n) -> \"NgffImage\":\n \"\"\"Derive a new image from the current image.\n\n Args:\n store (StoreLike): The store to create the new image in.\n name (str): The name of the new image.\n overwrite (bool): Whether to overwrite the image if it exists\n **kwargs: Additional keyword arguments.\n Follow the same signature as `create_empty_ome_zarr_image`.\n\n Returns:\n NgffImage: The new image.\n \"\"\"\n image_0 = self.get_image(highest_resolution=True)\n\n # Get the channel information if it exists\n omero = self.image_meta.omero\n if omero is not None:\n channels = omero.channels\n omero_kwargs = omero.extra_fields\n else:\n channels = []\n omero_kwargs = {}\n\n default_kwargs = {\n \"store\": store,\n \"on_disk_shape\": image_0.on_disk_shape,\n \"chunks\": image_0.on_disk_array.chunks,\n \"dtype\": image_0.on_disk_array.dtype,\n \"on_disk_axis\": image_0.dataset.on_disk_axes_names,\n \"pixel_sizes\": image_0.pixel_size,\n \"xy_scaling_factor\": self.image_meta.xy_scaling_factor,\n \"z_scaling_factor\": self.image_meta.z_scaling_factor,\n \"time_spacing\": image_0.dataset.time_spacing,\n \"time_units\": image_0.dataset.time_axis_unit,\n \"levels\": self.num_levels,\n \"name\": name,\n \"channel_labels\": image_0.channel_labels,\n \"channel_wavelengths\": [ch.wavelength_id for ch in channels],\n \"channel_visualization\": [ch.channel_visualisation for ch in channels],\n \"omero_kwargs\": omero_kwargs,\n \"overwrite\": overwrite,\n \"version\": self.image_meta.version,\n }\n\n default_kwargs.update(kwargs)\n\n create_empty_ome_zarr_image(\n **default_kwargs,\n )\n return NgffImage(store=store)\n
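The keyword-argument handling in derive_new_image (build a dict of defaults from the source image, then let caller-supplied kwargs override them via `default_kwargs.update(kwargs)`) can be sketched in isolation. `derive_kwargs` and its fields here are hypothetical stand-ins, not part of the ngio API:

```python
def derive_kwargs(source_shape, source_dtype, **overrides):
    """Collect defaults from a source image, then let caller overrides win."""
    default_kwargs = {
        "on_disk_shape": source_shape,
        "dtype": source_dtype,
        "overwrite": True,
    }
    # Caller-supplied keyword arguments take precedence over the defaults.
    default_kwargs.update(overrides)
    return default_kwargs

# Deriving a new image with the same shape but a different dtype:
kwargs = derive_kwargs((3, 1, 1080, 1280), "uint16", dtype="float32")
```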
"},{"location":"api/core/#ngio.core.NgffImage.get_image","title":"get_image(*, path: str | None = None, pixel_size: PixelSize | None = None, highest_resolution: bool = True) -> Image
","text":"Get an image handler for the given level.
Parameters:
path
(str | None
, default: None
) \u2013 The path to the level.
pixel_size
(tuple[float, ...] | list[float] | None
, default: None
) \u2013 The pixel size of the level.
highest_resolution
(bool
, default: True
) \u2013 Whether to get the highest resolution level
Returns:
ImageHandler
( Image
) \u2013 The image handler.
ngio/core/ngff_image.py
def get_image(\n self,\n *,\n path: str | None = None,\n pixel_size: PixelSize | None = None,\n highest_resolution: bool = True,\n) -> Image:\n \"\"\"Get an image handler for the given level.\n\n Args:\n path (str | None, optional): The path to the level.\n pixel_size (tuple[float, ...] | list[float] | None, optional): The pixel\n size of the level.\n highest_resolution (bool, optional): Whether to get the highest\n resolution level\n\n Returns:\n ImageHandler: The image handler.\n \"\"\"\n if path is not None or pixel_size is not None:\n highest_resolution = False\n\n image = Image(\n store=self._group,\n path=path,\n pixel_size=pixel_size,\n highest_resolution=highest_resolution,\n label_group=LabelGroup(self._group, image_ref=None, mode=self._mode),\n cache=self._metadata_cache,\n mode=self._mode,\n )\n ngio_logger.info(f\"Opened image at path: {image.path}\")\n ngio_logger.info(f\"- {image.dimensions}\")\n ngio_logger.info(f\"- {image.pixel_size}\")\n return image\n
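The level-selection precedence in get_image (passing an explicit path or pixel_size disables the highest_resolution default) can be mimicked with a small standalone function; `select_level` is a hypothetical helper, not part of ngio:

```python
def select_level(path=None, pixel_size=None, highest_resolution=True):
    """Mirror get_image's precedence: explicit selectors win over the default."""
    if path is not None or pixel_size is not None:
        highest_resolution = False
    if path is not None:
        return f"level {path}"
    if pixel_size is not None:
        return f"level matching {pixel_size}"
    return "highest resolution" if highest_resolution else "unselected"
```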
"},{"location":"api/core/#ngio.core.NgffImage.lazy_init_omero","title":"lazy_init_omero(labels: list[str] | int | None = None, wavelength_ids: list[str] | None = None, colors: list[str] | None = None, active: list[bool] | None = None, start_percentile: float | None = 1, end_percentile: float | None = 99, data_type: Any = np.uint16, consolidate: bool = True) -> None
","text":"Set the OMERO metadata for the image.
Parameters:
labels
(list[str] | int | None
, default: None
) \u2013 The labels of the channels.
wavelength_ids
(list[str] | None
, default: None
) \u2013 The wavelengths of the channels.
colors
(list[str] | None
, default: None
) \u2013 The colors of the channels.
active
(list[bool] | None
, default: None
) \u2013 Whether the channels are active.
start_percentile
(float | None
, default: 1
) \u2013 The start percentile for computing the data range. If None, the start is the same as the min value of the data type.
end_percentile
(float | None
, default: 99
) \u2013 The end percentile for computing the data range. If None, the end is the same as the max value of the data type.
data_type
(Any
, default: uint16
) \u2013 The data type of the image.
consolidate
(bool
, default: True
) \u2013 Whether to consolidate the metadata.
ngio/core/ngff_image.py
def lazy_init_omero(\n self,\n labels: list[str] | int | None = None,\n wavelength_ids: list[str] | None = None,\n colors: list[str] | None = None,\n active: list[bool] | None = None,\n start_percentile: float | None = 1,\n end_percentile: float | None = 99,\n data_type: Any = np.uint16,\n consolidate: bool = True,\n) -> None:\n \"\"\"Set the OMERO metadata for the image.\n\n Args:\n labels (list[str] | int | None): The labels of the channels.\n wavelength_ids (list[str] | None): The wavelengths of the channels.\n colors (list[str] | None): The colors of the channels.\n active (list[bool] | None): Whether the channels are active.\n start_percentile (float | None): The start percentile for computing the data\n range. If None, the start is the same as the min value of the data type.\n end_percentile (float | None): The end percentile for for computing the data\n range. If None, the start is the same as the max value of the data type.\n data_type (Any): The data type of the image.\n consolidate (bool): Whether to consolidate the metadata.\n \"\"\"\n if labels is None:\n ref = self.get_image()\n labels = ref.num_channels\n\n if start_percentile is not None and end_percentile is not None:\n start, end = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n elif start_percentile is None and end_percentile is None:\n raise ValueError(\"Both start and end percentiles cannot be None.\")\n elif end_percentile is None and start_percentile is not None:\n raise ValueError(\n \"End percentile cannot be None if start percentile is not.\"\n )\n else:\n start, end = None, None\n\n self.image_meta.lazy_init_omero(\n labels=labels,\n wavelength_ids=wavelength_ids,\n colors=colors,\n start=start,\n end=end,\n active=active,\n data_type=data_type,\n )\n\n if consolidate:\n self._image_meta.write_meta(self.image_meta)\n
"},{"location":"api/core/#ngio.core.NgffImage.update_omero_window","title":"update_omero_window(start_percentile: int = 1, end_percentile: int = 99, min_value: int | float | None = None, max_value: int | float | None = None) -> None
","text":"Update the OMERO window.
This will set up percentile-based values for the window of each channel.
Parameters:
start_percentile
(int
, default: 1
) \u2013 The start percentile.
end_percentile
(int
, default: 99
) \u2013 The end percentile
min_value
(int | float | None
, default: None
) \u2013 The minimum value of the window.
max_value
(int | float | None
, default: None
) \u2013 The maximum value of the window.
ngio/core/ngff_image.py
def update_omero_window(\n self,\n start_percentile: int = 1,\n end_percentile: int = 99,\n min_value: int | float | None = None,\n max_value: int | float | None = None,\n) -> None:\n \"\"\"Update the OMERO window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n min_value (int | float | None): The minimum value of the window.\n max_value (int | float | None): The maximum value of the window.\n\n \"\"\"\n start, ends = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n meta = self.image_meta\n ref_image = self.get_image()\n\n for func in [np.iinfo, np.finfo]:\n try:\n type_max = func(ref_image.on_disk_array.dtype).max\n type_min = func(ref_image.on_disk_array.dtype).min\n break\n except ValueError:\n continue\n else:\n raise ValueError(\"Data type not recognized.\")\n\n if min_value is None:\n min_value = type_min\n if max_value is None:\n max_value = type_max\n\n num_c = ref_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. \" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n if len(channel_list) != len(start):\n raise ValueError(\"The number of channels does not match the image.\")\n\n for c, (channel, s, e) in enumerate(\n zip(channel_list, start, ends, strict=True)\n ):\n channel.channel_visualisation.start = s\n channel.channel_visualisation.end = e\n channel.channel_visualisation.min = min_value\n channel.channel_visualisation.max = max_value\n\n ngio_logger.info(\n f\"Updated window for channel {channel.label}. \"\n f\"Start: {start_percentile}, End: {end_percentile}\"\n )\n meta.omero.channels[c] = channel\n\n self._image_meta.write_meta(meta)\n
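The percentile windows written by update_omero_window are computed per channel over the lowest-resolution level. A plain-numpy sketch of that computation, assuming a (c, z, y, x) array (ngio itself computes this lazily with dask):

```python
import numpy as np

def compute_windows(data, start_percentile=1, end_percentile=99):
    """Per-channel percentile windows for an array with axes (c, z, y, x)."""
    starts, ends = [], []
    for c in range(data.shape[0]):
        channel = data[c].ravel()
        # "nearest" returns actual data values rather than interpolated ones.
        s, e = np.percentile(
            channel, [start_percentile, end_percentile], method="nearest"
        )
        starts.append(s)
        ends.append(e)
    return starts, ends

rng = np.random.default_rng(0)
data = rng.integers(0, 500, size=(2, 1, 16, 16), dtype=np.uint16)
starts, ends = compute_windows(data)
```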
"},{"location":"notebooks/basic_usage/","title":"OME-Zarr Image Exploration","text":"In\u00a0[1]: from ngio.core import NgffImage\n\n# Ngio can stream data from any fsspec-compatible store\npath = \"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0\"\nngff_image = NgffImage(path, mode=\"r\")
The ngff_image
object provides a high-level interface to read, write and manipulate OME-Zarr images.
Printing the image will show some overview information, like:
print(ngff_image)\n
NGFFImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n paths=['0', '1', '2', '3', '4'], \n labels=['nuclei', 'wf_2_labels', 'wf_3_labels', 'wf_4_labels'], \n tables=['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4'], \n)\n
From the NgffImage
object we can easily access the image data (at any resolution level), the labels, and the tables.
Get a single level of the image pyramid as an Image (to learn more about the Image class, please refer to the Image notebook). The Image object is the main object used to interact with the image. It contains methods to interact with the image data and metadata.
from ngio.ngff_meta import PixelSize\n\n# 1. Get image from highest resolution (default)\nimage = ngff_image.get_image()\nprint(image)\n\n# 2. Get image from a specific level using the path keyword\nimage = ngff_image.get_image(path=\"1\")\nprint(image)\n\n# 3. Get image from a specific pixel size using the pixel_size keyword\nimage = ngff_image.get_image(pixel_size=PixelSize(x=0.65, y=0.65, z=1))\nprint(image)\n
Image(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=0,\n PixelSize(x=0.1625, y=0.1625, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=4320, x=5120),\n)\nImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=1,\n PixelSize(x=0.325, y=0.325, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=2160, x=2560),\n)\nImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=2,\n PixelSize(x=0.65, y=0.65, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=1080, x=1280),\n)\n
The Image
object provides a high-level interface to read and write image data at a specific resolution level.
print(\"Shape\", image.shape)\nprint(\"Axes\", image.axes_names)\nprint(\"PixelSize\", image.pixel_size)\nprint(\"Dimensions\", image.dimensions)\nprint(\"Channel Names\", image.channel_labels)\n
Shape (3, 1, 1080, 1280)\nAxes ['c', 'z', 'y', 'x']\nPixelSize PixelSize(x=0.65, y=0.65, z=1.0, unit=micrometer)\nDimensions Dimensions(c=3, z=1, y=1080, x=1280)\nChannel Names ['DAPI', 'nanog', 'Lamin B1']\nIn\u00a0[5]:
# Get data as a numpy array or a dask array\ndata = image.get_array(c=0, mode=\"numpy\")\nprint(data)\n\ndask_data = image.get_array(c=0, mode=\"dask\")\ndask_data\n
[[[280 258 223 ... 252 248 274]\n [288 284 269 ... 206 236 259]\n [334 358 351 ... 212 240 259]\n ...\n [ 70 87 102 ... 2 5 4]\n [186 184 209 ... 2 4 4]\n [209 220 223 ... 4 1 7]]]\nOut[5]: Array Chunk Bytes 2.64 MiB 2.64 MiB Shape (1, 1080, 1280) (1, 1080, 1280) Dask graph 1 chunks in 3 graph layers Data type uint16 numpy.ndarray
ngio's design is to always provide the data in a canonical axis order (t, c, z, y, x), no matter what the order is on disk. The Image object provides methods to access the data in this order. If you want to access data or metadata in the on-disk order, you can do so by using the on_disk_{method_name} methods.
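The canonical-order convention can be illustrated with plain numpy: transpose whatever axes exist on disk into (t, c, z, y, x) order. `to_canonical` here is a hypothetical helper, not ngio's internal implementation:

```python
import numpy as np

CANONICAL = ("t", "c", "z", "y", "x")

def to_canonical(array, on_disk_axes):
    """Transpose an on-disk array into canonical (t, c, z, y, x) order.

    Only the axes actually present on disk are reordered.
    """
    target = [ax for ax in CANONICAL if ax in on_disk_axes]
    order = [on_disk_axes.index(ax) for ax in target]
    return np.transpose(array, order)

# e.g. an image stored as (z, c, y, x) on disk
arr = np.zeros((1, 3, 1080, 1280), dtype=np.uint16)
canonical = to_canonical(arr, ["z", "c", "y", "x"])
```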
print(\"On-disk shape\", image.on_disk_shape)\nprint(\"On-disk array\", image.on_disk_array)\nprint(\"On-disk dask array\", image.on_disk_dask_array)\n
On-disk shape (3, 1, 1080, 1280)\nOn-disk array <zarr.core.Array '/2' (3, 1, 1080, 1280) uint16>\nOn-disk dask array dask.array<from-zarr, shape=(3, 1, 1080, 1280), dtype=uint16, chunksize=(1, 1, 1080, 1280), chunktype=numpy.ndarray>\nIn\u00a0[7]:
print(\"List of Labels: \", ngff_image.labels.list())\n\nlabel_nuclei = ngff_image.labels.get_label(\"nuclei\", path=\"0\")\nprint(label_nuclei)\n
List of Labels: ['nuclei', 'wf_2_labels', 'wf_3_labels', 'wf_4_labels']\nLabel(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/labels/nuclei, \n path=0,\n name=nuclei,\n PixelSize(x=0.1625, y=0.1625, z=1.0, unit=micrometer),\n Dimensions(z=1, y=4320, x=5120),\n)\nIn\u00a0[8]:
print(\"List of Tables: \", ngff_image.tables.list())\nprint(\" - Feature tables: \", ngff_image.tables.list(table_type='feature_table'))\nprint(\" - Roi tables: \", ngff_image.tables.list(table_type='roi_table'))\nprint(\" - Masking Roi tables: \", ngff_image.tables.list(table_type='masking_roi_table'))
List of Tables: ['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\n - Feature tables: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\n - Roi tables: ['FOV_ROI_table', 'well_ROI_table']\n - Masking Roi tables: ['nuclei_ROI_table']
# Loading a table\nfeature_table = ngff_image.tables.get_table(\"regionprops_DAPI\")\nfeature_table.table
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/anndata/_core/aligned_df.py:68: ImplicitModificationWarning: Transforming to str index.\n warnings.warn(\"Transforming to str index.\", ImplicitModificationWarning)\nOut[9]:\nlabel area bbox_area equivalent_diameter max_intensity mean_intensity min_intensity standard_deviation_intensity\n1 2120.0 2655.0 15.938437 476.0 278.635864 86.0 54.343792\n2 327.0 456.0 8.547709 604.0 324.162079 118.0 90.847092\n3 1381.0 1749.0 13.816510 386.0 212.682114 60.0 50.169601\n4 2566.0 3588.0 16.985800 497.0 251.731491 61.0 53.307186\n5 4201.0 5472.0 20.019413 466.0 223.862885 51.0 56.719025\n... ... ... ... ... ... ... ...\n3002 1026.0 1288.0 12.513618 589.0 308.404480 132.0 64.681778\n3003 859.0 1080.0 11.794101 400.0 270.349243 107.0 49.040470\n3004 508.0 660.0 9.899693 314.0 205.043304 82.0 33.249981\n3005 369.0 440.0 8.899028 376.0 217.970184 82.0 50.978519\n3006 278.0 330.0 8.097459 339.0 217.996399 100.0 38.510067
3006 rows \u00d7 7 columns
# Loading a roi table\nroi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\n\nprint(f\"{roi_table.field_indexes=}\")\nprint(f\"{roi_table.get_roi('FOV_1')=}\")\n\nroi_table.table
roi_table.field_indexes=['FOV_1', 'FOV_2', 'FOV_3', 'FOV_4']\nroi_table.get_roi('FOV_1')=WorldCooROI(x_length=416.0, y_length=351.0, z_length=1.0, x=0.0, y=0.0, z=0.0)\nOut[10]:\nFieldIndex x_micrometer y_micrometer z_micrometer len_x_micrometer len_y_micrometer len_z_micrometer x_micrometer_original y_micrometer_original\nFOV_1 0.0 0.0 0.0 416.0 351.0 1.0 -1448.300049 -1517.699951\nFOV_2 416.0 0.0 0.0 416.0 351.0 1.0 -1032.300049 -1517.699951\nFOV_3 0.0 351.0 0.0 416.0 351.0 1.0 -1448.300049 -1166.699951\nFOV_4 416.0 351.0 0.0 416.0 351.0 1.0 -1032.300049 -1166.699951
ROIs can be used to index image and label data.
import matplotlib.pyplot as plt\n\n# Plotting a single ROI\nroi = roi_table.get_roi(\"FOV_1\")\nroi_data = image.get_array_from_roi(roi, c=0, mode=\"numpy\")\nplt.title(\"ROI: FOV_1\")\nplt.imshow(roi_data[0], cmap=\"gray\")\nplt.axis(\"off\")\nplt.show()
new_ngff_image = ngff_image.derive_new_image(\"../../data/new_ome.zarr\", name=\"new_image\")\nprint(new_ngff_image)
NGFFImage(group_path=/home/runner/work/ngio/ngio/data/new_ome.zarr/, \n paths=['0', '1', '2', '3', '4'], \n labels=[], \n tables=[], \n)
from ngio.core.utils import get_fsspec_http_store\n\n# Ngio can stream data from any fsspec-compatible store\nurl = \"https://raw.githubusercontent.com/fractal-analytics-platform/fractal-ome-zarr-examples/refs/heads/main/v04/20200812-CardiomyocyteDifferentiation14-Cycle1_B_03_mip.zarr/\"\nstore = get_fsspec_http_store(url)\nngff_image = NgffImage(store, \"r\")\n\nprint(ngff_image)
NGFFImage(group_path=https://raw.githubusercontent.com/fractal-analytics-platform/fractal-ome-zarr-examples/refs/heads/main/v04/20200812-CardiomyocyteDifferentiation14-Cycle1_B_03_mip.zarr/, \n paths=['0', '1', '2', '3'], \n labels=['nuclei'], \n tables=['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI'], \n)\n"},{"location":"notebooks/basic_usage/#ome-zarr-image-exploration","title":"OME-Zarr Image Exploration\u00b6","text":"
In this notebook we will show how to use the 'NgffImage' class to explore and manage an OME-NGFF image.
For this example we will use a small example image that can be downloaded from the following link: example ome-zarr
"},{"location":"notebooks/basic_usage/#setup","title":"Setup\u00b6","text":"You can download the example image (on Linux and macOS) by running the following command:
bash setup_data.sh\n
from the root of the repository.
"},{"location":"notebooks/basic_usage/#ngffimage","title":"NgffImage\u00b6","text":"The NgffImage
provides a high-level interface to read, write and manipulate NGFF images. A NgffImage
can be created from a store-like object (e.g. a path to a directory, or a URL) or from a zarr.Group
object.
The NgffImage
can also be used to load labels from an OME-NGFF
file; these behave similarly to the Image
object.
The NgffImage
can also be used to load tables from an OME-NGFF
file.
ngio
supports three types of tables:
features table
A simple table to store features associated with a label.
roi table
A table to store regions of interest.
masking roi tables
A table to store the bounding boxes of single objects associated with a label.
When processing an image, it is often useful to derive a new image from the original image. The NgffImage
class provides a method to derive a new image from the original image. When deriving a new image, a new NgffImage
object is created with the same metadata as the original image. Optionally, the user can specify different metadata for the new image (e.g. different channel names).
The NgffImage
class can also be used to stream an image over HTTP. This is useful when the image is stored on a remote server and you want to access it without downloading the entire image. All features of the NgffImage
class are available when streaming an image over HTTP (except for operations that require writing to the image).
import matplotlib.pyplot as plt\n\nfrom ngio.core.ngff_image import NgffImage\n\nngff_image = NgffImage(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0\")
image = ngff_image.get_image()\n\nprint(\"Image information:\")\nprint(f\"{image.shape=}\")\nprint(f\"{image.axes_names=}\")\nprint(f\"{image.pixel_size=}\")\nprint(f\"{image.channel_labels=}\")\nprint(f\"{image.dimensions=}\")
Image information:\nimage.shape=(3, 1, 4320, 5120)\nimage.axes_names=['c', 'z', 'y', 'x']\nimage.pixel_size=PixelSize(x=0.1625, y=0.1625, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)\nimage.channel_labels=['DAPI', 'nanog', 'Lamin B1']\nimage.dimensions=Dimensions(c=3, z=1, y=4320, x=5120)\n
The Image
object is lazy, meaning that the image is not loaded into memory until it is needed. To get the image data from disk we can use the .array
attribute or we can get it as a dask.array
object using the .dask_array
attribute.
# Get image as a dask array\ndask_array = image.on_disk_dask_array\ndask_array\nOut[3]: dask.array<shape=(3, 1, 4320, 5120), dtype=uint16, chunksize=(1, 1, 2160, 2560)>
Note: directly accessing the .on_disk_array
or .on_disk_dask_array
attributes will load the image as stored in the file.
Since, in principle, images can have a different axes order, a safer way to access the image data is to use the .get_array()
method, which returns the image data in canonical order (TCZYX).
image_numpy = image.get_array(c=0, x=slice(0, 250), y=slice(0, 250), preserve_dimensions=False, mode=\"numpy\")\n\nprint(f\"{image_numpy.shape=}\")
image_numpy.shape=(1, 250, 250)
roi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\nroi = roi_table.get_roi(\"FOV_1\")\nprint(f\"{roi=}\")\n\nimage_roi_1 = image.get_array_from_roi(roi=roi, c=0, preserve_dimensions=True, mode=\"dask\")\nimage_roi_1
roi=WorldCooROI(x_length=416.0, y_length=351.0, z_length=1.0, x=0.0, y=0.0, z=0.0)\nOut[5]: dask.array<shape=(1, 1, 2160, 2560), dtype=uint16, chunksize=(1, 1, 2160, 2560)>
The roi object is defined in physical coordinates and can be used to extract the region of interest from the image or label at any resolution.
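Because the ROI stores physical extents, converting it to pixel indices only requires the pixel size of the target resolution level. A minimal sketch of that conversion, using the FOV_1 numbers from this notebook (`roi_to_slices` is a hypothetical helper, not ngio's implementation):

```python
def roi_to_slices(origin_um, length_um, pixel_size_um):
    """Convert a physical ROI extent (micrometers) to a pixel slice."""
    start = round(origin_um / pixel_size_um)
    stop = round((origin_um + length_um) / pixel_size_um)
    return slice(start, stop)

# FOV_1 spans x=0.0 to x=416.0 micrometers
print(roi_to_slices(0.0, 416.0, 0.1625))  # level 0 (0.1625 um/px) -> slice(0, 2560)
print(roi_to_slices(0.0, 416.0, 0.65))    # level 2 (0.65 um/px)   -> slice(0, 640)
```

The same physical ROI thus resolves to different pixel windows at each pyramid level, which is why one ROI can index every resolution.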
image_2 = ngff_image.get_image(path=\"2\")\n# Two images at different resolutions\nprint(f\"{image.pixel_size=}\")\nprint(f\"{image_2.pixel_size=}\")\n\n# Get roi for higher resolution image\nimage_1_roi_1 = image.get_array_from_roi(roi=roi, c=0, preserve_dimensions=False)\n\n# Get roi for lower resolution image\nimage_2_roi_1 = image_2.get_array_from_roi(roi=roi, c=0, preserve_dimensions=False)\n\n# Plot the two images side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image_1_roi_1[0], cmap=\"gray\")\naxs[1].imshow(image_2_roi_1[0], cmap=\"gray\")\nplt.show()
image.pixel_size=PixelSize(x=0.1625, y=0.1625, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)\nimage_2.pixel_size=PixelSize(x=0.65, y=0.65, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)
import numpy as np\n\n# Get a small slice of the image\nsmall_slice = image.get_array(x=slice(1000, 2000), y=slice(1000, 2000))\n\n# Set the sample slice to zeros\nzeros_slice = np.zeros_like(small_slice)\nimage.set_array(patch=zeros_slice, x=slice(1000, 2000), y=slice(1000, 2000))\n\n\n# Load the image from disk and show the edited image\nnuclei = ngff_image.labels.get_label(\"nuclei\")\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].imshow(nuclei.on_disk_array[0])\nfor ax in axs:\n ax.axis(\"off\")\nplt.tight_layout()\nplt.show()\n\n# Add back the original slice to the image\nimage.set_array(patch=small_slice, x=slice(1000, 2000), y=slice(1000, 2000))
# Create a new label object and set it to a simple segmentation\nnew_label = ngff_image.labels.derive(\"new_label\", overwrite=True)\n\nsimple_segmentation = image.on_disk_array[0] > 100\nnew_label.on_disk_array[...] = simple_segmentation\n\n# Make a subplot with the two images shown side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].imshow(new_label.on_disk_array[0], cmap=\"gray\")\nfor ax in axs:\n ax.axis(\"off\")\nplt.tight_layout()\nplt.show()
label_0 = ngff_image.labels.get_label(\"new_label\", path=\"0\")\nlabel_2 = ngff_image.labels.get_label(\"new_label\", path=\"2\")\n\nlabel_before_consolidation = label_2.on_disk_array[...]\n\n# Consolidate the label\nlabel_0.consolidate()\n\nlabel_after_consolidation = label_2.on_disk_array[...]\n\n\n# Make a subplot with the two images shown side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(label_before_consolidation[0], cmap=\"gray\")\naxs[1].imshow(label_after_consolidation[0], cmap=\"gray\")\nfor ax in axs:\n ax.axis(\"off\")\nplt.tight_layout()\nplt.show()
import numpy as np\nimport pandas as pd\n\nprint(f\"List of feature table: {ngff_image.tables.list(table_type='feature_table')}\")\n\n\nnuclei = ngff_image.labels.get_label('nuclei')\n\n# Create a table with random features for each nuclei in each ROI\nlist_of_records = []\nfor roi in roi_table.rois:\n nuclei_in_roi = nuclei.get_array_from_roi(roi, mode='numpy')\n for nuclei_id in np.unique(nuclei_in_roi)[1:]:\n list_of_records.append(\n {\"label\": nuclei_id,\n \"feat1\": np.random.rand(),\n \"feat2\": np.random.rand(),\n \"ROI\": roi.infos.get(\"FieldIndex\")}\n )\n\nfeat_df = pd.DataFrame.from_records(list_of_records)\n\n# Create a new feature table\nfeat_table = ngff_image.tables.new(name='new_feature_table',\n label_image='../nuclei',\n table_type='feature_table',\n overwrite=True)\n\nprint(f\"New list of feature table: {ngff_image.tables.list(table_type='feature_table')}\")\nfeat_table.set_table(feat_df)\nfeat_table.consolidate()\n\nfeat_table.table
List of feature table: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\nNew list of feature table: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4', 'new_feature_table']\n
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/anndata/_core/anndata.py:1754: UserWarning: Observation names are not unique. To make them unique, call `.obs_names_make_unique`.\n utils.warn_names_duplicates(\"obs\")\nOut[10]:\nlabel feat1 feat2 ROI\n1 0.368288 0.688558 FOV_1\n2 0.497446 0.191019 FOV_1\n3 0.202167 0.654706 FOV_1\n4 0.290300 0.332909 FOV_1\n5 0.081955 0.006951 FOV_1\n... ... ... ...\n2987 0.517186 0.112553 FOV_4\n2991 0.152718 0.119371 FOV_4\n2993 0.777126 0.627275 FOV_4\n2995 0.400694 0.777562 FOV_4\n2996 0.811340 0.569860 FOV_4
3091 rows \u00d7 3 columns
\n"},{"location":"notebooks/image/#imageslabelstables","title":"Images/Labels/Tables\u00b6","text":"
In this notebook we will show how to use the Image
, Label
and Table
objects.
Images can be loaded from a NgffImage
object.
roi
objects from a roi_table
can be used to extract a region of interest from an image or a label.
Similarly to the .array()
we can use the .set_array()
(or set_array_from_roi
) method to write part of an image to disk.
When doing image analysis, we often need to create new labels or tables. The ngff_image
allows us to easily create new labels and tables.
Every time we modify a label or an image, we modify the on-disk data at one pyramid level only. This means that if the image is saved at multiple resolutions, we need to consolidate the changes across all resolutions. To do so, we can use the .consolidate()
method.
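Conceptually, consolidation re-derives each lower-resolution level from the edited full-resolution level. A toy sketch of that propagation in plain Python, using 2x nearest-neighbour downsampling (ngio's actual resampling strategy may differ):

```python
def downsample_2x(level0):
    """Nearest-neighbour 2x downsampling of a 2D list of lists."""
    return [row[::2] for row in level0[::2]]

# A 4x4 "level 0" image of ones, edited in place at full resolution
level0 = [[1] * 4 for _ in range(4)]
level0[0][0] = 9

# "Consolidate": recompute level 1 so the edit reaches the lower resolution
level1 = downsample_2x(level0)
print(level1)  # [[9, 1], [1, 1]]
```

Without this step, the lower pyramid levels would still hold the pre-edit data, which is exactly the inconsistency the before/after comparison above illustrates.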
We can create a new table by creating a new Table
object from a pandas DataFrame. For a simple feature table, the only requirement is an integer column named label
that will be used to link the table to the objects in the image.
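A minimal illustration of that linkage in plain Python (hypothetical data; in ngio the table itself is a pandas DataFrame):

```python
# Each record carries an integer "label" matching an object in the label image
records = [
    {"label": 1, "feat1": 0.37},
    {"label": 2, "feat1": 0.50},
]

# The integer label column is what ties each row back to a segmented object
assert all(isinstance(r["label"], int) for r in records)
by_label = {r["label"]: r for r in records}
print(by_label[2]["feat1"])  # 0.5
```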
import matplotlib.pyplot as plt\n\nfrom ngio.core import NgffImage\n\nngff_image = NgffImage(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1.zarr/B/03/0\")
mip_ngff = ngff_image.derive_new_image(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1.zarr/B/03/0_mip\",\n name=\"MIP\",\n on_disk_shape=(1, 1, 2160, 5120))
# Get the source image\nsource_image = ngff_image.get_image()\nprint(\"Source image loaded with shape:\", source_image.shape)\n\n# Get the MIP image\nmip_image = mip_ngff.get_image()\nprint(\"MIP image loaded with shape:\", mip_image.shape)\n\n# Get a ROI table\nroi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\nprint(\"ROI table loaded with\", len(roi_table.rois), \"ROIs\")\n\n# For each ROI in the table\n# - get the data from the source image\n# - calculate the MIP\n# - set the data in the MIP image\nfor roi in roi_table.rois:\n print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n patch = source_image.get_array_from_roi(roi)\n mip_patch = patch.max(axis=1, keepdims=True)\n mip_image.set_array_from_roi(patch=mip_patch, roi=roi)\n \nprint(\"MIP image saved\")\n\nplt.figure(figsize=(5, 5))\nplt.title(\"Mip\")\nplt.imshow(mip_image.on_disk_array[0, 0, :, :], cmap=\"gray\")\nplt.axis('off')\nplt.tight_layout()\nplt.show()
Source image loaded with shape: (1, 2, 2160, 5120)\nMIP image loaded with shape: (1, 1, 2160, 5120)\nROI table loaded with 2 ROIs\n - Processing ROI None\n - Processing ROI None\nMIP image saved
# Get the MIP image at a lower resolution\nmip_image_2 = mip_ngff.get_image(path=\"2\")\n\nimage_before_consolidation = mip_image_2.get_array(c=0, z=0)\n\n# Consolidate the pyramid\nmip_image.consolidate()\n\nimage_after_consolidation = mip_image_2.get_array(c=0, z=0)\n\nfig, axs = plt.subplots(2, 1, figsize=(10, 5))\naxs[0].set_title(\"Before consolidation\")\naxs[0].imshow(image_before_consolidation, cmap=\"gray\")\naxs[1].set_title(\"After consolidation\")\naxs[1].imshow(image_after_consolidation, cmap=\"gray\")\nfor ax in axs:\n ax.axis('off')\nplt.tight_layout()\nplt.show()
mip_roi_table = mip_ngff.tables.new(\"FOV_ROI_table\", overwrite=True)\n\nroi_list = []\nfor roi in roi_table.rois:\n print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n roi.z_length = 1 # In the MIP image, the z dimension is 1\n roi_list.append(roi)\n\nmip_roi_table.set_rois(roi_list, overwrite=True)\nmip_roi_table.consolidate()\n\nmip_roi_table.table
- Processing ROI None\n - Processing ROI None\nOut[5]:\nFieldIndex x_micrometer y_micrometer z_micrometer len_x_micrometer len_y_micrometer len_z_micrometer x_micrometer_original y_micrometer_original\nFOV_1 0.0 0.0 0.0 416.0 351.0 1 -1448.300049 -1517.699951\nFOV_2 416.0 0.0 0.0 416.0 351.0 1 -1032.300049 -1517.699951
# Setup a simple segmentation function\n\nimport numpy as np\nfrom matplotlib.colors import ListedColormap\nfrom skimage.filters import threshold_otsu\nfrom skimage.measure import label\n\nrand_cmap = np.random.rand(1000, 3)\nrand_cmap[0] = 0\nrand_cmap = ListedColormap(rand_cmap)\n\n\ndef otsu_threshold_segmentation(image: np.ndarray, max_label:int) -> np.ndarray:\n \"\"\"Simple segmentation using Otsu thresholding.\"\"\"\n threshold = threshold_otsu(image)\n binary = image > threshold\n label_image = label(binary)\n label_image += max_label\n label_image = np.where(binary, label_image, 0)\n return label_image
nuclei_image = mip_ngff.labels.derive(name=\"nuclei\", overwrite=True)
# Get the source image\nsource_image = mip_ngff.get_image()\nprint(\"Source image loaded with shape:\", source_image.shape)\n\n# Get a ROI table\nroi_table = mip_ngff.tables.get_table(\"FOV_ROI_table\")\nprint(\"ROI table loaded with\", len(roi_table.rois), \"ROIs\")\n\n# Find the DAPI channel\ndapi_idx = source_image.get_channel_idx(label=\"DAPI\")\n\n# For each ROI in the table\n# - get the data from the source image\n# - calculate the Segmentation\n# - set the data in segmentation image\nmax_label = 0\nfor roi in roi_table.rois:\n print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n patch = source_image.get_array_from_roi(roi, c=dapi_idx)\n segmentation = otsu_threshold_segmentation(patch, max_label)\n\n # Add the max label of the previous segmentation to avoid overlapping labels\n max_label = segmentation.max()\n\n nuclei_image.set_array_from_roi(patch=segmentation, roi=roi)\n\n# Consolidate the segmentation image\nnuclei_image.consolidate()\n\nprint(\"Segmentation image saved\")\nfig, axs = plt.subplots(2, 1, figsize=(10, 5))\naxs[0].set_title(\"MIP\")\naxs[0].imshow(source_image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].set_title(\"Nuclei segmentation\")\naxs[1].imshow(nuclei_image.on_disk_array[0], cmap=rand_cmap, interpolation='nearest')\nfor ax in axs:\n ax.axis('off')\nplt.tight_layout()\nplt.show()
Source image loaded with shape: (1, 1, 2160, 5120)\nROI table loaded with 2 ROIs\n - Processing ROI None\n
- Processing ROI None\n
Segmentation image saved
\n"},{"location":"notebooks/processing/#processing","title":"Processing\u00b6","text":"
In this notebook we will implement a couple of mock image analysis workflows using ngio
.
In this workflow we will read a volumetric image and create a maximum intensity projection (MIP) along the z-axis.
"},{"location":"notebooks/processing/#step-1-create-a-ngff-image","title":"step 1: Create a ngff image\u00b6","text":"For this example we will use the following publicly available image
"},{"location":"notebooks/processing/#step-2-create-a-new-ngff-image-to-store-the-mip","title":"step 2: Create a new ngff image to store the MIP\u00b6","text":""},{"location":"notebooks/processing/#step-3-run-the-workflow","title":"step 3: Run the workflow\u00b6","text":"For each roi in the image, create a MIP and store it in the new image
"},{"location":"notebooks/processing/#step-4-consolidate-the-results-important","title":"step 4: Consolidate the results (!!! Important)\u00b6","text":"In this workflow, we wrote the MIP image to a single level of the image pyramid. To truly consolidate the results, we need to write the MIP to all levels of the image pyramid. We can do this by calling the .consolidate()
method on the image.
As a final step, we will create a new ROI table that contains the MIPs as ROIs, where we correct the z
bounds of the ROIs to reflect the MIP.
Now we can use the MIP image to segment the image using a simple thresholding algorithm.
"},{"location":"notebooks/processing/#step-1-derive-a-new-label-image-from-the-mip-image","title":"step 1: Derive a new label image from the MIP image\u00b6","text":""},{"location":"notebooks/processing/#step-2-run-the-workflow","title":"step 2: Run the workflow\u00b6","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to NGIO","text":"NGIO is a Python library to streamline OME-Zarr image analysis workflows.
Main Goals:
To get started, check out the Getting Started guide.
"},{"location":"#ngio-is-under-active-development","title":"\ud83d\udea7 Ngio is Under active Development \ud83d\udea7","text":""},{"location":"#roadmap","title":"Roadmap","text":"Feature Status ETA Description Metadata Handling \u2705 Read, Write, Validate OME-Zarr Metadata (0.4 supported, 0.5 ready) OME-Zarr Validation \u2705 Validate OME-Zarr files for compliance with the OME-Zarr Specification + Compliance between Metadata and Data Base Image Handling \u2705 Load data from OME-Zarr files, retrieve basic metadata, and write data ROI Handling \u2705 Common ROI models Label Handling \u2705 Mid-September Based on Image Handling Table Validation \u2705 Mid-September Validate Table fractal V1 + Compliance between Metadata and Data Table Handling \u2705 Mid-September Read, Write ROI, Features, and Masked Tables Basic Iterators Ongoing End-September Read and Write Iterators for common access patterns Base Documentation \u2705 End-September API Documentation and Examples Beta Ready Testing \u2705 End-September Beta Testing; Library is ready for testing, but the API is not stable Streaming from Fractal Ongoing December Ngio can stream ome-zarr from fractal Mask Iterators Ongoing Early 2025 Iterators over Masked Tables Advanced Iterators Not started mid-2025 Iterators for advanced access patterns Parallel Iterators Not started mid-2025 Concurrent Iterators for parallel read and write Full Documentation Not started 2025 Complete Documentation Release 1.0 (Commitment to API) Not started 2025 API is stable; breaking changes will be avoided"},{"location":"#contributors","title":"Contributors","text":"ngio
is developed at the BioVisionCenter at the University of Zurich. The main contributors are: @lorenzocerrone, @jluethi.
ngio
is released according to the BSD-3-Clause License. See LICENSE
Warning
The library is still in development and is not yet stable. The API is subject to change; bugs and breaking changes are expected.
Warning
The documentation is still under development. It is not yet complete and may contain errors and inaccuracies.
"},{"location":"getting-started/#installation","title":"Installation","text":"The library can be installed from PyPI using pip:
pip install \"ngio[core]\"\n
The core
extra installs the zarr-python
dependency. As of now, zarr-python
must be installed separately, due to the transition to the new zarr-v3
library.
ngio
API Overview","text":"ngio
implements an abstract object base API for handling OME-Zarr files. The three main objects are NgffImage
, Image
(Label
), and ROITables
.
NgffImage
is the main entry point to the library. It is used to open an OME-Zarr Image and manage its metadata. This object cannot be used to access the data directly, but it can be used to access and create the Image
, Label
, and Tables
objects. Moreover, it can be used to derive new Ngff
images based on the current one.Image
and Label
are used to access \"ImageLike\" objects. They are the main objects to access the data in the OME-Zarr file, manage the metadata, and write data.ROITables
can be used to access specific regions of interest in the image. They are tightly integrated with the Image
and Label
objects.Currently, the library is not yet stable. However, you can see some example usage in our demo notebooks:
ngio.core
","text":"Core classes for the ngio library.
"},{"location":"api/core/#ngffimage","title":"NGFFImage","text":""},{"location":"api/core/#ngio.core.NgffImage","title":"ngio.core.NgffImage
","text":"A class to handle OME-NGFF images.
Source code inngio/core/ngff_image.py
class NgffImage:\n \"\"\"A class to handle OME-NGFF images.\"\"\"\n\n def __init__(\n self, store: StoreLike, cache: bool = False, mode: AccessModeLiteral = \"r+\"\n ) -> None:\n \"\"\"Initialize the NGFFImage in read mode.\"\"\"\n self.store = store\n self._mode = mode\n self._group = open_group_wrapper(store=store, mode=self._mode)\n\n if self._group.read_only:\n self._mode = \"r\"\n\n self._image_meta = get_ngff_image_meta_handler(\n self._group, meta_mode=\"image\", cache=cache\n )\n self._metadata_cache = cache\n self.tables = TableGroup(self._group, mode=self._mode)\n self.labels = LabelGroup(\n self._group, image_ref=self.get_image(), mode=self._mode\n )\n\n ngio_logger.info(f\"Opened image located in store: {store}\")\n ngio_logger.info(f\"- Image number of levels: {self.num_levels}\")\n\n def __repr__(self) -> str:\n \"\"\"Get the string representation of the image.\"\"\"\n name = \"NGFFImage(\"\n len_name = len(name)\n return (\n f\"{name}\"\n f\"group_path={self.group_path}, \\n\"\n f\"{' ':>{len_name}}paths={self.levels_paths}, \\n\"\n f\"{' ':>{len_name}}labels={self.labels.list()}, \\n\"\n f\"{' ':>{len_name}}tables={self.tables.list()}, \\n\"\n \")\"\n )\n\n @property\n def group(self) -> zarr.Group:\n \"\"\"Get the group of the image.\"\"\"\n return self._group\n\n @property\n def root_path(self) -> str:\n \"\"\"Get the root path of the image.\"\"\"\n return str(self._group.store.path)\n\n @property\n def group_path(self) -> str:\n \"\"\"Get the path of the group.\"\"\"\n root = self.root_path\n if root.endswith(\"/\"):\n root = root[:-1]\n return f\"{root}/{self._group.path}\"\n\n @property\n def image_meta(self) -> ImageMeta:\n \"\"\"Get the image metadata.\"\"\"\n meta = self._image_meta.load_meta()\n assert isinstance(meta, ImageMeta)\n return meta\n\n @property\n def num_levels(self) -> int:\n \"\"\"Get the number of levels in the image.\"\"\"\n return self.image_meta.num_levels\n\n @property\n def levels_paths(self) -> list[str]:\n \"\"\"Get 
the paths of the levels in the image.\"\"\"\n return self.image_meta.levels_paths\n\n def get_image(\n self,\n *,\n path: str | None = None,\n pixel_size: PixelSize | None = None,\n highest_resolution: bool = True,\n ) -> Image:\n \"\"\"Get an image handler for the given level.\n\n Args:\n path (str | None, optional): The path to the level.\n pixel_size (tuple[float, ...] | list[float] | None, optional): The pixel\n size of the level.\n highest_resolution (bool, optional): Whether to get the highest\n resolution level\n\n Returns:\n ImageHandler: The image handler.\n \"\"\"\n if path is not None or pixel_size is not None:\n highest_resolution = False\n\n image = Image(\n store=self._group,\n path=path,\n pixel_size=pixel_size,\n highest_resolution=highest_resolution,\n label_group=LabelGroup(self._group, image_ref=None, mode=self._mode),\n cache=self._metadata_cache,\n mode=self._mode,\n )\n ngio_logger.info(f\"Opened image at path: {image.path}\")\n ngio_logger.info(f\"- {image.dimensions}\")\n ngio_logger.info(f\"- {image.pixel_size}\")\n return image\n\n def _compute_percentiles(\n self, start_percentile: float, end_percentile: float\n ) -> tuple[list[float], list[float]]:\n \"\"\"Compute the percentiles for the window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n\n \"\"\"\n meta = self.image_meta\n\n lowest_res_image = self.get_image(highest_resolution=True)\n lowest_res_shape = lowest_res_image.shape\n for path in self.levels_paths:\n image = self.get_image(path=path)\n if np.prod(image.shape) < np.prod(lowest_res_shape):\n lowest_res_shape = image.shape\n lowest_res_image = image\n\n num_c = lowest_res_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. 
\" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n starts, ends = [], []\n for c in range(num_c):\n data = lowest_res_image.get_array(c=c, mode=\"dask\").ravel()\n _start_percentile, _end_percentile = da.percentile(\n data, [start_percentile, end_percentile], method=\"nearest\"\n ).compute()\n\n starts.append(_start_percentile)\n ends.append(_end_percentile)\n\n return starts, ends\n\n def lazy_init_omero(\n self,\n labels: list[str] | int | None = None,\n wavelength_ids: list[str] | None = None,\n colors: list[str] | None = None,\n active: list[bool] | None = None,\n start_percentile: float | None = 1,\n end_percentile: float | None = 99,\n data_type: Any = np.uint16,\n consolidate: bool = True,\n ) -> None:\n \"\"\"Set the OMERO metadata for the image.\n\n Args:\n labels (list[str] | int | None): The labels of the channels.\n wavelength_ids (list[str] | None): The wavelengths of the channels.\n colors (list[str] | None): The colors of the channels.\n active (list[bool] | None): Whether the channels are active.\n start_percentile (float | None): The start percentile for computing the data\n range. If None, the start is the same as the min value of the data type.\n end_percentile (float | None): The end percentile for for computing the data\n range. 
If None, the start is the same as the max value of the data type.\n data_type (Any): The data type of the image.\n consolidate (bool): Whether to consolidate the metadata.\n \"\"\"\n if labels is None:\n ref = self.get_image()\n labels = ref.num_channels\n\n if start_percentile is not None and end_percentile is not None:\n start, end = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n elif start_percentile is None and end_percentile is None:\n raise ValueError(\"Both start and end percentiles cannot be None.\")\n elif end_percentile is None and start_percentile is not None:\n raise ValueError(\n \"End percentile cannot be None if start percentile is not.\"\n )\n else:\n start, end = None, None\n\n self.image_meta.lazy_init_omero(\n labels=labels,\n wavelength_ids=wavelength_ids,\n colors=colors,\n start=start,\n end=end,\n active=active,\n data_type=data_type,\n )\n\n if consolidate:\n self._image_meta.write_meta(self.image_meta)\n\n def update_omero_window(\n self,\n start_percentile: int = 1,\n end_percentile: int = 99,\n min_value: int | float | None = None,\n max_value: int | float | None = None,\n ) -> None:\n \"\"\"Update the OMERO window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n min_value (int | float | None): The minimum value of the window.\n max_value (int | float | None): The maximum value of the window.\n\n \"\"\"\n start, ends = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n meta = self.image_meta\n ref_image = self.get_image()\n\n for func in [np.iinfo, np.finfo]:\n try:\n type_max = func(ref_image.on_disk_array.dtype).max\n type_min = func(ref_image.on_disk_array.dtype).min\n break\n except ValueError:\n continue\n else:\n raise ValueError(\"Data type not recognized.\")\n\n if min_value is None:\n min_value = 
type_min\n if max_value is None:\n max_value = type_max\n\n num_c = ref_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. \" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n if len(channel_list) != len(start):\n raise ValueError(\"The number of channels does not match the image.\")\n\n for c, (channel, s, e) in enumerate(\n zip(channel_list, start, ends, strict=True)\n ):\n channel.channel_visualisation.start = s\n channel.channel_visualisation.end = e\n channel.channel_visualisation.min = min_value\n channel.channel_visualisation.max = max_value\n\n ngio_logger.info(\n f\"Updated window for channel {channel.label}. \"\n f\"Start: {start_percentile}, End: {end_percentile}\"\n )\n meta.omero.channels[c] = channel\n\n self._image_meta.write_meta(meta)\n\n def derive_new_image(\n self,\n store: StoreLike,\n name: str,\n overwrite: bool = True,\n **kwargs: dict,\n ) -> \"NgffImage\":\n \"\"\"Derive a new image from the current image.\n\n Args:\n store (StoreLike): The store to create the new image in.\n name (str): The name of the new image.\n overwrite (bool): Whether to overwrite the image if it exists\n **kwargs: Additional keyword arguments.\n Follow the same signature as `create_empty_ome_zarr_image`.\n\n Returns:\n NgffImage: The new image.\n \"\"\"\n image_0 = self.get_image(highest_resolution=True)\n\n # Get the channel information if it exists\n omero = self.image_meta.omero\n if omero is not None:\n channels = omero.channels\n omero_kwargs = omero.extra_fields\n else:\n channels = []\n omero_kwargs = {}\n\n default_kwargs = {\n \"store\": store,\n \"on_disk_shape\": image_0.on_disk_shape,\n \"chunks\": image_0.on_disk_array.chunks,\n \"dtype\": image_0.on_disk_array.dtype,\n \"on_disk_axis\": image_0.dataset.on_disk_axes_names,\n \"pixel_sizes\": 
image_0.pixel_size,\n \"xy_scaling_factor\": self.image_meta.xy_scaling_factor,\n \"z_scaling_factor\": self.image_meta.z_scaling_factor,\n \"time_spacing\": image_0.dataset.time_spacing,\n \"time_units\": image_0.dataset.time_axis_unit,\n \"levels\": self.num_levels,\n \"name\": name,\n \"channel_labels\": image_0.channel_labels,\n \"channel_wavelengths\": [ch.wavelength_id for ch in channels],\n \"channel_visualization\": [ch.channel_visualisation for ch in channels],\n \"omero_kwargs\": omero_kwargs,\n \"overwrite\": overwrite,\n \"version\": self.image_meta.version,\n }\n\n default_kwargs.update(kwargs)\n\n create_empty_ome_zarr_image(\n **default_kwargs,\n )\n return NgffImage(store=store)\n
"},{"location":"api/core/#ngio.core.NgffImage.group","title":"group: zarr.Group
property
","text":"Get the group of the image.
"},{"location":"api/core/#ngio.core.NgffImage.group_path","title":"group_path: str
property
","text":"Get the path of the group.
"},{"location":"api/core/#ngio.core.NgffImage.image_meta","title":"image_meta: ImageMeta
property
","text":"Get the image metadata.
"},{"location":"api/core/#ngio.core.NgffImage.levels_paths","title":"levels_paths: list[str]
property
","text":"Get the paths of the levels in the image.
"},{"location":"api/core/#ngio.core.NgffImage.num_levels","title":"num_levels: int
property
","text":"Get the number of levels in the image.
"},{"location":"api/core/#ngio.core.NgffImage.root_path","title":"root_path: str
property
","text":"Get the root path of the image.
"},{"location":"api/core/#ngio.core.NgffImage.__init__","title":"__init__(store: StoreLike, cache: bool = False, mode: AccessModeLiteral = 'r+') -> None
","text":"Initialize the NGFFImage in read mode.
Source code inngio/core/ngff_image.py
def __init__(\n self, store: StoreLike, cache: bool = False, mode: AccessModeLiteral = \"r+\"\n) -> None:\n \"\"\"Initialize the NGFFImage in read mode.\"\"\"\n self.store = store\n self._mode = mode\n self._group = open_group_wrapper(store=store, mode=self._mode)\n\n if self._group.read_only:\n self._mode = \"r\"\n\n self._image_meta = get_ngff_image_meta_handler(\n self._group, meta_mode=\"image\", cache=cache\n )\n self._metadata_cache = cache\n self.tables = TableGroup(self._group, mode=self._mode)\n self.labels = LabelGroup(\n self._group, image_ref=self.get_image(), mode=self._mode\n )\n\n ngio_logger.info(f\"Opened image located in store: {store}\")\n ngio_logger.info(f\"- Image number of levels: {self.num_levels}\")\n
"},{"location":"api/core/#ngio.core.NgffImage.__repr__","title":"__repr__() -> str
","text":"Get the string representation of the image.
Source code inngio/core/ngff_image.py
def __repr__(self) -> str:\n \"\"\"Get the string representation of the image.\"\"\"\n name = \"NGFFImage(\"\n len_name = len(name)\n return (\n f\"{name}\"\n f\"group_path={self.group_path}, \\n\"\n f\"{' ':>{len_name}}paths={self.levels_paths}, \\n\"\n f\"{' ':>{len_name}}labels={self.labels.list()}, \\n\"\n f\"{' ':>{len_name}}tables={self.tables.list()}, \\n\"\n \")\"\n )\n
"},{"location":"api/core/#ngio.core.NgffImage.derive_new_image","title":"derive_new_image(store: StoreLike, name: str, overwrite: bool = True, **kwargs: dict) -> NgffImage
","text":"Derive a new image from the current image.
Parameters:
store
(StoreLike
) \u2013 The store to create the new image in.
name
(str
) \u2013 The name of the new image.
overwrite
(bool
, default: True
) \u2013 Whether to overwrite the image if it exists
**kwargs
(dict
, default: {}
) \u2013 Additional keyword arguments. Follow the same signature as create_empty_ome_zarr_image
.
Returns:
NgffImage
( NgffImage
) \u2013 The new image.
ngio/core/ngff_image.py
def derive_new_image(\n self,\n store: StoreLike,\n name: str,\n overwrite: bool = True,\n **kwargs: dict,\n) -> \"NgffImage\":\n \"\"\"Derive a new image from the current image.\n\n Args:\n store (StoreLike): The store to create the new image in.\n name (str): The name of the new image.\n overwrite (bool): Whether to overwrite the image if it exists\n **kwargs: Additional keyword arguments.\n Follow the same signature as `create_empty_ome_zarr_image`.\n\n Returns:\n NgffImage: The new image.\n \"\"\"\n image_0 = self.get_image(highest_resolution=True)\n\n # Get the channel information if it exists\n omero = self.image_meta.omero\n if omero is not None:\n channels = omero.channels\n omero_kwargs = omero.extra_fields\n else:\n channels = []\n omero_kwargs = {}\n\n default_kwargs = {\n \"store\": store,\n \"on_disk_shape\": image_0.on_disk_shape,\n \"chunks\": image_0.on_disk_array.chunks,\n \"dtype\": image_0.on_disk_array.dtype,\n \"on_disk_axis\": image_0.dataset.on_disk_axes_names,\n \"pixel_sizes\": image_0.pixel_size,\n \"xy_scaling_factor\": self.image_meta.xy_scaling_factor,\n \"z_scaling_factor\": self.image_meta.z_scaling_factor,\n \"time_spacing\": image_0.dataset.time_spacing,\n \"time_units\": image_0.dataset.time_axis_unit,\n \"levels\": self.num_levels,\n \"name\": name,\n \"channel_labels\": image_0.channel_labels,\n \"channel_wavelengths\": [ch.wavelength_id for ch in channels],\n \"channel_visualization\": [ch.channel_visualisation for ch in channels],\n \"omero_kwargs\": omero_kwargs,\n \"overwrite\": overwrite,\n \"version\": self.image_meta.version,\n }\n\n default_kwargs.update(kwargs)\n\n create_empty_ome_zarr_image(\n **default_kwargs,\n )\n return NgffImage(store=store)\n
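The kwargs handling above follows a simple pattern: derive defaults from the source image, then let caller overrides win. A minimal sketch in plain Python (the helper name and keys are illustrative, not the ngio API):

```python
def build_creation_kwargs(source_info: dict, **overrides) -> dict:
    """Merge caller overrides over defaults derived from a source image."""
    defaults = {
        "on_disk_shape": source_info["shape"],
        "dtype": source_info["dtype"],
        "levels": source_info["levels"],
        "overwrite": True,
    }
    defaults.update(overrides)  # caller kwargs win over the derived defaults
    return defaults

# Override only the dtype; everything else is inherited from the source.
source = {"shape": (3, 1, 4320, 5120), "dtype": "uint16", "levels": 5}
kwargs = build_creation_kwargs(source, dtype="float32")
```

This is why passing only the arguments you want to change is enough: the rest is copied from the current image.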
"},{"location":"api/core/#ngio.core.NgffImage.get_image","title":"get_image(*, path: str | None = None, pixel_size: PixelSize | None = None, highest_resolution: bool = True) -> Image
","text":"Get an image handler for the given level.
Parameters:
path
(str | None
, default: None
) \u2013 The path to the level.
pixel_size
(tuple[float, ...] | list[float] | None
, default: None
) \u2013 The pixel size of the level.
highest_resolution
(bool
, default: True
) \u2013 Whether to get the highest resolution level
Returns:
ImageHandler
( Image
) \u2013 The image handler.
ngio/core/ngff_image.py
def get_image(\n self,\n *,\n path: str | None = None,\n pixel_size: PixelSize | None = None,\n highest_resolution: bool = True,\n) -> Image:\n \"\"\"Get an image handler for the given level.\n\n Args:\n path (str | None, optional): The path to the level.\n pixel_size (tuple[float, ...] | list[float] | None, optional): The pixel\n size of the level.\n highest_resolution (bool, optional): Whether to get the highest\n resolution level\n\n Returns:\n ImageHandler: The image handler.\n \"\"\"\n if path is not None or pixel_size is not None:\n highest_resolution = False\n\n image = Image(\n store=self._group,\n path=path,\n pixel_size=pixel_size,\n highest_resolution=highest_resolution,\n label_group=LabelGroup(self._group, image_ref=None, mode=self._mode),\n cache=self._metadata_cache,\n mode=self._mode,\n )\n ngio_logger.info(f\"Opened image at path: {image.path}\")\n ngio_logger.info(f\"- {image.dimensions}\")\n ngio_logger.info(f\"- {image.pixel_size}\")\n return image\n
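The level-selection logic can be sketched independently of ngio: passing an explicit path or pixel size disables the highest-resolution default, and a pixel size picks the closest matching level. The function below is a hypothetical illustration, not the library's implementation:

```python
def select_level(levels: dict, path=None, pixel_size=None,
                 highest_resolution=True) -> str:
    """Pick a pyramid level. `levels` maps level path -> xy pixel size (um)."""
    if path is not None or pixel_size is not None:
        highest_resolution = False  # an explicit choice wins over the default
    if highest_resolution:
        return min(levels, key=levels.get)  # smallest pixel size = highest res
    if path is not None:
        return path
    # otherwise, the level whose pixel size is closest to the request
    return min(levels, key=lambda p: abs(levels[p] - pixel_size))

pyramid = {"0": 0.1625, "1": 0.325, "2": 0.65}
```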
"},{"location":"api/core/#ngio.core.NgffImage.lazy_init_omero","title":"lazy_init_omero(labels: list[str] | int | None = None, wavelength_ids: list[str] | None = None, colors: list[str] | None = None, active: list[bool] | None = None, start_percentile: float | None = 1, end_percentile: float | None = 99, data_type: Any = np.uint16, consolidate: bool = True) -> None
","text":"Set the OMERO metadata for the image.
Parameters:
labels
(list[str] | int | None
, default: None
) \u2013 The labels of the channels.
wavelength_ids
(list[str] | None
, default: None
) \u2013 The wavelengths of the channels.
colors
(list[str] | None
, default: None
) \u2013 The colors of the channels.
active
(list[bool] | None
, default: None
) \u2013 Whether the channels are active.
start_percentile
(float | None
, default: 1
) \u2013 The start percentile for computing the data range. If None, the start is the same as the min value of the data type.
end_percentile
(float | None
, default: 99
) \u2013 The end percentile for computing the data range. If None, the end is the same as the max value of the data type.
data_type
(Any
, default: uint16
) \u2013 The data type of the image.
consolidate
(bool
, default: True
) \u2013 Whether to consolidate the metadata.
ngio/core/ngff_image.py
def lazy_init_omero(\n self,\n labels: list[str] | int | None = None,\n wavelength_ids: list[str] | None = None,\n colors: list[str] | None = None,\n active: list[bool] | None = None,\n start_percentile: float | None = 1,\n end_percentile: float | None = 99,\n data_type: Any = np.uint16,\n consolidate: bool = True,\n) -> None:\n \"\"\"Set the OMERO metadata for the image.\n\n Args:\n labels (list[str] | int | None): The labels of the channels.\n wavelength_ids (list[str] | None): The wavelengths of the channels.\n colors (list[str] | None): The colors of the channels.\n active (list[bool] | None): Whether the channels are active.\n start_percentile (float | None): The start percentile for computing the data\n range. If None, the start is the same as the min value of the data type.\n end_percentile (float | None): The end percentile for for computing the data\n range. If None, the start is the same as the max value of the data type.\n data_type (Any): The data type of the image.\n consolidate (bool): Whether to consolidate the metadata.\n \"\"\"\n if labels is None:\n ref = self.get_image()\n labels = ref.num_channels\n\n if start_percentile is not None and end_percentile is not None:\n start, end = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n elif start_percentile is None and end_percentile is None:\n raise ValueError(\"Both start and end percentiles cannot be None.\")\n elif end_percentile is None and start_percentile is not None:\n raise ValueError(\n \"End percentile cannot be None if start percentile is not.\"\n )\n else:\n start, end = None, None\n\n self.image_meta.lazy_init_omero(\n labels=labels,\n wavelength_ids=wavelength_ids,\n colors=colors,\n start=start,\n end=end,\n active=active,\n data_type=data_type,\n )\n\n if consolidate:\n self._image_meta.write_meta(self.image_meta)\n
"},{"location":"api/core/#ngio.core.NgffImage.update_omero_window","title":"update_omero_window(start_percentile: int = 1, end_percentile: int = 99, min_value: int | float | None = None, max_value: int | float | None = None) -> None
","text":"Update the OMERO window.
This sets up percentile-based values for the window of each channel.
Parameters:
start_percentile
(int
, default: 1
) \u2013 The start percentile.
end_percentile
(int
, default: 99
) \u2013 The end percentile
min_value
(int | float | None
, default: None
) \u2013 The minimum value of the window.
max_value
(int | float | None
, default: None
) \u2013 The maximum value of the window.
ngio/core/ngff_image.py
def update_omero_window(\n self,\n start_percentile: int = 1,\n end_percentile: int = 99,\n min_value: int | float | None = None,\n max_value: int | float | None = None,\n) -> None:\n \"\"\"Update the OMERO window.\n\n This will setup percentiles based values for the window of each channel.\n\n Args:\n start_percentile (int): The start percentile.\n end_percentile (int): The end percentile\n min_value (int | float | None): The minimum value of the window.\n max_value (int | float | None): The maximum value of the window.\n\n \"\"\"\n start, ends = self._compute_percentiles(\n start_percentile=start_percentile, end_percentile=end_percentile\n )\n meta = self.image_meta\n ref_image = self.get_image()\n\n for func in [np.iinfo, np.finfo]:\n try:\n type_max = func(ref_image.on_disk_array.dtype).max\n type_min = func(ref_image.on_disk_array.dtype).min\n break\n except ValueError:\n continue\n else:\n raise ValueError(\"Data type not recognized.\")\n\n if min_value is None:\n min_value = type_min\n if max_value is None:\n max_value = type_max\n\n num_c = ref_image.dimensions.get(\"c\", 1)\n\n if meta.omero is None:\n raise NotImplementedError(\n \"OMERO metadata not found. \" \" Please add OMERO metadata to the image.\"\n )\n\n channel_list = meta.omero.channels\n if len(channel_list) != num_c:\n raise ValueError(\"The number of channels does not match the image.\")\n\n if len(channel_list) != len(start):\n raise ValueError(\"The number of channels does not match the image.\")\n\n for c, (channel, s, e) in enumerate(\n zip(channel_list, start, ends, strict=True)\n ):\n channel.channel_visualisation.start = s\n channel.channel_visualisation.end = e\n channel.channel_visualisation.min = min_value\n channel.channel_visualisation.max = max_value\n\n ngio_logger.info(\n f\"Updated window for channel {channel.label}. \"\n f\"Start: {start_percentile}, End: {end_percentile}\"\n )\n meta.omero.channels[c] = channel\n\n self._image_meta.write_meta(meta)\n
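The per-channel percentile computation at the heart of update_omero_window can be sketched with plain numpy (this mirrors the logic above but skips dask and metadata handling; it is not the ngio implementation):

```python
import numpy as np

def compute_windows(stack: np.ndarray, start_pct: float = 1, end_pct: float = 99):
    """Per-channel percentile window; `stack` has shape (c, ...)."""
    starts, ends = [], []
    for c in range(stack.shape[0]):
        lo, hi = np.percentile(stack[c].ravel(), [start_pct, end_pct],
                               method="nearest")
        starts.append(lo)
        ends.append(hi)
    return starts, ends

# Two-channel synthetic image, same dtype as the example data
rng = np.random.default_rng(0)
img = rng.integers(0, 500, size=(2, 64, 64)).astype(np.uint16)
starts, ends = compute_windows(img)
```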
"},{"location":"notebooks/basic_usage/","title":"OME-Zarr Image Exploration","text":"In\u00a0[1]: Copied! from ngio.core import NgffImage\n\n# Ngio can stream data from any fsspec-compatible store\npath = \"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0\"\nngff_image = NgffImage(path, \"r\")\nfrom ngio.core import NgffImage # Ngio can stream data from any fsspec-compatible store path = \"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0\" ngff_image = NgffImage(path, \"r\")
The ngff_image
object provides a high-level interface to read, write and manipulate OME-Zarr images.
Printing the image will show some overview information:
print(ngff_image)\nprint(ngff_image)
NGFFImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n paths=['0', '1', '2', '3', '4'], \n labels=['nuclei', 'wf_2_labels', 'wf_3_labels', 'wf_4_labels'], \n tables=['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4'], \n)\n
From the NgffImage
object we can easily access the image data (at any resolution level), the labels, and the tables.
Get a single level
of the image pyramid as Image
(to know more about the Image
class, please refer to the Image notebook). The Image
object is the main object to interact with the image. It contains methods to interact with the image data and metadata.
from ngio.ngff_meta import PixelSize\n\n# 1. Get image from highest resolution (default)\nimage = ngff_image.get_image()\nprint(image)\n\n# 2. Get image from a specific level using the path keyword\nimage = ngff_image.get_image(path=\"1\")\nprint(image)\n\n# 3. Get image from a specific pixel size using the pixel_size keyword\nimage = ngff_image.get_image(pixel_size=PixelSize(x=0.65, y=0.65, z=1))\nprint(image)\nfrom ngio.ngff_meta import PixelSize # 1. Get image from highest resolution (default) image = ngff_image.get_image() print(image) # 2. Get image from a specific level using the path keyword image = ngff_image.get_image(path=\"1\") print(image) # 3. Get image from a specific pixel size using the pixel_size keyword image = ngff_image.get_image(pixel_size=PixelSize(x=0.65, y=0.65, z=1)) print(image)
Image(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=0,\n PixelSize(x=0.1625, y=0.1625, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=4320, x=5120),\n)\nImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=1,\n PixelSize(x=0.325, y=0.325, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=2160, x=2560),\n)\nImage(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/, \n path=2,\n PixelSize(x=0.65, y=0.65, z=1.0, unit=micrometer),\n Dimensions(c=3, z=1, y=1080, x=1280),\n)\n
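As the printed objects show, each pyramid level doubles the xy pixel size while halving the y/x dimensions. The relationship can be expressed directly (values taken from the output above):

```python
# xy pixel size per pyramid level, starting from 0.1625 um at level "0"
base = 0.1625
level_pixel_sizes = [base * 2**level for level in range(3)]
# levels "0", "1", "2" correspond to 0.1625, 0.325, 0.65 um per pixel
```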
The Image
object provides a high-level interface to read and write image data at a specific resolution level.
print(\"Shape\", image.shape)\nprint(\"Axes\", image.axes_names)\nprint(\"PixelSize\", image.pixel_size)\nprint(\"Dimensions\", image.dimensions)\nprint(\"Channel Names\", image.channel_labels)\nprint(\"Shape\", image.shape) print(\"Axes\", image.axes_names) print(\"PixelSize\", image.pixel_size) print(\"Dimensions\", image.dimensions) print(\"Channel Names\", image.channel_labels)
Shape (3, 1, 1080, 1280)\nAxes ['c', 'z', 'y', 'x']\nPixelSize PixelSize(x=0.65, y=0.65, z=1.0, unit=micrometer)\nDimensions Dimensions(c=3, z=1, y=1080, x=1280)\nChannel Names ['DAPI', 'nanog', 'Lamin B1']\nIn\u00a0[5]: Copied!
# Get data as a numpy array or a dask array\ndata = image.get_array(c=0, mode=\"numpy\")\nprint(data)\n\ndask_data = image.get_array(c=0, mode=\"dask\")\ndask_data\n# Get data as a numpy array or a dask array data = image.get_array(c=0, mode=\"numpy\") print(data) dask_data = image.get_array(c=0, mode=\"dask\") dask_data
[[[280 258 223 ... 252 248 274]\n [288 284 269 ... 206 236 259]\n [334 358 351 ... 212 240 259]\n ...\n [ 70 87 102 ... 2 5 4]\n [186 184 209 ... 2 4 4]\n [209 220 223 ... 4 1 7]]]\nOut[5]: Array Chunk Bytes 2.64 MiB 2.64 MiB Shape (1, 1080, 1280) (1, 1080, 1280) Dask graph 1 chunks in 3 graph layers Data type uint16 numpy.ndarray 1280 1080 1
ngio
design is to always provide the data in a canonical axis order (t
, c
, z
, y
, x
) regardless of the order on disk. The Image
object provides methods to access the data in this order. If you want to access data or metadata in the on-disk order, you can do so using on_disk_{method_name}
methods.
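The canonical reordering can be sketched with plain numpy; the helper below is illustrative only, not the ngio implementation:

```python
import numpy as np

CANONICAL = ["t", "c", "z", "y", "x"]

def to_canonical(arr: np.ndarray, on_disk_axes: list) -> np.ndarray:
    """Reorder arr's axes from their on-disk order to the canonical order."""
    present = [ax for ax in CANONICAL if ax in on_disk_axes]
    perm = [on_disk_axes.index(ax) for ax in present]
    return np.transpose(arr, perm)

arr = np.zeros((1080, 1280, 3))                 # stored on disk as (y, x, c)
canonical = to_canonical(arr, ["y", "x", "c"])  # reordered to (c, y, x)
```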
print(\"On-disk shape\", image.on_disk_shape)\nprint(\"On-disk array\", image.on_disk_array)\nprint(\"On-disk dask array\", image.on_disk_dask_array)\nprint(\"On-disk shape\", image.on_disk_shape) print(\"On-disk array\", image.on_disk_array) print(\"On-disk dask array\", image.on_disk_dask_array)
On-disk shape (3, 1, 1080, 1280)\nOn-disk array <zarr.core.Array '/2' (3, 1, 1080, 1280) uint16>\nOn-disk dask array dask.array<from-zarr, shape=(3, 1, 1080, 1280), dtype=uint16, chunksize=(1, 1, 1080, 1280), chunktype=numpy.ndarray>\nIn\u00a0[7]: Copied!
print(\"List of Labels: \", ngff_image.labels.list())\n\nlabel_nuclei = ngff_image.labels.get_label(\"nuclei\", path=\"0\")\nprint(label_nuclei)\nprint(\"List of Labels: \", ngff_image.labels.list()) label_nuclei = ngff_image.labels.get_label(\"nuclei\", path=\"0\") print(label_nuclei)
List of Labels: ['nuclei', 'wf_2_labels', 'wf_3_labels', 'wf_4_labels']\nLabel(group_path=/home/runner/work/ngio/ngio/data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0/labels/nuclei, \n path=0,\n name=nuclei,\n PixelSize(x=0.1625, y=0.1625, z=1.0, unit=micrometer),\n Dimensions(z=1, y=4320, x=5120),\n)\nIn\u00a0[8]: Copied!
print(\"List of Tables: \", ngff_image.tables.list())\nprint(\" - Feature tables: \", ngff_image.tables.list(table_type='feature_table'))\nprint(\" - Roi tables: \", ngff_image.tables.list(table_type='roi_table'))\nprint(\" - Masking Roi tables: \", ngff_image.tables.list(table_type='masking_roi_table'))\nprint(\"List of Tables: \", ngff_image.tables.list()) print(\" - Feature tables: \", ngff_image.tables.list(table_type='feature_table')) print(\" - Roi tables: \", ngff_image.tables.list(table_type='roi_table')) print(\" - Masking Roi tables: \", ngff_image.tables.list(table_type='masking_roi_table'))
List of Tables: ['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\n - Feature tables: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\n - Roi tables: ['FOV_ROI_table', 'well_ROI_table']\n - Masking Roi tables: ['nuclei_ROI_table']\nIn\u00a0[9]: Copied!
# Loading a table\nfeature_table = ngff_image.tables.get_table(\"regionprops_DAPI\")\nfeature_table.table\n# Loading a table feature_table = ngff_image.tables.get_table(\"regionprops_DAPI\") feature_table.table
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/anndata/_core/aligned_df.py:68: ImplicitModificationWarning: Transforming to str index.\n warnings.warn(\"Transforming to str index.\", ImplicitModificationWarning)\nOut[9]: area bbox_area equivalent_diameter max_intensity mean_intensity min_intensity standard_deviation_intensity label 1 2120.0 2655.0 15.938437 476.0 278.635864 86.0 54.343792 2 327.0 456.0 8.547709 604.0 324.162079 118.0 90.847092 3 1381.0 1749.0 13.816510 386.0 212.682114 60.0 50.169601 4 2566.0 3588.0 16.985800 497.0 251.731491 61.0 53.307186 5 4201.0 5472.0 20.019413 466.0 223.862885 51.0 56.719025 ... ... ... ... ... ... ... ... 3002 1026.0 1288.0 12.513618 589.0 308.404480 132.0 64.681778 3003 859.0 1080.0 11.794101 400.0 270.349243 107.0 49.040470 3004 508.0 660.0 9.899693 314.0 205.043304 82.0 33.249981 3005 369.0 440.0 8.899028 376.0 217.970184 82.0 50.978519 3006 278.0 330.0 8.097459 339.0 217.996399 100.0 38.510067
3006 rows \u00d7 7 columns
In\u00a0[10]: # Loading a roi table\nroi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\n\nprint(f\"{roi_table.field_indexes=}\")\nprint(f\"{roi_table.get_roi('FOV_1')=}\")\n\nroi_table.table
roi_table.field_indexes=['FOV_1', 'FOV_2', 'FOV_3', 'FOV_4']\nroi_table.get_roi('FOV_1')=WorldCooROI(x_length=416.0, y_length=351.0, z_length=1.0, x=0.0, y=0.0, z=0.0)\nOut[10]: x_micrometer y_micrometer z_micrometer len_x_micrometer len_y_micrometer len_z_micrometer x_micrometer_original y_micrometer_original FieldIndex FOV_1 0.0 0.0 0.0 416.0 351.0 1.0 -1448.300049 -1517.699951 FOV_2 416.0 0.0 0.0 416.0 351.0 1.0 -1032.300049 -1517.699951 FOV_3 0.0 351.0 0.0 416.0 351.0 1.0 -1448.300049 -1166.699951 FOV_4 416.0 351.0 0.0 416.0 351.0 1.0 -1032.300049 -1166.699951
ROIs can be used to index image and label data.
In\u00a0[11]: import matplotlib.pyplot as plt\n\n# Plotting a single ROI\nroi = roi_table.get_roi(\"FOV_1\")\nroi_data = image.get_array_from_roi(roi, c=0, mode=\"numpy\")\nplt.title(\"ROI: FOV_1\")\nplt.imshow(roi_data[0], cmap=\"gray\")\nplt.axis(\"off\")\nplt.show()\nIn\u00a0[12]:
new_ngff_image = ngff_image.derive_new_image(\"../../data/new_ome.zarr\", name=\"new_image\")\nprint(new_ngff_image)
NGFFImage(group_path=/home/runner/work/ngio/ngio/data/new_ome.zarr/, \n paths=['0', '1', '2', '3', '4'], \n labels=[], \n tables=[], \n)\nIn\u00a0[13]:
from ngio.core.utils import get_fsspec_http_store\n\n# Ngio can stream data from any fsspec-compatible store\nurl = \"https://raw.githubusercontent.com/fractal-analytics-platform/fractal-ome-zarr-examples/refs/heads/main/v04/20200812-CardiomyocyteDifferentiation14-Cycle1_B_03_mip.zarr/\"\nstore = get_fsspec_http_store(url)\nngff_image = NgffImage(store, \"r\")\n\nprint(ngff_image)
NGFFImage(group_path=https://raw.githubusercontent.com/fractal-analytics-platform/fractal-ome-zarr-examples/refs/heads/main/v04/20200812-CardiomyocyteDifferentiation14-Cycle1_B_03_mip.zarr/, \n paths=['0', '1', '2', '3'], \n labels=['nuclei'], \n tables=['FOV_ROI_table', 'nuclei_ROI_table', 'well_ROI_table', 'regionprops_DAPI'], \n)\n"},{"location":"notebooks/basic_usage/#ome-zarr-image-exploration","title":"OME-Zarr Image Exploration\u00b6","text":"
In this notebook we will show how to use the NgffImage class to explore and manage an OME-NGFF image.
For this example we will use a small example image that can be downloaded from the following link: example ome-zarr
"},{"location":"notebooks/basic_usage/#setup","title":"Setup\u00b6","text":"You can download the example image (on Linux and macOS) by running the following command:
bash setup_data.sh\n
from the root of the repository.
"},{"location":"notebooks/basic_usage/#ngffimage","title":"NgffImage\u00b6","text":"The NgffImage
provides a high-level interface to read, write and manipulate NGFF images. A NgffImage
can be created from a store-like object (e.g. a path to a directory, or a URL) or from a zarr.Group
object.
The NgffImage
can also be used to load labels from an OME-NGFF
file and behave similarly to the Image
object.
The NgffImage
can also be used to load tables from an OME-NGFF
file.
ngio
supports three types of tables:
features table
A simple table to store features associated with a label.roi table
A table to store regions of interest.masking roi tables
A table to store single-object bounding boxes associated with a label.When processing an image, it is often useful to derive a new image from the original image. The NgffImage
class provides a method to derive a new image from the original image. When deriving a new image, a new NgffImage
object is created with the same metadata as the original image. Optionally, the user can specify different metadata for the new image (e.g. different channel names).
The NgffImage
class can also be used to stream an image over HTTP. This is useful when the image is stored on a remote server and you want to access it without downloading the entire image. All features of the NgffImage
class are available when streaming an image over HTTP (besides anything that requires writing to the image).
import matplotlib.pyplot as plt\n\nfrom ngio.core.ngff_image import NgffImage\n\nngff_image = NgffImage(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1_mip.zarr/B/03/0\")\nIn\u00a0[2]:
image = ngff_image.get_image()\n\nprint(\"Image information:\")\nprint(f\"{image.shape=}\")\nprint(f\"{image.axes_names=}\")\nprint(f\"{image.pixel_size=}\")\nprint(f\"{image.channel_labels=}\")\nprint(f\"{image.dimensions=}\")
Image information:\nimage.shape=(3, 1, 4320, 5120)\nimage.axes_names=['c', 'z', 'y', 'x']\nimage.pixel_size=PixelSize(x=0.1625, y=0.1625, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)\nimage.channel_labels=['DAPI', 'nanog', 'Lamin B1']\nimage.dimensions=Dimensions(c=3, z=1, y=4320, x=5120)\n
The Image
object created is a lazy object, meaning that the image is not loaded into memory until it is needed. To get the image data from disk we can use the .on_disk_array
attribute, or we can get it as a dask.array
object using the .on_disk_dask_array
attribute.
# Get image as a dask array\ndask_array = image.on_disk_dask_array\ndask_array\nOut[3]: Array Chunk Bytes 126.56 MiB 10.55 MiB Shape (3, 1, 4320, 5120) (1, 1, 2160, 2560) Dask graph 12 chunks in 2 graph layers Data type uint16 numpy.ndarray
Note: directly accessing the .on_disk_array
or .on_disk_dask_array
attributes will load the image as stored in the file.
Since, in principle, images can have different axes orders, a safer way to access the image data is to use the .get_array()
method, which will return the image data in canonical order (TCZYX).
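As an illustration of the kind of axis reordering involved, here is a generic NumPy sketch (the on-disk layout used here is hypothetical, chosen only to show the idea; it is not how this dataset is stored):

```python
import numpy as np

# Suppose an image were stored on disk in ('z', 'y', 'x', 'c') order.
on_disk = np.zeros((1, 4320, 5120, 3), dtype=np.uint16)

# Moving the channel axis to the front yields the canonical CZYX order,
# which is what a get_array-style accessor returns regardless of storage layout.
canonical = np.moveaxis(on_disk, source=3, destination=0)
print(canonical.shape)  # (3, 1, 4320, 5120)
```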
image_numpy = image.get_array(c=0, x=slice(0, 250), y=slice(0, 250), preserve_dimensions=False, mode=\"numpy\")\n\nprint(f\"{image_numpy.shape=}\")
image_numpy.shape=(1, 250, 250)\nIn\u00a0[5]:
roi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\nroi = roi_table.get_roi(\"FOV_1\")\nprint(f\"{roi=}\")\n\nimage_roi_1 = image.get_array_from_roi(roi=roi, c=0, preserve_dimensions=True, mode=\"dask\")\nimage_roi_1
roi=WorldCooROI(x_length=416.0, y_length=351.0, z_length=1.0, x=0.0, y=0.0, z=0.0)\nOut[5]: Array Chunk Bytes 10.55 MiB 10.55 MiB Shape (1, 1, 2160, 2560) (1, 1, 2160, 2560) Dask graph 1 chunks in 3 graph layers Data type uint16 numpy.ndarray
The roi object is defined in physical coordinates and can be used to extract the region of interest from the image or label at any resolution.
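The conversion between physical and pixel coordinates is a division by the pixel size. A minimal sketch with a hypothetical helper (not part of the ngio API), using the FOV_1 values shown above:

```python
def roi_to_slices(x, y, x_length, y_length, pixel_size_xy):
    """Convert a world-coordinate ROI (micrometers) to y/x pixel slices."""
    x0 = round(x / pixel_size_xy)
    y0 = round(y / pixel_size_xy)
    x1 = round((x + x_length) / pixel_size_xy)
    y1 = round((y + y_length) / pixel_size_xy)
    return slice(y0, y1), slice(x0, x1)

# FOV_1 at full resolution (pixel size 0.1625 um) spans 2160 x 2560 pixels,
# matching the ROI array shape shown above.
ys, xs = roi_to_slices(0.0, 0.0, 416.0, 351.0, 0.1625)
print(ys, xs)  # slice(0, 2160, None) slice(0, 2560, None)
```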
In\u00a0[6]: image_2 = ngff_image.get_image(path=\"2\")\n# Two images at different resolutions\nprint(f\"{image.pixel_size=}\")\nprint(f\"{image_2.pixel_size=}\")\n\n# Get roi for higher resolution image\nimage_1_roi_1 = image.get_array_from_roi(roi=roi, c=0, preserve_dimensions=False)\n\n# Get roi for lower resolution image\nimage_2_roi_1 = image_2.get_array_from_roi(roi=roi, c=0, preserve_dimensions=False)\n\n# Plot the two images side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image_1_roi_1[0], cmap=\"gray\")\naxs[1].imshow(image_2_roi_1[0], cmap=\"gray\")\nplt.show()
image.pixel_size=PixelSize(x=0.1625, y=0.1625, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)\nimage_2.pixel_size=PixelSize(x=0.65, y=0.65, z=1.0, unit=<SpaceUnits.micrometer: 'micrometer'>, virtual=False)\nIn\u00a0[7]:
import numpy as np\n\n# Get a small slice of the image\nsmall_slice = image.get_array(x=slice(1000, 2000), y=slice(1000, 2000))\n\n# Set the sample slice to zeros\nzeros_slice = np.zeros_like(small_slice)\nimage.set_array(patch=zeros_slice, x=slice(1000, 2000), y=slice(1000, 2000))\n\n\n# Load the image from disk and show the edited image\nnuclei = ngff_image.labels.get_label(\"nuclei\")\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].imshow(nuclei.on_disk_array[0])\nfor ax in axs:\n    ax.axis(\"off\")\nplt.tight_layout()\nplt.show()\n\n# Add back the original slice to the image\nimage.set_array(patch=small_slice, x=slice(1000, 2000), y=slice(1000, 2000))\nIn\u00a0[8]:
# Create a new label object and set it to a simple segmentation\nnew_label = ngff_image.labels.derive(\"new_label\", overwrite=True)\n\nsimple_segmentation = image.on_disk_array[0] > 100\nnew_label.on_disk_array[...] = simple_segmentation\n\n# make a subplot with two images shown side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].imshow(new_label.on_disk_array[0], cmap=\"gray\")\nfor ax in axs:\n    ax.axis(\"off\")\nplt.tight_layout()\nplt.show()\nIn\u00a0[9]:
label_0 = ngff_image.labels.get_label(\"new_label\", path=\"0\")\nlabel_2 = ngff_image.labels.get_label(\"new_label\", path=\"2\")\n\nlabel_before_consolidation = label_2.on_disk_array[...]\n\n# Consolidate the label\nlabel_0.consolidate()\n\nlabel_after_consolidation = label_2.on_disk_array[...]\n\n\n# make a subplot with two images shown side by side\nfig, axs = plt.subplots(1, 2, figsize=(10, 5))\naxs[0].imshow(label_before_consolidation[0], cmap=\"gray\")\naxs[1].imshow(label_after_consolidation[0], cmap=\"gray\")\nfor ax in axs:\n    ax.axis(\"off\")\nplt.tight_layout()\nplt.show()\nIn\u00a0[10]:
import numpy as np\nimport pandas as pd\n\nprint(f\"List of feature table: {ngff_image.tables.list(table_type='feature_table')}\")\n\n\nnuclei = ngff_image.labels.get_label('nuclei')\n\n# Create a table with random features for each nuclei in each ROI\nlist_of_records = []\nfor roi in roi_table.rois:\n    nuclei_in_roi = nuclei.get_array_from_roi(roi, mode='numpy')\n    for nuclei_id in np.unique(nuclei_in_roi)[1:]:\n        list_of_records.append(\n            {\"label\": nuclei_id,\n             \"feat1\": np.random.rand(),\n             \"feat2\": np.random.rand(),\n             \"ROI\": roi.infos.get(\"FieldIndex\")}\n        )\n\nfeat_df = pd.DataFrame.from_records(list_of_records)\n\n# Create a new feature table\nfeat_table = ngff_image.tables.new(name='new_feature_table',\n                                   label_image='../nuclei',\n                                   table_type='feature_table',\n                                   overwrite=True)\n\nprint(f\"New list of feature table: {ngff_image.tables.list(table_type='feature_table')}\")\nfeat_table.set_table(feat_df)\nfeat_table.consolidate()\n\nfeat_table.table
List of feature table: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4']\nNew list of feature table: ['regionprops_DAPI', 'nuclei_measurements_wf3', 'nuclei_measurements_wf4', 'nuclei_lamin_measurements_wf4', 'new_feature_table']\n
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/anndata/_core/anndata.py:1754: UserWarning: Observation names are not unique. To make them unique, call `.obs_names_make_unique`.\n utils.warn_names_duplicates(\"obs\")\nOut[10]: feat1 feat2 ROI label 1 0.594384 0.075957 FOV_1 2 0.498476 0.372709 FOV_1 3 0.469159 0.203546 FOV_1 4 0.087192 0.458499 FOV_1 5 0.768503 0.729454 FOV_1 ... ... ... ... 2987 0.689012 0.732178 FOV_4 2991 0.360491 0.116559 FOV_4 2993 0.264360 0.233126 FOV_4 2995 0.591531 0.128540 FOV_4 2996 0.194133 0.681111 FOV_4
3091 rows \u00d7 3 columns
"},{"location":"notebooks/image/#imageslabelstables","title":"Images/Labels/Tables\u00b6","text":"In this notebook we will show how to use the Image
, Label
and Table
objects to do image processing.
Images can be loaded from a NgffImage
object.
roi
objects from a roi_table
can be used to extract a region of interest from an image or a label.
Similarly to the .get_array()
we can use the .set_array()
(or set_array_from_roi
) method to write part of an image to disk.
When doing image analysis, we often need to create new labels or tables. The ngff_image
allows us to simply create new labels and tables.
Every time we modify a label or an image, we are modifying the on-disk data on one layer only. This means that if we have the image saved in multiple resolutions, we need to consolidate the changes to all resolutions. To do so, we can use the .consolidate()
method.
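Conceptually, consolidation re-derives each lower-resolution level of the pyramid from the edited level. A toy NumPy illustration of the idea (2x2 block averaging stands in here for whatever downscaling the library actually applies):

```python
import numpy as np

# An edited full-resolution level...
level0 = np.arange(16, dtype=float).reshape(4, 4)

# ...propagated to the next pyramid level by 2x2 block averaging.
level1 = level0.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(level1.shape)  # (2, 2)
```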
We can simply create a new table by creating a new Table
object from a pandas dataframe. For a simple feature table, the only requirement is an integer column named label
that will be used to link the table to the objects in the image.
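As an illustration, a minimal dataframe satisfying this requirement (the feature columns and values here are made up):

```python
import pandas as pd

# The integer `label` column links each row to an object in the label image;
# all other columns are free-form feature values.
feat_df = pd.DataFrame({
    "label": [1, 2, 3],
    "area": [120.0, 85.5, 240.2],
    "mean_intensity": [310.4, 198.7, 255.1],
})
print(feat_df.dtypes["label"])  # int64
```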
import matplotlib.pyplot as plt\n\nfrom ngio.core import NgffImage\n\nngff_image = NgffImage(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1.zarr/B/03/0\")\nIn\u00a0[2]:
mip_ngff = ngff_image.derive_new_image(\"../../data/20200812-CardiomyocyteDifferentiation14-Cycle1.zarr/B/03/0_mip\",\n                                       name=\"MIP\",\n                                       on_disk_shape=(1, 1, 2160, 5120))\nIn\u00a0[3]:
# Get the source image\nsource_image = ngff_image.get_image()\nprint(\"Source image loaded with shape:\", source_image.shape)\n\n# Get the MIP image\nmip_image = mip_ngff.get_image()\nprint(\"MIP image loaded with shape:\", mip_image.shape)\n\n# Get a ROI table\nroi_table = ngff_image.tables.get_table(\"FOV_ROI_table\")\nprint(\"ROI table loaded with\", len(roi_table.rois), \"ROIs\")\n\n# For each ROI in the table\n# - get the data from the source image\n# - calculate the MIP\n# - set the data in the MIP image\nfor roi in roi_table.rois:\n    print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n    patch = source_image.get_array_from_roi(roi)\n    mip_patch = patch.max(axis=1, keepdims=True)\n    mip_image.set_array_from_roi(patch=mip_patch, roi=roi)\n\nprint(\"MIP image saved\")\n\nplt.figure(figsize=(5, 5))\nplt.title(\"Mip\")\nplt.imshow(mip_image.on_disk_array[0, 0, :, :], cmap=\"gray\")\nplt.axis('off')\nplt.tight_layout()\nplt.show()
Source image loaded with shape: (1, 2, 2160, 5120)\nMIP image loaded with shape: (1, 1, 2160, 5120)\nROI table loaded with 2 ROIs\n - Processing ROI None\n - Processing ROI None\nMIP image saved\nIn\u00a0[4]:
# Get the MIP image at a lower resolution\nmip_image_2 = mip_ngff.get_image(path=\"2\")\n\nimage_before_consolidation = mip_image_2.get_array(c=0, z=0)\n\n# Consolidate the pyramid\nmip_image.consolidate()\n\nimage_after_consolidation = mip_image_2.get_array(c=0, z=0)\n\nfig, axs = plt.subplots(2, 1, figsize=(10, 5))\naxs[0].set_title(\"Before consolidation\")\naxs[0].imshow(image_before_consolidation, cmap=\"gray\")\naxs[1].set_title(\"After consolidation\")\naxs[1].imshow(image_after_consolidation, cmap=\"gray\")\nfor ax in axs:\n    ax.axis('off')\nplt.tight_layout()\nplt.show()\nIn\u00a0[5]:
mip_roi_table = mip_ngff.tables.new(\"FOV_ROI_table\", overwrite=True)\n\nroi_list = []\nfor roi in roi_table.rois:\n    print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n    roi.z_length = 1  # In the MIP image, the z dimension is 1\n    roi_list.append(roi)\n\nmip_roi_table.set_rois(roi_list, overwrite=True)\nmip_roi_table.consolidate()\n\nmip_roi_table.table
 - Processing ROI None\n - Processing ROI None\nOut[5]: x_micrometer y_micrometer z_micrometer len_x_micrometer len_y_micrometer len_z_micrometer x_micrometer_original y_micrometer_original FieldIndex FOV_1 0.0 0.0 0.0 416.0 351.0 1 -1448.300049 -1517.699951 FOV_2 416.0 0.0 0.0 416.0 351.0 1 -1032.300049 -1517.699951 In\u00a0[6]:
# Setup a simple segmentation function\n\nimport numpy as np\nfrom matplotlib.colors import ListedColormap\nfrom skimage.filters import threshold_otsu\nfrom skimage.measure import label\n\nrand_cmap = np.random.rand(1000, 3)\nrand_cmap[0] = 0\nrand_cmap = ListedColormap(rand_cmap)\n\n\ndef otsu_threshold_segmentation(image: np.ndarray, max_label: int) -> np.ndarray:\n    \"\"\"Simple segmentation using Otsu thresholding.\"\"\"\n    threshold = threshold_otsu(image)\n    binary = image > threshold\n    label_image = label(binary)\n    label_image += max_label\n    label_image = np.where(binary, label_image, 0)\n    return label_image\nIn\u00a0[7]:
nuclei_image = mip_ngff.labels.derive(name=\"nuclei\", overwrite=True)\nIn\u00a0[8]:
# Get the source image\nsource_image = mip_ngff.get_image()\nprint(\"Source image loaded with shape:\", source_image.shape)\n\n# Get a ROI table\nroi_table = mip_ngff.tables.get_table(\"FOV_ROI_table\")\nprint(\"ROI table loaded with\", len(roi_table.rois), \"ROIs\")\n\n# Find the DAPI channel\ndapi_idx = source_image.get_channel_idx(label=\"DAPI\")\n\n# For each ROI in the table\n# - get the data from the source image\n# - calculate the Segmentation\n# - set the data in segmentation image\nmax_label = 0\nfor roi in roi_table.rois:\n    print(f\" - Processing ROI {roi.infos.get('field_index')}\")\n    patch = source_image.get_array_from_roi(roi, c=dapi_idx)\n    segmentation = otsu_threshold_segmentation(patch, max_label)\n\n    # Add the max label of the previous segmentation to avoid overlapping labels\n    max_label = segmentation.max()\n\n    nuclei_image.set_array_from_roi(patch=segmentation, roi=roi)\n\n# Consolidate the segmentation image\nnuclei_image.consolidate()\n\nprint(\"Segmentation image saved\")\nfig, axs = plt.subplots(2, 1, figsize=(10, 5))\naxs[0].set_title(\"MIP\")\naxs[0].imshow(source_image.on_disk_array[0, 0], cmap=\"gray\")\naxs[1].set_title(\"Nuclei segmentation\")\naxs[1].imshow(nuclei_image.on_disk_array[0], cmap=rand_cmap, interpolation='nearest')\nfor ax in axs:\n    ax.axis('off')\nplt.tight_layout()\nplt.show()
Source image loaded with shape: (1, 1, 2160, 5120)\nROI table loaded with 2 ROIs\n - Processing ROI None\n
- Processing ROI None\n
Segmentation image saved\nIn\u00a0[\u00a0]:
\n"},{"location":"notebooks/processing/#processing","title":"Processing\u00b6","text":"
In this notebook we will implement a couple of mock image analysis workflows using ngio
.
In this workflow we will read a volumetric image and create a maximum intensity projection (MIP) along the z-axis.
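The projection itself is a single reduction over the z axis. A small NumPy sketch of the idea (the notebook below applies the same max to each ROI patch):

```python
import numpy as np

# A toy (z, y, x) volume: the MIP keeps, per pixel, the brightest value across z.
volume = np.array([
    [[1, 7], [3, 0]],
    [[5, 2], [9, 4]],
])
mip = volume.max(axis=0, keepdims=True)
print(mip.shape)  # (1, 2, 2)
```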
"},{"location":"notebooks/processing/#step-1-create-a-ngff-image","title":"step 1: Create a ngff image\u00b6","text":"For this example we will use the following publicly available image
"},{"location":"notebooks/processing/#step-2-create-a-new-ngff-image-to-store-the-mip","title":"step 2: Create a new ngff image to store the MIP\u00b6","text":""},{"location":"notebooks/processing/#step-3-run-the-workflow","title":"step 3: Run the workflow\u00b6","text":"For each roi in the image, create a MIP and store it in the new image
"},{"location":"notebooks/processing/#step-4-consolidate-the-results-important","title":"step 4: Consolidate the results (!!! Important)\u00b6","text":"Here we wrote the MIP image to a single level of the image pyramid. To truly consolidate the results, we need to write the MIP to all levels of the image pyramid. We can do this by calling the .consolidate()
method on the image.
As a final step, we will create a new ROI table that contains the MIPs as ROIs, where we correct the z
bounds of the ROIs to reflect the MIP.
Now we can use the MIP image to segment the image using a simple thresholding algorithm.
"},{"location":"notebooks/processing/#step-1-derive-a-new-label-image-from-the-mip-image","title":"step 1: Derive a new label image from the MIP image\u00b6","text":""},{"location":"notebooks/processing/#step-2-run-the-workflow","title":"step 2: Run the workflow\u00b6","text":""}]} \ No newline at end of file