SPARC Scaffolding in Python (SPARC-SPy)

A Python tool to enhance the accessibility of SPARC dataset visualisations and their analyses in accordance with FAIR principles.


Table of contents

  • About
  • Introduction
  • The problem
  • Our solution - (SPARC-SPy)
  • Impact
  • Setting up SPARC-SPy
  • Using SPARC-SPy
  • Reporting issues
  • Contributing
  • Project structure
  • Cite us
  • FAIR practices
  • License
  • Team
  • Acknowledgements

About

This is the repository of team SPARC-SPy (Team #3) of the 2024 SPARC Codeathon. Information about the 2024 SPARC Codeathon can be found here.

No work was done on this project prior to the Codeathon.

Introduction

The NIH Common Fund program Stimulating Peripheral Activity to Relieve Conditions (SPARC) seeks to understand how electrical signals control internal organ function. In doing so, it explores how therapeutic devices might modulate nerve activity to treat conditions like hypertension, heart failure, and gastrointestinal disorders. To this end, data have been compiled from 60+ research groups, involving 3900+ subjects across 8 species from 49 different anatomical structures.

The SPARC Portal offers a user-friendly interface to access and share resources from the SPARC community. It features well-curated, high-impact data, SPARC projects, and computational simulations, all available under the “Find Data” section.

The problem

In the current landscape of data science and research, visualizing data is crucial for analysis, interpretation, and communication. However, existing tools for reconstructing visualizations from datasets are limited in their accessibility and interoperability. The primary tool available is restricted to the Windows operating system, creating significant barriers for users on other platforms such as macOS and Linux. This limitation hinders the application of the FAIR principles (Findable, Accessible, Interoperable, and Reusable) to data visualization:

Limited Accessibility:

  • Researchers and data scientists using non-Windows operating systems are unable to access the existing tool, leading to inefficiencies and potential data silos.

Poor Interoperability:

  • The existing tool may not support integration with other widely-used data analysis tools or workflows, making it difficult to share and collaborate on visualizations across different platforms and software environments.

Challenges in Reusability:

  • Without a standardized approach to creating and sharing visualizations, researchers may struggle to replicate or adapt visualizations for different datasets or research contexts.

Our solution - (SPARC-SPy)

We have developed a cross-platform Python visualisation tool, SPARC Scaffolding in Python (SPARC-SPy), that runs within o2S2PARC and produces VTK visualisations from data scaffolds. This Python module enhances the FAIRness of SPARC data in the following ways (a minimal, illustrative code sketch follows the list):

  • Findability
    • Enhanced Metadata: The tool can extract and attach metadata to visualizations, making it easier to locate specific datasets and their visual representations.
    • Searchability: By tagging visualizations with relevant keywords and descriptions, users can quickly find the visual data they need.
  • Accessibility
    • User-Friendly Interface: A well-designed tool can provide an intuitive interface for accessing and generating visualizations, lowering the barrier for users with varying levels of technical expertise.
    • Lightweight: A universally implementable visualisation tool that can run within o2S2PARC while accessing visualisations of curated SDS datasets and their metadata (using the Pennsieve API).
    • Open Access: If the tool is open-source or freely available, it ensures that a wider audience can access and use it without restrictions.
  • Interoperability
    • Standard Formats: The tool can support and export visualizations in standardized formats (e.g., JSON & VTK at present - can be expanded further), ensuring compatibility with other tools and platforms.
    • APIs and Integration: By providing APIs and integration capabilities, the tool can work seamlessly with other data analysis and visualization workflows, promoting interoperability.
  • Reusability
    • Documentation and Templates: The tool includes comprehensive documentation and reusable templates for common visualization types, making it easier for users to replicate and adapt visualizations for their own datasets.
    • Version Control: Implementing version control for visualizations ensures that users can track changes and reuse previous versions as needed.
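
As a hedged illustration of the interoperability point above, the sketch below converts a scaffold-like JSON description into a standard VTK file. It uses the plain `vtk` package rather than the SPARC-SPy API (whose exact interface may differ), and the JSON layout (`nodes`, `elements`) and file names are assumptions made for this example.

```python
# Illustrative sketch only: the SPARC-SPy API itself may differ.
# Shows the kind of JSON-to-VTK conversion the interoperability point refers to,
# using the standard vtk package. The scaffold JSON layout ("nodes", "elements")
# and the file names are assumptions made for this example.
import json

import vtk

with open("scaffold.json") as f:              # hypothetical scaffold export
    scaffold = json.load(f)

points = vtk.vtkPoints()
for x, y, z in scaffold["nodes"]:             # assumed list of node coordinates
    points.InsertNextPoint(x, y, z)

lines = vtk.vtkCellArray()
for start, end in scaffold["elements"]:       # assumed 1D line elements
    line = vtk.vtkLine()
    line.GetPointIds().SetId(0, start)
    line.GetPointIds().SetId(1, end)
    lines.InsertNextCell(line)

polydata = vtk.vtkPolyData()
polydata.SetPoints(points)
polydata.SetLines(lines)

# Export to a standard VTK XML file that any VTK-aware tool can read.
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName("scaffold.vtp")
writer.SetInputData(polydata)
writer.Write()
```

The resulting `.vtp` file can then be opened in any VTK-compatible viewer, which is what makes the exported visualisations portable across platforms and workflows.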

Impact

Improve existing capabilities of SPARC tools with direct integration

The SPARC-SPy tool has been developed to integrate with existing SPARC tools such as Pennsieve and sparc-me. This allows for a streamlined process within the SPARC ecosystem, from downloading datasets to generating visualisations. By supporting standardised data formats, the tool is highly interoperable with existing tools, improving the capabilities and experience of the SPARC platform. The capabilities of SPARC-SPy extend further: it can query metadata embedded within the visualisations to provide powerful analyses (e.g. scaffold volume). The tool is provided alongside comprehensive documentation to ensure a user-friendly experience, empowering researchers to integrate SPARC-SPy into their workflows for more consistent and reproducible visualisations.
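
To make the scaffold-volume example concrete, here is a hedged sketch of how such a quantity could be computed from a closed scaffold surface using plain VTK's `vtkMassProperties`. This is not SPARC-SPy's own analytics interface, and the file name is a placeholder.

```python
# Illustrative sketch only: SPARC-SPy's own analytics interface may differ.
# Shows one way a scaffold volume could be computed from a closed surface mesh
# with plain VTK; the file name is a placeholder.
import vtk

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("scaffold.vtp")            # hypothetical scaffold surface export
reader.Update()

# vtkMassProperties expects a closed, triangulated surface.
triangulate = vtk.vtkTriangleFilter()
triangulate.SetInputConnection(reader.GetOutputPort())

mass = vtk.vtkMassProperties()
mass.SetInputConnection(triangulate.GetOutputPort())
mass.Update()

print(f"Scaffold volume: {mass.GetVolume():.2f} (in the mesh's own units)")
print(f"Surface area:    {mass.GetSurfaceArea():.2f}")
```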

Increase visibility of the value within SPARC's public data

Visualizations can make complex data more engaging and easier to communicate to a broader audience, including those without a technical background. Using SPARC-SPy for reconstructing visualizations, researchers can more effectively analyze and interpret SPARC's public data, making it more accessible and understandable, which in turn increases its visibility and impact. The tool can help users discover new insights and patterns within SPARC's datasets, potentially leading to new research questions, new applications, and, ultimately, effective treatments.


Enable analysis of field data

The SPARC-SPy tool is able to display field data associated with a scaffold. In the example below, axon innervation data digitally traced from a flat-mount sample of the ventral stomach of subject 115 is used to compute a smooth innervation density distribution field. The data are mapped onto a common coordinate framework provided by the generic rat stomach scaffold. The coloured lines represent the traced axon innervation, while the white-to-blue spectrum represents the spatial distribution of innervation density: blue indicates densely innervated areas and white indicates sparsely innervated areas.

[Figure: innervation density field mapped onto the generic rat stomach scaffold]
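
For readers who want to reproduce this style of view outside SPARC-SPy, the sketch below colours a scaffold surface by a scalar field with a white-to-blue lookup table using plain VTK. The file name and the `innervation_density` array name are assumptions; SPARC-SPy wraps this kind of rendering behind its own interface.

```python
# Illustrative sketch only: SPARC-SPy wraps this kind of rendering behind its own
# interface. The file name and the "innervation_density" point-data array are
# assumptions made for this example.
import vtk

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("stomach_scaffold_with_density.vtp")         # hypothetical file
reader.Update()
surface = reader.GetOutput()
surface.GetPointData().SetActiveScalars("innervation_density")  # assumed array name

# White-to-blue lookup table: white = sparse innervation, blue = dense innervation.
lut = vtk.vtkLookupTable()
lut.SetNumberOfTableValues(256)
for i in range(256):
    t = i / 255.0
    lut.SetTableValue(i, 1.0 - t, 1.0 - t, 1.0, 1.0)
lut.Build()

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputData(surface)
mapper.SetLookupTable(lut)
rng_min, rng_max = surface.GetPointData().GetScalars().GetRange()
mapper.SetScalarRange(rng_min, rng_max)

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```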

Setting up SPARC-SPy

Pre-requisites

  • Git
  • Python versions:
    • 3.9

Installing via PyPI

Here is the link to our project on PyPI

pip install sparc-spy

From source code

Downloading source code

Clone the SPARC-SPy repository from GitHub, e.g.:

git clone git@github.com:SPARC-FAIR-codeathon/sparc-spy

Installing dependencies

pip install -r requirements.txt

Using SPARC-SPy

Included are guided tutorials covering some applications of SPARC-SPy:

| Tutorial | Description |
| --- | --- |
| Tutorial 1: Getting started | In this tutorial, we use SPARC-SPy to import a JSON scaffold file from a dataset and visualise it within a Jupyter notebook running on o2S2PARC. |
| Tutorial 2: Finding scaffolds | In this tutorial, we show how SPARC-SPy can be used to identify datasets containing scaffolds using the Pennsieve API. |
| Tutorial 3: Generating analytics | In this tutorial, we show how SPARC-SPy can use scaffolds and their metadata to generate powerful analytics (such as scaffold volume). |
| Tutorial 4: New tags | In this tutorial, we show how visualisations can be tagged with key descriptors to enable users to quickly identify the data they need. |
| Tutorial 5: Into the flow | In this tutorial, we show how SPARC-SPy can be used with existing tools such as sparc-flow to simplify visualisation workflows. |
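
As a taste of the dataset-finding workflow in Tutorial 2, here is a hedged sketch that searches the public Pennsieve Discover API for scaffold-related datasets using `requests`. The endpoint, query parameters, and response field names are assumptions about the public API and may need adjusting against the current Pennsieve documentation; this is not the SPARC-SPy API itself.

```python
# Illustrative sketch only: not the SPARC-SPy API. Endpoint, parameters, and
# response fields are assumptions about the public Pennsieve Discover API.
import requests

response = requests.get(
    "https://api.pennsieve.io/discover/search/datasets",
    params={"query": "scaffold", "limit": 10, "offset": 0},
    timeout=30,
)
response.raise_for_status()

for dataset in response.json().get("datasets", []):
    # Defensive access in case field names differ between API versions.
    print(dataset.get("id"), "-", dataset.get("name"))
```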


Reporting issues

To report an issue or suggest a new feature, please use the issues page. Please check existing issues before submitting a new one.

Contributing

To contribute, fork this repository and submit a pull request. Before submitting a pull request, please read our Contributing Guidelines and Code of Conduct. If you find this tool helpful, please add a GitHub star to support further development!

Project structure

  • /src/ - Directory of the SPARC-SPy Python module.
  • /tutorials/ - Directory of tutorials showcasing the SPARC-SPy Python module in action.

Cite us

If you use SPARC-SPy to make new discoveries or use the source code, please cite us as follows:

Michael Hoffman, Yun Gu, Mishaim Malik, Savindi Wijenayaka, Matthew French (2024). SPARC-SPy: v1.0.0 - A python tool to enhance the accessibility of SPARC dataset visualisations and their analyses in accordance with FAIR principles.
Zenodo. https://doi.org/XXXX/zenodo.XXXX. 

FAIR practices

We have assessed the FAIRness of our SPARC-SPy tool against the FAIR Principles established for research software. The details are available in SPARC-SPy-Fairness.

License

SPARC-SPy is open source and distributed under the Apache License 2.0. See LICENSE for more information.

Team

Acknowledgements

  • We would like to thank the 2024 SPARC Codeathon organizers for their guidance and support during this Codeathon.