Merge pull request #77 from computational-psychology/dev
Dev: refactor, documentation, improve build
LynnSchmittwilken authored Apr 10, 2023
2 parents 4f8a889 + f6a3d98 commit bc1289c
Showing 43 changed files with 1,373 additions and 522 deletions.
37 changes: 37 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,37 @@
name: Release to PyPI

on:
  release:
  workflow_dispatch:

jobs:
  test_publish:
    runs-on: ubuntu-latest

    steps:
      - name: Fetch wheel(s) from release
        uses: dsaltares/[email protected]
        regex: true
        file: '*.whl'
        target: 'dist/'

      - name: Fetch sdist(s) from release
        uses: dsaltares/[email protected]
        regex: true
        file: '*.tar.gz'
        target: 'dist/'

      - name: Publish to TestPyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          user: __token__
          password: ${{ secrets.TEST_PYPI_API_TOKEN }}
          repository_url: https://test.pypi.org/legacy/

      - name: Test install from TestPyPI
        run: |
          pip install \
            --index-url https://test.pypi.org/simple/ \
            --extra-index-url https://pypi.org/simple \
            stimupy
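The final step only checks that `pip` can resolve and install the package from TestPyPI. A minimal follow-up smoke test — purely a sketch, not part of the workflow above — could additionally confirm that the installed distribution imports and reports a version, using only standard package metadata:

```python
# Post-install smoke test (sketch; not part of release.yml above).
import importlib.metadata

import stimupy  # raises ImportError if the TestPyPI install is broken

# Report the installed version, to catch a stale or partial TestPyPI upload.
print("stimupy", importlib.metadata.version("stimupy"))
```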
17 changes: 8 additions & 9 deletions .github/workflows/version.yml
@@ -8,14 +8,15 @@ on:

jobs:
  bump-version:
    if: github.event.pull_request.merged == true
    if: (github.event.pull_request.merged == true) || (github.event_name == 'workflow_dispatch')
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # checkout full commit history
          token: ${{ secrets.GHA_Token }}

      - name: Setup Python
        uses: actions/setup-python@v4
@@ -24,15 +25,13 @@ jobs:

      - name: Configure git to be able to push to repo
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_TOKEN: ${{ secrets.GHA_Token }}
        run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git config user.name token
          git config user.email token@github.com
      - name: Bump version using Semantic Release
      - name: Bump version, build & upload release assets using Semantic Release
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_TOKEN: ${{ secrets.GHA_Token }}
        run: |
          pipx run python-semantic-release version -v DEBUG
          VERSION="v"$(pipx run python-semantic-release print-version --current)
          git push --atomic origin HEAD $VERSION
          pipx run python-semantic-release publish -v DEBUG
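`python-semantic-release` derives the new version number from the commit messages since the last release (conventional-commit prefixes such as `fix:` and `feat:`). The rule it applies is roughly the following — a toy Python illustration using commonly cited defaults, not the library's actual implementation, and the project may configure different rules:

```python
# Toy illustration of semantic-version bumping from conventional commits.
# NOT python-semantic-release's code; prefixes/levels are assumed defaults.

def bump_level(messages):
    """Return 'major', 'minor', 'patch', or None for a list of commit messages."""
    level = None
    for msg in messages:
        header = msg.split(":", 1)[0]
        if "BREAKING CHANGE" in msg or header.endswith("!"):
            return "major"                    # any breaking change wins outright
        if header.startswith("feat"):
            level = "minor"                   # new feature -> minor bump
        elif header.startswith("fix") and level is None:
            level = "patch"                   # bugfix only -> patch bump
    return level

def bump(version, level):
    major, minor, patch = (int(p) for p in version.split("."))
    if level == "major":
        return f"{major + 1}.0.0"
    if level == "minor":
        return f"{major}.{minor + 1}.0"
    if level == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return version                            # no release-worthy commits

print(bump("1.0.2", bump_level(["fix: typo", "feat: new stimulus"])))  # 1.1.0
```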
125 changes: 74 additions & 51 deletions README.md
@@ -1,68 +1,102 @@
# Stimupy

[![Tests](https://github.com/computational-psychology/stimupy/actions/workflows/test.yml/badge.svg)](https://github.com/computational-psychology/stimupy/actions/workflows/test.yml) [![](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![Documentation Status](https://readthedocs.org/projects/stimupy/badge/?version=latest)](https://stimupy.readthedocs.io/en/latest/?badge=latest)

Contains submodules for
- drawing basic visual stimulus components ([components](stimupy/components/))
- creating different parameterized stimuli ([stimuli](stimupy/stimuli/))
- replicating stimuli in certain published papers ([papers](stimupy/papers/))
- converting pixel values to degrees of visual angle ([utils](stimupy/utils/))

`Stimupy` is a pure-Python package
for creating new and existing visual stimuli
<p align=center>
A pure-Python package
for creating new and existing visual stimuli
commonly used in the study of contrast, brightness/lightness,
and other aspects of visual perception.
</p>

[![Tests](https://github.com/computational-psychology/stimupy/actions/workflows/test.yml/badge.svg)](https://github.com/computational-psychology/stimupy/actions/workflows/test.yml) [![](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![Documentation Status](https://readthedocs.org/projects/stimupy/badge/?version=latest)](https://stimupy.readthedocs.io/en/latest/?badge=latest)

---
- documentation: https://stimupy.readthedocs.io/en/latest/
- source code: https://github.com/computational-psychology/stimupy
---

`stimupy` has been designed to:

- *generate* (novel) visual stimuli in a reproducible, flexible, and easy way
- *recreate* exact stimuli as they have been used in prior vision research
- *explore* large parameter spaces to reveal relations between formerly unconnected stimuli
- *provide* classic stimulus sets (e.g. ModelFest),
exactly as described in the original manuscripts (including experimental data)
- *build* new stimulus sets or benchmarks (e.g. for testing computational models),
and easily add them to `stimupy`
- *support* vision science by providing a large, openly-available, and flexible battery of relevant stimulus functions
- *unify and automate* stimulus creation
- be [**FAIR**](https://doi.org/10.1038/s41597-022-01710-x):
**F**indable, **A**ccessible, **I**nteroperable, and **R**eusable

---
## Core features:
Stimupy has been designed to generate stimuli from code,
so that stimulus generation is reproducible, flexible, and easy.
- Stimuli available through stimupy are:
  - basic visual stimulus [components](stimupy/components/),
    such as basic shapes, gratings, Gaussians, Gabors
  - visual [noise](stimupy/noises/) textures, of different kinds,
  - and many different parameterized [stimuli](stimupy/stimuli/),
    most with some special regions of interest,
    such as Simultaneous Brightness Contrast, White's illusion,
    but also Hermann grids, checkerboards, Ponzo illusion, etc.

- All these stimuli are fully parameterizable
with interpretable parameters that are relevant to vision scientists
(e.g. visual angle, spatial frequency, target placements).

- basic visual stimulus [components](https://stimupy.readthedocs.io/en/latest/reference/_api/stimupy.components.html),
such as basic shapes, wave gratings, Gaussians
- visual [noise](https://stimupy.readthedocs.io/en/latest/reference/_api/stimupy.noises.html) textures, of different kinds,
- many different parameterized visual [stimuli](https://stimupy.readthedocs.io/en/latest/reference/_api/stimupy.stimuli.html)
  - Gabors, plaids, edges,
  - a variety of so-called illusions
    (e.g. Simultaneous Brightness Contrast, White's illusion, Hermann grid, Ponzo illusion), and many more

- exact replications of stimuli previously published (e.g. ModelFest)
as described in their respective [papers](stimupy/papers/)

- all stimuli are fully parameterizable
  - with interpretable parameters that are familiar and relevant to vision scientists
    (e.g. visual angle, spatial frequency, target placements; see the resolution sketch after this list).
  - This also makes it possible to explore stimulus parameter spaces,
    which might reveal relations between formerly unconnected stimuli

- Stimuli are also composable/composed:
  `stimuli` tend to be composed from several `components`.
- stimuli are composable/composed:
  - `stimuli` tend to be composed from several `components`,
    and the provided building blocks and masks
    can be used to assemble more complicated geometries

- Generated stimuli are output as a Python `dict`ionary,
  containing the stimulus-image as a NumPy-array,
  together with other useful stimulus-specific information
  (e.g. (target) masks, sizes etc.).
  - Since Python dictionaries are mutable data structures (compared to objects),
    they allow the user to add additional information easily.
  - The image as NumPy-array (rather than, e.g., an OpenGL texture)
    makes these stimuli fully interoperable using common NumPy tooling.
- flexible output structures (a toy sketch follows the overview figure below)
  - generated stimuli are Python `dict`ionaries
    - mutable data structures (compared to objects),
      so they allow the user to add additional information easily
      (e.g. stimulus descriptions, stimulus masks, experimental data).
  - containing the stimulus-image as a NumPy-array,
    - which makes images fully interoperable using common NumPy tooling
      (rather than, e.g., an OpenGL texture),
  - together with other useful stimulus-specific information
    (e.g. (target) masks, sizes etc.).

- In addition, we provide many [utils](stimupy/utils/) functions
to apply common operations to either the images or the full stimulus-`dict`s.
- modular and therefore easy to extend with new stimulus functions,
and new stimulus sets

- [utility functions](https://stimupy.readthedocs.io/en/latest/reference/_api/stimupy.utils.html)
for stimulus import, export, manipulation (e.g. contrast, size), or plotting

- application-oriented [documentation](https://stimupy.readthedocs.io/en/latest/index.html),
including [interactive demonstrations](https://stimupy.readthedocs.io/en/latest/reference/demos.html) of stimulus functions

- unit and integration [tests](https://github.com/computational-psychology/stimupy/actions/workflows/test.yml)

- Reuse of existing stimuli and stimulus sets should be a key aim,
so also included are exact replications of stimuli previously published (e.g. ModelFest)
as described in their respective [papers](stimupy/papers/)
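As referenced in the parameter bullet above, those interpretable parameters boil down to a simple resolution relationship between size in degrees of visual angle, pixels per degree, and pixel shape. A minimal sketch of that bookkeeping — illustrative only; `stimupy`'s own utilities resolve these quantities more generally:

```python
# Sketch of the resolution arithmetic behind visual_size / ppd parameters.
# Illustrative only; not stimupy's own resolution utilities.

def resolve_shape(visual_size_deg, ppd):
    """Pixel shape of an image spanning `visual_size_deg` at `ppd` pixels per degree."""
    height_deg, width_deg = visual_size_deg
    return (round(height_deg * ppd), round(width_deg * ppd))

print(resolve_shape((4.0, 6.0), ppd=32))  # -> (128, 192)
```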

See the [documentation](https://stimupy.readthedocs.io/en/latest/) for more details

![A small fraction of the stimulus variety that ``stimupy`` can produce \label{fig:overview}](manuscript/overview.png)
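To make the "flexible output structures" point above concrete, here is a toy stand-in that mimics the kind of dictionary a stimulus function is described as returning; the key names (`"img"`, `"mask"`, …) and the helper itself are illustrative assumptions, not `stimupy`'s actual API:

```python
# Toy stand-in mimicking the described output structure; not stimupy's API.
import numpy as np

def toy_stimulus(visual_size=(4.0, 4.0), ppd=32):
    """Return a dict shaped like the stimulus output described above."""
    shape = (round(visual_size[0] * ppd), round(visual_size[1] * ppd))
    return {
        "img": np.full(shape, 0.5),              # stimulus image as a NumPy array
        "mask": np.zeros(shape, dtype=int),      # e.g. a (target) mask
        "visual_size": visual_size,              # stimulus-specific metadata
        "ppd": ppd,
    }

stim = toy_stimulus()
stim["experimental_data"] = {"observer_1": 0.73}  # dicts are easy to extend
print(stim["img"].shape, sorted(stim))            # NumPy tooling works directly
```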

---

## Citing stimupy

## Your stimulus (set) is not here?
Given the modular nature of the package,
any stimulus or stimulus set not currently available can be easily added.
Open an [issue](https://github.com/computational-psychology/stimupy/issues/new)
and let us know what you'd like to see added.

If you want to contribute yourself, see [contributing](#contributing-to-stimupy)

If you want to contribute yourself, see [contributing](https://stimupy.readthedocs.io/en/latest/contributing/contributing.html)


---
## Installation

For now, `pip` can install directly from GitHub (the `main` branch)
@@ -100,14 +134,3 @@ Dependencies should be automatically installed (at least using `pip`).
- [Pillow](https://pillow.readthedocs.io/)
- [pandas](https://pandas.pydata.org/)

## Citing stimupy

## Contributing to stimupy
1. *Fork* the [GitHub repository](https://github.com/computational-psychology/stimupy/)
2. *Clone* the repository to your local machine
3. *Install* `stimupy` using the developer install: `pip install -e ".[dev]"`
4. *Edit* the code:
- To contribute a stimulus set, add it to `stimupy/papers/`
- To contribute a stimulus function, add it to the relevant directory
5. *Commit & Push* to your fork
6. *Pull request* from your fork to our repository
23 changes: 22 additions & 1 deletion docs/_config.yml
@@ -41,6 +41,7 @@ sphinx:
    - sphinx.ext.autosummary # generate summary tables of functions in modules
    - sphinx.ext.napoleon # recognize NumPy style docstrings
    - sphinx.ext.viewcode # add links to source code in API reference
    - hoverxref.extension

  config:
    #autosummary_generate: True # autosummary generates module-level .rst files?
@@ -54,6 +55,9 @@ sphinx:
    templates_path: ['_templates'] # Path(s) that contain templates, relative to this config
    exclude_patterns: ['_build', '_templates']
    intersphinx_mapping:
      python:
        - 'https://docs.python.org/3/'
        - null
      numpy [stable]:
        - 'https://numpy.org/doc/stable/'
        - null
@@ -69,4 +73,21 @@ sphinx:
      pillow [latest]:
        - 'https://pillow.readthedocs.io/en/latest/'
        - null
    suppress_warnings: ["etoc.toctree"]
    suppress_warnings: ["etoc.toctree"]

    # Hoverxref Extension
    hoverxref_auto_ref: True
    hoverxref_intersphinx: [
      "python",
      "numpy",
      "matplotlib",
      "scipy",
    ]
    hoverxref_domains: ["py", "numpy", "matplotlib", "scipy"]
    hoverxref_role_types: {
      "hoverxref": "modal",
      "ref": "modal", # for hoverxref_auto_ref config
      "mod": "tooltip",
      "class": "tooltip",
      "func": "tooltip",
    }
16 changes: 9 additions & 7 deletions docs/_toc.yml
@@ -4,23 +4,25 @@
format: jb-book
root: index
parts:
  - caption: Getting started (Tutorial)
  - caption: Getting started with stimupy
    chapters:
      - file: getting_started/installation
      - file: getting_started/getting_started
  - caption: Topic guides
    chapters:
      - file: topic_guides/topic_guides
      - file: topic_guides/organization
      - file: topic_guides/sets_papers
      - file: topic_guides/resolution
      - file: topic_guides/dimensions
      - file: topic_guides/gratings
      - file: topic_guides/visual_noise
      # - file: topic_guides/sets_papers
      # - file: topic_guides/resolution
      # - file: topic_guides/dimensions
      # - file: topic_guides/gratings
      # - file: topic_guides/visual_noise
  - caption: Reference
    chapters:
      - file: reference/api.md
      - file: reference/demos.md
  - caption: Contributing
  - caption: Get in touch
    chapters:
      - file: contributing/get_in_touch
      - file: contributing/contribute
