Implement support for subcoordinate systems in the y-axis #5840
Conversation
Just as a note-to-self to document where .subplot is in Bokeh: https://github.com/bokeh/bokeh/blob/branch-3.3/src/bokeh/plotting/_figure.py#L225 |
Codecov Report
@@ Coverage Diff @@
## main #5840 +/- ##
===========================================
- Coverage 88.39% 23.30% -65.10%
===========================================
Files 312 313 +1
Lines 64967 65149 +182
===========================================
- Hits 57429 15180 -42249
- Misses 7538 49969 +42431
Flags with carried forward coverage won't be shown.
... and 296 files with indirect coverage changes. |
This is great! From a first pass, I've identified a few related issues:
Screen.Recording.2023-08-03.at.9.36.36.AM.mov
|
UPDATE: I'm not so sure what's happening anymore... the data for a particular curve seems to be allowed to go beyond its assigned subcoordinate_y range, so I'm just confused now. Here is the code to reproduce:
|
pinging @mattpap so we can stay aligned on this effort |
import numpy as np
import holoviews as hv
hv.extension('bokeh')
x = np.linspace(0, 10, 100)
y = np.sin(x)
curve = hv.Curve((x, y)).opts(subcoordinate_y=(0,1))
vspan = hv.VSpan(2, 4)
curve *= vspan
curve

Traceback:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/IPython/core/formatters.py:974, in MimeBundleFormatter.__call__(self, obj, include, exclude)
    971 method = get_real_method(obj, self.print_method)
    973 if method is not None:
--> 974     return method(include=include, exclude=exclude)
    975 return None
    976 else:
File ~/src/holoviews/holoviews/core/dimension.py:1287, in Dimensioned._repr_mimebundle_(self, include, exclude)
File ~/src/holoviews/holoviews/core/options.py:1423, in Store.render(cls, obj)
File ~/src/holoviews/holoviews/ipython/display_hooks.py:280, in pprint_display(obj)
File ~/src/holoviews/holoviews/ipython/display_hooks.py:248, in display(obj, raw_output, **kwargs)
File ~/src/holoviews/holoviews/ipython/display_hooks.py:142, in display_hook.<locals>.wrapped(element)
File ~/src/holoviews/holoviews/ipython/display_hooks.py:188, in element_display(element, max_frames)
File ~/src/holoviews/holoviews/ipython/display_hooks.py:69, in render(obj, **kwargs)
File ~/src/holoviews/holoviews/plotting/renderer.py:397, in Renderer.components(self, obj, fmt, comm, **kwargs)
File ~/src/holoviews/holoviews/plotting/renderer.py:404, in Renderer._render_panel(self, plot, embed, comm)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/viewable.py:736, in Viewable._render_model(self, doc, comm)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/layout/base.py:305, in Panel.get_root(self, doc, comm, preprocess)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/viewable.py:658, in Renderable.get_root(self, doc, comm, preprocess)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/layout/base.py:173, in Panel._get_model(self, doc, root, parent, comm)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/layout/base.py:155, in Panel._get_objects(self, model, old_objects, doc, root, comm)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/pane/holoviews.py:411, in HoloViews._get_model(self, doc, root, parent, comm)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/panel/pane/holoviews.py:506, in HoloViews._render(self, doc, comm, root)
File ~/src/holoviews/holoviews/plotting/bokeh/renderer.py:70, in BokehRenderer.get_plot(self_or_cls, obj, doc, renderer, **kwargs)
File ~/src/holoviews/holoviews/plotting/renderer.py:241, in Renderer.get_plot(self_or_cls, obj, doc, renderer, comm, **kwargs)
File ~/src/holoviews/holoviews/plotting/plot.py:943, in DimensionedPlot.update(self, key)
File ~/src/holoviews/holoviews/plotting/bokeh/element.py:2782, in OverlayPlot.initialize_plot(self, ranges, plot, plots)
File ~/src/holoviews/holoviews/plotting/bokeh/element.py:877, in ElementPlot._update_plot(self, key, plot, element)
File ~/src/holoviews/holoviews/plotting/bokeh/element.py:885, in ElementPlot._update_labels(self, key, plot, element)
File ~/src/holoviews/holoviews/plotting/bokeh/element.py:885, in (.0)
File ~/src/holoviews/holoviews/plotting/bokeh/element.py:815, in ElementPlot._axis_properties(self, axis, key, plot, dimension, ax_mapping)
File <__array_function__ internals>:200, in mean(*args, **kwargs)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3464, in mean(a, axis, dtype, out, keepdims, where)
File ~/opt/miniconda3/envs/neuro-eeg-viewer/lib/python3.9/site-packages/numpy/core/_methods.py:194, in _mean(a, axis, dtype, out, keepdims, where)

TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
```
|
@philippjfr's awesome latest pushes resolve a number of issues:
Remaining issues:
Code:
import numpy as np
import pandas as pd
import holoviews as hv; hv.extension('bokeh')
from holoviews import opts
from holoviews import Dataset
from holoviews.plotting.links import RangeToolLink
from bokeh.models import HoverTool
import panel as pn; pn.extension()
from scipy.stats import zscore
n_channels = 10
n_seconds = 5
fs = 512
max_ch_disp = 5 # max channels to initially display
max_t_disp = 3 # max time in seconds to initially display
total_samples = n_seconds * fs
time = np.linspace(0, n_seconds, total_samples)
data = np.random.randn(n_channels, total_samples).cumsum(axis=1)
channels = ['EEG {}'.format(i) for i in range(n_channels)]
channel_curves = []
ch_subcoord = 1./n_channels
hover = HoverTool(tooltips=[
("Channel", "@channel"),
("Time", "$x s"),
("Amplitude", "$y µV")])
for i, channel_data in enumerate(data):
    ds = Dataset((time, channel_data, channels[i]), ["Time", "Amplitude", "channel"])
    channel_curves.append(hv.Curve(ds, "Time", ["Amplitude", "channel"], label=channels[i]).opts(
        subcoordinate_y=(i*ch_subcoord, (i+1)*ch_subcoord), color="black", line_width=1,
        tools=[hover, 'xwheel_zoom'], shared_axes=False))
annotation = hv.VSpan(0.3, 0.5)
eeg_viewer = (hv.Overlay(channel_curves, kdims="Channel") * annotation).opts(
padding=0, xlabel="Time (s)", ylabel="Channel",
show_legend=False, aspect=1.5, min_height=500, responsive=True,
shared_axes=False, backend_opts={
"x_range.bounds": (time.min(), time.max()),
"y_range.bounds": (0, 1)})
yticks = [( (i*ch_subcoord + (i+1)*ch_subcoord) / 2, ich) for i, ich in enumerate(channels)]
y_positions, _ = zip(*yticks)
z_data = zscore(data, axis=1)
minimap = hv.Image((time, y_positions, z_data), ["Time (s)", "Channel"], "Amplitude (uV)")
minimap = minimap.opts(
cmap="RdBu_r", colorbar=False, xlabel='', alpha=.5, yticks=[yticks[0], yticks[-1]],
height=100, responsive=True, default_tools=[''], shared_axes=False, clim=(-z_data.std()*2.5, z_data.std()*2.5))
if len(channels) < max_ch_disp:
    max_ch_disp = len(channels)
max_y_disp = (max_ch_disp+2)*ch_subcoord
time_s = len(time)/fs
if time_s < max_t_disp:
    max_t_disp = time_s
RangeToolLink(minimap, eeg_viewer, axes=["x", "y"],
boundsx=(None, max_t_disp),
boundsy=(None, max_y_disp))
eeg_app = pn.Column((eeg_viewer + minimap * annotation).cols(1), min_height=650).servable(target='main')
eeg_app |
Very cool! I'm confused about the floating y-origin; that already seems to be implemented in the plot above? In any case I don't think it's a must-have, but it's certainly a nice-to-have. |
I think it just appears to be aligned with the y-axis when the individual traces are scaled down a lot. Here is the same code with some real data; it's scaled down so much that it isn't a very useful view: (Relevant line per
However, when I 'scale up' the traces by adjusting the range that each subplot is allowed to occupy, the data no longer aligns with the y-tick, and it's especially bad when panning across the channels. (Relevant line per
Screen.Recording.2023-08-07.at.8.30.48.AM.mov
I think that instead of defining each subplot by the upper and lower y_range it is allowed to occupy, we will need to define it by an offset for the y-origin, plus a separate parameter for the scaling to use for each group of curves. @philippjfr, @mattpap, do you think this would be possible without major changes to Bokeh? Here is a demonstration of the overlap that we want to achieve. We also want to be able to expose channel-level scaling/zooming, which would cause more or less overlap across channels, and which reinforces the need for an origin-offset approach rather than an upper-and-lower y_range approach. |
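To make the origin-offset idea concrete, here is a minimal arithmetic sketch (the names `origins`, `spacing`, and `scale` are illustrative, not existing HoloViews or Bokeh parameters): each channel is placed by the position of its y-origin, and a single shared scale controls how far a trace may extend around that origin, which is what determines overlap between neighbouring channels.

```python
import numpy as np

n_channels = 10
spacing = 1.0 / n_channels
origins = (np.arange(n_channels) + 0.5) * spacing  # where each channel's y-tick sits

# Shared half-height of each trace in axis units; anything larger than
# spacing / 2 makes neighbouring channels overlap.
scale = spacing

# Equivalent (lo, hi) bands for the current subcoordinate_y=(lo, hi) option,
# except that these bands are allowed to overlap their neighbours.
bands = [(o - scale, o + scale) for o in origins]
yticks = [(o, f"EEG {i}") for i, o in enumerate(origins)]
```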
If I understand the issue correctly, then the solution is to compute the y source range for each sub-coordinate based on the data indices in the viewport (i.e. the indices contained within the cartesian frame) and not all the available indices, as is done now. That would mean using data ranges for y source ranges and making changes to Bokeh to optionally allow data ranges to consider the viewport (the so-called masked indices). I doubt this can be hacked around in HoloViews, because masked indices are not exposed to Python APIs in Bokeh. |
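For reference, the computation described above, sketched with plain NumPy outside of Bokeh (the viewport bounds are hard-coded here because, as noted, masked indices are not exposed to the Python API; the `time`/`data` setup mirrors the earlier snippets):

```python
import numpy as np

fs, n_seconds, n_channels = 512, 5, 10
time = np.linspace(0, n_seconds, n_seconds * fs)
data = np.random.randn(n_channels, n_seconds * fs).cumsum(axis=1)

# Hypothetical current viewport along x, e.g. what p.x_range.start/end would hold.
x0, x1 = 2.0, 4.0
visible = (time >= x0) & (time <= x1)

# Per-channel y source range computed only from the samples inside the viewport,
# rather than from all available samples as is done now.
source_ranges = [(chan[visible].min(), chan[visible].max()) for chan in data]
```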
My interpretation of what you are saying is that we would need to dynamically adjust the y-range for each trace based on what's currently visible, rather than using a fixed y-range for all available data. I think this means that each y-tick would then be centered on the y-range of each subplot's entire viewport-visible data, correct? Or would the y-tick indeed align with the first (left-most) data sample in the current viewport? |
I think what @mattpap is suggesting may indeed be a nice improvement but I'm not sure it addresses the actual request that @droumis had, which I think merely requires setting the target ranges in such a way that the beginning of the trace aligns with the tick and has the desired level of scaling so the individual traces are visible. |
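A rough sketch of that target/source-range idea in plain Bokeh terms (the `scale` argument and the helper function are assumptions for illustration, not something added by this PR): centring the y source range on the first sample pins the start of the trace to the tick at `i`, while `scale` sets how large the trace appears within its band.

```python
from bokeh.models import Range1d

def aligned_ranges(trace, i, scale):
    """Source/target ranges that pin trace[0] to the y-tick at i (illustrative only)."""
    y0 = float(trace[0])
    y_source = Range1d(start=y0 - scale / 2, end=y0 + scale / 2)  # centred on the first sample
    y_target = Range1d(start=i - 0.5, end=i + 0.5)                # band around the tick at i
    return y_source, y_target
```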
@philippjfr, @mattpap: to simplify things, I've decided to drop the requested alignment of the y-tick with the left-most data point. The type of data we are working with for CZI should have very low frequencies ('slow drift') removed prior to plotting, so I think it's good enough as long as the y-tick is reasonably close to the trace. However, reasonable alignment of the y-tick when panning with overlapping ranges is still an issue with this PR, as was shown in this video:
Screen.Recording.2023-08-07.at.8.30.48.AM.mov
Any idea how to resolve this, @philippjfr? As @mattpap mentioned in the meeting yesterday, he does not see such behavior in a pure Bokeh implementation. If we can get this resolved (and also have zoom tools to scale subcoordinate ranges), then I think that's all we need for now. |
TODOS:
channel_curves = {}
for channel_data in ds:
    channel_curves[channel_data.channel] = hv.Curve(channel_data.data, "Time", ["Amplitude", "Channel"])
plot = hv.NdOverlay(channel_curves, kdims="Channel", subcoordinate_y=True)
# Alternatively, to get 10% overlap:
plot = hv.NdOverlay(channel_curves, kdims="Channel", subcoordinate_y=0.1)
thoughts? |
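One possible reading of the fractional-overlap proposal, sketched with plain arithmetic (this mapping is an assumption about what `subcoordinate_y=0.1` could mean, not the implemented behaviour): each of the N curves gets a band of height `(1 + overlap) / N` centred on its tick, so neighbouring bands overlap by the requested fraction of the channel spacing.

```python
def subcoordinate_bands(n_channels, overlap=0.0):
    """Possible (lo, hi) target bands under a fractional-overlap scheme (illustrative)."""
    spacing = 1.0 / n_channels
    half = spacing * (1.0 + overlap) / 2.0
    centers = [(i + 0.5) * spacing for i in range(n_channels)]
    return [(c - half, c + half) for c in centers]

# Example: 10 channels with 10% overlap between neighbouring bands
bands = subcoordinate_bands(10, overlap=0.1)
```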
Reproduction in pure Bokeh for both categorical and linear axis approaches (change the `categorical` flag):
import numpy as np
from bokeh.core.properties import field
from bokeh.io import show
from bokeh.models import (ColumnDataSource, DataRange1d, FactorRange, HoverTool,
Range1d, WheelZoomTool, ZoomInTool, ZoomOutTool)
from bokeh.palettes import Category10
from bokeh.plotting import figure
np.random.seed(0)
categorical = False
n_channels = 10
n_seconds = 15
fs = 512
max_ch_disp = 5 # max channels to initially display
max_t_disp = 3 # max time in seconds to initially display
total_samples = n_seconds * fs
time = np.linspace(0, n_seconds, total_samples)
data = np.random.randn(n_channels, total_samples).cumsum(axis=1)
channels = [f'EEG {i}' for i in range(n_channels)]
hover = HoverTool(tooltips=[
("Channel", "$name"),
("Time", "$x s"),
("Amplitude", "$y µV"),
])
x_range = Range1d(start=time.min(), end=time.max())
#x_range = DataRange1d()
#y_range = DataRange1d() # bug
if categorical:
    y_range = FactorRange(factors=channels)
else:
    y_range = Range1d(start=-0.5, end=len(channels) - 1 + 0.5)
p = figure(x_range=x_range, y_range=y_range, lod_threshold=None)
source = ColumnDataSource(data=dict(time=time))
renderers = []
for i, channel in enumerate(channels):
    if categorical:
        y_target = Range1d(start=i, end=i + 1)
    else:
        y_target = Range1d(start=i - 0.5, end=i + 0.5)
    xy = p.subplot(
        x_source=p.x_range,
        y_source=Range1d(start=data[i].min(), end=data[i].max()),
        #y_source=DataRange1d(),
        x_target=p.x_range,
        y_target=y_target,
    )
    source.data[channel] = data[i]
    line = xy.line(field("time"), field(channel), color=Category10[10][i], source=source, name=channel)
    renderers.append(line)
if not categorical:
    from bokeh.models import FixedTicker
    ticks = list(range(len(channels)))
    p.yaxis.ticker = FixedTicker(ticks=ticks)
    p.yaxis.major_label_overrides = {i: f"EEG {i}" for i in ticks}
wheel_zoom = WheelZoomTool()#renderers=renderers)
zoom_in = ZoomInTool(renderers=renderers)
zoom_out = ZoomOutTool(renderers=renderers)
p.add_tools(wheel_zoom, zoom_in, zoom_out, hover)
p.toolbar.active_scroll = wheel_zoom
show(p) |
Interesting, so in your pure-bokeh version I can't reproduce the panning issue. |
I found the source of the problem: I had been adding the target ranges to the |
Just for reference, some pure Bokeh code to reproduce what we want, i.e. a plot with traces on subcoordinates plus its minimap.
Code
import numpy as np
from bokeh.layouts import column
from bokeh.core.properties import field
from bokeh.io import show
from bokeh.models import (ColumnDataSource, DataRange1d, FactorRange, HoverTool,
Range1d, WheelZoomTool, ZoomInTool, ZoomOutTool, RangeTool)
from bokeh.palettes import Category10
from bokeh.plotting import figure
from scipy.stats import zscore
np.random.seed(0)
categorical = False
n_channels = 10
n_seconds = 15
fs = 512
max_ch_disp = 5 # max channels to initially display
max_t_disp = 3 # max time in seconds to initially display
total_samples = n_seconds * fs
time = np.linspace(0, n_seconds, total_samples)
data = np.random.randn(n_channels, total_samples).cumsum(axis=1)
channels = [f'EEG {i}' for i in range(n_channels)]
hover = HoverTool(tooltips=[
("Channel", "$name"),
("Time", "$x s"),
("Amplitude", "$y µV"),
])
x_range = Range1d(start=time.min(), end=time.max())
#x_range = DataRange1d()
#y_range = DataRange1d() # bug
if categorical:
    y_range = FactorRange(factors=channels)
else:
    y_range = Range1d(start=-0.5, end=len(channels) - 1 + 0.5)
p = figure(x_range=x_range, y_range=y_range, lod_threshold=None)
source = ColumnDataSource(data=dict(time=time))
renderers = []
for i, channel in enumerate(channels):
    if categorical:
        y_target = Range1d(start=i, end=i + 1)
    else:
        y_target = Range1d(start=i - 0.5, end=i + 0.5)
    xy = p.subplot(
        x_source=p.x_range,
        y_source=Range1d(start=data[i].min(), end=data[i].max()),
        #y_source=DataRange1d(),
        x_target=p.x_range,
        y_target=y_target,
    )
    source.data[channel] = data[i]
    line = xy.line(field("time"), field(channel), color=Category10[10][i], source=source, name=channel)
    renderers.append(line)
if not categorical:
    from bokeh.models import FixedTicker
    ticks = list(range(len(channels)))
    p.yaxis.ticker = FixedTicker(ticks=ticks)
    p.yaxis.major_label_overrides = {i: f"EEG {i}" for i in ticks}
wheel_zoom = WheelZoomTool()#renderers=renderers)
zoom_in = ZoomInTool(renderers=renderers)
zoom_out = ZoomOutTool(renderers=renderers)
p.add_tools(wheel_zoom, zoom_in, zoom_out, hover)
p.toolbar.active_scroll = wheel_zoom
z_data = zscore(data, axis=1)
range_tool = RangeTool(x_range=p.x_range, y_range=p.y_range)
range_tool.x_range.update(start=4, end=6)
range_tool.y_range.update(start=2, end=8)
select = figure(height=130, #y_range=p.y_range,
tools="", toolbar_location=None)
select.image(image=[z_data], x=0, y=0, dw=15, dh=11, palette="Sunset11")
select.add_tools(range_tool)
show(column(p, select)) |
I have made changes based on your comments @hoxbro, thanks for the thorough review! On the edge cases you shared, the error displayed in the first one is indeed a little terse 🙃 although the cause is pretty obvious in your example. After your discussion with Philipp, let me know if you want me to investigate how to raise a more informative error (I guess I'll have to traverse all the elements to check that they have unique labels; maybe there's an API for that?). The "same subcoordinate_y" case is a feature request to me: please somehow explode the ticks when they overlap. The last case, where only some curves have subcoordinate_y enabled, will hopefully be supported soon. I'm thinking that users could provide a range for each element that doesn't have to be between 0 and 1; we'd traverse all the elements and collect these ranges to compute v0 and v1. On your comment about whether multiple y-axes and subcoordinates are supported together, I believe that is not the case (haven't tried though!) and that it's very much an edge case. |
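For the label-uniqueness check, one way it could be done with HoloViews' generic `traverse` method (a minimal sketch assuming the overlaid Curve elements carry the labels; the helper and error message are illustrative, not this PR's actual validation code):

```python
from collections import Counter

import holoviews as hv

def validate_subcoordinate_labels(overlay):
    """Collect Curve labels from an overlay and complain about duplicates (illustrative)."""
    labels = overlay.traverse(lambda e: e.label, [hv.Curve])
    duplicates = [label for label, count in Counter(labels).items() if count > 1]
    if duplicates:
        raise ValueError(f"subcoordinate_y requires unique labels; found duplicates: {duplicates}")
    return labels
```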
@hoxbro @philippjfr are there any remaining changes required for this PR to be merged? |
Not sure it's tedious; I just haven't been able to find the internal API to do it. I'll look for it, but I welcome any pointers! I hope having to traverse the objects won't be too costly, since an aspect of this review was performance optimization.
I'll add a check. I'm not sure the property is the right place to make that check, though; I guess those kinds of checks should all be made in the same place, at some point in the initialization phase. I just have to find where that is. |
Found an issue. When trying to overlay an annotation (hv.VSpan), I get |
Also, running Code
|
Linking with CODE: offset approach with RangeToolLink to an overlay
|
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |