We would like to add the option of rendering summary visualizations at the locations of groups/clusters (e.g., determined automatically by HDBSCAN) without requiring user selection.
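A minimal sketch of that idea, assuming a 2-D projected embedding in `points` (the variable name and the random data are placeholders, not part of any existing API):

```python
import numpy as np
import hdbscan

# Placeholder for the projected 2-D embedding coordinates.
points = np.random.rand(500, 2)

# Cluster without any user selection; label -1 marks HDBSCAN noise.
labels = hdbscan.HDBSCAN(min_cluster_size=15).fit_predict(points)

# One anchor position per cluster, on top of which a summary vis
# would be rendered.
anchors = {
    label: points[labels == label].mean(axis=0)
    for label in np.unique(labels)
    if label != -1
}
```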
The end goal would be to, e.g., start with a small number of large clusters and their summary visualizations rendered on top of the cluster centers while the user is zoomed out (e.g., HDBSCAN with a high min_samples). As the user zooms into the embedding or moves the camera, smaller/different clusters are selected (HDBSCAN with a lower min_samples), and the summary visualizations adapt and are rendered on top of the smaller clusters currently in view.
If computing this on the fly turns out to be too expensive, the clusters and summary visualizations might have to be precomputed.
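A rough sketch of how precomputed granularity levels could be keyed to the camera zoom. The min_samples values and zoom thresholds below are made-up examples, not tuned values:

```python
import numpy as np
import hdbscan

points = np.random.rand(500, 2)  # placeholder embedding, as above

# Precompute one clustering per granularity level (coarse to fine).
levels = {}
for min_samples in (50, 20, 5):
    levels[min_samples] = hdbscan.HDBSCAN(
        min_cluster_size=15, min_samples=min_samples
    ).fit_predict(points)

def labels_for_zoom(zoom: float) -> np.ndarray:
    """Pick a precomputed clustering for the current camera zoom."""
    if zoom < 2.0:
        return levels[50]  # zoomed out: few large clusters
    if zoom < 5.0:
        return levels[20]
    return levels[5]       # zoomed in: many small clusters
```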
Here I am using a Python prototype on a chess embedding that runs HDBSCAN and renders the summary vis on top of the clusters to produce a static output image. I do this for several granularity levels and assemble the resulting images into a .webm to give an idea of what this could look like:
PSE_issue_automated_summary_vis_rendering.webm
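For reference, the output stage of such a prototype might look roughly like this. This is an assumed reconstruction, not the actual prototype code; it continues from the `points`/`levels` sketches above and assumes imageio with the ffmpeg plugin is available to write the .webm:

```python
import numpy as np
import matplotlib.pyplot as plt
import imageio.v2 as imageio

frames = []
for min_samples, labels in levels.items():
    # One static frame per granularity level.
    fig, ax = plt.subplots(figsize=(6, 6), dpi=100)
    ax.scatter(points[:, 0], points[:, 1], c=labels, s=4, cmap="tab20")
    ax.set_title(f"min_samples={min_samples}")
    # ... the summary vis would be drawn at each cluster anchor here ...
    fig.canvas.draw()
    frames.append(np.asarray(fig.canvas.buffer_rgba())[:, :, :3])
    plt.close(fig)

imageio.mimsave("PSE_issue_automated_summary_vis_rendering.webm", frames, fps=1)
```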
One open question is how to handle the general summary visualizations, which grow very long as the number of features increases: whether rendering them in full is feasible at all, whether they should be made scrollable, etc.