How we handle rendering #5

Open · BradyAJohnston opened this issue Dec 17, 2024 · 8 comments

@BradyAJohnston (Collaborator)

I've just been playing around with the example .ipynb and got it all to run and render (opened PR #4).

Having gone in to play with some of the render settings, I don't like the approach of calling .visualize() or .render() on an atom group or visualisation object.

Yes, you can create a camera for each object / instance etc., but you will only ever be rendering from a single camera at a time, so I think it makes the most sense to have a single camera object inside the scene, and a single overall object that controls everything about the scene and how it is rendered.

The canvas object that you create with ggmolvis.GGMV() would, I think, be perfect for exactly that.

ggmv = ggmolvis.GGMV()

res_ag = u.select_atoms('resid 127 40')

# these could be identical
res_mol = ggmv.molecule(res_ag)
res_mol = res_ag.visualize()

# frames the camera and renders the image
ggmv.render(res_mol)

ggmv.render() # without anything passed in, it will render everything

# set scene-specific parameters that will be used for every render call
ggmv.background = "white"
ggmv.resolution = (720, 480)
ggmv.engine = "EEVEE"

ggmv.render()

# passing in specific parameters will override the scene parameters for that render only, then they return to normal
ggmv.render(resolution=(1920, 1080))
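
To make the temporary-override behaviour concrete, here's a rough sketch of how render() could stash and restore the scene parameters (purely illustrative, nothing here is implemented):

class GGMV:
    def __init__(self):
        # scene-wide defaults, as in the proposal above
        self.background = "white"
        self.resolution = (720, 480)
        self.engine = "EEVEE"

    def render(self, objects=None, **overrides):
        # stash the scene parameters being overridden for this call only
        saved = {name: getattr(self, name) for name in overrides}
        try:
            for name, value in overrides.items():
                setattr(self, name, value)
            # ... frame the camera and render `objects`
            # (or everything, if objects is None)
        finally:
            # restore the scene-wide defaults afterwards
            for name, value in saved.items():
                setattr(self, name, value)
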
@yuxuanzhuang (Owner)

I like this way of rendering things; especially ggmv.render(res_mol) looks more intuitive to me. I think it also provides an extra capability to render multiple objects, ggmv.render([res_mol1, res_mol2]) (not sure which camera should be used in that case, though).

I think the current mixture of visualize and render usages, which both render things, is indeed confusing. My initial idea in implementing AtomGroup.visualize() was that it is a very simple method that users can directly use to render something (not only show it in Blender) without the need to call anything from ggmolvis.

@BradyAJohnston (Collaborator, Author) commented Dec 18, 2024

If we keep AtomGroup.visualize() as a quick way to just capture an image, it maybe also shouldn't be persistent. This could also be a method available on other analysis objects?

Then for more persistent models and for better customisation, a user would have to call ggmv.render() on an explicitly created molecule.

That way rendering is an explicit operation with setup, while .visualize() or maybe .snapshot() can be used as a quick sanity check while tinkering with your analysis.
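
As a rough sketch of the non-persistent idea (create_model here is a hypothetical stand-in for however ggmolvis builds the 3D object; none of this exists yet):

import bpy

def snapshot(atomgroup, filepath="snapshot.png"):
    # build a temporary 3D model, render it, then remove it again
    # so nothing persists in the scene afterwards
    obj = create_model(atomgroup)  # hypothetical stand-in
    try:
        bpy.context.scene.render.filepath = filepath
        bpy.ops.render.render(write_still=True)
    finally:
        bpy.data.objects.remove(obj, do_unlink=True)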

@tylerjereddy

Hijacking this slightly, but I'm also having fun playing with this now (well, actually using it to try to prepare a report...). I can't ask on Discord during the day since it is blocked at my work, but I have at least git grep'd around to try to find stuff that isn't in the sample notebook/docs.

A few questions/things that might be nice:

  1. I suspect this is easy, since there is set_color in the source code, but when I do e.g. ethyl_mol = ggmv.molecule(ethyl, color="green"), where ethyl is a regular AtomGroup, I don't actually get an error, but I also don't see a color change in the PNG from the final render() call. Is this straightforward to do for coloring the different ggmv.molecules in a scene?
  2. Even in the example notebook, one thing that confuses me is that you call render() on a single ggmv.molecule() even when you have several such molecules, and you get a render that includes all of them? I'm not sure this is so different from Brady's feedback above about scoping levels.
  3. How hard is it likely to be to add text (an obvious use case is e.g. the frame number or simulation time below the molecule, or maybe a progress-bar type thing) to the renders?
  4. Might be nice: a tqdm-style progress bar when rendering a single frame or a multi-frame movie, I suppose (to be fair there is some Remaining: time text once the sampling starts, I think, though there's a lot of other output around it as well).

@BradyAJohnston (Collaborator, Author)

Thanks for stress testing it @tylerjereddy! It is likely very obvious, but it's extremely messy and early days.

  1. Currently the 3D model that is created uses a Molecular Nodes node tree, which includes a node for assigning colors. Regardless of the colors that are assigned to vertices, they will get overridden inside the node tree. We need to update it so that colors can be assigned to the mesh, but there's also a discussion to be had around how much the user-facing code should interact with node trees (messy to do via code) vs attempting to do things outside of node trees. It's hard to balance because everything in MN is designed with interaction via a GUI node tree in mind, so it requires some rethinking about how things should work in terms of the big picture.
  2. From the moment you call ggmv = ggmolvis.GGMV(), there exists a 3D scene. Whenever you load a trajectory / molecule you are adding 3D models to the scene, which is why the others show up when you just want to render the latest one. This does, like you said, overlap with what I was wondering earlier about how we handle rendering. We would need to disable the visibility of the other objects when rendering a single one (quite easy, not yet implemented; see the sketch after this list), which I think makes more sense in my proposed setup.
  3. There would be two methods for adding text overlays: creating 3D text objects that exist in the scene and are then rendered, or creating overlays with the gpu module inside of Blender (examples here: Visualization of MDAnalysis results in Blender MDAnalysis/mdanalysis#4862), which I don't know how well would work with the headless scripting that we are doing. A couple of approaches, but I am unsure which would be best.
  4. The progress text / absolute deluge of information that Blender prints to the console is a part I've really been struggling with myself. You can very aggressively capture all possible outputs to stop it printing a wall of text in a notebook, but then you can't get any progress bar of any kind. I believe this would have to be tackled at the Blender level, as the printing is handled by the underlying C++ Blender code, so it's hard to stop it while still printing useful information (I couldn't figure out a way).
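
For point 2, here is a minimal sketch of what that visibility toggling could look like (render_only is a hypothetical helper, not part of ggmolvis):

import bpy

def render_only(target, filepath="render.png"):
    # hide every other mesh object from the render, then restore
    # each object's previous visibility afterwards
    others = [obj for obj in bpy.context.scene.objects
              if obj is not target and obj.type == 'MESH']
    saved = {obj: obj.hide_render for obj in others}
    try:
        for obj in others:
            obj.hide_render = True
        bpy.context.scene.render.filepath = filepath
        bpy.ops.render.render(write_still=True)
    finally:
        for obj, state in saved.items():
            obj.hide_render = state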

@tylerjereddy commented Jan 3, 2025

I'm trying to hack it, but am not having success yet:

import bpy

obj = bpy.context.object
for modifier in obj.modifiers:
    if modifier.type == 'NODES':
        node_tree = modifier.node_group
        print(node_tree.nodes)
        for node in node_tree.nodes:
            if node.type == 'GROUP':
                print(node.name)
                group_tree = node.node_tree
                print("---------")
                for sub_node in group_tree.nodes:
                    print(sub_node.type)
                    # sub_node.inputs["Red"].default_value = 1.0
                    # sub_node.inputs["Green"].default_value = 0.0
                    # sub_node.inputs["Blue"].default_value = 0.0
                    # sub_node.inputs["Alpha"].default_value = 1.0
                print("---------")

This does seem to print out names that remind me of some of the geometry node components from working with MN, I believe, but figuring out how to set the colors appropriately and get the "flow" right seems a little tricky...

Style Spheres
---------
GROUP_INPUT
GROUP_OUTPUT
JOIN_GEOMETRY
SEPARATE_GEOMETRY
GROUP
GROUP
REALIZE_INSTANCES
---------
Set Color
---------
GROUP_INPUT
STORE_NAMED_ATTRIBUTE
GROUP_OUTPUT
---------
Color Common
---------
GROUP_OUTPUT
GROUP_INPUT
GROUP
GROUP
INDEX_SWITCH
GROUP
SWITCH
---------
Color Attribute Random
---------
GROUP_OUTPUT
COMBINE_COLOR
INPUT_ATTRIBUTE
RANDOM_VALUE
GROUP_INPUT
MATH
INPUT_INT
---------

@tylerjereddy

Ah, this hack seems to finally change the colors by molecule:

import bpy

colors = ((1, 0, 0, 1),
          (0, 1, 0, 1),
          (0, 0, 1, 1),
          (1, 1, 1, 1))

idx = 0
for sub_obj in bpy.context.scene.objects:
    if sub_obj.name.startswith("atoms.") and "camera" not in sub_obj.name:
        print("sub_obj:", sub_obj, sub_obj.name)

        # first pass: remove the group nodes that override per-vertex colors
        for modifier in sub_obj.modifiers:
            if modifier.type == 'NODES':
                node_tree = modifier.node_group
                # iterate over a copy, since nodes are removed while looping
                for node in list(node_tree.nodes):
                    if node.type == 'GROUP' and node.name in {"Color Attribute Random", "Color Common"}:
                        node_tree.nodes.remove(node)

        # second pass: set the Color input on the remaining group nodes
        for modifier in sub_obj.modifiers:
            if modifier.type == 'NODES':
                node_tree = modifier.node_group
                for node in node_tree.nodes:
                    if node.type == 'GROUP' and "Color" in node.inputs:
                        print("changing color for node.name:", node.name)
                        node.inputs["Color"].default_value = colors[idx]
        idx += 1

@BradyAJohnston (Collaborator, Author)

You're right that it reminds you of working with MN through the GUI, because all we are doing is manipulating the node tree through scripting instead of dragging and dropping.

By default, currently the starting node tree looks like this:

[image: the default starting node tree]

As you will no doubt have discovered, it's a total mess to try and manipulate anything inside the node tree via Python. We should instead have an option that imports it without the color nodes, but in the meantime you can use this script to quickly remove the color-related nodes, cleaning up the connections.

import bpy

o = bpy.context.active_object
mod = o.modifiers['MolecularNodes']
tree = mod.node_group

# iterate over a copy, since nodes are removed while looping
for node in list(tree.nodes):
    if "Color" in node.name:
        tree.nodes.remove(node)

tree.links.new(tree.nodes['Group Input'].outputs['Geometry'], tree.nodes['Style Spheres'].inputs['Atoms'])

[image: the node tree after removing the color nodes]

Now that there isn't anything inside of the node tree modifying the Color attribute, we can modify it ourselves via Python / numpy and the results should show up in the final generated geometry (which is how I imagine the interface working going forward).

import databpy
import bpy
import numpy as np

# wrap the object for nicer interfacing with the attributes
bob = databpy.BlenderObject(bpy.context.active_object)

bob.store_named_attribute(np.random.uniform(0, 1, size=(len(bob), 4)), 'Color')

This uses the databpy module, which I have broken out from inside of Molecular Nodes into its own package for re-use by other projects. It's a nicer way to interface with Blender's objects and attributes via numpy. ggmolvis isn't currently updated to use it but will be shortly (#8); in the meantime it can just be installed from PyPI.

@PardhavMaradani

Here are some of my thoughts:

About the color issue raised by Tyler above:

This is not completely implemented yet - see here.
The existing code attempts to do this by removing the link between the Color Common and Set Color nodes and setting the default value of the Color input, as seen here (from the starting node tree that Brady posted above).

Another quick workaround based on how it is supposed to work when fully implemented is:

import bpy

ng = bpy.context.active_object.modifiers["MolecularNodes"].node_group
ci = ng.nodes["Set Color"].inputs["Color"]
if ci.links:  # remove the incoming link, if one exists
    ng.links.remove(ci.links[0])
ci.default_value = (0, 1, 0, 1)  # RGBA

Coming back to the original issue about how we handle rendering: the current approach of creating individual objects (even though they are grouped into a collection) could run into several potential issues in the future. This is even without considering that they have individual cameras pointing at them, which is a separate issue (more on this later). We are responsible for the entire lifecycle of any objects we create, and the less we can get away with creating (without losing any functionality), the better.

Completely separate objects with their own geometry node trees could initially make it simple to program, but in complex scenes (with multiple universes, multiple selections across them, annotations, other additional components from analysis results, etc.) they could soon get in each other's way (in renders) unless their visibility is carefully controlled. This will also lead to a huge usability issue if we consider a potential GUI mode option, with too many loosely hanging objects in the scene.

Instead, we could potentially consider the same approach we have in MDAnalysis with the Universe as the starting point. We could have something like:

u = mda.Universe(PSF, DCD)
ggmv = ggmolvis.GGMV()
vu = ggmv.universe(u, name='...', ...)

where vu is a visual wrapper over a regular universe. This would lead to a single universe object (from Blender's perspective). Various styles, annotations, and analysis details can be added to individual selections. This will require some heavy lifting within the API code to manage the geometry node tree (doable) but might be cleaner in the long run. Most analysis results that apply to universes can also be neatly folded in, as opposed to being separate entities. Styling for individual selections could look something like:

vu.apply_style(selection='protein', style='cartoon', style_options={...})
vu.apply_style(selection='protein', style='surface', style_options={...}) # append
vu.apply_style(selection='resid 127 40', style='ball_and_stick', ...)
vu.apply_style(selection=['sel1', 'sel2', ...], ...) # multiple selections
vu.clear_styles()
...

Though the above examples are shown with selection strings, there could be similar variants for AtomGroups directly. This also clearly separates out multiple universes.

This approach also fits in well with GUI mode. In GUI mode, there could be a separate MDAnalysis tab in Blender's Sidebar (N-panel) of the 3D viewport. It is here that all GUI controls for the various analyses would show up (nicely grouped). Styling of individual selections, annotations, etc. could also be added through simple GUI elements (without having to interact with GN/MN node trees): for example, a place to enter MDAnalysis selection strings, to select different styles and the configuration elements specific to them, and to apply them (using the same API underneath). All of these have to be anchored to an active object, and starting with the Universe as above will allow this, which wouldn't be the case with the current approach.

About multiple cameras:

Having individual cameras pointing at selections can make quick visualisations of them easy, but it doesn't solve the problem that we still need a way to define a 'region of interest' that could span different universes and different selections within them, taking into account additional analysis components (like, say, a density volume), showing different annotations, etc., and capture that.

A single global camera in GGMolVis can achieve all of that - as long as we keep pointing it where we want to. This will keep things much simpler and also play well with a potential GUI mode.
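
As a rough illustration (not ggmolvis code) of how a single global camera could be pointed at a region of interest spanning several objects:

import bpy
from mathutils import Vector

def point_camera_at(objects, camera=None):
    # aim the scene camera at the centroid of the objects' bounding
    # boxes; framing / zooming to fit is left out of this sketch
    camera = camera or bpy.context.scene.camera
    centers = []
    for obj in objects:
        corners = [obj.matrix_world @ Vector(corner) for corner in obj.bound_box]
        centers.append(sum(corners, Vector()) / 8)
    target = sum(centers, Vector()) / len(centers)
    # an empty object at the target makes a convenient tracking anchor
    anchor = bpy.data.objects.new("roi_anchor", None)
    bpy.context.scene.collection.objects.link(anchor)
    anchor.location = target
    constraint = camera.constraints.new(type='TRACK_TO')
    constraint.target = anchor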

About Blender's timeline:

Though this wasn't brought up, I'm sharing my thoughts given that it is relevant to rendering. Blender has a single timeline that is used in a variety of ways to create animations. We should not explicitly tie our universe(s) frames to Blender's timeline, given that we can have multiple universes with varying trajectory lengths and requirements to keep some selections/universes static, etc. MN already has an Update with Scene option and individual nodes for animation. Keeping these decoupled, but with an ability to link/unlink virtually, will provide the maximum flexibility to render animations.
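
As a sketch of what a virtual link/unlink could look like (the handler and its parameters are hypothetical, not an existing ggmolvis or MN API):

import bpy

def make_frame_handler(universe, offset=0, stride=1):
    # map Blender's timeline to this universe's trajectory; each
    # universe registers its own mapping, or none at all to stay static
    def on_frame(scene, *args):
        frame = (scene.frame_current - offset) * stride
        frame = max(0, min(frame, len(universe.trajectory) - 1))
        universe.trajectory[frame]  # seek; updating the geometry not shown
    return on_frame

# link a universe to the timeline (remove the handler again to unlink):
# bpy.app.handlers.frame_change_pre.append(make_frame_handler(u, offset=10))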


One potential approach that could make designing and evaluating the API a bit more streamlined is to list out all current and potential use cases (ordered from simple to complex) and evaluate different approaches against them - this will ensure we don't miss any and also get a sense of the extensibility that they provide for future use cases.
