How we handle rendering #5
I like this way of rendering things. I think there is currently a mixture of usages of …

If we keep … Then for more persistent models and for better customisation, a user would have to call … That way rendering is an explicit operation with setup, while …

---

Hijacking this slightly, but I'm also having fun playing with this now (well, actually using it to try to prepare a report…). I can't ask on Discord during the day since it is blocked at my work, but I have at least a few questions/things that may be nice:

---
Thanks for stress testing it @tylerjereddy! As is likely very obvious, it is extremely messy and early days.

---
I'm trying to hack it, but not having success yet:

```python
import bpy

obj = bpy.context.object
for modifier in obj.modifiers:
    if modifier.type == 'NODES':
        node_tree = modifier.node_group
        print(node_tree.nodes)
        for node in node_tree.nodes:
            if node.type == 'GROUP':
                print(node.name)
                group_tree = node.node_tree
                print("---------")
                for sub_node in group_tree.nodes:
                    print(sub_node.type)
                    # sub_node.inputs["Red"].default_value = 1.0
                    # sub_node.inputs["Green"].default_value = 0.0
                    # sub_node.inputs["Blue"].default_value = 0.0
                    # sub_node.inputs["Alpha"].default_value = 1.0
                print("---------")
```

This does seem to print out names that remind me of some of the geometry view components when working with MN, I believe, but figuring out how to set the colors appropriately and get the "flow" right seems a little tricky...

---
Ah, this hack seems to finally change the colors by molecule:

```python
import bpy

obj = bpy.context.object
colors = ((1, 0, 0, 1),
          (0, 1, 0, 1),
          (0, 0, 1, 1),
          (1, 1, 1, 1))
idx = 0
for sub_obj in bpy.context.scene.objects:
    if sub_obj.name.startswith("atoms.") and "camera" not in sub_obj.name:
        print("sub_obj:", sub_obj, sub_obj.name)
        # first pass: remove the default colour-assignment group nodes
        for modifier in sub_obj.modifiers:
            if modifier.type == 'NODES':
                node_tree = modifier.node_group
                print(node_tree.nodes)
                # iterate over a copy since we remove nodes as we go
                for node in list(node_tree.nodes):
                    if node.type == 'GROUP':
                        group_tree = node.node_tree
                        if node.name in {"Color Attribute Random", "Color Common"}:
                            node_tree.nodes.remove(node)
        # second pass: set the colour on any group node exposing a "Color" input
        for modifier in sub_obj.modifiers:
            if modifier.type == 'NODES':
                node_tree = modifier.node_group
                print(node_tree.nodes)
                for node in node_tree.nodes:
                    if node.type == 'GROUP':
                        print(node.name)
                        group_tree = node.node_tree
                        print("---------")
                        for sub_node in group_tree.nodes:
                            print("sub_node.type:", sub_node.type)
                        if "Color" in node.inputs:
                            print("changing color for node.name:", node.name)
                            color_socket = node.inputs["Color"]
                            color_socket.default_value = colors[idx]
                        print("---------")
                    else:
                        print("other node.type:", node.type)
        idx += 1
```

---
You are right that it reminds you of working with MN through the GUI, because all we are doing is manipulating the node tree through scripting instead of dragging and dropping. By default, the starting node tree currently looks like this:

As you will no doubt have discovered, it's a total mess to try to manipulate anything inside the node tree via Python. We should instead have an option that imports it without the color nodes, but in the meantime you can use this script to quickly remove the color-related nodes, cleaning up the connections:

```python
import bpy

o = bpy.context.active_object
mod = o.modifiers['MolecularNodes']
tree = mod.node_group

# iterate over a copy since we remove nodes as we go
for node in list(tree.nodes):
    if "Color" in node.name:
        tree.nodes.remove(node)

# reconnect the geometry straight into the style node
tree.links.new(tree.nodes['Group Input'].outputs['Geometry'],
               tree.nodes['Style Spheres'].inputs['Atoms'])
```

Now that there isn't anything inside of the node tree that is modifying the `Color` attribute, we can write it directly:

```python
import databpy
import bpy
import numpy as np

# wrap the object for nicer interfacing with the attributes
bob = databpy.BlenderObject(bpy.context.active_object)
bob.store_named_attribute(np.random.uniform(0, 1, size=(len(bob), 4)), 'Color')
```

This uses the `store_named_attribute` helper from `databpy`.
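Building on the `databpy` snippet above, per-molecule colours can also be computed up front as one array instead of random values. A minimal sketch, assuming each atom already carries an integer molecule/group index; `colors_by_group` and the example ids are illustrative, not part of the MN or databpy API:

```python
import numpy as np

def colors_by_group(group_ids, palette):
    """Map each atom's integer group id to an RGBA colour.

    group_ids: (n_atoms,) integer array, one id per atom
    palette:   (n_groups, 4) RGBA array, cycled if there are
               more groups than palette entries
    Returns an (n_atoms, 4) float array suitable for storing
    as a 'Color' attribute.
    """
    palette = np.asarray(palette, dtype=float)
    return palette[np.asarray(group_ids) % len(palette)]

# four molecules coloured red / green / blue / white
palette = np.array([(1, 0, 0, 1),
                    (0, 1, 0, 1),
                    (0, 0, 1, 1),
                    (1, 1, 1, 1)])
group_ids = np.array([0, 0, 1, 2, 3, 3])
colors = colors_by_group(group_ids, palette)
print(colors.shape)  # (6, 4)
```

The resulting array can then be passed to `store_named_attribute` in place of the random one.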
---

Here are some of my thoughts.

About the color issue raised by Tyler above: this is not completely implemented yet - see here. Another quick workaround, based on how it is supposed to work when fully implemented, is:
Coming back to the original issue about how we handle rendering. The current approach of creating individual objects (even though they are grouped into a collection) could run into several potential issues in the future. This is even without considering that they have individual cameras pointing at them, which is a separate issue (more on this later). We are responsible for the entire lifecycle of any objects we create, and the less we can get away with creating (without losing any functionality), the better. Completely separate objects with their own geometry node trees could initially make it simple to program, but in complex scenes (with multiple universes, multiple selections across them, annotations, other additional components from analysis results etc.) they could soon get in each other's way (in renders) unless their visibility is carefully controlled. This will also lead to a huge usability issue if we consider a potential GUI mode option as well, with too many loosely hanging objects in the scene. Instead, we could potentially consider the same approach we have in MDAnalysis with the …

where …

Though the above examples are shown with selection strings, there could be similar variants for … This approach also fits in well with GUI mode. In GUI mode, there could be a separate …

About multiple cameras: having individual cameras pointing to selections can make quick visualisations of them easy, but they don't solve the problem that we still need a way to define a 'region of interest' that could span across different universes and different selections within them, taking into account additional analysis components (like, say, a density volume), showing different annotations etc., and capture that. A single global camera in …

About Blender's timeline: though this wasn't brought up, I'm sharing my thoughts given that this is relevant to rendering. Blender has a single timeline that is used in a variety of ways to create animations. We should not explicitly tie our universe(s) frames to Blender's timeline, given that we can have multiple universes with varying trajectory lengths and requirements to keep some selections/universes static etc. MN already has a …

One potential approach that could make designing and evaluating the API a bit more streamlined is to list out all current and potential use cases (ordered from simple to complex) and evaluate different approaches against them - this will ensure we don't miss any and also get a sense of the extensibility that they provide for future use cases.
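To make the timeline decoupling above concrete, the per-universe frame mapping could be sketched as a plain function. All names and parameters here are hypothetical, not existing MN or ggmolvis API:

```python
def universe_frame(blender_frame, n_frames, offset=0, stride=1, static=False):
    """Map a Blender timeline frame to a trajectory frame index.

    Each universe/selection gets its own offset, stride and
    'static' flag, so trajectories of different lengths can
    share one Blender timeline without being tied to it.
    """
    if static or n_frames <= 1:
        return 0
    frame = (blender_frame - offset) * stride
    return max(0, min(frame, n_frames - 1))  # clamp to the valid range

# a 100-frame trajectory played one frame per Blender frame
print(universe_frame(42, n_frames=100))               # 42
# a short 10-frame trajectory holds its last frame
print(universe_frame(42, n_frames=10))                # 9
# a static reference structure never advances
print(universe_frame(42, n_frames=100, static=True))  # 0
```

A frame-change handler could then evaluate this mapping per universe, rather than binding every trajectory directly to the global timeline.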
---

I've just been playing around with the example `.ipynb` and got it all to run and render (opened PR #4). Having gone in to play with some of the render settings, I don't like the approach of calling `.visualize()` or `.render()` on an atom group or visualisation object. Yes, you can create a camera for each object / instance etc., but you will only ever be rendering from a single camera at a time, so I think it makes the most sense to have a single camera object inside the scene, and a single overall object that controls everything about the scene and how it is rendered.

The canvas object which you create with `ggmolvis.GGMV()` would, I think, be perfect for exactly that.
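The single-camera, single-controller idea could be sketched in plain Python (no `bpy`); `Canvas`, `add` and `frame_all` are hypothetical names, just to illustrate the ownership model:

```python
class Canvas:
    """One object owns the scene: a single camera plus every
    visualisation, so lifecycle and visibility live in one place."""

    def __init__(self):
        # stand-in for the one Blender camera the scene would hold
        self.camera = {"location": (0.0, 0.0, 10.0)}
        self.visuals = []

    def add(self, visual):
        """Register a visualisation with the canvas and return it."""
        self.visuals.append(visual)
        return visual

    def frame_all(self):
        # point the single camera at everything currently visible,
        # instead of keeping one camera per object
        visible = [v for v in self.visuals if v.get("visible", True)]
        return len(visible)

canvas = Canvas()
canvas.add({"name": "protein", "visible": True})
canvas.add({"name": "ligand", "visible": False})
print(canvas.frame_all())  # 1
```

Rendering would then always go through the canvas, which decides what the one camera sees, rather than each atom group carrying its own `.render()`.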