Possibility to export to .GLB / .GLTF / .OBJ formats? #6
Comments
Triangulating in 3D space (Python) >>
Hello @3dcinetv, you can find this solution done by hack-mans, which enables output in PLY format: https://github.com/hack-mans/PanoHead. But I didn't see textures there, nor point color data. It's not everything you need, but it's already something 😊
Thanks for the heads-up, Carlos. Hopefully the developer will be able to output consistent glTF + textures from the reference links I've posted in the original thread.
@3dcinetv The output mesh is reconstructed by sampling the SDF of a 3D voxel volume using Marching Cubes. It only creates the vertices + faces + normals for the PLY mesh; there is no UV or texture information that can be obtained. I'm seeing if it is possible to somehow sample the color and store it in vertex colors, like other 3D AI papers have done.
It uses Trimesh, which does support storing vertex colors.
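For reference, the meshing step described above (Marching Cubes over a sampled SDF volume, plus vertex colors via Trimesh) looks roughly like the sketch below. This is a minimal illustration, not PanoHead's actual code; the `sdf_volume.npy` input and the constant placeholder colour are assumptions.

```python
# Minimal sketch (not the repo's code): surface a sampled SDF volume with
# Marching Cubes and store per-vertex colors in a Trimesh mesh written as PLY.
import numpy as np
import trimesh
from skimage import measure

# Assumed input: an (N, N, N) array of signed distances sampled from the model.
sdf_volume = np.load("sdf_volume.npy")

# Marching Cubes at the zero level set yields vertices, faces and normals only;
# there is no UV or texture information to extract here.
verts, faces, normals, _ = measure.marching_cubes(sdf_volume, level=0.0)

# If a colour field could be sampled (e.g. from the neural renderer), it could
# be attached as per-vertex RGBA. PLY stores vertex colors, but not textures.
colors = np.tile([200, 180, 160, 255], (len(verts), 1)).astype(np.uint8)  # placeholder

mesh = trimesh.Trimesh(vertices=verts, faces=faces,
                       vertex_normals=normals, vertex_colors=colors)
mesh.export("head.ply")
```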
If Stable Diffusion is able to "output" (write) something in OBJ/glTF, having a "standard" UV unwrapping method would be the next logical step for the generated 3D head (at this point, we know it can create the vertex and normal information, and that has to have some kind of mapping so it doesn't go out of bounds with the geometry). It also calls to my attention how many GPUs are needed to generate "the image sequence" / video from the 3D image sequence. Why can't GPUs lower-end than the RTX 3090 handle this? At first glance, the 3D head the model constructs from "trained" models seems really "optimized". If the head is generated from voxels, what's the actual poly count for each of these heads once they're converted to .ply/.obj/.gltf?
@3dcinetv The poly count for one of my PLY outputs is 1.04 million; it's fairly detailed but obviously not optimised at all and needs clean-up, but it is watertight. It only uses around 8GB of VRAM to process a video + model, so lower-end cards can be used too. UV unwrapping arbitrary geometry isn't a simple task; generally the way heads are done in the industry is to have a base mesh with good topology that is already UV unwrapped, and then wrap that to fit your generic high-poly 3D head mesh. Otherwise you'd need to optimise the high-poly mesh to decrease the poly count and convert it to quads in order to use automatic UV unwrapping. You still have the issue of projecting the textures, or obtaining vertex colors and then baking those to a texture.
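As a rough illustration of that clean-up step, the 1M+ triangle PLY could be decimated with Open3D's quadric decimation. The file names and the 50k target below are just examples, and the result is still triangles, so retopologising to quads for clean UV unwrapping would remain a separate step.

```python
# Rough decimation sketch using Open3D (not part of the repo).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("head.ply")   # assumed: the exported PLY
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()

# Collapse edges down to roughly 50k triangles (example target only).
low = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
low.compute_vertex_normals()
o3d.io.write_triangle_mesh("head_lowpoly.ply", low)
```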
I'll be looking forward to seeing an export for this. Millions of polygons textured automatically doesn't sound great. However, I've seen other projects generate more optimized meshes and offer this kind of export.
I'm well aware of the workflow involved (20+ years of 3D experience here). From there, the function copies the optimized head's UV properties and projects them onto the dense one. With the UVs assigned, inheriting the texture that Stable Diffusion generated is an easy task.
Here are some links for Sony's 3D Creator >> And this is what the app describes (there was never a chance to involve more programmers for the .apk after 2018).
Sorry, not super constructive, but everyone knows real 3D artists use a Z-up coordinate system.
@3dcinetv Sorry, I didn't know your experience. I'm not the developer; I'm just trying to integrate this into existing workflows, similar to you. I think the issue with the methods you are describing is that they differ from how this one works: 3D scanning is a long-solved problem and can project textures easily, since you have many input images from different angles. This repo uses StyleGAN + neural rendering to generate the novel views of the subject, and meshes from a voxel volume. There is no simple way to do what you are asking for, but I'm trying to figure out whether it would be possible to sample the neural-rendered color attribute and get that onto the mesh.
Hey, I loved your work on generating the .ply output, and I was even able to get an .obj file. Is it somehow possible to also get some colour or texture for it using the approach you have discussed above?
Can we get a 3D model of the head instead of an MP4?
@kashishnaqvi101 Hello, have you managed to export an .obj file? How did you do that?
@hack-mans Hello friend, following "gen_videos_proj_withseg.py" in your project, I got an .mp4 and some frame images, but I can't find the .ply file. Should I do something else to get it?
Would trimesh export the GLB with a texture, or just the mesh? Do you have guidance on how to update the code to use trimesh to export a mesh with a texture, or glTF 2.0 binary (GLB) with a texture?
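On the trimesh question: trimesh can write GLB (glTF 2.0) and carries vertex colors across, but a real texture additionally needs per-vertex UVs and an image, which this pipeline doesn't produce on its own. A hedged sketch, with the UVs and the texture image as placeholders:

```python
# Sketch only: export a PLY loaded into trimesh as GLB, with and without a texture.
import numpy as np
import trimesh
from PIL import Image

mesh = trimesh.load("head.ply")  # assumed: PLY with vertex colors

# Case 1: vertex colors only. These are written into the GLB as a vertex attribute.
mesh.export("head_vertexcolor.glb")

# Case 2: a textured GLB needs UV coordinates plus an image. Both are
# placeholders here; a real UV unwrap and a baked/projected texture are required.
uvs = np.random.rand(len(mesh.vertices), 2)      # stand-in for real UVs
image = Image.open("face_texture.png")           # stand-in for a real texture
mesh.visual = trimesh.visual.TextureVisuals(uv=uvs, image=image)
mesh.export("head_textured.glb")
```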
Does PLY include texture info?
Of course not.
What about camera information? That could be used to generate the .json files needed for NeRF.
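If the per-frame camera-to-world matrices used by the rendering script were exposed, they could be dumped into an instant-ngp-style transforms.json. A speculative sketch; `cam2world_list` (4x4 matrices) and the horizontal FOV are assumed inputs that the repo does not currently hand you.

```python
# Speculative sketch: write camera poses to an instant-ngp style transforms.json.
import json
import math

def write_transforms(cam2world_list, fov_x_deg, out_path="transforms.json"):
    # cam2world_list: assumed list of 4x4 NumPy camera-to-world matrices,
    # one per rendered frame; the frame paths below are placeholders.
    frames = [{"file_path": f"frames/{i:04d}.png",
               "transform_matrix": m.tolist()}
              for i, m in enumerate(cam2world_list)]
    data = {"camera_angle_x": math.radians(fov_x_deg), "frames": frames}
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
```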
We really need a way to get a UV-unwrapped texture for the generated mesh. Otherwise we're going to need to paint the whole face ourselves in some 3D modeling tool, which is a lot of extra work. @hack-mans, do you have any plans to add UV texture export to the PLY export script in your repo?
Hi, I know FBX is a proprietary format, so I'll leave that out.
My question is whether it's possible to take all the "normal" light calculation (generated from the normal map across the full 360°) and generate a full 3D mesh in the .glb/.gltf/.obj formats (maybe with embedded textures)?
It would be so helpful if the seeded prompt (face image), with each of its 360° turnaround sequences, could be UV unwrapped onto the face of this 3D generated model.
The idea would be to generate the texture with "flat" light, so no shadows are baked into the original seeded generated image.
After creating the normal map, it would then generate a list of 3D points in space (use a Z-forward coordinate system, please), thus generating the mesh.
From there (since all human heads are almost the same), a universal UV unwrapping code could open up the UV space texture like this.
And since it's taking one picture per degree to generate anatomically accurate skin, eyes, and hair, maybe that texture information could also be baked in "degrees", progressing from left to right in the previously generated UV'd space.
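As an illustration of the "universal UV unwrap" idea, an automatic parametrisation with xatlas could at least give the generated mesh usable UVs. This is a generic auto-unwrap, not the per-degree projection described above, and xatlas is not part of this codebase.

```python
# Sketch: auto-unwrap the (ideally already decimated) head mesh with xatlas
# and write an OBJ that carries the resulting UV coordinates.
import trimesh
import xatlas

mesh = trimesh.load("head_lowpoly.ply")  # assumed input

# xatlas re-indexes the mesh: vmapping maps new vertices to original ones,
# indices are the new faces, uvs are per-vertex coordinates in [0, 1].
vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)

unwrapped = trimesh.Trimesh(vertices=mesh.vertices[vmapping],
                            faces=indices,
                            visual=trimesh.visual.TextureVisuals(uv=uvs),
                            process=False)
unwrapped.export("head_unwrapped.obj")
```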
This would be one step closer to actually generating human characters for games and background props for animation in general 3D applications.
Please let me know.
Kind regards.
-Pierre.