
Possibility to export to .GLB / .GLTF / .OBJ formats? #6

Open
3dcinetv opened this issue Jun 25, 2023 · 21 comments

@3dcinetv

Hi, I know FBX is a proprietary format, so I'll leave that out.

My question is whether it's possible to take all the "normal" light calculations (generated from the normal map across the full 360°) and generate a full 3D mesh in the .glb/.gltf/.obj formats, maybe with embedded textures?

It would be so helpful if the seeded prompt (face image), together with its 360° turnaround sequence, could be UV unwrapped onto the face of this generated 3D model.
The idea would be to generate the texture with "flat" lighting, so no shadows are baked into the original seeded image.
After creating the normal map, it would then generate a list of 3D points in space (using a Z-forward coordinate system, please), and from those build the mesh.
From there (since all human heads are roughly the same shape), a universal UV unwrapping routine could lay out the texture in UV space.

And since it takes one picture per degree to generate anatomically accurate skin, eyes, and hair, maybe that texture information could also be baked in "degrees", progressing from left to right across the previously generated UV space.

This would be one step closer to actually generating human characters for games and background props for animation in general 3D applications.

Please let me know.
Kind regards.
-Pierre.

@3dcinetv
Author

Triangulating in 3D space (Python) >>
Triangulating in 3D space (coordinate system) >>

@carlosedubarreto

Hello @3dcinetv, you can find this solution by hack-mans, which enables output in PLY format:

https://github.com/hack-mans/PanoHead

But I didn't see textures there, nor point color data.

It's not everything you need, but it's already something 😊

@3dcinetv
Author

Thanks for the heads-up, Carlos. Hopefully the developer will be able to output consistent glTF + textures from the reference links I've posted in the original thread.
PLY? That's not optimized polygons for 3D applications, just for 3D printing... no?

@hack-mans

@3dcinetv The output mesh is reconstructed by sampling the SDF of a 3D voxel volume using Marching Cubes. It only creates the vertices + faces + normals for the PLY mesh; there is no UV or texture information that can be obtained. I'm looking into whether it's possible to sample the color and store it in vertex colors, like other 3D AI papers have done.
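Conceptually, the reconstruction step looks something like the sketch below (not the repo's exact code; it assumes an `sdf` grid has already been sampled from the network, and uses scikit-image + Trimesh):

```python
import trimesh
from skimage import measure

# sdf: (N, N, N) array of signed distances sampled from the 3D voxel volume.
# Marching Cubes extracts the zero level set as a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)

mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("head.ply")  # vertices + faces + normals only, no UVs or textures
```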

@hack-mans

It uses Trimesh, which does support exporting meshes as binary STL, binary PLY, ASCII OFF, OBJ, GLTF/GLB 2.0, and COLLADA.
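So in principle the existing PLY output could be re-exported to any of those formats in a couple of lines (a sketch; geometry only, since there are no UVs or textures to carry over):

```python
import trimesh

mesh = trimesh.load("head.ply")   # the Marching Cubes output
mesh.export("head.obj")           # Wavefront OBJ
mesh.export("head.glb")           # binary glTF 2.0
```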

@3dcinetv
Author

3dcinetv commented Jun 25, 2023

If Stable Diffusion is able to "output" (write) something in OBJ/glTF, having a "standard" UV unwrapping method would be the next logical step for the generated 3D head (at this point, we know it can create the vertex and normal information, and that HAS to have some kind of mapping so it doesn't go out of bounds with the geometry).

It also catches my attention how much GPU power is needed to generate "the image sequence" / video from the 3D image sequence. Why can't a lower-end GPU than the RTX 3090 handle this? At first glance, the 3D head the model constructs based on "trained" models seems really "optimized".

If the head is generated from voxels, what's the actual poly count for each of these heads once converted to .ply/.obj/.gltf?

@hack-mans

@3dcinetv Polycount for one of my PLY outputs is 1.04 million; it's fairly detailed but obviously not optimised at all and needs clean-up, but it is watertight. It only uses around 8 GB of VRAM to process a video + model, so lower-end cards can be used too.

UV unwrapping arbitrary geometry isn't a simple task. Generally the way heads are done in the industry is to have a base mesh with good topology that is already UV unwrapped, and then wrap that to fit your high-poly 3D head mesh. Otherwise you'd need to optimise the high-poly mesh to decrease the polycount and turn it into quads in order to use automatic UV unwrapping. You still have the issue of projecting the textures, or obtaining vertex colors and then baking those to a texture.
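For the "decimate and auto-unwrap" route, the rough shape would be something like this (a sketch, not code from the repo; it assumes Open3D for decimation and the xatlas Python bindings for the unwrap, with an arbitrary target triangle count):

```python
import numpy as np
import open3d as o3d
import xatlas

mesh = o3d.io.read_triangle_mesh("head.ply")

# reduce the ~1M-triangle Marching Cubes output to something workable
low = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)

vertices = np.asarray(low.vertices)
faces = np.asarray(low.triangles)

# xatlas generates a UV atlas for an arbitrary triangle mesh:
# vmapping maps new vertices to original ones, indices are the new faces, uvs are per-vertex UVs
vmapping, indices, uvs = xatlas.parametrize(vertices, faces)
```

Either way, that still leaves the texture / vertex-color problem open.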

@LeF0URBE

I'll be looking forward to seeing an export for this. Millions of polygons textured automatically doesn't sound great. However, I've seen other projects generate more optimized meshes and offer this kind of export.
We have to get this at some point, in my humble opinion.

@3dcinetv
Author

> @3dcinetv Polycount for one of my PLY outputs is 1.04 million; it's fairly detailed but obviously not optimised at all and needs clean-up, but it is watertight. It only uses around 8 GB of VRAM to process a video + model, so lower-end cards can be used too.
>
> UV unwrapping arbitrary geometry isn't a simple task. Generally the way heads are done in the industry is to have a base mesh with good topology that is already UV unwrapped, and then wrap that to fit your high-poly 3D head mesh. Otherwise you'd need to optimise the high-poly mesh to decrease the polycount and turn it into quads in order to use automatic UV unwrapping. You still have the issue of projecting the textures, or obtaining vertex colors and then baking those to a texture.

I'm well aware of the whole workflow involved (20+ years of 3D experience here).
Regarding UV unwrapping a dense polymesh: what other companies in the industry do is keep a generic polygon-optimized head (male, female and child); then, at the moment of generating the dense polymesh, they use a function that projects "the nearest face, interpolated" by overlapping the optimized human head mesh on top of the dense one.

From there, the function copies the optimized head's UV properties and projects them over the dense one. With the UVs assigned, inheriting the texture that Stable Diffusion generated is an easy task. (A rough sketch of that projection step is below.)
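A minimal sketch of that nearest-face UV transfer, assuming Trimesh and a pre-unwrapped base head (file names are placeholders):

```python
import trimesh

dense = trimesh.load("head_dense.ply")   # Marching Cubes output
base = trimesh.load("base_head.obj")     # generic head with good topology + UVs

# for every dense vertex, find the closest point on the base mesh surface
closest, _, tri_id = trimesh.proximity.closest_point(base, dense.vertices)

# interpolate the base mesh's UVs at those points via barycentric coordinates
bary = trimesh.triangles.points_to_barycentric(base.triangles[tri_id], closest)
corner_uvs = base.visual.uv[base.faces[tri_id]]          # (n, 3, 2)
dense_uv = (corner_uvs * bary[:, :, None]).sum(axis=1)   # (n, 2)

dense.visual = trimesh.visual.TextureVisuals(uv=dense_uv)
dense.export("head_dense.obj")
```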
@hack-mans are you the developer of PanoHead? If so, please check out how Sony's 3D Creator works. And that's done on a PHONE (much less computational power, yet it outputs an .fbx).

@3dcinetv
Author

Here are some links for Sony's 3D Creator>>

And this is what the app describes (there was never a chance to involve more programmers on the .apk after 2018).

@c1112

c1112 commented Jun 25, 2023

Sorry, not super constructive, but everyone knows real 3D artists use a Z-up coordinate system.

@hack-mans

@3dcinetv Sorry, I didn't know your experience. I'm not the developer; I'm just trying to integrate it into existing workflows, similar to you.

I think the issue with the methods you are describing is that they differ from how this one works. 3D scanning is a long-solved problem and can project textures easily, since you have many input images from different angles.

This repo uses StyleGAN + neural rendering to generate the novel views of the subject, and meshes from a voxel volume. There is no simple way to do what you are asking for, but I'm trying to figure out whether it would be possible to sample the neural-rendered color attribute and get that onto the mesh.

@kashishnaqvi101

> @3dcinetv Sorry, I didn't know your experience. I'm not the developer; I'm just trying to integrate it into existing workflows, similar to you.
>
> I think the issue with the methods you are describing is that they differ from how this one works. 3D scanning is a long-solved problem and can project textures easily, since you have many input images from different angles.
>
> This repo uses StyleGAN + neural rendering to generate the novel views of the subject, and meshes from a voxel volume. There is no simple way to do what you are asking for, but I'm trying to figure out whether it would be possible to sample the neural-rendered color attribute and get that onto the mesh.

Hey, I loved your work generating the .ply output, and I was even able to get an .obj file. Is it somehow possible to also get some colour or texture for it using the approach you discussed above?

@pyh007

pyh007 commented Jul 4, 2023

Can we get a 3D model of the head, rather than an MP4?

@linzijin1238

@kashishnaqvi101 Hello, have you exported an .obj file? How did you do it?

@linzijin1238

@hack-mans Hello friend, following "gen_videos_proj_withseg.py" in your project, I got an .mp4 and some frame images, but I can't find the .ply file. Should I do something else to get it?

@BlockchainPunks

> It uses Trimesh, which does support exporting meshes as binary STL, binary PLY, ASCII OFF, OBJ, GLTF/GLB 2.0, and COLLADA.

Would Trimesh export the GLB with texture, or just the mesh?

Do you have guidance on how to update the code to use Trimesh to export the mesh with texture, or GLB 2.0 with texture?
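For what it's worth, Trimesh will happily write a GLB, but it can only include whatever the mesh actually carries. A sketch of attaching per-vertex colors before export (the color array here is a placeholder; the repo doesn't currently produce any color data):

```python
import numpy as np
import trimesh

mesh = trimesh.load("head.ply")

# placeholder: a constant RGBA color per vertex, standing in for real sampled colors
colors = np.tile(np.array([200, 160, 140, 255], dtype=np.uint8), (len(mesh.vertices), 1))
mesh.visual = trimesh.visual.ColorVisuals(mesh=mesh, vertex_colors=colors)

mesh.export("head.glb")  # GLB with vertex colors, still no UVs or texture image
```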

@BlockchainPunks

> Hello @3dcinetv, you can find this solution by hack-mans, which enables output in PLY format:
>
> https://github.com/hack-mans/PanoHead
>
> But I didn't see textures there, nor point color data.
>
> It's not everything you need, but it's already something 😊

Does PLY include texture info?

@3dcinetv
Author

3dcinetv commented Dec 6, 2023

Of course not.

@BlockchainPunks

What about camera information? That could be used to generate the .json files needed for NeRF.
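For reference, the transforms.json that NeRF pipelines expect looks roughly like the sketch below; the camera matrices and FOV here are placeholders, not values the repo currently exposes:

```python
import json
import numpy as np

# placeholder: one 4x4 camera-to-world matrix per rendered frame
camera_to_world_matrices = [np.eye(4) for _ in range(4)]

frames = [
    {
        "file_path": f"./images/frame_{i:04d}.png",
        "transform_matrix": np.asarray(c2w).tolist(),
    }
    for i, c2w in enumerate(camera_to_world_matrices)
]

transforms = {
    "camera_angle_x": 0.6911,  # horizontal FOV in radians (placeholder)
    "frames": frames,
}

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```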

@epicstar7

We really need a way to get the UV-unwrapped texture of the generated mesh. Otherwise we're going to need to paint the whole face ourselves in some 3D modeling tool, which is a lot of extra work. @hack-mans, do you have any plans to add UV texture export to the PLY export script in your repo?
