Rendering Multi-Channel Image #846

Closed
Gzhji opened this issue Aug 8, 2023 · 2 comments
@Gzhji

Gzhji commented Aug 8, 2023

Dear community:

I am trying to get surface normal map through AOV integrator and multi-channel image.
I tested the example cbox file and got .png and .exr file successfully.
[screenshot: rendered cbox output]

However, when I print the rendered .exr file:
bmp_exr = mi.Bitmap('my_first_render.exr')
print(bmp_exr)

It shows:
image: TensorXf(shape=(256, 256, 12))
RuntimeError: "my_first_render.exr": read 0 out of 4 bytes

Does anyone know what might be causing this issue?

Thanks in advance!

<scene version="3.0.0">
    <default name="spp" value="128"/>
    <default name="res" value="256"/>
    <default name="max_depth" value="6"/>
    <default name="integrator" value="path"/>
    <integrator type="aov">
        <string name="aovs" value="dd.y:depth,nn:sh_normal"/>
        <integrator type="path" name="my_image"/>
    </integrator>
    <sensor type="perspective" id="sensor">
        <string name="fov_axis" value="smaller"/>
        <float name="near_clip" value="0.001"/>
        <float name="far_clip" value="100.0"/>
        <float name="focus_distance" value="1000"/>
        <float name="fov" value="39.3077"/>
        <transform name="to_world">
            <lookat origin="0,  0,  4"
                    target="0,  0,  0"
                    up    ="0,  1,  0"/>
        </transform>
        <sampler type="independent">
            <integer name="sample_count" value="$spp"/>
        </sampler>
        <film type="hdrfilm">
            <integer name="width"  value="$res"/>
            <integer name="height" value="$res"/>
            <rfilter type="tent"/>
            <string name="pixel_format" value="rgba"/>
            <string name="component_format" value="float32"/>
        </film>
    </sensor>

    <!-- BSDFs -->

    <bsdf type="diffuse" id="gray">
        <rgb name="reflectance" value="0.85, 0.85, 0.85"/>
    </bsdf>

    <bsdf type="diffuse" id="white">
        <rgb name="reflectance" value="0.885809, 0.698859, 0.666422"/>
    </bsdf>

    <bsdf type="diffuse" id="green">
        <rgb name="reflectance" value="0.105421, 0.37798, 0.076425"/>
    </bsdf>

    <bsdf type="diffuse" id="red">
        <rgb name="reflectance" value="0.570068, 0.0430135, 0.0443706"/>
    </bsdf>

    <bsdf type="dielectric" id="glass"/>

    <bsdf type="conductor" id="mirror"/>

    <!-- Light -->

    <shape type="obj" id="light">
        <string name="filename" value="meshes/cbox_luminaire.obj"/>
        <transform name="to_world">
            <translate x="0" y="-0.01" z="0"/>
        </transform>
        <ref id="white"/>
        <emitter type="area">
            <rgb name="radiance" value="18.387, 13.9873, 6.75357"/>
        </emitter>
    </shape>

    <!-- Shapes -->

    <shape type="obj" id="floor">
        <string name="filename" value="meshes/cbox_floor.obj"/>
        <ref id="white"/>
    </shape>

    <shape type="obj" id="ceiling">
        <string name="filename" value="meshes/cbox_ceiling.obj"/>
        <ref id="white"/>
    </shape>

    <shape type="obj" id="back">
        <string name="filename" value="meshes/cbox_back.obj"/>
        <ref id="white"/>
    </shape>

    <shape type="obj" id="greenwall">
        <string name="filename" value="meshes/cbox_greenwall.obj"/>
        <ref id="green"/>
    </shape>

    <shape type="obj" id="redwall">
        <string name="filename" value="meshes/cbox_redwall.obj"/>
        <ref id="red"/>
    </shape>

    <shape type="sphere" id="mirrorsphere">
        <transform name="to_world">
            <scale value="0.5"/>
            <translate x="-0.3" y="-0.5" z="0.2"/>
        </transform>
        <ref id="mirror"/>
    </shape>

    <shape type="sphere" id="glasssphere">
        <transform name="to_world">
            <scale value="0.25"/>
            <translate x="0.5" y="-0.75" z="-0.2"/>
        </transform>
        <ref id="glass"/>
    </shape>
</scene>

@Frollo24

Frollo24 commented Aug 9, 2023

Hi!

The 'image' you get when rendering with a custom AOV integrator is a multi-channel image: the AOV channels you requested are stacked alongside the regular render channels, giving a tensor of this shape:
TensorXf(shape=(img_height, img_width, n_channels))

In the example integrator you have:

  • 4 channels reserved for the final render output
  • 1 channel for the depth value (your dd.y:depth AOV)
  • 3 channels for the normal value (your nn:sh_normal AOV)
  • 4 channels for the path integrated image

In your case, the final render and the path-integrated image will be identical, because both are produced by the path tracer. If you want to extract the normal channels, you can simply slice the tensor:

image_normal = render_output[:, :, 5:8]
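For instance, with the 12-channel layout above, slicing out each component could look like this (a sketch using a dummy NumPy array in place of the real render output):

```python
import numpy as np

# Dummy stand-in for the rendered output; in practice this would be
# np.array(mi.render(scene)) with shape (height, width, 12).
render_output = np.zeros((256, 256, 12), dtype=np.float32)

rgba       = render_output[:, :, 0:4]   # final render (RGBA)
depth      = render_output[:, :, 4:5]   # dd.y:depth AOV
normals    = render_output[:, :, 5:8]   # nn:sh_normal AOV
path_image = render_output[:, :, 8:12]  # nested path integrator output
```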

Hope it helps.

@njroussel
Member

Hi @Gzhji

@Frollo24 is correct, this tensor just stacks all layers on the third dimension.

If you're looking to export this as an .exr file you can take a look at the snippet in this tutorial/guide: #849

Basically, you can do something like this:

_ = mi.render(scene)
scene.sensors()[0].film().bitmap().write('output.exr')
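The multi-channel bitmap groups its channels by layer prefix, which is how the EXR ends up with separate layers for each AOV. A pure-Python sketch of that grouping idea (not Mitsuba's actual implementation; the channel names below are an assumption based on the AOV string `dd.y:depth,nn:sh_normal` and the nested integrator name `my_image`):

```python
from collections import OrderedDict

# Hypothetical channel names for the 12-channel film above; the real
# names come from the film's bitmap.
channels = ['R', 'G', 'B', 'A',            # final render (RGBA)
            'dd.y.Y',                      # depth AOV
            'nn.X', 'nn.Y', 'nn.Z',        # shading-normal AOV
            'my_image.R', 'my_image.G', 'my_image.B', 'my_image.A']

def group_by_layer(names):
    """Map each layer prefix to the indices of its channels."""
    layers = OrderedDict()
    for i, name in enumerate(names):
        prefix, _, _ = name.rpartition('.')
        key = prefix if prefix else '<root>'   # bare R/G/B/A channels
        layers.setdefault(key, []).append(i)
    return layers

layers = group_by_layer(channels)
# the 'nn' layer maps to indices [5, 6, 7], matching the 5:8 slice above
```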
