
I wonder if the synthetic images are significantly different from the real-world images and could lead to depth calculation failure. #35

Open
pekkykang opened this issue Sep 28, 2022 · 0 comments

Comments

@pekkykang

Hi, thanks for the fantastic work.

However, when I use the synthetic images to train an image-to-depth neural network, it performs very poorly on real-world images.

I found that the synthetic images (shown below) are significantly different from the real images. Is this the reason the depth calculation fails? When I test the neural network on synthetic images, it works fine; however, on real-world images it just predicts no-depth output (a fully black image).

As shown below, the contact in the synthetic images looks much more three-dimensional than in the real ones.

[Images: synthetic contact image vs. real DIGIT contact image]

I am new to image processing, so I may be missing some key pre-treatment steps that would make the real-world images look more like the synthetic ones.

Is my DIGIT sensor not fabricated well?

Or is some pre-treatment needed to make the real-world images more like the synthetic ones?

Any advice from the community will be extremely helpful to me.
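For reference, here is a minimal sketch of the kind of pre-treatment I have in mind: subtracting a no-contact reference frame from each real frame so that only the contact-induced shading remains. This is just my guess at what might help, not something from this repository; the file names and blur kernel size below are placeholders, and it assumes OpenCV and NumPy are available.

```python
import cv2
import numpy as np

def preprocess_real_frame(frame_path: str, reference_path: str) -> np.ndarray:
    """Make a real sensor frame look more like a rendered one (sketch only)."""
    frame = cv2.imread(frame_path).astype(np.float32)
    reference = cv2.imread(reference_path).astype(np.float32)

    # Difference image: removes per-sensor illumination and gel artifacts,
    # leaving mostly the shading caused by the contact.
    diff = frame - reference

    # Light smoothing to suppress camera noise that a renderer never produces.
    diff = cv2.GaussianBlur(diff, (5, 5), 0)

    # Shift back to a neutral mid-gray background, similar to a rendered
    # no-contact image, and clip to the valid pixel range.
    return np.clip(diff + 127.0, 0, 255).astype(np.uint8)

# Hypothetical usage: 'real_no_contact.png' is a frame captured with nothing
# touching the gel, used as the reference.
processed = preprocess_real_frame("real_contact.png", "real_no_contact.png")
cv2.imwrite("real_contact_preprocessed.png", processed)
```

Is something along these lines the right direction, or is a different calibration step expected?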
