
mapper.py is slow in converting between color space and depth space #14

Open

akash02ita opened this issue Nov 23, 2022 · 18 comments
akash02ita commented Nov 23, 2022

I am using Python 3.7 64-bit. I did not do a pip install, since that did not seem to work; instead I am importing the given files directly.

The issue is that running mapper.py is very slow: the frame rate is about 0.5 fps. I thought the waitKey(3000) call was the reason, but removing it does not fix the issue.

When I set show=False it improves, but it is still not a smooth 25-30 fps; it is somewhere around 8-10 fps.

Is there any solution to this? When I do not call depth_2_color_space at all, the speed is back to 25-30 fps.

@akash02ita (Author)

If I use

```python
img = color_2_depth_space(kinect, _ColorSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True)
```

I still see 30 fps. Could the issue be due to resolution?

akash02ita changed the title from "example running very slow" to "mapper.py is slow in converting between color space and depth space" on Nov 23, 2022
@KonstantinosAng (Owner)

Unfortunately, I no longer have a Kinect device to test with, but I believe the function is slow because I perform some operations that are only needed when you pass the show=True flag. The only reason I added that flag was to check how the aligned image looks, but it slows performance down a lot. I suggest not using show=True; call the function only to get the points, and then decide how you want to display them.

I will push some changes in a moment so that those computations happen only when show is True.
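The gating described here can be sketched as follows. Note this is an illustrative stand-in, not the real mapper.py code: only the show-flag pattern is the point.

```python
import time

def align_frames(points, show=False):
    """Map depth points to color space; build the preview only when asked.

    The mapping itself is cheap; constructing and displaying the aligned
    image is the expensive part, so it is gated behind the show flag.
    """
    mapped = [(2 * x, 2 * y) for x, y in points]  # stand-in for the real remap
    if show:
        time.sleep(0.05)  # stand-in for building and showing the aligned image
    return mapped

# Callers that only need the coordinates skip the display cost entirely:
pts = align_frames([(10, 20), (30, 40)], show=False)
```

Callers that want a preview can still pass show=True, but they pay for it explicitly.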

@KonstantinosAng (Owner)

Can you pull the changes and try again, without passing the show flag, to see whether the fps still drops?

@dongtamlx18

Hello, I use the code below, but the terminal reports "depth_2_color_space" is not defined, although I did import the mapper library.

```python
import mapper
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime
import cv2
import numpy as np

if __name__ == '__main__':
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

    while True:
        if kinect.has_new_depth_frame():
            color_frame = kinect.get_last_color_frame()
            colorImage = color_frame.reshape((kinect.color_frame_desc.Height, kinect.color_frame_desc.Width, 4)).astype(np.uint8)
            colorImage = cv2.flip(colorImage, 1)
            cv2.imshow('Test Color View', cv2.resize(colorImage, (int(1920 / 2.5), int(1080 / 2.5))))
            depth_frame = kinect.get_last_depth_frame()
            depth_img = depth_frame.reshape((kinect.depth_frame_desc.Height, kinect.depth_frame_desc.Width)).astype(np.uint8)
            depth_img = cv2.flip(depth_img, 1)
            cv2.imshow('Test Depth View', depth_img)
            # print(color_point_2_depth_point(kinect, _DepthSpacePoint, kinect._depth_frame_data, [100, 100]))
            # print(depth_points_2_world_points(kinect, _DepthSpacePoint, [[100, 150], [200, 250]]))
            # print(intrinsics(kinect).FocalLengthX, intrinsics(kinect).FocalLengthY, intrinsics(kinect).PrincipalPointX, intrinsics(kinect).PrincipalPointY)
            # print(intrinsics(kinect).RadialDistortionFourthOrder, intrinsics(kinect).RadialDistortionSecondOrder, intrinsics(kinect).RadialDistortionSixthOrder)
            # print(world_point_2_depth(kinect, _CameraSpacePoint, [0.250, 0.325, 1]))
            # img = depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=False, return_aligned_image=True)
            depth_2_color_space(kinect, _DepthSpacePoint, kinect._depth_frame_data, show=True)
            # img = color_2_depth_space(kinect, _ColorSpacePoint, kinect._depth_frame_data, show=True, return_aligned_image=True)

        # Quit using q
        if cv2.waitKey(1) & 0xff == ord('q'):
            break

    cv2.destroyAllWindows()
```

@KonstantinosAng (Owner)

Try importing like this:

```python
from mapper import *
```
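The NameError comes from Python's import semantics rather than from mapper.py itself: a plain `import mapper` binds only the module name, so its functions must be called as `mapper.depth_2_color_space(...)`. The same rule can be demonstrated with a stdlib module:

```python
import math

# With a plain import, only the module name is bound; bare names fail:
try:
    sqrt(9)
except NameError:
    print("bare name not defined under 'import math'")

# Qualified access always works:
print(math.sqrt(9))  # 3.0

# A star import copies the public names into the current namespace:
from math import *
print(sqrt(9))  # 3.0
```

So either keep `import mapper` and qualify every call, or use the star import shown above.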

@dongtamlx18

Oh, I had a problem with my own code and fixed it. Thank you so much, your source code is very useful.
I wonder if there is any source code about processing point clouds with KinectV2, or anything related to point clouds using KinectV2. If you have something, please share it.
By the way, thank you so much!

@KonstantinosAng (Owner) commented Jul 12, 2023

@dongtamlx18 I have another repo that uses mapper to draw real-time (30 fps) point clouds with color and depth simultaneously:

https://github.com/KonstantinosAng/PyKinect2-PyQtGraph-PointClouds

@dongtamlx18

You are amazing, but I have a problem when I test your code, and I don't know why it happens:

```python
from PointCloud import Cloud

pcl = Cloud(file='models/model.pcd')
```

(I did download model.pcd and place it in the right folder.)

The error:

```
File "e:/2022 - Nam4 - HKII/ĐATN/PythonDA-newVS/RunTest.py", line 3, in <module>
    pcl = Cloud(file='models/model.pcd')
File "e:\2022 - Nam4 - HKII\ĐATN\PythonDA-newVS\PointCloud.py", line 116, in __init__
    self.visualize_file()
File "e:\2022 - Nam4 - HKII\ĐATN\PythonDA-newVS\PointCloud.py", line 647, in visualize_file
    vis = o3d.Visualizer()  # start visualizer
AttributeError: module 'open3d' has no attribute 'Visualizer'
```

@KonstantinosAng (Owner) commented Jul 12, 2023

I have used open3d version 0.10.0.1. Try:

```
pip uninstall open3d
```

and then:

```
pip install open3d==0.10.0.1
```

@dongtamlx18

I can use pcl.visualize() for pcl = Cloud(file='models/test_cloud_4.txt'). Maybe I will try your open3d version.

@dongtamlx18

Hello sir, I was scrolling on YouTube and people said that the Open3D library does not support creating point cloud data. I do not know whether that is right, so I am here to ask you. Also, could you introduce me to some technique that uses KinectV2 to create a point cloud? Thank you so much!
By the way, I did read the code in your PointCloud repository, but I did not understand much of it, so I am asking you here. Thank you!

@dongtamlx18 commented Aug 3, 2023 via email

@KonstantinosAng (Owner)

First of all, about Open3d:

I do not know whether you can create a point cloud with open3d; I only use it to visualize point cloud files (.ply, .pcd) that I create manually in my repository. If you look at my code, you will see that I get the world points from the Kinect and manually write the files using the basic structure of the .ply and .pcd formats.

Second, about the x, y, z coordinates:

Where exactly do I swap the values? I cannot find the function.

@dongtamlx18 commented Aug 3, 2023 via email

@KonstantinosAng (Owner)

I did this because, for the Kinect, the Z coordinate is the distance from the Kinect to the object and the Y coordinate is the object's height relative to the Kinect. I have always used Z as the distance from the floor, so I swapped the values to suit me better.
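The swap described here amounts to exchanging the Y and Z components of each camera-space point (a trivial sketch; the point values are illustrative):

```python
def swap_y_z(points):
    """Exchange the Y and Z axes so Z measures height instead of depth."""
    return [(x, z, y) for (x, y, z) in points]

# A point 2.0 m in front of the camera and 0.5 m above it:
print(swap_y_z([(0.0, 0.5, 2.0)]))  # [(0.0, 2.0, 0.5)]
```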

@dongtamlx18 commented Aug 5, 2023 via email

@KonstantinosAng (Owner)

I think for image processing it is better to use OpenCV.
To compute the normal vector, you have to find 3 points in the plane you are looking for, and finding that plane in the image can be difficult if the colors mix. Start by looking for a way to identify the tilted object in the image accurately.
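Once three non-collinear points in the plane are known, the normal follows from a cross product of two in-plane vectors. A small self-contained sketch:

```python
def plane_normal(p0, p1, p2):
    """Normal of the plane through three 3D points, via a cross product."""
    # Two vectors lying in the plane
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # Cross product u x v
    return (
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    )

# Three points in the z = 0 plane give a normal along +z:
print(plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```

The result can be normalized to unit length if a direction-only normal is needed.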

@dongtamlx18 commented Aug 11, 2023 via email
