Could not multiply by non-number #2
Indeed, it seems to be an issue with urx and math3d.
Thanks for your reply. I tried all the other versions of math3d and kept getting this issue:
It seems to be an issue with numpy? I reverted the numpy version but still got the same error. Do you remember the configuration you used before? Thanks!
Never mind Enyen, I found the original urx code is different from this one. I will reinstall urx. Thanks!
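For anyone hitting the same "Could not multiply by non-number" error: math3d's operators can reject numpy scalar types, so one possible workaround (my own suggestion, not something from this repo) is to cast the velocity components to plain Python floats before handing them to urx. A minimal sketch, with a placeholder velocity vector:

```python
import numpy as np

# Hypothetical velocity vector as produced by the network (numpy dtypes).
vel = np.array([0.01, 0.0, -0.005, 0.0, 0.0, 0.02], dtype=np.float64)

# math3d's number checks can fail on np.float64, so convert defensively to
# built-in floats before passing the vector to urx's speed/move calls.
vel_native = [float(v) for v in vel]

print(all(type(v) is float for v in vel_native))  # True
```

This does not fix the underlying version mismatch, but it avoids feeding numpy scalars into math3d's multiplication.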
KOVIS_VisualServo/inference_oja.py Line 158 in 5ea1d9a
The code calls move_tcp_perpendicular every time before running the servo, so you can modify this line according to your application.
Hi Enyen, thanks for the response. However, this is at the inference stage, right? Wouldn't we want to train the robot in that horizontal end position from the training-data generation phase itself? Basically, we want the robot to grab the tube horizontally, so during training-data generation in the simulator we also need the robot to move in that same way, right?
Hi Enyen, I have sent out the meeting link through Gmail. Please let me know if you didn't receive it. By the way, is there by any chance you encountered
use
The speed loss does not look good. Yes, you should continue to try different settings, and also test on the robot.
Yes, the camera pose is randomized in every image.
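Since the camera pose is randomized per image, the idea can be sketched as follows. This is only an illustration; the sampling ranges and nominal pose below are placeholder assumptions, not the values used in the repo's data generator:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_camera_pose(nominal_t, t_jitter=0.02, rot_jitter=0.1):
    """Sample a randomized camera pose around a nominal position.

    nominal_t: nominal camera translation in metres (placeholder).
    t_jitter: +/- translation jitter in metres (placeholder value).
    rot_jitter: +/- roll/pitch/yaw jitter in radians (placeholder value).
    """
    t = nominal_t + rng.uniform(-t_jitter, t_jitter, size=3)
    rpy = rng.uniform(-rot_jitter, rot_jitter, size=3)
    return t, rpy

# Draw one randomized pose per rendered image.
t, rpy = sample_camera_pose(np.array([0.0, 0.0, 0.5]))
```

Randomizing the pose on every image is what forces the keypoint encoder to become invariant to the exact camera placement.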
Hi Enyen, please forget the problem. I tested this model on the real robot, and the performance is good (although it is poor from some specific initial positions; I will continue to adjust the hyperparameters to improve this). Thanks so much for your help!
Hi Enyen, sorry for the bother. I'm trying to understand the code and visualize the keypoints in the inference phase, and I found what seems to be an inconsistency between the training and inference phases. In training (KOVIS_VisualServo/train_model.py Line 8 in 5ea1d9a),
so the encoder takes one single image as input: KOVIS_VisualServo/train_servo.py Line 77 in 5ea1d9a
But in the inference phase, the parameter (KOVIS_VisualServo/inference_oja.py Line 132 in 5ea1d9a)
And when I change it to False, the output keypoints have one dimension that is all 0;
that is, the output keypoints only have one dimension. I would like to ask whether
When (KOVIS_VisualServo/train_model.py Line 69 in 5ea1d9a)

You can separate the encoder and converter:

```diff
- setattr(self, 'net_servo_{}'.format(net), torch.nn.Sequential(enc, cvt))
+ setattr(self, 'net_enc_{}'.format(net), enc)
+ setattr(self, 'net_cvt_{}'.format(net), cvt)
```

```diff
- getattr(self, 'net_servo_{}'.format(net)).eval()
+ getattr(self, 'net_enc_{}'.format(net)).eval()
+ getattr(self, 'net_cvt_{}'.format(net)).eval()
```

In the inference code:

```diff
- vec, speed = getattr(self, 'net_servo_{}'.format(action))([infraL.cuda(), infraR.cuda()])
+ kps = getattr(self, 'net_enc_{}'.format(action))([infraL.cuda(), infraR.cuda()])
+ vec, speed = getattr(self, 'net_cvt_{}'.format(action))(kps)
+ kpl, kpr = kps.squeeze().chunk(chunks=2, dim=1)
+ vis_kp(kpl, kpr)  # todo
```

Keypoints are flattened (KOVIS_VisualServo/train_model.py Line 131 in 5ea1d9a) into [h1, w1, m1, ..., hK, wK, mK].
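To make the [h1, w1, m1, ..., hK, wK, mK] layout above concrete, here is a small numpy sketch of unpacking flattened stereo keypoints. The keypoint count K and the left-then-right ordering are assumptions for illustration; the repo's actual values may differ:

```python
import numpy as np

K = 8  # hypothetical keypoint count per image; placeholder value
# Flattened keypoints for a stereo pair: [h1, w1, m1, ..., hK, wK, mK] for
# the left image, followed by the same layout for the right image.
flat = np.arange(2 * K * 3, dtype=np.float32)

kps = flat.reshape(2 * K, 3)   # one row per keypoint: [h, w, m]
kpl, kpr = kps[:K], kps[K:]    # left/right halves, as chunk(chunks=2) does

print(kpl.shape, kpr.shape)  # (8, 3) (8, 3)
```

Each recovered row gives a keypoint's (h, w) image position and its confidence/magnitude m, which is what you would plot to visualize the keypoints.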
Thanks so much for your reply! Yes, the output is the result after I separated the
KOVIS_VisualServo/train_model.py Line 114 in 5ea1d9a
Hi Enyen,
Thanks for sharing the code! I have trained a kovis1 model and used the command
rob.servo('pick_tube', 10, [0.01, 0.01, 0.01, 0.05, 0, 0], [0.1, 5])
to test it. I printed the velocity and orientation:
Are they reasonable? Then an error was raised:
This seems to be an internal issue in python-urx. I searched for it but found no solution. Did you encounter this problem before? Any suggestions are appreciated. Thanks!
Regards,
Daiying