I'm still learning how matrices and coordinate spaces work, so please excuse this basic question.
I'm using GLM and OpenGL and have physics working in 3D space. Now I'd like to render the collision objects to check that everything is OK. I note the documentation says the returned vertices are in world space.
I have looked through the testbed application, and it seems to be doing some sort of inverse-matrix magic that I don't understand.
How do I go from the debug world coordinates to something I can render with the existing pipeline, or what MVP do I need to apply in a shader?
I can render them now, but they only appear as flat rectangles on screen because the coordinates are in the wrong space. An example in C++ would be very useful.
Thanks