I'm using glTFast to parse Google 3D Tiles data. The downloaded .glb assets have a fairly simple structure (a few nodes, with a single texture per node), and I'm getting access to all the data (big thumbs up, great stuff so far!)
Unfortunately, the way the Google data is laid out in space (it uses the ECEF coordinate system) means nearly every high-resolution, small-area chunk of geometry is positioned roughly 6.4 million meters from the origin. Once instantiated as a Unity GameObject, the transform is inherently inaccurate simply due to single-precision floating-point limits, so adjacent chunks of geometry are rarely properly aligned with each other and are often visibly misaligned (the error at that scale is on the order of 0.25 m).
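(For reference, the 0.25 m figure follows directly from float precision: a single-precision float has a 24-bit significand, so for magnitudes between 2^22 and 2^23, roughly 4.2 to 8.4 million meters, adjacent representable values are 0.5 m apart, giving up to about 0.25 m of rounding error. A quick self-contained check, plain C# and nothing glTFast-specific:)

```csharp
using System;

class UlpCheck
{
    static void Main()
    {
        // An ECEF-scale coordinate, ~6.4 million meters from the origin.
        float x = 6.4e6f;

        // Step to the next representable float by incrementing the raw bits.
        int bits = BitConverter.SingleToInt32Bits(x);
        float next = BitConverter.Int32BitsToSingle(bits + 1);

        // Prints 0.5: adjacent floats at this magnitude are half a meter
        // apart, so positions snap with up to ~0.25 m of rounding error.
        Console.WriteLine(next - x);
    }
}
```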
Is there any way for me to recover the node transform position in double precision? Working with the resulting objects requires some form of floating origin anyway, so once I've placed the nodes relative to that origin, using floats for everything else is fine; I'm just not sure how to work around the error in the initial offset.
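My current fallback idea is to bypass the float parse entirely and read the translation straight out of the .glb's JSON chunk myself, since JSON numbers carry full double precision and (as far as I can tell) the precision is lost the moment the loader deserializes them into float fields. A rough sketch of what I mean, using Json.NET; `ReadNodeTranslation` is just my own hypothetical helper, not a glTFast API:

```csharp
using System;
using System.IO;
using System.Text;
using Newtonsoft.Json.Linq; // com.unity.nuget.newtonsoft-json in Unity

static class GlbDoubleTranslation
{
    // Reads a node's translation from the raw glTF JSON so no
    // double-to-float conversion ever happens.
    public static double[] ReadNodeTranslation(byte[] glb, int nodeIndex)
    {
        // GLB layout: 12-byte header (magic, version, total length),
        // followed by chunks. The first chunk is the JSON chunk.
        uint magic = BitConverter.ToUInt32(glb, 0);
        if (magic != 0x46546C67) // "glTF"
            throw new InvalidDataException("Not a binary glTF file.");

        uint jsonLength = BitConverter.ToUInt32(glb, 12);
        uint chunkType = BitConverter.ToUInt32(glb, 16);
        if (chunkType != 0x4E4F534A) // "JSON"
            throw new InvalidDataException("First chunk is not JSON.");

        string json = Encoding.UTF8.GetString(glb, 20, (int)jsonLength);

        // Json.NET preserves double precision when reading numbers.
        var translation = JObject.Parse(json)["nodes"]?[nodeIndex]?["translation"];
        if (translation == null)
            return new double[] { 0, 0, 0 }; // translation is optional; defaults to zero

        return new[]
        {
            (double)translation[0],
            (double)translation[1],
            (double)translation[2]
        };
    }
}
```

The idea would then be to subtract my double-precision floating origin from the recovered translation and assign only the small residual to the GameObject's transform, where floats are accurate enough. This only covers the simple case, though: if a node stores a `matrix` instead of a `translation`, or the offset is composed across a parent hierarchy, I'd have to compose the whole chain in doubles, which is why I'd rather get the value from the loader if there's a supported way.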