Andrew Davison: The enormous power of explicit 3D visual scene understanding is that it enables varied, precise manipulation via standard motion planning. It works for many variations of object size/shape/placement with no demos or RL needed! Dyson Robotics Lab: NodeSLAM https://edgarsucar.github.io/NodeSLAM
RL offers the possibility of extending this with new types of learned interaction. For me, deep RL work usually tangles up perception and action too much and ends up limited to toy problems; why not start from explicit 3D scene understanding and use learning relative to that? The hardest part of robotics is still not learning action, but making 3D scene understanding actually work robustly, precisely, and efficiently with real sensors in the cluttered real world. https://arxiv.org/abs/1803.11288
reference: https://twitter.com/AjdDavison/status/1476142891990986752?s=20
Model Based Reinforcement Learning: Policy Iteration, Value Iteration, and Dynamic Programming
https://youtu.be/sJIFUTITfBc (Second video in the series!)
reference: https://twitter.com/eigensteve/status/1479553002113417218?s=20
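Since the video covers value iteration, here is a minimal sketch of that algorithm on a tiny made-up MDP (the transition and reward numbers are illustrative, not from the video): repeatedly apply the Bellman optimality backup until the value function stops changing, then read off the greedy policy.

```python
import numpy as np

# Toy 2-state, 2-action MDP; the numbers are made up for illustration.
gamma = 0.9
# P[s, a, s'] = transition probability; each P[s, a, :] sums to 1.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
# R[s, a] = expected immediate reward.
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup:
    # Q(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') * V(s')
    Q = R + gamma * (P @ V)               # shape (states, actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # value function has converged
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
print("V* =", V, "greedy policy =", policy)
```

Policy iteration differs only in alternating a full policy-evaluation solve with a greedy improvement step; both are instances of dynamic programming on the Bellman equations.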
-
Oldies but goldies: Lance Williams, Pyramidal parametrics, 1983. Introduces the idea of mip-mapping to reduce aliasing for 3D textured rendering. https://en.wikipedia.org/wiki/Mipmap
- Michael Black: Lance was one of the best-read and most creative scientists I’ve known.
reference: https://twitter.com/gabrielpeyre/status/1474620930575855616?s=20
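The core of mip-mapping is precomputing a pyramid of progressively half-resolution copies of a texture, so the renderer can sample the level whose texel density roughly matches the pixel's screen-space footprint instead of aliasing against the full-resolution image. A toy pyramid builder, assuming a square power-of-two texture and a simple 2x2 box filter (Williams' paper also covers interpolating between levels at lookup time, which this sketch omits):

```python
import numpy as np

def build_mip_pyramid(texture):
    """Build mip levels for a square, power-of-two (H, W, C) texture."""
    levels = [texture]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        h, w = t.shape[0] // 2, t.shape[1] // 2
        # Average each 2x2 block of texels into one texel of the next level.
        levels.append(t.reshape(h, 2, w, 2, -1).mean(axis=(1, 3)))
    return levels

tex = np.random.rand(256, 256, 3)      # stand-in for a real texture
pyramid = build_mip_pyramid(tex)
print([lvl.shape for lvl in pyramid])  # 256x256 down to 1x1
```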
-
An EveryPoint beta user scanned a Warthog from Halo. He was unable to walk around the window display, but he captured amazing detail from a few back-and-forth passes with the Video+LiDAR 3D scan mode. Check it out on @Sketchfab: https://skfb.ly/o8oO9
reference: https://twitter.com/EveryPointIO/status/1476257458196598793?s=20
-
Web-based TeX is awesome, but packages like MathJax and KaTeX still don't provide the beautiful typography of desktop LaTeX (where math fonts nicely match prose). Starting to hack on TeX export for @UsePenrose to provide the best of both worlds: web or desktop—you choose.
reference: https://twitter.com/keenanisalive/status/1478018932523085826?s=20
-
To help build more versatile & robust AI speech recognition tools, we are announcing Audio-Visual HuBERT (AV-HuBERT), a state-of-the-art self-supervised framework for understanding speech that learns by observing & hearing people speak. https://ai.facebook.com/blog/ai-that-understands-speech-by-looking-as-well-as-hearing
reference: https://twitter.com/MetaAI/status/1479498448013430785?s=20
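HuBERT-style training masks spans of the input and asks a Transformer to predict offline cluster pseudo-labels for the hidden frames; AV-HuBERT applies this to fused audio and lip-video streams. Below is a toy sketch of that masked-prediction objective; every module size, the concat fusion, and the random pseudo-labels are illustrative assumptions, not the actual AV-HuBERT architecture.

```python
import torch
import torch.nn as nn

class ToyAVMaskedPredictor(nn.Module):
    def __init__(self, audio_dim=80, video_dim=512, d=256, n_clusters=100):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d)
        self.video_proj = nn.Linear(video_dim, d)
        self.fuse = nn.Linear(2 * d, d)        # concat-then-project fusion
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(d, n_clusters)   # predict cluster pseudo-labels
        self.mask_emb = nn.Parameter(torch.zeros(d))

    def forward(self, audio, video, mask):
        # audio: (B, T, audio_dim), video: (B, T, video_dim), mask: (B, T) bool
        x = self.fuse(torch.cat([self.audio_proj(audio),
                                 self.video_proj(video)], dim=-1))
        x[mask] = self.mask_emb                # hide the masked frames
        return self.head(self.encoder(x))      # (B, T, n_clusters)

B, T = 2, 50
model = ToyAVMaskedPredictor()
audio, video = torch.randn(B, T, 80), torch.randn(B, T, 512)
mask = torch.rand(B, T) < 0.3                  # mask ~30% of the frames
targets = torch.randint(0, 100, (B, T))        # offline cluster pseudo-labels
logits = model(audio, video, mask)
# The loss is computed only on the frames the model had to infer.
loss = nn.functional.cross_entropy(logits[mask], targets[mask])
loss.backward()
```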