- Feature Detection and Tracking
- Depth Estimation (3D Reconstruction)
- Optical Flow Estimation
- Intensity-Image Reconstruction
- Localization and Ego-motion estimation
- Visual Odometry and SLAM (Simultaneous Localization And Mapping)
- Visual-Inertial Odometry
- Visual Stabilization
- Video Processing
- Pattern recognition
- Control
- Space Applications
- Slip detection (Manipulation)
- DVS (Dynamic Vision Sensor): Lichtsteiner, P., Posch, C., and Delbruck, T., A 128x128 120dB 15μs latency asynchronous temporal contrast vision sensor, IEEE J. Solid-State Circuits, 43(2):566-576, 2008.
- Product page at iniVation. Buy a DVS
- Product specifications
- User guide
- Introductory videos about the DVS
- More videos about the DVS technology
- iniVation AG invents, produces and sells neuromorphic technologies, with a special focus on bringing event-based vision into business. Slides by S. E. Jakobsen, board member of iniVation.
- Samsung's DVS (Gen2)
- Son, B., et al., 4.1 A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation, IEEE Int. Solid-State Circuits Conf. (ISSCC), San Francisco, CA, 2017, pp. 66-67.
- Slides and Video by Yoel Yaffe, Samsung Israel Research Center, Samsung Electronics.
- DLS (Dynamic Line Sensor): Posch, C., Hofstaetter, M., Matolin, D., Vanstraelen, G., Schoen, P., Donath, N., and Litzenberger, M., A dual-line optical transient sensor with on-chip precision time-stamp generation, IEEE Int. Solid-State Circuits Conf. - Digest of Technical Papers, Lisbon Falls, MN, US, 2007.
- LWIR DVS: Posch, C., Matolin, D., Wohlgenannt, R., Maier, T., Litzenberger, M., A Microbolometer Asynchronous Dynamic Vision Sensor for LWIR, IEEE Sensors Journal, 9 (2009), 6; S. 654 - 664.
- Prototype, commercially n.a.
- Smart DVS (GAEP): Posch, C., Hoffstaetter, M., Schoen, P., A SPARC-compatible general purpose Address-Event processor with 20-bit 10ns-resolution asynchronous sensor data interface in 0.18um CMOS, IEEE International Symposium on Circuits and Systems (ISCAS), 2010.
- Prototype, commercially n.a.
- DAVIS (Dynamic and Active-Pixel Vision Sensor):
Brandli, C., Berner, R., Yang, M., Liu, S.-C., Delbruck, T., A 240x180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor, IEEE J. Solid-State Circuits, 49(10):2333-2341, 2014.
- Product page at iniVation. Buy a DAVIS
- Product specifications
- User guide
- Color-DAVIS: Li, C., Brandli, C., Berner, R., Liu, H., Yang, M., Liu, S.-C., Delbruck, T., Design of an RGBW Color VGA Rolling and Global Shutter Dynamic and Active-Pixel Vision Sensor, IEEE Int. Symp. Circuits and Systems (ISCAS), 2015, pp. 718-721.
- Insightness's Silicon Eye QVGA event sensor.
- The Silicon Eye Technology
- Slides and Video by Christian Brandli, CEO and co-founder of Insightness.
- ATIS (Asynchronous Time-based Image Sensor): Posch, C., Matolin, D., Wohlgenannt, R. (2011). A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor With Lossless Pixel-Level Video Compression and Time-Domain CDS, IEEE J. Solid-State Circuits, 46(1):259-275, 2011.
- Posch, C., Serrano-Gotarredona, T., Linares-Barranco, B., Delbruck, T.,
Retinomorphic Event-Based Vision Sensors: Bioinspired Cameras With Spiking Output,
Proc. IEEE (2014), 102(10):1470-1484.
- CeleX (Hillhouse Technology, Singapore). YouTube
- Sensitive DVS (sDVS)
- Leñero-Bardallo, J. A., Serrano-Gotarredona, T., Linares-Barranco, B., A 3.6us Asynchronous Frame-Free Event-Driven Dynamic-Vision-Sensor, IEEE J. of Solid-State Circuits, 46(6):1443-1455, 2011.
- Serrano-Gotarredona, T. and Linares-Barranco, B., A 128x128 1.5% Contrast Sensitivity 0.9% FPN 3us Latency 4mW Asynchronous Frame-Free Dynamic Vision Sensor Using Transimpedance Amplifiers, IEEE J. Solid-State Circuits, 48(3):827-838, 2013.
- iniVation AG invents, produces and sells neuromorphic technologies, with a special focus on bringing event-based vision into business.
- iniLabs AG invents neuromorphic technologies for research.
- Samsung develops Gen2 and Gen3 dynamic vision sensors and event-based vision solutions.
- IBM Research (SyNAPSE project) and Samsung partnered to combine the TrueNorth chip (brain) with a DVS (eye).
- Prophesee (Formerly Chronocam) develops bio-inspired and self-adapting approach to the need for visual sensing and processing in autonomous vehicles, connected devices, security and surveillance systems.
- Insightness AG builds visual systems to give mobile devices spatial awareness. The Silicon Eye Technology.
- SLAMcore develops Localisation and mapping solutions for AR/VR, robotics & autonomous vehicles.
- Hillhouse Technology offers integrated sensory platforms that incorporate various components and technologies, including a processing chipset and an image sensor (a dynamic vision sensor called CeleX).
- AIT Austrian Institute of Technology sells neuromorphic sensor products.
- Serrano-Gotarredona, T., Andreou, A.G., Linares-Barranco, B.,
AER Image Filtering Architecture for Vision Processing Systems,
IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., 46(9):1064-1071, 1999.
- Serrano-Gotarredona, R., Oster, M., Lichtsteiner, P., Linares-Barranco, A., Paz-Vicente, R., Gomez-Rodriguez, F., Riis, H.K., Delbruck, T., Liu, S.-C., Zahnd, S., Whatley, A.M., Douglas, R., Hafliger, P., Jimenez-Moreno, G., Civit, A., Serrano-Gotarredona, T., Acosta-Jimenez, A., Linares-Barranco, B.,
AER building blocks for multi-layer multi-chip neuromorphic vision systems,
Advances in Neural Information Processing Systems, 1217-1224, 2006.
- Delbruck, T.,
Frame-free dynamic digital vision,
Int. Symp. Secure-Life Electronics, Advanced Electronics for Quality Life and Society, University of Tokyo, Tokyo, Japan, Mar. 6-7, 2008, pp. 21-26. Introduces the software architecture of jAER and shows examples of several event-based processing algorithms.
- Liu, S.-C. and Delbruck, T.,
Neuromorphic sensory systems,
Current Opinion in Neurobiology, 20:3(288-295), 2010.
- Posch, C.,
Bio-inspired vision,
J. of Instrumentation, 7 C01054, 2012. Bio-inspired explanation of the DVS and the ATIS.
- Delbruck, T.,
Fun with asynchronous vision sensors and processing,
Computer Vision - ECCV 2012 Workshops and Demonstrations, Springer Berlin/Heidelberg, 2012. A position paper and summary of recent accomplishments of the INI Sensors' group.
- Zamarreño-Ramos, C., Linares-Barranco, A., Serrano-Gotarredona, T., Linares-Barranco, B.,
Multi-Casting Mesh AER: A Scalable Assembly Approach for Reconfigurable Neuromorphic Structured AER Systems. Application to ConvNets,
IEEE Trans. Biomed. Circuits Syst., 7(1):82-102, 2013.
- Liu, S.-C., Delbruck, T., Indiveri, G., Whatley, A., Douglas, R.,
Event-Based Neuromorphic Systems,
Wiley, ISBN: 978-1-118-92762-5, 2014.
- Chicca, E., Stefanini, F., Bartolozzi, C., Indiveri, G.,
Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems,
Proc. IEEE, 102(9):1367-1388, 2014.
- Delbruck, T.,
Neuromorphic Vision Sensing and Processing (Invited paper),
46th Eur. Solid-State Device Research Conference (ESSDERC), Lausanne, 2016, pp. 7-14.
- Vanarse, A., Osseiran, A., Rassau, A.,
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors,
Front. Neurosci. (2016), 10:115.
- Litzenberger, M., Posch, C., Bauer, D., Belbachir, A. N., Schon, P., Kohn, B., Garn, H.,
Embedded Vision System for Real-Time Object Tracking using an Asynchronous Transient Vision Sensor,
IEEE 12th Digital Signal Proc. Workshop and 4th IEEE Signal Proc. Education Workshop, Teton National Park, WY, 2006, pp. 173-178.
- Litzenberger, M., Kohn, B., Belbachir, A.N., Donath, N., Gritsch, G., Garn, H., Posch, C., Schraml, S.,
Estimation of Vehicle Speed Based on Asynchronous Data from a Silicon Retina Optical Sensor,
IEEE Intelligent Transportation Systems Conf., Toronto, Ont., 2006, pp. 653-658. PDF
- Drazen, D., Lichtsteiner, P., Haefliger, P., Delbruck, T., Jensen, A.,
Toward real-time particle tracking using an event-based dynamic vision sensor,
Experiments in Fluids (2011), 51(1):1465-1469. PDF
- Ni, Z., Pacoret, C., Benosman, R., Ieng, S., Regnier, S.,
Asynchronous event-based high speed vision for microparticle tracking,
J. Microscopy (2011), 245(3):236-244.
- Ni, Z., Bolopion, A., Agnus, J., Benosman, R., Regnier, S.,
Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics,
IEEE Trans. Robot., 28(5):1081-1089, 2012.
- Piatkowska, E., Belbachir, A. N., Schraml, S., Gelautz, M.,
Spatiotemporal multiple persons tracking using Dynamic Vision Sensor,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012, pp. 35-40. PDF
- Ni, Z., Ph.D. Thesis, 2013,
Asynchronous Event Based Vision: Algorithms and Applications to Microrobotics.
- Lagorce, X., Ieng, S. H., Benosman, R.,
Event-based features for robotic vision,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2013, pp. 4214-4219.
- Borer, D., Rosgen, T.,
Large-scale Particle Tracking with Dynamic Vision Sensors,
ISFV16 - 16th Int. Symp. Flow Visualization, Okinawa, 2014. Project page, PDF
- Lagorce, X., Meyer, C., Ieng, S. H., Filliat, D., Benosman, R.,
Live demonstration: Neuromorphic event-based multi-kernel algorithm for high speed visual features tracking,
IEEE Biomedical Circuits and Systems Conference (BioCAS), Lausanne, 2014, pp. 178-178.
- Lagorce, X., Meyer, C., Ieng, S. H., Filliat, D., Benosman, R.,
Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking,
IEEE Trans. Neural Netw. Learn. Syst., 26(8):1710-1720, 2015.
- Lagorce, X., Ieng, S.-H., Clady, X., Pfeiffer, M., Benosman, R.,
Spatiotemporal features for asynchronous event-based data,
Front. Neurosci. (2015), 9:46.
- Ni, Z., Ieng, S. H., Posch, C., Regnier, S., Benosman, R.,
Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras,
Neural Computation (2015), 27(4):925-953.
- Linares-Barranco, A., Gómez-Rodríguez, F., Villanueva, V., Longinotti, L., Delbrück, T.,
A USB3.0 FPGA event-based filtering and tracking framework for dynamic vision sensors,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2015, pp. 2417-2420.
- Barranco, F., Teo, C. L., Fermüller, C., Aloimonos, Y.,
Contour Detection and Characterization for Asynchronous Event Sensors,
IEEE Int. Conf. Computer Vision (ICCV), 2015, pp. 486-494. PDF
- Liu, H., Moeys, D. P., Das, G., Neil, D., Liu, S.-C., Delbruck, T.,
Combined frame- and event-based detection and tracking,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2016, pp. 2511-2514.
- Tedaldi, D., Gallego, G., Mueggler, E., Scaramuzza, D.,
Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS),
IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF, YouTube
- Braendli, C., Strubel, J., Keller, S., Scaramuzza, D., Delbruck, T.,
ELiSeD - An Event-Based Line Segment Detector,
Int. Conf. on Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF
- Glover, A. and Bartolozzi, C.,
Event-driven ball detection and gaze fixation in clutter,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 2203-2208. YouTube, Code
- Clady, X., Maro, J.-M., Barré, S., Benosman, R. B.,
A Motion-Based Feature for Event-Based Pattern Recognition,
Front. Neurosci. (2017), 10:594.
- Zhu, A., Atanasov, N., Daniilidis, K.,
Event-based Feature Tracking with Probabilistic Data Association,
IEEE Int. Conf. Robotics and Automation (ICRA), 2017. PDF, YouTube, Code
- Glover, A. and Bartolozzi, C.,
Robust Visual Tracking with a Freely-moving Event Camera,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2017. YouTube, Code
- Barrios-Avilés, J., Iakymchuk, T., Samaniego, J., Rosado-Muñoz, A.,
An Event-based Fast Movement Detection Algorithm for a Positioning Robot Using POWERLINK Communication,
arXiv:1707.07188, 2017.
- Li, J., Shi, F., Liu, W., Zou, D., Wang, Q., Park, Paul-K.J., Hyunsurk, E.R.,
Adaptive Temporal Pooling for Object Detection using Dynamic Vision Sensor,
British Machine Vision Conf. (BMVC), 2017.
- Peng, X., Zhao, B., Yan, R., Tang, H., Yi, Z.,
Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors,
IEEE Trans. Neural Netw. Learn. Syst., 28(4):791-803, 2017.
- Ramesh, B., Yang, H., Orchard, G., Le Thi, N.A., Xiang, C.,
DART: Distribution Aware Retinal Transform for Event-based Cameras,
arXiv:1710.10800, 2017.
- Gehrig, D., Rebecq, H., Gallego, G., Scaramuzza, D.,
Asynchronous, Photometric Feature Tracking using Events and Frames,
European Conf. Computer Vision (ECCV), 2018. Poster, YouTube, Oral presentation.
- Mitrokhin, A., Fermuller, C., Parameshwara, C., Aloimonos, Y.,
Event-based Moving Object Detection and Tracking,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018. YouTube, Project page and Dataset
- Barranco, F., Fermuller, C., Ros, E.,
Real-Time Clustering and Multi-Target Tracking Using Event-Based Sensors,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018.
- Iacono, M., Weber, S., Glover, A., Bartolozzi, C.,
Towards Event-Driven Object Detection with Off-The-Shelf Deep Learning,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018.
- Ramesh, B., Zhang, S., Lee, Z.-W., Gao, Z., Orchard, G., Xiang, C.,
Long-term object tracking with a moving event camera,
British Machine Vision Conf. (BMVC), 2018. Video
- Clady, X., Ieng, S.-H., Benosman, R.,
Asynchronous event-based corner detection and matching,
Neural Networks (2015), 66:91-106.
- Vasco, V., Glover, A., Bartolozzi, C.,
Fast event-based Harris corner detection exploiting the advantages of event-driven cameras,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 4144-4149. YouTube, Code
- Mueggler, E., Bartolozzi, C., Scaramuzza, D.,
Fast Event-based Corner Detection,
British Machine Vision Conf. (BMVC), 2017. YouTube, Code
- Alzugaray, I., Chli, M.,
Asynchronous Corner Detection and Tracking for Event Cameras in Real Time,
IEEE Robotics and Automation Letters (RA-L), 3(4):3177-3184, Oct. 2018. PDF, YouTube.
- Alzugaray, I., Chli, M.,
ACE: An Efficient Asynchronous Corner Tracker for Event Cameras,
Int. Conf. 3D Vision (3DV), 2018. YouTube
- Rebecq, H., Gallego, G., Scaramuzza, D.,
EMVS: Event-based Multi-View Stereo,
British Machine Vision Conf. (BMVC), 2016. PDF, YouTube, 3D Reconstruction Experiments from a Train using an Event Camera
- Rebecq, H., Gallego, G., Mueggler, E., Scaramuzza, D.,
EMVS: Event-Based Multi-View Stereo—3D Reconstruction with an Event Camera in Real-Time,
Int. J. of Computer Vision (IJCV), 2017. PDF, YouTube.
- Kim et al. ECCV 2016,
Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera.
- Gallego, G., Rebecq, H., Scaramuzza, D.,
A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR) 2018. PDF, Poster, YouTube, Spotlight presentation.
- Brandli, C., Mantel, T.A., Hutter, M., Hoepflinger, M.A., Berner, R., Siegwart, R., Delbruck, T.,
Adaptive Pulsed Laser Line Extraction for Terrain Reconstruction using a Dynamic Vision Sensor,
Front. Neurosci. (2014) 7:275. PDF, YouTube
- Matsuda, N., Cossairt, O., Gupta, M.,
MC3D: Motion Contrast 3D Scanning,
IEEE Conf. Computational Photography (ICCP), Houston, TX, 2015, pp. 1-10. PDF, YouTube, Project page
- Schraml, C., Schon, P., Milosevic, N.,
Smartcam for real-time stereo vision - address-event based embedded system,
Int. Conf. Computer Vision Theory and Applications (VISAPP), 2007, pp. 466-471.
- Kogler, J., Sulzbachner, C., Kubinger, W.,
Bio-inspired stereo vision system with silicon retina imagers,
Int. Conf. Computer Vision Systems (ICVS), 2009, pp. 174-183. PDF
- Schraml, S., Belbachir, A. N., Milosevic, N., Schon, P.,
Dynamic stereo vision system for real-time tracking,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2010, pp. 1409-1412.
- Belbachir, A., Pflugfelder, R., Gmeiner, P.,
A Neuromorphic Smart Camera for Real-time 360deg distortion-free Panoramas,
IEEE Conference on Distributed Smart Cameras, 2010.
- Kogler, J., Sulzbachner, C., Humenberger, M., Eibensteiner, F.,
Address-Event Based Stereo Vision with Bio-Inspired Silicon Retina Imagers,
Advances in Theory and Applications of Stereo Vision (2011), pp. 165-188.
- Kogler, J., Humenberger, M., Sulzbachner, C.,
Event-Based Stereo Matching Approaches for Frameless Address Event Stereo Data,
Int. Symp. Visual Computing (ISVC) 2011, Advances in Visual Computing, pp. 674-685.
- Benosman, R., Ieng, S. H., Rogister, P., Posch, C.,
Asynchronous Event-Based Hebbian Epipolar Geometry,
IEEE Trans. Neural Netw., 22(11):1723-1734, 2011.
- Rogister, P., Benosman, R., Ieng, S.-H., Lichtsteiner, P., Delbruck, T.,
Asynchronous Event-Based Binocular Stereo Matching,
IEEE Trans. Neural Netw. Learn. Syst., 23(2):347-353, 2012.
- Carneiro, J., Ieng, S.-H., Posch, C., Benosman, R.,
Event-based 3D reconstruction from neuromorphic retinas,
Neural Networks (2013), 45:27-38.
- Lee et al., TNNLS 2014,
Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas.
- Carneiro, Ph.D. Thesis, 2014,
Asynchronous Event-Based 3D Vision.
- Piatkowska, E., Belbachir, A. N., Gelautz, M.,
Asynchronous Stereo Vision for Event-Driven Dynamic Stereo Sensor Using an Adaptive Cooperative Approach,
IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2013, pp. 45-50.
- Piatkowska, E., Belbachir, A. N., Gelautz, M.,
Cooperative and asynchronous stereo vision for dynamic vision sensors,
Meas. Sci. Technol. (2014), 25(5).
- Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Ieng, S. H., Benosman, R. B., Linares-Barranco, B.,
On the use of orientation filters for 3D reconstruction in event-driven stereo vision,
Front. Neurosci. (2014) 8:48.
- Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Linares-Barranco, B., Ieng, S., Benosman, R.,
Event-Driven Stereo Vision with Orientation Filters,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2014, pp. 257-260.
- Belbachir, A. N., Schraml, S., Mayerhofer, M., Hofstatter, M.,
A Novel HDR Depth Camera for Real-time 3D 360-degree Panoramic Vision,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2014, pp. 419-426. PDF
- Eibensteiner, F., Kogler, J., Scharinger, J.,
A High-Performance Hardware Architecture for a Frameless Stereo Vision Algorithm Implemented on a FPGA Platform,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2014, pp. 637-644.
- Schraml, S., Belbachir, A. N., Bischof, H.,
Event-Driven Stereo Matching for Real-Time 3D Panoramic Vision,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2015, pp. 466-474. PDF. Slides.
- Schraml, S., Belbachir, A. N., Bischof, H.,
An Event-Driven Stereo System for Real-Time 3-D 360° Panoramic Vision,
IEEE Trans. Ind. Electron., 63(1):418-428, 2016.
- Firouzi, M. and Conradt, J.,
Asynchronous Event-based Cooperative Stereo Matching Using Neuromorphic Silicon Retinas,
Neural Processing Letters, 43(2):311-326, Apr. 2016. PDF
- Osswald, M., Ieng, S.-H., Benosman, R., Indiveri, G.,
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems,
Scientific Reports 7, Article number: 40703 (2017).
- Piatkowska, E., Kogler, J., Belbachir, N., Gelautz, M.,
Improved Cooperative Stereo Matching for Dynamic Vision Sensors with Ground Truth Evaluation,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 370-377. PDF.
- Dikov, G., Firouzi, M., Röhrbein, F., Conradt, J., Richter, C.,
Spiking Cooperative Stereo-Matching at 2 ms Latency with Neuromorphic Hardware,
Conf. Biomimetic and Biohybrid Systems (Living Machines 2017), pp. 119-137. Lecture Notes in Computer Science, vol. 10384, Springer, Cham. PDF, Videos
- Eibensteiner, F., Brachtendorf, H. G., Scharinger, J.,
Event-driven stereo vision algorithm based on silicon retina sensors,
27th Int. Conf. Radioelektronika, Brno, 2017, pp. 1-6.
- Zou, D., Shi, F., Liu, W., Li, J., Wang, Q., Park, P.-K.J., Hyunsurk, E. R.,
Robust Dense Depth Map Estimation from Sparse DVS Stereos,
British Machine Vision Conf. (BMVC), 2017. Supp. Material.
- Camuñas-Mesa, L. A., Serrano-Gotarredona, T., Ieng, S., Benosman, R., Linares-Barranco, B.,
Event-driven Stereo Visual Tracking Algorithm to Solve Object Occlusion,
IEEE Trans. Neural Netw. Learn. Syst., 2017.
- Xie, Z., Chen, S., Orchard, G.,
Event-Based Stereo Depth Estimation Using Belief Propagation,
Front. Neurosci. (2017), 11:535. YouTube
- Martel, J. N., Mueller, J., Conradt, J., Sandamirskaya, Y.,
An Active Approach to Solving the Stereo Matching Problem using Event-Based Sensors,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2018, pp. 1-5.
- Andreopoulos, A., Kashyap, H.J., Nayak, T.K., Amir, A., Flickner, M.D.,
A Low Power, High Throughput, Fully Event-Based Stereo System,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018.
- Zhu, A., Chen, Y., Daniilidis, K.,
Realtime Time Synchronized Event-based Stereo,
European Conf. Computer Vision (ECCV), 2018. YouTube
- Zhou, Y., Gallego, G., Rebecq, H., Kneip, L., Li, H., Scaramuzza, D.,
Semi-Dense 3D Reconstruction with a Stereo Event Camera,
European Conf. Computer Vision (ECCV), 2018. Poster, YouTube.
- Cook et al. IJCNN 2011,
Interacting maps for fast visual interpretation.
Joint estimation of optical flow, image intensity and angular velocity with a rotating event camera.
- Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C., Srinivasan, M.,
Asynchronous Frameless Event-Based Optical Flow,
Neural Networks (2012), 27:32-37.
- Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., Bartolozzi, C.,
Event-Based Visual Flow,
IEEE Trans. Neural Netw. Learn. Syst., 25(2):407-417, 2014.
- Orchard, G., Benosman, R., Etienne-Cummings, R., Thakor, N.,
A Spiking Neural Network Architecture for Visual Motion Estimation,
IEEE Biomedical Circuits and Systems Conf. (BioCAS), Rotterdam, 2013, pp. 298-301.
- Clady, X., Clercq, C., Ieng, S.H., Houseini, F., Randazzo, M., Natale, L., Bartolozzi, C., Benosman, R.,
Asynchronous visual event-based time-to-contact,
Front. Neurosci. (2014), 8:9.
- Tschechne, S., Sailer, R., Neumann, H.,
Bio-Inspired Optic Flow from Event-Based Neuromorphic Sensor Input,
IAPR Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR) 2014, pp. 171-182.
- Barranco, F., Fermüller, C., Aloimonos, Y.,
Contour motion estimation for asynchronous event-driven cameras,
Proc. IEEE (2014), 102(10):1537-1556. PDF
- Barranco, F., Fermüller, C., Aloimonos, Y.,
Bio-inspired Motion Estimation with Event-Driven Sensors,
Int. Work-Conf. Artificial Neural Networks (IWANN) 2015, Advances in Computational Intelligence, pp. 309-321.
- Conradt, J.,
On-Board Real-Time Optic-Flow for Miniature Event-Based Vision Sensors,
IEEE Int. Conf. Robotics and Biomimetics (ROBIO), 2015, pp. 1858-1863.
- Brosch, T., Tschechne, S., Neumann, H.,
On event-based optical flow detection,
Front. Neurosci. (2015), 9:137.
- Kosiorek, A., Adrian, D., Rausch, J., Conradt, J.,
An Efficient Event-Based Optical Flow Implementation in C/C++ and CUDA,
Tech. Rep. TU Munich, 2015.
- Mueggler, E., Forster, C., Baumli, N., Gallego, G., Scaramuzza, D.,
Lifetime Estimation of Events from Dynamic Vision Sensors,
IEEE Int. Conf. Robotics and Automation (ICRA), 2015, pp. 4874-4881. PDF, PPT, Code
- Rueckauer, B. and Delbruck, T.,
Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor,
Front. Neurosci. (2016), 10:176.
- Bardow, P. A., Davison, A. J., Leutenegger, S.,
Simultaneous Optical Flow and Intensity Estimation from an Event Camera,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2016. YouTube
- Liu, M., Delbruck, T.,
Block-Matching Optical Flow for Dynamic Vision Sensors: Algorithm and FPGA Implementation,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2017.
- Haessig, G., Cassidy, A., Alvarez, R., Benosman, R., Orchard, G.,
Spiking Optical Flow for Event-based Sensors Using IBM's TrueNorth Neurosynaptic System,
arXiv:1710.09820, 2017.
- Stoffregen, T., Kleeman, L.,
Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor,
Australasian Conference on Robotics and Automation (ACRA), 2017. PDF, YouTube
- Gallego et al. CVPR 2018,
A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation.
- Zhu, A., Yuan, L., Chaney, K., Daniilidis, K.,
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras,
Robotics: Science and Systems XIV (RSS), 2018. PDF, YouTube, Code
- Liu, M., Delbruck, T.,
Adaptive Time-Slice Block-Matching Optical Flow Algorithm for Dynamic Vision Sensors,
British Machine Vision Conf. (BMVC), 2018. Supplementary material, Video
- Cook, M., Gugelmann, L., Jug, F., Krautz, C., Steger, A.,
Interacting maps for fast visual interpretation,
Int. Joint Conf. on Neural Networks (IJCNN), San Jose, CA, 2011, pp. 770-776. YouTube
- Martel, J. N. P., Cook, M.,
A Framework of Relational Networks to Build Systems with Sensors able to Perform the Joint Approximate Inference of Quantities,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Workshop on Unconventional Computing for Bayesian Inference, 2015, Hamburg. PDF
- Martel, J. N. P., Chau, M., Dudek, P., Cook, M.,
Toward joint approximate inference of visual quantities on cellular processor arrays,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2015, pp. 2061-2064.
- Kim, H., Handa, A., Benosman, R., Ieng, S.-H., Davison, A. J.,
Simultaneous Mosaicing and Tracking with an Event Camera,
British Machine Vision Conf. (BMVC), 2014. PDF, YouTube.
- Barua, S., Miyatani, Y., Veeraraghavan, A.,
Direct face detection and video reconstruction from event cameras,
IEEE Winter Conf. Applications of Computer Vision (WACV), Lake Placid, NY, 2016, pp. 1-9. YouTube
- Bardow et al. CVPR 2016,
Simultaneous Optical Flow and Intensity Estimation from an Event Camera.
- Reinbacher, C., Graber, G., Pock, T.,
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation,
British Machine Vision Conf. (BMVC), 2016. PDF, YouTube, Code
- Moeys, D. P., Li, C., Martel, J. N. P., Bamford, S., Longinotti, L., Motsnyi, V., Bello, D. S. S., Delbruck, T.,
Color Temporal Contrast Sensitivity in Dynamic Vision Sensors,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2017. PDF.
- Munda, G., Reinbacher, C., Pock, T.,
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation,
Int. J. of Computer Vision (IJCV), 2018.
- Shedligeri, P.A., Shah, K., Kumar, D., Mitra, K.,
Photorealistic Image Reconstruction from Hybrid Intensity and Event based Sensor,
arXiv:1805.06140, 2018.
- Cook et al. IJCNN 2011,
Interacting maps for fast visual interpretation.
Joint estimation of optical flow, image intensity and angular velocity with a rotating event camera.
- Weikersdorfer, D. and Conradt, J.,
Event-based particle filtering for robot self-localization,
IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), Guangzhou, 2012, pp. 866-870. PDF
- Censi, A., Strubel, J., Brandli, C., Delbruck, T., Scaramuzza, D.,
Low-latency localization by Active LED Markers tracking using a Dynamic Vision Sensor,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2013. PDF, Slides
- Mueggler, E., Huber, B., Scaramuzza, D.,
Event-based, 6-DOF Pose Tracking for High-Speed Maneuvers,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), Chicago, IL, 2014, pp. 2761-2768. PDF, YouTube
- Gallego, G., Forster, C., Mueggler, E., Scaramuzza, D.,
Event-based Camera Pose Tracking using a Generative Event Model,
arXiv:1510.01972, 2015.
- Mueggler, E., Gallego, G., Scaramuzza, D.,
Continuous-Time Trajectory Estimation for Event-based Vision Sensors,
Robotics: Science and Systems XI (RSS), 2015. PDF, PPT, Poster
- Reverter Valeiras, D., Orchard, G., Ieng, S.-H., Benosman, R.,
Neuromorphic Event-Based 3D Pose Estimation,
Front. Neurosci. (2016), 9:522.
- Gallego, G., Lund, J.E.A., Mueggler, E., Rebecq, H., Delbruck, T., Scaramuzza, D.,
Event-based, 6-DOF Camera Tracking from Photometric Depth Maps,
IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 2017. PDF, YouTube
- Reinbacher, C., Munda, G., Pock, T.,
Real-Time Panoramic Tracking for Event Cameras,
IEEE Int. Conf. Computational Photography (ICCP), 2017, pp. 1-9. PDF, YouTube, Code
- Mueggler et al. IJRR 2017,
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
- Vasco, V., Glover, A., Mueggler, E., Scaramuzza, D., Natale, L., Bartolozzi, C.,
Independent Motion Detection with Event-driven Cameras,
Int. Conf. Advanced Robotics (ICAR), 2017, pp. 530-536. PDF
- Nguyen, A., Do, T.-T., Caldwell, D. G., Tsagarakis, N. G.,
Real-Time 6DOF Pose Relocalization for Event Cameras with Stacked Spatial LSTM Networks,
arXiv:1708.09011.
- Maqueda et al. CVPR 2018,
Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars.
- Weikersdorfer, D., Hoffmann, R., Conradt. J.,
Simultaneous localization and mapping for event-based vision systems.
Int. Conf. Computer Vision Systems (ICVS), 2013, pp. 133-142. PDF, Slides
- Censi, A. and Scaramuzza, D.,
Low-latency Event-based Visual Odometry,
IEEE Int. Conf. Robotics and Automation (ICRA), 2014, pp. 703-710. PDF, Slides
- Weikersdorfer, D., Adrian, D. B., Cremers, D., Conradt, J.,
Event-based 3D SLAM with a depth-augmented dynamic vision sensor,
IEEE Int. Conf. Robotics and Automation (ICRA), 2014, pp. 359-364.
- Weikersdorfer, Ph.D. Thesis, 2014,
Efficiency by Sparsity: Depth-Adaptive Superpixels and Event-based SLAM.
- Kueng, B., Mueggler, E., Gallego, G., Scaramuzza, D.,
Low-Latency Visual Odometry using Event-based Feature Tracks,
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 16-23. PDF, YouTube
- Kim, H., Leutenegger, S., Davison, A.J.,
Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera,
European Conference on Computer Vision (ECCV), 2016, pp. 349-364. PDF, YouTube
- Rebecq, H., Horstschaefer, T., Gallego, G., Scaramuzza, D.,
EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time,
IEEE Robotics and Automation Letters (RA-L), 2(2):593-600, 2017. PDF, PPT, Poster, YouTube.
- Gallego, G. and Scaramuzza, D.,
Accurate Angular Velocity Estimation with an Event Camera,
IEEE Robotics and Automation Letters (RA-L), 2(2):632-639, 2017. PDF, PPT, YouTube.
- Mueggler et al. IJRR 2017,
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
- Gallego et al. CVPR 2018,
A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation.
- Mueggler, E., Gallego, G., Rebecq, H., Scaramuzza, D.,
Continuous-Time Visual-Inertial Odometry for Event Cameras,
IEEE Transactions on Robotics, 2018.
- Mueggler et al. IJRR 2017,
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM.
- Zhu, A., Atanasov, N., Daniilidis, K.,
Event-based Visual Inertial Odometry,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017. PDF, Supplementary material, YouTube.
- Rebecq, H., Horstschaefer, T., Scaramuzza, D.,
Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization,
British Machine Vision Conf. (BMVC), 2017. PDF, Appendix, YouTube, Project page, PPT, Oral presentation.
- Rosinol Vidal, A., Rebecq, H., Horstschaefer, T., Scaramuzza, D.,
Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios,
IEEE Robotics and Automation Letters (RA-L), 3(2):994-1001, Apr. 2018. PDF, YouTube, Poster, Project page, ICRA18 video pitch.
- Delbruck, T., Villanueva, V., Longinotti, L.,
Integration of dynamic vision sensor with inertial measurement unit for electronically stabilized event-based vision,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2014, pp. 2636-2639. YouTube
- Brandli, C., Muller, L., Delbruck, T.,
Real-time, high-speed video decompression using a frame- and event-based DAVIS sensor,
IEEE Int. Symp. on Circuits and Systems (ISCAS), 2014, pp. 686-689.
- Serrano-Gotarredona, R., Oster, M., Lichtsteiner, P., Linares-Barranco, A., Paz-Vicente, R., Gómez-Rodríguez, F., Camuñas-Mesa, L., Berner, R., Rivas, M., Delbrück, T., Liu, S. C., Douglas, R., Häfliger, P., Jiménez-Moreno, G., Civit, A., Serrano-Gotarredona, T., Acosta-Jiménez, A., Linares-Barranco, B.,
CAVIAR: A 45k-Neuron, 5M-Synapse, 12G-connects/sec AER Hardware Sensory-Processing-Learning-Actuating System for High Speed Visual Object Recognition and Tracking,
IEEE Trans. on Neural Netw., 20(9):1417-1438, 2009. PDF
- Belbachir, A., Hofstaetter, M., Litzenberger, M., Schoen, P.,
High Speed Embedded Object Analysis Using a Dual-Line Timed-Address-Event Temporal Contrast Vision Sensor,
IEEE Trans. Ind. Electron., 58(3):770-783, 2011.
- Camuñas-Mesa, L., Zamarreño-Ramos, C., Linares-Barranco, A., Acosta-Jiménez, A., Serrano-Gotarredona, T., Linares-Barranco, B.,
An Event-Driven Multi-Kernel Convolution Processor Module for Event-Driven Vision Sensors,
IEEE J. of Solid-State Circuits, 47(2):504-517, 2012.
- Lee, J., Delbruck, T., Park, P. K. J., Pfeiffer, M., Shin, C. W., Ryu, H., Kang, B. C.,
Live demonstration: Gesture-Based remote control using stereo pair of dynamic vision sensors,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2012, pp. 736-740. PDF, YouTube
- Pérez-Carrasco, J. A., Zhao, B., Serrano, C., Acha, B., Serrano-Gotarredona, T., Chen, S., Linares-Barranco, B.,
Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate-Coding and Coincidence Processing. Application to Feed-Forward ConvNets,
IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 35(11):2706-2719, 2013.
- Lee, J. H., Delbruck, T., Pfeiffer, M., Park, P. K. J., Shin, C.-W., Ryu, H., Kang, B. C.,
Real-Time Gesture Interface Based on Event-Driven Processing From Stereo Silicon Retinas,
IEEE Trans. Neural Netw. Learn. Syst., 25(12):2250-2263, 2014.
- Orchard, G., Meyer, C., Etienne-Cummings, R., Posch, C., Thakor, N., Benosman, R.,
HFIRST: A Temporal Approach to Object Recognition,
IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 37(10):2028-2040, 2015. PDF. Code: HFIRST, a simple spiking neural network for recognition based on the canonical frame-based HMAX model.
- Zhao, B., Ding, R., Chen, S., Linares-Barranco, B., Tang, H.,
Feedforward Categorization on AER Motion Events using Cortex-like Features in a Spiking Neural Network,
IEEE Trans. Neural Netw. Learn. Syst., 26(9):1963-1978, 2015.
- Park, P.K.J. et al.,
Computationally efficient, real-time motion recognition based on bio-inspired visual and cognitive processing,
IEEE Int. Conf. Image Processing (ICIP), Quebec City, QC, 2015, pp. 932-935.
- Park, P.K.J. et al.,
Performance improvement of deep learning based gesture recognition using spatiotemporal demosaicing technique,
IEEE Int. Conf. Image Processing (ICIP), Phoenix, AZ, 2016, pp. 1624-1628.
- Barua et al. WACV 2016. Face recognition.
- Moeys, D., Corradi, F., Kerr, E., Vance, P., Das, G., Neil, D., Kerr, D., Delbruck, T.,
Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network,
IEEE Int. Conf. Event-Based Control Comm. and Signal Proc. (EBCCSP), 2016. PDF, YouTube 1, YouTube 2
- Lagorce, X., Orchard, G., Gallupi, F., Shi, B., Benosman, R.,
HOTS: A Hierarchy Of event-based Time-Surfaces for pattern recognition,
IEEE Trans. Pattern Anal. Machine Intell. (TPAMI), 39(7):1346-1359, 2017.
- Clady et al. FNINS 2017,
A Motion-Based Feature for Event-Based Pattern Recognition.
- Lungu, I.-A., Corradi, F., Delbruck, T.,
Live Demonstration: Convolutional Neural Network Driven by Dynamic Vision Sensor Playing RoShamBo,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2017. YouTube, Slides 36-39
- Amir, A., Taba, B., Berg, D., Melano, T., McKinstry, J., Di Nolfo, C., Nayak, T., Andreopoulos, A., Garreau, G., Mendoza, M., Kusnitz, J., Debole, M., Esser, S., Delbruck, T., Flickner, M., Modha, D.,
A Low Power, Fully Event-Based Gesture Recognition System,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2017. PDF, Dataset
- Stromatias, E., Soto, M., Serrano-Gotarredona, T., Linares-Barranco, B.,
An Event-Based Classifier for Dynamic Vision Sensor and Synthetic Data,
Front. Neurosci. (2017), 11:350.
- Yousefzadeh, A., Masquelier, T., Serrano-Gotarredona, T., Linares-Barranco, B.,
Live demonstration: Hardware implementation of convolutional STDP for on-line visual feature learning,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2017.
- Sironi, A., Brambilla, M., Bourdis, N., Lagorce, X., Benosman, R.,
HATS: Histograms of Averaged Time Surfaces for Robust Event-based Object Classification,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018. PDF. N-CARS Dataset: a large real-world event-based dataset for car classification.
- Maqueda, A.I., Loquercio, A., Gallego, G., Garcia, N., Scaramuzza, D.,
Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars,
IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2018. PDF, Poster, YouTube.
- Zhu et al. RSS 2018,
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras.
- Haessig, G. and Benosman, R.,
A Sparse Coding Multi-Scale Precise-Timing Machine Learning Algorithm for Neuromorphic Event-Based Sensors,
arXiv:1804.09236, 2018.
- Liu, W., Chen, H., Goel, R., Huang, Y., Veeraraghavan, A., Patel, A.,
Fast Retinomorphic Event-Driven Representations for Video Recognition and Reinforcement Learning,
arXiv:1805.06374, 2018.
- Delbruck, T. and Lichtsteiner, P.,
Fast sensory motor control based on event-based hybrid neuromorphic-procedural system,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2007, pp. 845-848.
- Conradt, J., Cook, M., Berner, R., Lichtsteiner, P., Douglas, R. J., Delbruck, T.,
A Pencil Balancing Robot Using a Pair of AER Dynamic Vision Sensors,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2009, pp. 781-784. PDF, Poster, Project page, YouTube 1, YouTube 2, YouTube 3
- Conradt, J., Berner, R., Cook, M., Delbruck, T.,
An embedded AER dynamic vision sensor for low-latency pole balancing,
IEEE Int. Conf. Computer Vision Workshops (ICCVW), 2009. PDF
- Delbruck, T. and Lang, M.,
Robotic Goalie with 3ms Reaction Time at 4% CPU Load Using Event-Based Dynamic Vision Sensor,
Front. Neurosci. (2013), 7:223. PDF, YouTube
- Censi, A.,
Efficient Neuromorphic Optomotor Heading Regulation,
American Control Conference (ACC), Chicago, IL, 2015, pp. 3854-3861.
- Mueggler, E., Baumli, N., Fontana, F., Scaramuzza, D.,
Towards Evasive Maneuvers with Quadrotors using Dynamic Vision Sensors,
Eur. Conf. Mobile Robots (ECMR), Lincoln, 2015. PDF
- Delbruck, T., Pfeiffer, M., Juston, R., Orchard, G., Mueggler, E., Linares-Barranco, A., Tilden, M. W.,
Human vs. computer slot car racing using an event and frame-based DAVIS vision sensor,
IEEE Int. Symp. Circuits and Systems (ISCAS), 2015, pp. 2409-2412. YouTube 1, YouTube 2
- Moeys et al. EBCCSP 2016. VISUALISE Predator/Prey Dataset.
- Vasco, V., Glover, A., Tirupachuri, Y., Solari, F., Chessa, M., Bartolozzi, C.,
Vergence control with a neuromorphic iCub,
IEEE Int. Conf. Humanoid Robotics (Humanoids), 2016, pp. 732-738.
- Cohen, G., Afshar, S., van Schaik, A., Wabnitz, A., Bessell, T., Rutten, M., Morreale, B.,
Event-based Sensing for Space Situational Awareness,
Proc. Advanced Maui Optical and Space Surveillance (AMOS) Technologies Conf., 2017.
- Cheung, B., Rutten, M., Davey, S., Cohen, G.,
Probabilistic Multi Hypothesis Tracker for an Event Based Sensor,
Int. Conf. Information Fusion (FUSION) 2018, pp. 1-8.
- Rigi, A., Baghaei Naeini, F., Makris, D., Zweiri, Y.,
A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS),
Sensors 2018, 18, 333.
- Datasets from the Sensors group at INI (Institute of Neuroinformatics), Zurich:
- DVS09 - DVS128 Dynamic Vision Sensor Silicon Retina
- DVSFLOW16 - DVS/DAVIS Optical Flow Dataset
- DVSACT16 - DVS Datasets for Object Tracking, Action Recognition and Object Recognition
- PRED18 - VISUALISE Predator/Prey Dataset
- DDD17 - DAVIS Driving Dataset 2017
- ROSHAMBO17 - RoShamBo Rock Scissors Paper game DVS dataset
- DHPE17 - DAVIS Human Pose Estimation and Action Recognition
- DVS/DAVIS Optical Flow Dataset associated with the paper Rueckauer and Delbruck, FNINS 2016.
- Binas et al. ICML 2017. DDD17: End-To-End DAVIS Driving Dataset.
- Bardow et al. CVPR 2016, Four sequences.
- Zhu et al. RA-L 2018: MVSEC The Multi Vehicle Stereo Event Camera Dataset.
- Combined Dynamic Vision / RGB-D Dataset associated with the paper Weikersdorfer et al. ICRA 2014.
- Barranco, F., Fermuller, C., Aloimonos, Y.,
A Dataset for Visual Navigation with Neuromorphic Methods,
Front. Neurosci. (2016), 10:49.
- Mueggler, E., Rebecq, H., Gallego, G., Delbruck, T., Scaramuzza, D.,
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM,
Int. J. Robotics Research, 36:2, pp. 142-149, 2017. PDF, PDF IJRR, Dataset.
- Binas, J., Neil, D., Liu, S.-C., Delbruck, T.,
DDD17: End-To-End DAVIS Driving Dataset,
Int. Conf. Machine Learning, Sydney, Australia, PMLR 70, 2017. Dataset
- Zhu, A., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., Daniilidis, K.,
The Multi Vehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception,
IEEE Robotics and Automation Letters 3(3):2032-2039, Feb. 2018. PDF, Dataset, YouTube.
- Orchard, G., Jayawant, A., Cohen, G.K., Thakor, N.,
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades,
Front. Neurosci. (2015), 9:437. YouTube
- Neuromorphic-MNIST (N-MNIST) dataset is a spiking version of the original frame-based MNIST dataset (of handwritten digits). YouTube
- The Neuromorphic-Caltech101 (N-Caltech101) dataset is a spiking version of the original frame-based Caltech101 dataset. YouTube
- Serrano-Gotarredona,T. and Linares-Barranco, B.,
Poker-DVS and MNIST-DVS. Their History, How They were Made, and Other Details,
Front. Neurosci. (2015), 9:481.
- MNIST-DVS and FLASH-MNIST-DVS datasets are based on the original frame-based MNIST dataset. MNIST-DVS are DVS128 recordings of moving MNIST digits (at 3 scales), while FLASH-MNIST-DVS datasets are recorded by flashing the digits on a monitor.
- POKER-DVS. From a set of DVS recordings of very fast poker card browsing, 32x32 pixel windows tracking the symbols are cropped. On average each symbol lasts about 10-30ms.
- SLOW-POKER-DVS. Paper printed poker card symbols are moved at "human speed" in front of a DVS camera and recorded at 128x128 resolution.
- VISUALISE Predator/Prey Dataset associated with the paper Moeys et al. EBCCSP 2016.
- Hu, Y., Liu, H., Pfeiffer, M., Delbruck, T.,
DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition,
Front. Neurosci. (2016), 10:405. Dataset
- Liu, Q., Pineda-García, G., Stromatias, E., Serrano-Gotarredona, T., Furber, S.B.,
Benchmarking Spike-Based Visual Recognition: A Dataset and Evaluation,
Front. Neurosci. (2016), 10:496. Dataset, Dataset
- DVS128 Gesture Dataset: The dataset that was used to build the real-time gesture recognition system described in Amir et al. CVPR 2017.
- N-CARS Dataset: A large real-world event-based dataset for car classification. Sironi et al. CVPR 2018.
- Mitrokhin et al. IROS 2018, Event-based Moving Object Detection and Tracking. Project page and Dataset
- jAER (java Address-Event Representation) project. Real time sensory-motor processing for event-based sensors and systems. github page. Wiki
- caer (AER event-based framework, written in C, targeting embedded systems)
- libcaer (Minimal C library to access, configure and get/send AER data from sensors or to/from neuromorphic processors)
- ROS (Robotic Operating System)
- YARP (Yet Another Robot Platform)
- Lens focus adjustment or this other source.
- For the DAVIS: use the grayscale frames to calibrate the optics; since frames and events share the same pixel array, the resulting intrinsics also apply to the events (see the sketch after this list).
- ROS camera calibrator (monocular or stereo)
- kalibr software by ASL - ETH.
- For the DAVIS camera and IMU calibration: kalibr software by ASL - ETH, using the grayscale frames.
- For the DVS (events-only):
- Calibration using blinking LEDs or computer screens by RPG - UZH.
- DVS camera calibration by G. Orchard.
- DVS camera calibration by VLOGroup at TU Graz.
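For the frame-based route mentioned above, a standard checkerboard calibration of the DAVIS grayscale frames already yields intrinsics that also apply to the event stream. Below is a minimal sketch using OpenCV, not any of the tools listed above; the checkerboard geometry, square size and file path are assumptions.

```python
# Hypothetical sketch: intrinsic calibration of a DAVIS from its grayscale frames.
# Assumptions: a folder of checkerboard frames, 9x6 inner corners, 25 mm squares.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners of the checkerboard (assumption)
SQUARE_SIZE = 0.025     # square side length in meters (assumption)

# 3D coordinates of the board corners in the board's own frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("davis_frames/*.png"):   # hypothetical path
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = img.shape[::-1]                # (width, height)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            img, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Pinhole model with radial/tangential distortion; K and dist also apply to the events
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```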
- Several event-processing filters in the jAER (java Address-Event Representation) project
- A collection of tracking and detection algorithms using the YARP framework
- Optical Flow
- LocalPlanesFlow, inspired by the paper Benosman et al. TNNLS 2014 (a minimal sketch of the local-plane idea follows this list).
- Several algorithms compared in the paper by Rueckauer and Delbruck, FNINS 2016.
- Event-Lifetime estimation, associated with the paper Mueggler et al. ICRA 2015.
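As a rough illustration of the local-plane idea behind the LocalPlanesFlow filter referenced above (Benosman et al., TNNLS 2014): fit a plane to the timestamps of recent events in a small neighbourhood and read the velocity off the plane's spatial gradient. The event layout, window sizes and thresholds below are assumptions and do not reproduce the jAER implementation.

```python
# Toy sketch of plane-fitting flow: t = a*x + b*y + c around an event, then
# velocity is along the timestamp gradient (a, b) with magnitude 1/|(a, b)|.
import numpy as np

def local_plane_flow(events, x0, y0, t0, radius=3, dt=0.05):
    """events: (N, 3) array of (x, y, t) rows; returns (vx, vy) or None."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    mask = (np.abs(x - x0) <= radius) & (np.abs(y - y0) <= radius) & (np.abs(t - t0) <= dt)
    pts = events[mask]
    if len(pts) < 5:                       # not enough support for a plane fit
        return None
    # Least-squares fit of the plane t = a*x + b*y + c
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    sol, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    a, b = sol[0], sol[1]
    g2 = a * a + b * b
    if g2 < 1e-12:                         # degenerate plane: velocity undefined
        return None
    return (a / g2, b / g2)                # pixels per unit time
```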
- Intensity-Image reconstruction (a toy integration sketch follows this list)
- Code for intensity reconstruction, inspired by the paper Kim et al. BMVC 2014.
- DVS reconstruction code associated with the paper Reinbacher et al. BMVC 2016.
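The reconstruction codes above implement regularised methods; the toy sketch below only shows the principle they build on: each event reports a quantised change of log-intensity at its pixel, so accumulating events per pixel (with a leak to bound drift) yields a crude brightness estimate. The contrast threshold, decay rate and sensor resolution are assumptions.

```python
# Naive per-pixel integration of events into a (relative) intensity image.
import numpy as np

def integrate_events(events, width=240, height=180, C=0.15, decay=1.0):
    """events: time-ordered iterable of (x, y, polarity, t), with integer pixel
    coordinates and t in seconds; returns a relative linear-intensity image."""
    logI = np.zeros((height, width))            # log-intensity w.r.t. a grey level
    t_prev = None
    for x, y, pol, t in events:
        if t_prev is not None:
            logI *= np.exp(-decay * (t - t_prev))   # leak toward the grey level
        logI[y, x] += C if pol > 0 else -C          # one event = +/- one contrast step
        t_prev = t
    return np.exp(logI)
```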
- Localization and Ego-Motion Estimation
- Panoramic tracking code associated with the paper Reinbacher et al. ICCP 2017.
- Pattern Recognition
- A simple spiking neural network for recognition associated with the paper Orchard et al. TPAMI 2015.
- Process AEDAT: useful scripts to work with data from jAER and cAER.
- Matlab functions in jAER project
- AEDAT Tools: scripts for Matlab and Python to work with aedat files (a minimal reading sketch follows this list).
- Matlab AER functions by G. Orchard. Some basic functions for filtering and displaying AER vision data, as well as making videos.
- Python code for AER vision data by G. Orchard.
- edvstools, by D. Weikersdorfer: A collection of tools for the embedded Dynamic Vision Sensor eDVS.
- Tarsier Framework for event-based Vision in C++.
- CelexMatlabToolbox by Yuxin Zhang. Tools to decode events generated by CeleX IV DVS, visualize them and denoise.
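For completeness, here is a minimal, hedged sketch of what such scripts typically do when reading a jAER AEDAT-2.0 recording: skip the '#' header lines and unpack fixed-size big-endian (address, timestamp) records. The DVS128 address layout used below (polarity in bit 0, x in bits 1-7, y in bits 8-14) is an assumption for illustration; other sensors (e.g., DAVIS) and other AEDAT versions use different layouts, so prefer the tools listed above for real data.

```python
# Hedged sketch of an AEDAT-2.0 reader for DVS128-style events (assumptions noted above).
import struct

def read_aedat2_dvs128(path):
    events = []  # list of (x, y, polarity, timestamp_us)
    with open(path, "rb") as f:
        # Skip the ASCII header lines starting with '#'
        pos = 0
        while True:
            line = f.readline()
            if not line.startswith(b"#"):
                break
            pos = f.tell()
        f.seek(pos)
        # Read fixed-size binary records: 32-bit address + 32-bit timestamp, big-endian
        while True:
            record = f.read(8)
            if len(record) < 8:
                break
            addr, ts = struct.unpack(">II", record)
            x = (addr >> 1) & 0x7F    # assumed DVS128 layout; some tools flip x as 127 - x
            y = (addr >> 8) & 0x7F
            pol = addr & 0x01
            events.append((x, y, pol, ts))
    return events
```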
- Hoffstaetter, M., Belbachir, N., Bodenstorfer, E., Schoen, P.,
Multiple Input Digital Arbiter with Timestamp Assignment for Asynchronous Sensor Arrays,
IEEE Int. Conf. Electronics, Circuits and Systems (ICECS), Nice, France, 2006.
- Belbachir, A., Hofstaetter, M., Reisinger, K., Litzenberger, M., Schoen, P.,
High-Precision Timestamping and Ultra High-Speed Arbitration of Transient Pixels' Events,
Int. Conf. Electronics, Circuits and Systems (ICECS), 2008, pp. 886-889.
- Hoffstaetter, M., Schoen, P., Posch, C., Bauer, D.,
An integrated 20-bit 33/5M events/s AER sensor interface with 10ns time-stamping and hardware-accelerated event pre-processing,
IEEE Biomedical Circuits and Systems Conf. (BioCAS), 2009.
- Hoffstaetter, M., Litzenberger, M., Matolin, D., Posch, C.,
Hardware-accelerated address-event processing for high-speed visual object recognition,
IEEE Int. Conf. Electronics, Circuits, and Systems (ICECS), 2011.
- Dynamic Neuromorphic Asynchronous Processor (DYNAP) by aiCTX AG
- Qiao, N., Mostafa, H., Corradi, F., Osswald, M., Stefanini, F., Sumislawska, D., Indiveri, G.,
A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses,
Front. Neurosci. (2015), 9:141. PDF
- Indiveri, G., Qiao, N., Corradi, F.,
Neuromorphic Architectures for Spiking Deep Neural Networks,
IEEE Int. Electron Devices Meeting (IEDM), Washington, DC, 2015, pp. 4.2.1-4.2.4. PDF
- Wiesmann, G., Schraml, S., Litzenberger, M., Belbachir, A. N., Hofstatter, M., Bartolozzi, C.,
Event-driven embodied system for feature extraction and object recognition in robotic applications,
IEEE Conf. Computer Vision and Pattern Recognition Workshops (CVPRW), 2012, pp. 76-82.
- Galluppi, F., Denk, C., Meiner, M. C., Stewart, T. C., Plana, L. A., Eliasmith, C., Furber, S., Conradt, J.,
Event-based neural computing on an autonomous mobile platform,
IEEE Int. Conf. Robotics and Automation (ICRA), 2014, pp. 2862-2867. PDF
- Graf, R., King, R., Belbachir, A.,
Braille Vision Using Braille Display and Bio-inspired Camera,
Int. Conf. Computer Supported Education (CSEDU), SCITEPRESS Digital Library, 2014, pp. 214-219.
- ICRA 2015 Workshop on Innovative Sensing for Robotics, with a focus on Neuromorphic Sensors.
- Event-Based Vision for High-Speed Robotics (slides) IROS 2015, Workshop on Alternative Sensing for Robot Perception.
- ICRA 2017 First International Workshop on Event-based Vision.
- The Telluride Neuromorphic Cognition Engineering Workshops.
- Capo Caccia Workshops toward Cognitive Neuromorphic Engineering.
- IEEE Embedded Vision Workshop Series, with focus on Biologically-inspired vision and embedded systems.
- Mahowald, M.,
VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function,
Ph.D. thesis, California Inst. Of Technology, Pasadena, CA, 1992. PDF
She won Caltech's Clauser Prize for the best Ph.D. thesis for this work, which included the silicon retina, AER communication, and a beautiful stereopsis chip.
- Delbrück, T.,
Investigations of Analog VLSI Visual Transduction and Motion Processing,
Ph.D. Thesis, California Inst. of Technology, Pasadena, CA, 1993. PDF
- Lichtsteiner, P.,
A temporal contrast vision sensor,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2006. PDF
- Matolin, D.,
Asynchronous CMOS image sensor with extended dynamic range and suppression of time-redundant data,
Ph.D. Thesis, TU Dresden & AIT, in German, 2010.
- Berner, R.,
Building Blocks for Event-Based Sensors,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2011. PDF
- Ni, Z.,
Asynchronous Event Based Vision: Algorithms and Applications to Microrobotics,
Ph.D. Thesis, Université de Pierre et Marie Curie, Paris, France, 2013.
- Carneiro, J.,
Asynchronous Event-Based 3D Vision,
Ph.D. Thesis, Université de Pierre et Marie Curie, Paris, France, 2014.
- Weikersdorfer, D.,
Efficiency by Sparsity: Depth-Adaptive Superpixels and Event-based SLAM,
Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2014. PDF
- Borer, D. J.,
4D Flow Visualization with Dynamic Vision Sensors,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2014. PDF
- Yang, M.,
Silicon Retina and Cochlea with Asynchronous Delta Modulator for Spike Encoding,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2015.
- Brändli, C.,
Event-Based Machine Vision,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2015. PDF
- Lagorce, X.,
Computational methods for event-based signals and applications,
Ph.D. Thesis, Université de Pierre et Marie Curie, Paris, France, 2015. PDF
- Moeys, D. P.,
Analog and digital implementations of retinal processing for robot navigation systems,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2016. PDF
- Cohen, G. K.,
Event-Based Feature Detection, Recognition and Classification,
Ph.D. Thesis, Université de Pierre et Marie Curie, Paris, France, 2016. PDF
- Li, C.,
Two-stream vision sensors,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2017.
- Neil, D.,
Deep Neural Networks and Hardware Systems for Event-driven Data,
Ph.D. Thesis, ETH Zurich, Zurich, Switzerland, 2017. PDF
- Mueggler, E.,
Event-based Vision for High-Speed Robotics,
Ph.D. Thesis, University of Zurich, Zurich, Switzerland, 2017.
- Kim, H.,
Real-time visual SLAM with an event camera,
Ph.D. Thesis, Imperial College London, United Kingdom, 2017.
- See also Theses from Delbruck's group at INI
- Reisinger, K.,
EMC testing on Silicon Retinas,
MSc. Thesis, TU Wien & AIT, 2006.
- Nowakowska, A.,
Recognition of a vision approach for fall detection using a biologically inspired dynamic stereo vision sensor,
MSc. Thesis, TU Wien & AIT, 2011.
- Reingruber, H.,
An Asynchronous Data Interface for Event-based Stereo Matching,
MSc. Thesis, TU Wien & AIT, 2011.
- Zima, M.,
Hand/Arm Gesture Recognition based on Address-Event-Representation Data,
MSc. Thesis, TU Wien & AIT, 2012.
- Huber, B.,
High-Speed Pose Estimation using a Dynamic Vision Sensor,
MSc. Thesis, University of Zurich, 2014.
- Horstschaefer, T.,
Parallel Tracking, Depth Estimation, and Image Reconstruction with an Event Camera,
MSc. Thesis, University of Zurich, 2016.
- Institute of NeuroInformatics (INI) of the University of Zurich (UZH) and ETH Zurich.
- iniVation AG (commercialization of neuromorphic vision technology from INI).
- Dynamic Vision Sensor (DVS) - asynchronous temporal contrast silicon retina
- Robotics and Perception Group (RPG-UZH).
- Institut de la Vision Neuromorphics group Paris.
- AIT Austrian Institute of Technology Sensing & vision solutions group in Vienna.
- Sinapse Singapore Institute for Neurotechnology.
Please see CONTRIBUTING for details.