====== Dynamic Vision Sensor (DVS) - asynchronous temporal contrast silicon retina ======

{{dvs128-037.jpg?w=600 |DVS128 prototype cameras}}

===== News =====

  * **See [[http://inilabs.com|inilabs.com]] if you are a user** of one of our **DVS128**, **DVS128_PAER**, **eDVS128**, or **DAVIS** inilabs USB camera engineering samples.
  * **See the [[https://www.youtube.com/playlist?list=PLVtZ8f-q0U5iROmshCSqzzBNlqpUGbSkS|YouTube video collection of DVS and DAVIS recordings and applications]]**
  * [[https://wiki.lsr.ei.tum.de/nst/programming/edvsgettingstarted|Check out the eDVS (embedded dynamic vision sensor).]]
  * **Winner, ISSCC 2006 Solid State Circuits Society Jan Van Vessem Outstanding European Paper award.** {{lichtsteiner_isscc2006_d27_09.pdf|Download the 2-page ISSCC paper}}. {{lichtsteiner_dvs_jssc08.pdf|Download the full 2008 JSSC paper}}.
  * Key [[#specifications]]: **128x128 resolution, 120 dB dynamic range, 23 mW power consumption, 2.1% contrast-threshold mismatch, 15 µs latency**
  * Our partner's [[http://www.smart-systems.at/products/products_smart_optical_sensors_en.html|Smart Systems SmartEye]] - Austrian Research Centers GmbH smart traffic camera using this sensor
  * {{:ict_results_-_vision_sensors_keep_their_eye_on_the_ball_at_euro_2008.pdf|Vision sensors keep their eye on the ball at Euro 2008}} - article published 10 June 2008 on ICT Results.
  * {{:delbrucknmefreeingvisionfromframes2006.pdf|Freeing vision from frames}} - article in [[http://ine-web.org/research/newsletters/index.html|The Neuromorphic Engineer]], 2006.
  * Jorg Conradt's [[http://www.ini.uzh.ch/~conradt/research/PencilBalancer/|Pencil Balancing Robot]] is in regular use for demonstrations, Sept. 2009.
  * 20 new [[userguide|DVS128 systems]] were assembled and tested, March 2009.
  * INI spins off [[http://www.inilabs.com|inilabs]] to sell INI-developed technology, 2010.
  * 10 DVS sensors are used in {{:lichtkunst_in_der_einstein-passage_am_bahnhof_aarau_-_aarau_-_aargau_-_aargauer_zeitung.pdf|the permanent Einstein Passage exhibit in the Aarau train station}}, which opened March 2011. PhD student Christian Braendli wrote the bulk of the DVS signal processing, which runs in [[http://jaerproject.net|jAER]] and tracks individuals and groups to trigger visual effects.
  * [[http://aer-ear.ini.uzh.ch|Shih-Chii Liu's AER-EAR binaural silicon cochlea]]. This event-based silicon cochlea offers a user-friendly USB interface to jAER and allows rapid development of event-based auditory processing algorithms for sound localization and auditory scene analysis. New AER-EAR systems are being built in April 2011.
  * 200 DVS128 cameras were commercially assembled, June 2011.

===== Technology Briefing =====

Conventional vision sensors see the world as a series of frames. Successive frames contain enormous amounts of redundant information, wasting memory access, RAM, disk space, energy, computational power and time. In addition, each frame imposes the same exposure time on every pixel, making it difficult to deal with scenes containing both very dark and very bright regions.

The Dynamic Vision Sensor (DVS) solves these problems by using patented technology that works like your own retina. Instead of wastefully sending entire images at fixed frame rates, only the local pixel-level changes caused by movement in a scene are transmitted – //at the time they occur//. The result is a stream of events at microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second. Power, data storage and computational requirements are also drastically reduced, and sensor dynamic range is increased by orders of magnitude due to the local processing.
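To make the event-stream idea concrete, here is a minimal sketch of the kind of record each pixel change produces: an (x, y) pixel address, an ON/OFF polarity, and a microsecond timestamp. The class and field names are hypothetical illustrations, not the actual jAER API.

<code java>
/**
 * Minimal illustrative model of a DVS address-event: each pixel change is
 * reported as an (x, y) address plus an ON/OFF polarity, stamped with a
 * microsecond-resolution time. Hypothetical sketch, not the jAER API.
 */
public final class DvsEvent {
    public final short x, y;        // pixel address, 0..127 on a 128x128 sensor
    public final boolean on;        // true = intensity increased (ON), false = decreased (OFF)
    public final long timestampUs;  // event time in microseconds

    public DvsEvent(short x, short y, boolean on, long timestampUs) {
        this.x = x;
        this.y = y;
        this.on = on;
        this.timestampUs = timestampUs;
    }

    @Override
    public String toString() {
        return String.format("t=%dus (%d,%d) %s", timestampUs, x, y, on ? "ON" : "OFF");
    }
}
</code>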
==== Application Areas ====

  * Surveillance and ambient sensing
  * Fast robotics: mobile or fixed (e.g. [[RoboGoalie]])
  * Factory automation
  * Microscopy
  * Motion analysis, e.g. human or animal motion
  * Hydrodynamics
  * Sleep research and chronobiology
  * Fluorescence imaging
  * Particle tracking

==== Advantages ====

^ Conventional high-speed vision systems ^ DVS ^ DVS Benefits ^
| Requires fast PC | Works with any laptop | Lower costs\\ Lower power consumption |
| Extremely large data storage (often several TB)\\ Highly redundant data | Low storage requirements\\ No redundant data | Lower costs\\ More portable\\ Easier and faster data management |
| Custom interface cards | Webcam-sized, USB 2.0\\ [[http://jaerproject.net|Java API]] | More portable\\ Easier programming |
| Batch-mode acquisition\\ Off-line post-processing | Real-time acquisition\\ Extremely low latency | Continuous processing\\ No downtime, lower costs |
| Low dynamic range, ordinary sensitivity\\ Needs special bright lighting (lasers, strobes, etc.) for short exposure times | High sensitivity\\ No special lighting needed | Lower costs\\ Simpler data acquisition |
| Limited dynamic range, typically 50 dB | Very high dynamic range (120 dB) | Usable in more real-world situations |

==== Case Studies ====

=== Case Study 1: Fast vision in bad lighting ===

**Problem:** You need to react quickly to moving objects under uneven lighting. Conventional video cameras are too slow, and specialized high-frame-rate cameras produce too much data to process in real time. Both conventional solutions also require intense, even lighting to support the short exposure times of high frame rates.

**Solution:** The DVS reports object movement nearly instantaneously and automatically adapts to differing lighting conditions in different parts of the image without any calibration. Its high dynamic range brings out details that conventional vision systems cannot detect, and its low data rate enables real-time, low-latency processing at low CPU load.

{{robogoalie.swf}} DVS used for a robotic goalie with 550 effective frames per second performance at 4% processor load. See [[robogoalie]].

=== Case Study 2: Fluid Particle Tracking Velocimetry (PTV) ===

**Problem:** You are analyzing turbulent fluid flow. Your conventional high-speed vision setup requires a cumbersome and expensive high-speed PC, lots of hard disk space, custom interface cards, and high-intensity laser strobe lighting to illuminate the fluid. After each test run you have to wait minutes or hours while the data is processed.

**Solution:** DVS sensors let you replace the entire system with a single standard PC with a USB connection. Only normal collimated light is required to illuminate the fluid. The small data flow can be processed in real time, enabling you to work continuously and even adjust experimental parameters on the fly.

{{ptv.swf}} DVS used for PTV, courtesy P. Hafliger, Univ. of Oslo.
=== Case Study 3: Mobile Robotics ===

**Problem:** You are deploying a fast mobile robot that must work in the real world, under tight constraints on power consumption, space and weight. Conventional vision processing systems consume far too much power to fit on the robot platform. The only alternative is to send the images off-board for processing, but that would require a separate server, increase response times and limit the robot's range.

**Solution:** The DVS does much of the front-end processing itself, giving you only the “interesting” events in a scene at the time they occur. You can integrate all of your processing hardware on board and react quickly to new input.

{{driving.swf}} DVS data from driving.

=== Case Study 4: Sleep disorder research ===

**Problem:** You are studying sleep behavior patterns. Conventional video cameras record huge amounts of uneventful data while the subject is not moving, making manual annotation of the behaviors very labor-intensive.

**Solution:** The DVS outputs only the subject's movements. Instead of playing back the data at a constant frame rate, you can play it back at a constant event rate, so that the action is continuous. A whole night of sleep can be recorded in about 100 MB of storage and played back in less than a minute. Activity levels can be extracted automatically, and any part of the recording can be viewed at 1 millisecond resolution.

{{mousesleeping.swf}} DVS used to monitor mouse activity, courtesy I. Tobler, Univ. of Zurich.

===== Functionality =====

The DVS functionality is achieved by having pixels that respond with **precisely-timed events to temporal contrast**. Movement of the scene or of an object with constant reflectance and illumination causes relative intensity change; thus **the pixels are intrinsically invariant to scene illumination and directly encode scene reflectance change**.

{{principle.png?w=500|Principle of operation}}

==== Temporal resolution and latency ====

The events are output asynchronously and nearly instantaneously on an Address-Event bus, so they have **much higher timing precision than the frame rate of a frame-based imager**. This is illustrated by recordings from a spinning disk painted with wedges of various contrasts. The disk spins at 17 rev/sec, and in the right image the events are color-coded by time. Our measurements show that we can often achieve a timing precision of 1 µs and a latency of 15 µs under bright illumination. Because there are no frames, the events can be played back at any desired rate, as shown in the right video. The low latency is very useful for robotic systems, such as the pencil balancing robot.

| {{timeresolution.png?w=300|Temporal resolution}} | {{highspeedvideo.swf|High speed video}} |

==== Dynamic range ====

Because the pixels respond locally to relative changes of intensity, the device has a **large intra-scene dynamic range**. This wide dynamic range is demonstrated by the Edmund gray-scale chart, which is differentially illuminated at a ratio of 135:1 (20·log10(135) ≈ 42 dB). At such an illumination ratio, a normal high-quality CCD-based camera like the Nikon 995 used below must expose for either the bright or the dark part of the image to obtain sensible data. Most of the vision sensor pixels still respond to the 10% contrast steps in both halves of the scene. The rightmost data were captured under a 3/4 moon with a high-contrast scene. Under these conditions the photocurrent is <20% of the photodiode leakage current, but the low threshold mismatch still allows a good response.

| {{wdr.png?w=300|DVS Wide dynamic range}} | {{edumund.swf|DVS Wide dynamic range}} | {{moonlight.swf|DVS operating under moonlight}} |
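The constant-event-rate playback used in the sleep study above, and the arbitrary-rate playback noted under temporal resolution, amount to slicing the recorded event stream into fixed-size packets instead of fixed-duration frames. The sketch below illustrates the idea using the hypothetical ''DvsEvent'' type sketched earlier; jAER has its own packet and rendering pipeline, so this is not its API.

<code java>
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of constant-event-rate playback: the time-ordered
 * event stream is cut into packets containing a fixed number of events,
 * so quiet periods pass quickly and busy periods are stretched out.
 * Each packet can then be rendered as one displayed "frame".
 * Not the jAER API; DvsEvent is the hypothetical type sketched earlier.
 */
public class ConstantEventRatePlayback {

    /** Cuts a time-ordered event list into packets of up to eventsPerPacket events. */
    public static List<List<DvsEvent>> slice(List<DvsEvent> events, int eventsPerPacket) {
        List<List<DvsEvent>> packets = new ArrayList<>();
        for (int i = 0; i < events.size(); i += eventsPerPacket) {
            packets.add(events.subList(i, Math.min(i + eventsPerPacket, events.size())));
        }
        return packets;
    }
}
</code>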
===== Technology =====

The four key innovations in this development are the pixel design, the [[http://jaerproject.net/biasgen|on-chip digital bias generators]], the highly usable USB2 implementation, and the [[http://jaerproject.net|jAER processing software]].

==== Pixel circuit ====

The pixel uses a //continuous-time front-end photoreceptor// (inspired by the [[http://www.ini.uzh.ch/~tobi/anaprose/recep/index.php|adaptive photoreceptor]]), followed by a //precision self-timed switched-capacitor differentiator// (inspired by the column amplifier used in the [[http://www.ini.uzh.ch/~tobi/bipImager/index.php|pulsed bipolar imager]]). The most novel aspects of this pixel are the ideas of self-timing the switched-capacitor differentiation and self-biasing the photoreceptor. The pixel performs a data-driven A/D conversion (like biology, but very different from the usual ADC architectures). Local capacitor-ratio matching gives the differencing circuit a precisely defined gain for changes in log intensity, thus reducing the effective imprecision of the comparators that detect positive and negative changes in log intensity. The pixel is drawn with quad mirror symmetry to isolate the analog and digital parts. Most of the pixel area is capacitance. The periphery uses the Boahen lab's AER circuits. The chip includes a [[http://jaerproject.net/biasgen|fully programmable bias current generator]] that makes the chip's operation largely independent of temperature and process variations; all dozen chips we have built up into boards behave indistinguishably with identical digital bias settings.

{{pixelprinciple.png?w=400|Pixel circuit principle}}

==== System integration ====

The DVS is integrated with a USB 2.0 high-speed interface that plugs into any PC or laptop. The host software presently comprises more than 200 Java classes. The **open source** [[http://jaerproject.net|jAER software project]] lets you render events in a variety of formats, capture them, replay them and, most importantly, process them using the events and their precise timing. See the [[userguide]] page for detailed chip and camera specifications.
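The pixel operation described above can be summarized in a simple behavioral model: the photoreceptor tracks log intensity, and an event is emitted whenever log intensity has changed by more than a threshold since the last event. The sketch below is a simplified illustration under that assumption (one event per threshold crossing, reference reset to the current level), not a circuit-accurate model.

<code java>
/**
 * Simplified behavioral model of the DVS pixel: track log intensity and
 * emit an ON (+1) or OFF (-1) event whenever log intensity has changed by
 * more than a threshold since the last event, then reset the reference.
 * Illustrative assumptions only; the real pixel is an analog circuit and
 * may, for example, signal several events for one large change.
 */
public class DvsPixelModel {

    private final double threshold; // contrast threshold in log-intensity units, e.g. ~0.1
    private double refLogI;         // log intensity memorized at the last event

    public DvsPixelModel(double initialIntensity, double threshold) {
        this.refLogI = Math.log(initialIntensity);
        this.threshold = threshold;
    }

    /** Feeds one intensity sample; returns +1 (ON event), -1 (OFF event) or 0 (no event). */
    public int update(double intensity) {
        double dLogI = Math.log(intensity) - refLogI;
        if (Math.abs(dLogI) < threshold) {
            return 0;                  // change too small: no event
        }
        refLogI = Math.log(intensity); // reset the reference after the event
        return dLogI > 0 ? +1 : -1;    // ON for brightening, OFF for darkening
    }
}
</code>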
====== Project funding ======

Current work is funded by the European FET BioICT project [[http://www.seebetter.eu/|SEEBETTER]], the Swiss National Center of Competence [[http://www.nccr-robotics.ch/|NCCR Robotics]], and the Samsung Advanced Institute of Technology (SAIT). The original development was supported by the European FET project [[http://caviar.ini.uzh.ch|CAVIAR]] and ETH Research Grant TH-18 07-1. Ongoing support is provided by the [[http://www.ini.uzh.ch|Inst. of Neuroinformatics]] through the [[http://www.uzh.ch|University of Zurich]] and the [[http://www.ethz.ch|Swiss Federal Institute of Technology (ETH Zurich)]].

====== Developers ======

This neuromorphic chip project was the PhD project of [[http://www.ini.uzh.ch/~patrick|Patrick Lichtsteiner]] and started with our colleague, the late [[http://www.ini.unizh.ch/~kramer|Jorg Kramer]], who died in July 2002. Much of this development happened during [[http://caviar.ini.uzh.ch|the CAVIAR project]].

{{tobichristophpatrick.jpg?w=300|Creators of Tmpdiff128}}

[[http://www.ini.unizh.ch/~patrick|Patrick Lichtsteiner]], postdoctoral researcher at INI (pixel design, pixel layout, chip integration, chip characterization, PCB design)\\
Christoph Posch, engineer at ARC (chip integration and device characterization)\\
[[http://www.ini.uzh.ch/~tobi|Tobi Delbruck]], group leader at INI (pixel design, bias generators, chip integration, USB interfaces, and host software)\\
[[http://www.ini.uzh.ch/~raphael|Raphael Berner]], PhD student at INI (firmware and host software).

The Boahen lab freely provided [[http://www.stanford.edu/group/brainsinsilicon/Downloads.htm|the AE peripheral communication infrastructure]]. [[http://www.ini.unizh.ch/public/person.php?uname=srinjoy|Srinjoy Mitra]] and [[http://www.ini.unizh.ch/~giacomo|Giacomo Indiveri]] provided their 0.35 µm layout for the AE circuits.

====== Publications ======

  * [[http://sensors.ini.uzh.ch/publications.html|Sensors group publications (since 2010)]]
  * [[https://www.ini.uzh.ch/~tobi/wiki/doku.php?id=publications|Delbruck publications (back to the 1990s)]]

====== User guide ======

See the [[http://inilabs.com/support/|userguide]] page at inilabs for more information if you are a user of one of the engineering prototype systems.

====== Links to related work ======

  * [[http://aer-ear.ini.uzh.ch|Shih-Chii Liu's AER-EAR binaural silicon cochlea]]. This event-based silicon cochlea offers a user-friendly USB interface to jAER and allows rapid development of event-based auditory processing algorithms for sound localization and auditory scene analysis.
  * [[http://caviar.ini.uzh.ch|The CAVIAR project]] - the EU FET project which provided early funding for this work
  * [[http://www.smart-systems.at/products/products_smart_optical_sensors_en.html|Smart Systems SmartEye]] - Austrian Research Centers GmbH (ARC) traffic camera using this sensor
  * [[http://www.eng.yale.edu/elab/|Eugenio Culurciello's E-Lab at Yale]] - creators of the Octopus AER image sensors
  * [[http://www.stanford.edu/group/brainsinsilicon/|Kwabena Boahen's Neuroengineering Lab at Stanford]] - creators of asynchronous AER communication circuits and many interesting neuromorphic devices
  * [[http://www.devise.ch|Devise]] - a part of CSEM dedicated to sensory processing, including the VISe spatial contrast retina (led by Pierre-Francois Ruedi), the torque sensor (led by Alex Mortara), and the OCR 2-chip classifier (led by Peter Masa)
  * [[http://etienne.ece.jhu.edu/|Ralph Etienne-Cummings' Computational Sensory-Motor Systems Lab, Johns Hopkins Univ.]] - makers of the Threshold Change Temporal Difference Imager (TCTDI) and many other innovative vision sensors
  * [[http://www.cnel.ufl.edu/|John Harris' Computational Neuroengineering Lab at Univ. of Florida]] - makers of the Time-To-First-Spike (TTFS) image sensors

====== Contact ======

Tobi Delbruck <tobi@ini.phys.ethz.ch>\\
Institute of Neuroinformatics\\
Winterthurerstr. 190\\
8057 Zürich\\
Switzerland

[[http://www.ini.uzh.ch]]

[[http://inilabs.com|inilabs.com]] for R&D prototype availability and support.