{"id":7180,"date":"2021-11-29T10:01:23","date_gmt":"2021-11-29T16:01:23","guid":{"rendered":"https:\/\/www.wisconsin.edu\/all-in-wisconsin-new\/?post_type=campus_story&#038;p=7180"},"modified":"2021-11-29T10:01:23","modified_gmt":"2021-11-29T16:01:23","slug":"uw-madison-real-time-video-of-scenes-hidden-around-corners-is-now-possible","status":"publish","type":"campus_story","link":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/story\/uw-madison-real-time-video-of-scenes-hidden-around-corners-is-now-possible\/","title":{"rendered":"UW-Madison: Real-time video of scenes hidden around corners is now possible"},"content":{"rendered":"<figure id=\"attachment_7185\" aria-describedby=\"caption-attachment-7185\" style=\"width: 800px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/news.wisc.edu\/real-time-video-of-scenes-hidden-around-corners-is-now-possible\/\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-7185\" src=\"https:\/\/www.wisconsin.edu\/all-in-wisconsin-new\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing-Corners_screenshot_feature-e1637783226819.jpg\" alt=\"Screenshot of video: As Ji Hyun Nam slowly tosses a stuffed cat toy into the air, a real-time video captures the playful scene \u2014 from around a corner. With further refinements, the technology could find uses in search-and-rescue, defense and medical imaging. 
(Caution: Video contains flashing lights, which may be a problem for some people, including those with photosensitive epilepsy or a history of migraines and headaches.)\" width=\"800\" height=\"445\" srcset=\"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing-Corners_screenshot_feature-e1637783226819.jpg 689w, https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing-Corners_screenshot_feature-e1637783226819-300x167.jpg 300w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><\/a><figcaption id=\"caption-attachment-7185\" class=\"wp-caption-text\">As Ji Hyun Nam slowly tosses a stuffed cat toy into the air, a real-time video captures the playful scene \u2014 from around a corner. With further refinements, the technology could find uses in search-and-rescue, defense and medical imaging. (Caution: Video contains flashing lights, which may be a problem for some people, including those with photosensitive epilepsy or a history of migraines and headaches.)<\/figcaption><\/figure>\n<p>As Ji Hyun Nam slowly tosses a stuffed cat toy into the air, a real-time video captures the playful scene at the clip of a 20th-century webcam \u2014 a mere five frames per second.<\/p>\n<p>The twist? Nam is hidden around the corner from the camera. The video of the stuffed animal was created by capturing light reflected off a wall to the toy and bounced back again in a science-fiction-turned-reality technique known as non-line-of-sight imaging.<\/p>\n<p>And at five frames per second, the video is a blazing-fast improvement on recent hidden-scene imaging that previously took minutes to reconstruct a stationary image.<\/p>\n<p>The new technique uses many ultra-fast and highly sensitive light sensors and an improved video reconstruction algorithm to greatly reduce the time it takes to display the hidden scenes.
The University of Wisconsin\u2013Madison researchers who created the video say the new advance opens up the technology to affordable, real-world applications of both near and distant scenes.<\/p>\n<p>Those future applications include disaster relief, medical imaging and military uses. The technique could also find use outside of around-the-corner imaging, such as improving autonomous vehicle imaging systems. The work was funded by the U.S. Defense Department\u2019s Advanced Research Projects Agency (DARPA) and the National Science Foundation.<\/p>\n<p>&nbsp;<\/p>\n<figure id=\"attachment_7187\" aria-describedby=\"caption-attachment-7187\" style=\"width: 492px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-7187\" src=\"https:\/\/www.wisconsin.edu\/all-in-wisconsin-new\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing_Corners19_5028-492x500-1.jpg\" alt=\"Photo of graduate students Ji Hyun Nam (left) and Toan Le working with assistant professor and principal investigator Andreas Velten in the Computational Optics lab. PHOTO: BRYCE RICHTER\" width=\"492\" height=\"500\" srcset=\"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing_Corners19_5028-492x500-1.jpg 492w, https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-content\/uploads\/sites\/378\/2021\/11\/MAD_Seeing_Corners19_5028-492x500-1-295x300.jpg 295w\" sizes=\"auto, (max-width: 492px) 100vw, 492px\" \/><figcaption id=\"caption-attachment-7187\" class=\"wp-caption-text\">Graduate students Ji Hyun Nam (left) and Toan Le work with assistant professor and principal investigator Andreas Velten in the Computational Optics lab. PHOTO: BRYCE RICHTER<\/figcaption><\/figure>\n<p><a href=\"https:\/\/biostat.wiscweb.wisc.edu\/staff\/velten-andreas\/\">Andreas Velten<\/a>, a professor of biostatistics and medical informatics at the UW School of Medicine and Public Health, and his team published their findings Nov. 
11 in Nature Communications. Nam, a former Velten lab doctoral student, is the first author of the report. UW\u2013Madison researchers Eric Brandt and Sebastian Bauer, along with collaborators at the Polytechnic University of Milan in Italy, also contributed to the new research.<\/p>\n<p>Velten and his former advisor first demonstrated non-line-of-sight imaging a decade ago. Similar to other light- or sound-based imaging, the technique captures information about a scene by bouncing light off a surface and sensing the echoes coming back. But to see around corners, the technique focuses not on the first echo, but on reflections of those echoes.<\/p>\n<p>\u201cIt\u2019s basically echolocation, but using additional echoes \u2014 like with reverb,\u201d says Velten, who also holds an appointment in the Department of Electrical and Computer Engineering.<\/p>\n<p>In 2019, Velten\u2019s lab members\u00a0<a href=\"https:\/\/news.wisc.edu\/lessons-of-conventional-imaging-let-scientists-see-around-corners\/\">demonstrated that they could take advantage of existing imaging algorithms<\/a>\u00a0by reconsidering how they approach the math of the system. The new math allowed them to use a laser rapidly scanning against a wall as a kind of \u201cvirtual camera\u201d that provides visibility for the hidden scene.<\/p>\n<p>The algorithms that reconstruct the scenes are fast. Brandt, a doctoral student in the lab of study co-author Eftychios Sifakis, further improved them for processing hidden-scene data. But data collection for earlier non-line-of-sight imaging techniques was painfully slow, in part because light sensors were often just a single pixel.<\/p>\n<blockquote><p>\u201cCan you imagine taking a picture around a corner simply on your phone?
There are still many technical challenges, but this work brings us to the next level and opens up the possibilities!\u201d<\/p>\n<p><span class=\"quotee\">Ji Hyun Nam<\/span><\/p><\/blockquote>\n<p>To advance to real-time video, the team needed specialized light sensors \u2014 and more of them. Single-photon avalanche diodes, or SPADs, are now common, even finding their way into the latest iPhones. Able to detect individual photons, they provide the sensitivity needed to capture very weak reflections of light from around corners. But commercial SPADs are about 50 times too slow.<\/p>\n<p>Working with colleagues in Italy, Velten\u2019s lab spent years perfecting new SPADs that can tell the difference between photons arriving just 50 trillionths of a second apart. That ultra-fast time resolution also provides information about depth, allowing for 3D reconstructions. The sensors can also be turned off and on very quickly, helping distinguish different reflections.<\/p>\n<p>\u201cIf I send a light pulse at a wall, I get a very bright reflection I have to ignore. I need to look for the much weaker light coming from the hidden scene,\u201d says Velten.<\/p>\n<p>By using 28 SPAD pixels, the team could collect light quickly enough to enable real-time video with just a one-second delay.<\/p>\n<p>The resulting videos are monochrome and fuzzy, yet able to resolve motion and distinguish objects in 3D space. In successive scenes, Nam demonstrates that the videos can resolve foot-wide letters and pick out human limbs during natural movements. The projected virtual camera can even accurately distinguish a mirror from what it is reflecting, which is technologically challenging for a real camera.<\/p>\n<p>\u201cPlaying with our NLOS (non-line-of-sight) imaging setup is really entertaining,\u201d says Nam. 
\u201cWhile standing in the hidden scene, you can dance, jump, do exercises and see video of yourself on the monitor in real-time.\u201d<\/p>\n<hr \/>\n<p><em>Watch the\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=QtMfb8H_1kM\">full version<\/a> of the researchers\u2019 video, from which the clip above was excerpted<\/em><\/p>\n<hr \/>\n<p>While the video captures objects just a couple meters from the reflecting wall, the same techniques could be used to image objects hundreds of meters away, so long as they were large enough to see at that distance.<\/p>\n<p>\u201cIf you\u2019re in a dark room, the size of the scene isn\u2019t limited anymore,\u201d says Velten. Even with room lights on, the system can capture nearby objects.<\/p>\n<p>Although the Velten team uses custom equipment, the light sensor and laser technology required for around-the-corner imaging is ubiquitous and affordable. Following further engineering refinements, the technique could be creatively deployed in many areas.<\/p>\n<p>\u201cNowadays you can find time-of-flight sensors integrated in smartphones like iPhone 12,\u201d says Nam. \u201cCan you imagine taking a picture around a corner simply on your phone? There are still many technical challenges, but this work brings us to the next level and opens up the possibilities!\u201d<\/p>\n<p><span class=\"small_caps\">THIS WORK WAS SUPPORTED BY THE U.S. DEFENSE DEPARTMENT\u2019S ADVANCED RESEARCH PROJECTS AGENCY (DARPA) REVEAL PROJECT (GRANT HR0011-16-C-0025), THE NATIONAL SCIENCE FOUNDATION (GRANTS NSF IIS-2008584, CCF-1812944, IIS-1763638, AND IIS-2106768) AND A GRANT FROM UW\u2013MADISON\u2019S DRAPER TECHNOLOGY INNOVATION FUND. 
\u00a0THE TEAM ALSO PARTNERED WITH\u00a0<a href=\"https:\/\/d2p.wisc.edu\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-auth=\"NotApplicable\" data-linkindex=\"3\">D2P<\/a>\u00a0AND RECEIVED\u00a0<a href=\"https:\/\/www.warf.org\/warf-accelerator\/\" target=\"_blank\" rel=\"noopener noreferrer\" data-auth=\"NotApplicable\" data-linkindex=\"4\">WARF ACCELERATOR<\/a>\u00a0FUNDING, ALONG WITH OTHER PROGRAM SUPPORT FROM WARF.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>As Ji Hyun Nam slowly tosses a stuffed cat toy into the air, a real-time video captures the playful scene at a 20th century webcam clip \u2014 a mere five frames per second. The twist? Nam is hidden around the corner from the camera. The video of the stuffed animal was created by capturing light [&hellip;]<\/p>\n","protected":false},"author":15,"featured_media":7185,"comment_status":"closed","ping_status":"closed","template":"","institution":[103],"story_category":[],"class_list":["post-7180","campus_story","type-campus_story","status-publish","has-post-thumbnail","hentry","institution-uw-madison"],"_links":{"self":[{"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/campus_story\/7180","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/campus_story"}],"about":[{"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/types\/campus_story"}],"author":[{"embeddable":true,"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/users\/15"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/comments?post=7180"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/media\/7185"}],"wp:attachment":[{"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/media?parent=7180"}],"wp:term":[{"taxonomy":"institution","embeddable":true,"href":"https:\/
\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/institution?post=7180"},{"taxonomy":"story_category","embeddable":true,"href":"https:\/\/www.wisconsin.edu\/all-in-wisconsin\/wp-json\/wp\/v2\/story_category?post=7180"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}