Solving the “Holy Grail” Problem of Optical Imaging – Scientists Develop Neural Wavefront Shaping Camera

Engineers have developed NeuWS, a full-motion video technology that corrects for the scattering of light in real time, allowing cameras to film clearly through fog, smoke, and tissue. (Artist’s illustration)

Rice and Maryland engineers have overcome the challenge of light scattering in full-motion video.

Engineers at Rice University and the University of Maryland have developed full-motion video technology that could be used to make cameras that see through fog, smoke, driving rain, murky water, skin, bone, and other media that scatter light and hide objects from view.

“Imaging through scattering media is the ‘holy grail problem’ in optical imaging at this point,” said Rice’s Ashok Veeraraghavan, corresponding author of the recently published open-access study in Science Advances. “Scattering is what makes light, which has a shorter wavelength and therefore gives much better spatial resolution, unusable in many, many scenarios. If you can undo the effects of scattering, imaging goes much farther.”

Veeraraghavan’s lab collaborated with a University of Maryland research group that includes co-author Christopher Metzler to develop the technology, which they call NeuWS, short for “neural wavefront shaping.”

Experimental Results of NeuWS, a Technology That Corrects Light-Scattering Distortion

In experiments, the camera technology called NeuWS, developed by collaborators at Rice University and the University of Maryland, corrected for light scattered by media placed between the camera and the object being imaged. The top row shows a reference image of a butterfly stamp (left), the stamp imaged by a standard camera through a piece of onion skin about 80 microns thick (middle), and a NeuWS image corrected for the light scattered by the onion skin (right). The middle row shows reference (left), uncorrected (middle), and corrected (right) images of a sample of dog esophagus tissue with a 0.5-degree light diffuser as the scattering medium, and the bottom row shows the corresponding images of a resolution-target glass slide with nail polish as the scattering medium. Close-ups of the inset regions in each row are shown for comparison at left. Credit: Veeraraghavan Lab/Rice University

“If you ask people working on autonomous vehicles about the biggest challenges they face, they’ll say, ‘Bad weather. We can’t image well in bad weather,’” Veeraraghavan said. “They say ‘bad weather,’ but what they mean, technically, is light scattering. If you ask biologists about the biggest challenges in microscopy, they’ll say, ‘We can’t image deep tissue in vivo.’ They say ‘deep tissue’ and ‘in vivo,’ but what they really mean is that the skin and other layers of tissue they want to see through scatter light. If you ask underwater photographers about their biggest challenge, they’ll say, ‘I can only image things that are close to me.’ What they mean is that light scatters in water, and so it doesn’t go deep enough for them to focus on faraway objects.”

“In all these cases, and others, the actual technical problem is scattering,” Veeraraghavan said.

He said NeuWS could potentially be used to overcome scattering in those scenarios and others.

“This is a big step forward for us, in terms of solving this in a way that’s potentially practical,” he said. “There’s a lot of work to be done before we can build prototypes for each of those applications, but the approach we’ve demonstrated could get them there.”

Conceptually, NeuWS is based on the principle that light waves are complex mathematical quantities with two key properties that can be measured at any given location. The first, magnitude, is the amount of energy the wave carries at that location, and the second is phase, which is the wave’s state of oscillation at that location. Metzler and Veeraraghavan said measuring phase is critical for overcoming scattering, but it is impractical to measure directly because of the high frequency of optical light.
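To make the distinction concrete, a light field at a single point can be written as a complex number whose absolute value is the magnitude and whose angle is the phase; a conventional sensor records only the squared magnitude, which is why the phase is lost. The short Python sketch below illustrates this point. It is a toy illustration, not code from the study.

```python
# Toy illustration (not from the study): a light field sample written as a
# complex number u = A * exp(i * phi), where the magnitude A is the energy
# carried and the phase phi is the wave's state of oscillation. A sensor
# records only the intensity |u|**2, so the phase cannot be read off directly.
import numpy as np

A, phi = 0.8, 0.25 * np.pi        # example magnitude and phase
u = A * np.exp(1j * phi)          # complex-valued wavefront sample

print(abs(u) ** 2)                # what a camera measures: intensity only
print(np.angle(u))                # the phase, which is not directly observable
```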

Haiyun Guo and Ashok Veeraraghavan

Rice University Ph.D. student Haiyun Guo and Prof. Ashok Veeraraghavan in the Rice Computational Imaging Laboratory. Guo, Veeraraghavan, and collaborators at the University of Maryland have developed a full-motion video camera technology that corrects for light scattering and could allow cameras to film through fog, smoke, driving rain, murky water, skin, bone, and other media that block light from penetrating. Credit: Brandon Martin/Rice University

So instead they measure incoming light as “wavefronts,” single measurements that contain both phase and intensity information, and use backend processing to rapidly extract the phase information from several hundred wavefront measurements per second.

“The technical challenge is finding a way to rapidly measure phase information,” said Metzler, an assistant professor of computer science at Maryland and a “triple Owl” Rice alumnus who earned his Ph.D., master’s, and bachelor’s degrees in electrical and computer engineering from Rice in 2019, 2014, and 2013, respectively. Metzler was at Rice during the development of an earlier iteration of the wavefront-processing technology called WISH, which Veeraraghavan and colleagues published in 2020.

“WISH tackled a similar problem, but it operated under the assumption that everything was static,” Veeraraghavan said. “In the real world, of course, things change all the time.”

With NeuWS, he said, the idea is not only to undo the effects of scattering, but to undo them fast enough that the scattering media themselves do not change during the measurement.

“Rather than measure the state of the oscillation itself, you measure its correlation with known wavefronts,” Veeraraghavan said. “You take a known wavefront, you interfere it with the unknown wavefront, and you measure the interference pattern produced by the two. That is the correlation between those two wavefronts.”

Metzler used the analogy of looking at the North Star at night through a haze of clouds: “If I know what the North Star is supposed to look like, and I can tell it is being blurred in a particular way, then that tells me how everything else will be blurred.”

Veeraraghavan said, “It’s not just an analogy. It’s a correlation, and if you measure at least three such correlations, you can recover the unknown wavefront.”
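The principle Veeraraghavan describes resembles textbook phase-shifting interferometry, in which an unknown field is interfered with a known reference at several known phase shifts and then recovered from the resulting intensity-only patterns. The Python sketch below demonstrates that classical three-measurement recovery; it illustrates the underlying idea only and is not the NeuWS algorithm itself.

```python
# Illustrative sketch of standard three-step phase-shifting interferometry
# (not the NeuWS pipeline): interfere an unknown field with a known reference
# at three known phase shifts and recover the unknown complex field from the
# three intensity-only measurements.
import numpy as np

rng = np.random.default_rng(0)
unknown = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))  # toy wavefront

shifts = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]          # known reference phase shifts
measurements = [np.abs(unknown + np.exp(1j * s)) ** 2 for s in shifts]

# Combining the three intensity patterns cancels the reference term and the
# |unknown|^2 term, isolating the unknown complex field (magnitude and phase).
recovered = sum(m * np.exp(1j * s) for m, s in zip(measurements, shifts)) / 3.0

print(np.allclose(recovered, unknown))                # True: wavefront recovered
```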

Haiyun Guo

Rice University Ph.D. student Haiyun Guo, a member of the Rice Computational Imaging Laboratory, demonstrates a full-motion video camera technology that corrects for the scattering of light, which could allow cameras to film through fog, smoke, driving rain, murky water, skin, bone, and other light-scattering media. Guo, Rice Prof. Ashok Veeraraghavan, and colleagues at the University of Maryland describe the technology in an open-access study published in Science Advances. Credit: Brandon Martin/Rice University

High-quality spatial light modulators can make several hundred such measurements per second, and Veeraraghavan, Metzler, and colleagues showed they could use a modulator and their computational method to capture video of moving objects hidden from view by intervening scattering media.

“This is the first step, the proof of principle that this technology can correct for light scattering in real time,” said Rice’s Haiyun Guo, one of the study’s lead authors and a Ph.D. student in Veeraraghavan’s research group.

In one set of experiments, for example, a microscope slide containing a printed image of an owl or a turtle was spun on a spindle and filmed by an overhead camera. A light-scattering medium was placed between the camera and the target slide, and the researchers measured NeuWS’s ability to correct for the light scattering. Examples of scattering media included onion skin, slides coated with nail polish, slices of chicken breast tissue, and light-diffusing films. For each of these, the tests showed NeuWS could correct for light scattering and produce clear video of the spinning figures.

“We developed algorithms that allow us to continuously estimate both the scattering and the scene,” Metzler said. “That’s what allows us to do this, and we do it with mathematical machinery called neural representations that allows it to be both efficient and fast.”

NeuWS rapidly modulates light from incoming wavefronts to create several slightly altered phase measurements. The altered wavefronts are fed directly into a 16,000-parameter neural network that rapidly computes the correlations needed to recover the wavefront’s original phase information.
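As a rough picture of how such a network can be used, the unknown phase can be represented by a small coordinate-based network (a neural representation) that is fitted so that simulated intensity measurements under the known modulations match the observed ones. The Python sketch below uses PyTorch, a simplified toy forward model, and a much smaller network than the 16,000-parameter model in the study; those specifics are illustrative assumptions, not the authors’ implementation.

```python
# Hedged sketch of a neural representation for phase (not the authors' code):
# fit a small coordinate-based network so that toy intensity measurements
# under known phase modulations match the observations.
import torch
import torch.nn as nn

N = 16                                              # toy image size (N x N pixels)
coords = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij"),
    dim=-1).reshape(-1, 2)                          # (N*N, 2) pixel coordinates

true_phase = torch.sin(3 * coords[:, 0]) + torch.cos(2 * coords[:, 1])
mods = [torch.full((N * N,), s) for s in (0.0, 1.0, 2.0)]   # known modulations
# Toy intensity-only forward model (a stand-in for the real interferometric one).
obs = [torch.cos(true_phase + m) ** 2 for m in mods]

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))               # small phase network
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    pred_phase = net(coords).squeeze(-1)            # predicted phase per pixel
    loss = sum(((torch.cos(pred_phase + m) ** 2 - o) ** 2).mean()
               for m, o in zip(mods, obs))          # match all measurements
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))                                  # small residual: data explained
```

Because the network is queried at pixel coordinates rather than stored as a pixel grid, relatively few measurements constrain a smooth phase estimate, which is the efficiency argument made in the next quotes.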

“The neural networks allow it to be fast by allowing us to design algorithms that require fewer measurements,” Veeraraghavan said.

Metzler said, “That’s the biggest selling point. Fewer measurements, basically, means we need less capture time. It’s what allows us to capture video rather than still frames.”

Reference: “NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media” by Brandon Y. Feng, Haiyun Guo, Mingyang Xie, Vivek Boominathan, Manoj K. Sharma, Ashok Veeraraghavan and Christopher A. Metzler, 28 June 2023, Science Advances.
DOI: 10.1126/sciadv.adg4671

The research was supported by the Air Force Office of Scientific Research (FA9550-22-1-0208), the National Science Foundation (1652633, 1730574, 1648451) and the National Institutes of Health (DE032051). Part of the open-access funding was provided by the University of Maryland Libraries’ Open Access Publishing Fund.

