MIT invents way to get reflections out of photos

Researchers at the Massachusetts Institute of Technology have developed an algorithm that, in many cases, can automatically filter reflections out of photos taken through a window. Such photos often show reflected objects, the photographer among them, superimposed on the scene.

The researchers will present the algorithm publicly at the Computer Vision and Pattern Recognition conference in June. It exploits the fact that photos taken through a double-glazed window generally contain two almost identical reflections, slightly offset from each other. One of the researchers, YiChang Shih, explains on the MIT site that with double glazing the image reflects off both the inner and the outer pane. The same effect occurs with thick single-pane windows, which reflect off both surfaces of the glass.
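To make that concrete, a photo taken through double glazing can be modelled as the scene plus two attenuated, slightly shifted copies of the reflection. The Python sketch below is purely illustrative; the shift and attenuation values are invented here, not taken from the paper.

```python
# Minimal sketch of the image-formation model described above:
# photo = transmitted scene + two attenuated, slightly shifted copies
# of the reflection (one per pane). All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))        # the scene behind the window (what we want)
reflection = rng.random((64, 64))   # the scene reflected by the window

shift = 3           # assumed pixel offset between the two pane reflections
a1, a2 = 0.3, 0.2   # assumed attenuation of the inner and outer reflections

# The second pane contributes a displaced copy of the same reflection.
ghost = np.roll(reflection, shift, axis=1)

photo = scene + a1 * reflection + a2 * ghost  # what the camera records
```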

The system only works when the reflection is doubled; with a single reflection the problem remains, for now, virtually unsolvable, because the algorithm has to pull two different but superimposed images apart from a single photo. Shih gives an example: “If A plus B equals C, how can you get A and B back from C alone? That is mathematically challenging. We simply don’t have enough selection criteria to draw a conclusion.”
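Shih’s point is easy to demonstrate: for any observed value C there are arbitrarily many pairs A and B that sum to it, so one equation alone gives no way to choose between them.

```python
# Why A + B = C alone is underdetermined: every split of C is equally valid.
c = 0.8  # an observed pixel value
for a in (0.1, 0.3, 0.5, 0.7):
    b = c - a
    print(f"A={a:.1f}  B={b:.1f}  A+B={a + b:.1f}")  # all four reproduce C
```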

Hence the need for the second reflection: because the two reflections are near-identical copies, the value of a pixel in one copy must match the value of the pixel a fixed distance away in a fixed direction in the other. With that selection criterion added, it becomes a lot easier to separate A and B given C.
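One way to find that distance and direction, sketched below with invented numbers rather than the researchers’ own method, is that a doubled reflection leaves a secondary peak in the photo’s autocorrelation at exactly the offset between the two copies.

```python
# Hypothetical sketch: the doubled reflection leaves a secondary peak in the
# photo's autocorrelation at the offset between the two reflection copies.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
reflection = rng.random(n)                    # 1-D stand-in for the reflection
scene = rng.random(n)                         # 1-D stand-in for the scene
photo = scene + 0.5 * reflection + 0.4 * np.roll(reflection, 5)

x = photo - photo.mean()
ac = np.correlate(x, x, mode="full")[n - 1:]  # autocorrelation at lags >= 0
print(np.argmax(ac[1:20]) + 1)                # strongest nonzero lag: 5, the shift
```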

Comparing pixels this way is not the complete solution, however. For that, the researchers enlisted the work of another research group. That group starts from the assumption that an image photographed through a window exhibits the same statistical regularities as so-called ‘natural’ photos. The idea is that, at the pixel level, abrupt transitions are rare in natural and man-made environments, and that where a transition does occur, it runs along a clear boundary. That means that when a cluster of pixels covers part of a blue object and part of a red object, everything on one side of the boundary is bluish and everything on the other is reddish. On its own, however, that approach did not work very well.
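That regularity shows up in a toy example: in a piecewise-smooth image, almost all pixel-to-pixel transitions are tiny, and the few large ones line up along a single boundary. The snippet below constructs such an image synthetically; it illustrates the statistical idea only, not the group’s actual model.

```python
# Toy illustration of "natural image" statistics: gradients are mostly
# near zero, and the rare strong transitions all lie on one clear boundary.
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                          # a single sharp boundary
img += 0.01 * rng.standard_normal(img.shape)

grad = np.abs(np.diff(img, axis=1))        # horizontal pixel-to-pixel changes
print(f"{(grad > 0.5).mean():.3f}")        # tiny fraction of strong edges
print(set(np.argwhere(grad > 0.5)[:, 1]))  # all in one column: {31}
```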

During the research, the pixel-patch approach did turn out to have merit, but the algorithm first had to be trained. For this, the researchers used a technique developed at the Hebrew University of Jerusalem: by gathering statistics on blocks of eight by eight pixels across 50,000 training images, the correlations between pixels could be calculated, and a good result obtained.
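A rough sketch of that patch-statistics step might look as follows, with a stand-in corpus of random images instead of the 50,000 real ones, and without the full probabilistic model of the Jerusalem technique:

```python
# Sketch of the patch-statistics idea: collect 8x8 blocks from a set of
# images and estimate how their pixels co-vary. The real system learns a
# far richer prior; the random "images" here are purely illustrative.
import numpy as np

def extract_patches(img, size=8, step=8):
    """Slice an image into non-overlapping size x size blocks."""
    h, w = img.shape
    return [img[y:y + size, x:x + size].ravel()
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

rng = np.random.default_rng(3)
images = [rng.random((64, 64)) for _ in range(100)]  # stand-in corpus

patches = np.array([p for img in images for p in extract_patches(img)])
patches -= patches.mean(axis=0)
cov = patches.T @ patches / len(patches)  # 64x64 pixel-pixel covariance
print(cov.shape)                          # (64, 64)
```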

Shih hopes that, once the algorithm is improved further, it can eventually find a place in ordinary photo software and, perhaps more importantly, help robots ‘see’ better in rooms with a lot of glass.
