Synthesizing The Bigger Picture
Written by David Conrad   
Sunday, 09 August 2020

There are more cameras in the world, and more people taking photos, than ever before. What is more, we all tend to take the same shots whenever we are near something "touristy". A new technique can put these crowdsourced photos together into a single model of a scene, capturing its many viewpoints.

The title of this research from Noah Snavely and his team at Cornell Tech is "Crowdsampling the Plenoptic Function", which isn't a title likely to pique the interest of the man in the street - "plenwhat?" The reality is much more interesting than it sounds and it illustrates how much the world we live in has changed. You talk about big data, but here it is - big data in the form of the crowdsourced photos that we all took.

I can't put it better than the abstract:

"Many popular tourist landmarks are captured in a multitude of online, public photos. These photos represent a sparse and unstructured sampling of the plenoptic function for a particular scene. In this paper, we present a new approach to novel view synthesis under time-varying illumination from such data."

You can almost guess what they did next. They collected a big sample, approximately 50K photos, from Flickr for a range of tourist sites. Next they identified the viewpoint (the red dots) for each photo:

[Image: camera viewpoints recovered for each photo, shown as red dots]

Which only goes to prove that we really do all take the same picture - in future, why not just buy a postcard?

Actually there is enough variability in where and when the photos were taken to attempt a reconstruction of the plenoptic function. This is simply the color of every point x,y as seen from position u,v at time t. If you have the plenoptic function, or more practically an approximation to it, you can create a view from any position at any time.
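To make the idea concrete, here is a toy sketch of a plenoptic function as a plain Python callable. Everything here - the function names, the fake shading and lighting formulas - is illustrative only, not the paper's model; the point is just that once you have some function of (x, y, u, v, t), rendering a novel view is nothing more than evaluating it for every pixel:

```python
import math

def plenoptic(x, y, u, v, t):
    """Toy plenoptic function: the colour of scene point (x, y)
    as seen from camera position (u, v) at time t."""
    # Fake view-dependent shading (think of a highlight shifting with u, v)
    view = 0.5 + 0.5 * math.cos(x - u) * math.cos(y - v)
    # Fake time-varying illumination (think of the sun moving through the day)
    light = 0.5 + 0.5 * math.sin(t)
    return (view * light, view * light * 0.8, view * light * 0.6)  # R, G, B

def render_view(u, v, t, width=4, height=4):
    """Synthesise a novel view by evaluating the function per pixel."""
    return [[plenoptic(x, y, u, v, t) for x in range(width)]
            for y in range(height)]

image = render_view(u=1.0, v=0.5, t=2.0)
```

The hard part, of course, is that the real plenoptic function isn't given by a neat formula - it has to be learned from the sparse, unstructured photo samples.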

The details are complicated, but in essence a neural network learns to extract plenoptic slices from the photos. From this you can do some interesting things. In particular you can show what the scene looks like from different viewpoints - smoothly moving from one to another. You can also arrange for a smooth change in what the scene looks like at different times.
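The "smoothly moving from one to another" part is easy to sketch: once any (u, v, t) can be rendered, a camera fly-through is just interpolation between two captured viewpoints. This is a simplification with made-up names, not the paper's code:

```python
def lerp(a, b, s):
    """Linear interpolation between a and b, with s running from 0 to 1."""
    return a + (b - a) * s

def camera_path(view_a, view_b, steps):
    """Yield (u, v, t) tuples blending view_a smoothly into view_b."""
    for i in range(steps):
        s = i / (steps - 1)
        yield tuple(lerp(a, b, s) for a, b in zip(view_a, view_b))

# Five frames from one photo's viewpoint and time to another's
frames = list(camera_path((0.0, 0.0, 0.0), (2.0, 1.0, 6.0), steps=5))
```

Rendering each intermediate tuple gives the smooth transition you see in the video; varying only t while holding u, v fixed gives the time-lapse effect.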

Watch the video to see it in action (and don't miss the bit at the start where the researchers wave at you):

 

If you visit the experiment's site you can see many examples of the reconstruction of visual scenes from crowdsourced photos. I have to admit that while I am impressed and intrigued I can't think of a serious application. Artistic/creative perhaps, but not a really "must have" application. Perhaps I'm just not thinking of the right sort of things?

 


More Information

http://crowdsampling.io/

Crowdsampling the Plenoptic Function

Zhengqi Li, Wenqi Xian, Abe Davis and Noah Snavely

Cornell University

Related Articles

Crowd Sourced Solar Eclipse Megamovie

FlatCam - Who Needs A Lens?

Plenoptic Sensors For Computational Photography

Light field camera - shoot first, focus later


 


 
