Google Tech Talks
August 5, 2008

ABSTRACT

Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally, on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach on several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks, such as population density estimation, land cover estimation, or urban/rural classification.

Speaker: James Hays

James Hays received his B.S. in Computer Science from the Georgia Institute of Technology in 2003. He has been a Ph.D. student in Carnegie Mellon University's Computer Science Department since 2003 and is advised by Alexei A. Efros. His research interests are in computer vision and computer graphics, focusing on image understanding and manipulation leveraging massive amounts of data. His research has been supported by a National Science Foundation Graduate Research Fellowship.
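The core idea in the abstract, matching a query image against a large database of GPS-tagged images and treating the neighbors' GPS tags as a distribution over locations, can be sketched in a few lines. This is a minimal illustration only, not the talk's actual system: the tiny feature vectors, the toy database, and the inverse-distance weighting are all assumptions made for demonstration; the real approach uses millions of images and richer scene descriptors.

```python
import math

# Hypothetical toy database of (feature_vector, (lat, lon)) pairs, standing in
# for the 6 million GPS-tagged Internet images; the features are illustrative.
DATABASE = [
    ([0.9, 0.1, 0.0], (48.86, 2.35)),     # an urban European scene
    ([0.8, 0.2, 0.1], (51.51, -0.13)),    # another urban European scene
    ([0.1, 0.9, 0.2], (36.06, -112.14)),  # a desert canyon scene
    ([0.2, 0.8, 0.3], (27.99, 86.93)),    # a mountain scene
]

def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def geolocate(query_feature, database, k=2):
    """Data-driven scene matching: return a discrete distribution over
    (lat, lon), built from the GPS tags of the k nearest scene matches,
    weighted by inverse feature distance and normalized to sum to 1."""
    ranked = sorted(database, key=lambda entry: l2(query_feature, entry[0]))
    nearest = ranked[:k]
    weights = [1.0 / (l2(query_feature, feat) + 1e-6) for feat, _ in nearest]
    total = sum(weights)
    return [(coord, w / total) for (_, coord), w in zip(nearest, weights)]

# A query whose features resemble the urban European scenes above:
dist = geolocate([0.88, 0.12, 0.02], DATABASE, k=2)
```

Returning a distribution rather than a single point is what lets ambiguous scenes (e.g. a generic beach) spread their probability mass across many plausible regions of the planet.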