{"id":8542,"date":"2012-11-15T00:32:10","date_gmt":"2012-11-15T08:32:10","guid":{"rendered":"http:\/\/betaknowledge.tumblr.com\/post\/35765441795"},"modified":"2017-08-18T17:57:52","modified_gmt":"2017-08-18T17:57:52","slug":"what-makes-paris-look-like-parisgiven-a-large","status":"publish","type":"post","link":"https:\/\/blog.nearfuturelaboratory.com\/2012\/11\/15\/what-makes-paris-look-like-parisgiven-a-large\/","title":{"rendered":"\u201cWhat makes Paris look like Paris\u201d\n\nGiven a large\u2026"},"content":{"rendered":"
\u201cWhat makes Paris look like Paris\u201d<\/p>\n
\nGiven a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. [\u2026] To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner.<\/em><\/p>\n<\/blockquote>
\n<\/div>
","protected":false},"excerpt":{"rendered":"
\u201cWhat makes Paris look like Paris\u201d<\/p>\n
Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the c…<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[169],"tags":[],"yoast_head":"\n
\u201cWhat makes Paris look like Paris\u201d Given a large\u2026 - Near Future Laboratory<\/title>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n