
Images, Art & Video (E3) - Naik, Kominers, Raskar, Glaeser, Hidalgo 2017 #41

HyunkuKwon opened this issue Apr 7, 2020 · 7 comments


@HyunkuKwon

Post questions about the following exemplary reading here:

Naik, Nikhil, Scott Duke Kominers, Ramesh Raskar, Edward L. Glaeser, César A. Hidalgo. 2017. “Computer vision uncovers predictors of physical urban change.” PNAS 114(29):7571–7576.


nwrim commented May 22, 2020

Although I am a fan of the Place Pulse project (I think this project opened a lot of doors for urban studies!), I still think there is an external validity problem for algorithms trained on this dataset, since it only consists of ratings of street images from a few cities (I am not sure if this was resolved in Place Pulse 2.0). This study also uses images from only five cities (American cities with an eastern bias). Sampling and generalizability are always an issue in this kind of study that aims at finding more general patterns, but I wonder whether the capacity of a computer vision algorithm is enough to overcome the limited input. Do you know of any studies that trained a computer vision algorithm on a specific dataset (for example, eastern American cities) but found that its performance was still adequate when tested on images that were similar yet systematically different in some features (for example, western American cities)?
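
A rough sketch of the kind of cross-city check I have in mind, assuming image features and crowdsourced safety scores have already been extracted (all variable and city names below are placeholders, not the authors' actual pipeline):

```python
# Hypothetical held-out-city evaluation: train on "eastern" cities, test on a
# "western" one. `features`, `safety_scores`, and `city` are assumed numpy arrays.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

train_mask = np.isin(city, ["Boston", "New York", "Baltimore"])  # training cities
test_mask = np.isin(city, ["Seattle"])                           # held-out city

model = Ridge(alpha=1.0)
model.fit(features[train_mask], safety_scores[train_mask])

preds = model.predict(features[test_mask])
print("Held-out-city R^2:", r2_score(safety_scores[test_mask], preds))
```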

@timqzhang

This is a really exciting paper on the application of vision processing. One thing that makes me wonder is the calculation of the Streetscore. In particular, the authors mention that "The Street change metric is not affected by seasonal and weather change", but I personally think seasonal and weather variation may cause some difficulties in image identification. Therefore, I wonder how the authors manage the accuracy of the street scores.
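
One way to read that claim, under the assumption that the change metric is the difference between predicted scores at two time points (the helper `predict_streetscore` below is hypothetical), is that averaging over several captures of the same location could wash out seasonal effects:

```python
# Minimal sketch: a Streetchange-style metric as the difference between the mean
# predicted score at two time points. Averaging over multiple captures per time
# point is one way seasonal/weather noise could be reduced.
import numpy as np

def streetchange(images_t0, images_t1, predict_streetscore):
    """Mean predicted score at t1 minus mean predicted score at t0."""
    score_t0 = np.mean([predict_streetscore(img) for img in images_t0])
    score_t1 = np.mean([predict_streetscore(img) for img in images_t1])
    return score_t1 - score_t0
```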


WMhYang commented May 29, 2020

The same problem emerges as in the exemplary reading of week 2 (Salesses, Schechtner, and Hidalgo 2013), namely why the authors chose these five cities. It seems that Boston and New York City appear in both studies. Are there any systematic reasons? If so, as @nwrim mentioned, this may cause weak external validity. In addition, in Table 2, instead of the positive coefficient on the share of college-educated residents in 2000, I would personally like to see the coefficient on the change in that share between 2000 and, say, 2007, since that would better illustrate the concept of gentrification (a rough sketch of this specification follows the reference below).

Reference:
Salesses, Philip, Katja Schechtner, and César Hidalgo. 2013. “The Collaborative Image of The City: Mapping the Inequality of Urban Perception.” PLoS ONE 8(7):e68400. doi:10.1371/journal.pone.0068400
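
A rough sketch of the re-specification I have in mind, assuming a tract-level dataframe `df` (all column names are placeholders):

```python
# Hypothetical regression: Streetchange on the 2000-2007 change in college share
# plus 2000-level controls, instead of the 2000 level alone. Column names assumed.
import statsmodels.formula.api as smf

df["d_college_share"] = df["college_share_2007"] - df["college_share_2000"]
model = smf.ols(
    "streetchange ~ d_college_share + population_density_2000 + median_income_2000",
    data=df,
).fit()
print(model.summary())
```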


Lesopil commented May 29, 2020

This is a fascinating study that incorporates machine vision to facilitate the study of urban environments. Like many of the studies we have read over the weeks, I find it hard to criticize the methodology from a CS perspective; the model they use seems to work. However, their overall conclusions seem to fail to account for things such as race and redlining when considering urban improvement. It seems to me that it is often much easier to criticize the assumptions that researchers bring to their projects than the data they collect or the methods by which they analyze it. This recalls a reading we did several weeks ago about finding unexpected results through our research. I wonder, then, how often are the methods researchers use to reach their conclusions the actual subjects of debate?


liu431 commented May 29, 2020

I think the paper has indeed "illustrated the value of using CV methods and street-level imagery to understand the physical dynamics of cities". However, even with only five US cities, collecting the data online took a lot of effort and time. How could this method be generalized to study a wider range of regions with less input effort?
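
For what it's worth, one way the image collection could be scaled is via the Google Street View Static API (an assumption about the data source; it is a paid, quota-limited service, so cost and terms of use still constrain how far this goes):

```python
# Rough sketch: downloading a street-level image for a coordinate via the
# Street View Static API. The API key, coordinates, and output path are placeholders.
import requests

API_KEY = "YOUR_KEY"  # placeholder

def fetch_streetview(lat, lng, out_path, size="640x640"):
    params = {"location": f"{lat},{lng}", "size": size, "key": API_KEY}
    resp = requests.get("https://maps.googleapis.com/maps/api/streetview", params=params)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

fetch_streetview(41.8781, -87.6298, "chicago_loop.jpg")  # example coordinates
```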

@shiyipeng70

I am curious about how much our algorithm should rely on human assessment. The human assessment in this paper still serves as the most trustworthy evaluation. Does that mean we can hardly extend or transfer our model to a new dataset or environment without receiving confirmation from human ratings?
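
One partial answer could be spot-checking: rather than fully re-crowdsourcing a new city, collect a small sample of human ratings and check how well the pre-trained model agrees with them before trusting it at scale (the arrays below are assumed):

```python
# Hypothetical spot-check: correlation between model predictions and a small set
# of human ratings on images from a new city.
from scipy.stats import pearsonr

r, p = pearsonr(model_scores, human_scores)  # assumed arrays of equal length
print(f"Correlation with human ratings: r={r:.2f} (p={p:.3g})")
```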

@minminfly68

Thanks for sharing this interesting paper. First, I am a little curious about the Streetscore, which seems too coarse and does not take many dimensions of neighborhood appearance into consideration. I am also interested in the application of quantifying neighborhood appearance mentioned by the authors.
