This page should slowly be moved into actual issues for this project.
- Graphs
  - Seeing how many people have entered and left the AI safety field
  - Tracking movement among orgs
- a documents table
- A new compare.php option. Some parameters: by=organization or by=year; for=MIRI or for=2017. So e.g. /compare.php?by=organization&for=2017 would do the 2017 org comparison, like this blog post, but /compare.php?by=year&for=MIRI would compare MIRI over the years. A rough sketch of how these parameters might be handled is below.
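A minimal sketch of the by/for handling described above, assuming a hypothetical positions table with organization and year columns (the actual AI Watch schema and column names may differ):

```php
<?php
// Hypothetical sketch only: "positions" and its columns are assumed names,
// not the actual AI Watch schema.

// "by" is the dimension being compared across; "for" is the fixed filter.
$by  = $_GET['by']  ?? 'organization';  // 'organization' or 'year'
$for = $_GET['for'] ?? '';              // e.g. '2017' when by=organization, 'MIRI' when by=year

if (!in_array($by, ['organization', 'year'], true)) {
    die('Unknown value for by; expected "organization" or "year".');
}

// The two parameters mirror each other: group by one dimension, filter on the other.
// /compare.php?by=organization&for=2017 -> group by organization, restrict to 2017.
// /compare.php?by=year&for=MIRI         -> group by year, restrict to MIRI.
$groupColumn  = ($by === 'organization') ? 'organization' : 'year';
$filterColumn = ($by === 'organization') ? 'year' : 'organization';

$query = sprintf(
    "SELECT %s, COUNT(*) AS num_people FROM positions WHERE %s = ? GROUP BY %s",
    $groupColumn, $filterColumn, $groupColumn
);
// Run $query with $for bound as the single parameter, then render the comparison table.
```

Keeping by and for as mirror images means one query template covers both the per-year and per-organization comparisons.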
This is a list of things I'd like to add to AI Watch at some point, or am at least considering:
- maybe things that cite "Concrete Problems in AI Safety": https://scholar.google.com/scholar?cites=6186600309471256628&as_sdt=5,48&sciodt=0,48&hl=en
- "Towards Verified Artificial Intelligence"? https://arxiv.org/pdf/1610.06940.pdf
- fortiss? https://arxiv.org/pdf/1709.00911.pdf https://www.fortiss.org/en/about-us/alle-mitarbeiter/ e.g. https://www.fortiss.org/en/about-us/people/chih-hong-cheng/
- things in the bibliography of http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/
- There might be more safety-oriented MILA people: https://mila.quebec/en/mila/team/
- Maybe https://www.partnershiponai.org/board-of-directors/
- https://espr-camp.org/staff/ This is the European version of SPARC (formerly called Euro-SPARC); the whole team might be relevant if it fits under CFAR's new goals?
- https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html
- interns for various organizations (these aren't listed on team pages, so in many cases this will require hunting down each name)
- look through https://intelligence.org/files/AnnotatedBibliography.pdf