To apply, just head over to ResearchApps.ipynb, click the "Open in Colab" button, save a copy of the notebook, and get to coding! Once you're done, send me the file at [email protected] with the subject "DataRes Research Applications," and you should get a confirmation email back whenever I'm free.
Howdy! We're Colin and Chris, the Research heads for this quarter. This is a quick assessment to test basic proficiency with deep learning and its common libraries.
We will teach you everything from the ground up. You just have to put in the effort and show that you want to learn.
I want to make one point very clear: we are not approaching these deep learning projects from the perspective of a data analyst or programmer; we are approaching them with the mindset of a researcher. Just like any biologist or scientist, we will dive deeply into novel literature in our field and extend existing technologies into applications of our own choosing. Just as a biologist looks at past studies, we will read recent deep learning research and learn from it.
That is what makes the DataRes Research team different from other data science organizations. We are scientists first, and then programmers.
If you don't like this format, feel free to join when I am gone :) Other than that, hope to see you on the team!
We will meet for at most two hours each week: I will teach for the first half, and everyone will apply that knowledge to their projects in the second half. You are expected to work outside of these meetings in order to deliver a good presentation at the end of the quarter.
COMPUTER REQUIREMENTS:
- Python!
- a command-line interface like Anaconda Prompt, Ubuntu, or the macOS terminal
- a GitHub account that you can push code to
- Jupyter Notebook (we will be testing on .ipynb notebooks, but our pipelines will all be .py Python files)
- a computer you can bring to the meetings (no dedicated GPU required; see the quick sanity-check sketch after this list)
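If you want to sanity-check your local setup before the first meeting, here is a minimal Python sketch. It only uses the standard library; the PyTorch import at the end is an assumption, not a requirement, since Colab already provides it:

```python
# Quick local-setup sanity check (optional; nothing here is required to apply).
import sys
import shutil

print("Python version:", sys.version.split()[0])
print("jupyter on PATH:", shutil.which("jupyter") is not None)
print("git on PATH:", shutil.which("git") is not None)

# PyTorch is an assumed extra, not a requirement -- Colab ships with it.
try:
    import torch
    print("PyTorch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed locally -- that's fine for this assessment.")
```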
PROJECTS:
- S22: Graph Deep Learning: Integration of a GCN Pipeline with a Neo4j DBMS for Social Analysis
- F23: Representation Learning with Transformer-based Language Models on Temporal Data
- W23: TBD. Probably geometric deep learning or something like that.