
Commit 02775e3

w9
AnzeXie committed May 28, 2024
1 parent 54d2537 commit 02775e3
Showing 2 changed files with 11 additions and 27 deletions.
38 changes: 11 additions & 27 deletions _modules/week-09.md
@@ -1,39 +1,23 @@
---
title: Week 9
class: DSC204A
status: not Active
class: DSC291
status: Active
---

Mar 4
: **1**{: .label} Guest Lecture - [Stephanie Wang](https://stephanie-wang.github.io/)
: [Slides](assets/slides/UCSD_3_3_24_A_brief_history_of_the_Ray_ecosystem.pdf) • [Recording](https://drive.google.com/file/d/1KB5LWGNNkgOTWk_shgM4brYGGm7TsS0w/view?usp=sharing) • [Scribe Notes](assets/scribe_notes/Mar_4_scribe_note.pdf)
May 28
: **1**{: .label} LLM - 2
: [Slides](assets/slides/15_llm-2.pdf) • [Recording](#)
: *Reading:*
* [TensorFlow: A system for large-scale machine learning (required)](https://arxiv.org/pdf/1605.08695.pdf)
* [Petuum: A New Platform for Distributed Machine Learning on Big Data (required)](https://arxiv.org/pdf/1312.7651.pdf)
* [Scaling Distributed Machine Learning with the Parameter Server (required)](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-li_mu.pdf)
* [PipeDream: Generalized Pipeline Parallelism for DNN Training (optional)](https://people.eecs.berkeley.edu/~matei/papers/2019/sosp_pipedream.pdf)
* [PyTorch Distributed: Experiences on Accelerating Data Parallel Training (optional)](https://arxiv.org/pdf/2006.15704.pdf)



Mar 6
: **2**{: .label} ML System - 1
: [Slides](assets/slides/21_ml-system-1.pdf) • [Recording](https://podcast.ucsd.edu/watch/wi24/dsc204a_a00/24) • [Scribe Notes](assets/scribe_notes/Mar_6_scribe_note.pdf)

May 30
: **2**{: .label}
: [Slides](#) • [Recording](#)
: *Reading:*
* [TensorFlow: A system for large-scale machine learning (required)](https://arxiv.org/pdf/1605.08695.pdf)
* [Petuum: A New Platform for Distributed Machine Learning on Big Data (required)](https://arxiv.org/pdf/1312.7651.pdf)
* [Scaling Distributed Machine Learning with the Parameter Server (required)](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-li_mu.pdf)
* [PipeDream: Generalized Pipeline Parallelism for DNN Training (optional)](https://people.eecs.berkeley.edu/~matei/papers/2019/sosp_pipedream.pdf)
* [PyTorch Distributed: Experiences on Accelerating Data Parallel Training (optional)](https://arxiv.org/pdf/2006.15704.pdf)



Mar 8
: **3**{: .label} Guest Lecture - [Prof. Ion Stoica](https://people.eecs.berkeley.edu/~istoica/)
: [Slides](#) • [Recording](https://drive.google.com/file/d/1wp0pcSAZtmE3TQQ7vzn1_qBuCfNchzM9/view) • [Scribe Notes](assets/scribe_notes/Mar_8_scribe_note.pdf)
: *Reading:*
* [TensorFlow: A system for large-scale machine learning (required)](https://arxiv.org/pdf/1605.08695.pdf)
* [Petuum: A New Platform for Distributed Machine Learning on Big Data (required)](https://arxiv.org/pdf/1312.7651.pdf)
* [Scaling Distributed Machine Learning with the Parameter Server (required)](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-li_mu.pdf)
* [PipeDream: Generalized Pipeline Parallelism for DNN Training (optional)](https://people.eecs.berkeley.edu/~matei/papers/2019/sosp_pipedream.pdf)
* [PyTorch Distributed: Experiences on Accelerating Data Parallel Training (optional)](https://arxiv.org/pdf/2006.15704.pdf)
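For reference, the schedule entries in this diff use kramdown's definition-list syntax with inline attribute lists (the `{: .label}` spans), which Jekyll course sites like this one render into the weekly schedule. A minimal sketch of one entry, with a hypothetical title and file names:

```markdown
May 30
: **2**{: .label} Example Lecture Title
: [Slides](assets/slides/example.pdf) • [Recording](#)
: *Reading:*
  * [Example paper (required)](https://example.com/paper.pdf)
```

The date line is the term being defined; each line beginning with `:` is a definition for it, and `{: .label}` attaches a CSS class to the preceding bold span.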


Binary file added assets/slides/15_llm-2.pdf
