Commit

Update README.md
eladhoffer authored Aug 18, 2019
1 parent 1c4c28e commit 742a7d8
Showing 1 changed file with 16 additions and 15 deletions.
# papers
**Papers by Habana research team**
------------------------------------------

| Paper | Authors | Venue | Year |
|-------|---------|-------|------|
| [Train longer, generalize better: closing the generalization gap in large batch training of neural networks](https://papers.nips.cc/paper/6770-train-longer-generalize-better-closing-the-generalization-gap-in-large-batch-training-of-neural-networks.pdf) | Elad Hoffer, Itay Hubara, Daniel Soudry | NeurIPS (Oral) | 2017 |
| [Fix your classifier: the marginal value of training the last weight layer](https://arxiv.org/abs/1801.04540) | Elad Hoffer, Itay Hubara, Daniel Soudry | ICLR | 2018 |
| [The Implicit Bias of Gradient Descent on Separable Data](https://arxiv.org/abs/1710.10345) | Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Nathan Srebro | ICLR | 2018 |
| [Exponentially vanishing sub-optimal local minima in multilayer neural networks](https://arxiv.org/abs/1702.05777) | Daniel Soudry, Elad Hoffer | ICLR Workshop | 2018 |
| [Scalable Methods for 8-bit Training of Neural Networks](https://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks.pdf) | Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry | NeurIPS | 2018 |
| [Norm matters: efficient and accurate normalization schemes in deep networks](https://papers.nips.cc/paper/7485-norm-matters-efficient-and-accurate-normalization-schemes-in-deep-networks.pdf) | Elad Hoffer, Ron Banner, Itay Golan, Daniel Soudry | NeurIPS (Spotlight) | 2018 |
| [Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning](https://papers.nips.cc/paper/7615-learn-what-not-to-learn-action-elimination-with-deep-reinforcement-learning.pdf) | Tom Zahavy, Matan Haroush, Nadav Merlis, Daniel J. Mankowitz, Shie Mannor | NeurIPS | 2018 |
| [Task Agnostic Continual Learning Using Online Variational Bayes](http://bayesiandeeplearning.org/2018/papers/58.pdf) | Chen Zeno, Itay Golan, Elad Hoffer, Daniel Soudry | NeurIPS Workshop | 2018 |
| [Infer2Train: leveraging inference for better training of deep networks](http://learningsys.org/nips18/assets/papers/24CameraReadySubmissionInfer2Train.pdf) | Elad Hoffer, Berry Weinstein, Itay Hubara, Sergei Gofman, Daniel Soudry | NeurIPS Workshop | 2018 |
| [Increasing batch size through instance repetition improves generalization](https://drive.google.com/file/d/13I1qhczfUaLYlEZSfJ04nkRXyD1a5I8Q/view?usp=sharing) | Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler and Daniel Soudry | ICML workshop | 2019 |
| [How Learning Rate and Delay Affect Minima Selection in Asynchronous Training of Neural Networks: Toward Closing the Generalization Gap](https://drive.google.com/file/d/101yxxakquNQYtr5CD7bdbDgDLVmt1H-J/view) | Niv Giladi, Mor Shpigel Nacson, Elad Hoffer and Daniel Soudry | ICML workshop (Oral) | 2019 |
| Mix & Match: training convnets with mixed image sizes for improved accuracy, speed and scale resiliency | Elad Hoffer, Berry Weinstein, Itay Hubara, Tal Ben-Nun, Torsten Hoefler, Daniel Soudry | Preprint | 2019 |
