---
layout: page
title: About
order: 12
group: navigation
description: TVM
---

{% include JB/setup %}

# About TVM

TVM is an open deep learning compiler stack for CPUs, GPUs, and specialized accelerators. It aims to close the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends. TVM provides the following main features:

- Compilation of deep learning models from Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimal deployable modules on diverse hardware backends (see the sketch below).
- Infrastructure to automatically generate and optimize tensor operators on more backends with better performance.
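As a rough illustration of the first point, below is a minimal sketch of a compilation flow through Relay, TVM's high-level IR. The model file, input name, and shape are hypothetical placeholders, exact API details vary across TVM releases, and the frontends for the frameworks listed above work analogously:

```python
# A minimal sketch of compiling a model into a deployable module with TVM.
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a model and import it into Relay, TVM's high-level IR.
onnx_model = onnx.load("resnet18.onnx")    # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}   # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a chosen hardware backend (here: CPU via LLVM).
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# The result is a self-contained module that can be loaded and run.
dev = tvm.device("llvm", 0)
module = graph_executor.GraphModule(lib["default"](dev))
```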

The TVM stack began as a research project of the SAMPL group at the Paul G. Allen School of Computer Science & Engineering, University of Washington. The project is now driven by an open source community involving multiple industry and academic institutions, and it adopts an Apache-style, merit-based governance model.

TVM provides two levels of optimization, shown in the figure below. The first is computational graph optimization, which performs tasks such as high-level operator fusion, layout transformation, and memory management. The second is a tensor operator optimization and code generation layer that optimizes individual tensor operators. More details can be found in the tech report.

{:center: style="text-align: center"}
![image](){: width="80%"}
{:center}
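To give a concrete flavor of the operator-level layer, here is a minimal sketch using TVM's Tensor Expression (TE) API: it declares a vector addition, then applies loop splitting and vectorization as example schedule transformations. The operator, split factor, and LLVM target are illustrative choices, not part of this page:

```python
import tvm
from tvm import te

# Declare a simple vector-addition computation symbolically.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Build a schedule and apply two standard transformations:
# split the loop into chunks of 32, then vectorize the inner loop.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=32)
s[C].vectorize(inner)

# Generate machine code for a CPU backend via LLVM.
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")
```

Tools such as AutoTVM and the auto-scheduler build on this layer by searching over such schedule choices automatically instead of relying on hand-written transformations.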