TVM v0.4 Roadmap #1170

Closed · 28 of 37 tasks
tqchen opened this issue May 21, 2018 · 20 comments

Comments

tqchen (Member) commented May 21, 2018

This is the roadmap for TVM v0.4. TVM is a community-driven project and we welcome your feedback and proposals on where we should be heading. Feel free to volunteer if you are interested in taking on some of the items.

As usual, the checklist will go into the release note.

Features

  • BUILD
    • Unify the build system to use cmake
    • Customizable cmake path for vulkan, rocm, cuda
  • TVM operator primitives
    • introduce attrs field to operator primitives to store additional meta data
  • TOPI Vision operators
    • Support SSD on CPU
    • Tutorials for vision operators
    • GPU based sorting
  • TOPI Operators
    • numpy consistency: all binary operators now broadcast
      • Rename for numpy consistency: broadcast_add -> add, broadcast_sub -> subtract, broadcast_mul -> multiply, broadcast_div -> divide (see the sketch after this list)
    • slice
  • Full support for low-bit operators
    • general bit-serial convolution and GEMM
    • optimized low bit kernels
    • parallel optimization
    • tutorials of low-bit ops
  • Hybrid python programming model
    • python AST based ir builder interface
    • support GPU programs
    • support non-trivial imperative operators
  • Lowbit operators
    • Optimized popcount generation on ARM
  • Quantized network support
    • 8bit quantizer
    • gemmlowp
  • Frontend
    • merge nnvm into tvm repo
    • Tensorflow graphdef frontend
    • Improve keras frontend to support reuse layers
  • RPC and Device API
    • Support communication between big- and little-endian machines.
    • RPC and device API protocol upgrade to support big/little-endian communication. This is a non-backward-compatible change; the latest version of the TVM runtime must be used with the upgraded RPC.
    • graduate rpc from contrib, tvm.contrib.rpc->tvm.rpc
  • Security
    • tutorials on how to use SGX backend
  • Tutorials and docs
    • Introduction to TOPI
    • How to write a pass in python
    • General lowering flow of TVM
    • example tutorial on how to use vulkan backend
    • tutorial on running vulkan on android
  • Automated tuning and scheduling
    • basic autotvm infra
    • basic autotuning tutorial
    • topi integration
    • graph optimizer integration
  • Backend optimizations
    • Intel graphics
  • VTA: customized accelerator backend
    • custom hardware backend example
    • tutorials on how to use customized accelerator
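To make the numpy-consistency rename above concrete, here is a minimal sketch (assuming the topi Python API of this release cycle; exact module paths may differ):

```python
import tvm
import topi

# Two placeholders with broadcastable shapes.
x = tvm.placeholder((3, 1), name='x')
y = tvm.placeholder((1, 4), name='y')

# Pre-rename spelling:
#   z = topi.broadcast_add(x, y)
# Post-rename, numpy-consistent spelling; broadcasts implicitly to (3, 4):
z = topi.add(x, y)
```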
@starimpact

mark ^_^

yzhliu (Member) commented May 21, 2018

For 'Accelerator support', does that mean support for AI ASICs like the TPU?

were (Contributor) commented May 22, 2018

Do you have an FPGA runtime?
How do you deal with the heterogeneity of different models of FPGAs?

feiyuvl commented May 25, 2018

Is there any plan to support variable input shapes in a friendlier way, as mentioned in this link? https://discuss.tvm.ai/t/variable-shape-input-support-for-code-generation

tqchen (Member, Author) commented May 25, 2018

@feiyuvl I replied in the post; it is already supported at the DSL level, but it needs a bit more work on the optimization side.

feiyuvl commented May 31, 2018

@tqchen Sorry, I haven't been able to access the TVM discuss forum recently, so I'm not sure whether you have replied to the tvm.var problem I ran into. It seems tvm.var doesn't work in some conditions when generating a CUDA kernel. The problem can be reproduced by changing the sample code
https://github.com/dmlc/tvm/blob/master/tutorials/optimize/opt_conv_cuda.py:28
to
batch = tvm.var('m')
We are planning to use TVM to accelerate our inference engine and have been blocked by this problem for nearly a month. Any help will be appreciated.
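For reference, a minimal sketch of the kind of symbolic-shape build being described (a hypothetical simplification; the real opt_conv_cuda.py schedules a full convolution):

```python
import tvm

# Symbolic batch dimension instead of the fixed constant in the tutorial.
batch = tvm.var('m')
A = tvm.placeholder((batch, 64), name='A')
B = tvm.compute((batch, 64), lambda i, j: A[i, j] * 2.0, name='B')

s = tvm.create_schedule(B.op)
# Bind the symbolic axis to the GPU grid and the fixed axis to threads.
s[B].bind(B.op.axis[0], tvm.thread_axis("blockIdx.x"))
s[B].bind(B.op.axis[1], tvm.thread_axis("threadIdx.x"))
# Generating the CUDA kernel with a symbolic extent is where failures
# like the one reported here can surface.
f = tvm.build(s, [A, B], "cuda")
```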

TaoLv (Member) commented May 31, 2018

What is the longer-term plan for the external libs under the contrib folder?

tqchen (Member, Author) commented May 31, 2018

@feiyuvl the discussion forum has moved to a new address: https://discuss.tvm.ai

@wangshangsam (Contributor)

Hi @tqchen,

I noticed that the newer version of the TVM paper (https://arxiv.org/pdf/1802.04799.pdf) mentions that automatic schedule space exploration & tuning has been implemented and evaluated, and that it is faster than Tensor Comprehensions on conv2d. However, I still can't find the code for it in the TVM repo. Could you point me in the right direction?

Thanks a lot!
Shang

tqchen (Member, Author) commented Jun 5, 2018

@wangshangsam more discussion can be found at https://discuss.tvm.ai/t/bring-auto-tuner-to-tvm/206/5. We will release autotvm in this release cycle.

tqchen (Member, Author) commented Jun 13, 2018

TensorFlow frontend support updated, cf. #1186

etaf commented Jun 20, 2018

@tqchen The features you listed above are very attractive. May I know when release 0.4 will be ready?

tqchen (Member, Author) commented Jun 22, 2018

Hybrid support: first phase finished (#1213)
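A minimal sketch of the hybrid programming model this refers to, assuming the tvm.hybrid decorator introduced in #1213 (the exact API may have evolved since):

```python
import tvm

# An imperative kernel written in a python subset; the decorator parses the
# python AST and lowers it to TVM IR (it can also run directly for debugging).
@tvm.hybrid.script
def outer_product(a, b, c):
    for i in range(a.shape[0]):
        for j in range(b.shape[0]):
            c[i, j] = a[i] * b[j]
```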

tqchen (Member, Author) commented Jul 12, 2018

aditya4d (Contributor) commented Jul 16, 2018

Here are some items that I would like TVM to have.

  1. Debug log: TensorFlow picks up its log level from an environment variable and emits which functions are being called. Having this would help in understanding issues better (see the sketch after this list).
  2. Performance log: As TVM is focused on inference, having a performance log across different function calls can help locate bottlenecks not only in the compiler but also in the runtime.
  3. Performance analysis on AMD CPUs.
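As an illustration of item 1, a generic python sketch of environment-variable-controlled logging (TVM_LOG_LEVEL is a hypothetical variable name here, not an existing TVM knob):

```python
import logging
import os

# Hypothetical knob, in the spirit of TensorFlow's TF_CPP_MIN_LOG_LEVEL:
# read the desired verbosity from the environment.
level_name = os.environ.get("TVM_LOG_LEVEL", "WARNING").upper()
logging.basicConfig(level=getattr(logging, level_name, logging.WARNING))

log = logging.getLogger("tvm")
log.debug("entering lowering pass")  # emitted only when TVM_LOG_LEVEL=DEBUG
```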

@ndcuong91

Hi @tqchen, I'm really interested in the 8-bit quantization feature in this release. May I know when it will be available?

tqchen (Member, Author) commented Aug 8, 2018

8-bit quantization will be the major focus of the next release cycle.

tqchen (Member, Author) commented Aug 8, 2018

Thanks to everyone who has pushed this release cycle forward over the past three months. I would like to propose releasing v0.4 on Aug 13th. As usual, the current checklist will go into the release note, and we will move the unfinished items into the roadmap of the next release cycle. The first post will be edited into a release note.

We encourage everyone in the community to weigh in, review, and vote on the release.

@dmlc/tvm-team

@tqchen tqchen changed the title TVM v0.4 Roadmap TVM v0.4 Release Aug 8, 2018
@tqchen tqchen changed the title TVM v0.4 Release TVM v0.4 Roadmap Aug 9, 2018
tqchen (Member, Author) commented Aug 9, 2018

The 0.4 release note candidate is now up at #1577

tqchen (Member, Author) commented Aug 13, 2018

v0.4 is now tagged; the next cycle's roadmap issue is available at #1596

@tqchen tqchen closed this as completed Aug 13, 2018
@apache apache locked as resolved and limited conversation to collaborators Aug 13, 2018