Course: TinyML and Efficient Deep Learning Computing
Instructor: Song Han (Associate Professor, MIT EECS)
Fall 2023([schedule] | [youtube]) | Fall 2022([schedule] | [youtube])
-
Study efficient inference methods
Learn algorithms that improve the efficiency of deep learning computation.
-
Building deep learning models under constrained resources
Design efficient deep learning models tailored to device constraints.
-
latency, storage, energy
Memory-Related(#parameters, model size, #activations), Computation(MACs, FLOPs)
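As a concrete example of these metrics, a minimal sketch (plain Python; the layer shapes are made-up, not from the lecture) that counts #parameters and MACs for a single 2D convolution layer:

```python
# Minimal sketch: counting #parameters and MACs for one Conv2d layer.
# The layer shapes below are illustrative examples.

def conv2d_params(c_in, c_out, k):
    """Weights (c_out * c_in * k * k) plus one bias per output channel."""
    return c_out * c_in * k * k + c_out

def conv2d_macs(c_in, c_out, k, h_out, w_out):
    """One MAC per weight element per output spatial position."""
    return c_out * c_in * k * k * h_out * w_out

if __name__ == "__main__":
    # e.g. a 3x3 conv: 64 -> 128 channels on a 56x56 output feature map
    print("params:", conv2d_params(64, 128, 3))        # 73,856
    print("MACs  :", conv2d_macs(64, 128, 3, 56, 56))  # ~231M
```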
-
Pruning Granularity, Pruning Criterion
Unstructured/Structured pruning(Fine-grained/Pattern-based/Vector-level/Kernel-level/Channel-level)
Pruning Criterion: Magnitude(L1-norm, L2-norm), Sensitivity and Saliency(SNIP), Loss Change(First-Order, Second-Order Taylor Expansion)
Data-Aware Pruning Criterion: Average Percentage of Zeros(APoZ), Reconstruction Error, Entropy
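A minimal NumPy sketch of fine-grained (unstructured) magnitude pruning; the sparsity ratio and tensor below are illustrative:

```python
import numpy as np

def magnitude_prune(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries; returns a binary mask."""
    k = int(round(sparsity * weight.size))
    if k == 0:
        return np.ones_like(weight)
    threshold = np.sort(np.abs(weight), axis=None)[k - 1]
    return (np.abs(weight) > threshold).astype(weight.dtype)

w = np.random.randn(4, 8)
mask = magnitude_prune(w, sparsity=0.5)   # keep the largest-|w| half
w_pruned = w * mask
```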
-
Automatic Pruning, Lottery Ticket Hypothesis
Finding Pruning Ratio: Reinforcement Learning based, Rule based, Regularization based, Meta-Learning based
Lottery Ticket Hypothesis(Winning Ticket, Iterative Magnitude Pruning, Scaling Limitation)
Pruning at Initialization(Connection Sensitivity, Gradient Flow)
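A schematic sketch of iterative magnitude pruning for finding a winning ticket; `train_to_convergence` is a hypothetical stand-in for the actual training loop, and the prune ratio per round is illustrative:

```python
import numpy as np

def iterative_magnitude_pruning(w_init, train_to_convergence,
                                prune_per_round=0.2, rounds=5):
    """Lottery-ticket style IMP: train, prune the smallest surviving weights,
    rewind the survivors to their initial values, repeat."""
    mask = np.ones_like(w_init)
    w = w_init.copy()
    for _ in range(rounds):
        w = train_to_convergence(w * mask) * mask        # hypothetical trainer
        alive = np.abs(w[mask == 1])
        thr = np.quantile(alive, prune_per_round)        # drop 20% of survivors
        mask = mask * (np.abs(w) > thr)
        w = w_init.copy()                                # rewind to initialization
    return mask, w_init * mask                           # the "winning ticket"
```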
-
System & Hardware Support for Fine-grained Sparsity
Efficient Inference Engine(EIE format: relative index, column pointer)
Sparse Matrix-Matrix Multiplication(SpMM), Sparse Coding(CSR format)
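A small sketch of CSR (compressed sparse row) storage for a pruned weight matrix, hand-rolled for clarity (scipy.sparse.csr_matrix would produce the same three arrays):

```python
import numpy as np

def to_csr(dense: np.ndarray):
    """Convert a dense matrix to CSR arrays: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        nz = np.nonzero(row)[0]
        values.extend(row[nz].tolist())
        col_idx.extend(nz.tolist())
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx), np.array(row_ptr)

W = np.array([[0., 2., 0., 0.],
              [1., 0., 0., 3.],
              [0., 0., 0., 0.]])
vals, cols, ptrs = to_csr(W)
# vals = [2. 1. 3.], cols = [1 0 3], ptrs = [0 1 3 3]
```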
-
Basic Concepts of Quantization
Numeric Data Types: Integer, Fixed-Point, Floating-Point(IEEE FP32/FP16, BF16, NVIDIA FP8), INT4 and FP4
Uniform vs Non-uniform quantization, Symmetric vs Asymmetric quantization
Linear Quantization: Integer-Arithmetic-Only Quantization, Sources of Quantization Error(clipping, rounding, scaling factor, zero point)
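A minimal NumPy sketch of asymmetric linear (affine) quantization, r ≈ S·(q − Z), showing where the clipping/rounding error and the scaling factor and zero point come from (the tensor is made-up):

```python
import numpy as np

def linear_quantize(x, n_bits=8):
    """Asymmetric uniform quantization: q = round(x / S) + Z, x_hat = S * (q - Z)."""
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)          # scaling factor S
    zero_point = int(round(qmin - x.min() / scale))      # zero point Z
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    x_hat = scale * (q.astype(np.float32) - zero_point)  # dequantize
    return q, x_hat, scale, zero_point

x = np.random.randn(64).astype(np.float32)
q, x_hat, S, Z = linear_quantize(x)
print("max abs error:", np.abs(x - x_hat).max())         # rounding + clipping error
```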
-
Vector Quantization(Deep Compression: iterative pruning, K-means based quantization, Huffman encoding), Product Quantization
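A small sketch of K-means weight sharing in the spirit of Deep Compression: weights are clustered into 2^b centroids and stored as codebook indices (a tiny hand-rolled K-means, illustrative only):

```python
import numpy as np

def kmeans_quantize(weight, n_bits=2, n_iters=20):
    """Cluster weights into 2**n_bits shared values; store indices + codebook."""
    flat = weight.reshape(-1)
    k = 2 ** n_bits
    # linear initialization between min and max, as described in Deep Compression
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(n_iters):
        idx = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(idx == c):
                centroids[c] = flat[idx == c].mean()
    return idx.reshape(weight.shape), centroids          # indices + codebook

w = np.random.randn(4, 4).astype(np.float32)
idx, codebook = kmeans_quantize(w, n_bits=2)              # 16 weights -> 4 shared values
w_hat = codebook[idx]                                     # reconstructed weights
```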
-
Weight Quantization: Per-Tensor, Per-Channel, Group Quantization(Per-Vector, MX), Weight Equalization, Adaptive Rounding
Activation Quantization: During training(EMA), Calibration(Min-Max, KL-divergence, Mean Squared Error)
Bias Correction, Zero-Shot Quantization(ZeroQ)
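A sketch contrasting per-tensor and per-channel symmetric weight scales (output-channel-first layout and shapes are assumptions for illustration):

```python
import numpy as np

def symmetric_scales(weight, n_bits=8, per_channel=True):
    """Symmetric quantization scale(s): S = max|w| / (2**(b-1) - 1)."""
    qmax = 2 ** (n_bits - 1) - 1
    if per_channel:
        # one scale per output channel (axis 0) reduces clipping/rounding error
        max_abs = np.abs(weight).reshape(weight.shape[0], -1).max(axis=1)
    else:
        max_abs = np.abs(weight).max()                    # a single per-tensor scale
    return max_abs / qmax

w = np.random.randn(8, 4, 3, 3)                           # (out_ch, in_ch, kH, kW)
print(symmetric_scales(w, per_channel=False))             # scalar
print(symmetric_scales(w, per_channel=True).shape)        # (8,)
```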
-
Quantization-Aware Training, Low bit-width quantization
Fake quantization, Straight-Through Estimator
Binary Quantization(Deterministic, Stochastic, XNOR-Net), Ternary Quantization
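A minimal PyTorch sketch of fake quantization with the straight-through estimator, using the common detach trick so the non-differentiable rounding passes gradients through unchanged:

```python
import torch

def fake_quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Quantize-dequantize in the forward pass; identity gradient in the backward pass (STE)."""
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    x_hat = (q - zero_point) * scale
    # straight-through estimator: forward uses x_hat, backward sees d(x_hat)/dx = 1
    return x + (x_hat - x).detach()

w = torch.randn(16, requires_grad=True)
out = fake_quantize(w, scale=torch.tensor(0.05), zero_point=0)
out.sum().backward()
print(w.grad)   # all ones: rounding did not block the gradient
```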
-
Neural Architecture Search: basic concepts & manually-designed neural networks
input stem, stage, head
AlexNet, VGGNet, SqueezeNet(fire module), ResNet(bottleneck block, residual connection), ResNeXt(grouped convolution)
MobileNet(depthwise-separable convolution, width/resolution multiplier), MobileNetV2(inverted bottleneck block), ShuffleNet(channel shuffle), SENet(squeeze-and-excitation block), MobileNetV3(redesigning expensive layers, h-swish)
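A small PyTorch sketch of the MobileNet-style depthwise-separable convolution (a depthwise 3x3 followed by a pointwise 1x1); the channel counts are illustrative:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: depthwise 3x3 conv + pointwise 1x1 conv."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size=3, stride=stride,
                                   padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(c_in), nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

x = torch.randn(1, 32, 56, 56)
y = DepthwiseSeparableConv(32, 64)(x)   # roughly 8-9x fewer MACs than a full 3x3 conv
print(y.shape)                           # torch.Size([1, 64, 56, 56])
```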
-
Neural Architecture Search: Search Space
Search Space: Macro, Chain-Structured, Cell-based(NASNet), Hierarchical(Auto-DeepLab, NAS-FPN)
design search space: Cumulative Error Distribution, FLOPs distribution, zero-cost proxy
-
Neural Architecture Search: Performance Estimation & Hardware-Aware NAS
Weight Inheritance, HyperNetwork, Weight Sharing(super-network, sub-network)
Performance Estimation Heuristics: Zen-NAS, GradSign
-
Knowledge Distillation(distillation loss, softmax temperature)
What to Match?: intermediate weights, features(attention maps), sparsity pattern, relational information
Distillation Scheme: Offline Distillation, Online Distillation, Self-Distillation
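A minimal PyTorch sketch of the standard distillation loss: KL divergence between temperature-softened teacher and student distributions, scaled by T², blended with the usual cross-entropy term (temperature and mixing weight are illustrative):

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target KL term (scaled by T^2) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 10)                   # student logits (batch of 8, 10 classes)
t = torch.randn(8, 10)                   # teacher logits
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y))
```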
-
Applications: Object Detection, Semantic Segmentation, GAN, NLP
Tiny Neural Network: NetAug
-
MCUNetV1: TinyNAS, TinyEngine
MCUNetV2: MCUNetV2 architecture(MobileNetV2-RD), patch-based inference, joint automated search
-
Memory Hierarchy of Microcontroller, Primary Memory Format(NCHW, NHWC, CHWN)
Parallel Computing Techniques: Loop Unrolling, Loop Reordering, Loop Tiling, SIMD programming
Inference Optimization: Im2col, In-place depthwise convolution, appropriate data layout(pointwise, depthwise convolution), Winograd convolution
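A minimal NumPy sketch of im2col: unrolling sliding windows into columns so convolution becomes one matrix multiplication (single channel, stride 1, no padding, for brevity):

```python
import numpy as np

def im2col(x, k):
    """Turn k x k patches of an (H, W) input into columns of a (k*k, L) matrix."""
    H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((k * k, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[i:i + k, j:j + k].reshape(-1)
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)
kernel = np.ones((3, 3), dtype=np.float32)
cols = im2col(x, 3)                               # (9, 4)
y = (kernel.reshape(1, -1) @ cols).reshape(2, 2)  # convolution as a GEMM
```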
-
Efficient Video Understanding
2D CNNs for Video Understanding, 3D CNNs for Video Understanding(I3D), Temporal Shift Module(TSM)
Other Efficient Methods: Kernel Decomposition, Multi-Scale Modeling, Neural Architecture Search(X3D), Skipping Redundant Frames/Clips, Utilizing Spatial Redundancy
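A small PyTorch sketch of the Temporal Shift Module idea: shift a fraction of channels forward/backward along the time axis at zero FLOP cost (the 1/8 shift fraction follows the TSM paper; tensor shapes are illustrative):

```python
import torch

def temporal_shift(x, shift_div=8):
    """x: (N, T, C, H, W). Shift 1/shift_div of the channels to t-1 and another 1/shift_div to t+1."""
    out = torch.zeros_like(x)
    fold = x.size(2) // shift_div
    out[:, :-1, :fold] = x[:, 1:, :fold]                     # shift this chunk backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]     # shift this chunk forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]                # the rest stays in place
    return out

x = torch.randn(2, 8, 16, 14, 14)       # (batch, frames, channels, H, W)
y = temporal_shift(x)
print(y.shape)                           # unchanged: torch.Size([2, 8, 16, 14, 14])
```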
-
Generative Adversarial Networks (GANs)
GANs(Generator, Discriminator), Conditional/Unconditional GANs, Difficulties in GANs
Compress Generator(GAN Compression), Dynamic Cost GANs(Anycost GANs), Data-Efficient GANs(Differentiable Augmentation)
-
NLP Task(Discriminative, Generative), Pre-Transformer Era(RNN, LSTM, CNN)
Transformer: Tokenizer, Embedding, Multi-Head Attention, Feed-Forward Network, Layer Normalization(Pre-Norm, Post-Norm), Positional Encoding
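A minimal PyTorch sketch of single-head scaled dot-product attention, the core of multi-head attention (shapes are illustrative; multi-head just runs this per head on split projections):

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (..., seq_q, seq_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ V

Q = torch.randn(1, 5, 64)    # (batch, query length, head dim)
K = torch.randn(1, 7, 64)
V = torch.randn(1, 7, 64)
print(scaled_dot_product_attention(Q, K, V).shape)   # torch.Size([1, 5, 64])
```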
-
Encoder-Decoder(T5), Encoder-only(BERT), Decoder-only(GPT), Relative Positional Encoding, KV cache optimization, Gated Linear Unit
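A minimal sketch of the KV cache idea in decoder-only generation: keys/values of already generated tokens are cached so each step only attends with the newest token's query (single head, no projections, illustrative shapes):

```python
import math
import torch

def decode_step(q_new, k_new, v_new, kv_cache):
    """Append the new token's K/V to the cache, attend with only the new query."""
    if kv_cache is None:
        K, V = k_new, v_new
    else:
        K = torch.cat([kv_cache[0], k_new], dim=1)       # (1, t, d)
        V = torch.cat([kv_cache[1], v_new], dim=1)
    scores = q_new @ K.transpose(-2, -1) / math.sqrt(q_new.size(-1))
    out = torch.softmax(scores, dim=-1) @ V               # (1, 1, d)
    return out, (K, V)

cache, d = None, 64
for t in range(4):                                        # autoregressive decoding loop
    q = k = v = torch.randn(1, 1, d)                      # one new token per step
    out, cache = decode_step(q, k, v, cache)
print(cache[0].shape)                                     # cached keys: torch.Size([1, 4, 64])
```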
-
Vision Transformer, Window Attention(Swin Transformer), Sparse Window Attention(FlatFormer), ReLU Linear Attention(EfficientViT), Sparsity-Aware Adaptation(SparseViT)
Date | Lecture | YouTube | Slides |
---|---|---|---|
Sep 8 | Lecture 1: Introduction | - | [slides] |
Sep 13 | Lecture 2: Basics of Deep Learning | [video] | [slides] |
Efficient Inference | |||
Sep 15 | Lecture 3: Pruning and Sparsity (Part I) | [video] | [slides] |
Sep 20 | Lecture 4: Pruning and Sparsity (Part II) | [video][video(live)] | [slides] |
Sep 22 | Lecture 5: Quantization (Part I) | [video][video(live)] | [slides] |
Sep 27 | Lecture 6: Quantization (Part II) | [video][video(live)] | [slides] |
Sep 29 | Lecture 7: Neural Architecture Search (Part I) | [video][video(live)] | [slides] |
Oct 4 | Lecture 8: Neural Architecture Search (Part II) | [video][video(live)] | [slides] |
Oct 6 | Lecture 9: Neural Architecture Search (Part III) | [video][video(live)] | [slides] |
Oct 13 | Lecture 10: Knowledge Distillation | [video][video(live)] | [slides] |
Oct 18 | Lecture 11: MCUNet - Tiny Neural Network Design for Microcontrollers | [video][video(live)] | [slides] |
Efficient Training and System Support | |||
Oct 25 | Lecture 13: Distributed Training and Gradient Compression (Part I) | [video][video(live)] | [slides] |
Oct 27 | Lecture 14: Distributed Training and Gradient Compression (Part II) | [video][video(live)] | [slides] |
Nov 1 | Lecture 15: On-Device Training and Transfer Learning (Part I) | [video][video(live)] | [slides] |
Nov 3 | Lecture 16: On-Device Training and Transfer Learning (Part II) | [video][video(live)] | [slides] |
Nov 8 | Lecture 17: TinyEngine - Efficient Training and Inference on Microcontrollers | [video][video(live)] | [slides] |
Application-Specific Optimizations | |||
Nov 10 | Lecture 18: Efficient Point Cloud Recognition | [video][video(live)] | [slides] |
Nov 15 | Lecture 19: Efficient Video Understanding and GANs | [video][video(live)] | [slides]
Nov 17 | Lecture 20: Efficient Transformers | [video][video(live)] | [slides] |
Quantum ML | |||
Nov 22 | Lecture 21: Basics of Quantum Computing | [video][video(live)] | [slides] |
Nov 29 | Lecture 22: Quantum Machine Learning | [video][video(live)] | [slides] |
Dec 1 | Lecture 23: Noise Robust Quantum ML | [video][video(live)] | [slides] |
Dec 13 | Lecture 26: Course Summary & Guest Lecture | [video] | [slides] |