LIT: Block-wise Intermediate Representation Training for Model Compression.
arXiv preprint, 2018.
Knowledge distillation (KD) is a popular method for reducing the computational overhead of deep network inference, in which the output of a teacher model is used to train a smaller, faster student model. Hint training (i.e., FitNets) extends KD by regressing a student model's intermediate representation to a teacher model's intermediate representation. …
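A minimal sketch of the two objectives the abstract describes, assuming PyTorch; the function names, temperature T, and weights alpha/beta are illustrative choices, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Classic knowledge distillation: match the teacher's softened outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as is conventional in Hinton-style distillation.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def hint_loss(student_feat, teacher_feat):
    """Hint training (FitNets): regress the student's intermediate
    representation onto the teacher's at a chosen layer."""
    return F.mse_loss(student_feat, teacher_feat)

# Illustrative combined objective for one batch (alpha, beta are assumed
# hyperparameters balancing the two terms):
# total = alpha * kd_loss(s_logits, t_logits) + beta * hint_loss(s_feat, t_feat)
```

In practice the hint term assumes the student and teacher features have matching shapes at the compared layer; when they do not, FitNets-style training inserts a learned projection on the student side before the regression.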