Paper: Dataset Distillation (arXiv.org)

> Model distillation aims to distill the knowledge of a complex model into a simpler one. In this paper, we consider an alternative formulation called dataset distillation: we keep the model fixed...
Code:
Zhihu introduction: zhuanlan.zhihu.com
Goal: given a model and a dataset, obtain a new, greatly reduced synthetic dataset whose performance is almost as good as that of the original dataset.
Our task is to find the minimizer of the empirical error over the entire training data:

$$\theta^{*} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \ell(x_i, \theta)$$
Training with fixed network parameters (drawback: when training a model on the synthetic dataset, the model must start from exactly the same initialization $\theta_0$ as before, so the distilled data does not generalize to other initializations). Given $\theta_0$, the synthetic data $\tilde{x}$ and learning rate $\tilde{\eta}$ are optimized so that a single gradient step on $\tilde{x}$ minimizes the loss on the real data $\mathbf{x}$:

$$\tilde{x}^{*}, \tilde{\eta}^{*} = \arg\min_{\tilde{x}, \tilde{\eta}} \mathcal{L}(\tilde{x}, \tilde{\eta}; \theta_0) = \arg\min_{\tilde{x}, \tilde{\eta}} \ell\big(\mathbf{x},\, \theta_0 - \tilde{\eta}\, \nabla_{\theta_0} \ell(\tilde{x}, \theta_0)\big)$$
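A minimal PyTorch sketch of this bilevel loop, assuming a linear classifier on flattened 28×28 inputs and random tensors standing in for a real training batch (the paper distills through small convnets on MNIST/CIFAR; all names and hyperparameters here are illustrative):

```python
import torch
import torch.nn.functional as F

# Stand-ins for a real training batch (in practice: minibatches of MNIST, etc.).
x_real = torch.randn(256, 1, 28, 28)
y_real = torch.randint(0, 10, (256,))

def loss_fn(x, y, w):
    # Linear classifier on flattened images; a stand-in for the paper's convnets.
    return F.cross_entropy(x.flatten(1) @ w, y)

# Learnable synthetic set (here: one image per class) and a learnable step size.
x_syn = torch.randn(10, 1, 28, 28, requires_grad=True)
y_syn = torch.arange(10)                 # synthetic labels stay fixed
lr_syn = torch.tensor(0.02, requires_grad=True)

theta0 = (torch.randn(784, 10) * 0.01).requires_grad_()   # the fixed initialization
outer_opt = torch.optim.Adam([x_syn, lr_syn], lr=1e-3)

for step in range(500):
    # Inner step: one differentiable SGD update of theta on the synthetic data.
    inner_loss = loss_fn(x_syn, y_syn, theta0)
    grad, = torch.autograd.grad(inner_loss, theta0, create_graph=True)
    theta1 = theta0 - lr_syn * grad
    # Outer step: evaluate the updated weights on real data, then backprop
    # through the inner update into x_syn and lr_syn.
    outer_loss = loss_fn(x_real, y_real, theta1)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

The key detail is `create_graph=True`: it keeps the inner gradient step differentiable, so the outer loss can flow back into the synthetic images and the learning rate.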
Training with random network parameters: to remove the dependence on one particular $\theta_0$, sample initializations from a distribution $p(\theta_0)$ and minimize the expected loss:

$$\tilde{x}^{*}, \tilde{\eta}^{*} = \arg\min_{\tilde{x}, \tilde{\eta}} \; \mathbb{E}_{\theta_0 \sim p(\theta_0)}\big[\mathcal{L}(\tilde{x}, \tilde{\eta}; \theta_0)\big]$$
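Continuing the sketch above, the only change is re-sampling $\theta_0$ inside the outer loop, which gives a Monte Carlo estimate of the expectation:

```python
for step in range(500):
    # Sample a fresh initialization each outer step, so the distilled images
    # must work for networks trained from random weights, not just one theta0.
    theta0 = (torch.randn(784, 10) * 0.01).requires_grad_()
    inner_loss = loss_fn(x_syn, y_syn, theta0)
    grad, = torch.autograd.grad(inner_loss, theta0, create_graph=True)
    theta1 = theta0 - lr_syn * grad
    outer_loss = loss_fn(x_real, y_real, theta1)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```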
Training with multiple gradient-descent steps and multiple epochs (in effect, wrapping step 6 of the algorithm in extra for loops so the inner update runs several more rounds): each inner step $t$ can have its own synthetic batch $\tilde{x}_t$ and learning rate $\tilde{\eta}_t$, and each epoch reuses the same sequence of batches.
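A sketch of the unrolled inner loop under the same assumptions as above, with per-step synthetic batches and step sizes (the loop counts are arbitrary choices for illustration):

```python
num_epochs, steps_per_epoch = 3, 4

# One synthetic batch and step size per inner step; epochs reuse the same sequence.
x_syn_steps = [torch.randn(10, 1, 28, 28, requires_grad=True)
               for _ in range(steps_per_epoch)]
lr_syn_steps = [torch.tensor(0.02, requires_grad=True)
                for _ in range(steps_per_epoch)]
outer_opt = torch.optim.Adam(x_syn_steps + lr_syn_steps, lr=1e-3)

for step in range(500):
    theta = (torch.randn(784, 10) * 0.01).requires_grad_()
    # Unroll several epochs of several differentiable GD steps each.
    for epoch in range(num_epochs):
        for t in range(steps_per_epoch):
            inner_loss = loss_fn(x_syn_steps[t], y_syn, theta)
            grad, = torch.autograd.grad(inner_loss, theta, create_graph=True)
            theta = theta - lr_syn_steps[t] * grad
    # The outer loss backpropagates through the whole unrolled trajectory.
    outer_loss = loss_fn(x_real, y_real, theta)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

Unrolling more steps makes the synthetic data more expressive but increases memory, since the full computation graph of the trajectory must be kept for backpropagation.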
Distillation under different network weights

- Author: Coding
- URL: http://preview.tangly1024.com/article/example-8
- Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA. Please credit the source when reposting!
Related Posts
Dataset Condensation with Gradient Matching
Dataset Distillation by Matching Training Trajectories
Dataset Condensation with Distribution Matching
Dataset Condensation with Differentiable Siamese Augmentation
Dataset Condensation via Efficient Synthetic-Data Parameterization
Generalizing Dataset Distillation via Deep Generative Prior