Generative Modeling via Drifting

Mingyang Deng1, He Li1, Tianhong Li1, Yilun Du2, Kaiming He1

1MIT    2Harvard University

TL;DR

Training dynamics: the generated distribution q evolves towards the data distribution p. (Panels: Middle Init, Far-Away Init, Collapsed Init.)

Abstract

Generative modeling can be formulated as learning a mapping f such that its pushforward distribution matches the data distribution. The pushforward behavior can be carried out iteratively at inference time, e.g., in diffusion/flow-based models. In this paper, we propose a new paradigm called Drifting Models, which evolve the pushforward distribution during training and naturally admit one-step inference. We introduce a drifting field that governs the sample movement and achieves equilibrium when the distributions match. This leads to a training objective that allows the neural network optimizer to evolve the distribution. In experiments, our one-step generator achieves state-of-the-art results on ImageNet 256×256, with FID 1.54 in latent space and 1.61 in pixel space.
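As a concrete illustration, here is a minimal PyTorch sketch of what one training step could look like, combining the description above with the drifting_loss shown in the demo notebook further down this page. All names are illustrative assumptions; in particular, compute_drift is a placeholder for the drifting field defined in the paper, which reaches equilibrium when the generated distribution matches the data distribution.

    import torch
    import torch.nn.functional as F

    def drifting_train_step(generator, optimizer, data_batch, compute_drift):
        # Sample noise with the same shape as the data batch (an assumption
        # made for this sketch; the actual noise space may differ).
        z = torch.randn_like(data_batch)
        gen = generator(z)                  # one-step generation: x = f(z)
        V = compute_drift(gen, data_batch)  # placeholder drifting field at generated samples
        target = (gen + V).detach()         # drifted positions, with stop-gradient
        loss = F.mse_loss(gen, target)      # move samples toward their drifted positions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the target is detached, gradients flow only through the generated samples, so it is the network optimizer that evolves the pushforward distribution q towards p, and sampling at test time remains a single forward pass.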

Training evolution teaser

Generated Samples

Uncurated conditional ImageNet 256×256 samples (1 NFE, CFG Scale 1.0, FID 1.54)
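Here, 1 NFE means each image is produced by a single forward pass of the network, with no iterative refinement. A hypothetical sampling call (illustrative names, not the released API) could look like:

    import torch

    @torch.no_grad()
    def sample_one_step(generator, class_labels, latent_shape=(4, 32, 32)):
        # 1 NFE: a single forward pass of the generator, no iterative refinement.
        # latent_shape is an assumed placeholder (e.g. a VAE latent for 256x256
        # images); the generator and its conditioning interface are hypothetical.
        z = torch.randn(len(class_labels), *latent_shape)
        return generator(z, class_labels)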

Resources

Drifting model demo

Interactive notebook demonstrating the core algorithm on toy 2D distributions.

Notebook preview
Drifting Models: Step-by-Step Demo

Part 1: Core Algorithm

    def drifting_loss(gen, pos, compute_drift):
        V = compute_drift(gen, pos)
        target = (gen + V).detach()
        return F.mse_loss(gen, target)

Part 2: Toy 2D Examples

    # Checkerboard & Swiss Roll
    model, loss = train_toy(sample_swiss_roll, steps=2000)

Training output: Training on Swiss Roll... 100%, loss=1.63e-05.
Visualization: plots are generated inline in the notebook. Run in Colab to see results.
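To give a sense of what the toy setup might look like, below is an illustrative sketch of a Swiss-roll sampler and a small MLP generator trained with the drifting loss above. The names sample_swiss_roll and train_toy mirror the preview, but the bodies are assumptions rather than the notebook's actual code; compute_drift is passed in explicitly here as a placeholder for the drifting field from the paper, which the sketch does not reproduce.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sample_swiss_roll(n):
        # 2D Swiss roll: radius grows with angle, plus a little noise.
        t = 1.5 * math.pi * (1 + 2 * torch.rand(n))
        x = torch.stack([t * torch.cos(t), t * torch.sin(t)], dim=1)
        return x / 10 + 0.05 * torch.randn(n, 2)

    def train_toy(sample_data, compute_drift, steps=2000, batch=512, lr=1e-3):
        gen = nn.Sequential(nn.Linear(2, 128), nn.SiLU(),
                            nn.Linear(128, 128), nn.SiLU(),
                            nn.Linear(128, 2))       # maps 2D noise to 2D samples
        opt = torch.optim.Adam(gen.parameters(), lr=lr)
        for _ in range(steps):
            z = torch.randn(batch, 2)
            x_gen = gen(z)                           # one-step generation
            x_data = sample_data(batch)
            V = compute_drift(x_gen, x_data)         # placeholder drifting field
            loss = F.mse_loss(x_gen, (x_gen + V).detach())
            opt.zero_grad()
            loss.backward()
            opt.step()
        return gen, loss.item()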
Release

ImageNet training code and models will be released.

Citation

@article{deng2026drifting,
  title={Generative Modeling via Drifting},
  author={Deng, Mingyang and Li, He and Li, Tianhong and Du, Yilun and He, Kaiming},
  journal={arXiv preprint arXiv:2602.04770},
  year={2026}
}