Intriguing Properties of Data Attribution on Diffusion Models

1Singapore Management University
2Sea AI Lab, Singapore

Abstract

Data attribution seeks to trace model outputs back to training data. With the recent development of diffusion models, data attribution has become a desirable tool for properly valuing high-quality or copyrighted training samples, ensuring that data contributors are fairly compensated or credited. Several theoretically motivated methods have been proposed to implement data attribution, in an effort to improve the trade-off between computational scalability and effectiveness.

In this work, we conduct extensive experiments and ablation studies on attributing diffusion models, specifically focusing on DDPMs trained on CIFAR-10 and CelebA, as well as a Stable Diffusion model LoRA-finetuned on ArtBench. Intriguingly, we report counter-intuitive observations that theoretically unjustified design choices for attribution empirically outperform previous baselines by a large margin, in terms of both linear datamodeling score and counterfactual evaluation.
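The linear datamodeling score (LDS) mentioned above can be illustrated with a minimal sketch: attribution scores for a sample of interest are summed over random training subsets, and these predictions are rank-correlated with the outputs of models actually retrained on those subsets. The function names and the tie-free rank computation below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def _ranks(x):
    # Rank values in ascending order (no tie handling; enough for a sketch).
    r = np.empty(len(x), dtype=float)
    r[np.argsort(x)] = np.arange(len(x))
    return r

def linear_datamodeling_score(tau, subsets, retrained_outputs):
    """tau: (n_train,) attribution scores for one sample of interest.
    subsets: list of index arrays, each a random training subset.
    retrained_outputs: measured model output for each retrained subset.
    Returns the Spearman rank correlation between predicted and
    actual outputs (computed as Pearson correlation on ranks)."""
    predicted = np.array([tau[s].sum() for s in subsets], dtype=float)
    actual = np.asarray(retrained_outputs, dtype=float)
    rp, ra = _ranks(predicted), _ranks(actual)
    rp -= rp.mean()
    ra -= ra.mean()
    return float((rp @ ra) / np.sqrt((rp @ rp) * (ra @ ra)))
```

A higher LDS means the additive attribution model better predicts how the output would change under retraining on subsets of the data.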

Our work presents a significantly more efficient approach for attributing diffusion models, while the unexpected findings suggest that at least in non-convex settings, constructions guided by theoretical assumptions may lead to inferior attribution performance.

Counter-intuitive observations

Visualization

Proponents and opponents visualization on ArtBench-2 using TRAK and D-TRAK with varying numbers of timesteps (10 or 100). For each sample of interest, the 5 most positively influential training samples and the 3 most negatively influential training samples are shown together with their influence scores (below each sample).
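Selecting proponents and opponents as described above amounts to sorting the per-training-sample influence scores for a given generated sample and taking the extremes. A minimal sketch (the function name and defaults are illustrative assumptions):

```python
import numpy as np

def proponents_opponents(scores, k_pos=5, k_neg=3):
    """scores: (n_train,) influence scores for one generated sample.
    Returns the indices of the k_pos most positively influential
    (proponents) and k_neg most negatively influential (opponents)
    training samples."""
    order = np.argsort(scores)          # ascending by influence score
    proponents = order[::-1][:k_pos]    # largest (most positive) scores
    opponents = order[:k_neg]           # smallest (most negative) scores
    return proponents, opponents
```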

Counterfactual visualization on CIFAR-2 (Left) and ArtBench-2 (Right). We compare the original generated samples to those generated from the same random seed with the retrained models.

BibTeX

@inproceedings{
  zheng2023intriguing,
  title={Intriguing Properties of Data Attribution on Diffusion Models},
  author={Zheng, Xiaosen and Pang, Tianyu and Du, Chao and Jiang, Jing and Lin, Min},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2024}
}