Quantifying Memorization Across Neural Language Models

Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, and Chiyuan Zhang (alphabetical author ordering)

International Conference on Learning Representations (ICLR) 2023 (Spotlight Presentation)



Abstract

Large language models (LMs) have been shown to memorize parts of their training data, and when prompted appropriately, they will emit the memorized training data verbatim. This is undesirable because memorization violates privacy (exposing user data), degrades utility (repeated easy-to-memorize text is often low quality), and hurts fairness (some texts are memorized over others).
We describe three log-linear relationships that quantify the degree to which LMs emit memorized training data. Memorization significantly grows as we increase (1) the capacity of a model, (2) the number of times an example has been duplicated, and (3) the number of tokens of context used to prompt the model. Surprisingly, we find the situation becomes complicated when generalizing these results across model families. On the whole, we find that memorization in LMs is more prevalent than previously believed and will likely get worse as models continue to scale, at least without active mitigations.
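To make the measurement concrete, the sketch below shows one way to test whether a model emits a training sequence verbatim when prompted with its preceding context, in the spirit of the extraction test described above. The model name (gpt2 as a stand-in), the greedy-decoding choice, and the is_memorized helper are illustrative assumptions, not the authors' exact pipeline.

# Minimal sketch of a verbatim-extraction check, assuming a HuggingFace causal LM
# (gpt2 as a stand-in) and greedy decoding; not the authors' exact pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def is_memorized(text: str, context_tokens: int = 50, continuation_tokens: int = 50) -> bool:
    """Prompt the model with the first `context_tokens` tokens of `text` and check
    whether greedy decoding reproduces the next `continuation_tokens` tokens verbatim."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if len(ids) < context_tokens + continuation_tokens:
        return False  # example too short to test at this context length
    prompt = ids[:context_tokens].unsqueeze(0)
    target = ids[context_tokens:context_tokens + continuation_tokens]
    with torch.no_grad():
        out = model.generate(
            prompt,
            max_new_tokens=continuation_tokens,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = out[0, context_tokens:context_tokens + continuation_tokens]
    return torch.equal(continuation, target)

# Sweeping context_tokens (e.g., 50 to 500) and averaging is_memorized over a
# sample of training documents gives the memorization rate as a function of
# prompt length, one of the three relationships quantified in the paper.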


BibTeX
@inproceedings{CIJL+23,
  author    = {Carlini, Nicholas and Ippolito, Daphne and Jagielski, Matthew and Lee, Katherine and Tram{\`e}r, Florian and Zhang, Chiyuan},
  title     = {Quantifying Memorization Across Neural Language Models},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2023},
  note      = {arXiv preprint arXiv:2202.07646},
  url       = {https://arxiv.org/abs/2202.07646}
}