What Does it Mean for a Language Model to Preserve Privacy?

Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr (alphabetical author ordering)

ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2022



Abstract

Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and they tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus, there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy) and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data that was explicitly produced for public use.
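The abstract notes that language models tend to memorize training phrases and that an adversary can exploit this to extract training data. As a rough illustration only, and not the paper's method, the sketch below shows how such a memorization probe might look with a generic Hugging Face causal language model; the model name, prompts, and decoding settings are placeholder assumptions.

# Hypothetical sketch: probing a causal language model for memorized training text.
# The model name and prefixes are placeholders chosen for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: stand-in for any causal LM trained on scraped text
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefixes that might precede sensitive content in the training data.
prefixes = [
    "My social security number is",
    "Please send the invoice to",
]

for prefix in prefixes:
    inputs = tokenizer(prefix, return_tensors="pt")
    # Greedy decoding tends to surface highly memorized continuations.
    outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

If a generated continuation reproduces a verbatim string from the training set, that string has been memorized; whether emitting it violates privacy depends on the content and the context in which it was originally shared, which is the central point of the paper.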


BibTeX
@inproceedings{BLMS+22,
  author    = {Brown, Hannah and Lee, Katherine and Mireshghallah, Fatemehsadat and Shokri, Reza and Tram{\`e}r, Florian},
  title     = {What Does it Mean for a Language Model to Preserve Privacy?},
  booktitle = {ACM Conference on Fairness, Accountability, and Transparency (FAccT)},
  year      = {2022},
  note      = {arXiv preprint arXiv:2202.05520},
  url       = {https://arxiv.org/abs/2202.05520}
}