Nov 28, 2022
Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy. The inference cost of these models, which consist of hundreds of millions of parameters, can however be prohibitively large. Hence, in practice, LLMs are often compressed before they are deployed in specific applications. In this paper, we initiate the study of differentially private model compression and propose frameworks for achieving 50% sparsity levels while maintaining nearly full performance.
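To make the two ingredients in the abstract concrete, here is a minimal sketch of one DP-SGD fine-tuning step (per-example gradient clipping plus Gaussian noise) followed by magnitude pruning to 50% weight sparsity in plain PyTorch. This is an illustrative assumption, not the paper's actual framework: the toy model, hyperparameters, and the choice of L1 magnitude pruning are all placeholders.

```python
# Sketch: DP-SGD step + 50% magnitude pruning (assumed, illustrative setup).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)

model = nn.Linear(16, 2)          # stand-in for a fine-tuning head
loss_fn = nn.CrossEntropyLoss()
max_grad_norm = 1.0               # per-example clipping bound C (assumed)
noise_multiplier = 1.1            # sigma; noise std = sigma * C (assumed)
lr = 0.1

# Toy "private" batch (in practice: the downstream task's private data).
x = torch.randn(8, 16)
y = torch.randint(0, 2, (8,))

# --- One DP-SGD step: clip each example's gradient, sum, add noise ---
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(max_grad_norm / (norm + 1e-6), max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy = s + torch.randn_like(s) * noise_multiplier * max_grad_norm
        p -= lr * noisy / len(x)  # average over the batch, then step

# --- Compression: prune 50% of the weights by L1 magnitude ---
prune.l1_unstructured(model, name="weight", amount=0.5)
prune.remove(model, "weight")     # bake the zeros into the weight tensor
sparsity = (model.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")  # ~50%
```

In practice the DP-SGD step would be handled by a library such as Opacus rather than an explicit per-example loop, and the pruning schedule would be interleaved with (or follow) private fine-tuning; this sketch only shows the mechanics each piece involves.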