Paper at ICASSP 2021 — Error-driven Pruning of Language Models for Virtual Assistants

Sashank Gondala, Lyan Verwimp, Ernest Pusateri, Manos Tsagkias, and Christophe van Gysel

Apple
14 February 2021
Keywords: paper, conference, information retrieval, machine learning, speech, named entities

Abstract

Language models (LMs) for virtual assistants (VAs) are typically trained on large amounts of data, resulting in prohibitively large models which require excessive memory and/or cannot be used to serve user requests in real-time. Entropy pruning results in smaller models but with significant degradation of effectiveness in the tail of the user request distribution. We customize entropy pruning by allowing for a keep list of infrequent n-grams that require a more relaxed pruning threshold, and propose three methods to construct the keep list. Each method has its own advantages and disadvantages with respect to LM size, ASR accuracy and cost of constructing the keep list. Our best LM gives 8% average Word Error Rate (WER) reduction on a targeted test set, but is 3 times larger than the baseline. We also propose discriminative methods to reduce the size of the LM while retaining the majority of the WER gains achieved by the largest LM.
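To make the keep-list idea concrete, here is a minimal, hypothetical sketch of threshold-based pruning with a relaxed threshold for keep-list n-grams. The function and variable names are illustrative assumptions, not the paper's implementation; in particular, the per-n-gram scores are assumed to come from a standard entropy-pruning computation done elsewhere.

```python
def prune_with_keep_list(ngram_scores, keep_list, base_threshold, relaxed_threshold):
    """Illustrative sketch (not the paper's code): keep an n-gram if its
    entropy-pruning score meets the applicable threshold.

    ngram_scores: dict mapping n-gram tuples to the relative-entropy increase
        that removing the n-gram would cause (as in standard entropy pruning).
    keep_list: set of infrequent n-grams judged by the more permissive
        relaxed_threshold, so they survive pruning more often.
    """
    assert relaxed_threshold <= base_threshold  # "relaxed" = easier to keep
    kept = {}
    for ngram, score in ngram_scores.items():
        threshold = relaxed_threshold if ngram in keep_list else base_threshold
        if score >= threshold:
            kept[ngram] = score
    return kept
```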

Happy to share yet another publication with the Siri Speech team at Apple, this time led by Sashank Gondala, who interned with us last year. Our full paper, Error-driven Pruning of Language Models for Virtual Assistants, has been accepted at ICASSP 2021.