Since I got the opportunity to study in the USA next year, I had to finish my bachelor thesis a month earlier than expected. Still, I’m quite happy with the result.
Its title: Implementation of Modified Kneser-Ney Smoothing on Top of Generalized Language Models for Next Word Prediction
Next word prediction is the task of suggesting the most probable word a user will type next. Current approaches are based on the empirical analysis of corpora (large collections of text), resulting in probability distributions over the word sequences that occur in the corpus. The resulting language models are then used to predict the most likely next word. State-of-the-art language models are based on n-grams and use smoothing algorithms such as modified Kneser-Ney smoothing to reduce data sparsity by adjusting the probability distribution of unseen sequences. Previous research has shown that building word pairs of different distances by inserting wildcard words into the sequences can further reduce data sparsity and thereby yield better predictions. The aim of this thesis is to formalize this novel approach and to implement it in combination with modified Kneser-Ney smoothing.
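For readers unfamiliar with the smoothing method in the title: the formula below is the standard recursive form of interpolated modified Kneser-Ney from Chen and Goodman (1998), which the thesis builds on. How the discounts and continuation counts carry over to generalized n-grams is exactly what the thesis formalizes, so take this as background rather than as the thesis’s final formula.

```latex
% Interpolated modified Kneser-Ney, highest-order case (Chen & Goodman, 1998).
% D(c) selects one of three discounts based on the raw count c:
%   D(c) = 0 for c = 0,   D_1 for c = 1,   D_2 for c = 2,   D_{3+} for c >= 3.
% gamma(.) is the interpolation weight chosen so the distribution sums to one.
\[
P_{\mathrm{MKN}}\bigl(w_i \mid w_{i-n+1}^{i-1}\bigr) =
  \frac{c\bigl(w_{i-n+1}^{i}\bigr) - D\bigl(c(w_{i-n+1}^{i})\bigr)}
       {\sum_{w'} c\bigl(w_{i-n+1}^{i-1}\,w'\bigr)}
  + \gamma\bigl(w_{i-n+1}^{i-1}\bigr)\,
    P_{\mathrm{MKN}}\bigl(w_i \mid w_{i-n+2}^{i-1}\bigr)
\]
```

And to make the wildcard idea concrete, here is a minimal sketch in Python of how generalized n-grams could be extracted from a token sequence. It is my own illustration rather than code from the thesis; the function name `generalized_ngrams`, the `*` placeholder, and the restriction of skips to inner positions are assumptions made for this example.

```python
from collections import Counter
from itertools import combinations

WILDCARD = "*"  # placeholder token standing in for a skipped word

def generalized_ngrams(tokens, n):
    """Yield every n-gram of `tokens`, plus the variants in which any
    subset of the inner positions is replaced by the wildcard."""
    inner = list(range(1, n - 1))  # positions eligible for skipping
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        for k in range(len(inner) + 1):
            for skipped in combinations(inner, k):
                yield tuple(WILDCARD if j in skipped else window[j]
                            for j in range(n))

# Count generalized trigrams in a toy corpus: "over * lazy" relates
# "over" and "lazy" at distance 2 by skipping the word in between.
tokens = "the quick brown fox jumps over the lazy dog".split()
counts = Counter(generalized_ngrams(tokens, 3))
print(counts[("over", WILDCARD, "lazy")])  # -> 1
```

In a larger corpus, "over * lazy" also matches "over a lazy", "over my lazy", and so on, so the wildcard counts are denser than plain trigram counts; that is the sparsity reduction the abstract refers to.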
But to be clear: this novel approach, called Generalized Language Models, was not my idea. After we had worked together on the Typology project, René presented this concept to me as a topic for my bachelor thesis, and I was happy to implement it.
You can find my thesis here.
Today I had my colloquium, which focused more on the implementation of smoothed Generalized Language Models for large datasets (6 GB+). You can find the slides here. The slides of my Oberseminar talk, which covered the theoretical parts, are here as well.
The next step is the evaluation of smoothed Generalized Language Models. I’m looking forward to finding out whether the results really are better than those of smoothed n-gram language models!