Low-Resource NLP: Natural Language Processing and Word Embeddings

Welcome to the first session in Low-Resource NLP: Multilinguality and Machine Translation!

In this first session of the series, we define what a low-resource setting is and why it requires specialised NLP techniques. We discuss approaches such as data enrichment and general machine learning methods, and introduce the basics of word embeddings and their different types.

The following sessions will cover areas such as Unsupervised Machine Translation, Transformer Models, Self-Supervised Neural Machine Translation, and the state of the art: WMT Evaluations.

The YouTube video can be found below, and the PowerPoint slides are available at this link.
