Language models have caused a paradigm shift in Natural Language Processing (NLP), driving innovation in both academic research and industrial applications. This course provides an in-depth understanding of language modeling, tracing the historical developments in the field that led to today's larger, revolutionary models. It is designed as an optional prerequisite for courses in the following semesters. Lab sessions will demonstrate how the theory behind language models applies in practice, focusing on downstream NLP tasks such as sentiment analysis.
Topics covered:
- Introduction to Language and NLP
- Theory of language modeling
- Generation + First "working" models (RNNs) + Seq2seq and translation
- Word2Vec, Masked Language Models
- Semantic representations
- Large Language Models + Prompt Engineering
- Low-rank adaptation
- Ethical Aspects of LLMs
- Instructor: Mehwish Alam
- Instructor: Maria Boritchev
- Instructor: Nils Holzenberger
- Instructor: Matthieu Labeau
- Instructor: Fabian Suchanek