The Natural Language Processing Group at the University of Edinburgh (EdinburghNLP) is a group of faculty, postdocs, and PhD students working on algorithms that make it possible for computers to understand and produce human language. We do research in all core areas of natural language processing, including morphology, parsing, semantics, discourse, language generation, and machine translation. EdinburghNLP also has a strong track record of work at the interface of NLP with other areas, including speech technology, machine learning, computer vision, cognitive modeling, social media, information retrieval, robotics, bioinformatics, and educational technology.

With 11 core faculty members, EdinburghNLP is one of the largest NLP groups in the world. It is also ranked as the most productive group in the area by csrankings.org. Our achievements include the award-winning neural machine translation system Nematus and the high-performance language modeling toolkit KenLM. EdinburghNLP faculty have a strong record of winning high-profile grants, including five European Research Council (ERC) grants to date.

We are looking for new PhD students! Join us, and please also check out the new UKRI Centre for Doctoral Training in Natural Language Processing!

News

We are pleased to welcome the first cohort of students to the interdisciplinary UKRI Centre for Doctoral Training in Natural Language Processing https://t.co/ilM8WOBVas

A fascinating article by @lena_voita if you're interested in understanding what makes MLM models like BERT different from LM models like GPT/GPT-2 (auto-regressive) and MT models.

And conveyed in such a beautiful blog post, a masterpiece of knowledge sharing! https://t.co/OlmIsv2ewc
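For readers new to the distinction the post draws, here is a minimal illustrative sketch (ours, not from the original post) using the Hugging Face transformers library; the model names are just common defaults:

# Masked language models (MLM, e.g. BERT) predict a hidden token
# from both its left and right context.
# Autoregressive language models (LM, e.g. GPT/GPT-2) predict the
# next token from the left context only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("The cat [MASK] on the mat.")[0]["token_str"])   # e.g. "sat"

lm = pipeline("text-generation", model="gpt2")
print(lm("The cat sat on the", max_new_tokens=1)[0]["generated_text"])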

Evolution of Representations in the Transformer: blog post on our @emnlp2019 paper is out!
blog post: https://t.co/tknJUpjPF9
paper: https://t.co/bX7fU9qNKs
@lena_voita, @RicoSennrich, @iatitov

We look at Transformers trained for MT, LM, and MLM (aka BERT) tasks and show differences in how the representations of individual tokens evolve. To explain the underlying process behind the observed behavior, we look at this evolution from the Information Bottleneck perspective.
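As an illustration of what "how representations of individual tokens evolve" means in practice, here is a minimal sketch (not the paper's analysis code) that extracts each token's per-layer hidden states with the Hugging Face transformers library; the paper compares such trajectories across MT-, LM-, and MLM-trained Transformers:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple: the embedding layer plus one tensor per
# Transformer layer, each of shape (batch, sequence_length, hidden_size).
# Comparing a token's vector across these layers shows how its
# representation evolves through the network.
for layer, states in enumerate(out.hidden_states):
    print(f"layer {layer}: {tuple(states.shape)}")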

Congratulations to Marco Damonte, who passed his viva with minor corrections last Friday! His thesis is entitled "Understanding and Generating Language with Abstract Meaning Representation." Thank you, Johan Bos and Ivan Titov, for being the examiners.
