The Natural Language Processing Group at the University of Edinburgh (EdinburghNLP) is a group of faculty, postdocs, and PhD students working on algorithms that make it possible for computers to understand and produce human language. We do research in all core areas of natural language processing, including morphology, parsing, semantics, discourse, language generation, and machine translation. EdinburghNLP also has a strong track record of work at the interface of NLP with other areas, including speech technology, machine learning, computer vision, cognitive modeling, social media, information retrieval, robotics, bioinformatics, and educational technology.

With 11 core faculty members, EdinburghNLP is one of the largest NLP groups in the world. It is also ranked as the most productive group in the area, according to csrankings.org. Our achievements include the award-winning neural machine translation system Nematus and the high-performance language modeling toolkit KenLM. EdinburghNLP faculty have a strong record of winning high-profile grants, including five European Research Council (ERC) grants to date.

We are looking for new PhD students! Join us. Also, please check out the new UKRI Centre for Doctoral Training in Natural Language Processing!

We are hiring new faculty! See the job advertisement here.

News

Congratulations to all @EdinburghNLP authors who are presenting papers at @emnlpmeeting 2021! There are 25 Edinburgh papers in total.

Fully funded studentships for PhD students available, both in @Edin_CDT_NLP and in ILCC, @InfAtEd. For details please see https://web.inf.ed.ac.uk/cdt/natural-language-processing/apply

Sharon Goldwater and former @InfAtEd student Herman Kamper win the @INTERSPEECH2021 ISCA Award for Best Research Paper published in Computer Speech & Language for their paper 'Segmental framework for fully-unsupervised large-vocabulary speech recognition'
➡️ http://ow.ly/1qGW50G9w6K

Happy to announce my second #EMNLP2021 paper: Editing Factual Knowledge in Language Models.

We take pre-trained LMs (BERT/BART) and learn an "editor" function that can modify factual knowledge in the LM.

📄paper https://arxiv.org/abs/2104.08164
💻code https://github.com/nicola-decao/KnowledgeEditor

In our #EMNLP2021 paper on abstractive opinion summarization, we contribute:

- 33k+ human-written Amazon product summaries 🔥
- A model trained jointly (VI+RL) to select and summarize relevant reviews from large collections

📄: https://arxiv.org/pdf/2109.04325.pdf
💻: https://github.com/abrazinskas/SelSum
