The Natural Language Processing Group at the University of Edinburgh (EdinburghNLP) is a group of faculty, postdocs, and PhD students working on algorithms that make it possible for computers to understand and produce human language. We do research in all core areas of natural language processing, including morphology, parsing, semantics, discourse, language generation, and machine translation. EdinburghNLP also has a strong track record of work at the interface of NLP with other areas, including speech technology, machine learning, computer vision, cognitive modeling, social media, information retrieval, robotics, bioinformatics, and educational technology.

With 11 core faculty members, EdinburghNLP is one of the largest NLP groups in the world, and it is also ranked as the most productive group in the area. Our achievements include the award-winning neural machine translation system Nematus and the high-performance language modeling toolkit KenLM. EdinburghNLP faculty have a strong record of winning high-profile grants, including a total of five European Research Council (ERC) grants to date.

We are looking for new PhD students! Join us. Also, please check out the new UKRI Centre for Doctoral Training in Natural Language Processing!


3rd day of @emnlp2019: @kchonyc in his great keynote talks about sequence generation with an adaptive order and mentions (among others) our @NeurIPSConf paper by my Yandex student Dima Emelianenko (w/ Pavel Serduykov).

By the way, the paper is out!

Xinchi (@dalstonchen) will show on Thu that "capsules" are a natural way of embedding NL 'propositions' (i.e. predicates and their typed arguments); this view yields a simple iterative inference algo for SRL.
13:30, AWE Hall 2A #emnlp2019 @chunchuan_lyu


2nd day of @emnlp: Evolution of Representations in the Transformer!
16:30-18:00, hall 2A, poster P43
(another paper with my research parents @iatitov and @RicoSennrich )

First day of @emnlp: had a lot of fun explaining how to do context-aware NMT without document-level parallel data and, what is more surprising, why it even works 🙂
(the paper with my research parents @iatitov and @RicoSennrich)

Wednesday presentations at #emnlp2019 by
@lena_voita and Bailin Wang (@berlin1993428)


Lena's blog with lots of extra details:
