Projects
I have contributed to several large collaborative projects, including ParaCrawl, HPLT, MaLA, and UTTER, which provide open-source, web-scale multilingual resources: data, tools, and models.
I have also been fortunate to work with many wonderful people on smaller research projects spanning machine translation, summarization, multilingual and cross-lingual methods, and LLMs, including agentic systems, long-context modelling, reward modelling, multi-modality, and trustworthy evaluation.
Recent Preprints
- Dayyán O'Brien, Barry Haddow, Emily Allaway, and Pinzhen Chen. MatheMagic: Generating dynamic mathematics benchmarks robust to memorization.
- Xiao Zhu, Chenmien Tan, Pinzhen Chen, Rico Sennrich, Yanlin Zhang, and Hanxu Hu. CHARM: Calibrating reward models with Chatbot Arena scores.
- Wenhao Zhu, Pinzhen Chen, Hanxu Hu, Shujian Huang, Fei Yuan, Jiajun Chen, and Alexandra Birch. Generalizing from short to long: Effective data synthesis for long-context instruction tuning.
- Shaoxiong Ji, Zihao Li, Indraneil Paul, Jaakko Paavola, Peiqin Lin, Pinzhen Chen, Dayyán O'Brien, Hengyu Luo, Hinrich Schütze, Jörg Tiedemann, and Barry Haddow. EMMA-500: Enhancing massively multilingual adaptation of large language models.
Recent Publications
- Stephan Oepen, Nikolay Arefev, Mikko Aulamo, Marta Bañón, Maja Buljan, Laurie Burchell, Lucas Charpentier, Pinzhen Chen, Mariya Fedorova, Ona de Gibert, Barry Haddow, Jan Hajič, Jindřich Helcl, Andrey Kutuzov, Veronika Laippala, Zihao Li, Risto Luukkonen, Bhavitvya Malik, Vladislav Mikhailov, Amanda Myntti, Dayyán O'Brien, Lucie Poláková, Sampo Pyysalo, Gema Ramírez Sánchez, Janine Siewert, Pavel Stepachev, Jörg Tiedemann, Teemu Vahtola, Dušan Variš, Fedor Vitiugin, Tea Vojtěchová, and Jaume Zaragoza. HPLT 3.0: Very large-scale multilingual resources for LLM and MT. Mono- and bilingual data, multilingual evaluation, and pre-trained models. Accepted to LREC 2026.
- David Tan, Pinzhen Chen, Josef van Genabith, and Koel Dutta Chowdhury. When Flores Bloomz wrong: Cross-direction contamination in machine translation evaluation. EACL 2026.
- Dayyán O'Brien, Bhavitvya Malik, Ona de Gibert, Pinzhen Chen, Barry Haddow, and Jörg Tiedemann. DocHPLT: A massively multilingual document-level translation dataset. WMT 2025.
- Kirill Semenov, Xu Huang, Vilém Zouhar, Nathaniel Berger, Dawei Zhu, Arturo Oncevay, and Pinzhen Chen. Findings of the WMT25 terminology translation task: Terminology is useful especially for good MTs. WMT 2025.
- Tom Kocmi, Sweta Agrawal, Ekaterina Artemova, Eleftherios Avramidis, Eleftheria Briakou, Pinzhen Chen, Marzieh Fadaee, Markus Freitag, Roman Grundkiewicz, Yupeng Hou, Philipp Koehn, Julia Kreutzer, Saab Mansour, Stefano Perrella, Lorenzo Proietti, Parker Riley, Eduardo Sánchez, Patricia Schmidtova, Mariya Shmatova, and Vilém Zouhar. Findings of the WMT25 multilingual instruction shared task: Persistent hurdles in reasoning, generation, and evaluation. WMT 2025.
- Vivek Iyer, Pinzhen Chen, Ricardo Rei, and Alexandra Birch. XL-Suite: Cross-lingual synthetic training and evaluation data for open-ended generation. EMNLP Findings 2025.
- Laurie Burchell, Ona de Gibert, Nikolay Arefyev, Mikko Aulamo, Marta Bañón, Pinzhen Chen, Mariia Fedorova, Liane Guillou, Barry Haddow, Jan Hajič, Jindřich Helcl, Erik Henriksson, Mateusz Klimaszewski, Ville Komulainen, Andrey Kutuzov, Joona Kytöniemi, Veronika Laippala, Petter Mæhlum, Bhavitvya Malik, Farrokh Mehryary, Vladislav Mikhailov, Nikita Moghe, Amanda Myntti, Dayyán O'Brien, Stephan Oepen, Proyag Pal, Jousia Piha, Sampo Pyysalo, Gema Ramírez-Sánchez, David Samuel, Pavel Stepachev, Jörg Tiedemann, Dušan Variš, Tereza Vojtěchová, and Jaume Zaragoza-Bernabeu. An expanded massive multilingual dataset for high-performance language technologies. ACL 2025.
- Hanxu Hu, Simon Yu, Pinzhen Chen, and Edoardo M. Ponti. Fine-tuning large language models with sequential instructions. NAACL 2025.
- Mateusz Klimaszewski, Pinzhen Chen, Liane Guillou, Ioannis Papaioannou, Barry Haddow, and Alexandra Birch. AveniBench: Accessible and versatile evaluation of finance intelligence. FinNLP 2025.
- Shaoxiong Ji and Pinzhen Chen. How many languages make good multilingual instruction tuning? A case study on BLOOM. COLING 2025.
A complete list of publications can be found on Google Scholar.