I am a PhD student in NLP at the University of Melbourne, developing computational methods for difficult (e.g., cross-cultural or low-resource) translation. I have also worked on topics including multilinguality [1-6], psycholinguistics [3-6] and interpretability [1, 2, 5].
My long-term research interests include:
- understanding how model architecture, training data and algorithms impose learning biases on language models, and how these limit their ability to represent cognitively driven language phenomena
- studying how large-scale models navigate varying (sometimes conflicting) goals, and developing controllable mechanisms that drive convergent or adaptable behaviour across time, domains, languages and modalities.
I am jointly advised by Ekaterina Vylomova, Charles Kemp and Trevor Cohn. In 2024, I was a student researcher at Google Research Australia.
Selected Publications
- Zheng Wei Lim, Alham Fikri Aji, and Trevor Cohn.
  Preprint, 2025.
- Zheng Wei Lim, Nitish Gupta, Honglin Yu, and Trevor Cohn.
  International Conference on Learning Representations, 2025.
- Zheng Wei Lim, Ekaterina Vylomova, Trevor Cohn, and Charles Kemp.
  Association for Computational Linguistics, 2024.
- Zheng Wei Lim, Harry Stuart, Simon De Deyne, Terry Regier, Ekaterina Vylomova, Trevor Cohn, and Charles Kemp.
  Cognitive Science 48, no. 1 (2024): e13402.
- Zheng Wei Lim, Ekaterina Vylomova, Charles Kemp, and Trevor Cohn.
  Transactions of the Association for Computational Linguistics, 2024.
- Zheng Wei Lim, Trevor Cohn, Charles Kemp, and Ekaterina Vylomova.
  Findings of the Association for Computational Linguistics, 2023.