About me...
I am a Laplace Postdoctoral Chair in Data Science at École Normale Supérieure, Paris. Previously I was a postdoctoral fellow at ETH Zurich after completing my PhD in 2021 at the University of Edinburgh, supervised by Profs Tim Hospedales and Iain Murray.
My main research interest is in developing a mathematical understanding of how successful machine learning methods (e.g. neural networks) work, with the aim of: (i) developing better-performing, more interpretable and more reliable machine learning algorithms; and perhaps (ii) deepening our understanding of the underlying data, such as language, images, speech or DNA, in terms of its latent structure and the mechanisms by which it can be learned. While my PhD focused on representing discrete objects, e.g. words, knowledge graph entities/relations and network/graph nodes, my current work considers more general latent variable/representation models.
My PhD thesis (Towards a Theoretical Understanding of Word and Relation Representation) focused on developing a theoretical understanding of how words are represented, whether as word embeddings learned from huge text corpora (e.g. by word2vec or GloVe), or as entity embeddings learned from the facts (subject, relation, object) of a knowledge graph.
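The knowledge graph side of this can be made concrete. In the TuckER model listed among the publications below, a fact (subject, relation, object) is scored by contracting a learned core tensor with the subject, relation and object embeddings. The following is a minimal numpy sketch of that scoring function, not the paper's implementation: the toy dimensions and random (untrained) embeddings are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_e, d_r = 4, 3          # entity / relation embedding dimensions (toy sizes)

# Illustrative embeddings for one subject, one relation and one object.
# In practice these are learned from the knowledge graph's facts.
e_s = rng.normal(size=d_e)
w_r = rng.normal(size=d_r)
e_o = rng.normal(size=d_e)

# Core tensor W (d_e x d_r x d_e), shared across all triples.
W = rng.normal(size=(d_e, d_r, d_e))

# TuckER-style score of the triple (s, r, o): contract W with
# e_s, w_r and e_o along its three modes.
score = np.einsum('ijk,i,j,k->', W, e_s, w_r, e_o)

# Map the score to a probability that the fact holds.
p = 1.0 / (1.0 + np.exp(-score))
print(f"score = {score:.3f}, p(fact) = {p:.3f}")
```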
During my PhD I spent 6 months as an intern at Samsung AI Centre, Cambridge, working at the intersection of representation learning and logical reasoning.
Background: I moved to Artificial Intelligence/Machine Learning research after some time working in Project Finance. I hold a BSc in Mathematics and Chemistry from the University of Southampton, an MSc in Mathematics and the Foundations of Computer Science (MFoCS) from the University of Oxford, and MScs in Artificial Intelligence and Data Science from the University of Edinburgh.
Awards & Invited Talks: “Analogies Explained: Towards Understanding Word Embeddings” received a Best Paper, Honourable Mention award at ICML 2019. I have been awarded grant funding from the Hasler Foundation, and have given several invited talks, including at the Harvard Center of Mathematical Sciences and Applications and AstraZeneca.
Publications
A Probabilistic Model for Self-Supervised Learning
A Bizeul, B Schölkopf, C Allen;
under review, 2024
Variational Classification: A Probabilistic Generalization of the Softmax Classifier
[arXiv]
S Dhuliawala, M Sachan, C Allen;
TMLR, 2024
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs
[NeurIPS]
Đ Miladinović, K Shridhar, K Jain, M Paulus, JM Buhmann, C Allen;
NeurIPS, 2022
Adapters for Enhanced Modelling of Multilingual Knowledge and Text
[arXiv]
Y Hou, W Jiao, M Liu, C Allen, Z Tu, M Sachan;
EMNLP, 2022
Interpreting Knowledge Graph Relation Representation from Word Embeddings
[arXiv]
C Allen*, I Balažević*, T Hospedales;
ICLR, 2021
Multi-scale Attributed Node Embedding
[arXiv] [github]
B Rozemberczki, C Allen, R Sarkar;
Journal of Complex Networks, 2021
What the Vec? Towards Probabilistically Grounded Embeddings
[arXiv]
C Allen, I Balažević, T Hospedales;
NeurIPS, 2019
Multi-relational Poincaré Graph Embeddings
[arXiv] [github]
I Balažević, C Allen, T Hospedales;
NeurIPS, 2019
Analogies Explained: Towards Understanding Word Embeddings
[arXiv]
[blog post]
[slides]
C Allen, T Hospedales;
ICML, 2019 (Best Paper, Honourable Mention)
TuckER: Tensor Factorization for Knowledge Graph Completion
[arXiv] [github]
I Balažević, C Allen, T Hospedales;
EMNLP, 2019 (oral)
Hypernetwork Knowledge Graph Embeddings
[arXiv] [github]
I Balažević, C Allen, T Hospedales;
ICANN, 2019 (oral)
Posts
subscribe via RSS