Ferenc Huszár

Our lab at Cambridge studies foundational aspects of learning and inference, with particular focus on modern deep learning methods.

PhD students

Research Assistants

Alumni and friends

For prospective students

I invite PhD applications relating to the theory of deep learning: learning dynamics in neural networks, rational behaviour and extrapolation phenomena in autoregressive language models, and neural network behaviour on algorithmic/mathematical datasets.

Topics in which I am not currently seeking new students: computer vision, superresolution, image compression, recommender systems, self-supervised representation learning, and causal inference.

About me

I'm Ferenc Huszár, Associate Professor of Machine Learning at the University of Cambridge. I joined the Department of Computer Science and Technology, a nice and cozy department, in 2020. I'm interested in principled deep learning techniques: optimization, generalization, representation, transfer, meta-learning, and so on. I focus more on understanding than on developing new techniques.

I did my PhD in Bayesian machine learning with Carl Rasmussen, Máté Lengyel and Zoubin Ghahramani over at the Engineering Department in Cambridge. I worked on approximate inference [1, 2], active learning [3, 4], and applications of these to the sciences [5, 6].

Following my PhD I worked in various jobs in the London tech/startup sector. My highlight as a researcher was joining Magic Pony Technology, a startup where we developed deep learning-based image superresolution [7, 8] and compression [9] techniques. After Twitter's acquisition of Magic Pony, I worked on a range of ML topics, including recommender systems [10, 11] and fair machine learning.

inFERENCe

I started this blog some time ago, and by now it has taken on a life of its own. inFERENCe got started in 2015, when I returned to machine learning research after a three-year stint as a data scientist. I had basically slept through the deep learning revolution: a lot happened in those three years, so I had to play catch-up.

Initially, these blog posts helped me understand the body of literature I had missed: generative adversarial networks, variational autoencoders, representation learning, and so on. Nowadays, I continue reading and writing about current papers, trying to reinterpret them and find connections to things I already know, usually ending up with some kind of KL divergence.

If you're new here and want to get a taste, you may want to start with these crowd favourites:

Contact

You can email me at fh277@cam.ac.uk, follow me on Twitter at @fhuszar, or take a look at my Google Scholar profile.