Ferenc Huszár

For students

I'll be taking PhD students starting in October 2021. You would be in the first cohort of my students in a new lab. This has great benefits (I would have a lot more time to supervise you, and later generations of students will consider you the wise elder) but probably also some downsides. If you're considering starting a PhD in fundamental machine learning, please send me an email describing your interests. Applications are open, but we can have an informal chat before you apply.

About me

I'm Ferenc Huszár, Senior Lecturer in Machine Learning at the University of Cambridge. I recently joined the Department of Computer Science and Technology, a nice and cozy department, where we're building a new machine learning group with Neil Lawrence, Carl Henrik Ek, and others. I'm interested in principled deep learning techniques: optimization, generalization, representation, transfer, meta-learning, and so on.

I did my PhD in Bayesian machine learning with Carl Rasmussen, Máté Lengyel, and Zoubin Ghahramani over at the Engineering Department in Cambridge. I worked on approximate inference [1, 2], active learning [3, 4], and applications of these to the sciences [5, 6].

Following my PhD, I worked in various jobs in the London tech/startup sector. My highlight as a researcher was joining Magic Pony Technology, a startup where we developed deep learning-based image super-resolution [7, 8] and compression [9] techniques. After Twitter's acquisition of Magic Pony, I worked on a range of ML topics there, such as recommender systems [10, 11] and fair machine learning.

inFERENCe

I started this blog some time ago, but by now it has taken on a life of its own. inFERENCe got started when, in 2015, I returned to machine learning research after a three-year stint as a data scientist. I basically slept through the deep learning revolution: a lot happened in those three years, so I had to play catch-up.

Initially, these blog posts helped me understand the body of literature I had missed: generative adversarial networks, variational autoencoders, representation learning, etc. Nowadays, I continue reading and writing about current papers, trying to reinterpret them and find connections to things I know, usually ending up with some kind of KL divergence.

If you're new here and want to get a taste, you may want to start with these crowd favourites:

Contact

You can email me at fh277@cam.ac.uk, follow me on Twitter @fhuszar, or look at my Google Scholar profile.