Ferenc Huszár
For students
I am NOT planning to take PhD students in this application cycle (October 2022 start). I managed to recruit a fantastic first cohort of students last year, so I want to focus on them.
I am, however, looking for applicants to our one-year, research-heavy MPhil programme. A number of full scholarships are available thanks to generous donations from Twitter and DeepMind. Here is a guide I wrote for prospective students last year; it hasn't been updated, but hopefully the information there is still useful.
I currently don't have a good way to host interns and visiting students in Cambridge.
About me
I'm Ferenc Huszár, Senior Lecturer in Machine Learning at the University of Cambridge. I recently joined the Department of Computer Science and Technology, a nice and cozy department, where we're building a new machine learning group together with Neil Lawrence, Carl Henrik Ek, and others. I'm interested in principled deep learning techniques: optimization, generalization, representation, transfer, meta-learning, and so on.
I did my PhD in Bayesian machine learning with Carl Rasmussen, Máté Lengyel and Zoubin Ghahramani over at the Engineering Department in Cambridge. I worked on approximate inference [1, 2], active learning [3, 4], and applications of these to the sciences [5, 6].
Following my PhD I worked in various jobs in the London tech/startup sector. My highlight as a researcher was joining Magic Pony Technology, a startup where we developed deep learning-based image super-resolution [7, 8] and compression [9] techniques. After Twitter acquired Magic Pony, I worked on a range of ML topics, including recommender systems [10, 11] and fair machine learning.
inFERENCe
I started this blog some time ago, and by now it has a life of its own. inFERENCe got started in 2015, when I returned to machine learning research after a three-year stint as a data scientist. I had basically slept through the deep learning revolution: many things happened in those three years, so I had to play catch-up.
Initially, these blog posts helped me understand the body of literature I had missed: generative adversarial networks, variational autoencoders, representation learning, etc. Nowadays, I continue reading and writing about current papers, trying to reinterpret them and find connections to things I know, usually ending up with some kind of KL divergence.
If you're new here and want to get a taste, you may want to start with these crowd favourites:
- introduction to causal inference and do-calculus, and its follow-up posts
- dilated convolutions and Kronecker products
- Gaussian Distributions as Soap Bubbles
- Everything that works works because it's Bayesian
- GANs are broken in more than one way
Contact
You can email me at fh277@cam.ac.uk, follow me on Twitter @fhuszar, or look at my Google Scholar profile.