Webpage of Edouard Oyallon
I am Edouard Oyallon (unlike the common French name 'Édouard', there is no accent on the 'E'), a CNRS researcher in the MLIA team of Sorbonne University.
Before that, you might have met me at the Flatiron Institute (CCM), Ecole Polytechnique (DepMap), CentraleSupélec (Opis), INRIA Lille (SequeL, with Michal Valko), Ecole Normale Supérieure (DATA, with Stéphane Mallat), or even at ENS Cachan, campus de Ker Lann. My research interests used to lie in the foundations of machine learning techniques, examining both their applied and theoretical aspects, including the symmetries of deep neural networks. Currently, I am developing algorithms for large-scale distributed and decentralized training, using asynchronous methods and increased parallelism to accelerate the training of larger neural networks.
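To give a flavor of the decentralized training theme above, here is a toy NumPy sketch of gossip averaging, the basic primitive behind decentralized optimization: each worker repeatedly averages its parameters with its neighbors on a communication graph, so all workers converge to the global mean without a central server. This is only an illustration of the general idea (ring topology and mixing weights are my own choices here), not the actual algorithms from my papers.

```python
import numpy as np

def ring_gossip_matrix(n):
    """Doubly stochastic mixing matrix for n workers on a ring:
    each worker keeps half its weight and takes a quarter from each neighbor."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

def gossip_average(params, steps=100):
    """Repeatedly mix local parameters; converges to the global mean."""
    W = ring_gossip_matrix(len(params))
    x = np.array(params, dtype=float)
    for _ in range(steps):
        x = W @ x  # one synchronous gossip round
    return x

local = [1.0, 5.0, 3.0, 7.0]  # one scalar parameter per worker
print(gossip_average(local, steps=200))  # each entry approaches the mean, 4.0
```

In practice, the interesting questions (and the subject of the work below) are how fast this mixing happens, and how to avoid waiting for synchronous rounds by making the communication asynchronous.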
Feel free to email me at edouard[d.o.t]oyallon[a.t]cnrs[d.o.t]fr to discuss my work.
My Google Scholar.
My GitHub.
My CV.
Students/postdoctoral researchers, past and present:
- 2023: Thomas Pumir (Princeton University), Postdoctoral researcher ↝ Helm.ai
- 2022-now: Stéphane Rivaud (SonyCSL), Postdoctoral researcher ↝ INRIA
- 2021-now: Adel Nabli, intern (CentraleSupélec/Université de Montréal) ↝ PhD student, co-supervised with Eugene Belilovsky.
- 2021-now: Léo Grinsztajn, intern (Ecole Polytechnique) ↝ PhD student, co-supervised with Gaël Varoquaux.
- 2021-now: Louis Fournier, intern (Ecole Polytechnique) ↝ PhD student.
- 2020-2021: Louis Leconte, intern (ENS Paris-Saclay), co-supervised with Aymeric Dieuleveut and Eric Moulines.
- 2021: Jakob Maier, intern (Institut Polytechnique de Paris).
- 2021: Jiang Ruiyao, intern (Ecole Polytechnique).
Grants/projects
- 2022, ADONIS project, funded by ANR and Sorbonne University, for which I'm the PI.
- 2022, VHS, collaborator
- 2022, CoCa4AI, collaborator
Academic service:
- 2018-now, One of the core developers and maintainers of Kymatio (http://www.kymat.io), a Python package for wavelet scattering transforms with GPU acceleration.
- 2022, Co-organiser of the GdR ISIS Recent Advances in Graph Machine Learning.
- 2019-2022, Member of the MALIA group at the Société Française de Statistique (SFdS).
- 2021, Organizer of the Learning and Deep Learning session at Journées MAS 2021.
- 2021-2022, Co-organizer of the AAAI 2021 Workshop: Learning Network Architecture During Training.
- 2020-2021, Co-organizer, with the SFdS, Okwin and AccentureLab, of the Federated Learning workshop.
- 2019, Co-organizer of the ICLR 2019 Workshop: Learning with Limited Labeled Data: Representation Learning for Weak Supervision and Beyond.
- 2017, Co-organizer of the NeurIPS 2017 Workshop: Learning with Limited Labeled Data: Weak Supervision and Beyond.
Recent preprints/technical reports:
- Nabli A. and Oyallon E. - Decentralized Asynchronous Optimization with DADAO allows Decoupling and Acceleration, preprint. https://hal.science/hal-03737694
- Fournier L. and Oyallon E. - Cyclic Data Parallelism for Efficient Parallelism of Deep Neural Networks, preprint. https://arxiv.org/abs/2403.08837
- Thérien B., Joseph C.-É., Knyazev B., Oyallon E., Rish I., and Belilovsky E. - LO: Compute-Efficient Meta-Generalization of Learned Optimizers, preprint. https://arxiv.org/abs/2406.00153
- Fournier L., Nabli A., Aminbeidokhti M., Pedersoli M., Belilovsky E., and Oyallon E. - WASH: Train your Ensemble with Communication-Efficient Weight Shuffling, then Average, preprint. https://hal.science/hal-04588075
- Rivaud S., Fournier L., Pumir T., Belilovsky E., Eickenberg M., and Oyallon E. - PETRA: Parallel End-to-end Training with Reversible Architectures, preprint. https://hal.science/hal-04594647
- Grinsztajn L., Eickenberg M., Oyallon E., and Varoquaux G. - Encoding numerical values for Transformers, submitted.
- Nabli A., Fournier L., Erbacher P., Serrano L., Belilovsky E., and Oyallon E. - ACCO: Accumulate while you Communicate, Hiding Communications in Distributed LLM Training, preprint. https://hal.science/hal-04592562
List of publications:
- Oyallon E. - Contributions to Local, Asynchronous and Decentralized Learning, and to Geometric Deep Learning, HDR manuscript, 2023. https://hal.science/tel-04334118, slides
- Grinsztajn L., Kim M.J., Oyallon E. and Varoquaux G. - Modeling string entries for tabular data prediction: do we need big large language models?, NeurIPS TRL 2023. https://openreview.net/forum?id=WXNmnmpRBJ
- Nabli A., Belilovsky E. and Oyallon E. - A2CiD2: Accelerating Asynchronous Communication in Decentralized Deep Learning, NeurIPS 2023. https://hal.science/hal-04124318
- Legate G., Bernier N., Caccia L., Oyallon E., Belilovsky E., Guiding The Last Layer in Federated Learning with Pre-Trained Models, NeurIPS 2023. https://arxiv.org/abs/2306.03937
- Tenison I., Sreeramadas S. A., Mugunthan V., Oyallon E., Rish I., Belilovsky E. - Gradient Masked Averaging for Federated Learning, TMLR 2023. https://arxiv.org/abs/2201.11986
- Fournier L., Patel A., Eickenberg M., Oyallon E. and Belilovsky E. - Preventing Dimensional Collapse in Contrastive Local Learning with Subsampling, ICML LLW 2023. https://hal.science/hal-04156218
- Fournier L., Rivaud, S., Belilovsky E., Eickenberg M. and Oyallon E. - Can Forward Gradient Match Backpropagation?, ICML 2023. https://hal.science/hal-04119829
- Nabli A. and Oyallon E. - DADAO: Decoupled Accelerated Decentralized Asynchronous Optimization, ICML 2023. https://hal.science/hal-03737694
- Grinsztajn L., Oyallon E. and Varoquaux G. - Why do tree-based models still outperform deep learning on tabular data?, NeurIPS 2022. https://hal.science/hal-03723551/
- Sergeant-Perthuis G., Maier J., Bruna J. and Oyallon E. - On Non-Linear operators for Geometric Deep Learning, NeurIPS 2022. https://hal.science/hal-03711864/
- Laousy O., Chassagnon G., Oyallon E., Paragios N., Revel M.-P., Vakalopoulou M. - Deep Reinforcement Learning for L3 Slice Localization in Sarcopenia Assessment, International Workshop on Machine Learning in Medical Imaging 2021. https://arxiv.org/abs/2107.12800
- Grinsztajn N., Preux P. and Oyallon E. - Low-rank projections of GCNs Laplacian, ICLR 2021 Workshop GTRL. https://openreview.net/forum?id=DnlDTxsQ8Uz
- Thiry L., Arbel M., Belilovsky E. and Oyallon E. - The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods, ICLR 2021. https://openreview.net/forum?id=aYuZO9DIdnn
- Oyallon E. - Interferometric Graph Transform: a Deep Unsupervised Graph Representation, ICML 2020. https://arxiv.org/abs/2006.05722
- Belilovsky E., Eickenberg M. and Oyallon E. - Decoupled Greedy Learning of CNNs, ICML 2020. https://arxiv.org/abs/1901.08164
- Andreux M., Angles T., Exarchakis G., Leonarduzzi R., Rochette G., Thiry L., Zarka J., Mallat S., Andén J., Belilovsky E., Bruna J., Lostanlen V., Matthew J. Hirn, Oyallon E., Zhang S., Cella C., Eickenberg, M. - Kymatio: Scattering Transforms in Python, JMLR software 2020. https://arxiv.org/abs/1812.11214
- Chizat L., Oyallon, E. and Bach F. - On Lazy Training in Differentiable Programming, NeurIPS 2019. https://arxiv.org/abs/1812.07956
- Belilovsky E., Eickenberg M. and Oyallon E. - Greedy Layerwise Learning Can Scale to ImageNet, ICML 2019. https://arxiv.org/abs/1812.11446, slides
- Belilovsky E., Eickenberg M., Oyallon E. - Do Deep Convolutional Network Layers Need to be Trained End-to-End?, NeurIPS CRACT workshop 2018.
- Oyallon E., Belilovsky E., Zagoruyko S., and Valko M. - Compressing the Input for CNNs with the First-Order scattering Transform, ECCV 2018. https://arxiv.org/abs/1809.10200, poster
- Scieur D., Oyallon E., d'Aspremont, A. and Bach, F. - Nonlinear Acceleration of CNNs, ICLR workshop 2018. https://openreview.net/forum?id=HkNpF_kDM, long version https://arxiv.org/abs/1805.09639
- Oyallon E., Zagoruyko S., Huang G., Komodakis N., Lacoste-Julien S., Blaschko M., and Belilovsky E. - Scattering Networks for Hybrid Representation Learning, TPAMI 2018. https://arxiv.org/abs/1809.06367
- Jacobsen J.-H., Smeulders A.W.M. and Oyallon E. - i-RevNet: Deep Invertible Networks, ICLR 2018. https://openreview.net/forum?id=HJsjkMb0Z
- Oyallon E. - Analyzing and Introducing Structures in Deep Convolutional Neural Networks, PhD thesis, 2017. https://hal.archives-ouvertes.fr/tel-02353134v1, slides, intro to my thesis (french)
- Oyallon E., Belilovsky E., and Zagoruyko S. - Scaling the Scattering Transform: Deep Hybrid Networks, ICCV 2017. https://arxiv.org/abs/1703.08961, poster
- Jacobsen J.-H., Oyallon E., Mallat, S. and Smeulders, A.W.M. - Multiscale Hierarchical Convolutional Networks, ICML PADL 2017. https://arxiv.org/abs/1703.04140, poster
- Oyallon E. - Building a Regular Decision Boundary with Deep Networks, CVPR 2017. https://arxiv.org/abs/1703.01775, poster
- Oyallon E. and Mallat S. - Deep Roto-translation Scattering for Object Classification, CVPR 2015. http://arxiv.org/abs/1412.8659, poster
- Oyallon E. and Rabin J. - An Analysis of the SURF Method, IPOL 2015. http://www.ipol.im/pub/art/2015/69/
- Oyallon E., Mallat S. and Sifre L. - Generic Deep Networks with wavelet Scattering, ICLR 2014 workshop. http://arxiv.org/abs/1312.5940, poster
Teaching/trainings:
- Generic notes from my Deep Learning teaching can be found here. Feedback to improve them is welcome!
- 2023, Lecturer, Advanced topics in Deep Learning, at IPP, webpage of the class
- 2022, Lecturer, Advanced topics in Deep Learning, at IPP
- 2021, Lecturer, Deep Learning, at the Hi! Paris 2021 summer school, slides | Lecturer, Advanced topics in Deep Learning, at IPP | Guest lecturer, Deep Learning, at Concordia University | Lecturer, Deep Learning, at CIRM - notebooks are here, videos part 1/part 2, written notes.
- 2019, Lecturer, Reinforcement Learning, CentraleSupélec | Lecturer, Deep Learning in Practice, MVA/CentraleSupélec, with Guillaume Charpiat, webpage of the class.
- 2018, Tutorial at Ateliers Statistiques de la Société Française de Statistique | Teaching assistant, Deep Learning, MVA.
- 2017, Corporate Seminar Series with Sébastien Loustau.
- 2014-2017, Teaching assistant, ENSAE, fundamental probability and calculus classes.