Research

Interests

My current research interests lie in the theory of deep learning. During my PhD, I studied in particular the connection between large-depth residual networks and neural differential equations. You're welcome to download my PhD thesis. Previously, I worked on Natural Language Processing (following an internship at Google) and on Quasi-Monte Carlo methods (following an internship at the University of Montreal).
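In a nutshell, the connection mentioned above rests on a standard observation (sketched here only for illustration, with generic notation: $h_k$ the hidden state, $f$ the layer function, $\theta_k$ the layer parameters): a residual network with depth-dependent scaling is an Euler discretization of an ordinary differential equation.

```latex
% A depth-L residual network with scaling 1/L updates its hidden state as
h_{k+1} = h_k + \tfrac{1}{L}\, f(h_k, \theta_k), \qquad k = 0, \dots, L-1,
% which is the explicit Euler scheme, with step size 1/L, for the neural ODE
\frac{\mathrm{d}h}{\mathrm{d}t}(t) = f\bigl(h(t), \theta(t)\bigr), \qquad t \in [0, 1].
```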

Publications

Peer-reviewed publications

  • P. Marion*, Y.-H. Wu*, M. Sander, G. Biau, Implicit regularization of deep residual networks towards neural ODEs, accepted for publication at ICLR 2024, spotlight presentation, May 2024. Online access
  • P. Marion, Generalization bounds for neural ordinary differential equations and deep residual networks, Advances in Neural Information Processing Systems 36 (NeurIPS 2023), December 2023. Online access
  • P. Marion, R. Berthier, Leveraging the two-timescale regime to demonstrate convergence of neural networks, Advances in Neural Information Processing Systems 36 (NeurIPS 2023), December 2023. Online access
  • A. Fermanian*, P. Marion*, J.P. Vert, G. Biau, Framing RNN as a kernel method: A neural ODE approach, Advances in Neural Information Processing Systems 34 (NeurIPS 2021), oral presentation, December 2021. Online access
  • P. Marion, P. Nowak, F. Piccinno, Structured Context and High-Coverage Grammar for Conversational Question Answering over Knowledge Graphs, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), November 2021. Online access
  • P. L’Ecuyer, P. Marion, M. Godin, and F. Puchhammer, A Tool for Custom Construction of QMC and RQMC Point Sets, Proceedings of Monte Carlo and Quasi-Monte Carlo Methods 2020, August 2020. Online access
  • P. Marion, M. Godin, and P. L’Ecuyer, An algorithm to compute the t-value of a digital net and of its projections, Journal of Computational and Applied Mathematics, June 2020. Online access

(Asterisks denote joint first authorship.)

Preprints

  • P. Marion, A. Korba, P. Bartlett, M. Blondel, V. De Bortoli, A. Doucet, F. Llinares-López, C. Paquette, Q. Berthet, Implicit Diffusion: Efficient Optimization through Stochastic Sampling, arXiv:2402.05468, February 2024. Online access
  • P. Marion, A. Fermanian, G. Biau, J.P. Vert, Scaling ResNets in the Large-depth Regime, arXiv:2206.06929, June 2022. Online access

Other publications

  • C. Lucas, P. Marion, Recherche et innovation : comment rapprocher sphère publique et privée, Les Docs de La Fabrique, Paris, Presses des Mines, 2022 (in French)
  • P. Marion, Comment compter nos morts du Covid ?, The Conversation, May 2020, link (in French)
  • O. Borderies, O. Coudray and P. Marion, Authoring Custom Jupyter Widgets, The Jupyter Blog, March 2018, link

Talks and posters

  • Implicit Diffusion: Efficient Optimization through Stochastic Sampling, Mostly Monte Carlo Seminar, Paris, France, March 2024
  • Implicit Diffusion: Efficient Optimization through Stochastic Sampling, Google, Paris, France, February 2024
  • Generalization bounds for neural ordinary differential equations and deep residual networks, NeurIPS@Paris 2023, Paris, France, December 2023
  • Leveraging the two-timescale regime to demonstrate convergence of neural networks, NeurIPS@Paris 2023, Paris, France, December 2023
  • Framing RNNs as a kernel method through the signature, Rencontre sur les signatures, applications et machine learning, Pau, France, December 2023
  • Neural ODEs and residual neural networks, Inria HeKA team seminar, Paris, France, September 2023
  • Generalization bounds for neural ordinary differential equations and deep residual networks, poster at StatMathAppli 2023, Fréjus, France, September 2023
  • Two-timescale stochastic optimization for training neural networks, Journées de Statistique, Brussels, Belgium, July 2023
  • Leveraging the two-timescale regime to demonstrate convergence of neural networks, poster at FoCM 2023, Paris, France, June 2023
  • Neural network training with stochastic gradient descent in a two-timescale regime, PhD students seminar at LPSM, Sorbonne Université, Paris, France, March 2023
  • Framing RNN as a kernel method: a neural ODE approach, ICSDS 2022, Florence, Italy, December 2022
  • Recurrent neural networks are kernel methods: a neural ODE approach, EPFL, Switzerland, December 2022
  • Signatures and continuous-time neural networks for sequential data, Inria MIND team seminar, Saclay, France, November 2022
  • Large-depth limits of neural networks, MathInnov Day, Paris, France, October 2022
  • Scaling ResNets in the Large-depth Regime, LOL 2022 conference, Marseille, France, October 2022
  • Framing RNN as a kernel method: a neural ODE approach, Summer Cluster on deep learning theory, Simons Institute, Berkeley, USA, July 2022
  • Scaling ResNets in the Large-depth Regime, Journées de Statistique, Lyon, France, June 2022
  • Towards understanding residual neural networks via the large-depth limit, Journée Équation d'évolution et Machine Learning, Paris, France, May 2022
  • Large-depth limit for residual neural networks, Rencontre des Jeunes Statisticiens, Porquerolles, France, April 2022
  • Framing RNN as a kernel method: A neural ODE approach, Séminaire Parisien de Statistique, Paris, France, January 2022
  • Framing RNN as a kernel method: A neural ODE approach, NeurIPS@Paris 2021, Paris, France, December 2021
  • Framing RNN as a kernel method: A neural ODE approach, oral at NeurIPS 2021 (online), December 2021 (YouTube video)
  • Framing RNN as a kernel method: A neural ODE approach, PhD students seminar at LPSM, Sorbonne Université, Paris, France, November 2021
  • Framing RNN as a kernel method: A neural ODE approach, DeepMath Working Group, Sorbonne Université, Paris, France, November 2021
  • A primer on Neural ODEs, Institut de Mathématiques de Toulouse, France, September 2021
  • "Who plays Gandalf in LOTR?" - Natural Language Processing on structured data, PhD students seminar at LPSM, Sorbonne Université, Paris, France, May 2021
  • Algorithms and Software for Custom Digital Net Constructions, MCQMC 2020, online, August 2020 (YouTube video)
  • How to beat randomness? A Quasi-Monte Carlo methods overview, PROWLER.io, Cambridge, UK, November 2019
  • Deep learning beyond the hype, Corps des Mines, November 2019

Other activities

  • 2022-2023:
    • Organizer of NeurIPS@Paris 2022, a 200-person meetup before NeurIPS 2022
    • Organizer of the session "Differential equations and machine learning" at ICSDS 2022
    • Organizer of the PhD seminar at LPSM
    • Organizer of the PhD seminar at Corps des Mines
  • 2021-2022:
    • Organizer of NeurIPS@Paris 2021, a 150-person meetup during NeurIPS 2021
    • Organizer of the PhD seminar at Corps des Mines
  • 2020-2021:
    • Organizer of the research seminar at Corps des Mines
  • Reviewer for journals (JASA, the Annals of Statistics, Information and Inference) and conferences (NeurIPS 2023, ICLR 2024). Top reviewer at NeurIPS 2023.