Machine learning has achieved many successes over the past decades, spanning domains such as game playing, protein folding, and competitive programming, among many others. However, while there have been major efforts in building programming techniques and frameworks for machine learning, there has been very little study of general language design for machine learning programming.
We pursue such a study in this talk, focusing on choice-based learning, particularly where choices are driven by optimization. This includes widely used decision-making models and techniques (e.g., Markov decision processes and gradient descent) that provide frameworks for describing systems in terms of choices (e.g., actions or parameters) and their resulting feedback in the form of losses (or, dually, rewards).
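To make the choice-and-feedback view concrete, here is a minimal sketch in Haskell of gradient descent phrased this way: each step is a choice of parameter update, and the loss supplies the feedback. The names `gradStep` and `optimized` are illustrative and not part of the talk's library.

```haskell
-- One descent step: given a learning rate and a loss function,
-- choose the next parameter by moving against a finite-difference
-- estimate of the gradient (the loss is the feedback on the choice).
gradStep :: Double -> (Double -> Double) -> Double -> Double
gradStep lr loss w = w - lr * grad
  where
    eps  = 1e-6
    grad = (loss (w + eps) - loss (w - eps)) / (2 * eps)

-- Iterating the choice 100 times on the loss (w - 3)^2 from w = 0
-- converges to w ~ 3, the minimizer of the loss.
optimized :: Double
optimized = iterate (gradStep 0.1 (\w -> (w - 3)^2)) 0 !! 100
```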
We propose and give evidence for the following thesis: languages for choice-based learning can be obtained by combining two paradigms: algebraic effects and handlers, and the selection monad. We provide a prototype implementation as a Haskell library and present a variety of programming examples of choice-based learning: stochastic gradient descent, hyperparameter tuning, generative adversarial networks, and reinforcement learning.
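For readers unfamiliar with the second ingredient, the following is a small sketch of the standard selection monad (in the sense of Escardó and Oliva), not the talk's actual library API: a value of type `Sel r a` chooses an `a` once it is told how each candidate will be scored.

```haskell
import Control.Monad (ap)
import Data.List (maximumBy)
import Data.Ord (comparing)

-- 'Sel r a' wraps a selection function: given a continuation that
-- scores each candidate 'a' with an 'r', it picks one 'a'.
newtype Sel r a = Sel { runSel :: (a -> r) -> a }

instance Functor (Sel r) where
  fmap f (Sel s) = Sel $ \k -> f (s (k . f))

instance Applicative (Sel r) where
  pure x = Sel $ \_ -> x
  (<*>)  = ap

instance Monad (Sel r) where
  Sel s >>= f = Sel $ \k ->
    let best a = runSel (f a) k   -- the result 'f a' would choose under k
    in best (s (k . best))        -- pick the 'a' whose chosen result scores highest

-- A selection that maximizes the score over a finite list of choices.
maximize :: Ord r => [a] -> Sel r a
maximize xs = Sel $ \k -> maximumBy (comparing k) xs

-- Composing choices: pick x and y from {0, 1, 2} to maximize x*y - x.
-- The monadic product performs backward induction and yields (2, 2).
example :: (Int, Int)
example = runSel (do x <- maximize [0, 1, 2]
                     y <- maximize [0, 1, 2]
                     pure (x, y))
                 (\(x, y) -> x * y - x)
```

The key point, which the bind makes visible, is that sequenced selections share the final score: each earlier choice is made in view of how later choices will respond, which is what lets optimization-driven choices compose.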
Sat 9 Sep (Pacific Time, US & Canada)

09:00 - 10:30 | Haskell: Keynote 2 at B - Fifth Avenue. Chair: Leonidas Lampropoulos (University of Maryland, College Park)

09:00 (60m, keynote): Haskell for choice-based learning. Ningning Xie (University of Toronto)