When: 9am PT, noon ET, 6pm CET
How: http://MLclub.net
Who:
- Yuan-Sen Ting (IAS)
- Kim Venn (Victoria)
- Keith Hawkins (Texas)
- David W. Hogg (NYU)
Title: “Will Machine Learning Solve the Analysis of Stellar Spectroscopy?”
Abstract: An online debate on the benefits and downsides of Machine Learning in stellar spectroscopy.
Who:
- François Lanusse (CNRS; CEA Saclay)
- Marc Huertas-Company (U. Paris; IAC)
- Brice Ménard (Johns Hopkins)
- J. Xavier Prochaska (UC Santa Cruz, AAII)
Title: “What is Deep Learning Teaching Astronomy?”
Abstract: The esteemed organizers of ML Club for Astronomy will discuss their perspectives on the positive and negative impacts of deep learning algorithms in Astronomy. Questions from the audience will be entertained.
Who: Sebastian Wetzel (Perimeter Institute for Theoretical Physics, Canada)
Title: “Siamese Neural Networks Learn Symmetry Invariants and Conserved Quantities”
Abstract: In this talk, we discuss Siamese Neural Networks (SNNs) for similarity detection and apply them to examples in physics. These examples include special relativity, electromagnetism, and motion in a gravitational potential. The SNNs learn to identify data points belonging to the same events, field configurations, or trajectories of motion. In the process of learning which data points belong to the same event or field configuration, these SNNs also learn the relevant symmetry invariants and conserved quantities, which can be revealed by interpreting the latent space of the SNNs.
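As a concrete illustration of the setup in the abstract, below is a minimal sketch of a Siamese pair: one shared encoder applied to both inputs, trained with a contrastive loss so that pairs from the same event land close together in latent space. The toy task, names, and architecture here are assumptions for illustration, not the speaker's code.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Shared encoder applied to both members of a Siamese pair."""
        def __init__(self, in_dim=4, latent_dim=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, latent_dim),
            )
        def forward(self, x):
            return self.net(x)

    encoder = Encoder()

    def siamese_loss(xa, xb, same, margin=1.0):
        """Contrastive loss: pull latents together for pairs from the
        same event/trajectory, push them apart (up to a margin) otherwise."""
        d = (encoder(xa) - encoder(xb)).pow(2).sum(dim=1)
        return torch.where(same, d, (margin - d.sqrt()).clamp(min=0).pow(2)).mean()

    # Toy usage: two 4-vectors count as the "same event" if they share the
    # Lorentz invariant t^2 - x^2 - y^2 - z^2; after training, the 1D latent
    # tends to be a function of exactly this invariant, which is how the
    # invariant can be read off the latent space.
    xa, xb = torch.randn(8, 4), torch.randn(8, 4)
    same = torch.randint(0, 2, (8,), dtype=torch.bool)
    loss = siamese_loss(xa, xb, same)
    loss.backward()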
Who: Jared Kaplan (Johns Hopkins / OpenAI)
Title: “GPT-3, the most powerful language model, and neural scaling laws”
The work is based on this paper:
Who: François Charton (Facebook) and Amaury Hayat (ParisTech, Rutgers)
Title: “Learning maths from examples with deep language models”
Abstract: In this presentation we show how to use deep language models to solve problems related to differential equations, namely solving ODEs and deriving properties of differential systems such as local stability or controllability. Our models have no built-in knowledge of mathematics and learn only through examples, yet they achieve high performance on advanced problems.
The work is based on these two papers:
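For readers new to this line of work, the models treat mathematics as a translation task: expressions are serialized into token sequences (the papers use prefix, i.e. Polish, notation) and fed to a standard seq2seq transformer trained on (problem, solution) pairs. A minimal sketch of that serialization step, with an assumed tuple-based expression-tree format:

    def to_prefix(expr):
        """Flatten a nested (op, arg, ...) tuple into prefix-notation tokens."""
        if isinstance(expr, tuple):
            op, *args = expr
            return [op] + [tok for a in args for tok in to_prefix(a)]
        return [str(expr)]

    # The ODE y'' + y = 0 as an expression tree, then as a token sequence.
    ode = ("eq", ("add", ("d2", "y", "x"), "y"), "0")
    print(to_prefix(ode))
    # ['eq', 'add', 'd2', 'y', 'x', 'y', '0']
    # The transformer then learns to map such problem tokens to the tokens
    # of a solution, e.g. C1*cos(x) + C2*sin(x).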
Presenter: J. Xavier Prochaska (UC Santa Cruz)
Title: A Deep-ish Dive into CapsNets
See mlclub.net for further details.
When: 9am PT
Title: SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
Abstract: SimCLR is a simple framework for contrastive learning of visual representations. It simplifies recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Bio: Ting Chen is a research scientist on the Google Brain team. His main research interests are representation learning, discrete structures, and generative modeling. He joined Google in 2019 after finishing his PhD at the University of California, Los Angeles.
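For reference, the heart of SimCLR's objective is the normalized temperature-scaled cross-entropy (NT-Xent) loss computed over the 2N augmented views in a batch. A minimal sketch of that loss (an illustration, not the authors' reference implementation):

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """NT-Xent loss for two augmented views z1, z2 of the same batch,
        each of shape [N, dim] (projection-head outputs)."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, dim]
        sim = z @ z.t() / temperature                       # cosine similarities
        sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
        n = z1.size(0)
        # The positive for row i is the other augmented view of the same image.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Usage: z1 and z2 come from two random augmentations (crop, color
    # jitter, blur) of the same image batch.
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    loss = nt_xent(z1, z2)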
Held Jointly with the Astrophysics Machine Learning Club — mlclub.net
When/where: May 6, 2020 at 9am PT; Zoom only — https://ucsc.zoom.us/j/272379932
Talk 1:
Title: Quasar continua predictions with neural spline flows
Speaker: David Reiman (UCSC)
Talk 2:
Title: QuasarNET
Speaker: James Farr (UCL)
Held Jointly with the Astrophysics Machine Learning Club — mlclub.net
When/where: April 8, 2020 at 12pm PT; Zoom only — https://ucsc.zoom.us/j/611217103
Title: Neural Networks with Euclidean Symmetry for Physical Sciences
Speaker: Tess E. Smidt (LBNL)
Abstract: Neural networks are built for specific data types, and assumptions about those data types are built into the operations of the neural network. For example, fully connected layers assume vector inputs are made of independent components, and 2D convolutional layers assume that the same features can occur anywhere in an image. In this talk, I show how to build neural networks for the data types of physical systems: geometry and geometric tensors, which transform predictably under rotation, translation, and inversion — the symmetries of 3D Euclidean space. This is traditionally a challenging representation to use with neural networks because coordinates are sensitive to 3D rotations and translations, and there is no canonical orientation for physical systems. I present a general neural network architecture that naturally handles 3D geometry and operates on the scalar, vector, and tensor fields that characterize physical systems. Our networks are locally equivariant to 3D rotations and translations at every layer. In this talk, I describe how the network achieves these equivariances and demonstrate the capabilities of our network on simple tasks. I also present applications of Euclidean neural networks to quantum chemistry and geometry generation using our Euclidean equivariant learning framework, e3nn.
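To make the equivariance property concrete: a layer f is rotation-equivariant if f(Rx) = R f(x) for every rotation R. The toy layer below gates each 3D vector feature by a function of its rotation-invariant norm, so it satisfies this by construction; it is an illustrative stand-in, not the e3nn architecture itself.

    import numpy as np

    def equivariant_layer(x, w=(0.7, -0.2)):
        """x: array of shape [n, 3] of 3D vector features. The gate depends
        only on rotation invariants (the norms), so the output rotates
        exactly as the input does."""
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        gate = w[0] + w[1] * norms
        return gate * x

    # Numerical check of f(R x) = R f(x) with a random proper rotation R.
    rng = np.random.default_rng(0)
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    R = q * np.sign(np.linalg.det(q))  # force det(R) = +1
    x = rng.normal(size=(5, 3))
    assert np.allclose(equivariant_layer(x @ R.T), equivariant_layer(x) @ R.T)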
When/where: March 18, 2020 at 12pm PT; Zoom only — https://ucsc.zoom.us/j/562952785
Title: Deep Learning for Predicting Domain Prices
Speaker: Jason Ansel, Distinguished Engineer at GoDaddy
Abstract: Learn how GoDaddy uses neural networks to predict the price of a domain name in the aftermarket. GoDaddy Domain Appraisals (GoValue) is available to millions of GoDaddy customers and provides estimated values to help both buyers and sellers more effectively price domain names. GoValue is 1.25x better at predicting past domain name sale prices than human experts.
This talk will explain the hybrid recurrent neural networks behind GoValue. It will discuss some of the practical aspects of scaling and deploying a sophisticated machine learning system. Finally, we will dive into recent research at GoDaddy that created a new neural network structure for outputting tighter prediction intervals than preexisting techniques.
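As background, a standard generic way to get prediction intervals out of a neural network is quantile regression with the pinball loss: two output heads trained at the 5th and 95th percentiles bracket a 90% interval. The sketch below shows that general technique under assumed names; it is not GoDaddy's method.

    import torch

    def pinball_loss(pred, target, quantile):
        """Pinball (quantile) loss; minimizing it drives `pred` toward the
        given conditional quantile of the target."""
        err = target - pred
        return torch.maximum(quantile * err, (quantile - 1) * err).mean()

    # Two heads of a price model, trained jointly at q=0.05 and q=0.95,
    # yield a [low, high] prediction interval for each domain price.
    pred_low = torch.randn(16, requires_grad=True)
    pred_high = torch.randn(16, requires_grad=True)
    target = torch.randn(16)
    loss = pinball_loss(pred_low, target, 0.05) + pinball_loss(pred_high, target, 0.95)
    loss.backward()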
Try GoDaddy Domain Appraisals for yourself:
https://www.godaddy.com/domain-value-appraisal