AAII Seminar: May 29, 2019

When/where:  E2-215 at 12pm

Presenter: Jaehoon Lee (Google Brain)

Title:  Everything you wanted to know about batch size (in neural net training) but were afraid to ask

Abstract: Recent hardware developments have made unprecedented amounts of data parallelism available for accelerating neural network training. Among the simplest ways to harness next-generation accelerators is to increase the batch size in standard mini-batch neural network training algorithms. In this work, we aim to experimentally characterize the effects of increasing the batch size on training time, as measured by the number of steps necessary to reach a goal out-of-sample error. Eventually, increasing the batch size will no longer reduce the number of training steps required, but the exact relationship between the batch size and how many training steps are necessary is of critical importance to practitioners, researchers, and hardware designers alike. We study how this relationship varies with the training algorithm, model, and data set, and find extremely large variation between workloads. Along the way, we reconcile disagreements in the literature on whether batch size affects model quality. Finally, we discuss the implications of our results for efforts to train neural networks much faster in the future.

Reference: https://arxiv.org/abs/1811.03600
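
As a rough illustration of the measurement described in the abstract (steps needed to reach a goal out-of-sample error as a function of batch size), the sketch below trains a toy logistic-regression model with mini-batch SGD and records the steps-to-target for several batch sizes. This is not the authors' code or experimental setup; the model, data, learning rate, and error target are all assumptions chosen purely for illustration.

```python
# Illustrative sketch (not the paper's code): count SGD steps needed to reach
# a target validation error at different batch sizes, on a toy problem.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (assumption: toy stand-in for a real workload).
n_train, n_val, d = 4000, 1000, 20
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = (X_train @ w_true + 0.5 * rng.normal(size=n_train) > 0).astype(float)
X_val = rng.normal(size=(n_val, d))
y_val = (X_val @ w_true > 0).astype(float)

def val_error(w):
    """Out-of-sample (validation) classification error."""
    preds = (X_val @ w > 0).astype(float)
    return np.mean(preds != y_val)

def steps_to_target(batch_size, lr=0.5, target_error=0.05, max_steps=20000):
    """Train with mini-batch SGD; return the number of steps to reach the goal error."""
    w = np.zeros(d)
    for step in range(1, max_steps + 1):
        idx = rng.integers(0, n_train, size=batch_size)
        Xb, yb = X_train[idx], y_train[idx]
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))   # sigmoid predictions
        grad = Xb.T @ (p - yb) / batch_size    # logistic-loss gradient
        w -= lr * grad
        if val_error(w) <= target_error:
            return step
    return max_steps  # did not reach the goal within the step budget

for bs in [1, 8, 64, 512]:
    print(f"batch size {bs:4d}: {steps_to_target(bs)} steps to target error")
```

On a toy problem like this, larger batches reduce the number of steps at first and then show diminishing returns, which is the qualitative pattern the talk examines across real workloads.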