
by AK and the research community

Apr 21

Mesa: A Memory-saving Training Framework for Transformers

There has been an explosion of interest in designing high-performance Transformers. While Transformers have delivered significant performance improvements, training such networks is extremely memory intensive because all intermediate activations needed for gradient computation during backpropagation must be stored, especially for long sequences. To this end, we present Mesa, a memory-saving training framework for Transformers. Specifically, Mesa uses exact activations during the forward pass while storing a low-precision version of the activations to reduce memory consumption during training. The low-precision activations are then dequantized during backpropagation to compute gradients. In addition, to address the heterogeneous activation distributions in the multi-head self-attention layers, we propose a head-wise activation quantization strategy, which quantizes activations based on the statistics of each head to minimize the approximation error. To further boost training efficiency, we learn the quantization parameters using running estimates. More importantly, by reinvesting the saved memory to employ a larger batch size or scale up the model size, we can further improve performance under constrained computational resources. Extensive experiments on ImageNet, CIFAR-100 and ADE20K demonstrate that Mesa can achieve flexible memory savings (up to 50%) during training while delivering comparable or even better performance. Code is available at https://github.com/ziplab/Mesa.
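The core mechanism — store activations in low precision with per-head statistics, dequantize them for the backward pass — can be sketched in NumPy. This is an illustrative simplification only: Mesa's actual implementation learns the quantization parameters via running estimates inside custom autograd functions, whereas here the min/max statistics are recomputed per call.

```python
import numpy as np

def quantize_per_head(x, num_bits=8):
    """Head-wise activation quantization (illustrative sketch).

    x: activations of shape (heads, tokens, dim). Each head gets its own
    scale and zero point from that head's min/max, so heads with very
    different activation ranges are not forced to share one range.
    """
    qmax = 2 ** num_bits - 1
    q = np.empty_like(x, dtype=np.uint8)
    scales, zeros = [], []
    for h in range(x.shape[0]):
        lo, hi = x[h].min(), x[h].max()
        scale = (hi - lo) / qmax if hi > lo else 1.0
        q[h] = np.round((x[h] - lo) / scale).astype(np.uint8)
        scales.append(scale)
        zeros.append(lo)
    return q, np.array(scales), np.array(zeros)

def dequantize_per_head(q, scales, zeros):
    """Recover approximate activations for gradient computation."""
    return q.astype(np.float32) * scales[:, None, None] + zeros[:, None, None]
```

Storing `uint8` instead of `float32` cuts stored-activation memory by 4x for the quantized tensors; the overall training-memory saving the paper reports is up to 50%, since optimizer state and other buffers stay in full precision.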

Investigating cannibalistic millisecond pulsar binaries using MESA: New constraints from pulsar spin and mass evolution

Compact binary millisecond pulsars (MSPs) with orbital periods ≲ 1 d are key to understanding binary evolution involving massive neutron stars (NSs). Due to the ablation of the companion by the rapidly spinning pulsar, these systems are also known as spiders and are categorized into two main branches: redbacks (RBs; companion mass in the range 0.1–0.5 M_⊙) and black widows (BWs; companion mass ≲ 0.1 M_⊙). We present models of low- and intermediate-mass X-ray binaries and compare them with observations of Galactic spiders (including the presence or absence of hydrogen lines in their optical spectra), and we constrain and quantify the interaction between the pulsar and the companion. Using MESA, we mapped the allowed initial parameter space. For the first time in MESA, we also included the detailed evolution of the pulsar spin and modeled the irradiation of the companion by the pulsar wind. Efficient mass accretion onto the NS (at least 70% of the transferred mass is accreted) with an X-ray irradiated disk, followed by strong irradiation of the companion, can explain most of the properties of the observed spiders. Our RB evolutionary tracks continue into the BW regime, connecting the two branches of spiders. Our models explain the lack of hydrogen in some observed BWs with ultra-light companions. During accretion-induced spin-up, the mass required to spin up an NS to sub-millisecond periods is high enough to collapse it into a black hole. Finally, after analyzing the formation of RB-like spiders with giant companions and orbital periods of several days (huntsmen), we conclude that they are unlikely to produce super-massive NSs (maximum accreted mass ≲ 0.5 M_⊙). Cannibalistic MSP binary formation depends heavily on the interplay between accretion onto the pulsar and pulsar wind irradiation.
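The accretion-efficiency constraint above is simple bookkeeping: the NS gains at least ~70% of whatever mass the companion transfers. A minimal sketch (purely illustrative arithmetic, not a MESA model; the 70% floor is the value quoted in the abstract):

```python
def accreted_mass(m_transferred, efficiency=0.7):
    """Mass gained by the neutron star, in the same units as m_transferred.

    efficiency: fraction of the transferred mass retained by the NS; the
    models discussed require at least ~0.7 to reproduce observed spiders.
    """
    return efficiency * m_transferred
```

For a huntsman-like system, even transferring ~0.7 M_⊙ at this efficiency gives an accreted mass near the ≲ 0.5 M_⊙ ceiling quoted above, which is why such systems are unlikely to build super-massive NSs.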

Overview and Evaluation of Sound Event Localization and Detection in DCASE 2019

Sound event localization and detection is a novel area of research that emerged from the combined interest in analyzing the acoustic scene in terms of the spatial and temporal activity of sounds of interest. This paper presents an overview of the first international evaluation on sound event localization and detection, organized as a task of the DCASE 2019 Challenge. A large-scale realistic dataset of spatialized sound events was generated for the challenge, to be used for training learning-based approaches and for evaluating submissions on an unlabeled subset. The overview presents in detail how the systems were evaluated and ranked, along with the characteristics of the best-performing systems. Common strategies in terms of input features, model architectures, training approaches, exploitation of prior knowledge, and data augmentation are discussed. Since ranking in the challenge was based on evaluating localization and event classification performance individually, part of the overview focuses on presenting metrics for the joint measurement of the two, together with a reevaluation of submissions using these new metrics. The new analysis reveals submissions that performed better on the joint task of detecting the correct type of event close to its original location than some submissions that were ranked higher in the challenge. Consequently, the ranking of submissions that performed strongly when evaluated separately on detection or localization, but not jointly on both, was negatively affected.
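The joint-metric idea described above — a prediction counts only if the event class is correct and its estimated direction is close to the true one — can be sketched as follows. This is a hedged illustration, not the challenge's official metric code; the 20° angular threshold and the (class, azimuth, elevation) tuple format are assumptions for the example.

```python
import math

def joint_true_positive(pred, ref, max_angle_deg=20.0):
    """True only if the class matches AND the predicted direction of
    arrival lies within max_angle_deg of the reference direction.

    pred/ref: (class_label, azimuth_deg, elevation_deg) tuples
    (assumed format for this sketch).
    """
    if pred[0] != ref[0]:
        return False
    # Great-circle angular distance between the two directions on the sphere.
    az1, el1, az2, el2 = map(math.radians, (pred[1], pred[2], ref[1], ref[2]))
    cos_d = (math.sin(el1) * math.sin(el2)
             + math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))
    return angle <= max_angle_deg
```

Under such a joint criterion, a system that classifies well but localizes poorly (or vice versa) loses true positives it would keep under separate detection and localization scoring, which is exactly the ranking shift the reevaluation reveals.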