Efficient AI

Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict-aware Supernet Training
DNA: Differentiable Network-Accelerator Co-Search
AlphaNet: Improved Training of Supernets with Alpha-Divergence
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Co-exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks