Efficient AI

SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training
DNA: Differentiable Network-Accelerator Co-Search
AlphaNet: Improved Training of Supernets with Alpha-Divergence
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks
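
Several of these entries (AlphaNet, AttentiveNAS, NASViT) concern supernet training, in which sampled subnetworks are distilled in place from the full network. As a rough illustration of the alpha-divergence idea named in the AlphaNet title, here is a minimal PyTorch sketch: the function names and the alpha values are illustrative assumptions, not the paper's API or tuned defaults, and the paper's actual loss includes further stabilization not reproduced here.

```python
import torch
import torch.nn.functional as F

def alpha_divergence(p: torch.Tensor, q: torch.Tensor,
                     alpha: float, eps: float = 1e-8) -> torch.Tensor:
    """D_alpha(p || q) = (sum_i p_i^a * q_i^(1-a) - 1) / (a * (a - 1)).

    Valid for alpha outside {0, 1}; those limits recover the two KL directions.
    p and q are probability distributions over the last dimension.
    """
    inner = (p.clamp_min(eps) ** alpha) * (q.clamp_min(eps) ** (1.0 - alpha))
    return (inner.sum(dim=-1) - 1.0) / (alpha * (alpha - 1.0))

def adaptive_alpha_kd_loss(teacher_logits: torch.Tensor,
                           student_logits: torch.Tensor,
                           alpha_minus: float = -1.0,
                           alpha_plus: float = 2.0) -> torch.Tensor:
    """Take the larger of a negative-alpha and a positive-alpha divergence, so the
    student is penalized both for under- and over-estimating teacher uncertainty.
    alpha_minus / alpha_plus here are illustrative, hypothetical defaults.
    """
    p = F.softmax(teacher_logits, dim=-1).detach()  # teacher (supernet) not updated
    q = F.softmax(student_logits, dim=-1)           # student (sampled subnetwork)
    d = torch.maximum(alpha_divergence(p, q, alpha_minus),
                      alpha_divergence(p, q, alpha_plus))
    return d.mean()
```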