Computer Vision

Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training
AlphaNet: Improved Training of Supernets with Alpha-Divergence
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks
Federated Learning with Non-IID Data
PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training