Meng Li's Homepage
Computer Vision
Stochastic Multivariate Universal-Radix Finite-State Machine: a Theoretically and Practically Elegant Nonlinear Function Approximator
Xincheng Feng, Guodong Shen, Jianhao Hu, Meng Li, Ngai Wong
Depth Shrink: Empowering Hardware-Friendly Shallow Neural Networks
Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation
Vision Transformers (ViTs) have emerged with superior performance on computer vision tasks compared to convolutional neural networks …
Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Pan
SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems
We design deep neural networks (DNNs) and corresponding networks’ splittings to distribute DNNs’ workload to camera sensors …
Xin Dong, Ziyun Li, Meng Li, Zhongnan Qu, Barbara De Salvo, Chiao Liu, Hsiang-Tsung Kung
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict-aware Supernet Training
Designing accurate and efficient vision transformers (ViTs) is a highly important but challenging task. Supernet-based one-shot neural …
Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, Vikas Chandra
AlphaNet: Improved Training of Supernets with Alpha-Divergence
Weight-sharing neural architecture search (NAS) is an effective technique for automating efficient neural architecture design. …
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and …
Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra
Improving Efficiency in Neural Network Accelerator Using Operands Hamming Distance Optimization
Neural network accelerators are a key enabler for on-device AI inference, for which energy efficiency is an important metric. The …
Meng Li, Yilei Li, Vikas Chandra
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems. In this paper, we empirically …
Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks
Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and …
Lei Yang, Zheyu Yan, Meng Li, Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra, Weiwen Jiang, Yiyu Shi