Meng Li's Personal Homepage
READ: Reliability-Enhanced Accelerator Dataflow Optimization using Critical Input Pattern Reduction
Zuodong Zhang, Meng Li, Yibo Lin, Runsheng Wang, Ru Huang
BiT: Robustly Binarized Multi-distilled Transformer
Zechun Liu, Barlas Oguz, Aasish Pappu, Lin Xiao, Scott Yih, Meng Li, Raghuraman Krishnamoorthi, Yashar Mehdad
Depth Shrink: Empowering Hardware-Friendly Shallow Neural Networks
Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
Multi-Scale High-Resolution Vision Transformer for Semantic Segmentation
Vision Transformers (ViTs) have emerged with superior performance on computer vision tasks compared to convolutional neural network …
Jiaqi Gu, Hyoukjun Kwon, Dilin Wang, Wei Ye, Meng Li, Yu-Hsin Chen, Liangzhen Lai, Vikas Chandra, David Pan
DNA: Differentiable Network-Accelerator Co-Search
Powerful yet complex deep neural networks (DNNs) have fueled a booming demand for efficient DNN solutions to bring DNN-powered …
Yongan Zhang, Yonggan Fu, Weiwen Jiang, Chaojian Li, Haoran You, Meng Li, Vikas Chandra, Yingyan Lin
AlphaNet: Improved Training of Supernets with Alpha-Divergence
Weight-sharing neural architecture search (NAS) is an effective technique for automating efficient neural architecture design. …
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling
Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and …
Dilin Wang, Meng Li, Chengyue Gong, Vikas Chandra
Improving Efficiency in Neural Network Accelerator Using Operands Hamming Distance Optimization
Neural network accelerator is a key enabler for the on-device AI inference, for which energy efficiency is an important metric. The …
Meng Li, Yilei Li, Vikas Chandra
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems. In this paper, we empirically …
Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
Co-Exploration of Neural Architectures and Heterogeneous ASIC Accelerator Designs Targeting Multiple Tasks
Neural architecture search (NAS) has shown great promise in designing state-of-the-art (SOTA) models that are both accurate and …
Lei Yang, Zheyu Yan, Meng Li, Hyoukjun Kwon, Liangzhen Lai, Tushar Krishna, Vikas Chandra, Weiwen Jiang, Yiyu Shi