Weight-sharing neural architecture search (NAS) is an effective technique for automating efficient neural architecture design. Weight-sharing NAS builds a supernet that assembles all candidate architectures as its sub-networks and jointly trains the supernet with its sub-networks. The success of weight-sharing NAS hinges on 1) the design of the search space and 2) the supernet training strategy. In this talk, we discuss our recent progress on improving weight-sharing NAS by designing better search spaces and better supernet training algorithms, achieving state-of-the-art performance on various computer vision tasks.
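To make the weight-sharing idea concrete, the sketch below shows a minimal single-path supernet in PyTorch: each layer holds several candidate operators with shared weights, every architecture is a path through those operators, and each training step samples one sub-network uniformly and updates only the weights it uses. This is a generic illustration of the supernet training loop, not the specific search spaces or algorithms presented in the talk; all class names and hyperparameters here are placeholders.

```python
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer holding all candidate operators with shared weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # candidate op 0
            nn.Conv2d(channels, channels, 5, padding=2),  # candidate op 1
            nn.Identity(),                                # candidate op 2 (skip)
        ])

    def forward(self, x, choice):
        # Only the chosen operator runs, so only its weights are updated this step.
        return self.ops[choice](x)

class Supernet(nn.Module):
    """Toy supernet: each sub-network is one operator choice per layer."""
    def __init__(self, channels=16, depth=4, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.layers = nn.ModuleList(MixedLayer(channels) for _ in range(depth))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x, arch):
        x = self.stem(x)
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        x = x.mean(dim=(2, 3))  # global average pooling
        return self.head(x)

# One supernet training step: sample a sub-network, then update the shared
# weights along that path with an ordinary supervised loss.
model = Supernet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
images = torch.randn(8, 3, 32, 32)          # placeholder batch
labels = torch.randint(0, 10, (8,))

arch = [random.randrange(3) for _ in model.layers]  # uniform single-path sampling
loss = nn.functional.cross_entropy(model(images, arch), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

After training, candidate architectures are typically ranked by evaluating them with the shared weights (no retraining), which is what makes the weight-sharing approach efficient; the quality of that ranking is exactly what better search space design and better supernet training aim to improve.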