DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures

Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun; The European Conference on Computer Vision (ECCV), 2018, pp. 517-531

Abstract


Recent breakthroughs in Neural Architecture Search (NAS) have achieved state-of-the-art performance in applications such as image classification and language modeling. However, these techniques typically ignore device-related objectives such as inference time, memory usage, and power consumption. Optimizing neural architectures for device-related objectives is crucial for deploying deep networks on portable devices with limited computing resources. We propose DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures, optimizing for both device-related (e.g., inference time and memory usage) and device-agnostic (e.g., accuracy and model size) objectives. DPP-Net employs a compact search space inspired by current state-of-the-art mobile CNNs, and further improves search efficiency by adopting progressive search (Liu et al. 2017). Experimental results on CIFAR-10 demonstrate the effectiveness of Pareto-optimal networks found by DPP-Net for three different devices: (1) a workstation with a Titan X GPU, (2) an NVIDIA Jetson TX1 embedded system, and (3) a mobile phone with an ARM Cortex-A53. Compared to CondenseNet and NASNet (Mobile), DPP-Net achieves better performance: higher accuracy and shorter inference time on various devices. Additional experiments show that models found by DPP-Net also achieve considerably good performance on ImageNet.
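
For readers unfamiliar with Pareto-optimality in this multi-objective setting, the following is a minimal sketch (not the paper's implementation) of how a Pareto front can be extracted from a pool of candidate architectures, each evaluated on several objectives. The `Candidate` fields, names, and numeric values are hypothetical; DPP-Net's actual objectives include accuracy (to maximize) and device metrics such as inference time and memory usage (to minimize).

```python
# Illustrative sketch only: Pareto-front selection over hypothetical
# per-architecture measurements. All objectives are framed as "lower is better".

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    name: str
    error: float       # 1 - accuracy, lower is better
    latency_ms: float  # device inference time, lower is better
    memory_mb: float   # device memory usage, lower is better

    def objectives(self) -> Tuple[float, float, float]:
        return (self.error, self.latency_ms, self.memory_mb)


def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is no worse than `b` on every objective and strictly better on at least one."""
    ao, bo = a.objectives(), b.objectives()
    return all(x <= y for x, y in zip(ao, bo)) and any(x < y for x, y in zip(ao, bo))


def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]


if __name__ == "__main__":
    # Hypothetical architectures evaluated on one target device.
    pool = [
        Candidate("arch-A", error=0.045, latency_ms=12.0, memory_mb=45.0),
        Candidate("arch-B", error=0.050, latency_ms=8.0,  memory_mb=40.0),
        Candidate("arch-C", error=0.052, latency_ms=15.0, memory_mb=60.0),  # dominated by arch-A
    ]
    for c in pareto_front(pool):
        print(c.name, c.objectives())
```

In a progressive search loop, a filter like `pareto_front` would be applied at each step so that only non-dominated candidates are carried forward for further expansion and evaluation; how DPP-Net integrates this with its surrogate model and device measurements is described in the paper itself.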

Related Material


@InProceedings{Dong_2018_ECCV,
author = {Dong, Jin-Dong and Cheng, An-Chieh and Juan, Da-Cheng and Wei, Wei and Sun, Min},
title = {DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}