Dr. Ran Cheng, the founder of the Evolving Machine Intelligence (EMI) Group, is a tenured Associate Professor at the Southern University of Science and Technology (SUSTech), China. He received his PhD in computer science from the University of Surrey, UK, in 2016.
His research interests lie at the intersection of evolutionary computation and other major branches of AI, such as statistical learning and deep learning, with the aim of providing end-to-end solutions to optimization and modeling problems in scientific research and engineering applications.
He is the Founding Chair of the IEEE Computational Intelligence Society (CIS) Shenzhen Chapter and of the IEEE Symposium on Model Based Evolutionary Algorithms (IEEE MBEA). He serves as an Associate Editor or Editorial Board Member for several journals, including the IEEE Transactions on Evolutionary Computation, the IEEE Transactions on Cognitive and Developmental Systems, and the IEEE Transactions on Artificial Intelligence. He is a recipient of the IEEE Transactions on Evolutionary Computation Outstanding Paper Awards (2018, 2021), the IEEE CIS Outstanding PhD Dissertation Award (2019), and the IEEE Computational Intelligence Magazine Outstanding Paper Award (2020). He is a Senior Member of IEEE.
Download my resumé.
PhD, Computer Science, 2013 - 2016
University of Surrey, UK
Postgraduate, Computer Science and Technology, 2010 - 2012
Zhejiang University, China
BEng, Computer Science and Technology, 2006 - 2010
Northeastern University, China
Despite the remarkable success of convolutional neural networks (CNNs) in computer vision, manually designing a CNN remains time-consuming and error-prone. Among the various neural architecture search (NAS) methods that aim to automate the design of high-performance CNNs, differentiable NAS and population-based NAS are attracting increasing interest due to their distinct characteristics. To benefit from the merits of both while overcoming their deficiencies, this work proposes a novel NAS method, RelativeNAS. As the key to efficient search, RelativeNAS performs joint learning between fast learners (i.e., decoded networks with relatively lower loss values) and slow learners in a pairwise manner. Moreover, since RelativeNAS only requires low-fidelity performance estimation to distinguish each pair of a fast learner and a slow learner, it saves considerable computation cost when training candidate architectures. The proposed RelativeNAS brings several unique advantages: 1) it achieves state-of-the-art performance on ImageNet with a top-1 error rate of 24.88%, outperforming DARTS and AmoebaNet-B by 1.82% and 1.12%, respectively; 2) it spends only 9 hours with a single 1080Ti GPU to obtain the discovered cells, i.e., 3.75x and 7875x faster than DARTS and AmoebaNet, respectively; and 3) the discovered cells obtained on CIFAR-10 can be directly transferred to object detection, semantic segmentation, and keypoint detection, yielding competitive results of 73.1% mAP on PASCAL VOC, 78.7% mIoU on Cityscapes, and 68.5% AP on MSCOCO, respectively. The implementation of RelativeNAS is available at https://github.com/EMI-Group/RelativeNAS.
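To make the pairwise search idea concrete, below is a minimal, self-contained sketch of the "slow learner follows fast learner" update described in the abstract. All names and values here (`POP_SIZE`, `DIM`, `estimate_loss`, the random step size) are illustrative assumptions rather than the paper's exact formulation; in the actual method, each vector would be decoded into a network and briefly trained to obtain its low-fidelity loss estimate.

```python
import random

import numpy as np

POP_SIZE = 20  # number of encoded architectures (hypothetical value)
DIM = 32       # length of the continuous architecture encoding (assumed)


def estimate_loss(vector: np.ndarray) -> float:
    """Low-fidelity performance estimation (placeholder).

    In RelativeNAS this would decode `vector` into a network and train it
    briefly; here a toy surrogate keeps the sketch runnable.
    """
    return float(np.sum((vector - 0.5) ** 2))


# Initialize a population of continuous architecture encodings in [0, 1]^DIM.
population = [np.random.rand(DIM) for _ in range(POP_SIZE)]

for generation in range(10):
    random.shuffle(population)
    next_population = []
    # Pair up the population; within each pair, the network with the lower
    # estimated loss is the fast learner and the other is the slow learner.
    for a, b in zip(population[::2], population[1::2]):
        fast, slow = (a, b) if estimate_loss(a) <= estimate_loss(b) else (b, a)
        # Joint-learning step: the slow learner is pulled toward the fast one
        # by a randomly weighted difference vector (assumed update rule).
        step = np.random.rand(DIM) * (fast - slow)
        updated_slow = np.clip(slow + step, 0.0, 1.0)
        next_population.extend([fast, updated_slow])
    population = next_population

best = min(population, key=estimate_loss)
print("best estimated loss:", estimate_loss(best))
```

Because only the relative ordering within each pair matters, a rough loss estimate suffices to drive the search, which is what keeps the per-candidate training cost low.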