Macro benchmarking edge devices using enhanced super-resolution generative adversarial networks (ESRGANs)

Springer Science and Business Media LLC - Volume 79 - Pages 5360-5373 - 2022
Jing-Ru C. Cheng1, Corwin Stanford2, Steven R. Glandon1, Anthony L. Lam1, Warren R. Williams1
1Information Technology Lab. (ITL), Engineer Research and Development Center (ERDC), U.S. Army Corps of Engineers, Vicksburg, USA
2Computational and Data-Enabled Science and Engineering, Jackson State University, Jackson, USA

Abstract

In standard machine learning implementations, training and inference take place on servers located remotely from where data is gathered. With the advent of the Internet of Things (IoT), the groundwork has been laid to shift half of that computing burden (inference) closer to where data is gathered. This paradigm shift toward edge computing can significantly decrease the latency and cost of these tasks. Many small, powerful devices with the potential to fulfill that goal have been developed in recent years. In this paper, we analyze two such devices, the NVIDIA Jetson AGX Xavier Developer Kit and the Microsoft Azure Stack Edge Pro (2 GPUs). In addition, an NVIDIA DGX-1 system containerized in a ruggedized case is also used to run the inference model at the edge. For comparison, the performance of these devices is measured against more common inferencing platforms: a laptop, a desktop, and a high performance computing (HPC) system. The inferencing model used for testing is the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), which was developed using techniques borrowed primarily from other GAN designs, most notably SRGANs and Relativistic average GANs (RaGANs), along with some novel techniques. The metrics chosen for benchmarking were inferencing time, GPU power consumption, and GPU temperature. We found that inferencing with ESRGANs was approximately 10 to 20 times slower on the Jetson edge device, but used approximately 100 to 300 times less power and ran approximately 2 times cooler than any of the other devices tested. On the Azure device, ESRGAN inferencing performed very similarly to the more traditional platforms: slightly slower speeds and equivalent temperatures, but slightly lower power consumption.
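The three benchmarking metrics named in the abstract (inference time, GPU power draw, GPU temperature) can be collected on NVIDIA hardware by timing the inference call and polling `nvidia-smi` around it. The sketch below is illustrative only and is not the authors' harness; the `run_inference` callable and run count are hypothetical placeholders, and it assumes a CUDA-capable host where `nvidia-smi` is on the PATH.

```python
import subprocess
import time


def parse_gpu_sample(csv_line):
    """Parse one line of `nvidia-smi --query-gpu=power.draw,temperature.gpu
    --format=csv,noheader,nounits` output into (watts, celsius)."""
    watts, celsius = (field.strip() for field in csv_line.split(","))
    return float(watts), float(celsius)


def benchmark(run_inference, n_runs=10):
    """Time an inference callable and sample GPU power/temperature after
    each run. `run_inference` is a hypothetical zero-argument callable that
    performs one forward pass (e.g. one ESRGAN upscale)."""
    times, samples = [], []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        run_inference()
        times.append(time.perf_counter() - t0)
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=power.draw,temperature.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        # One CSV line per GPU; keep the first device's reading here.
        samples.append(parse_gpu_sample(out.splitlines()[0]))
    return times, samples
```

Sampling only once per run, as above, gives a coarse picture; a production harness would poll power and temperature on a background thread at a fixed interval and average over the run.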
