Vision–Sensor–Electrical Integration: Literature Synthesis and System Guidance
Research Article
Open Access
CC BY


Mingyi Li 1*
1 Computer Science, Massey University, Auckland, New Zealand, 0630
*Corresponding author: 13393119453@163.com
Published on 5 November 2025
ACE Vol.203
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-80590-515-8
ISBN (Online): 978-1-80590-516-5

Abstract

Multimodal perception improves robustness in industrial inspection and mobile robotics by fusing complementary signals when individual modalities falter under low illumination, specular reflections, motion blur, or weak texture. This article synthesizes evidence from peer-reviewed studies and normalizes metrics across representative datasets to characterize what RGB, depth, thermal, LiDAR, radar, and IMU sensing achieve alone and what they achieve in combination. Using MVTec AD, KAIST, TUM VI, and nuScenes as anchors, the synthesis compares miss rate, trajectory error, 3D detection quality, and bird's-eye-view map fidelity while accounting for latency, power integrity, electromagnetic compatibility, bandwidth, and maintainability. Converging findings show that color-thermal fusion markedly reduces pedestrian-detection failures under low illumination, that tightly coupled visual-inertial systems curtail drift relative to purely visual odometry, and that bird's-eye-view fusion improves 3D detection and mapping over camera-only or LiDAR-only baselines. The analysis also identifies the system prerequisites that make these gains reproducible (precise timing, disciplined calibration, sound power and electromagnetic practice, and sufficient bandwidth) and concludes with implementation guidelines to help transfer benchmark-reported benefits to factory floors and field robots.
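As a concrete illustration of the trajectory-error comparison summarized above, the minimal Python sketch below first associates two pose streams by nearest timestamp (the kind of timing discipline the synthesis treats as a prerequisite) and then computes absolute trajectory error (ATE RMSE) after a rigid point-set alignment. The function names, the 20 ms association gate, and the array shapes are illustrative assumptions, not the official evaluation code of TUM VI or any other benchmark cited here.

import numpy as np

def associate_by_timestamp(t_ref, t_query, max_dt=0.02):
    # Pair each reference timestamp with the nearest query timestamp,
    # keeping only pairs closer than max_dt seconds (assumed 20 ms gate).
    # Both timestamp arrays are assumed to be sorted in ascending order.
    idx = np.searchsorted(t_query, t_ref)
    idx = np.clip(idx, 1, len(t_query) - 1)
    left, right = t_query[idx - 1], t_query[idx]
    nearest = np.where(np.abs(t_ref - left) < np.abs(t_ref - right), idx - 1, idx)
    keep = np.abs(t_ref - t_query[nearest]) <= max_dt
    return np.flatnonzero(keep), nearest[keep]

def ate_rmse(p_gt, p_est):
    # Absolute trajectory error (RMSE) after a rigid Kabsch/Umeyama-style
    # alignment of associated (N, 3) ground-truth and estimated positions.
    mu_gt, mu_est = p_gt.mean(axis=0), p_est.mean(axis=0)
    H = (p_est - mu_est).T @ (p_gt - mu_gt)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # rotation mapping estimate onto ground truth
    t = mu_gt - R @ mu_est
    err = p_gt - (p_est @ R.T + t)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

On associated position arrays, ate_rmse returns a single RMSE value (in metres when the inputs are metric), which is the quantity usually tabulated when contrasting visual-inertial pipelines with purely visual odometry.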

Keywords:

Machine vision, Sensor fusion, Industrial inspection, Electrical timing, Mobile robotics


References

[1]. Bergmann, P., Fauser, M., Sattlegger, D., & Steger, C. (2019). MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 9592–9600).

[2]. Geiger, A., Lenz, P., & Urtasun, R. (2012). Are we ready for autonomous driving? The KITTI Vision Benchmark Suite. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[3]. Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., et al. (2020). nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[4]. Schubert, D., Goll, D., Demmel, N., Usenko, V., Stückler, J., & Cremers, D. (2018). The TUM VI benchmark for evaluating visual-inertial odometry. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[5]. Hwang, S., Park, J., Kim, N., Choi, Y., & Kweon, I. S. (2015). Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1037–1045).

[6]. Leutenegger, S., Lynen, S., Bosse, M., Siegwart, R., & Furgale, P. (2015). Keyframe-based visual-inertial odometry using nonlinear optimization. International Journal of Robotics Research, 34(3), 314–334. https://doi.org/10.1177/0278364914564364

[7]. Qin, T., Li, P., & Shen, S. (2018). VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4), 1004–1020. https://doi.org/10.1109/TRO.2018.2851591

[8]. Furgale, P., Rehder, J., & Siegwart, R. (2013). Unified temporal and spatial calibration for multi-sensor systems. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[9]. Padilla, R., Netto, S. L., & da Silva, E. A. B. (2021). A comparative analysis of object detection metrics with a companion open-source toolkit. Electronics, 10(3), 279. https://doi.org/10.3390/electronics10030279

[10]. Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D., & Han, S. (2023). BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). arXiv preprint arXiv:2205.13542.

[11]. Khaleghi, B., Khamis, A., Karray, F. O., & Razavi, S. N. (2013). Multisensor data fusion: A review of the state-of-the-art. Information Fusion, 14(1), 28–44. https://doi.org/10.1016/j.inffus.2012.07.003

[12]. Yeong, D. J., Velasco-Hernández, G., Barry, J., & Walsh, J. (2021). Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors, 21(6), 2140. https://doi.org/10.3390/s21062140

[13]. MVTec AD dataset homepage. (n.d.). Retrieved from https://www.mvtec.com/company/research/datasets/mvtec-ad

[14]. IEEE Standards Association. (2019). IEEE Std 1588-2019: IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems (Precision Time Protocol).

Cite this article

Li, M. (2025). Vision–Sensor–Electrical Integration: Literature Synthesis and System Guidance. Applied and Computational Engineering, 203, 34–40.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

About volume

Volume title: Proceedings of CONF-SPML 2026 Symposium: The 2nd Neural Computing and Applications Workshop 2025

ISBN: 978-1-80590-515-8 (Print) / 978-1-80590-516-5 (Online)
Editor: Marwan Omar, Guozheng Rao
Conference date: 21 December 2025
Series: Applied and Computational Engineering
Volume number: Vol.203
ISSN: 2755-2721 (Print) / 2755-273X (Online)