Image Reconstruction from Bernoulli-Dropped Observation Using U-Net
Research Article
Open Access
CC BY

Chunhei Huang 1*
1 Vanke Meisha Academy
*Corresponding author: huangjunxi@stu.vma.edu.cn
Published on 23 October 2025
TNS Vol.145
ISSN (Print): 2753-8818
ISSN (Online): 2753-8826
ISBN (Print): 978-1-80590-467-0
ISBN (Online): 978-1-80590-468-7

Abstract

Image reconstruction under random pixel loss plays a significant role in applications such as medical imaging, remote sensing, and lossy transmission. This paper studies the restoration of Bernoulli-dropped images, in which each pixel is retained with probability p and removed with probability (1-p). The task is modeled as a supervised learning problem using a compact U-Net (three encoder and three decoder stages) whose skip connections fuse multi-scale contextual information with fine spatial detail. The DIV2K dataset (800 images, converted to grayscale) is used: a random mask with retention rate p = 0.3 is applied to each image to generate an observed image paired with its original, and training minimizes the mean squared error loss. The results show that the model achieves relatively clear reconstructions from a single degraded input and preserves edge and texture information better than traditional baseline methods. Finally, the paper discusses the trade-offs involved in the network design, points out its limitations under extreme pixel loss, and outlines potential optimizations concerning improved loss functions, attention mechanisms, and extension to color images.
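The degradation model and training objective described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration (not the paper's actual code): `bernoulli_drop` simulates the random mask with retention rate p, and `mse` is the mean-squared-error loss the network is trained to minimize; the function names and the toy 8x8 image are illustrative assumptions.

```python
import numpy as np

def bernoulli_drop(image, p=0.3, seed=None):
    """Simulate Bernoulli pixel dropout: each pixel is kept with
    probability p and zeroed with probability 1 - p.

    Returns the observed (degraded) image and the binary mask."""
    rng = np.random.default_rng(seed)
    mask = (rng.random(image.shape) < p).astype(image.dtype)
    return image * mask, mask

def mse(pred, target):
    """Mean squared error, the training loss described in the paper."""
    return float(np.mean((pred - target) ** 2))

# Toy example: an 8x8 grayscale image with values in [0, 1].
clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)
observed, mask = bernoulli_drop(clean, p=0.3, seed=0)

print(mask.mean())           # close to 0.3 for large images
print(mse(observed, clean))  # the reconstruction drives this toward 0
```

During training, (observed, clean) pairs generated this way serve as input and target, so the U-Net learns to fill in the zeroed pixels from the surviving 30%.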

Keywords:

Deep learning, U-Net, Image Reconstruction, Bernoulli-dropped Image Reconstruction

Huang, C. (2025). Image Reconstruction from Bernoulli-Dropped Observation Using U-Net. Theoretical and Natural Science, 145, 8-16.

References

[1]. Quan, W.; Chen, J.; Liu, Y.; Yan, D.-M.; Wonka, P. Deep Learning-Based Image and Video Inpainting: A Survey. International Journal of Computer Vision, 2024 (accepted). arXiv: 2401.03395; doi: 10.48550/arXiv.2401.03395.

[2]. Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S.; Akbari, R. Deep Learning for Image Inpainting: A Survey. Pattern Recognition, 122: 108341, 2022. doi: 10.1016/j.patcog.2021.108341.

[3]. Aggarwal, H. K.; Mai, M. P.; Jacob, M. A Projection-Based Cascaded U-Net Model for Magnetic Resonance Image Reconstruction. IEEE Transactions on Medical Imaging, 40(5): 1370–1381, 2021. doi: 10.1109/TMI.2020.3047761.

[4]. Aghabiglou, A.; Eksioglu, E. M. Projection-Based Cascaded U-Net Model for MR Image Reconstruction. Computer Methods and Programs in Biomedicine, 207: 106151, 2021. doi: 10.1016/j.cmpb.2021.106151.

[5]. Liu, L.; Liu, Y. Load Image Inpainting: An Improved U-Net Based Load Missing Data Recovery Method. Applied Energy, 327: 119988, 2022. doi: 10.1016/j.apenergy.2022.119988.

[6]. Yu, Y.; Zhan, F.; Lu, S.; Pan, J.; Ma, F.; Xie, X.; Miao, C. WaveFill: A Wavelet-Based Generation Network for Image Inpainting. In: Proc. IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14114–14123, 2021. doi: 10.1109/ICCV48922.2021.01385.

[7]. Xiang, T.; Zhang, Z.; Wang, R.; Zhang, C.; Wang, Y. BiO-Net: Learning Recurrent Bi-directional Connections for Encoder–Decoder Architecture. Medical Image Analysis, 67: 101849, 2020. doi: 10.1016/j.media.2020.101849.

[8]. Xiang, T.; Zhang, C.; Liu, D.; Song, Y.; Huang, H.; Cai, W. BiO-Net: Learning Recurrent Bi-directional Connections for Encoder–Decoder Architecture. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, LNCS 12261, pp. 74–84. Springer, 2020. doi: 10.1007/978-3-030-59710-8_8.

[9]. Chen, Z.; Li, C.; Li, Y.; Li, S. Self-Attention in Reconstruction Bias U-Net for Semantic Segmentation of Building Footprints from High-Resolution Remote Sensing Images. Remote Sensing, 13(13): 2524, 2021. doi: 10.3390/rs13132524.

[10]. Timofte, R.; Agustsson, E.; Van Gool, L.; Yang, M.; Zhang, L.; Lim, B.; et al. NTIRE 2017 Challenge on Single Image Super-Resolution: Methods and Results. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1110–1121, 2017.

Cite this article

Huang, C. (2025). Image Reconstruction from Bernoulli-Dropped Observation Using U-Net. Theoretical and Natural Science, 145, 8-16.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

About volume

Volume title: Proceedings of CONF-CIAP 2026 Symposium: International Conference on Atomic Magnetometer and Applications

ISBN: 978-1-80590-467-0 (Print) / 978-1-80590-468-7 (Online)
Editors: Marwan Omar, Jixi Lu, Mao Ye
Conference date: 30 January 2026
Series: Theoretical and Natural Science
Volume number: Vol. 145
ISSN: 2753-8818 (Print) / 2753-8826 (Online)