In photography, images may appear blurred for a number of reasons, including lens defects, camera motion, and incorrect camera focus. This work investigates the deblurring of images using deep learning techniques. The chosen approach is a stacked deep multi-patch hierarchical network. This makes it possible to examine how deep-learning-based processing of blurred images can be improved to generate better deblurred images. The project also investigates whether the deblurring process can be made faster without losing image quality.
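To illustrate the multi-patch idea only (a minimal sketch, not the project's actual implementation), the input image is divided into progressively finer grids of patches, with each level processed separately before being combined with the coarser level above it:

    import numpy as np

    def split_into_patches(image, rows, cols):
        """Split an H x W x C image into a rows x cols grid of patches."""
        row_blocks = np.array_split(image, rows, axis=0)
        return [patch for block in row_blocks
                for patch in np.array_split(block, cols, axis=1)]

    # Hierarchical multi-patch decomposition: the coarsest level sees the
    # whole image, the next level two halves, the finest a 2 x 2 grid.
    blurred = np.zeros((256, 256, 3))            # placeholder blurred input
    levels = {
        1: split_into_patches(blurred, 1, 1),    # 1 patch (full image)
        2: split_into_patches(blurred, 2, 1),    # 2 patches
        3: split_into_patches(blurred, 2, 2),    # 4 patches
    }
    for level, patches in levels.items():
        print(f"level {level}: {len(patches)} patch(es) of shape {patches[0].shape}")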
The project assesses the theoretical and practical implications for digital image processing, focusing on the blind image deconvolution aspect. The ‘blind’ aspect refers to the point spread function (PSF) involved in blurring the image: the PSF is assumed to be unknown, hence the term ‘blind’. The deblurring was carried out on images selected from publicly available datasets, and the resulting deblurred images were then compared to those produced by existing image-deblurring methods.
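The standard degradation model underlying this formulation (stated here for clarity rather than quoted from the project) is

    b = k * s + n

where b is the observed blurred image, s is the latent sharp image, k is the PSF (blur kernel), * denotes convolution, and n is noise. In blind deconvolution both s and k are unknown and must be estimated from b alone.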
Since this project falls within the field of data science, the programming language adopted was Python, which offers strong support for data analysis and visualization through its ecosystem of external libraries. The implemented process enhances the quality of the blurred input images once the program completes execution, and allows a comparison between the original (ground-truth) image and the new, improved image.
The system was evaluated on test images that were not used for training, in order to determine whether unseen images could be improved by the trained network. Each test image was compared to its corresponding ground-truth image to establish the extent to which deblurring enhanced the quality of the blurred image.
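Such comparisons are commonly quantified with metrics like PSNR and SSIM; a minimal sketch of this step, assuming scikit-image is available and using hypothetical file names, might look as follows:

    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Hypothetical file names; each deblurred test image is compared
    # against its sharp ground-truth counterpart.
    ground_truth = io.imread("sharp.png")
    deblurred = io.imread("deblurred.png")

    psnr = peak_signal_noise_ratio(ground_truth, deblurred)
    # channel_axis=-1 handles RGB images (scikit-image >= 0.19)
    ssim = structural_similarity(ground_truth, deblurred, channel_axis=-1)
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")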
Course: B.Sc. IT (Hons.) Software Development
Supervisor: Prof. John M. Abela