Image processing for diagnostics

Research Summary:

Cameras are a common means of observing changes in the physical world, and the tools and algorithms for capturing those changes have evolved alongside the underlying technology. As imaging technology progresses, output images have increased in quality and resolution while costs have fallen. Recently there has been growing interest in photogrammetry tools that provide global information about the dynamics of structures, with the added advantage of portability. This information is used to understand the fundamental physics of structures, to monitor their dynamics, and to validate and update analytical models.
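As a concrete illustration of how image data can serve as a motion measurement, the following is a minimal sketch (not part of the research described here) of tracking a shift between two 1-D intensity profiles by cross-correlation, the basic idea behind digital image correlation. The Gaussian profile and the pixel values are synthetic assumptions for the example.

```python
import numpy as np

def find_shift(ref, cur):
    """Estimate the integer pixel shift between two 1-D intensity
    profiles via cross-correlation (a toy stand-in for the template
    tracking used in photogrammetry/DIC)."""
    n = len(ref)
    shifts = range(-n // 2, n // 2)
    # Correlate the reference against circularly shifted copies of the
    # current profile; the best-aligned shift maximizes the dot product.
    corr = [np.dot(ref, np.roll(cur, -s)) for s in shifts]
    return shifts[int(np.argmax(corr))]

# Synthetic example: a narrow Gaussian "feature" displaced by 7 pixels.
x = np.arange(100)
ref = np.exp(-((x - 40) ** 2) / 25.0)
cur = np.roll(ref, 7)
```

Repeating this per frame over a video yields a displacement time history at every tracked point, which is what makes the camera attractive as a full-field transducer.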

Naturally, questions arise about the fidelity of the information obtained from images, and about the camera parameters that limit or determine that information, when a camera is used as a transducer. For commonly used transducers, the measurement limits can be calculated and quantified, since doing so involves solving a forward problem, provided the devices are properly calibrated. The algorithms used in computer vision, however, solve inverse problems, which makes it challenging to quantify the uncertainties involved. To complicate matters further, the image information itself undergoes major transformations, due to optics, sensors, and environmental conditions, before it is displayed to the user.
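The forward/inverse asymmetry can be made concrete with the standard pinhole camera model. The sketch below (intrinsic values are hypothetical, chosen only for illustration) shows that forward projection of a 3-D point to a pixel is a direct computation, whereas the inverse is underdetermined: two distinct 3-D points on the same ray map to the identical pixel, so depth cannot be recovered from one image without additional constraints.

```python
import numpy as np

def project(K, X):
    """Forward problem: pinhole projection of a 3-D point X
    (in camera coordinates) to a 2-D pixel using intrinsic matrix K."""
    x = K @ X
    return x[:2] / x[2]  # perspective division

# Hypothetical intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Two different 3-D points along the same viewing ray...
p1 = project(K, np.array([0.1, 0.2, 1.0]))
p2 = project(K, np.array([0.2, 0.4, 2.0]))
# ...land on the same pixel, so the inverse (pixel -> 3-D point) is
# ill-posed, and its uncertainty is correspondingly hard to quantify.
```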

An investigation is needed into the factors affecting the information captured by cameras. The camera hardware (optics, aperture, shutter), the type of image sensor (CMOS vs. CCD, sensor size, back-side illumination), and the imaging conditions (diffuse vs. specular reflection) can all inform a better quantification of the acquired measurement data.
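As one example of turning such specifications into measurement bounds, the sketch below estimates two quantities from assumed hardware values (pixel pitch, focal length, working distance, exposure time are all illustrative, not drawn from the source): the object-side footprint of a pixel, which bounds spatial resolution, and the motion blur incurred during an exposure, which bounds usable shutter settings for a moving structure. A thin-lens pinhole model is assumed.

```python
def pixel_footprint_mm(pixel_pitch_um, focal_mm, distance_mm):
    """Object-side size of one pixel (mm), assuming a thin-lens model:
    magnification m = f / (d - f); footprint = pitch / m."""
    m = focal_mm / (distance_mm - focal_mm)
    return (pixel_pitch_um / 1000.0) / m

def motion_blur_px(velocity_mm_s, exposure_s, footprint_mm):
    """Blur length in pixels for an object moving at constant velocity
    during one exposure: distance traveled divided by pixel footprint."""
    return velocity_mm_s * exposure_s / footprint_mm

# Illustrative numbers: 5 um pitch, 50 mm lens, 2.05 m working distance.
fp = pixel_footprint_mm(5.0, 50.0, 2050.0)   # 0.2 mm per pixel
# A point moving at 100 mm/s with a 10 ms exposure smears across 5 px,
# directly limiting the displacement resolution of the measurement.
blur = motion_blur_px(100.0, 0.01, fp)
```

Comparable closed-form bounds can be derived for aperture (depth of field) and sensor choice (noise floor), giving a first-order uncertainty budget before any inverse algorithm is applied.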