Welcome to week six. This week and next we will cover another exciting topic in image and video processing, the topic of recovery. It's a natural continuation of image and video enhancement, which we covered last week. There's an important distinction between the two topics. As we saw last week, the objective of enhancement is to improve the visual quality of an image, or to improve its usefulness in performing a certain task, based on some subjective criteria. For image and video recovery problems, information is lost either in the spatial, temporal, or frequency domains, and the objective is its recovery. Unlike enhancement, where information is also lost in some cases, in recovery a model of the process that resulted in this information loss is first established, and then an objective optimality criterion is determined, based on which an estimate of the original image is sought. During this week, we'll provide examples of image and video recovery. Depending on the application domain, recovery goes by different names, such as restoration, deconvolution, error concealment, inpainting, blotch and scratch removal, super-resolution, pansharpening, and so on. All these recovery problems are inverse problems. We will then discuss some of the important restoration approaches that are used widely in practice. There's probably no application area in which images and videos are acquired that does not involve either active or potential work in image recovery. The material of this and next week will therefore be very valuable in this regard. In this module of the course we will first show examples of image recovery. Then we will focus on image restoration, and more specifically image deconvolution approaches, which we'll group into deterministic and stochastic. And finally, we're going to show the formulation of other recovery problems. I use the term recovery as the umbrella term for any problem in which information is lost and needs to be recovered.
As you'll see, recovery problems assume specific names based on the application; restoration, and deconvolution as a special case of restoration, refers to a specific modeling of the input-output relationship of a degradation system. We show here a blurred image due to defocusing. This is the situation where the automatic focusing system of a camera failed, or the manual focusing was not successful, so what is acquired is this blurred image. It's hard to decipher what's in it. Applying a restoration algorithm, we can obtain a restored image that looks like this. This, by the way, is a grayscale image, not a binary image. So on the left was the image of the EECS department at Northwestern before I joined, and on the right is the image after I joined. The Hubble Space Telescope, named after the astronomer Edwin Hubble, was placed in orbit in 1990 by the space shuttle and is still in operation. Between 1993 and 2009, five different servicing missions repaired, upgraded, and replaced systems on the telescope. After it was placed in orbit, it was discovered that its main mirror was not cut to specifications, and therefore the telescope was myopic. It would observe images such as this one of Saturn; this is a blurred image, and the rings are not clear. These images would be beamed down to stations on Earth, and people would try various restoration algorithms, like the ones we will be discussing later in class, to obtain the restored images. Based on some work we did, here is the restoration of Saturn. The rings are now much sharper, and accurate flux measurements can be made on the restored image. Here's another observation by the telescope, of a galaxy, and here's the restored version of this particular image. I should mention that in carrying out the restoration, the impulse response of the degradation system, which is the myopic telescope, needs to be known.
In this case, this impulse response could be obtained based on geometric optics, or it could be obtained by pointing the telescope at a distant star and measuring the response to it. The star represents an impulse; therefore the measured response is the impulse response. The telescope has actually been extremely successful. It has acquired beautiful images of the heavens, and major scientific results have been obtained due to its observations. Here's a color image that has undergone a degradation that is three dimensional: in distorting one channel, the other two channels have contributed to the distortion. Due to this three-dimensional degradation the degraded image looks like this; it has been subjected to color alterations. Therefore a three-dimensional deconvolution is required, and here is an example of the restored image. When the degradation is 3D, the deconvolution is certainly 3D. But in many instances the degradation may be two dimensional and still we might choose to do a three-dimensional deconvolution, by taking, in other words, the other channels into account. This can be thought of as multichannel restoration. There are actually a number of applications in microscopy, for example fluorescence microscopy, where 3D deconvolution is required. Here's an example of a blurred image due to motion between the camera and the scene. It's an aerial picture taken by a plane, and the motion compensation system on the plane failed; therefore this blurred image was acquired. This is a real image of a moving toy train. Due to motion again, there is blur on the train, while the background and foreground are not blurred. So in this particular case, a smart algorithm is required to first identify the parts of the image, the pixels in the image, that are blurred, and then restore only those pixels while the rest of the image stays untouched.
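The deconvolution setting just described is commonly modeled as the original image convolved with the system's impulse response (the point spread function) plus noise. As a minimal illustrative sketch, not the algorithms from the work mentioned in the lecture, here is a Wiener-style frequency-domain deconvolution, assuming the blur kernel is known and using a hand-tuned regularization constant K in place of the true noise-to-signal power ratio:

```python
import numpy as np

def wiener_deconvolve(g, h, K=0.01):
    """Restore blurred image g given blur kernel h via a Wiener-style filter.

    Computes F_hat = conj(H) / (|H|^2 + K) * G with FFTs, where K is a
    tuning parameter standing in for the noise-to-signal power ratio.
    """
    H = np.fft.fft2(h, s=g.shape)            # kernel spectrum, zero-padded
    G = np.fft.fft2(g)                       # blurred-image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + K)    # Wiener filter
    return np.real(np.fft.ifft2(W * G))      # restored image

# toy demo: blur a synthetic image with a 5x5 box kernel, then restore it
rng = np.random.default_rng(0)
f = rng.random((64, 64))                     # synthetic "original" image
h = np.ones((5, 5)) / 25.0                   # box (defocus-like) blur kernel
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h, s=f.shape)))
f_hat = wiener_deconvolve(g, h, K=1e-4)
```

With a larger K the filter suppresses noise more aggressively at the cost of sharpness; with K near zero it approaches the unstable inverse filter.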
Here's an image of a so-called autoradiograph: microspheres are tied to a specimen, and then this radiographic image is acquired; it's certainly blurred. Applying some of the algorithms we'll be discussing in this class, here are the restored versions of these images. So these are the blurred images, and these are the restored images. Clearly, a lot of information has been recovered. We can see the fine details in the aerial image; there's a river there, and the bridges. We can see the microspheres well defined, and we can see the image of the train now being sharp. We can read the letters on the side of the train, while the foreground and background are also sharp; they have not been modified, have not been degraded by this operation. We show here an image acquired by a handheld camera. During the exposure time the camera shook, and this is the resulting blurred image. We show some sub-images here, blown up, to pay attention to. The degradation is not known, and therefore a blind, spatially varying restoration algorithm was developed by us and applied to this image to obtain this restoration. We see here the shape of the estimated impulse response, the point spread function, of the degradation, and we also look specifically at the sub-images. The improvement is crystal clear: you can see the three people walking here, who are hardly distinguishable in the original observation, but after the restoration there's a very good representation of them. As we are going to discuss in detail later in the course, in image and video compression we take a still image or frame, subdivide it into blocks, and then each block is compressed independently of the remaining blocks. Due to this specific type of processing, especially at high compression ratios, if one looks at the compressed image, one can see these annoying blocking artifacts. You can see here the boundaries of each individual block this image was divided into.
So the problem at hand is to devise a recovery algorithm that will bring back some of the lost information and result in removing these blocking artifacts. Here is one result we obtained with one of our algorithms. By and large the blocking artifacts have been reduced or completely eliminated; some may remain if one pays attention to the forehead of the lady, the hat here, and some other regions of the image. It's the same problem as in the previous slide, but here we deal with video, not just a still image. The blocking artifacts are visible in this frame of the video, and based on a technique we also developed, this is the processed image. In this particular case, when video is involved, one can utilize not only the spatial information but also the temporal information: based on motion information, one can find the similar blocks, the similar regions, in previous frames and utilize that information to remove some of the blocking artifacts. Here's another example of a recovery problem. When images and videos are transmitted, rows of macroblocks are typically placed into packets, and these packets are transmitted. If some of these packets are lost during transmission, then the received image can look like this. This is a frame of a video, and these black rectangles indicate packets which were lost, and therefore the intensity information is lost. So we try to recover the lost information. This is a recovery problem that is typically referred to as an error concealment problem, since we try to conceal the errors that were introduced by the channel. Since we deal with video again, one can use not only the spatial neighborhood information to recover the lost information but also the temporal information. Based on some of our work, here's an example of the concealed frame. It's a satisfactory result, especially for this packet here, since a lot of structure was lost, but we see that it is recovered nicely.
This particular packet did not convey, probably, as much information, but still it's a satisfactory concealment. Here's another example; it's a still image. These blocks, the black blocks, were lost. This could represent a burst type of error. Based on a different recovery algorithm we developed, here is the recovered image; it's almost perfect. If one pays close attention there are some issues, like here, for example, or here, which might be part of the hair, and so on. But by and large it is a very satisfactory concealed image. The last example transitions nicely into the inpainting problem. Here we're given an image like this. This is actually an old movie, and these were supposed to be subtitles; back then people were not as skilled readers, and the subtitles were rather large. This is the only copy of the movie available, and we want to remove the letters so that the original information is available. This problem is referred to as inpainting: we want to paint in the missing values inside the image. It's very similar to the error concealment problem. One difference is that in error concealment, the missing regions are nicely structured and their location is known. In an inpainting problem, one first has to identify the regions in which information is missing. So in this particular example we have to identify each and every letter and realize that wherever it is white here due to a letter, information is lost, but in this white shirt here the information is not lost. So this is not an easy problem, and it has to be solved first, before inpainting. Here's an example of a not very skillful and successful inpainting, but nevertheless it gives you an idea of the problem: the letters are removed. Again, inpainting can be applied to still images or videos. When video is available, there is both spatial and temporal information that can be utilized towards inpainting.
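The simplest purely spatial approach to filling a missing region, for either concealment or inpainting, is to propagate the surrounding known pixels inward. The following is a crude diffusion-style sketch of that idea (my own illustrative choice, not the algorithms from the lecture's examples): each missing pixel is repeatedly replaced by the average of its four neighbors until the fill settles.

```python
import numpy as np

def conceal(img, mask, n_iters=200):
    """Fill pixels where mask is True by repeatedly averaging the
    4-connected neighbors -- a crude diffusion-based concealment sketch."""
    out = img.copy()
    out[mask] = out[~mask].mean()            # rough initial guess
    for _ in range(n_iters):
        padded = np.pad(out, 1, mode="edge") # replicate borders
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]                # update only the missing pixels
    return out

# toy demo: knock out an 8x8 block from a smooth ramp image and refill it
x = np.linspace(0.0, 1.0, 32)
img = x[None, :] + x[:, None]                # smooth 2-D ramp
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True                    # a "lost packet" region
corrupted = img.copy()
corrupted[mask] = 0.0
restored = conceal(corrupted, mask)
```

This converges to a smooth (harmonic) interpolation of the boundary values, which is why it works well on smooth regions but cannot recreate texture or edges; the methods discussed in class also exploit structure and, for video, temporal information.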
Another recovery problem is the image super-resolution problem. We observe a set of low resolution images, and the objective is to combine them in order to obtain a high resolution one. These are images of the same scene, and there's a transformation connecting them. The transformation could be a simple subpixel shift among the frames, or a rotational component can be there, or in general any [INAUDIBLE] transformations. Based again on some of our work, here is an example of a super-resolved image. None of the numbers is visible in the low-res images, and clearly a lot of the information was recovered, since the numbers can be read and the image is sharp and high resolution. Here's another example: four low resolution images are observed and combined judiciously, resulting in this high resolution image. Actually the dimensions of the images shown here are the same, the low and the high, but this was done just for display purposes. The super-resolution of compressed video is another recovery problem. The degradation model is shown here. We start with a video at the original resolution; a short segment is shown here. This is down-sampled by a factor of two in both dimensions, in this particular case, and then this down-sampled video is compressed using the MPEG-4 compression standard at 1 Mbps, using a specific rate controller. So here is the compressed, down-sampled video. The objective is to work with this compressed, down-sampled video and come up with an estimate of the video at the original resolution. Here are the two results. On the left is the result based on bilinear interpolation, and on the right, based on an algorithm that we developed. By comparing them side by side, it should be clear that a lot of the artifacts are not there with the so-called proposed algorithm here, and it represents, by and large, a very good representation of the video at the original resolution.
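The core idea of combining shifted low-resolution frames can be illustrated with the classical shift-and-add scheme. This sketch assumes the simplest possible case, known integer shifts on the high-resolution grid that together cover every subpixel phase; real super-resolution (including the compressed-video case above) must estimate arbitrary subpixel motion and solve a regularized inverse problem instead.

```python
import numpy as np

def shift_and_add(lows, shifts, factor=2):
    """Merge low-res frames onto a high-res grid (shift-and-add sketch).

    lows   : list of (H, W) frames, each sampled on a grid offset by the
             corresponding shift (in high-res pixel units).
    shifts : list of (dy, dx) integer offsets in high-res pixels.
    """
    H, W = lows[0].shape
    hi = np.zeros((H * factor, W * factor))
    for frame, (dy, dx) in zip(lows, shifts):
        hi[dy::factor, dx::factor] = frame   # place samples at their offsets
    return hi

# toy demo: build 4 shifted low-res frames from a known high-res image
hi_true = np.arange(64.0).reshape(8, 8)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
lows = [hi_true[dy::2, dx::2] for dy, dx in shifts]
hi_rec = shift_and_add(lows, shifts, factor=2)
```

When the shifts cover all phases of the high-resolution grid, as here, the reconstruction is exact; with fewer or non-integer shifts the missing samples must be interpolated or estimated.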
Another recovery problem is the dual exposure restoration problem, a more recent problem. According to it, two images are observed: a long exposure image and a short exposure one. There are actually cell phones on the market that do that; they take two images simultaneously, a long and a short exposure one. The long exposure one is subject to blurring due to motion, shaking of the camera, and so on. The short exposure one does not have this problem, but it is noisy, and also the colors have not come out right. One approach would be to work with the short exposure image and smooth the noise. If we do so, we obtain an image like this. It's brighter, and the noise has been reduced or removed; however, the colors are not the colors of the original scene. If we combine now the long and short exposures, based on some of our recent work, we can obtain the restored image shown here. This is definitely an improved version of the observations: it is a sharp image, while maintaining the colors of the original scene. Blind restoration was performed, and this is the estimated point spread function, the impulse response of the degradation system that gave rise to the long exposure image. Another recovery problem, by the name of pansharpening, is encountered in multispectral imaging, in remote sensing, when images are acquired, for example, by Landsat. In that case, instead of having an ideal sensor that would generate high-resolution multispectral images, there is spectral decimation as well as spatial decimation of the information. The spectral decimator results in a high resolution image, but the spectral information has been lost; it's a grayscale, panchromatic image. The spatial decimator will give each and every band of this multispectral image, but at a reduced resolution. So the problem at hand is to combine the panchromatic high resolution image and the low resolution spectral images and try to increase the resolution of the spectral images.
That is, bring them to the resolution of the panchromatic image. It's a super-resolution problem, but in this context of multispectral imaging it is referred to as the pansharpening problem. So here's an example of pansharpening. We're given the low resolution spectral images and the panchromatic grayscale image, and combining the two, here is a pansharpened image that we obtained with one of our algorithms. It has the same resolution as the panchromatic image, and all the spectral information is there; here it's shown clearly in pseudo-coloring of the spectral bands. Another recovery problem is that of demosaicking. Most modern digital cameras acquire images using a single image sensor overlaid with a color filter array. The most commonly used one is the Bayer CFA shown here. According to it, 50% of the pixels are green, and 25% each are red and blue. Equivalently, out of the three planes, the R, G, and B planes, 50% of the pixels are missing in the green channel and 75% in each of the red and blue channels. Demosaicking is the digital image processing recovery problem of reconstructing a full color image from these incomplete color samples, as shown here. With this final example, I want to make the point that a degradation such as motion blur is not only important to remove so as to visualize the distorted image, but it also needs to be taken into account when another task is performed, such as tracking. We see here a popular and successful tracker, by the name of mean shift, failing to perform really well when there is motion blur. The red window shows the tracking result. The red window stays with the object most of the time, but on a number of occasions it does not coincide with the object, so the tracker loses track of the object of interest. By taking the motion blur into account, with some of the work we did, we can see that now the tracking is more successful: this red square, rectangle rather, stays with the object most of the time.
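The demosaicking step described above can be sketched in its simplest form, bilinear interpolation of the missing samples. The sketch below handles only the green plane and assumes an RGGB Bayer layout; it is illustrative only, as real camera pipelines use edge-aware methods, and the red and blue planes would be interpolated similarly.

```python
import numpy as np

def demosaic_green(raw):
    """Recover the full green plane from a Bayer-mosaicked image by
    bilinear interpolation (minimal sketch; assumes an RGGB layout)."""
    H, W = raw.shape
    # green sites in an RGGB pattern: (0,1) and (1,0) within each 2x2 cell
    gmask = np.zeros((H, W), dtype=bool)
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True
    green = np.where(gmask, raw, 0.0)        # known green samples, 0 elsewhere
    # sum the 4-connected green neighbors and count how many are known
    padded = np.pad(green, 1, mode="edge")
    cnt = np.pad(gmask.astype(float), 1, mode="edge")
    num = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:])
    den = (cnt[:-2, 1:-1] + cnt[2:, 1:-1] +
           cnt[1:-1, :-2] + cnt[1:-1, 2:])
    interp = num / np.maximum(den, 1)        # average of available neighbors
    return np.where(gmask, raw, interp)      # keep measured samples as-is

# toy demo: mosaic a smooth green plane and recover it
x = np.linspace(0.0, 1.0, 16)
g_true = x[None, :] + x[:, None]             # smooth true green plane
raw = np.full((16, 16), -1.0)                # dummy red/blue samples
raw[0::2, 1::2] = g_true[0::2, 1::2]
raw[1::2, 0::2] = g_true[1::2, 0::2]
g_rec = demosaic_green(raw)
```

On a smoothly varying plane, as in the demo, the interpolation is essentially exact away from the borders; errors concentrate at edges and fine texture, which is where the more sophisticated demosaicking algorithms earn their keep.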
We tried to demonstrate with the previous examples that there are various sources of degradation which result in loss of information, and therefore a recovery problem needs to be solved. It's probably safe to say that whenever images and videos are acquired there is either active or potential interest in solving a recovery problem. Drawing from the previous examples, here are some sources of degradation. The degradation can be due to motion between the camera and the scene, either unintentional motion, or when a motion compensation system fails, for example. It can be due to atmospheric turbulence: whenever we image the objects in the sky with, for example, a ground-based telescope, we have to image through the turbulent atmosphere. It can be due to defocusing, either because the autofocusing system failed or because, when we focus manually, we don't do the best possible job. Generally speaking, it can be due to the limitations of acquisition systems: the finite resolution imposed by the optics, for example, the constraints imposed by the physics of the objects of interest we want to image, and cost considerations; very often we want to use software to increase the capabilities of hardware, so we want to make, for example, a $1,000 camera perform like a $30,000 camera. It can be due to the finite resolution of sensors, and this resolution depends on the frequency of the electromagnetic spectrum at which we are imaging. It can be due to quantization errors, when, for example, compression is performed on a signal, or to transmission errors, errors introduced by the unfriendly channel. And of course there is the ever-present noise. We show here a list of the recovery problems we described through the examples we just showed: the ever-present noise smoothing problem; restoration and deconvolution in one, two, or three dimensions; removal of compression artifacts; super-resolution in its various incarnations, such as pansharpening or demosaicking; and the two related problems of inpainting and error concealment.
There is also dual exposure imaging, and reconstruction from projections; the latter, along with some other topics, we did not show in the examples, but we will be visiting them later. I'd like to mention here that some of these problems can be combined. For example, I might be interested in solving a super-resolution problem with compressed images or videos, when of course noise is always present in the data. Another comment is that in some applications we observe just a single image, or the frames of a video, while in other applications there are multiple observations involved, as in dual exposure imaging or super-resolution. The examples of recovery problems we described, and additional ones, can be encountered in a number of applications: for example, in space exploration, as with images acquired by the Hubble Space Telescope; remote sensing; surveillance; medicine; neuroimaging; nondestructive testing; commercial and digital photography; video printing, that is, printing one frame from a video; microscopy; molecular and cellular imaging; and multimedia communications. And of course this is not an exhaustive list of either the applications, or the types of recovery problems, or the degradations we just mentioned. So this is a very, very rich field, and what we tried to do is just demonstrate and describe some of the very basic principles, which one can then take and apply to all these incarnations of the recovery problem in the very many applications in which such recovery problems can be encountered.