Rain components are removed from the image based on the characteristics of rain. The color image is first divided into low-frequency and high-frequency parts, so that most of the rain components fall in the high-frequency part. Rain components are then extracted from the high-frequency part using dictionary learning. To recover additional non-rain details, we exploit the sensitivity of variance across color channels (SVCC). Finally, the rain-free high-frequency components are combined with the low-frequency part to obtain the rain-free image.

The visual quality of images is strongly affected by weather conditions such as rain and snow. Rain removal is therefore important, since rain images seriously degrade many computer-vision algorithms such as object recognition, detection, and tracking. We address rain removal from a single image, which is the most flexible setting for both industrial and academic purposes. In this paper, we perform rain removal from a single color image based on an analysis of rain-pixel characteristics.

We first summarize some simple but very useful properties of rain. First, rain pixels fall in the high-frequency part of an image, because raindrops reflect light more strongly than most other scene content. Second, rain streaks can be distinguished from other structures by the fact that there is usually an edge gap between them; an image containing rain streaks therefore has a high average horizontal gradient. Third, rain pixels appear in otherwise constant areas of the image, and their values change little after low-pass filtering. The low-frequency part therefore carries the background intensity at a rain pixel, while the corresponding high-frequency value is the intensity change caused by the rain:

I_orig = I_LF + I_HF
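The decomposition above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple k x k box (mean) filter as the low-pass step and a grayscale input (a color image would be processed per channel); the actual filter and window size k are assumptions.

```python
import numpy as np

def decompose(img, k=5):
    """Split a grayscale image into low- and high-frequency parts,
    using a k x k mean (box) filter as the low-pass step.
    By construction I_orig = I_LF + I_HF holds exactly."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    lf = np.zeros((h, w))
    # accumulate the k*k shifted copies, then normalize -> box mean
    for dy in range(k):
        for dx in range(k):
            lf += padded[dy:dy + h, dx:dx + w]
    lf /= k * k
    hf = img - lf  # high-frequency residual: I_HF = I_orig - I_LF
    return lf, hf
```

Because the high-frequency part is defined as the residual, summing the two parts recovers the original image exactly, which is what allows the final recombination step to work.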
The algorithm shown alongside is used to extract non-rain components from a rain image in the pixel domain. Finally, the rain-free components are combined to obtain the final rain-free image:

I_final = I_LF + HF_nr1 + HF_nr2 + HF_nr3

Based on the fact that rain pixels reflect light more strongly than other pixels, we can roughly estimate the locations of rain pixels in the image. For a normalized image I, we compute five average values for each pixel: given a window W_j of suitable size, the averages I_j (j = 1, 2, 3, 4, 5) are taken with the pixel I(x, y) at the center, bottom-left, top-left, bottom-right, and top-right of the window, respectively. If the condition comparing I(x, y) with the average I_j holds for every j, the pixel I(x, y) is classified as a rain pixel. A position matrix L of the same size as I is then formed, with 0 at every rain-pixel position (x, y) and 1 at the remaining positions, so that L contains only 0s and 1s. The original image I is multiplied with the position matrix L pixel by pixel (element-wise, not matrix multiplication), so that all rain pixels are zeroed, and the resulting image is passed to the next stage.
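This pixel-domain localization step can be sketched as below. The exact inequality used in the paper is not reproduced in the text, so the comparison here ("the pixel exceeds each of the five window means by a margin tau") is an assumption consistent with rain pixels being brighter than their surroundings; the window size k and margin tau are likewise assumed values.

```python
import numpy as np

def box_mean(I, k):
    """Mean over a k x k window (k odd), edge-padded, pixel at center."""
    pad = k // 2
    P = np.pad(I, pad, mode="edge")
    h, w = I.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += P[dy:dy + h, dx:dx + w]
    return out / (k * k)

def rain_mask_and_remove(I, k=3, tau=0.02):
    """Flag a pixel as rain when it exceeds all five window means
    (pixel at the window's center and at its four corners), then zero
    the flagged pixels via the binary position matrix L."""
    I = I.astype(float)
    h, w = I.shape
    s = k // 2
    M = box_mean(I, k)                     # mean with pixel at window center
    Mp = np.pad(M, s, mode="edge")
    # Shifting the centered-mean map yields the four corner-window means:
    means = [
        M,                                 # pixel at center
        Mp[0:h, 0:w],                      # pixel at bottom-right of window
        Mp[0:h, 2 * s:2 * s + w],          # pixel at bottom-left
        Mp[2 * s:2 * s + h, 0:w],          # pixel at top-right
        Mp[2 * s:2 * s + h, 2 * s:2 * s + w],  # pixel at top-left
    ]
    rain = np.ones((h, w), dtype=bool)
    for m in means:
        rain &= I > m + tau                # assumed condition: brighter than every mean
    L = np.where(rain, 0.0, 1.0)           # position matrix: 0 at rain pixels, 1 elsewhere
    return L, I * L                        # element-wise product zeroes the rain pixels
```

For example, on a flat 0.1-intensity patch containing a single bright 0.9 pixel, only the bright pixel is flagged: every one of its five window means is well below it, while every background pixel fails the margin test, so the element-wise product removes exactly that pixel.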