An Efficient Image Haze Removal Algorithm based on New Accurate Depth and Light Estimation Algorithm

Abstract—Single image dehazing has become a challenging task for a variety of image processing and computer vision applications. Many attempts have been made to recover faded colors and improve image contrast. Such methods, however, do not achieve maximum restoration, as images are often subject to color distortion. This paper proposes an efficient single image dehazing algorithm that offers satisfactory scene radiance restoration. The proposed method rests on the estimation of two key indices, image blur and atmospheric light, which are employed in the Image Formation Model (IFM) to recover the scene radiance of the hazy image. More precisely, we propose an efficient depth estimation method based on image blur. Since most existing algorithms treat the atmospheric light as a constant, which often leads to inaccurate estimates, we propose a new algorithm, "A-Estimate", that uses blur and energy information to estimate the atmospheric light accurately; an adaptive transmission map is also proposed. Experimental results on real and synthesized hazy images demonstrate an improved performance of the proposed method compared to existing state-of-the-art methods.


I. INTRODUCTION
Outdoor images are often degraded under bad weather conditions (e.g., fog or haze) by the turbid medium (e.g., dust, mist, fumes, or haze) in the atmosphere during the propagation process. These images usually suffer from poor visibility, such as low contrast and blur, because light is scattered and absorbed with distance from the camera. Consequently, most automatic systems that depend on the quality of the input images (e.g., automatic monitoring systems, outdoor recognition systems, and smart transportation systems), such as surveillance systems that need to understand scenes, extract useful information, and detect image features, fail to work correctly. Therefore, improving haze removal techniques is an important task in computer vision and its applications, such as image classification and aerial imagery.
Although many hazy image enhancement techniques have been proposed, which can be classified into two classes [1]: (1) image enhancement based on processing techniques, and (2) image restoration based on physical models, dehazing performance still has some problems in terms of image quality. First, researchers used traditional image processing techniques to eliminate the haze from a single hazy image (such as methods based on histogram processing [2], [3]), but these techniques produce unacceptable restoration results because a single hazy image can hardly provide much useful information. In [4], [5], [6], polarization-based methods were proposed for dehazing with multiple degrees of polarization. Later, Narasimhan et al. [7], [8], [9] used multiple images of the same scene under different weather conditions. However, these techniques also do not perform well for restoring a single hazy image. Recently, under the hypothesis that the local contrast of hazy images is much lower than that of haze-free images, researchers have used image depth information to remove the haze within a single image using a physical model. In [10], Tan et al. propose a local contrast maximization approach using a Markov Random Field (MRF) to remove the haze, but this approach produces oversaturated images. Fattal [11] proposes an Independent Component Analysis (ICA) based dehazing approach. This approach, however, is time-consuming, and it cannot recover the scene radiance well for images with dense haze.
On the other hand, using machine learning based techniques (neural networks, convolutional neural networks, deep learning), Cai et al. [28], Ren et al. [29], and Song et al. [30] propose image dehazing models built with deep architectures based on convolutional neural networks (CNNs). These models produce some incorrect results in terms of saturation and the naturalness of the restored image, because of the limited data available in the learning process; they are also inefficient because of redundant computations, as Song et al. [30] note in their conclusion.
In this paper, we propose a novel haze removal algorithm using image blur and atmospheric light to estimate the depth and transmission maps (an overview of our proposed method is shown in Fig. 2). The main contributions of our work are:
• We are the first to propose a depth estimation method based on image blur map estimation for haze removal, motivated by the fact that a larger scene depth makes objects appear more blurry in hazy images.
• We propose a new and efficient algorithm, A-Estimate, to estimate the atmospheric light from the most blurry region in the blur map, defined as the local patch with minimum energy. The light A is then selected as the maximum pixel intensity in the defined patch.
• We propose an adaptive transmission map using the distance between the observed intensity and the closest scene point.
The rest of the paper is organized as follows: In Section 2, we review the atmospheric scattering model and the DCP-based dehazing method. In Section 3, we describe the proposed method. Qualitative and quantitative experimental results using both real and synthetic hazy images are reported in Section 4. Finally, Section 5 concludes the paper.

A. Atmospheric Scattering Model
In [31], McCartney proposes the atmospheric scattering model to illustrate the formation of a hazy image (Fig. 3). This model is widely used in computer vision and image processing. Later, Narasimhan and Nayar [8], [32], [33] derived the model so that it can be expressed as follows:

I^c(x) = J^c(x) t(x) + A^c (1 − t(x))   (1)

t(x) = e^{−β d(x)}   (2)

where I^c(x) is the observed intensity of the hazy image at pixel x, J^c is the scene radiance representing the haze-free image, A is the atmospheric light (or background light), t(x) is the transmission map describing the portion of the scene radiance that is neither scattered nor absorbed and reaches the camera, β is the scattering coefficient of the atmosphere, which can be taken as a constant under homogeneous atmospheric conditions [32], and d is the depth map of the scene. I^c, J^c, and A are all represented in RGB color space. Given the input image I^c, the dehazing problem consists of estimating the atmospheric light A and the transmission map t(x), and then restoring the scene radiance J^c using the atmospheric scattering model, Equation (1). According to the atmospheric scattering model, the scene depth is the most important information: once it is known, we can easily obtain the transmission map using Equation (2) and restore the scene radiance.
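To make the model concrete, here is a minimal sketch (in Python with NumPy, not the authors' MATLAB implementation) of inverting Equation (1) once A and t(x) are known, together with the transmission of Equation (2). The lower bound t0 on the transmission is a common safeguard against noise amplification and is an assumption of this sketch, not a parameter stated in the paper:

```python
import numpy as np

def recover_radiance(I, A, t, t0=0.1):
    """Invert the scattering model I = J*t + A*(1 - t), Equation (1).

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    A : atmospheric light, length-3 vector in [0, 1]
    t : transmission map, shape (H, W)
    t0: lower bound on t (assumed) to avoid division blow-up in dense haze
    """
    t = np.clip(t, t0, 1.0)[..., None]   # broadcast t over the color channels
    J = (I - A) / t + A                  # J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

def transmission_from_depth(d, beta=0.8):
    """Equation (2): t(x) = exp(-beta * d(x))."""
    return np.exp(-beta * d)
```

Given an accurate A and t, recovery is exact up to clipping, which is why the rest of the paper focuses on estimating those two quantities.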

B. Dark Channel Prior (DCP ) based Dehazing Method
To estimate A and t(x), the DCP has been proposed. The DCP-based dehazing method finds the minimum intensity in the hazy image through a small local patch [12]. The dark channel of a hazy image is obtained using the following formula:

I_dark(x) = min_{y ∈ p(x)} ( min_{c ∈ {r,g,b}} I^c(y) )   (3)

where p(x) denotes a local patch centered at pixel x. For a hazy image, the value of the dark channel often increases with depth, which means that the closest scene point has a low dark channel value, and the opposite holds for a far scene point. To calculate A, the top 0.1% brightest pixels in I_dark are picked in [12], where P_{0.1%} represents their positions in I_dark. A is then chosen as the pixel with the highest intensity in the hazy image among those positions:

A = max_{y ∈ P_{0.1%}} I(y)   (4)

In the case of non-hazy images, the transmission t(x) = 1 according to Equation (1), so I^c = J^c. In addition, under the assumption that, for most pixels, at least one color channel has a low intensity within a small local patch p, the DCP of the scene radiance is generally equal to 0 for haze-free images taken in outdoor scenes:

J_dark(x) = min_{y ∈ p(x)} ( min_{c ∈ {r,g,b}} J^c(y) ) ≈ 0   (5)

However, this assumption concerns only outdoor terrestrial haze-free images, and it does not hold for pixels in sky areas, where nearby pixels also tend to be bright. It is affirmed in [12] (Equation (5)) that about 75% of the pixels in the dark channels have zero values.
In order to obtain the transmission map t(x), both sides of Equation (1) are divided by A^c, and the minimum operator is applied to both sides:

min_{y ∈ p(x)} ( min_{c} I^c(y) / A^c ) = t(x) min_{y ∈ p(x)} ( min_{c} J^c(y) / A^c ) + 1 − t(x)   (6)

Since the dark channel of the haze-free radiance tends to zero (Equation (5)), we obtain:

t(x) = 1 − min_{y ∈ p(x)} ( min_{c} I^c(y) / A^c )   (7)

If t(x) takes negative values, they are set to zero. Equation (7) describes the general approach to estimate the transmission map t(x), which is then used to recover the scene radiance J^c according to Equation (1). Fig. 4 shows the result of the DCP-based dehazing method.
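The DCP pipeline above can be sketched as follows. This is a hedged illustration using SciPy's `minimum_filter` for the patch minimum; the patch size 15 and the 0.1% fraction are the usual values associated with [12], taken here as assumptions rather than quoted from this paper's text:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Equation (3): patch-wise minimum of the per-pixel channel minimum."""
    return minimum_filter(I.min(axis=2), size=patch)

def estimate_A_dcp(I, patch=15, frac=0.001):
    """Equation (4): A from the top 0.1% brightest dark-channel positions."""
    dark = dark_channel(I, patch)
    n = max(1, int(frac * dark.size))
    idx = np.argsort(dark.ravel())[-n:]        # positions P_{0.1%}
    flat = I.reshape(-1, 3)[idx]
    return flat[flat.sum(axis=1).argmax()]     # highest-intensity pixel among them

def transmission_dcp(I, A, patch=15):
    """Equation (7): t(x) = 1 - dark_channel(I / A)."""
    t = 1.0 - dark_channel(I / A, patch)
    return np.clip(t, 0.0, 1.0)                # negative values are set to zero
```

Note that He et al. additionally keep a small amount of haze with a factor ω ≈ 0.95 in Equation (7); it is omitted here to match the equation as stated above.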

III. PROPOSED IMAGE DEHAZING METHOD
We propose a new single image dehazing method based on both the image blur map and light scattering. This new method ensures that both the estimated A and the depth map are more accurate and can provide a well-restored scene radiance. First, we define the most blurry region in the hazy image using the image blur map and the energy information in a local patch (in our work, the patch size is 21 × 21) via the A-Estimate algorithm; we then select A as the maximum intensity in this patch. Next, based on the selected A and the image blur map, the depth map d and the transmission map TM are estimated to recover the scene radiance. The flow diagram in Fig. 5 shows the process of our proposed method.

A. Hazy Image Blur Map Estimation
Blur is the most common undesirable problem and one of the factors leading to the quality degradation that hazy images suffer from. This means that estimating the amount of blur present in a given hazy image can contribute significantly to restoring and enhancing the visibility of hazy images. To this end, in this section, we propose a blur map estimation method presented in four steps, as follows: 1) Blur map estimation: the spatial Gaussian filter is undoubtedly one of the most useful filters for estimating the amount of blur present in an image, and many works in the literature show the effectiveness of the Gaussian filter for quantifying and evaluating blur within an image. According to [34], the image blur map can be estimated as the difference between the original image and a multi-scale Gaussian-filtered version of it. Let G_{ω,σ} denote the hazy image filtered by an ω × ω Gaussian filter with variance σ. The blur map can then be estimated as:

B(x) = (1/n) Σ_{i=1}^{n} | I_y(x) − G_{k_i, σ}(x) |   (8)

where I_y is the Y channel of the YCbCr color space of the input image, chosen because it contains the most important gray-scale information about the image, k_i = 2^i n + 1, and n is set to 3.
2) Detection of brightest pixels: To find the brightest points within the estimated blur map, we apply a grayscale dilation morphological operation with a structuring element SE.
3) Blur map reconstruction: This step fills the holes in the obtained blur map B_r(x) to recover the set of background pixels that cannot be reached by filling in the background from the edge of the image, using a morphological reconstruction, where C_r denotes the hole-filling morphological reconstruction.
4) Refinement: Finally, we use the guided filter F_g to refine the blur map.
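The first two steps can be sketched as below. This is a simplified illustration: the mapping from kernel size k to the Gaussian σ is an assumption, and the hole-filling reconstruction and guided-filter refinement of steps 3-4 are omitted for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation

def blur_map(I_y, n=3):
    """Step 1, Equation (8): mean |I_y - G_{k_i}(I_y)| over scales k_i = 2^i*n + 1.

    I_y is the Y (luma) channel as a float array. The sigma = k/6 rule used
    to turn a kernel size into a Gaussian width is an assumption of this
    sketch, not a value given in the paper.
    """
    diffs = []
    for i in range(1, n + 1):
        k = 2 ** i * n + 1                  # 7, 13, 25 for n = 3
        sigma = k / 6.0                     # assumed sigma-to-kernel mapping
        diffs.append(np.abs(I_y - gaussian_filter(I_y, sigma)))
    return np.mean(diffs, axis=0)

def dilate_blur_map(B, se=7):
    """Step 2: grayscale dilation with a square structuring element SE."""
    return grey_dilation(B, size=(se, se))
```

Sharp regions (strong edges) change a lot under Gaussian smoothing and thus get a high response, while already-blurry regions change little.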

B. Atmospheric Light Estimation
In most previous single image dehazing methods, the atmospheric light A is estimated from the most haze-opaque pixel; for example, the highest-intensity pixel is used as the value of the atmospheric light in [10], [12] (Fig. 6, the red square). However, the brightest pixel can belong to a white object within the scene, which can lead to a dim scene radiance. To overcome this limitation, and because haze changes the tone and saturation of the atmosphere with distance from the viewpoint, some researchers [27] propose to estimate A from the farthest point in the scene. But in some outdoor hazy images the farthest point can also be a white scene object, such as clouds in the sky (Fig. 6). In this case, the estimation of A is not accurate enough.
To deal with this problem, we propose another method that estimates the atmospheric light A from the most blurry region, detected using the blur map introduced in the previous section. To find the most blurry region within the blur map, we propose to use energy information; we then estimate the atmospheric light with a new algorithm, A-Estimate. More details are given below (see Fig. 6).

Fig. 6: Position of the atmospheric light in He's, Zhu's, and our method, respectively (red marks).
For hazy images, the restored scene radiance J^c varies between lower and upper bounds, derived by setting A = 0 and A = 1 in Equation (1):

(I^c(x) − 1)/t(x) + 1 ≤ J^c(x) ≤ I^c(x)/t(x)   (12)

According to Equation (12), restoring a hazy image with a wrongly estimated atmospheric light leads to undesirable results: a dim estimated A leads to a vivid scene radiance, and the opposite result is obtained when the estimated A is too bright. For example, when we set A = 0, then J^c(x) = min(I^c(x)/t(x), 1), and the recovered scene radiance J(x) can be brighter than the observed intensity I(x). To avoid this problem, the DCP approach and color variance have been adapted by Emberton et al. [35] to estimate A; however, this proposal does not provide accurate results. Peng et al. [34] propose to use image variance and blurriness information in a candidate selection method to estimate the background light BL for underwater images. Nevertheless, this approach is designed for underwater images and does not work well with hazy images. In contrast, we adopt image blur and energy information to estimate A. We propose an A selection method, which selects the most blurry region from the hazy image using the blur map and an energy index. First, to detect the most blurry region, we use a patch-based method to calculate the local energy of each patch within the blur map; in our work, we set the patch size to 21 × 21. The patch with minimum energy represents the likely region that contains the value of A. Then, we take the 0.1% brightest pixels found in the selected patch as the estimated A. The details of the selection algorithm are described in Algorithm 1.
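A minimal sketch of the A-Estimate selection is given below. Two details are assumptions of this sketch rather than specifications from the paper: the patch energy is taken as the sum of squared blur values, and the patches are scanned without overlap:

```python
import numpy as np

def a_estimate(I, P_blr, patch=21, frac=0.001):
    """Pick A from the minimum-energy (most blurry) patch of the blur map.

    I     : hazy image, float array (H, W, 3)
    P_blr : blur map, float array (H, W)
    The energy of a patch is assumed here to be the sum of squared blur
    values; the paper does not spell out the exact energy formula.
    """
    H, W = P_blr.shape
    best, best_e = (0, 0), np.inf
    for y in range(0, H - patch + 1, patch):          # non-overlapping scan (assumed)
        for x in range(0, W - patch + 1, patch):
            e = np.sum(P_blr[y:y + patch, x:x + patch] ** 2)
            if e < best_e:
                best_e, best = e, (y, x)
    y, x = best
    region = I[y:y + patch, x:x + patch].reshape(-1, I.shape[2])
    n = max(1, int(frac * region.shape[0]))           # 0.1% brightest pixels
    bright = region[np.argsort(region.sum(axis=1))[-n:]]
    return bright.mean(axis=0)                        # estimated A per channel
```

For a 21 × 21 patch, 0.1% of 441 pixels rounds down to a single pixel, so this reduces to the maximum-intensity pixel of the patch, consistent with the contribution list in the introduction.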

C. Depth Estimation
Objects far from the camera appear more blurry than those near the camera, which means that the amount of blur in a given hazy image increases with the distance from the camera. Since the depth of the image is defined by the distance from the camera viewpoint to the farthest point in the image, the amount of blur in a given hazy image increases with depth. Therefore, depth estimation is closely related to blur estimation. In this section, we propose a scene depth estimation based on image blur and light scattering as an initial depth map; we then propose a combination method to obtain the final depth map. For the first estimate, we use the image blur map of Equation (11) to estimate the scene depth:

d_B(x) = F_S(B(x))   (14)

The second estimate is the maximum intensity prior based depth estimation (derived from the assumption of the DCP-based method), denoted d_D, defined as:

d_D(x) = F_S( d_max − max_{c ∈ {r,g,b}} I^c(x) )   (15)

where d_max is defined by the following equation:

d_max = max_x ( max( I_R(x), I_G(x), I_B(x) ) )   (16)

where I_R, I_G, and I_B are the red, green, and blue channels of the image I in RGB color space. Our estimation combines these two depth estimates, which are sigmoidally blended to obtain the scene depth of a hazy image. By combining Equation (14) and Equation (15), we introduce an efficient scene depth estimation method considering both blur and light scattering, as follows:

d_n(x) = α_1 α_2 d_D(x) + (1 − α_1 α_2) d_B(x)   (17)

where F_S is an intensity normalization function that normalizes the intensity range, given as:

F_S(V) = (V − min(V)) / (max(V) − min(V))

Note that I is a two-dimensional image.
where α_1 = S(avg(A), 0.5) and α_2 = S(avg(I^c), 0.1) are defined by the logistic function:

S(a, v) = 1 / (1 + e^{−s(a − v)})   (18)

where s controls the steepness of the transition. In the final step, we refine and smooth the obtained depth map d_n using soft matting [36] or the guided filter [26]. For more clarification: when a hazy image has a bright A (for example, avg(A) >> 0.5), then d_D is more efficient and faithful for estimating the scene depth. The red, green, and blue lights are more absorbed and scattered at far scene points. Therefore, when the hazy image has a moderate level of red, green, and blue global content (avg(I^c) >> 0.1) and the atmospheric light is comparatively bright (avg(A) >> 0.5), d_D alone can represent the scene depth well. Finally, when there is very little red, green, and blue light in the scene (avg(I^c) << 0.1), d_D fails to represent the scene depth well; in that case, the blur map estimates the scene depth well, which corresponds to α_1 ≈ 1, α_2 ≈ 0. In the other cases, the combination of the two approaches is the best way to estimate the scene depth.
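The sigmoid blending can be sketched as below. The combination weight w = α_1 · α_2 and the steepness s = 32 are reconstructions consistent with the discussion above (bright A and moderate image intensity favour d_D, otherwise d_B dominates), not formulas quoted verbatim from the paper:

```python
import numpy as np

def sigmoid(a, v, s=32.0):
    """Logistic weight S(a, v) of Equation (18); s = 32 is an assumed steepness."""
    return 1.0 / (1.0 + np.exp(-s * (a - v)))

def normalize(V):
    """F_S: stretch V to the [0, 1] range."""
    return (V - V.min()) / (V.max() - V.min() + 1e-12)

def combine_depth(d_B, d_D, I, A):
    """Blend blur-based depth d_B and intensity-prior depth d_D.

    a1 is high when the atmospheric light is bright (avg(A) > 0.5) and
    a2 is high when the image has moderate global intensity (avg(I) > 0.1);
    only when both hold does d_D dominate, otherwise d_B takes over.
    """
    a1 = sigmoid(np.mean(A), 0.5)
    a2 = sigmoid(np.mean(I), 0.1)
    w = a1 * a2
    return normalize(w * d_D + (1.0 - w) * d_B)
```

With a dim atmospheric light or a very dark image, w collapses to 0 and the result is simply the normalized blur-based depth, matching the α_1 ≈ 1, α_2 ≈ 0 case discussed above.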

D. Transmission Map Estimation TM
In the DCP-based methods, the transmission map TM is estimated based on Equation (7). On the other hand, the TM estimation of a hazy image using the IFM is based on Equation (2), which requires the estimation of the depth from the viewpoint to each point within the scene. In addition, the distance δ between the camera and the nearest scene point must be defined. Peng et al. [34] introduce a metric-based distance estimation to calculate δ:

δ = 1 − max_x ( max_{c ∈ {r,g,b}} |A^c − I^c(x)| ) / max(A^k, 1 − A^k)   (19)
Note that k = arg max_{c ∈ {r,g,b}} ( max_x |A^c − I^c(x)| ), where A is the estimated light. When max_{x,c} |A^c − I^c(x)| is small, δ tends to be large. For that reason, they propose a final scene depth d_f estimated according to the following equation:

d_f(x) = d_sc × ( d(x) + δ )   (20)

where d_sc is a transforming scale used to obtain the real distance. However, this estimation depends on the air light estimation and requires an accurately estimated A; a wrong estimation can lead to color saturation in the closest parts of the scene. To this end, we propose to use the scene depth estimated in the previous section as a simpler way to estimate the distance between the closest scene point and the camera (see Fig. 7). According to Fig. 7 and the definition of image depth, the closest point of the scene is the one with maximum intensity in the depth map. We therefore propose to estimate, in each color channel c, the distance between the observed intensity and the pixel with maximum intensity in the depth map:

δ = min_{x, c ∈ {r,g,b}} | d_max − I^c(x) |   (21)

where d_max is the maximum intensity in the depth map. Finally, to find the scene depth D using the distance δ, we use the following equation:

D(x) = sc × ( d(x) + δ )   (22)

where sc is the scaling constant that transforms the relative distance into an absolute distance; in our work, we set sc = 6 meters (in Fig. 8, the second column represents the final estimated scene depth). Using Equation (22), we can then obtain the transmission map TM as:

t(x) = e^{−β D(x)}   (23)

where β is regarded as constant under homogeneous atmospheric conditions; in this paper, we set β = 0.8. With the parameters chosen in this paper, the proposed method can properly restore the hazy image, as shown in Fig. 8. Finally, we use Equation (1) to recover the scene radiance. In the next section, we show the effectiveness and robustness of the proposed method by discussing the experimental results.
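Putting the distance offset and Equation (2) together, the final transmission step can be sketched as follows, with the paper's stated parameter values sc = 6 and β = 0.8:

```python
import numpy as np

def final_transmission(d, delta, sc=6.0, beta=0.8):
    """t(x) = exp(-beta * D(x)) with D(x) = sc * (d(x) + delta).

    d     : relative scene depth map in [0, 1]
    delta : estimated offset to the closest scene point
    sc    : scale turning relative depth into metres (paper uses 6 m)
    beta  : scattering coefficient (paper uses 0.8)
    """
    D = sc * (d + delta)                     # absolute depth in metres
    return np.clip(np.exp(-beta * D), 0.0, 1.0)
```

Note how a nonzero delta strictly lowers the transmission everywhere, which is exactly its role: even the closest scene point is some distance from the camera and therefore carries some haze.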

IV. RESULTS AND DISCUSSION
In order to verify the effectiveness and reliability of the proposed dehazing method, we test it on a wide range of hazy images (more than 500 real-world and synthetic hazy images), and then compare the results with four state-of-the-art methods: He et al. [12], Quingsong et al. [27], Berman et al. [37], and Wang et al. [38]. Three kinds of evaluation are used to rate the performance of our proposed algorithm:
• Qualitative Evaluation on both Real-World and Synthetic Hazy Images.
• Quantitative Evaluation on both Real-World and Synthetic Hazy Images.
• Computational Time Complexity Comparison.
A. Qualitative Evaluation on both Real-World and Synthetic Hazy Images

1) Qualitative Evaluation on Real-World Hazy Images: Most of the existing dehazing algorithms are able to effectively remove haze and recover the scene radiance of outdoor hazy images, which makes it difficult to visually rank and compare them. To facilitate such a ranking, some challenging images (containing white and gray regions) are selected for testing. Fig. 9 illustrates the qualitative comparison of the four previous dehazing algorithms [12], [27], [37], and [38] on outdoor hazy images. Fig. 9(a) shows the hazy images chosen to be recovered, Fig. 9(b-e) shows the dehazed images produced by the previous methods [12], [27], [37], and [38], respectively, and Fig. 9(f) shows the results of the proposed method.
As shown in Fig. 9(b), the method of He et al. [12] is able to remove most of the haze and renders well-recovered scene objects. However, the results suffer from over-enhancement (for example, the top of the mountain in the second image tends to be orange, and in the first image the sky region is much darker than it should be). This problem follows from overestimating the transmission. The results of the second method [27] are visually much better (Fig. 9(c)); however, some parts of white objects are distorted (for instance, the white shirt in the first image is darker). Moreover, in some regions of the second image, edges are not well preserved and become invisible (see the green ground area), so we can deduce that this method does not preserve edges well. Berman's method [37] (Fig. 9(d)) removes the haze gradually and recovers the scene effectively; however, it suffers from the same issue as Quingsong's method [27]: some white regions are distorted (the white shirt in the first image is darker), and the local contrast also seems saturated (as shown at the top of the mountain in the second image). Wang's method [38] performs well in terms of haze removal, but it shows a large distortion in visibility and colors. Compared with the results of the four previous methods [12], [27], [37], and [38], our proposed method shows satisfactory results free from over-saturation, and it preserves the edges and the whiteness of objects. As shown in Fig. 9(f), the sky and clouds are clearly visible in the first image, and the details of all objects are enhanced (particularly the mountain in the third image).
2) Qualitative Evaluation on Synthetic Hazy Images: In order to further evaluate the performance of the proposed method, comparisons were made on synthesized hazy images built from the Middlebury stereo datasets [39], [40], [41], together with their respective ground truth (haze-free) images. Fig. 10(a) shows the synthesized hazy images, the results of the compared methods are given in Fig. 10(b-f), and the last column shows the ground truth of the synthesized images. Observing the images in Fig. 10, it is clear that the results in Fig. 10(b) are far from the ground truth images and appear much darker (particularly in the first image, the pig and the face of the doll with red hair). The results in Fig. 10(c) show a slight residual haziness on distant objects (particularly the distant toys, the map in the background, and the pink toy in the second image). Berman's method (Fig. 10(d)) produces results that seem closer to the ground truth images; however, some inaccuracies and over-enhancement can be observed (particularly the darker background of the third image). Wang et al.'s method [38] also removes all the haze, but color distortion is evident (particularly in the last image). By comparison, our proposed method avoids over-saturation and recovers the scene radiance well (Fig. 10(f)).

B. Quantitative Evaluation on both Real-World and Synthetic Hazy Images
In the previous section, a qualitative comparison was presented to assess and rank the algorithms visually. This section proceeds to quantitatively assess and rank the algorithms.

1) NR-IQA on both Real-World and Synthetic Hazy Images: For NR-IQA, we adopt four widely used metrics: e, r, FADE, and BRISQUE; most of them (e, r, FADE) are designed for dehazing quality assessment and correlate well with human perception. The indicators e and r are proposed by Hautière et al. in the well-known blind contrast enhancement assessment approach [44], where e measures the rate of new visible edges and r measures the average visibility enhancement before and after restoration. FADE is a haze removability assessment proposed in [43]. BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) is an image quality assessment tool for rating the possible loss of naturalness of an image [42]. Generally, lower values of FADE and BRISQUE indicate satisfactory results (a low FADE value indicates greater dehazing ability), while high values are not acceptable. By contrast, higher values of e and r imply better visual improvement after dehazing. Tables I-IV contain the values of the indicators (FADE, BRISQUE, e, and r, respectively) for the dehazing results of the state-of-the-art methods and our proposed method on both real-world and synthetic images (as shown in Fig. 9 and Fig. 10).
From the values of the FADE indicator listed in Table I, it is observed that our proposed method achieves the best results with respect to haze removability for Fig. 9(1-3) and Fig. 10(1,2), and the second-best values for Fig. 9(4,5). From these results, we deduce that the efficiency and performance of our proposed dehazing method are confirmed, and that it outperforms the other state-of-the-art dehazing methods.
In order to verify the naturalness of the restored hazy images, BRISQUE scores are computed for all compared dehazing methods (including ours). As shown in Table II, according to the BRISQUE values, our dehazing results exhibit the least loss of naturalness for most of the images (Fig. 9(1,2,4), Fig. 10(1,2)), indicating that our method preserves the naturalness of images after restoration. It also achieves the second-best score for Fig. 9(3). By contrast, our method fails to preserve the naturalness of the nighttime hazy image (Fig. 9(5)), for which the method of He et al. [12] achieves the best BRISQUE score. According to Table III, our results attain the highest value of e only for Fig. 9(5), which is a nighttime hazy image, while they achieve the second-highest results for Fig. 9(1,2) and Fig. 10(4), and rank third for Fig. 9(3,4) and Fig. 10(1). It is known that the number of new visible edges after restoration must be balanced to avoid noise amplification, and our method represents this characteristic well, as noise amplification is avoided. In contrast, the results of Wang et al. [38] and Berman et al. [37] achieve the best values of e for Fig. 9(2), Fig. 10(1,4) and for Fig. 9(1,4), respectively; this is a consequence of the over-saturation and over-enhancement of local contrast.
As mentioned previously, the indicator r measures the average visibility enhancement after restoration. Table IV shows the r values of the dehazing results for all of the compared methods (including our proposed method). According to these results, our proposed method outperforms the state-of-the-art methods for most of the images (Fig. 9(1,2,3), Fig. 10(1,4)), where it achieves the best values of r, and it obtains the second-best result for Fig. 9(4). Although the results of He et al. [12] and Berman et al. [37] attain the second-best scores of r for Fig. 9(5) and Fig. 9(3), respectively, the over-enhancement problem is obvious in their results.
As a recapitulation of the result analysis, the power of our proposed method to remove haze, enhance visibility, and preserve the naturalness of an image has thus been demonstrated.

2) FR-IQA on Synthetic Hazy Images: For the FR-IQA, we select the color-based structural similarity index (C-SSIM) proposed by Ahmed et al. in [45], which is an image quality assessment metric designed for color images. In addition, it is known to have a high correlation with human evaluation. Its goal is to assess the ability of an algorithm to preserve structural information. In order to run the C-SSIM experiments on the synthetic hazy images presented in Fig. 10, the ground truth images are adopted as the reference images. The C-SSIM values corresponding to the results of all the compared dehazing algorithms are listed in Table V and plotted in Fig. 11. A high C-SSIM score (close to 1) represents a high similarity between the result of a dehazing method and its ground truth; the converse is true for low C-SSIM values.
As can be seen from Table V, our results rank best among the compared algorithms for Fig. 10(1,3), and second-best for Fig. 10(2,4). These results prove that our proposed method can effectively preserve the structural information of images after dehazing. Although Berman et al. and Quingsong et al. achieve the best C-SSIM for Fig. 10(2) and Fig. 10(4), respectively, the haze residual is evident, especially in Quingsong's result for Fig. 10(4); this is confirmed by the FADE value in Table I. By contrast, Wang et al.'s results achieve the lowest C-SSIM values for the majority of the images, indicating a loss of significant structural information in most of the images.

C. Computational Time Complexity Comparison
Time complexity is undoubtedly also an important index for evaluating and ranking the algorithms. For this reason, we list the run times of each algorithm in Table VI. Our proposed algorithm is implemented and run in MATLAB on a MacBook with a 1.2 GHz Intel Core M processor and 8 GB of RAM. For a given hazy image of size N × M, the complexity of our proposed dehazing algorithm is O(N × M). According to Table VI, the running time increases with the size of the image. Our proposed method demonstrates remarkable computational time efficiency and is much faster than most of the other methods. Quingsong et al.'s method also shows an efficient computational time.

FAILURE CASE
Like any dehazing method based on the hazy image model, our proposed dehazing method has a limitation. Based on the experimental results presented in the section "Results and Discussion", we deduce that, despite the good performance of our method on a wide range of synthetic and real-world hazy images, it performs poorly on nighttime hazy images. An example of this failure case is shown in Fig. 9 (Img-5(f)); this problem will be addressed in our future work.

V. CONCLUSION
In this paper, we have proposed a novel single image haze removal method based on both image blur and atmospheric light estimation. For the first time, we propose to use blur and atmospheric light to estimate the depth and transmission maps of a hazy image, instead of the DCP-based dehazing approach. We propose a new, simple, and powerful method to estimate the blur present in the hazy image, and a new algorithm to estimate the atmospheric light based on the blur and energy of the image. To demonstrate the restoration quality of the proposed method, we tested it on real and synthesized hazy images. A large number of hazy images are well recovered using our proposed dehazing method. Both subjective and objective comparison results show that the proposed method recovers the scene radiance of hazy images well compared with other IFM-based dehazing methods.
Algorithm 1 A-Estimate
1: Input: input image I, blur map P_blr.
2: Output: Estimated A.

Fig. 8 :
Fig. 8: Examples of hazy images, their depth maps, transmission maps, and recovered scene radiance. Left: Input hazy image. Center: Restored depth and transmission maps, respectively. Right: Dehazed image.

Fig. 9 :
Fig. 9: Qualitative Comparison of Different Methods on Real-World Images. (a) The hazy images. (b) He et al.'s results. (c) Quingsong et al.'s results. (d) Berman et al.'s results. (e) Wang et al.'s results. (f) Our results.

TABLE I :
FADE value of All the Compared Methods on Real-world and Synthetic Images (Fig. 9 and Fig. 10)

TABLE II :
BRISQUE Score of All the Compared Methods on Real-world and Synthetic Images (Fig. 9 and Fig. 10)

TABLE III: e value of All the Compared Methods on Real-world and Synthetic Images (Fig. 9 and Fig. 10)

TABLE IV :
r value of All the Compared Methods on Real-world and Synthetic Images (Fig. 9 and Fig. 10)

TABLE V :
C-SSIM value of All the Compared Methods on Synthetic Images (Fig. 10)