Method for Frequent Acquisition of High-Resolution Optical Sensor Imagery Using Satellite-Based SAR Imagery for Disaster Mitigation

—A method for frequent acquisition of high-resolution optical sensor imagery from satellite-based SAR (Synthetic Aperture Radar) imagery for disaster mitigation is proposed. The proposed method is based on GAN (Generative Adversarial Network)-based super resolution and conversion from SAR imagery to the corresponding optical sensor imagery in order to increase observation frequency. Through experiments, it is found that SAR imagery can be converted to the corresponding optical sensor imagery and that the spatial resolution of SAR imagery is improved remarkably. Thus, the initial stage of a disaster (a small-scale disaster) can be detected with resolution-enhanced optical sensor imagery derived from the corresponding SAR imagery, which helps prevent the secondary occurrence of a relatively large-scale disaster. It is also found that, for instance, optical sensor imagery with 2.5 m spatial resolution can be acquired every 2.5 days when only Sentinel-1/SAR and Sentinel-2/MSI (Multi Spectral Imager) are used.


I. INTRODUCTION
The purpose of this research is to detect relatively small-scale weather-caused disasters (the initial stage of a disaster) such as landslides, slope failures, floods, road submergence, sediment disasters, etc., in order to mitigate the secondary occurrence of a relatively large-scale disaster. For this purpose, observation frequency and spatial resolution are the key issues.
In the case of weather-caused disasters, optical sensor data cannot be acquired under rainy and cloudy weather conditions, even though disaster areas are easy to detect in optical sensor images. On the other hand, SAR imagery can be acquired under such weather conditions, although detecting disaster areas in SAR imagery is not so easy. SAR imagery can also be acquired at nighttime. Therefore, the observation frequency of SAR sensors is much higher than that of optical sensors. Furthermore, optical and SAR sensor data come in a variety of spatial resolutions, and high spatial resolution is required to detect the initial stage of a disaster.
The method proposed here allows enhancement of the acquired SAR and optical sensor imagery using the well-known GAN [1]-based Single Image Super Resolution: SISR [2] and Very Deep Super Resolution: VDSR [3], as well as improved super resolution [4], together with conversion of SAR imagery to the corresponding optical sensor imagery [5], [6]. By combining the optical sensor imagery derived from the SAR imagery with the actual optical sensor data, the observation frequency is increased remarkably. Thus, frequent high-resolution optical sensor imagery can be obtained from the SAR imagery. This is the basic idea of the proposed method.
The proposed method is validated with satellite-based SAR and optical sensor imagery derived from the Sentinel-1/SAR and Sentinel-2/MSI sensors. There are well-reported research works on the conversion of Sentinel-1/SAR imagery to optical sensor imagery through learning processes based on GAN with training samples of Sentinel-1/SAR and Sentinel-2/MSI imagery [7]. Before the conversion, super resolution is applied to the Sentinel-1/SAR and Sentinel-2/MSI imagery, followed by the learning processes of GAN-based conversion of SAR to optical sensor imagery. Therefore, the 10 m spatial resolutions of Sentinel-1/SAR and Sentinel-2/MSI are improved to 2.5 m with a spatial resolution enhancement factor of 4.
Meanwhile, SAR imagery is geometrically affected by the well-known foreshortening, layover, and shadowing. Therefore, some treatments, namely orthographic transformation and terrain correction, are required to avoid such influences. In this paper, the effects of these treatments are investigated.
The following section describes the related research works, followed by the proposed method. Then, the experiments conducted are described, followed by the conclusion with some discussion.

II. RELATED RESEARCH WORKS
As for previous research works related to disaster mitigation from space, the following papers have been published: visualization of 5D assimilation data for meteorological forecasting and its related disaster mitigation utilizing VIS5D [8]. Flooding and oil spill disaster relief using Sentinel remote sensing satellite data is well reported [9]. A convolutional neural network considering physical processes and its application to disaster detection is proposed and validated with remote sensing satellite imagery [10].
Present Status for Disaster Observation Systems Working Group is reported for research working group of Japan-US Space Research Cooperation [11]. Four Dimensional GIS and Its Application to Disaster Monitoring with Satellite Remote Sensing Data is proposed [12]. An Expectation to Remote Sensing for Disaster Management is announced for the United Nation and Japan-US Science/Technology and Space Application Program [13].
The Conference on GIS and Application of Remote Sensing to Disaster Management Four Dimensional GIS and Its Application to Disaster Monitoring with Satellite Remote Sensing Data is reported [14] together with the Current Status on Disaster Monitoring with Satellites in Japan [15]. Opening Remarks of Satellite Based Disaster Management is made for the Disaster Management Working Group [16].
Disaster related activities are introduced for Japan US Science and Technology as well as Asian Disaster Reducing Center [17]. Internet GIS and Disaster Information Clearing House is proposed [18]. Opening address of the disaster management symposium, United Nations Center for Regional Development is made [19] together with the Virtual center for disaster management for United Nations Center for Regional Development as well [20].
Joint Research on Disaster Management is also proposed and accepted at the United Nations Center for Regional Development, UNCRD Headquarters [21]. Visualization of disaster information derived from Earth observation data is investigated [22]. Disaster monitoring with ASTER onboard the Terra satellite is proposed and its usefulness demonstrated [23].
ICT technology for disaster mitigation, in particular for tsunami warning, is introduced [24]. On the other hand, a cellular automata-based approach for prediction of hot mudflow disaster areas is investigated [25]. Simulation of hot mudflow disasters with cellular automata and verification with satellite imagery is conducted [26].
Backup communication routing through the Internet satellite WINDS for transmission of disaster relief data is attempted [27]. Meanwhile, deceleration in the micro traffic model and its application to simulation of evacuation from disaster areas is proposed and validated [28], together with a cellular automata approach for disaster propagation prediction and the required data system in GIS representations [29].
A flood damage area detection method by means of coherency derived from interferometric SAR analysis with Sentinel-1A SAR is proposed [34]. A change detection method with multi-temporal satellite images based on wavelet decomposition and tiling is proposed, in particular for disaster mitigation [35]. A comparative study of flooded area detection with SAR images based on thresholding and difference images acquired before and after the flooding is conducted and its usefulness evaluated [36].

III. PROPOSED METHOD
The proposed method uses GAN-based super resolution and conversion from SAR imagery to the corresponding optical sensor imagery in order to increase observation frequency.
The method proposed here allows enhancement of the acquired SAR and optical sensor imagery using the well-known GAN-based SISR and VDSR as well as improved super resolution, together with conversion of SAR imagery to the corresponding optical sensor imagery. By combining the optical sensor imagery derived from the SAR imagery with the actual optical sensor data, the observation frequency is increased remarkably. Thus, frequent high-resolution optical sensor imagery can be obtained from the SAR imagery.
On the other hand, VDSR is a convolutional neural network architecture designed for single image super-resolution processing. A VDSR network learns the mapping between low-resolution and high-resolution images. This mapping is possible because the low-resolution and high-resolution images have similar image content, differing mainly in fine high-frequency detail.
VDSR uses a residual learning method; that is, the network is trained to estimate a residual image. In the context of super-resolution processing, the residual image is the difference between a high-resolution reference image and a low-resolution image upscaled using bicubic interpolation to match the size of the reference image. The residual image contains information about the detailed high-frequency content of the image.
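The residual computation described above can be sketched as follows. This is a minimal illustration (not the paper's code), assuming an 8-bit grayscale reference image on disk and simulating the low-resolution input by downscaling the reference:

```python
import numpy as np
from PIL import Image

def residual_image(hr_path: str, scale: int = 4) -> np.ndarray:
    """Residual between an HR reference image and its bicubic upscale.

    The LR image is simulated by downscaling the reference, then upscaled
    back with bicubic interpolation; the residual keeps the high-frequency
    detail that bicubic interpolation cannot recover.
    """
    hr = Image.open(hr_path).convert("L")        # luminance only, as in VDSR
    w, h = hr.size
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)  # simulated LR input
    up = lr.resize((w, h), Image.BICUBIC)        # bicubic upscale to HR size
    return np.asarray(hr, dtype=np.float32) - np.asarray(up, dtype=np.float32)
```

A smooth image yields a near-zero residual, while edges and texture produce large residual values, which is exactly the high-frequency content the network is asked to predict.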
The VDSR network estimates the residual image from the luminance of the color image. The luminance channel Y of an image represents the brightness of each pixel as a linear combination of the red, green, and blue pixel values. The two chrominance channels Cb and Cr of the image represent chrominance information as different linear combinations of the red, green, and blue pixel values. VDSR is trained using only the luminance channel because human perception is more sensitive to changes in brightness than to changes in color. Once the VDSR network is trained to estimate the residual image, the high-resolution image can be reconstructed by adding the estimated residual image to the up-sampled low-resolution image and converting the image back to the RGB color space.

The magnification is the ratio of the size of the reference image to the size of the low-resolution image. Since low-resolution images lose information about the high-frequency content of the image, SISR becomes even more difficult at higher magnifications. VDSR uses a large receptive field to address this problem. Training a VDSR network with scale augmentation at multiple factors allows the network to take advantage of the image context from lower magnifications, which improves results at high magnifications. Furthermore, the VDSR network can be generalized to accept images with non-integer magnifications.
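As a hedged sketch of the reconstruction step, the predicted Y-channel residual can be added to the luma of the bicubic-upsampled image and the result converted back to RGB. The BT.601 full-range conversion coefficients used here are an assumption for illustration; the paper does not specify the exact color transform:

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 full-range RGB -> YCbCr (float arrays in [0, 255])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycc: np.ndarray) -> np.ndarray:
    """Inverse BT.601 full-range conversion, clipped to valid pixel range."""
    y, cb, cr = ycc[..., 0], ycc[..., 1] - 128.0, ycc[..., 2] - 128.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)

def vdsr_reconstruct(upsampled_rgb: np.ndarray,
                     predicted_residual_y: np.ndarray) -> np.ndarray:
    """Add the network's residual to the luma channel of the bicubic-upsampled
    image, then convert back to RGB; chroma channels are kept as upsampled."""
    ycc = rgb_to_ycbcr(upsampled_rgb.astype(np.float32))
    ycc[..., 0] = np.clip(ycc[..., 0] + predicted_residual_y, 0.0, 255.0)
    return ycbcr_to_rgb(ycc)
```

With a zero residual the function reduces to a color-space round trip, so the output matches the bicubic-upsampled input.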
The proposed method is validated with satellite-based SAR and optical sensor imagery derived from the Sentinel-1/SAR and Sentinel-2/MSI sensors. There are well-reported research works on the conversion of Sentinel-1/SAR imagery to optical sensor imagery through learning processes based on GAN with training samples of Sentinel-1/SAR and Sentinel-2/MSI imagery.
In accordance with study [7], the data preparation and evaluation proceed as follows:

1) Methods of data evaluation: semi-automatic download and image preparation using Google Earth Engine and MATLAB.

2) Dataset: the SEN1-2 dataset, containing 282,384 pairs of corresponding SAR and optical image patches acquired by Sentinel-1 (SAR satellite) and Sentinel-2 (optical satellite) and matched with the surrounding topography using Google Earth Engine and MATLAB. The patches are distributed over the Earth's landmass and span all four weather seasons, which is reflected in the structure of the dataset. Each SAR patch is provided as an 8-bit single-channel image representing the sigma-nought backscatter values in dB scale. Each optical patch is an 8-bit color image representing bands 4, 3, and 2.

3) Image generation by pix2pix, extracting 3,964 pairs of SAR and optical images from the dataset.

4) Data size: as shown in Fig. 1.

After the necessary datasets are read, a ConcatDataset class is created so that both the SAR images and the optical data can be accessed during training. The pix2pix generator and discriminator are then built, and training is performed using the loaded datasets and the constructed generator and discriminator.
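The paired-dataset step above might look like the following PyTorch sketch. The directory layout (matching filenames in two folders) and the normalization to [-1, 1] are assumptions for illustration, not the authors' actual code:

```python
import os
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

class PairedSEN12Dataset(Dataset):
    """Yields (SAR, optical) patch pairs for pix2pix training.

    Assumes matching filenames in two directories, e.g.
    sar_dir/patch_0001.png <-> opt_dir/patch_0001.png (hypothetical layout).
    """

    def __init__(self, sar_dir: str, opt_dir: str):
        self.sar_dir, self.opt_dir = sar_dir, opt_dir
        self.names = sorted(os.listdir(sar_dir))

    def __len__(self) -> int:
        return len(self.names)

    def __getitem__(self, i: int):
        name = self.names[i]
        # SEN1-2 SAR patches are 8-bit single-channel, optical are 8-bit RGB
        sar = np.asarray(Image.open(os.path.join(self.sar_dir, name)).convert("L"),
                         dtype=np.float32)
        opt = np.asarray(Image.open(os.path.join(self.opt_dir, name)).convert("RGB"),
                         dtype=np.float32)
        # scale 0..255 to [-1, 1], the usual pix2pix input range
        sar_t = torch.from_numpy(sar / 127.5 - 1.0).unsqueeze(0)   # 1 x H x W
        opt_t = torch.from_numpy(opt / 127.5 - 1.0).permute(2, 0, 1)  # 3 x H x W
        return sar_t, opt_t
```

Wrapped in a `torch.utils.data.DataLoader`, this yields the paired batches that the generator and discriminator consume during training.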
Before the conversion, super resolution is applied to the Sentinel-1/SAR and Sentinel-2/MSI imagery, followed by the learning processes of GAN-based conversion of SAR to optical sensor imagery. Therefore, the 10 m spatial resolutions of Sentinel-1/SAR and Sentinel-2/MSI are improved to 2.5 m with a spatial resolution enhancement factor of 4.
Without high-frequency information, the quality of the reconstructed high-resolution image is limited. Furthermore, SISR is an ill-posed problem because a single low-resolution image can correspond to multiple candidate high-resolution images.
The residual image is the difference between the high-resolution reference image and the low-resolution image upscaled using bicubic interpolation to match the size of the reference image. A CNN is then trained on the residual image.
Software that rewrites only the conversion function of the image conversion software "waifu2x" using Caffe, built for Windows with CUDA (or cuDNN), can convert faster than on a CPU. From the download site, waifu2x-caffe.zip (650 MB; source code available as zip or tar.gz) can be downloaded.
pix2pix is a machine learning model that performs image-to-image conversion: it generates a fake image based on the content of a given input image, whereas ordinary GANs generate fake images from random noise. It is a model that learns the mapping between the features of paired images. For training, it is necessary to prepare pairs of an input image and the corresponding ground-truth image.
Meanwhile, there is a learning model called pix2pixHD, derived from pix2pix, that also learns image-to-image conversion. The differences between pix2pix and pix2pixHD include improved generators, discriminators, and loss functions. By generating fake images at multiple scales, it becomes easier to capture both the local and global features of the image. Also, pix2pix used the "L1 loss function", which mainly captures low-frequency components, but with its improvements pix2pixHD became able to generate high-resolution images.
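For reference, the standard pix2pix generator objective mentioned above (an adversarial term plus the L1 reconstruction term) can be written as a short sketch. The lambda_l1 = 100 default follows the original pix2pix paper; the tensor shapes in the usage are illustrative only:

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(disc_fake_logits: torch.Tensor,
                           fake_img: torch.Tensor,
                           real_img: torch.Tensor,
                           lambda_l1: float = 100.0) -> torch.Tensor:
    """pix2pix generator objective: adversarial term (fool the discriminator)
    plus an L1 reconstruction term that anchors low-frequency structure to the
    paired ground-truth image."""
    # generator wants the discriminator to output "real" (label 1) on fakes
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # L1 term: pixel-wise distance to the paired ground-truth image
    l1 = F.l1_loss(fake_img, real_img)
    return adv + lambda_l1 * l1
```

pix2pixHD replaces this with a multi-scale discriminator and a feature-matching loss, which is what makes the higher-resolution outputs possible.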

IV. EXPERIMENTS

A. Spatial Resolution Enhancement based on Super Resolution
The spatial resolution of the Sentinel-1/SAR and Sentinel-2/MSI images in the SEN1-2 image dataset can be enhanced with super resolution by an enhancement factor of 4. Fig. 3 shows an example of the enhancement based on waifu2x-caffe. Also, Fig. 4 shows the frequency components of the original and the spatial-resolution-enhanced image. It is obvious that the enhanced image has much more high-frequency content than the original image. Therefore, the spatial resolution of Sentinel-1/SAR and Sentinel-2/MSI can be improved by a factor of 4, which turns the 10 m resolution of Sentinel-1 and -2 into a spatial resolution of 2.5 m.
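The frequency-component comparison of Fig. 4 can be approximated with a simple FFT-based measure. This is an illustrative sketch (the paper does not state how its frequency plots were computed), and the 0.25 cycles/pixel cutoff is an arbitrary assumption:

```python
import numpy as np

def high_frequency_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above `cutoff` (cycles/pixel, Nyquist = 0.5).

    A larger ratio for the super-resolved image than for a plain bicubic
    upscale indicates genuinely added high-frequency content.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    power = np.abs(f) ** 2
    h, w = img.shape
    # radial frequency (cycles/pixel) of each FFT bin
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)
    return float(power[radius > cutoff].sum() / power.sum())
```

Comparing this ratio before and after enhancement gives a single-number version of the spectra shown in Fig. 4.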

B. Generation of Optical Sensor Imagery Data from the Corresponding SAR Imagery Data
Through the learning processes with 842 datasets of Sentinel-1/SAR and Sentinel-2/MSI imagery from SEN1-2, the learning models of pix2pix and pix2pixHD are created. Optical sensor imagery can then be created when an unknown SAR image is input. Fig. 5 shows the input SAR image and the optical sensor images created with pix2pix and pix2pixHD. For reference, the original optical sensor image is also shown in Fig. 5. In Fig. 5(c), the optical sensor images created by pix2pix and pix2pixHD are shown in the upper and lower parts, respectively, and four cases of image creation are shown. Since the object is a forest area, there is no big difference from the correct image. In addition, when pix2pix and pix2pixHD are compared, it can be seen that pix2pixHD can obtain high resolution even with a small number of epochs.

C. Example of Disaster Area Detection with the Trained Models of pix2pix and pix2pixHD
Sentinel-2/MSI images were generated from Sentinel-1/SAR to verify the possibility of detecting a slope failure that occurred in Sakae Village, Nagano Prefecture, using trained pix2pix and pix2pixHD models. The location of the slope failure area is latitude 36.858234 N, longitude 138.610001 E. It is not so easy to identify the slope failure area in the SAR image. Therefore, if an optical sensor image can be created from the SAR image, the slope failure area can be detected easily. The Sentinel-1/SAR image of 2022/08/28 (VV polarization) was input to the model trained with the 842 image pairs of Sentinel-1/SAR and Sentinel-2/MSI and converted into an optical image by the GAN. Fig. 8 shows the Sentinel-1/SAR image and the created quasi-Sentinel-2/MSI image (optical sensor image). Essentially, a SAR image is acquired with an oblique view and has geometric distortions due to foreshortening, layover, and shadowing. Therefore, some treatments are required to eliminate the influence of these geometric distortions; that is, orthographic transformation and terrain correction are required for the Sentinel-1/SAR image. Fig. 9(a) and Fig. 9(b) show the orthographically transformed and terrain-corrected Sentinel-1/SAR image. Using this corrected SAR image, an optical sensor image is created; the image created with pix2pixHD is shown in Fig. 9(c). As shown in Fig. 9, the location and extent of the slope failure become much clearer, in particular in Fig. 9(c), the optical sensor image created from the SAR image.
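Applying a trained generator to an unknown SAR patch, as done above, might be sketched as follows. The generator interface assumed here (a 1-channel input in [-1, 1] mapping to a 3-channel output in [-1, 1]) is a hypothetical convention for illustration, not the authors' published code:

```python
import torch

@torch.no_grad()
def sar_to_optical(generator: torch.nn.Module,
                   sar_patch: torch.Tensor) -> torch.Tensor:
    """Run a trained pix2pix/pix2pixHD generator on one SAR patch.

    `generator` is assumed to map a 1 x H x W tensor in [-1, 1] to a
    3 x H x W quasi-optical tensor in [-1, 1] (hypothetical interface).
    Returns the quasi-optical image rescaled to [0, 1] for display.
    """
    generator.eval()                                     # inference mode
    fake = generator(sar_patch.unsqueeze(0)).squeeze(0)  # add/remove batch dim
    return (fake.clamp(-1.0, 1.0) + 1.0) / 2.0           # rescale to [0, 1]
```

The returned tensor can be saved directly as the quasi-Sentinel-2/MSI image used for visual inspection of the slope failure area.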
Since Sentinel-1/SAR is a C-band SAR that receives both VH and VV polarizations (where VH means transmitted with vertical polarization and received with horizontal polarization), the experiments were conducted not only for VV but also for VH polarization. From these experiments, the following is concluded:

1) In comparing VV and VH, the VV backscattering intensity of the slope failure part is larger than that of VH.
2) In the comparison of optical images generated by pix2pixHD, the VV optical image of the slope failure part is closer to a real optical image than that of VH.
3) Comparing the effects of orthographic transformation and terrain correction, the correction effect of VV backscattering intensity at the slope failure part is larger than that of VH.
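The VV-versus-VH backscatter comparison in item 1) can be quantified with a simple helper. This is an illustrative sketch, assuming calibrated sigma-nought values in linear power units and a boolean mask over the slope failure area:

```python
import numpy as np

def mean_backscatter_db(sigma0_linear: np.ndarray, mask: np.ndarray) -> float:
    """Mean sigma-nought over a region of interest, averaged in linear power
    and reported in dB (averaging dB values directly would bias the result)."""
    return float(10.0 * np.log10(sigma0_linear[mask].mean()))
```

Evaluating this for the VV and VH bands over the same slope failure mask gives the intensity comparison stated above as two directly comparable dB values.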
Sentinel-1/SAR is a C-band SAR and does not have HH polarization, so it cannot be directly compared with QPS/SAR-2 data of HH polarization in X band with 70 cm spatial resolution; however, it can detect slope failure parts to the same extent as VV, and the effects of orthographic transformation and terrain correction are considered to be about the same.

V. CONCLUSION
A method for frequent acquisition of high-resolution optical sensor imagery from satellite-based SAR imagery for disaster mitigation is proposed. The proposed method uses GAN-based super resolution and conversion from SAR imagery to the corresponding optical sensor imagery in order to increase observation frequency.
Through experiments, it is found that SAR imagery can be converted to the corresponding optical sensor imagery and that the spatial resolution of SAR imagery is improved remarkably. Thus, the initial stage of a disaster (a small-scale disaster) can be detected with resolution-enhanced optical sensor imagery derived from the corresponding SAR imagery, which helps prevent the secondary occurrence of a relatively large-scale disaster.
Furthermore, it is found that:

1) In comparing VV and VH, the VV backscattering intensity of the slope failure part is larger than that of VH.

2) In the comparison of optical images generated by pix2pixHD, the VV optical image of the slope failure part is closer to a real optical image than that of VH.

3) Comparing the effects of orthographic transformation and terrain correction, the correction effect on VV backscattering intensity at the slope failure part is larger than that on VH.

4) Sentinel-1/SAR is a C-band SAR and does not have HH polarization, so it cannot be directly compared with QPS/SAR-2 data of HH polarization in X band; however, it can detect slope failure parts to the same extent as VV, and the effects of orthographic transformation and terrain correction are considered to be about the same.

VI. FUTURE RESEARCH WORKS
Further investigation needs to be conducted on the creation of optical sensor images with different parameters of pix2pixHD and with more training datasets of spatial-resolution-enhanced Sentinel-1/SAR and Sentinel-2/MSI images.