Friday, April 27, 2018

Lab 10: Radar Image Functions

Introduction

Radar is an active remote sensing system that collects data in the microwave portion of the electromagnetic spectrum. The goal of this lab is to gain an introduction to radar remote sensing by executing the following image functions on radar images:
     1. Noise Reduction
     2. Spectral and Spatial Enhancement
     3. Multi-sensor Fusion
     4. Texture Analysis
     5. Polarimetric Processing
     6. Slant-range to Ground-range conversion

Methods

1. Noise Reduction
Noise reduction was performed by despeckling the radar images. Despeckling attempts to eliminate or reduce the salt-and-pepper effect present in an image. It was carried out in Erdas Imagine with the Radar Speckle Suppression tool, which was run three times: the first run used the original image, and each subsequent run used the output of the previous run. Figure 1 shows the parameters entered for the first run.

Fig 1: Radar Speckle Suppression Tool
After all three despeckle passes were run, the histograms were compared to see how the pixel values changed.
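The iterative procedure above can be sketched as follows. This is a stand-in, not the ERDAS implementation: a median filter is assumed here, since the specific filter chosen in the Radar Speckle Suppression tool is not recorded.

```python
import numpy as np
from scipy.ndimage import median_filter

def despeckle(img, passes=3, size=3):
    """Iteratively despeckle an image: each pass feeds the previous
    output back in, mirroring the three runs performed in the lab."""
    out = np.asarray(img, dtype=float)
    for _ in range(passes):
        out = median_filter(out, size=size)
    return out

# Synthetic radar-like scene: uniform backscatter with multiplicative speckle
rng = np.random.default_rng(0)
scene = 100.0 * rng.gamma(shape=4.0, scale=0.25, size=(128, 128))
smoothed = despeckle(scene, passes=3)
```

Comparing `scene.std()` with `smoothed.std()` shows the variance shrinking with each pass, which is what the histogram comparison in the lab looks for.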


2. Spectral and Spatial Enhancement
This was performed by applying edge enhancement with the Non-directional Edge tool in Erdas Imagine; the parameters for this tool can be seen below in Figure 2. In trial 1, speckle suppression was performed on the output of the edge enhancement; in trial 2, speckle suppression was performed on the input image first, to see whether it is better to despeckle before or after edge enhancement.

Fig 2: Non-directional Edge Tool Parameters
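The exact ERDAS kernels are not documented here, but a common non-directional edge operator is the gradient magnitude built from two orthogonal Sobel passes, which responds to edges of any orientation:

```python
import numpy as np
from scipy.ndimage import sobel

def nondirectional_edge(img):
    """Gradient magnitude from two orthogonal Sobel passes, so edges
    of any orientation respond equally."""
    img = np.asarray(img, dtype=float)
    gx = sobel(img, axis=1)   # horizontal gradient
    gy = sobel(img, axis=0)   # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge should respond strongly at the boundary only
step = np.zeros((32, 32))
step[:, 16:] = 100.0
edges = nondirectional_edge(step)
```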

The Radar Speckle Suppression tool was also used with the Wallis Adaptive filter, which adjusts the contrast stretch of the image based on the statistics inside a moving window. A 3x3 window was used here.
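A Wallis-style filter can be sketched as follows. The exact ERDAS formulation is not documented here, so this uses one common form, in which each pixel is rescaled so its local window statistics move toward global target values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, size=3):
    """Wallis-style adaptive contrast: rescale each pixel so the local
    window mean/std move toward the global mean/std."""
    img = np.asarray(img, dtype=float)
    target_mean, target_std = img.mean(), img.std()
    local_mean = uniform_filter(img, size=size)
    local_sq = uniform_filter(img * img, size=size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    return (img - local_mean) * (target_std / local_std) + target_mean

rng = np.random.default_rng(1)
img = rng.normal(120.0, 10.0, size=(64, 64))
out = wallis_filter(img, size=3)   # 3x3 window, as in the lab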

3. Multi-Sensor Fusion
To perform this, the Sensor Merge tool in Erdas Imagine was used; its parameters can be seen below in Figure 3. This tool merges a radar image with a Landsat image.
Fig 3: Sensor Merge Parameters
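Sensor Merge supports several fusion methods; as an illustration only (not the exact ERDAS algorithm), a simple intensity-substitution fusion replaces the optical image's brightness with the radar band while preserving the per-pixel colour ratios:

```python
import numpy as np

def fuse_radar_landsat(landsat_rgb, radar):
    """Replace the optical image's brightness with the radar band while
    preserving the per-pixel band (colour) ratios."""
    rgb = np.asarray(landsat_rgb, dtype=float)
    radar = np.asarray(radar, dtype=float)
    intensity = np.maximum(rgb.mean(axis=2, keepdims=True), 1e-12)
    new_intensity = (radar / max(radar.max(), 1e-12))[..., np.newaxis] * 255.0
    return np.clip(rgb * new_intensity / intensity, 0.0, 255.0)

rng = np.random.default_rng(2)
landsat = rng.uniform(0.0, 255.0, size=(16, 16, 3))   # optical bands
radar = rng.uniform(0.0, 1.0, size=(16, 16))          # despeckled radar band
fused = fuse_radar_landsat(landsat, radar)
```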


4. Texture Analysis
This was executed with the Texture Analysis tool, also run in Erdas Imagine. The tool quantifies image texture, which can be important for vegetation analysis and for distinguishing vegetation species.

Fig 4: Texture Analysis Parameters
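One common texture measure (among several offered by such tools) is the local variance in a moving window: rough surfaces such as vegetation canopies score high, while smooth surfaces such as calm water score near zero. A minimal sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_texture(img, size=3):
    """Local variance in a size x size moving window."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img * img, size=size)
    return np.maximum(mean_sq - mean ** 2, 0.0)

rng = np.random.default_rng(3)
smooth = np.full((32, 32), 50.0)                    # e.g. calm water
rough = 50.0 + rng.normal(0.0, 10.0, (32, 32))      # e.g. vegetation canopy
tex_smooth = variance_texture(smooth)
tex_rough = variance_texture(rough)
```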

5. Polarimetric Processing
The rest of the lab was performed in ENVI. To execute polarimetric processing, images were first synthesized with the Synthesize SIR-C Data tool. Each of the polarization combinations (HH, VV, HV, and total power, TP) was chosen for synthesis. The parameters for this tool can be seen below in Figure 5.

Fig 5: Synthesize Parameters
Fig 6: CEOS Header Report
Synthesis was performed with different parameters on different images to experiment and compare the outputs. The results were then examined via their histograms, and the stretch used to display them was varied among Gaussian, linear, and square-root schemes.
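Two of those display stretches can be sketched directly. The square-root stretch expands the dark end of the histogram, which suits radar power images with long bright tails; the Gaussian stretch, which reshapes the histogram toward a normal distribution, is omitted here:

```python
import numpy as np

def linear_stretch(img):
    """Min-max linear stretch to the 0..255 display range."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, 1e-12) * 255.0

def sqrt_stretch(img):
    """Square-root stretch: brightens the dark end of the histogram."""
    return np.sqrt(linear_stretch(img) / 255.0) * 255.0

rng = np.random.default_rng(4)
power = rng.exponential(scale=30.0, size=(64, 64))  # radar-like power image
lin = linear_stretch(power)
sq = sqrt_stretch(power)
```

Since sqrt(x) >= x on [0, 1], the square-root version is brighter overall, pulling detail out of the dark majority of radar pixels.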

6. Slant-range to Ground-range conversion
This was performed by first previewing the CEOS header shown in Figure 6 and then resampling an image with the Slant to Ground Range SIR-C tool; the parameters for this tool can be seen below in Figure 7. The ground spacing found in the header report was used as the output pixel size in the conversion.
Fig 7: Slant-Ground Parameters
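The geometric idea can be sketched as follows, under the simplifying assumption of a single mid-swath incidence angle (real SIR-C processing varies the angle across the swath). Ground range equals slant range divided by the sine of the incidence angle, so the image stretches horizontally; the spacing and angle values below are placeholders, not the values from the lab's header report:

```python
import numpy as np

def slant_to_ground(row, slant_spacing, incidence_deg, ground_spacing):
    """Resample one image line from slant-range to ground-range
    geometry, assuming one incidence angle for the whole line."""
    n = len(row)
    slant_pos = np.arange(n) * slant_spacing
    # ground range = slant range / sin(incidence angle)
    ground_pos = slant_pos / np.sin(np.radians(incidence_deg))
    out_n = int(ground_pos[-1] // ground_spacing) + 1
    out_pos = np.arange(out_n) * ground_spacing
    return np.interp(out_pos, ground_pos, row)

row = np.linspace(0.0, 100.0, 50)   # one synthetic slant-range line
ground = slant_to_ground(row, slant_spacing=13.32,
                         incidence_deg=35.0, ground_spacing=13.32)
```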

Results

Figure 8 shows how the histograms differ as an image is despeckled. With each successive speckle-suppression run, the contrast in the image increases, and the histogram becomes more bimodal and less normal (bell-shaped).

Fig 8: Comparing the Histograms of Each Speckle Suppressed Output

Figure 9 compares the result of the final speckle-suppression run with the original image. The original image is shown on the left, and the final speckle-suppressed image on the right. Overall, the speckle suppression did eliminate some of the salt-and-pepper effect, but the image now shows a worm-like artifact.
Fig 9: Comparing the Speckle Suppressed Image With the Original Image

Figure 10 compares the results of the spatial enhancement performed with and without prior speckle suppression. The image on the left shows the edge enhancement without speckle suppression, and the image on the right shows the edge enhancement performed after speckle reduction. Visually, the result that includes speckle suppression is cleaner and smoother.

Fig 10: Comparing Performing Edge Enhancement With and Without Performing a Speckle Suppression First

Figure 11 shows the result of merging the radar imagery with the Landsat TM imagery. This is by far the coolest output in this lab. The input Landsat image is shown on the left, while the merged output is shown on the right. The main difference between the two is the colors: the Landsat image has only green, white, and black tones, while the merged image has many more colors. The other main difference is that no clouds are present in the merged image, because radar signals can travel through clouds.
Fig 11: Comparing the Merged Output from the Input Landsat TM Image

Figure 12 shows the result of performing texture analysis. The image from which the texture was derived is shown on the left while the texture values are shown on the right. Pixels with a higher texture value are shown in white while pixels with a lower texture value are shown in black.
Fig 12: Result of Performing Texture Analysis

Figure 13 shows the result of synthesizing the radar imagery. The output does not appear very satisfying and almost looks like a point cloud, although its colors resemble those of a true-color image.

Fig 13: Result of Synthesizing a Radar Image

Figure 14 shows the result of the slant-to-ground-range conversion. The non-corrected image is on the left, and the corrected image is on the right. The main difference is the horizontal stretch apparent in the corrected image; all other qualities of the image appear the same. This is because the line spacing found in the header report was used to increase the pixel size in the horizontal direction.
Fig 14: Comparing Slant-to-Ground Range Conversion


