SAR Processing

Synthetic aperture radar (SAR) images collected by satellites have been widely used to detect layers of oil on the sea surface [Fortuny-Guasch, 2003; Girard-Ardhuin et al., 2001].  The oil can originate from natural or man-made sources.  Thin layers of oil released from natural marine oil seeps generate SAR targets, called oil slicks, which are elongated and radar-dark.  The effect is strongly dependent on environmental factors such as wind strength and sea state.  Previous SAR studies of natural oil seeps have employed subjective, time-consuming manual classification to delineate the location and extent of oil slicks [MacDonald et al., 1996].

We developed a texture-classifying neural network algorithm (TCNNA) to process SAR images more rapidly and to improve discrimination between true and false targets.  Our approach uses a combination of edge detection (Leung-Malik filter bank), collection information (e.g., beam mode), and environmental data, which are processed with a feed-forward neural network.  The TCNNA appears to be successful at reducing false targets and at rapidly interpreting images collected under a wide range of environmental conditions.  Interpreted images produce binary arrays with embedded geo-reference data that are easily stored and manipulated.
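
The edge-detection stage can be illustrated with the oriented filters of the Leung-Malik bank. The following Python sketch builds only the first-derivative-of-Gaussian (edge) filters and convolves them with an image; the full bank also contains second-derivative and blob filters, and the kernel size and scale used here are illustrative assumptions, not the parameters used in the TCNNA.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_derivative_kernel(size=25, sigma=3.0, theta=0.0):
    """First derivative of a Gaussian at orientation theta (radians),
    one of the oriented edge filters in a Leung-Malik-style bank."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the derivative is taken along direction theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    kernel = -xr / sigma ** 2 * g           # derivative along the rotated axis
    return kernel / np.abs(kernel).sum()    # normalize the filter energy

def edge_responses(image, n_orientations=6):
    """Stack of oriented edge responses, usable as per-pixel features."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    return np.stack([convolve(image, gaussian_derivative_kernel(theta=t))
                     for t in thetas])
```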

Figure 1. Examples of images selected for the training set.

Our objective is to process SAR images and extract from them the locations of oil slicks. Ocean SAR image interpretation becomes more complicated when no ancillary meteorological or oceanographic data are available.  Unlike optical images, ocean SAR images are formed by the coherent interaction of the transmitted microwave with the sea surface. A SAR image is displayed as a grey-scale image, such as the SAR samples in Figure 1, in which variations in brightness and dark features are present. The intensity of each pixel represents the proportion of microwave energy backscattered from that area of the sea surface. The change in illumination from one side of the image to the other results from the sensitivity of the antenna as a function of incidence angle. Calm sea surfaces appear dark in SAR images, whereas rough sea surfaces may appear bright, especially when the incidence angle is small.

From an archive of more than 30,000 SAR images [Garcia-Pineda et al., 2008], we strategically selected SAR images covering the entire Gulf of Mexico (GOM) in areas where active seeps occur; a complementary database of oceanographic and meteorological data was then integrated to facilitate SAR image interpretation. Through data-sharing agreements with NASA and with support from the Alaska Satellite Facility (ASF), we acquired a collection of 613 Radarsat-1 images. Radarsat-1 generates images that differ in resolution, incidence angle, and ascending or descending orbit, among other temporal and spatial constraints. The 613 Radarsat-1 images were classified based on their beam modes and time frames. Different versions of the TCNNA are used for different beam modes.

The Pixel Neighborhood Texture Descriptors:
Analysis of a pixel value together with the texture of its neighborhood is used to identify general sea conditions and to detect boundaries between ocean features, such as the edges of oil slicks.  An important approach for describing oil slick boundaries is to quantify their texture content based on the grey-level speckle of the satellite image. This aspect of the analysis is crucial because the brightness of the images varies with the energy returned to the satellite antenna along the incidence angle; the texture measures therefore serve to standardize the appearance of the image. These texture descriptors are computed over regions of 25 by 25 pixels.

To describe a pixel neighborhood we analyze its texture content based on statistical properties of the intensity histogram. One class of such measures is based on statistical moments. A principal approach for describing a histogram is via its central moments [Gonzalez et al., 2004], which are defined as

\[ \mu_n = \sum_{i=0}^{L-1} (x_i - m)^n \, p(x_i) \]

where x is a random variable indicating intensity, p(x_i) is the histogram of the intensity levels in a region, L is the number of possible intensity levels, n is the moment order, and m is the mean intensity:

\[ m = \sum_{i=0}^{L-1} x_i \, p(x_i) \]

In addition to these texture descriptors, we used the histogram standard deviation as a measure of average contrast.
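
As a minimal sketch of these descriptors, the following Python function computes the normalized histogram of one 25 by 25 pixel window, its mean, a few central moments, and the standard deviation used as the contrast measure; the moment orders and the 8-bit intensity range are illustrative assumptions.

```python
import numpy as np

def histogram_texture(window, levels=256, orders=(2, 3, 4)):
    """Texture descriptors for one 25x25 pixel neighborhood, based on
    the intensity histogram: central moments and standard deviation."""
    hist, _ = np.histogram(window, bins=levels, range=(0, levels))
    p = hist / hist.sum()                        # normalized histogram p(x_i)
    x = np.arange(levels)                        # possible intensity levels x_i
    m = np.sum(x * p)                            # mean intensity
    moments = [np.sum((x - m) ** n * p) for n in orders]
    std = np.sqrt(np.sum((x - m) ** 2 * p))      # measure of average contrast
    return m, moments, std
```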

The Neural Net Algorithm.
We used the Matlab R2008a Neural Network Toolbox. The algorithm consists of a pixel-by-pixel feed-forward neural network (FFNN) classification method. The FFNN is a two-layer network with 46 inputs, 5 hidden neurons with a logsig transfer function, and 1 output neuron. This configuration was chosen after balancing processing time against accuracy. The network computes one value that identifies each pixel as “slick” or “no slick”. Instead of the hardlim function for the output neuron we selected the logsig function, which is well suited to learning Boolean outputs; it also allows a threshold to be adjusted after processing to choose the best level of classification.
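
A minimal sketch of this forward pass in Python (the toolbox handles it internally in Matlab); the weight and bias arrays are placeholders standing in for the trained parameters.

```python
import numpy as np

def logsig(a):
    """Log-sigmoid transfer function, mapping activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-a))

def tcnna_forward(x, W1, b1, W2, b2, threshold=0.5):
    """Classify one pixel from its 46-element feature vector x.
    W1 (5x46), b1 (5,), W2 (1x5), b2 (1,) are the trained weights and
    biases (placeholders here). Returns the logsig output and the
    thresholded slick / no-slick decision."""
    h = logsig(W1 @ x + b1)         # hidden layer: 5 logsig neurons
    y = logsig(W2 @ h + b2)[0]      # output neuron: logsig value in (0, 1)
    return y, y >= threshold        # threshold adjustable after processing
```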

Using the training-set database, a gradient descent backpropagation function was chosen to train the FFNN weights and bias values with momentum and an adaptive learning rate. The learning process was stopped after 136 epochs, at which point the highest learning rate was reached. Although the performance goal of 0.01 mean squared error (MSE) was not met, the MSE of 0.022 at which the training session stopped proved sufficient for successful classification.
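
The training strategy (gradient descent with momentum and an adaptive learning rate) can be sketched as below; the rate-adjustment factors and stopping limits are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def train_gdx(grad_fn, theta, lr=0.01, momentum=0.9,
              lr_inc=1.05, lr_dec=0.7, goal=0.01, max_epochs=500):
    """Gradient descent with momentum and an adaptive learning rate.
    grad_fn(theta) returns (mse, gradient); theta holds all weights
    and biases of the network flattened into one vector."""
    velocity = np.zeros_like(theta)
    mse, grad = grad_fn(theta)
    for epoch in range(max_epochs):
        velocity = momentum * velocity - lr * grad
        candidate = theta + velocity
        new_mse, new_grad = grad_fn(candidate)
        if new_mse <= mse:
            lr *= lr_inc                 # error fell: raise the learning rate
            theta, mse, grad = candidate, new_mse, new_grad
        else:
            lr *= lr_dec                 # error rose: lower rate, discard step
            velocity[:] = 0.0
        if mse <= goal:                  # performance goal (e.g., 0.01 MSE)
            break
    return theta, mse
```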

TCNNA vs Manual Classification

To measure the accuracy of the TCNNA we first used a Radarsat-1 Standard Beam mode 3 image collected on May 23, 2006, during the CHEMO III cruise, as part of a Minerals Management Service research program [Garcia-Pineda et al., 2008]. The image is 9000 by 9000 pixels and covers an area of 100 km by 100 km; each pixel represents an area of 12.5 by 12.5 meters. For that project, this image was manually classified by an image analyst specialist using a high-resolution imaging tool with an advanced interactive pen tablet, with which the specialist selected directly from the display the regions of the image considered to be oil slick. The same technique was used on 5 similar SAR images for the same project.

The first comparison between the manual classification technique and the TCNNA is the time required to process an entire image. Using parallel processing on a commercial workstation (3.0 GHz dual processor and 4 GB RAM), the TCNNA processed the entire image in an average of 65 minutes. The specialist took approximately 7.5 hours to process the same image.

Due to the size of these images, it is difficult to observe differences between the two techniques even at full resolution on a 24-inch display (Figure 8). To evaluate the accuracy of selecting oil slick targets while discriminating false look-alikes, we first compared the overall area selected by each technique. Manual classification selected 346.34 km² while the TCNNA selected 461.25 km². Zooming in to double resolution over several regions, we observed that the TCNNA was more accurate than the manual classification, which missed areas where manual delineation was difficult because of the fuzzy boundaries of the slick.

At full resolution the manually classified image appears adequate, selecting regions where the oil slick was present and discriminating false targets; but at 300% zoom it is evident that the pixel classification from the TCNNA is far more accurate than the manual classification, as shown in Figure 2.

Figure 2. At 300% zoom it is visible how manual classification of targets can misplace or miss some boundaries of oil slicks.


TCNNA Validation Test.
A database of 775 feature vectors was constructed from 5 different Radarsat-1 images containing oil slicks. A validation test was performed using the weights and biases from the network used in the TCNNA. The overall accuracy on the validation test was 95.74%, as shown in Figure 3.
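
The validation step reduces to running each held-out feature vector through the trained network and counting correct labels; a minimal sketch, reusing the hypothetical tcnna_forward function from the earlier code:

```python
def validation_accuracy(features, labels, W1, b1, W2, b2, threshold=0.5):
    """Overall accuracy on held-out feature vectors: the fraction of
    vectors whose thresholded network output matches the true label."""
    correct = 0
    for x, label in zip(features, labels):
        _, predicted = tcnna_forward(x, W1, b1, W2, b2, threshold)
        correct += int(predicted == bool(label))
    return correct / len(labels)
```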
