Atmospheric correction of high resolution data

    The present exercise is an introductory application of atmospheric correction to remotely sensed data of the area around Providence, Rhode Island, USA. The source data consist of three Landsat 5 TM images (bands 2, 3 and 4, covering green [0,52–0,60 μm], red [0,63–0,69 μm] and near-IR [0,76–0,90 μm] respectively), the corresponding metadata text file, and a binary raster layer indicating the pixels of the Landsat images where lawn grass can be found. The three band layers were combined into one false-color composite image, a basic atmospheric correction was then performed using the metadata attributes, and finally the correction method was evaluated by examining the spectral signature of a known type of land cover, i.e. lawn grass.

    The atmosphere accounts for distortions in the radiance that reaches the satellite sensor, mainly because of scattering. The two broadly identified scattering mechanisms, Rayleigh scattering (caused by air molecules) and Mie scattering (caused by aerosols, such as smoke, haze, water vapor and fumes), affect the radiance captured by the satellite sensors and obscure the fine detail in the data stream. The following diagram shows the path of the electromagnetic radiation from the sun through the atmosphere and back to the satellite sensor.

    Figure 1: The effects of the atmosphere in determining various paths for energy to illuminate a (equivalent ground) pixel and to reach the sensor 1)

    The tools available for remote sensing data processing can help reduce the atmospheric distortions and produce “clearer” data for further analysis.

    Data and methods

    The general steps followed can be summarized as follows:

    1. Data acquisition and exploration

    • Acquisition of the remote sensing data
    • Creation of a three-layer composite from the bands 2, 3 and 4
    • Exploration and close observation of the produced false-color image

    2. Atmospheric correction

    • Exploration of the metadata.txt file
    • Running the ATMOSC module for all three bands, using the values obtained from the metadata file and the haze DN value converted to radiance
    • Comparison of the results before and after the correction, with the help of histograms for each band

    3. Evaluation using spectral libraries

    • Extraction of the spectral values for the ‘lawn grass’ locations, as defined by the masking layer, from the atmospherically corrected images, using the EXTRACT module
    • Comparison with the spectral libraries

    Source data

    The atmospheric correction was performed on three Landsat 5 TM images of bands 2, 3 and 4 of the area around the city of Providence, Rhode Island, USA, acquired on the 9th of September 1987. The resulting false color composite was visually compared to the original false color composite. The correction was assessed using a raster layer with binary value pixels (0 and 1), where ‘1’ indicated an area with lawn grass land cover and ‘0’ an area without lawn grass. The spectral signature of lawn grass was given as well.

    Atmospheric correction

    Many atmospheric correction methods have been proposed for multi-spectral satellite imagery, mainly consisting of image-based methods, methods that use atmospheric modeling and methods that use ground data.

    ATMOSC module

    IDRISI’s module for performing atmospheric corrections on loaded imagery is ATMOSC, a sophisticated tool available under the “Restoration” menu item. The tool offers four models to perform the corrections:

    • Dark object subtraction model
    • Cos(t) model
    • Full radiative transfer equation model, and
    • Apparent reflectance model
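    Of these, the Cos(t) model is the one applied in this exercise. A minimal NumPy sketch of how it works for a single band follows; the function name, variable names and sample inputs are illustrative, not the exercise's actual values, and the inputs (ESUN, Earth–Sun distance, solar zenith angle) would come from the metadata file.

```python
import numpy as np

def cost_correction(radiance, haze_radiance, esun, earth_sun_dist, solar_zenith_deg):
    """Sketch of the Cos(t) model (Chavez, 1996) for one band.

    radiance         : at-sensor radiance array
    haze_radiance    : radiance of the darkest object (the 'haze' value)
    esun             : exo-atmospheric solar irradiance for the band
    earth_sun_dist   : Earth-Sun distance in astronomical units
    solar_zenith_deg : solar zenith angle (90 minus sun elevation)
    """
    theta = np.deg2rad(solar_zenith_deg)
    # Dark object subtraction: remove the haze radiance from every pixel
    path_corrected = radiance - haze_radiance
    # Cos(t) approximates the atmospheric transmittance by cos(theta),
    # adding a second cos(theta) factor to the apparent-reflectance formula
    reflectance = (np.pi * path_corrected * earth_sun_dist ** 2) / (
        esun * np.cos(theta) * np.cos(theta)
    )
    return np.clip(reflectance, 0.0, 1.0)
```

    With a zero haze value and a single cos(θ) factor this reduces to the apparent reflectance model, which is why the two models bracket the list above.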

    The model used for the atmospheric correction of the present data is Cos(t), developed by Chavez in 1996. It is an approximation technique for cases where the data available are insufficient for a full atmospheric correction. The model incorporates all the elements of the ‘dark object subtraction’ model, which looks for the lowest values in the data (usually at pixels located over deep water, where reflectance is known to be minimal), regards them as haze and subtracts them from all other values. The model requires a number of inputs, all of which can be extracted from the metadata.txt file.

    Spectral libraries

    The following spectral values have been measured by USGS for ‘lawn grass’ land cover and are normally used to calibrate remote sensing systems. These are the values a “perfect” remote sensing system would capture under ideal conditions, and they serve as reference values for the purposes of the present analysis. The bands of particular interest are bands 2, 3 and 4.

    Band    Spectral reflectance
    B1      4,043227E-02
    B2      7,830066E-02
    B3      4,706150E-02
    B4      6,998996E-01
    B5      3,204015E-01
    B7      1,464245E-01

    The source data were captured approximately three decades ago; it is thus reasonable to expect errors such as within-band line striping caused by the age of the sensors, and a generally lower-quality product. However, such corrections fall outside the scope of this exercise.

    Results

    The two images below were created by composing the three band layers and assigning them RGB colors; blue for band 2, green for band 3 and red for band 4. The left composite is the original, unaltered data, where each pixel of each layer carries DN (digital number) values ranging from 0–255 (8-bit), while the right composite carries radiance/reflectance values ranging from 0–1 and is the atmospherically corrected version.

    Figure 2: Original (non-atmospherically-corrected) false-color composite
    Figure 3: Atmospherically corrected false-color composite
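    The layer stacking behind the composites can be sketched with NumPy; the arrays below are random stand-ins for the actual band rasters, and the shapes are arbitrary.

```python
import numpy as np

# Stand-in 8-bit DN arrays for bands 2, 3 and 4 (the real data come from
# the Landsat files); 100 x 100 is an arbitrary size for the sketch
band2 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
band3 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
band4 = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# False-color assignment used in the text:
# red = band 4 (near-IR), green = band 3 (red), blue = band 2 (green)
composite = np.dstack([band4, band3, band2])
```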

    A more precise insight into the data can be offered by the histogram view. The histograms below were adjusted by editing the Graphic View Settings: values that were too high or too low were excluded, and the bar width was set to the most visually appropriate value. The histograms show the distribution of the values for each image, making it easy to identify the lowest value, which is used as input to the “haze” cell in ATMOSC.

    Extremely large numbers of pixels with very low values are generally regarded as errors. For example, the borders of the original images have a DN value of 0 and appear as extremely high bars at the leftmost part of the respective histograms; these values have been excluded.
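    Picking the haze value while ignoring the DN-0 border pixels can be sketched as follows (a minimal illustration, not the ATMOSC implementation):

```python
import numpy as np

def estimate_haze_dn(band):
    """Return the lowest non-zero DN in a band, skipping the DN-0
    border/fill pixels that were excluded from the histograms."""
    valid = band[band > 0]
    return int(valid.min())
```

    The resulting DN still has to be converted to radiance with the band’s gain and offset before it is entered in the haze cell.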

    Figure 4: Histograms of the distribution of pixel values. Top row: atmospherically corrected values in radiance/reflectance values; bottom row: original pixel values in DN (digital numbers)

    Lawn grass spectral values

    Average spectral values were extracted from bands 2, 3 and 4 for pixels without lawn grass and for pixels with lawn grass. The software module used was IDRISI’s EXTRACT, with the summary type set to ‘average’ and the lawn grass binary raster as the feature definition image. The results are presented in Table 2, where the anomaly is calculated using the following formula:

    n = ((G1 − G0) − S) / S × 100

    where n is the anomaly in %, G1 the average spectral value for pixels with grass, G0 the average spectral value for pixels without grass, and S the spectral signature of lawn grass for the specific band.

    Band    No grass “0”    Grass “1”    Difference    Signature     Anomaly (%)
    B2      0,000534        0,003450     0,002916      0,07830066    –96,28%
    B3      0,018833        0,053761     0,034928      0,070615      –50,54%
    B4      0,143422        0,631646     0,488224      0,6998996     –30,24%
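    The anomaly column can be recomputed directly from the other columns; a quick check in Python, with the table values written with decimal points:

```python
# Recompute the anomaly n = ((G1 - G0) - S) / S * 100 for each band,
# using the (no-grass, grass, signature) values of Table 2
rows = {
    "B2": (0.000534, 0.003450, 0.07830066),
    "B3": (0.018833, 0.053761, 0.070615),
    "B4": (0.143422, 0.631646, 0.6998996),
}

for band, (g0, g1, s) in rows.items():
    anomaly = ((g1 - g0) - s) / s * 100
    print(f"{band}: {anomaly:.2f}%")  # -96.28%, -50.54%, -30.24%
```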

    Conclusions

    The ATMOSC module managed to produce a clearer image with darker deep waters and clearer differences between vegetation and built-up areas. However, it was unable to remove “bad data” errors, such as the line striping particularly visible over the Atlantic Ocean.

    There is a significant difference between the observed values and the signature values. In fact, the observed difference for band 3 is roughly half of the value we should have observed, and for band 2 the deviation is even larger. It is worth considering whether a perfectly atmospherically corrected image would provide us with the actual spectral signature values of a given land cover type. The answer would be no, because there are a number of factors that affect it; in the case of grass, insufficient water availability, for example.


    1) Richards & Jia, 2006: https://www.springer.com/gp/book/9783540297116


    Mapping urban areas in Accra

    Introduction

    Remote Sensing, in the context of the present exercise, is the acquisition of information about the land surface without a site observation. Satellite imagery is used for this purpose; the sensors are able to detect propagated signals reflected from the earth’s surface in a number of different frequencies including, but not limited to, visible light. For this exercise, we will examine whether remote sensing techniques are applicable to studies of urban areas and to what extent. We will also experiment with object-oriented image processing based on segmentation, and we will learn how to interpret the results of such operations.

    Part 1 refers to the analysis of a QuickBird image covering the northern urban fringe areas of Accra, Ghana.

    Part 2 assesses the results of the software operations by comparing them to “ground truth”, as perceived by the human eye.

    Materials and Data

    For the first part, the software used for the image analysis was eCognition by Trimble, which is well suited for analyzing high-resolution satellite images and orthophotos covering complex urban areas. The source data was a 4-band QuickBird satellite image covering an area of 4,5 × 5,6 km over the northern urban fringe areas of Accra, Ghana, with a spatial resolution of 0,6 m, taken on 15 January 2009. For the second part, a smaller image of the Accra region was used.

    Methods

    Part 1

    For the first part, the QuickBird image was loaded into eCognition Developer. The first step was to segment the image at two levels: the first level was named “object level” (OL) and the second level was named “neighborhood level” (NL). The NL was less segmented than the OL, as an effort was made to create larger segments of neighboring objects, which would be unions of parts of the same object (e.g. a house). The best segmentation was selected by trial and error: entering multiple combinations of input values to determine which combination gives the most realistic results. The next step was to classify the segments. Three classes were defined according to the level of urban development: “new development”, “mostly rural” and “fully urban”. For the classification to work, samples had to be assigned to each of the defined classes; this was done by selecting the most representative segments for each case. The classification output can be seen in the following image.


    Image 1: Classification output for part 1 (Accra, Ghana). The colored areas indicate segments of land. Purple: “new development”, green: “mostly rural”, blue: “fully urban”

    Part 2

    For the segmentation in Part 2, a similar operation took place. In addition to the segmentation, however, a ground-truth base had to be produced by manually digitizing a number of buildings in ArcMap.

    After the segmentation had been produced, it was compared to the manual digitizing and the level of accuracy was determined (accuracy assessment). In particular, the areas of the buildings of interest (the ones used for the ground-truth assessment) were calculated and then compared to the areas of the segments corresponding to each of the buildings (Image 2).
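    The area comparison can be sketched as follows; the building/segment area pairs below are illustrative placeholders, not the actual measurements of the exercise, which came from the ArcMap digitizing and the eCognition segments.

```python
# Hypothetical (ground-truth area, segment area) pairs in square metres
pairs = [
    (120.0, 134.5),
    (86.0, 79.2),
]

# Relative area error of each segment against its digitized building
for truth, seg in pairs:
    error_pct = (seg - truth) / truth * 100
    print(f"truth={truth:.1f} m2, segment={seg:.1f} m2, error={error_pct:+.1f}%")
```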


    Image 2: Classification, ground truth (blue) and differences (red)

    Image 3: Diagram of workflows followed

    Results and Discussion

    The produced segmentation map of Accra in Part 1 indicates fully urban development (blue color) in the south-western part of the image. A second fully urban center is located slightly north and west of the main development. The mostly rural parts are located in the western and north-western parts of the image (green color). The purple colored segments are areas under urban development and are located between the “zones” of full development and rural areas, with the exception of the south-eastern part, where they seem to be mixed.

    Concerning the second part, it is obvious that object-based segmentation cannot produce results as accurate as those of manual digitizing. As shown in the images above, there is a spatial difference between the area covered by the structures and the area of their respective segments; however, the extent of the difference is not the same for all types of structures.