Tuesday, May 14, 2019

Remote Sensing Lab 8


Goals and Background:

This lab will introduce the user to measuring and examining the spectral signatures of various features. Collection, analysis, and display methods will be covered for all spectral signatures discussed. In addition, vegetation health analysis is covered, as well as soil analysis with a focus on ferrous materials (iron content).

Methods 

Part 1

We will start by collecting, displaying, and analyzing spectral signatures of 12 different features. These features include:
Standing Water
Running Water
Deciduous Forest
Evergreen Forest
Riparian Vegetation
Crops
Dry Soil
Moist Soil
Rock
Asphalt Highway
Airport Runway
Concrete Surface

Collection begins by first drawing a polygon over the feature with care not to include any unrelated pixels. Then, the Signature Editor is accessed (found under Supervised in the Raster Tab) and a new signature is added with the "Add signature from AOI" option using the already drawn polygon.

figure 1: The Signature Editor window, with the first signature of standing water added.

We will repeat the process for the other 11 features. To display a signature, we can select the "Plot Signature Mean" option, which will show the mean reflectance for each band (wavelength) of the spectral signature. This tool can also plot multiple signatures at a time, allowing the user to analyze differences between similar features.

figure 2: The signature mean plot for standing water. 
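Under the hood, plotting a signature mean is just averaging the AOI's pixels band by band. A minimal NumPy sketch of that computation (the array shapes and toy values are assumptions for illustration; the lab itself uses the Signature Editor):

```python
import numpy as np

def signature_mean(image, mask):
    """Mean reflectance per band for the pixels inside an AOI mask.

    image: array of shape (bands, rows, cols)
    mask:  boolean array of shape (rows, cols), True inside the AOI polygon
    """
    # Pick out the AOI pixels in every band, then average band by band
    return image[:, mask].mean(axis=1)

# Tiny illustrative 2-band, 2x2 "image"; the AOI covers the top row only
img = np.array([[[0.10, 0.20],
                 [0.30, 0.40]],
                [[0.50, 0.60],
                 [0.70, 0.80]]])
aoi = np.array([[True, True],
                [False, False]])
print(signature_mean(img, aoi))  # one mean per band: [0.15 0.55]
```

Plotting these per-band means against wavelength is exactly what the "Plot Signature Mean" chart shows.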

Part 2

Section 1

Spectral characteristics can also be used for large-scale analysis. Using a simple band ratio, we can examine vegetation health and abundance in the AOI of Eau Claire and Chippewa counties. Called the normalized difference vegetation index (NDVI), it makes use of the following equation:

NDVI = (NIR - RED) / (NIR + RED)

To perform this analysis, we will use the Indices tool found under Unsupervised in the Raster tab, with the NDVI option selected for the index. In addition, we specify Landsat 7 Multispectral for the sensor, to match our input image.

figure 3: The Indices tool with the NDVI index selected. 
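As a sketch of what the Indices tool computes per pixel, the NDVI equation in NumPy (the reflectance values are placeholders; on Landsat 7 ETM+, red is band 3 and NIR is band 4):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero where both bands are zero
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Placeholder pixels: dense vegetation, sparse vegetation, pavement
nir = np.array([0.50, 0.25, 0.10])
red = np.array([0.10, 0.15, 0.10])
print(ndvi(nir, red))  # high for vegetation, near 0 for pavement
```

Healthy vegetation reflects strongly in the NIR and absorbs red light, so it scores near +1, while water and impervious surfaces fall near or below 0.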

Section 2

We can also perform a similar band ratio to determine the abundance of ferrous material (iron content) in the soil. We will make use of the following equation:

Ferrous material: MIR / NIR

We will again use the Indices tool, but this time select Ferrous Materials as the index. We will also again specify Landsat 7, as we are using the same input image as before.

figure 4: The Indices tool with Ferrous Materials selected as the index.
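The ferrous-materials index is the same idea with a single division; a minimal sketch (the values are placeholders; on Landsat 7 ETM+ the MIR band used here is band 5):

```python
import numpy as np

def ferrous_index(mir, nir):
    """Ferrous materials index: MIR / NIR, computed per pixel."""
    mir = mir.astype(float)
    nir = nir.astype(float)
    out = np.zeros_like(mir)
    # Guard against division by zero in the NIR band
    np.divide(mir, nir, out=out, where=nir != 0)
    return out

# Placeholder pixels: iron-rich soil vs. vegetated ground
mir = np.array([0.30, 0.10])
nir = np.array([0.20, 0.40])
print(ferrous_index(mir, nir))  # higher values suggest more ferrous material
```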

Results

figure 5: The spectral signatures for the various features. Similar features such as the Asphalt Highway and Airport Runway have very similar signatures throughout all 6 bands. Interesting to note, however, is where they diverge. For example, while the previously mentioned Highway and Runway are very similar, a Concrete Surface (a bridge) supposedly made from similar material diverges greatly from them in the 4th, 5th, and 6th bands. This could be due to a variety of factors, such as some of the reflectance of the water under the bridge being captured in some of the bridge's pixels.

figure 6: A map showing abundance of vegetation in Eau Claire and Chippewa counties. Vegetation abundance unsurprisingly seems to follow agricultural patterns; while cropland has a high amount of vegetation, urban landscapes have very little or even lack vegetation.

figure 7: A map showing ferrous material abundance in Eau Claire and Chippewa counties. Interesting to note is the mostly western spatial distribution for high ferrous material content. This is most likely related to underlying geological deposits. 

Sources

Earth Resources Observation and Science (EROS) Center. (n.d.). Retrieved May 14, 2019, from https://www.usgs.gov/centers/eros

Tuesday, May 7, 2019

Remote Sensing Lab 7


Goals and Background

This lab exercise will acquaint the user with a variety of photogrammetric tasks key to understanding remote sensing projects. These include calculation of scale and relief displacement, stereoscopy and the production of anaglyph images, and area measurement. In addition, the user will perform orthorectification on two input images.


Methods 

Part 1: Scale, Measurement, and Relief Displacement 

Section 1

For the first exercise, we will attempt to calculate scale for vertical images near Eau Claire, WI. We will do this with two methods: 

First, with the photo distance compared to actual ground distance:

Scale =  Photo Distance / Ground Distance

Second, with the focal length of the sensor, elevation, and height of the aircraft:

Scale = Focal Length / (Flying Height of Aircraft - Terrain Elevation)

In remote sensing projects, the method used is determined by the available data. If there is no opportunity to measure ground distance out in the field, the second method must be used.
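Both methods reduce to a single division; a small sketch, using the method 1 measurements worked out in the Results section (2.625 in. on the photo against 8,822.47 ft. on the ground):

```python
def scale_from_distance(photo_dist, ground_dist):
    """Method 1: photo distance / ground distance (same units)."""
    return photo_dist / ground_dist

def scale_from_focal_length(focal_len, flying_height, terrain_elev):
    """Method 2: f / (H - h), all values in the same units."""
    return focal_len / (flying_height - terrain_elev)

# Method 1: convert the ground distance from feet to inches first,
# so both distances share the same units
s = scale_from_distance(2.625, 8_822.47 * 12)
print(f"1:{round(1 / s)}")  # 1:40331
```

The reciprocal of the ratio gives the familiar representative-fraction form 1:N.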

Section 2:

Another important photogrammetric task is measuring area and perimeter of certain features found in images. For this exercise, we will attempt to measure the area and perimeter of the lake in the following image.

figure 1: The lake to be measured.

To achieve this, we will make use of the Measure sub tab under the Manage Data tab. To measure area, we will trace the outline of the lake with the Polygon tool. For perimeter, we will make use of the Polyline tool.
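The Polygon and Polyline measurements come down to standard coordinate geometry; a sketch of the underlying math (the shoelace formula for area, summed segment lengths for perimeter), using a made-up rectangle rather than the lake itself:

```python
import math

def polygon_area_perimeter(verts):
    """Shoelace area and perimeter for a closed polygon.

    verts: list of (x, y) map coordinates in meters.
    """
    area2 = 0.0
    perim = 0.0
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]  # wrap back to the first vertex
        area2 += x1 * y2 - x2 * y1   # shoelace cross term
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area2) / 2.0, perim

# A 100 m x 200 m rectangle: 20,000 m^2 (2 ha) area, 600 m perimeter
area, perim = polygon_area_perimeter([(0, 0), (100, 0), (100, 200), (0, 200)])
print(area, perim)  # 20000.0 600.0
```

The Measure tool applies the same geometry to the traced vertices, reporting area in hectares and perimeter in meters.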

Section 3:

When examining errors in images, relief displacement is often one of the chief concerns.  We will calculate the relief displacement for an image of Eau Claire. 

To do this, we will use the following formula: 

Relief Displacement = (Real World Height * Radial Distance) / Height of camera
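The formula is a single multiplication and division; a quick sketch using the measurements from the Results section (heights converted to inches so the units match):

```python
def relief_displacement(obj_height, radial_dist, camera_height):
    """d = (h * r) / H; h and H in the same real-world units,
    r measured on the photo from the principal point."""
    return (obj_height * radial_dist) / camera_height

# Lab values: object height 1,604.5 in., radial distance 10.5 in.,
# camera height 3,980 ft. = 47,760 in.
d = relief_displacement(1_604.5, 10.5, 47_760)
print(round(d, 2))  # 0.35
```

A positive result means the object leans outward from the principal point, so the correction plots it back inward.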

Part 2: Stereoscopy 

For the second part of the lab, we will create a series of 3D images using different inputs. Known as anaglyph images, these 3D images aid in interpretation and visualization of elevation. 

Section 1 and 2: Creating Anaglyph Image from a DEM and a DSM

To create an anaglyph image, we will use the Anaglyph Generation tool found under the Terrain tab, with an input image of Eau Claire and a DEM. To create an anaglyph from a DSM, we will follow the same process but use a DSM as input rather than a DEM. 

Part 3: Orthorectification 

Section 1

In the third and final part of the lab, we will conduct the process of orthorectification on two input images. 

To begin this process, we will create a Photogrammetric Project, which will provide a file framework for our work. For the model setup, we will select the Polynomial-based Pushbroom model category, with the SPOT Pushbroom option.
figure 2: The model setup window.

Next we will have to set up the horizontal and vertical reference systems. For the horizontal coordinate system, we will select NAD27(CONUS) with the UTM Zone 11 projection.

figure 3: The horizontal coordinate system window.

For the vertical reference system, we will choose the same coordinate system, or rather its vertical counterpart, NAVD27.

Section 2

Now that the model has been set up, we can add our first input image. This is accomplished by using the Add icon under Images in the content panel. After adding our image, we can proceed to the first actual step of orthorectification: interior orientation. In this step we will confirm the parameters of the SPOT pushbroom sensor. 

As the image contained a header file which already contained the necessary information, we need only confirm and select OK to complete interior orientation. 

figure 4: The Interior Orientation window, with the correct sensor information already entered. 

Section 3

With interior orientation completed, we can begin exterior orientation, or making our input images planimetrically correct. This will be achieved by adding Ground Control Points on our input image and matching them to an already corrected reference image. 

We will accomplish this with the Point Measurement tool, found in the Classic Point Measurement option under the Point Measurement tab. We will create 11 total GCPs, matched to two different reference images to fully cover the image's area. 

figure 5: The Point Measurement tool window, with the reference image on the right and the input image on the left. The final GCP shown was labeled as 12 to help distinguish between GCPs from the two reference images (by skipping GCP 10). 

With the horizontal GCPs added, we can also calculate elevation values for all of them (Z-values). This is accomplished by setting a DEM as the vertical reference, and updating the Z values on all points.

Section 4

With all the necessary information about the GCPs added, we can now set the type and usage of all points. This is accomplished through the Formula dialog. We will set the Type to Full and the Usage to Control. 

With this done, we will add our second input image. Much like the first, we will have to conduct interior orientation on it. As the data is the same as before (both images came from the same sensor), we will simply confirm the already obtained sensor information again. 

We will also have to conduct exterior orientation on the second image. However, instead of adding GCPs from a reference image, we can simply use the first image as the reference, creating a series of additional GCPs. This involves matching points on the second image to already existing points on the first. As the images do not totally overlap, we will not match every point. 

With GCPs added to both images, we can now generate tie points. These points will relate the positions of GCPs on both images, making them spatially related and in their correct, real world locations. 

figure 6: The Tie Point Generation window. 

After the tie points have been successfully calculated, we can now proceed to triangulation. Triangulation uses a mathematical model to relate the images, sensor information, and the ground control points all together. 

figure 7: The triangulation window. Settings have been adjusted as needed to better suit the input images. 

With triangulation completed, exterior orientation has also been finished. 

We can now finish orthorectification with ortho resampling. We will use the Bilinear Interpolation Method. 

figure 8: The Ortho Resampling window. Take note of the two outputs at the bottom of the window; each represents a geometrically correct output of one of the two input images. 

Results

Scale Calculation:
S = PD / GD 
PD = 2.625 in. 
GD = 8,822.47 ft. × 12 = 105,869.64 in. 
S = 2.625 / 105,869.64 
S = 1 / 40,331 
Scale ≈ 1:40,331 


S = f / (H – h) 
S = 152 mm / (2,000 ft. – 796 ft.) 
S = 5.98 in. / (24,000 in. – 9,552 in.) 
S = 5.98 in. / 14,448 in. 
S = 1 / 2,416 
Scale ≈ 1:2,416 

Area and Perimeter Calculation:
Area: 38.0209 hectares
Perimeter: 4133.50 meters

Relief Displacement calculation:
Feature height measured on map: 0.5 in. 
0.5 × 3,209 = 1,604.5 in. (≈ 133.7 ft.) real-world height 
Radial distance: 10.5 in. 
Height of camera: 3,980 ft. = 47,760 in. 
Relief Displacement = (h × r) / H 
Relief Displacement = (1,604.5 × 10.5) / 47,760 
Relief Displacement ≈ 0.35 in. 

As relief displacement is positive, to correct it the object should be plotted inwards towards the principal point.


figure 9: The first anaglyph image produced using a DEM. Though major elevation change can be seen with anaglyph (red/blue) glasses, the relief displacement present in the input image still lowers overall image quality. 

figure 10: The second anaglyph image produced using a DSM. Due to the nature of the DSM in comparison to the DEM, the image is of much higher quality and individual feature elevation can be seen. In addition, the input image was an already corrected orthoquad, further raising the output image's quality.




figure 11: The results of triangulation. Listed are the various RMS errors for each category of points, along with the total RMS error for the entire process. As it is below 0.5, this is an acceptable outcome. 


figure 12: The orthorectified output images. Compared to their input counterparts, their various internal errors such as relief displacement have been corrected. In addition, with the use of the GCPs and Tie Points, they have been planimetrically corrected to be in their correct, real world locations. Take note of how certain features (such as the dry riverbed) flow seamlessly from one image to the next.


Sources: 

Chippewa County, WI. (n.d.). Retrieved April 30, 2019, from https://www.co.chippewa.wi.us/

Eau Claire County. (2019, April 30). Retrieved from https://www.co.eau-claire.wi.us/

Natural Resources Conservation Service. (n.d.). Retrieved April 30, 2019, from https://www.nrcs.usda.gov/wps/portal/nrcs/site/national/home/

USDA. (n.d.). Retrieved April 30, 2019, from https://www.usda.gov/

ERDAS IMAGINE. (n.d.). Retrieved April 30, 2019, from https://www.hexagongeospatial.com/products/power-portfolio/erdas-imagine