Tuesday, May 14, 2019

Remote Sensing Lab 8

Goals and Background:

This lab will introduce the user to measuring and examining spectral signatures of various features. Collection, analysis, and display methods will be covered for all spectral signatures discussed. In addition, vegetation health analysis is covered, as well as soil analysis with a focus on ferrous materials (iron content).

Methods 

Part 1

We will start by collecting, displaying, and analyzing spectral signatures of 12 different features. These features include:
Standing Water
Running Water
Deciduous Forest
Evergreen Forest
Riparian Vegetation
Crops
Dry Soil
Moist Soil
Rock
Asphalt Highway
Airport Runway
Concrete Surface

Collection begins with drawing a polygon over the feature, taking care not to include any unrelated pixels. Then the Signature Editor is accessed (found under Supervised in the Raster tab) and a new signature is added with the "Add signature from AOI" option, using the already-drawn polygon.

figure 1: The Signature Editor window, with the first signature of standing water added.

We will repeat the process for the other 11 features. To display a signature, we can select the "Plot Signature Mean" option, which will show the mean reflectance for each band (wavelength) of the spectral signature. This tool can also plot multiple signatures at a time, allowing the user to analyze differences between similar features.

figure 2: The signature mean plot for standing water. 
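For readers reproducing this outside ERDAS Imagine, the mean-signature plot amounts to averaging each band over the pixels inside the AOI polygon. Below is a minimal Python sketch on hypothetical demo data; a real workflow would load actual imagery and a rasterized AOI instead.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-ins: a 6-band image and a boolean AOI mask.
rng = np.random.default_rng(0)
image = rng.random((6, 100, 100))      # (bands, rows, cols) reflectance
aoi_mask = np.zeros((100, 100), bool)
aoi_mask[40:60, 40:60] = True          # the polygon drawn over the feature

# Mean reflectance per band for all pixels inside the AOI.
means = [band[aoi_mask].mean() for band in image]

plt.plot(range(1, 7), means, marker="o", label="Standing Water")
plt.xlabel("Band")
plt.ylabel("Mean reflectance")
plt.legend()
plt.show()

Plotting several feature masks on the same axes reproduces the multi-signature comparison described above.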

Part 2

Section 1

Spectral characteristics can also be used for large-scale analysis. Using a simple band ratio called the normalized difference vegetation index (NDVI), we can examine vegetation health and abundance in the AOI of Eau Claire and Chippewa counties. NDVI makes use of the following equation:

NDVI = (NIR - RED) / (NIR + RED)

To perform this analysis, we will use the Indices tool found under Unsupervised in the Raster tab, with the NDVI option selected for the index. In addition, we specify Landsat 7 Multispectral for the sensor, to match our input image.

figure 3: The Indices tool with the NDVI index selected. 
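To make the band math concrete, the same index is easy to compute directly in Python. This is a minimal sketch, assuming the red and NIR bands have already been read into numpy arrays; the tiny demo arrays here are hypothetical, not the lab data.

import numpy as np

# Hypothetical reflectance values for the red and near-infrared bands.
red = np.array([[0.08, 0.12], [0.30, 0.25]])
nir = np.array([[0.45, 0.50], [0.32, 0.28]])

# NDVI = (NIR - RED) / (NIR + RED), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)
print(ndvi)  # values near +1 suggest dense, healthy vegetation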

Section 2

We can also perform a similar band ratio to determine the abundance of ferrous material (iron content) in the soil. We will make use of the following equation:

Ferrous Material = MIR / NIR

We will again use the Indices tool, but this time select Ferrous Materials as the index. We will also again specify Landsat 7, as we are using the same input image as before.

figure 4: The Indices tool with Ferrous Materials selected as the index.
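The same kind of sketch applies to this ratio, again with hypothetical demo values rather than the lab imagery.

import numpy as np

# Hypothetical mid-infrared and near-infrared band values.
mir = np.array([[0.40, 0.42], [0.20, 0.18]])
nir = np.array([[0.45, 0.50], [0.32, 0.28]])

ferrous = np.where(nir == 0, 0.0, mir / nir)  # MIR / NIR band ratio
print(ferrous)  # higher ratios suggest more ferrous material in the soil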

Results

figure 5: The spectral signatures for the various features. Similar features such as the Asphalt Highway and Airport Runway have very similar signatures throughout all 6 bands. Interesting to note, however, is where they diverge. For example, while the previously mentioned Highway and Runway are very similar, the Concrete Surface (a bridge), supposedly made from similar material, diverges greatly from them in the 4th, 5th, and 6th bands. This could be due to a variety of factors, such as some of the reflectance of the water under the bridge being captured in some of the bridge's pixels.

figure 6: A map showing abundance of vegetation in Eau Claire and Chippewa counties. Vegetation abundance unsurprisingly seems to follow agricultural patterns; while cropland has a high amount of vegetation, urban landscapes have very little or even lack vegetation.

figure 7: A map showing ferrous material abundance in Eau Claire and Chippewa counties. Interesting to note is the mostly western spatial distribution for high ferrous material content. This is most likely related to underlying geological deposits. 

Sources

Earth Resources Observation and Science (EROS) Center. (n.d.). Retrieved May 14, 2019, from https://www.usgs.gov/centers/eros

Tuesday, May 7, 2019

Remote Sensing Lab 7

Goals and Background

This lab exercise will acquaint the user with a variety of photogrammetric tasks key to understanding remote sensing projects. These include calculation of scale and relief displacement, stereoscopy and the production of anaglyph images, and area measurement. In addition, the user will perform orthorectification on two input images.


Methods 

Part 1: Scale, Measurement, and Relief Displacement 

Section 1

For the first exercise, we will attempt to calculate scale for vertical images near Eau Claire, WI. We will do this with two methods: 

First, with the photo distance compared to actual ground distance:

Scale =  Photo Distance / Ground Distance

Second, with the focal length of the sensor, the terrain elevation, and the flying height of the aircraft:

Scale = Focal Length / (Flying Height - Terrain Elevation)

In remote sensing projects, the method used is determined by the available data. If there is no opportunity to collect ground distance out in the field, the second method must be used.
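Both methods reduce to one-line calculations. Here is a minimal sketch, with consistent units left as the user's responsibility and purely illustrative numbers:

def scale_from_distances(photo_distance, ground_distance):
    # Method 1: photo distance / ground distance, both in the same units.
    # Returns D for a 1:D scale.
    return ground_distance / photo_distance

def scale_from_geometry(focal_length, flying_height, terrain_elevation):
    # Method 2: focal length / (flying height - terrain elevation), same units.
    return (flying_height - terrain_elevation) / focal_length

print(f"1:{scale_from_distances(2.5, 100000):,.0f}")       # 1:40,000
print(f"1:{scale_from_geometry(5.98, 24000, 9552):,.0f}")  # all in inches; 1:2,416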

Section 2:

Another important photogrammetric task is measuring area and perimeter of certain features found in images. For this exercise, we will attempt to measure the area and perimeter of the lake in the following image.

figure 1: The lake to be measured.

To achieve this, we will make use of the Measure sub-tab under the Manage Data tab. To measure area, we will trace the outline of the lake with the Polygon tool. For perimeter, we will make use of the Polyline tool.

Section 3:

When examining errors in images, relief displacement is often one of the chief concerns.  We will calculate the relief displacement for an image of Eau Claire. 

To do this, we will use the following formula: 

Relief Displacement = (Real World Height * Radial Distance) / Height of camera
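As a one-line calculation, with all inputs converted to the same units beforehand (the values below are illustrative only, not the lab's measurements):

def relief_displacement(object_height, radial_distance, camera_height):
    # d = (h * r) / H, all in consistent units (here, inches).
    return object_height * radial_distance / camera_height

print(relief_displacement(1200, 10, 48000))  # 0.25 in. for this example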

Part 2: Stereoscopy

For the second part of the lab, we will create a series of 3D images using different inputs. Known as anaglyph images, these 3D images aid in interpretation and visualization of elevation. 

Section 1 and 2: Creating Anaglyph Image from a DEM and a DSM

To create an anaglyph image, we will use the Anaglyph Generation tool found under the Terrain tab, with an input image of Eau Claire and a DEM. To create an anaglyph from a DSM, we will use the same process, with a DSM as input rather than a DEM.

Part 3: Orthorectification 

Section 1

In the third and final part of the lab, we will conduct the process of orthorectification on two input images. 

To begin this process, we will create a Photogrammetric Project, which will provide a file framework for our work. For the model setup, we will select the Polynomial-based Pushbroom model category, with the SPOT Pushbroom option.

figure 2: The model setup window.

Next, we will have to set up the horizontal and vertical reference systems. For the horizontal coordinate system, we will select NAD27 (CONUS) with the UTM Zone 11 projection.

figure 3: The horizontal coordinate system window.

For the vertical reference system, we will choose the same coordinate system, or rather its vertical counterpart, NAVD27.

Section 2

Now that the model has been set up, we can add our first input image. This is accomplished using the Add icon under Images in the content panel. After adding our image, we can proceed to the first actual step of orthorectification: interior orientation. In this step we will confirm the parameters of the SPOT pushbroom sensor.

As the image included a header file that already contained the necessary information, we need only confirm it and select OK to complete interior orientation.

figure 4: The Interior Orientation window, with the correct sensor information already entered. 

Section 3

With interior orientation completed, we can begin exterior orientation, making our input images planimetrically correct. This will be achieved by adding Ground Control Points on our input image and matching them to an already-correct reference image.

We will accomplish this with the Point Measurement tool, found in the Classic Point Measurement option under the Point Measurement tab. We will create 11 total GCPs, matched to two different reference images to fully cover the image's area.

figure 5: The Point Measurement tool window, with the reference image on the right and the input image on the left. The final GCP shown was labeled 12 to help distinguish between GCPs from the two reference images (by skipping GCP 10).

With the horizontal GCPs added, we can also calculate elevation values (Z values) for all of them. This is accomplished by setting a DEM as the vertical reference and updating the Z values on all points.

Section 4

With all the necessary information about the GCPs added, we can now set the type and usage of all points. This is accomplished through the formula dialog. We will set the type to Full and the usage to Control.

With this done, we will add our second input image. Much like the first, we will have to conduct interior orientation on it. As the data is the same as before (both images came from the same sensor), we will simply confirm the already-obtained sensor information again.

We will also have to perform exterior orientation on the second image. However, instead of adding GCPs from a reference image, we can simply use the first image as the reference, creating a series of additional GCPs. This involves matching points on the second image to already-existing points on the first. As the images do not totally overlap, we will not match every point.

With GCPs added to both images, we can now generate tie points. These points relate positions on the two images to each other, making the images spatially related and placing them in their correct, real-world locations.

figure 6: The Tie Point Generation window. 

After the tie points have been successfully calculated, we can now proceed to triangulation. Triangulation uses a mathematical model to relate the images, sensor information, and the ground control points all together. 

figure 7: The triangulation window. Settings have been adjusted as needed to better suit the input images. 

With triangulation completed, exterior orientation has also been finished. 

We can now finish orthorectification with ortho resampling. We will use the Bilinear Interpolation Method. 

figure 8: The Ortho Resampling window. Take note of the two outputs at the bottom of the window; each represents a geometrically correct output of one of the two input images. 

Results

Scale Calculation:
Method 1:
S = PD / GD
GD = 8822.47 ft * 12 = 105869.64 in.
S = 2.625 in. / 105869.64 in.
S = 1 / 40331
1:40331


Method 2:
S = f / (H - h)
S = 152 mm / (2,000 ft - 796 ft)
S = 5.98 in. / (24000 in. - 9552 in.)
S = 5.98 in. / 14448 in.
S = 1 / 2416
1:2416

Area and Perimeter Calculation:
Area: 38.0209 hectares
Perimeter: 4133.50 meters

Relief Displacement calculation:
Feature measured photo height: 0.4 in.
Real-world height: 0.4 * 3209 = 1283.6 in. (107 ft.)
Radial distance: 10.5 in.
Height of camera: 3,980 ft. = 47,760 in.
Relief Displacement = (h * r) / H
Relief Displacement = (1283.6 * 10.5) / 47760
Relief Displacement = 0.28 in.

As relief displacement is positive, to correct it the object should be plotted inwards towards the principal point.


figure 9: The first anaglyph image, produced using a DEM. Though major elevation change can be seen with anaglyph glasses, the relief displacement present in the input image still lowers overall image quality.

figure 10: The second anaglyph image, produced using a DSM. Due to the nature of the DSM in comparison to the DEM, the image is of much higher quality, and individual feature elevation can be seen. In addition, the input image was an already-corrected orthoquad, further raising the output image's quality.




figure 11: The results of triangulation. Listed are the various RMS errors for each category of points, as well as the total RMS error for the entire process. As it is below 0.5, this is an acceptable outcome.


figure 12: The orthorectified output images. Compared to their input counterparts, their various internal errors such as relief displacement have been corrected. In addition, with the use of the GCPs and Tie Points, they have been planimetrically corrected to be in their correct, real world locations. Take note of how certain features (such as the dry riverbed) flow seamlessly from one image to the next.


Sources: 

Chippewa County, WI. (n.d.). Retrieved April 30, 2019, from https://www.co.chippewa.wi.us/

Eau Claire County. (2019, April 30). Retrieved from https://www.co.eau-claire.wi.us/

Natural Resources Conservation Service. (n.d.). Retrieved April 30, 2019, from https://www.nrcs.usda.gov/wps/portal/nrcs/site/national/home/

USDA. (n.d.). Retrieved April 30, 2019, from https://www.usda.gov/

ERDAS IMAGINE. (n.d.). Retrieved April 30, 2019, from https://www.hexagongeospatial.com/products/power-portfolio/erdas-imagine

Sunday, April 28, 2019

Remote Sensing Lab 6

Goals and Background:

This lab exercise seeks to teach the user about the process of geometric correction. Geometric correction is the important task of correcting the location of pixels in an image, which is often the start of major remote sensing projects. Two types of geometric correction will be covered: image to map rectification and image to image registration.

Methods:

Part 1: Image to Map Rectification 

The first method we will cover is that of image to map rectification. This method involves creating and matching a series of GCPs (ground control points) to a reference map. For this exercise, the image to be corrected will be of Chicago.

We will accomplish the actual correction through the Geometric Correction interface, accessed through the Control Points option under the Multispectral tab. For this correction, we will use a 1st-order Polynomial geometric model. This requires only 3 GCPs for the correction to be successful, though we will add a 4th for increased accuracy.

GCPs are first placed on the input image and then at the related location on the reference map, linking the two points. Once the threshold number of GCPs for the user-specified polynomial order is reached (in this case 3), the GCPs only need to be placed on the input image, as the program will place the related reference map point automatically.

figure 1: The input image with the 4 GCPs added.

Another important part of geometric correction is reducing the margin of error. This is achieved by lowering the total Root Mean Square (RMS) error. Though the ideal standard for total RMS is below 0.5, for now we'll accept under 2. RMS can be lowered by adjusting the location of the input image GCPs to better match those on the reference map.
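To make the RMS figure concrete: a 1st-order polynomial correction is an affine fit between the GCP pairs, and the total RMS error is the root mean square of the residual distances after that fit. Here is a minimal numpy sketch with made-up GCP coordinates (not the lab's):

import numpy as np

# Hypothetical GCPs: input-image coordinates and matching reference-map coordinates.
src = np.array([[10.0, 12.0], [200.0, 15.0], [20.0, 180.0], [210.0, 190.0]])
ref = np.array([[500.0, 900.0], [690.0, 905.0], [512.0, 1070.0], [702.0, 1078.0]])

# 1st-order polynomial (affine): ref = [x, y, 1] @ coeffs, solved by least squares.
G = np.hstack([src, np.ones((len(src), 1))])
coeffs, *_ = np.linalg.lstsq(G, ref, rcond=None)

# Residual distance at each GCP, then the total RMS error.
residuals = ref - G @ coeffs
rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"total RMS error: {rms:.3f}")  # aim for under 2 here, ideally under 0.5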

After the correct number of GCPs have been placed and RMS is at a satisfactory level, we can now perform the actual correction. For the resampling method, we will use the Nearest Neighbor method. 

figure 2: The Display Resample Image Dialog, with the Nearest Neighbor option selected.

Part 2: Image to Image Rectification 

For the second part of the exercise, we will perform an image to image rectification. This involves the same process as before; however, this time, instead of a reference map, we will use another, already geometrically correct image as the reference. Our input image is of an area in Sierra Leone.

For this correction, we will use a polynomial order of 3 rather than 1. This requires 10 GCPs instead of 3, but results in a more accurate correction. For additional accuracy, we will add a total of 12. Since we are attempting a more accurate correction, we will also reduce the total RMS error to under 0.5.

In addition, we will use the Bilinear Interpolation resampling method. Though this is more computationally expensive, it will contribute to a more accurate output. 

figure 3: The Display Resample Image Dialog, with the Bilinear Interpolation option selected.

Results: 

figure 4:  The image for Image to Map Rectification. The 4 GCPs have been added and dispersed throughout the map for the best possible result. RMS total error has been reduced to below 1, which will create an accurate output image.


figure 5: The image for Image to Image Registration (input image on the left). The 12 GCPs have been added and dispersed throughout the map for the best possible result. RMS total error has been reduced to below 0.5, which will create a very accurate output image. 

Sources:

Illinois Geospatial Data Clearinghouse. (n.d.). Retrieved April 28, 2019, from https://clearinghouse.isgs.illinois.edu/

Earth Resources Observation and Science (EROS) Center. (n.d.). Retrieved April 28, 2019, from https://www.usgs.gov/centers/eros

Friday, April 19, 2019

Remote Sensing Lab 5

Goals and Background:

In this lab exercise, the user will gain experience manipulating LIDAR point cloud data and generating derivative products. Manipulations include examining the data in 2D and 3D, and examining contours, elevation, and more. Products created by the user will include Digital Surface Models and Digital Terrain Models, as well as hill-shaded visual enhancements of both.

Methods:

Part 1: Visualization

Because of the large size of LIDAR data files, they are often split into individual tiles, each covering a certain area. This division of the data greatly lessens the computational cost of tools and simple point drawing, and greatly increases the ease of transfer between users. However, in order to produce derivative products later on, we will need the data in a uniform structure. The easiest way to do this is to create a LAS dataset.

Part 2: Generating LAS dataset and exploring point cloud with ESRI's ArcGIS software

To create the LAS dataset, we will use ESRI's ArcCatalog program. Within it, we can easily create a LAS dataset and add our 40 tiles of LIDAR point cloud data. These 40 tiles form an AOI around Eau Claire, Wisconsin.

figure 1: The created LAS dataset with the 40 tiles added. 

We will need to compute statistics for the new LAS dataset. This is done in the Statistics tab of the dataset's properties. While not useful for us at the moment, the statistics are a valuable resource in QA/QC processes.

We will also need to assign a coordinate reference system, since our dataset is missing the reference information. Fortunately, such information is often contained in the metadata.

figure 2: The related metadata. Included is the information for both the planar and vertical coordinate systems. 

After obtaining the information, we can define the CRSs in the dataset properties. The CRSs are NAD 1983 with the Lambert Conformal Conic projection and the North American Vertical Datum of 1988, respectively.

To double-check that the data is in the right location, we can overlay it on top of a shapefile of Eau Claire County.

figure 3: The overlaid LAS dataset on the Eau Claire County Shapefile. The data is in its correct location centered on the city of Eau Claire. 

Using the LAS dataset toolbar in ArcMap, we can explore the data in various ways. The points can be symbolized to show elevation, contours, aspect, or slope. In addition, we can filter which returns we want to display. Classification can also be used to display only returns that represent the ground.

Expanding upon basic visualization, ArcMap also has 2D and 3D viewers the user can make use of to better comprehend and visualize the data.


figure 4: The 2D viewer. Displayed is one of the bridges close to the UWEC Campus.

figure 5: The 3D viewer.

Part 3: Generation of Derivative Products

Section 1: Creating DSMs and DTMs from point clouds 


We can also create various products from this point cloud data, namely Digital Surface Models (DSMs) and Digital Terrain Models (DTMs).

A DSM is a model showing the elevation of the various surfaces across the point cloud. It is created by filtering the point cloud data to show only first returns, and then using the LAS Dataset to Raster tool found under Conversion Tools in ArcToolbox.

For this model, we will use the following settings*:
Value field: Elevation
Interpolation Type: Binning
Cell Assignment Type: Maximum
Void Fill Method: Natural Neighbor
Sampling Type: Cellsize
Sampling Value: 6.56168

*All other settings not mentioned were left with default options.
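For readers who prefer scripting the conversion, arcpy exposes the same tool. This is a sketch with hypothetical file paths; the parameter strings follow the tool's documented pattern, but verify the signature against your ArcGIS version.

import arcpy

# Hypothetical paths; the LAS dataset is assumed already filtered to first returns.
las_dataset = r"C:\lidar\eau_claire.lasd"
out_dsm = r"C:\lidar\dsm.tif"

# Binning interpolation: MAXIMUM cell assignment, NATURAL_NEIGHBOR void fill,
# CELLSIZE sampling at 6.56168 map units.
arcpy.conversion.LasDatasetToRaster(
    las_dataset, out_dsm, "ELEVATION",
    "BINNING MAXIMUM NATURAL_NEIGHBOR",
    "FLOAT", "CELLSIZE", 6.56168)

# Hillshade the result for visual clarity (3D Analyst).
arcpy.ddd.HillShade(out_dsm, r"C:\lidar\dsm_hillshade.tif")

Swapping MAXIMUM for MINIMUM (on ground-classified returns) would produce the DTM described below, and "INTENSITY" with AVERAGE the intensity image.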

The DSM can be enhanced using the Hillshade function found under Raster in 3D Analyst Tools to provide better visual clarity for the user.

DTMs are models showing the elevation of the ground surface of a point cloud. They can be created by filtering the point cloud data to show only ground-classified returns, and then using the LAS Dataset to Raster tool found under Conversion Tools in ArcToolbox.


For this model, we will use the following settings*: 
Value field: Elevation
Interpolation Type: Binning 
Cell Assignment Type: Minimum 
Void Fill Method: Natural Neighbor 
Sampling Type: Cellsize
Sampling Value: 6.56168
*All other settings not mentioned were left with default options. 


The DTM can be enhanced using the Hillshade function found under Raster in 3D Analyst Tools to provide better visual clarity for the user. 

Section 2: Creating an Intensity Image from a point cloud

We will also create an intensity image, which will show the intensity of the various returns (and in turn the reflectivity of surface features). It can be created by filtering the point cloud data to show only first returns, and then using the LAS Dataset to Raster tool found under Conversion Tools in ArcToolbox.

For this model, we will use the following settings*: 
Value field: Intensity
Interpolation Type: Binning 
Cell Assignment Type: Average 
Void Fill Method: Natural Neighbor 
Sampling Type: Cellsize
Sampling Value: 6.56168


*All other settings not mentioned were left with default options. 

Results: 

figure 6: The created DSM. Note the changes in elevation around the river. 

figure 7: The created hill-shaded DSM. Clarity has been greatly increased in terms of interpretation and feature identification. 

figure 8: The created DTM. Elevation is much easier to discern. 

figure 9: The hill-shaded DTM.

Sources: 

Eau Claire County. (n.d.). Retrieved April 19, 2019, from https://www.co.eau-claire.wi.us/departments/departments-l-z/planning-development/gis-division

Price, M. (2014). Mastering ArcGIS (6th ed.). McGraw-Hill Higher Education.

Friday, April 5, 2019

Remote Sensing Lab 4

Goals and Background:

Lab 4 seeks to better familiarize the user with various miscellaneous image functions that are commonly used in remote sensing programs. Some functions touched upon by the lab include radiometric enhancement, linking images to Google Earth for additional information, and image mosaicking, among others. By completing the lab, the user will gain experience with a variety of tools needed to successfully execute basic remote sensing projects.

Methods: 

Part 1: Image Subset 

Section 1: Image Subset of study area

The first exercise is to subset the image (or grab a specific part of it) using the inquire box. For this exercise, we want to create a subset image of the cities of Eau Claire and Chippewa Falls.

figure 1: The inquire box in the image centered on Eau Claire and Chippewa Falls. 

To create the actual subset image, we will use the "Create Subset Image" Tool found in the Raster Toolset. 

figure 2: The Create Subset Image tool. The "From Inquire Box" option has been selected for the bounds.

Section 2: Subsetting with the use of an AOI shapefile

Next, we want to again subset an image, but this time using the AOI (area of interest) shapefile method. We will again focus on Eau Claire and Chippewa Falls. To do this, we will overlay the shapefile on the image and save the outline as an AOI file.

figure 3: The shapefile representing Eau Claire and Chippewa Counties. Take note of the dotted outline, representing the AOI that will be saved. 

Returning to the Create Subset Image tool, we will select the AOI option and use our previously created AOI file.


figure 4: The Create Subset Image Tool, with the AOI option selected. 

Part 2: Image Fusion 

For the second exercise, we want to create a higher spatial resolution image from a lower resolution image to better aid spatial interpretation. We will achieve this with the Resolution Merge tool found under Pan-Sharpen in the Raster Toolset. The AOI remains on Eau Claire and Chippewa Counties. 

figure 5: The two input images to be used, with the higher spatial resolution pan-chromatic image in the right viewer. 

figure 6: The Resolution Merge Tool. The Multiplicative method and the Nearest Neighbor resampling technique were selected.


Part 3: Simple radiometric enhancement techniques

For the third exercise, we will attempt to enhance an image's spectral and radiometric quality by removing haze. We will achieve this by using the Haze Reduction Tool under Radiometric in the Raster Toolset. 

figure 7: The input image for our haze reduction. Take note of the prominent haze in the lower right corner of the image, lowering image quality. 
figure 8: The Haze Reduction tool.

Part 4: Linking the Image Viewer to Google Earth

For the fourth exercise, we will link the viewer to Google Earth to allow for better visual interpretation. This is achieved by searching for "Google Earth" in the search bar of the Help toolset and selecting the "Connect to Google Earth" option. Google Earth can serve as a great selective image interpretation key, allowing the user to gain additional knowledge about features in the AOI. In addition, it can help with general feature classification.

Part 5: Resampling

For the fifth exercise, we will resample an image, or change the size of its pixels. This is useful when dealing with images from different sensors, or when an image needs to be smoothed out. For this exercise, we will be resampling up, or reducing the pixel size.

We will achieve this using the Resample Pixel Size under Spatial in the Raster Toolset. We will try Resampling with two different methods: Nearest Neighbor and Bilinear Interpolation. 
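The difference between the two techniques can be reproduced outside ERDAS: in scipy's zoom, interpolation order 0 is nearest neighbor and order 1 is bilinear. A small sketch on a demo array rather than the lab image:

import numpy as np
from scipy.ndimage import zoom

band = np.arange(16, dtype=float).reshape(4, 4)  # tiny stand-in for one image band

nearest = zoom(band, 2, order=0)    # nearest neighbor: blocky, original values kept
bilinear = zoom(band, 2, order=1)   # bilinear: smoother, values interpolated
print(nearest.shape, bilinear.shape)  # both (8, 8) -- pixel size halved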

figure 9: The Resample Pixel Size Tool. The Nearest Neighbor Method has been selected. 


figure 10: The Resample Pixel Size Tool, with the Bilinear Interpolation Method selected. 

Part 6: Image Mosaicking 

Section 1: Mosaicking with Mosaic Express

For the sixth exercise, we will mosaic two adjacent satellite images. Image mosaicking is a useful tool when the AOI lies on the boundary of two images, or when it lies across multiple images. We will first mosaic the images using Mosaic Express, found under Mosaic in the Raster Toolset. Mosaic Express allows the user to quickly mosaic images, and as such we will leave most of the default parameters in place.

figure 11: The two input images to be mosaicked. The higher quality image is overlaid on top.
figure 12: The Mosaic Express Tool, with the two input images added. Default parameters have been left as is. 

Section 2: Mosaicking with Mosaic Pro

We will next mosaic the images with the Mosaic Pro tool under Mosaic in the Raster Toolset. Mosaic Pro is a much more comprehensive mosaicking tool; it requires much more user input than its simpler counterpart, Mosaic Express. However, this additional user input allows for a much better resultant image. We will use the same two images as before.
figure 13: The two input images loaded into the Mosaic Pro interface. The higher quality image is still on top. 

In the interface, we will specify the color correction and overlap methods to be used. For color correction, we will use the Histogram Matching method with the Overlap Areas option selected. With this option, only the intersecting overlap areas have their brightness values matched to the corresponding histogram, while the brightness values in the other parts of the mosaicked image are maintained. This way, the border between the two images will appear more seamless.
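Histogram matching itself is a short algorithm: each brightness value is mapped through the source image's cumulative distribution into the reference image's. Here is a minimal numpy sketch on hypothetical demo arrays (ERDAS's implementation may differ in detail):

import numpy as np

def match_histogram(source, template):
    # Remap source so its brightness histogram matches template's.
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)

    s_cdf = np.cumsum(s_counts) / source.size    # cumulative distributions,
    t_cdf = np.cumsum(t_counts) / template.size  # both scaled to [0, 1]

    # For each source value, take the template value at the same CDF position.
    matched = np.interp(s_cdf, t_cdf, t_values)
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
dark = rng.integers(0, 120, (50, 50)).astype(float)     # overlap from darker image
bright = rng.integers(80, 255, (50, 50)).astype(float)  # overlap from brighter image
print(match_histogram(dark, bright).mean())  # mean shifts toward the brighter image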

For the overlap method, we will use the default Overlay method. With this method, the brightness values from the top (better quality) image are used to determine the brightness values in regions of intersection in the mosaicked image.


figure 14: The Color Corrections window, with the Histogram Matching method and the Overlap Areas option selected.


Part 7: Binary Change Detection 

Section 1: Creating a Difference Image. 

For the seventh and last exercise, we will determine the amount of land cover/use that changed during a 20-year period in a region including Eau Claire. We will do this by determining and displaying the brightness values that changed between the two images, which are from 1991 and 2011 respectively.


figure 15: The two input images to be used. 

The first step in the process is to create a difference image from the two input images. A difference image will provide us with an indication of which pixels changed between the two input images. To create this difference image, we will use the Two Input Operators tool, found under Functions in the Raster Toolset.


figure 16: The Two Input Operators Tool. Notice how the "-" Operator has been selected, indicating that we want to see the change between the two input images. 

With the difference image created, we can examine its histogram. However, due to the nature of the Two Input Operators tool, the histogram is slightly misleading. The brightness values mapped close to the mean (the center of the histogram) are those which did not significantly change between the two images. Rather, it is the brightness values at the edges of the histogram, beyond a certain change threshold, that actually changed. This threshold can be calculated with the simple equation:

threshold = mean + 1.5(standard deviation)

Applying the data from the difference image's metadata, we can quickly calculate the threshold:
Mean: 12.253 
Std. Dev.: 23.946 
Threshold = Mean + 1.5(St. Dev.)  
Threshold = 12.253+1.5(23.946) 
Threshold = 48.172 
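The same arithmetic, plus the resulting change mask, is a few lines of numpy. A sketch on a hypothetical stand-in for the difference image:

import numpy as np

rng = np.random.default_rng(2)
diff = rng.normal(12.253, 23.946, (100, 100))  # stand-in for the difference image

mean, std = diff.mean(), diff.std()
upper = mean + 1.5 * std   # threshold = mean + 1.5(standard deviation)
lower = mean - 1.5 * std   # change shows up on both tails of the histogram
changed = (diff > upper) | (diff < lower)
print(upper, changed.mean())  # threshold and fraction of pixels flagged as changed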

We can apply the threshold to find the regions of the Histogram which contain the brightness values that changed. 


figure 17: The histogram of the difference image, with the change thresholds applied.


Section 2: Mapping Change in Pixels in Difference Image using Spatial Modeler


After examining our previously created difference image, it is evident that we will need another method to actually display the areas of change in our image. We will achieve this using Model Maker, found under Model Maker in the Toolbox Toolset. Model Maker is a useful tool in that it allows the user to create and visualize processes that are not already present in the ERDAS Imagine program.


figure 18: Model Maker with our process visualized. Like before, we will input our two images and create a difference image. 


figure 19: The histogram of the resulting difference image. A constant was used in the difference function of our model, so the histogram has been shifted to only positive brightness values. The area of change is now only on the upper bound of the histogram. We can calculate this upper threshold with the following equation:

Mean: 17.818 
St. Dev.: 18.082 
Change threshold: Mean + 3(St. Dev.) 
Change threshold: 17.818 + 3(18.082) 
Change Threshold: 72.064 

With our new upper threshold calculated, we can now develop a new model that will display only those pixels which changed between our two original input images.

figure 20: The new model, with the difference image as the input. A simple EITHER...IF...OR function will be used: if pixels in the image exceed the change threshold, they will be displayed; otherwise, all other values will be made dark.
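The model's conditional is equivalent to a single where() call. A sketch on a stand-in array, using the upper threshold calculated above:

import numpy as np

rng = np.random.default_rng(3)
diff = rng.normal(17.818, 18.082, (100, 100))  # stand-in for the shifted difference image

# EITHER diff IF diff > threshold, OR 0 otherwise: changed pixels kept, the rest dark.
change_image = np.where(diff > 72.064, diff, 0)
print((change_image > 0).sum(), "pixels flagged as changed")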

With only the pixels that changed displayed in a new image, we can bring the image into ArcMap or a similar mapping program for better visualization.

Results

figure 21: The successfully subset image of Eau Claire and Chippewa Falls.


figure 22: The successfully subset AOI image of Eau Claire and Chippewa Falls. Note that it shares the shape of the shapefile that was used to create it. 


figure 23: The increased spatial resolution pan sharpened image. The new image is of a much higher quality than the original multispectral image. 


figure 24: The haze reduced image. Notice the haze in the lower right corner has been cleared up. 


figure 25: The resampled image using the nearest neighbor method. 


figure 26: The resampled image using the bilinear interpolation method. It is generally smoother and more seamless than its nearest neighbor counterpart.


figure 27: The mosaicked image made using Mosaic Express.


figure 28: The mosaicked image made using Mosaic Pro. While the boundary can still be seen, it is generally more seamless and uniform in color. 

Sources: 

Earth Resources Observation and Science (EROS) Center. (n.d.). Retrieved April 5, 2019, from https://www.usgs.gov/centers/eros

Price, M. (2014). Mastering ArcGIS (6th ed.). McGraw-Hill Higher Education.