Thursday, November 17, 2016

Lab 6: Geometric Correction

Introduction

Lidar data is collected in the form of raw mass points that must undergo processing steps to ensure they are accurate and formatted properly for use by the client. The processing steps are divided into two parts: preprocessing and postprocessing. Geometric correction is an important preprocessing step in which individual pixels are placed in their proper X, Y positions. This lab is designed to provide experience with two important types of geometric correction that are usually performed on satellite images: image to map rectification and image to image registration. Both processes are done using ERDAS Imagine.

Methods

Part 1: Image to Map Rectification

Image rectification is the process of converting data file coordinates to a map coordinate system through the creation of ground control points (GCPs) on the image to be rectified and a corresponding set at the same geographic positions on a reference map of the same area. The map coordinates of the GCPs are used to transform the image and produce a planimetric image. In this section of the lab, a United States Geological Survey map of the Chicago metropolitan statistical area is used as a reference to rectify a Landsat TM image of the same area. To begin the process, two viewers must be open, with the viewer containing the image to be rectified active. The interpolation process fits polynomials to the GCP data; the necessary order of the polynomial increases as the amount of distortion increases. As the order of the polynomial being applied increases, so does the number of GCPs necessary for the interpolation to run. The amount of distortion in the image provided for this section of the lab was not extensive, so a first order polynomial was used and only three GCPs were required; however, four pairs will be placed. Until the minimum number of GCPs has been met, each GCP pair must be manually placed in the same location on each image; subsequent GCPs only need to be placed on one image and are automatically placed on the other. After the minimum number of GCPs has been created, the status of the model changes from "has no solution" to "solution is current" and RMS values appear for each GCP, along with a total RMS value at the bottom right of the screen. An ideal total RMS error is less than 0.05; anything above 2.0 is too high and will not result in an accurate transformation. The goal for this section of the lab is 2.0 or less, which is achieved by zooming in on GCP pairs and placing them more accurately. The accuracy of all points must be verified, including the pairs with automatically placed counterparts. When an acceptable RMS error is achieved, the geometric correction process is ready to be run. The interpolation method can be changed to fit the correction being done; since the distortion in this image is low, a simple nearest neighbor interpolation should result in an accurate correction. After the process has run, the resulting image is brought in and examined for accuracy.
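
To make the interpolation and RMS reporting concrete, the sketch below fits a first order polynomial (an affine transformation) to four hypothetical GCP pairs with NumPy and computes each GCP's residual along with the total RMS error. The coordinates are invented for illustration, and note that ERDAS reports its residuals in input pixel units rather than the map units used here.

    import numpy as np

    # Hypothetical GCP pairs: (file x, file y) on the image to be rectified
    # and (map x, map y) read from the reference map.
    src = np.array([[120.0, 340.0], [880.0, 310.0], [450.0, 900.0], [700.0, 650.0]])
    dst = np.array([[402100.0, 4635800.0], [425300.0, 4636400.0],
                    [412000.0, 4618500.0], [419600.0, 4626100.0]])

    # A first order polynomial, x' = a0 + a1*x + a2*y (likewise for y'),
    # has three coefficients per axis, which is why three GCPs are the
    # minimum; a fourth makes the fit overdetermined so residuals appear.
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef_x = np.linalg.lstsq(A, dst[:, 0], rcond=None)[0]
    coef_y = np.linalg.lstsq(A, dst[:, 1], rcond=None)[0]

    # Per-GCP residuals and the total RMS error shown in the GCP table.
    pred = np.column_stack([A @ coef_x, A @ coef_y])
    residuals = np.linalg.norm(pred - dst, axis=1)
    total_rms = np.sqrt(np.mean(residuals ** 2))
    print(residuals, total_rms)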

Part 2: Image to Image Registration

In this section of the lab, a corrected Landsat TM image of Sierra Leone is used as a reference to correct a distorted image of the same area. To assess the level of distortion in the image, it is brought into an ERDAS viewer and the corrected reference image is overlaid in the same viewer. At this point the viewer swipe can be used to swipe aside the reference image in either a vertical or horizontal direction to compare the images. In this case the level of distortion is quite high and will require a polynomial of higher than first order. For this example a third order polynomial will be necessary, requiring a minimum of ten GCPs. The correction process is the same as in part one, except points one through ten will need to be manually placed on both images before the model solution becomes current. After the minimum of ten GCPs has been reached, two additional points will be placed. In this section of the lab the total RMS error will need to be 0.05 or below; this is achieved by meticulously examining each GCP pair and improving their accuracy. Due to the extent of distortion, when the correction process is ready to be run the interpolation method should be changed to one that results in a more spatially accurate image. Bilinear interpolation is better suited to produce a spatially accurate image from one with this level of distortion. With an ideal RMS error achieved and the proper interpolation method selected, the output image is highly accurate.
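
The minimum GCP counts quoted in these two parts follow from the number of coefficients an order-t polynomial has in x and y, namely (t+1)(t+2)/2; a one-line check:

    def min_gcps(order):
        # An order-t polynomial in x and y has (t+1)(t+2)/2 coefficients,
        # so at least that many GCPs are needed for a solution.
        return (order + 1) * (order + 2) // 2

    print(min_gcps(1), min_gcps(3))  # prints 3 10, matching parts one and two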

Results

Geometric correction is an important preprocessing step in the process of transforming Lidar raw mass points into an end product that can be used by the client. In this lab, the image to map rectification process used map coordinate counterparts to rectify the pixel coordinates, resulting in a planimetric image (Figure 1). In this process the target RMS error was quite difficult to reach, and after meticulous relocation of each GCP the RMS error remained around 1.89. The image to image registration process used a similar procedure, using a known corrected image to correct the distorted image. In the examples used in this lab, an ideal RMS error was much easier to obtain in the image to image registration process. Other than the spatial distortion, the images were exactly the same, and it was possible to zoom in to the same four pixels of each image and place the GCP at their intersection. This method resulted in a 0.00 RMS error after the first ten GCPs were placed (Figure 2). Subsequent points beyond this increased the RMS error, as the automatically placed points were not as accurate. Correction of these inaccuracies resulted in a final RMS error of 0.2259 (Figure 3).


Figure 1. Result of image-to-map rectification, rectified image on left and original on right

Figure 2. A perfect RMS error after 10 GCPs had been added 


Figure 3. After 12 GCPs had been added the RMS was corrected to 0.2259, well within the ideal range


Sources

Illinois Geospatial Data Clearing House. (n.d.).

Earth Resources Observation and Science Center. (n.d.). United States Geological Survey.

Wilson, C. (2016). Lab 6: Geometric Correction. Eau Claire, WI, USA.


Thursday, November 10, 2016

Lab 5: LiDAR remote sensing

Goals and Background
The demand for LiDAR data grew rapidly during the early 2000s, when data processing systems and IT architecture were improved to handle the terabytes of data produced by LiDAR scanners. Prior to this time, photogrammetry was used to process aerial images, but this method only produced what the sensors could "see". LiDAR scanners are able to "see" through forested areas and are capable of gathering data about the surface below the forest (Schukman & Renslow, 2014). As LiDAR technology continues to improve, the demand for its data, as well as careers in the field, also continues to grow. This lab aims to provide fundamental knowledge of LiDAR data structure and processing through specific activities involving the processing and retrieval of surface and terrain models, as well as the processing and creation of intensity images and other derivative products from point clouds. The files used in this lab are in LAS format.

Methods
Part 1
In the first part of this lab, all LAS files are displayed together in ERDAS Imagine to visualize the point clouds as a whole image (figure 1). The LAS dataset opened in ERDAS is not projected; however, it is still possible to verify each tile's location within the study area. To do this, a file containing quarter sections of Eau Claire County is opened in ArcMap and the labels are changed to correspond with the individual LAS file names in ERDAS. By selecting a quarter section in ArcMap and then selecting the same one in ERDAS, the location of each tile can be visualized within the study area.
Part 2
This section of the lab introduces a hypothetical scenario in which you are a GIS manager who needs to perform a quality check of LiDAR point cloud data in LAS format as well as verify the current classification of the LiDAR data. To begin, a dataset of all the LAS files is created using ArcCatalog (figure 2) and the statistics for the dataset are populated using the statistics tab in the dataset properties. Most older LiDAR datasets do not have a coordinate system defined, as was the case for this set. The coordinate system information for the point clouds can be found in the metadata (figure 3) and then defined under the corresponding XY and Z coordinate system tabs in the dataset properties window. Now that the dataset has been appropriately configured, it can be brought into ArcMap and visualized as a point cloud in 2D and 3D. A shapefile corresponding to the area encompassing the LAS files can also be brought into ArcMap to verify that the projection was set correctly. Even though the dataset is spatially located correctly, the identify tool will reveal that each individual file does not have a coordinate system, since this was applied to the dataset as a whole. The point cloud can now be visualized by activating the LAS Dataset toolbar and zooming in to an extent of two tiles or closer. The point clouds can be visualized by elevation, aspect, slope, and contour. It is important to note that to view the elevation points, the number of classes in symbology needs to be changed from 9 to 8. Another method to enhance visualization for a specific purpose is to change the filter settings; both classifications and returns can be manipulated. For example, when the contour surface is displayed with all returns and all classifications, it is quite excessive and difficult to read. By choosing to display only ground contours, the contour lines of the Earth's surface become much easier to interpret (figure 4). When the dataset is displayed as points, they represent the first returns of the laser pulse and do not have a classification, but the profile and 3D profile view tools can be used to create an accurate depiction of small features within the image (figure 5).
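
The same dataset creation and configuration can be scripted with arcpy; the sketch below is a minimal stand-in for the ArcCatalog steps, with the folder path, output name, and coordinate system (NAD83 / UTM zone 16N here) all assumed for illustration.

    import arcpy

    las_folder = r"C:\lab5\las_tiles"         # folder of LAS tiles (assumed path)
    las_dataset = r"C:\lab5\eau_claire.lasd"  # output LAS dataset (assumed name)

    # Create the LAS dataset, apply the coordinate system found in the
    # metadata, and compute the dataset statistics in one call.
    sr = arcpy.SpatialReference(26916)  # stand-in WKID; use the CRS from the metadata
    arcpy.CreateLasDataset_management(las_folder, las_dataset,
                                      spatial_reference=sr,
                                      compute_stats="COMPUTE_STATS")
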
Part 3
In this section, digital surface models (DSM) and digital terrain models (DTM) will be derived from the point clouds. The models are rasters and will need to be given a spatial resolution. The spatial resolution should not be smaller than the nominal pulse spacing (NPS); to determine this, the average NPS for the dataset should be estimated using either the list of point spacing values in the dataset properties tab or the Point File Information tool (figure 6). With this information you are now ready to create a DSM from first returns, a DTM, and hillshades of each. Since there will be numerous outputs, the current workspace should be set to where these will be saved so this location does not have to be re-set for each output. The LAS Dataset to Raster tool is used to create the DSM and DTM from point elevations, with the filter set to first return and ground respectively. Each of these outputs can be enhanced by developing a hillshade, which is done using the 3D Analyst tool called Hillshade. The output of the hillshade from the first return DSM adds a 3D appearance to the image (figure 7). The Effects toolbar contains a Swipe tool that allows you to swipe aside the top layer to view another active layer underneath it. This tool can be useful when one image is serving as an ancillary image or to compare outputs. An image can also be created based on intensity, whose values are generated by the first returns. The LAS Dataset to Raster tool is also used to accomplish this; only the value field is changed to intensity instead of elevation. The resulting image in ArcMap is very dark and obscure; however, when opened in ERDAS Imagine the output is much improved (figure 8).
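
A hedged arcpy sketch of this workflow is shown below: filtered LAS dataset layers stand in for the first return and ground filters, LAS Dataset to Raster produces the models, and Hillshade from the 3D Analyst extension enhances each one. The paths, layer names, and the 2-unit cell size (chosen to stay at or above the estimated NPS) are assumptions.

    import arcpy

    arcpy.CheckOutExtension("3D")
    arcpy.env.workspace = r"C:\lab5\outputs"  # assumed workspace for all outputs
    lasd = r"C:\lab5\eau_claire.lasd"

    # First-return DSM: filter the dataset layer to return 1, then rasterize
    # the point elevations with binning.
    arcpy.MakeLasDatasetLayer_management(lasd, "first_returns", return_values=["1"])
    arcpy.LasDatasetToRaster_conversion("first_returns", "dsm.tif", "ELEVATION",
                                        "BINNING MAXIMUM LINEAR",
                                        sampling_type="CELLSIZE", sampling_value=2)

    # Ground DTM: filter to class code 2 (ground points) instead.
    arcpy.MakeLasDatasetLayer_management(lasd, "ground_only", class_code=[2])
    arcpy.LasDatasetToRaster_conversion("ground_only", "dtm.tif", "ELEVATION",
                                        "BINNING AVERAGE LINEAR",
                                        sampling_type="CELLSIZE", sampling_value=2)

    # Hillshades add the 3D appearance seen in figure 7.
    arcpy.HillShade_3d("dsm.tif", "dsm_hs.tif")
    arcpy.HillShade_3d("dtm.tif", "dtm_hs.tif")

Changing the value field from "ELEVATION" to "INTENSITY" in the same conversion tool yields the intensity image discussed above.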

Results
LiDAR technology is a growing industry with an increasing number of careers available in the field. One is also likely to encounter LiDAR data in many other geospatial careers, and a fundamental knowledge of its data structure and processing is crucial. This lab has provided experience with various components of LiDAR data as well as tools for processing the data as needed.

Figure 1. Point clouds open in ERDAS

Figure 2. Creating dataset of all LAS files



Figure 3. Coordinate systems found in metadata

Figure 4. Contours filtered to display only ground classification




Figure 5. A bridge displayed using profile and 3D view tools

Figure 6. Point File Information tool to find average NPS





Figure 7. Output of Hillshade tool applied to a DSM

Figure 8. Intensity image displayed in ERDAS Imagine (left) and ArcMap (right)





Sources

Claire, E. (2013). Lidar Point Cloud and Tile Index.
Price, M. (2014). Mastering ArcGIS 6th Edition.
Schukman, K., & Renslow, M. (2014). Penn State Topographic Mapping with Lidar. Retrieved from History of Lidar Development: https://www.e-education.psu.edu/geog481/l1_p4.html


Tuesday, November 1, 2016

Lab 4: Miscellaneous image functions

Goals and Background

                Remotely sensed imagery is interpreted and analyzed for use in a very broad range of disciplines. Sometimes this imagery contains flaws that make it difficult or impossible to interpret in the manner necessary for a specific need, and there is not always sufficient alternative imagery. To solve this problem, many different functions have been created to correct the various problems that may occur. These miscellaneous image functions can be performed to obtain an area of interest (AOI), enhance visual image interpretation, and/or improve the quality of an image. This lab is designed to provide hands-on experience with some of these functions available in ERDAS Imagine. The functions included in this lab are: image subsetting, image fusion, radiometric enhancement, linking an image viewer to Google Earth, resampling, image mosaicking, and image differencing. At the end of this lab each of these functions should be familiar, and one should be able to repeat each process using ERDAS Imagine.

Methods

Part 1: Image Subsetting to Create an Area of Interest
                There are two ways to subset an area of interest from an existing image. The first is through the use of an Inquire Box; to do this, bring the image containing the area of interest into the viewer and click on the Raster tools. Next, right click anywhere on the image and select Inquire Box from the menu, and a white, square-shaped Inquire Box will appear on the image. To move the Inquire Box to the area of interest, position the cursor inside the box and hold down the left mouse button to drag the box. The size of the box can be adjusted in the same manner by placing the cursor over the bottom right corner. After the box has been moved and adjusted to the desired location and size, click Apply in the Inquire Box viewer. The area of interest is now ready to be subsetted; from the Raster tool menu along the top of the screen, select Subset & Chip and then Create Subset Image. In the Subset interface that opens, click on Output File and make sure to select an appropriate location and name to save the file. Also on the right side of the Subset interface, click on the "From Inquire Box" button to bring in the coordinates within the Inquire Box. After this is completed, hit OK to run the tool, dismiss the dialog when it is finished, and bring the image into the viewer (figure 1).
                The second method to subset an area of interest is by creating an area of interest shapefile, which is useful when the study area is not square or rectangular in shape. This method also begins with an image that includes the area of interest opened in a viewer. You then add the shapefile of your area of interest to the viewer as well; make sure to change the file type to .shp in order to locate the shapefile. After the shapefile has been placed in the viewer, hold down the shift key and click on each polygon that is part of the area of interest to select them. At the top of the screen, click on the Home menu and click on "Paste From Selected Object", and dotted lines should appear around the area of interest. The pasted area can now be saved as an area of interest file; click on File -> Save As -> AOI Layer As and save it in the appropriate folder. Now click on Raster at the top of the screen, choose Subset & Chip, and name the file to be saved. At the bottom center of the Subset interface, click on the AOI button, navigate to the AOI file created a moment ago, click OK, and bring in the new subsetted image (figure 2).
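
                Both subset methods have straightforward scripted equivalents; the arcpy sketch below uses the Clip tool as a stand-in, with all paths and rectangle coordinates invented for illustration.

    import arcpy

    image = r"C:\lab4\chicago_tm.img"  # assumed input image

    # Inquire Box style subset: clip to a rectangular extent given as
    # "xmin ymin xmax ymax" in map units (placeholder numbers).
    arcpy.Clip_management(image, "400000 4600000 430000 4640000",
                          r"C:\lab4\subset_box.img")

    # AOI shapefile style subset: clip to the polygon geometry itself,
    # which handles study areas that are not square or rectangular.
    arcpy.Clip_management(image, "#", r"C:\lab4\subset_aoi.img",
                          in_template_dataset=r"C:\lab4\aoi.shp",
                          clipping_geometry="ClippingGeometry")
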
Part 2: Image Fusion (Pan-Sharpening)
The image fusion tool creates an image with a higher spatial resolution than the original image to improve visual interpretation. Open two viewers; bring the image to be pan-sharpened into one viewer and the image containing the spatial resolution the image will be changed to into the other. Click on Raster at the top of the screen, choose the Pan Sharpen tool, then select Resolution Merge from the menu. A Resolution Merge window with many parameters will now open. Beginning at the top left of the window, click on the folder below "High Resolution Input File" to select the image containing the target spatial resolution. Next, the "Multispectral Input File" will be the image to be pan-sharpened, and the "Output File" will need to be given a name and location. The next step is to choose one of the three pan-sharpening methods and a resampling technique; here, Multiplicative and Nearest Neighbor are used. With all the necessary parameters set, click OK. After the model has run, the image can be brought into a viewer alongside another viewer containing the original image for comparison (figure 3).
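
The arithmetic behind the Multiplicative method is simple enough to sketch directly; the NumPy function below illustrates the idea rather than ERDAS's exact implementation, and assumes the multispectral bands have already been resampled to the panchromatic grid (nearest neighbor in this lab).

    import numpy as np

    def multiplicative_pansharpen(ms, pan):
        # Multiply each multispectral band by the co-registered pan band;
        # ms is a (bands, rows, cols) array on the same grid as pan.
        fused = ms.astype(np.float64) * pan.astype(np.float64)
        # Rescale each band back into the 0-255 display range.
        for b in range(fused.shape[0]):
            lo, hi = fused[b].min(), fused[b].max()
            scale = (hi - lo) if hi > lo else 1.0
            fused[b] = 255.0 * (fused[b] - lo) / scale
        return fused.astype(np.uint8)
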
Part 3: Simple Radiometric Enhancement Techniques
                One radiometric enhancement technique is haze reduction, which is done to improve the spectral and radiometric quality of the image. Begin with an image that contains haze open in a viewer. Click on Raster, Radiometric, and Haze Reduction at the top of the screen. In the Haze Reduction window that opens, set the output file and click OK. Now, in a second viewer, bring in the newly created file and compare the two. The new image should be bolder, with haze visibility greatly reduced.
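
                ERDAS performs its haze reduction internally, but a related and much simpler technique, dark object subtraction, illustrates the principle: the darkest pixels in a band should reflect almost nothing, so their value approximates the additive haze signal. A minimal NumPy sketch, assuming an 8-bit band:

    import numpy as np

    def dark_object_subtraction(band):
        # Treat the minimum DN as the additive haze contribution and
        # remove it from the whole band, clipping to the 8-bit range.
        haze = int(band.min())
        return np.clip(band.astype(np.int32) - haze, 0, 255).astype(np.uint8)
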
Part 4: Linking Image Viewer to Google Earth
                The ability to link an image in Erdas to the same spatial area in Google Earth is a relatively new development. Google Earth provides high resolution imagery and can be used as an ancillary image to serve as a selective image interpretation key. Begin by fitting an image open in the viewer to the frame and click on the Google Earth icon at the top center of the screen, then click on Connect to Google Earth. Place the Google Earth Viewer side by side with the Erdas viewer and click on Match GE to View, they will now be placed in the same extent. Now click on Sync GE to View and zoom in to view the differences in quality between the two images (figure 4).
Part 5: Resampling
                The resampling tool can be used to increase or decrease pixel size. The process of increasing pixel size decreases the number of pixels and the file size, so it is called resampling down. On the other hand, decreasing pixel size increases the number of pixels and the file size, so it is called resampling up. Before running this tool, place the file to be resampled in a viewer and check the pixel size in the metadata. Now, click on Raster, Spatial, and Resample Pixel Size. In the Resample window, choose the input file and give the output file an appropriate name and location. The Resample Method will default to Nearest Neighbor, leave this as is and change the pixel size to half of the original size in both the XCell and YCell. Finally, check the box next to Square Cells to ensure the output pixels are squares and click OK. Repeat this process, but change the Resample Method to Bilinear Interpolation and compare each output to the original image.
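
                The two resampling methods can also be compared outside ERDAS; the SciPy sketch below halves the pixel size of a toy band (doubling the pixel count, i.e. resampling up), where order=0 is nearest neighbor and order=1 is bilinear interpolation.

    import numpy as np
    from scipy import ndimage

    band = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy 4x4 band

    nearest = ndimage.zoom(band, 2, order=0)   # blocky duplicated pixels
    bilinear = ndimage.zoom(band, 2, order=1)  # smoothed transitions
    print(nearest.shape, bilinear.shape)       # both (8, 8)
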
Part 6: Image Mosaicking
                Image mosaicking is a process to combine two images when an area of interest is very large and/or overlaps the boundaries of more than one image. There are two ways to perform an image mosaic; the first is called Mosaic Express. As stated in the title, it is quick and easy to use. The output image, however, is not of high quality and should only be used for visual interpretation purposes. The second method, Mosaic Pro, produces a much improved image but requires much more user input to achieve these results. Both images need to be displayed in the same viewer to begin, but do not immediately add the images after navigating to them. Highlight the first image, and in the Select Layers to Add window click on the Multiple tab and select Multiple Images in Virtual Mosaic, then click on the Raster Options tab. In this tab select Fit to Frame and make sure the Background Transparent option is also checked. The image can now be added to the viewer; now follow the same exact process for the other image. Under Raster, click on Mosaic and Mosaic Express and add each image, adding the image to be on top first. Next, click on Root Name, title the image, and run the model. To begin Mosaic Pro, add the two images in the same manner as before and click on Mosaic Pro. In the Mosaic Pro window, click on Add Images. In the Add Images window, click on the Image Area Options tab and select Compute Active Area. Experiment with the various functions in the Pro window. In the Color Corrections option, check Use Histogram Matching and select Overlap Areas. Do not change anything else; run the model and bring the result into a viewer (figure 5).
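
                Mosaic Express's behavior of keeping the first-added image on top has a rough arcpy analogue in the Mosaic To New Raster tool with the FIRST mosaic method; the sketch below assumes two hypothetical six-band scenes and does not reproduce Mosaic Pro's histogram matching.

    import arcpy

    arcpy.MosaicToNewRaster_management(
        [r"C:\lab4\scene_top.img", r"C:\lab4\scene_bottom.img"],  # assumed inputs
        r"C:\lab4", "mosaic.img",
        pixel_type="8_BIT_UNSIGNED",
        number_of_bands=6,          # six-band Landsat TM assumed
        mosaic_method="FIRST")      # overlap keeps the first (top) image
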
Part 7: Binary Change Detection

                Binary change detection involves the estimation and changing of pixel brightness values in order to detect change over time. To create a difference image, begin with two images of the same area at different times, activate the Raster tools, click on Functions, and then Two Image Functions. In the window, put the more recent file as File #1 and the older as File #2, and give a name to the Output File. Also change the operator from + to - and change the Layer under each file to just one layer to expedite the process. Now the upper and lower thresholds of change/no change need to be estimated; open the metadata to view the histogram and note the mean and standard deviation. Plug these values into the equation "mean + 1.5*std", then use the histogram to approximate the center value and add the result to it to get the upper threshold. Then subtract the result of the equation from the estimated center value to get the lower threshold (figure 6). These changes will now be mapped out using Model Maker from the top menu. In Model Maker, place two Raster objects side by side, a Function object below them, and another Raster object below that, and connect them with arrows. Double click on each object, placing the newer image in the right Raster object and the older in the left; input the function (newer file - older file + constant) and name the output Raster. Click on the red lightning bolt icon to run the model, and recheck the function if there is an error. View the output's histogram; adding the constant has removed negative values, and only the upper threshold needs to be calculated. Use the equation "mean + 3*std", and a model can now be made to display the changes. Place one Raster object, a Function object below it, and another Raster object below that. Use the output image from the previous model as the input in this model. In the Function object options, change it to Conditional, choose "Either If Or", and make the equation read "EITHER 1 IF [(put available input here) > threshold value] OR 0 OTHERWISE", then name the output file. Use ArcMap to overlay the output file on the older image file and create a simple layout showing change and no change areas (figure 7).
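
                The two Model Maker models reduce to a few lines of array arithmetic; the NumPy sketch below mirrors them under assumed inputs, using the constant offset and the "mean + 3*std" upper threshold from the text (the constant's actual value is not specified in the lab, so 127 is a placeholder).

    import numpy as np

    def binary_change(newer, older, constant=127.0, k=3.0):
        # Model 1: difference image with a constant to avoid negative values.
        diff = newer.astype(np.float64) - older.astype(np.float64) + constant
        # Upper change/no change threshold from "mean + 3*std".
        threshold = diff.mean() + k * diff.std()
        # Model 2: EITHER 1 IF (diff > threshold) OR 0 OTHERWISE.
        return np.where(diff > threshold, 1, 0).astype(np.uint8)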

Results

     Below are images captured throughout the lab and referred to in the Methods section. This lab has provided experience with many miscellaneous image functions, from simple functions like Haze Reduction to the more user-intensive Mosaic Pro. Remote sensing technology does not always obtain perfect imagery, and all of the functions included in this lab are important for the visual interpretation process.




Figure 1. Inquire Box subset
Figure 2. AOI subset

Figure 3. Original Image vs. Multiplicative Pan Sharpen

Figure 4. Linked and Synced Google Earth View

Figure 5. Mosaic Pro result


Figure 6. Histogram with lower and upper change/no change thresholds

Figure 7. Map result of Binary Change Detection



Sources

Earth Resources Observation and Science Center. (n.d.). United States Geological Survey.

Price, M. (2014). Mastering ArcGIS 6th Edition Dataset. McGraw Hill.

Wilson, C. (2016). Lab 4: Miscellaneous image functions. Eau Claire, WI, USA.