Monday, December 15, 2014

Lab 11: LIDAR Remote Sensing

Introduction:

The main purpose of this lab was to give the class an introduction to LIDAR data and how to process it.  Though this is just an introduction to LIDAR, learning how to process data from this advanced technology will put the class at a great advantage in the marketplace, as relatively few people know how to process the data correctly and it is an ever-growing, necessary skill.  Some objectives of the lab include:  retrieval and processing of surface terrain models, and processing a point cloud to create various products from it.


Methods:

The first portion of the lab had the class visualize a LIDAR point cloud in ERDAS Imagine.  The LAS files were brought in and viewed, yet ERDAS has limited LIDAR functionality when compared to ArcGIS.  

Knowing this, ArcMap was opened to analyze the LIDAR data.  The first step to using LIDAR point clouds with ArcGIS is creating an LAS dataset.  The LAS dataset was created using the Catalog window in ArcMap.  The various LAS files were then imported into the dataset and their properties were analyzed to check for errors (Figure 1).  One quality assurance/quality control method that was learned was to look at the Min Z and the Max Z values to see if they match the expected elevation range.  When importing the LAS files, the xy coordinate system and the z coordinate system needed to be selected.  This information was found within the metadata file that came with the LAS data.

The LAS dataset properties dialog showing the various LAS files that were imported into it.  (Figure 1)
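As a rough illustration outside of ArcGIS, the same Min Z/Max Z check could be scripted.  The sketch below assumes the laspy Python library and uses a made-up folder name and elevation thresholds; it simply flags any tile whose elevation range falls outside the range expected from the metadata.

# Hypothetical QA/QC sketch: flag LAS tiles whose elevation range falls
# outside the range expected from the metadata (all values are examples).
import glob

import laspy          # assumes the laspy package is installed
import numpy as np

EXPECTED_MIN_Z = 200.0   # lowest plausible elevation for the study area (example)
EXPECTED_MAX_Z = 400.0   # highest plausible elevation for the study area (example)

for path in glob.glob("lidar_tiles/*.las"):   # hypothetical folder of LAS files
    las = laspy.read(path)
    z = np.asarray(las.z)                     # scaled elevations in project units
    min_z, max_z = z.min(), z.max()
    ok = EXPECTED_MIN_Z <= min_z and max_z <= EXPECTED_MAX_Z
    print(f"{path}: Min Z = {min_z:.2f}, Max Z = {max_z:.2f}, "
          f"{'looks reasonable' if ok else 'CHECK FOR ERRORS'}")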

From here the LAS dataset was able to be viewed in ArcMap.  At first it appeared as just several squares; these were the footprints of the individual LAS files, which are tiled to keep file sizes manageable.  The points themselves don't draw until the display is zoomed in far enough, which helps speed up processing.

At this point the LAS Dataset toolbar was explored to learn the various options for viewing and analyzing a LIDAR point cloud.  Some of these options included viewing the point cloud as a TIN or simply as elevation points.  Different side profiles of the data could also be viewed using the toolbar.

Several products were then generated using the LIDAR point cloud.  These included a digital surface model (DSM) that was generated using the first-return points, a digital terrain model (DTM) using the last return, and hillshades using both the DSM and DTM.  A LIDAR intensity image was also generated as a final product (Figure 2).

This LIDAR intensity image has a spectral response similar to the near infrared and is very fine in resolution.  (Figure 2)
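For a sense of what the DSM and DTM generation is doing under the hood, the sketch below (not the ArcGIS workflow itself) grids first-return and last-return points into simple surfaces with laspy and numpy; the tile name and cell size are assumptions.

# Rough sketch: bin first returns (canopy/building tops) and last returns
# (closer to bare ground) into simple DSM/DTM grids.
import laspy
import numpy as np

CELL = 2.0                                      # output cell size in map units (assumed)

las = laspy.read("lidar_tiles/tile_1.las")      # hypothetical tile
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
first = np.asarray(las.return_number) == 1
last = np.asarray(las.return_number) == np.asarray(las.number_of_returns)

cols = ((x - x.min()) / CELL).astype(int)
rows = ((y.max() - y) / CELL).astype(int)
shape = (rows.max() + 1, cols.max() + 1)

def grid(mask, reducer):
    """Bin the selected returns into a raster using the given per-cell reducer."""
    fill = -np.inf if reducer is np.maximum else np.inf
    surface = np.full(shape, fill)
    reducer.at(surface, (rows[mask], cols[mask]), z[mask])
    surface[np.isinf(surface)] = np.nan         # cells that received no returns
    return surface

dsm = grid(first, np.maximum)   # highest first return per cell
dtm = grid(last, np.minimum)    # lowest last return per cell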


Conclusion:

Learning how to utilize and process LIDAR data may seem like a daunting task at first, but it actually isn't as complicated as some would make it seem.  ArcGIS and the LAS datasets within ArcGIS are rather user-friendly and useful, as many different products can be generated from the LIDAR point clouds.  LIDAR is a technology that will only continue to grow in the future, as demand for it will increase due to its currently unparalleled precision.  Possessing the skills to utilize and process LIDAR data will be invaluable to the class in the future.

Lab 10: Object-Based Classification

Introduction:

This lab exercise was set up to teach the class how to properly perform object-based classification using eCognition, an object-based image processing tool.  Object-based classification integrates both spectral and spatial information to aid in extracting land use/land cover features from remotely sensed imagery.  Some of the objectives in this lab included:  segmenting an image into spatial and spectral clusters, selecting which of these clusters (objects) to use as training samples using a nearest neighbor classifier, and executing and refining the object-based classification output.


Methods:

As this was the first time the class had used eCognition, the first portion of the lab involved getting to know the software and then importing an image of the same Eau Claire and Chippewa County study area that was classified in previous lab exercises.  A project was then created, and the imagery was displayed using layer mixing set to the 4, 3, 2 band combination that the class has become so used to.

At this point, it was time to create the different image objects (Figure 1).  This is a simple process that involved opening up the process tree and creating a new process.  Multiresolution segmentation was used along with a scale parameter of 10.  From there, the process was executed and the various objects were created.  It was possible to view the objects in several different ways, including pixel view, object mean view, with or without object outlines, and with or without transparency.

The different objects that were created can be seen here as outlined in blue.  (Figure 1)
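eCognition's multiresolution segmentation algorithm is proprietary, so the sketch below is only a loose analogue using SLIC superpixels from scikit-image to show the general idea of grouping pixels into spectrally and spatially homogeneous objects.  The file name, band numbers, and segment count are assumptions, and n_segments only roughly plays the role of eCognition's scale parameter.

# Loose analogue of object creation: segment a 4, 3, 2 composite into
# superpixel "objects" and outline them for display.
import numpy as np
import rasterio                                   # assumed for reading the imagery
from skimage.segmentation import slic, mark_boundaries

with rasterio.open("ec_cc_subset.img") as src:    # hypothetical file name
    img = src.read([4, 3, 2]).astype(float)       # NIR, red, green composite
img = np.moveaxis(img, 0, -1)                     # (rows, cols, bands)
img = (img - img.min()) / (img.max() - img.min()) # scale to 0-1 for segmentation

objects = slic(img, n_segments=2000, compactness=10, start_label=1)
outlined = mark_boundaries(img, objects)          # objects outlined, as in Figure 1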

From here, classes were created; these included agriculture, forest, green vegetation, urban, and water.  A nearest neighbor classifier using mean values was then assigned to the classes.  Sample objects were selected based on the class's knowledge of the spectral reflectance of the classes and the appearance of the land cover in a 4, 3, 2 band image (Figure 2).  After the samples were selected, the classification was run and it was possible to review the results.

Here several green vegetation training samples and one urban training sample can be seen.  (Figure 2)
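Conceptually, the nearest neighbor classifier assigns each unlabeled object the class of the most spectrally similar training object, using the per-band mean of each object as its feature vector.  The sketch below shows that idea with scikit-learn, reusing the img and objects arrays from the segmentation sketch above; the object IDs used as training samples are invented placeholders, and this is not eCognition's API.

# Sketch of nearest-neighbor classification on object mean values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def object_means(img, objects):
    """Mean band values for each segmented object (img: rows x cols x bands)."""
    ids = np.unique(objects)
    feats = np.array([img[objects == i].mean(axis=0) for i in ids])
    return ids, feats

ids, feats = object_means(img, objects)

# Hypothetical training samples: object id -> class label.
training = {15: "water", 87: "forest", 203: "agriculture",
            341: "urban", 590: "green vegetation"}
train_mask = np.isin(ids, list(training))
clf = KNeighborsClassifier(n_neighbors=1).fit(
    feats[train_mask], [training[int(i)] for i in ids[train_mask]])

labels = clf.predict(feats)                       # a class for every object
classified = np.empty(objects.shape, dtype=object)
for i, lab in zip(ids, labels):
    classified[objects == i] = lab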

One interesting aspect of object-based classification is that it is easy to manually edit objects that are known to be incorrectly classified.  After all manual editing was performed, it was possible to export the result to a raster and create a map from the classified image (Figure 3).


This is the final map of the image that was classified using object-based classification.  (Figure 3)


Conclusion:

Object-based classification is an interesting, newer way to perform image classification.  It is more useful in some applications than others and seems to be especially customizable if the original result isn't of high enough quality.  eCognition is also an extremely useful and rather user-friendly software package that could help make object-based classification more relevant in many future applications.

Monday, November 17, 2014

Lab 7: Digital Change Detection

Introduction:

This exercise was intended to teach the class different methods of measuring change in land use/land cover over time.  Different methods were used to qualify and quantify change through visual methods, post-classification change detection, and a model that mapped detailed from-to changes in land use/land cover.


Methods:

Write Function Memory Insertion:

Two images of Eau Claire and the surrounding counties, one from 1991 and one from 2011, were compared using write function memory insertion.  First, the red band of the 2011 image was stacked with the near infrared band of the 1991 image.  Then the red color gun was set to display the 2011 red band, while the green and blue color guns were set to the 1991 NIR band.  This gave an image that was able to display where most of the change occurred based on brightness and coloring (Figure 1).  The areas that are the brightest red are the areas that have experienced the most change.  This method is able to show where change has occurred but doesn't quantify it in any way.  Also, there is no from-to change information available.

It can be seen from this image that most of the change that occurred in the area appears to have been near the rivers or in the urban areas.  (Figure 1)
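The sketch below reproduces the color gun logic with numpy and rasterio instead of ERDAS: the 2011 red band drives the red gun while the 1991 NIR band drives both the green and blue guns, so changed areas stand out in red.  The file names and band numbers are assumptions.

# Minimal write function memory insertion composite.
import numpy as np
import rasterio

def stretch(band):
    """Simple 2-98 percentile stretch to 0-255 for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

with rasterio.open("ec_1991.img") as src:
    nir_1991 = stretch(src.read(4).astype(float))       # band 4 = NIR (assumed)
with rasterio.open("ec_2011.img") as src:
    red_2011 = stretch(src.read(3).astype(float))       # band 3 = red (assumed)

# R = 2011 red, G = B = 1991 NIR; bright red pixels indicate change.
composite = np.dstack([red_2011, nir_1991, nir_1991])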

Post-Classification Comparison Change Detection:

Post-classification comparison change detection provides the from-to information that write function memory insertion did not and is a much better method for assessing quantitative changes.  Two classified images of the Milwaukee Metropolitan Statistical Area (MSA), for 2001 and 2006, were assessed using this method.

The first step involved looking at the measurements of the different classes to quantify percentage changes and put them into a table (Figure 2).  This was done by reading the areas from the attribute table and doing some simple conversions to get the measurements into hectares.

It was found that the majority of the change occurred in bare soil and open spaces, at least in terms of percentage change.  This could be due to more development taking place in these areas, as they are much more easily affected than urban areas.  These classes also cover less area to begin with, so even small absolute changes produce high percentages of change.  (Figure 2)
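As a back-of-the-envelope version of that table, the sketch below converts per-class pixel counts to hectares and percent change, assuming 30 m pixels (900 square meters each, with 10,000 square meters per hectare); the pixel counts themselves are invented placeholders, not the Milwaukee MSA figures.

# Convert class pixel counts to hectares and percent change between dates.
PIXEL_AREA_HA = 30 * 30 / 10_000          # hectares per 30 m pixel

counts_2001 = {"urban": 850_000, "agriculture": 1_200_000,
               "forest": 600_000, "wetlands": 150_000, "bare soil": 40_000}
counts_2006 = {"urban": 900_000, "agriculture": 1_150_000,
               "forest": 590_000, "wetlands": 140_000, "bare soil": 60_000}

for lc in counts_2001:
    ha_2001 = counts_2001[lc] * PIXEL_AREA_HA
    ha_2006 = counts_2006[lc] * PIXEL_AREA_HA
    pct = (ha_2006 - ha_2001) / ha_2001 * 100
    print(f"{lc:12s} {ha_2001:10.1f} ha -> {ha_2006:10.1f} ha  ({pct:+.1f} %)")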
Mapping this change is crucial in many applications, particularly environmental assessment and monitoring.  Of particular interest in this exercise was the change from wetlands to urban, forest to urban, agriculture to urban, wetlands to agriculture, and agriculture to bare soil.  These various changes were mapped out using the Wilson-Lula algorithm (Figure 3).  From here, all of the images were brought into ArcMap and a map was generated to show the change that occurred (Figure 4).

The corresponding classes were all loaded into the algorithm at once in model maker in order to produce five different images, one for each of the desired changes.  (Figure 3)

This is the map generated to show the various types of change that occurred and where they occurred over the period from 2001 to 2006.  (Figure 4)
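The from-to logic behind the change model is simple to express outside of model maker as well: a pixel belongs to a given transition when it holds the "from" class in 2001 and the "to" class in 2006.  The sketch below shows that logic with numpy and rasterio; the class codes and file names are assumptions, and it writes GeoTIFFs rather than the ERDAS outputs used in the lab.

# From-to change masks for the five transitions of interest.
import numpy as np
import rasterio

WETLANDS, FOREST, AGRICULTURE, URBAN, BARE_SOIL = 1, 2, 3, 4, 5   # assumed codes

with rasterio.open("milwaukee_lulc_2001.img") as src:
    lulc_2001 = src.read(1)
    profile = src.profile
with rasterio.open("milwaukee_lulc_2006.img") as src:
    lulc_2006 = src.read(1)

transitions = {
    "wetlands_to_urban.tif": (WETLANDS, URBAN),
    "forest_to_urban.tif": (FOREST, URBAN),
    "ag_to_urban.tif": (AGRICULTURE, URBAN),
    "wetlands_to_ag.tif": (WETLANDS, AGRICULTURE),
    "ag_to_bare_soil.tif": (AGRICULTURE, BARE_SOIL),
}
profile.update(driver="GTiff", dtype="uint8", count=1)
for out_name, (from_cls, to_cls) in transitions.items():
    change = ((lulc_2001 == from_cls) & (lulc_2006 == to_cls)).astype("uint8")
    with rasterio.open(out_name, "w", **profile) as dst:
        dst.write(change, 1)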


Conclusion:

Digital change detection is a very useful and applicable part of remote sensing.  Not only is it useful, but there are many different methods of performing it.  Different considerations should be made when deciding which type of digital change detection to run, particularly whether one wants a quantifiable result or simply wants to display where change occurred.  Through this lab, the class has become much more comfortable with performing different methods of digital change detection and interpreting the results.

Tuesday, October 28, 2014

Lab 6: Classification Accuracy Assessment

Introduction: 

This lab was designed to teach the class some methods of evaluating the accuracy of classification results.  Checking the accuracy of the classification is always necessary following image classification.  The goals in this lab involved learning how to collect ground reference testing samples to perform accuracy assessment and how to interpret the accuracy assessment.


Methods:

The accuracy assessment in this lab was run on the unsupervised classified image created in Lab 4.  The aerial photo used as the reference image was from the National Agriculture Imagery Program of the United States Department of Agriculture.  The image was taken in 2005, while the classified image is from 2000.  This temporal difference between the images should typically be avoided when performing accuracy assessment.

Both of the images were brought into two separate viewers in ERDAS Imagine (Figure 1).  From there, the accuracy assessment tool was opened and the classified image was loaded into it.  At this point, the reference image was selected as the source in which points would be generated.  From here, 125 stratified random points were generated with a minimum of fifteen points per classification category (Figure 2).

The reference image was used to run accuracy assessment on the classified image created in Lab 4; the two are shown here side by side.  (Figure 1)

This is the Add Random Points tool.  As it can be seen, 125 stratified random samples were selected.  (Figure 2)

All of the random points then needed to be classified one by one based on where they fell in the reference image (Figure 3).  As each point was classified, it turned yellow to show that it didn't need further attention (Figure 4).

This shows some of the random points that were selected.  The numbers under the 'Reference' column were the numbers assigned according to the reference image.  (Figure 3)

As each of the 125 points was assigned a category according to the reference image, it turned yellow.  This shows all 125 points on the reference image.  (Figure 4)

Once all of the reference categories were assigned, the accuracy report was created.  This report showed the various accuracy values of the image.  The different accuracy values reported were the overall classification accuracy, the producer's accuracy for each feature, the user's accuracy for each feature, and the overall Kappa statistic, which measures how much of the observed agreement is due to the classification itself rather than to chance.  The accuracy report was then put into a manageable and presentable table (Figure 5).

These are the results of the accuracy report for the unsupervised classified image.  The classification accuracy is overall too low to use this image.  Classification should be reattempted.  (Figure 5)
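For reference, the math behind that report is straightforward to reproduce.  The sketch below builds an error matrix from reference and classified labels and computes overall accuracy, producer's and user's accuracy, and Kappa; the label arrays are random stand-ins for the 125 points, not the actual lab data.

# Error matrix and accuracy statistics from reference vs. classified labels.
import numpy as np

classes = ["water", "forest", "agriculture", "urban", "bare soil"]
rng = np.random.default_rng(0)
reference = rng.integers(0, 5, 125)        # stand-ins for the 125 reference labels
classified = rng.integers(0, 5, 125)       # stand-ins for the classified labels

k = len(classes)
cm = np.zeros((k, k), int)                 # rows: classified, columns: reference
for c, r in zip(classified, reference):
    cm[c, r] += 1

n = cm.sum()
overall = np.trace(cm) / n
producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error, per reference class
users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error, per map class
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall - expected) / (1 - expected)

print(f"Overall accuracy: {overall:.1%}   Kappa: {kappa:.3f}")
for name, p, u in zip(classes, producers, users):
    print(f"{name:12s} producer's {p:.1%}   user's {u:.1%}")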

The supervised classification image from Lab 5 was then run through the same accuracy assessment process (Figure 6).  The accuracy values for this classification were even lower, particularly the classification of urban/built-up areas.

The results from the accuracy assessment of the supervised classified image are actually rather troubling as the overall accuracy is a putrid 52%.  The image should definitely be reclassified.  (Figure 6)


Conclusion:

Performing accuracy assessment is an extremely crucial part of performing land use/land cover classification.  It is a time consuming process, though it is a necessary post-processing step for classified imagery before it can be used for anything else.  If accuracy assessment had not been run on these images, they may have been used for policy decisions despite their horrid accuracy levels, which could have vastly affected people's lives.

Sources:

United States Department of Agriculture. (2005). National Agriculture Imagery Program. Retrieved October 23, 2014.

Wednesday, October 22, 2014

Lab 5: Pixel-based Supervised Classification

Introduction:

This week's lab was designed to properly educate the class on how to extract sociocultural and biophysical data from remotely sensed images through pixel-based supervised classification.  The lab was designed to instruct the class how to properly select training samples in order to create a supervised classifier, how to analyze the quality of the training samples which were collected, and how to produce a useful and meaningful land use/land cover map with this data.  This method will be compared and contrasted with the unsupervised classification run in Lab 4.


Methods:

The first step in performing supervised classification is to collect training samples (Figure 1).  These training samples will be of the different classes that are desired in the final land use/land cover map.  They should have typical spectral signatures of the desired features.  For example, water training samples should be of both standing and turbid water, and forest samples should include both dry and riparian vegetation.  These samples are simply selected by drawing a polygon in the desired area to be sampled and then uploading it to the signature editor tool.  These training fields can more accurately be delineated by performing field work or by using high resolution aerial photos.  For this lab, the class was just asked to link Google Earth to an image of the Eau Claire and Chippewa County area.  Twelve water training samples were collected along with eleven forest, nine agriculture, eleven urban area, and seven bare soil.  The various sample signatures were organized, classified (Figure 2), and plotted (Figure 3).

This shows the first training sample collected for water.  As can be seen, a simple polygon is drawn in the desired area of the training sample.  Its spectral signature is then uploaded into the signature editor tool to be saved.  (Figure 1)
The various classes were all given similar colors after they were organized and named.  (Figure 2)

After the training samples were classified and colorized they were plotted here.  One of the objectives of this plot is to make sure there is maximum separability between the classes.  (Figure 3)
Once training samples that may not have had enough separability were eliminated, it was time to put the classes together and merge the signatures (Figure 4).

This is the signature mean plot that resulted from merging the training samples.  The five desired classes in the land use/land cover map can be seen on the right.  (Figure 4)
The training samples collected were then saved as a signature file.  From here, the supervised classification tool (Figure 5) was opened and the signature file was loaded.  The tool was then run and a land use/land cover classified image was created.  A map was then generated from the image to show the land use/land cover of the area (Figure 6).  The map that was generated doesn't seem very accurate, as the urban/built-up area is much more spread out than it is in real life.  This error could be due to the lack of separability between the bare soil, agricultural land, and urban/built-up signatures.

This is the supervised classification tool.  Running it is as easy as inputting the image and the signature file that was saved from the training samples.  (Figure 5)
This is the land use/land cover map that was generated from the supervised classification.  Unfortunately, it appears as if the urban/built-up class covers much of the area that should be bare soil or agricultural land.  (Figure 6)
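The lab used ERDAS's supervised classification tool with the merged signature file; as a conceptual stand-in, the sketch below assigns each pixel to the class whose signature mean is closest in spectral space (a minimum-distance-to-means classifier, one of the decision rules ERDAS offers).  The signature means, file name, and band count are invented placeholders, not the signatures collected in the lab.

# Minimum-distance-to-means classification sketch.
import numpy as np
import rasterio

class_names = ["water", "forest", "agriculture", "urban", "bare soil"]
# Hypothetical merged signature means: one row per class, one column per band.
means = np.array([
    [55,  40,  30,  15,  10,   8],    # water
    [45,  38,  35, 110,  70,  30],    # forest
    [60,  55,  60, 130, 110,  60],    # agriculture
    [90,  85,  95, 100, 120,  95],    # urban/built-up
    [85,  80,  90, 115, 140, 110],    # bare soil
], dtype=float)

with rasterio.open("ec_cc_image.img") as src:       # hypothetical image, six bands
    bands = src.read().astype(float)                # (bands, rows, cols)

pixels = bands.reshape(bands.shape[0], -1).T        # one row per pixel
# For a full scene this should be done in chunks to limit memory use.
dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
classified = dists.argmin(axis=1).reshape(bands.shape[1:]) + 1   # class codes 1-5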


Conclusion:

The supervised classification in this case didn't create a very well done map.  The classes don't look correct and seem unnatural.  This is likely due to user error in gathering training samples and a lack of separability between signatures.  Compared to the map from Lab 4, this map seems to misrepresent many features, particularly the urban, agricultural, and bare soil classes.  In the future, higher quality reference imagery, instead of just Google Earth, should be used to collect better training samples.  A higher separability between signatures should also be the goal in order to avoid these errors.  This was a good lesson in the errors that can occur in supervised classification and what should be done in the future to avoid them.

Wednesday, October 8, 2014

Lab 4: Unsupervised Classification

Introduction:

The goal of this lab exercise was to teach the class how to extract sociocultural and biophysical information from remotely sensed imagery by using an unsupervised classification algorithm.  Image classification is a huge part of remote sensing and this lab was designed to teach how to perform it.  The lab was specifically designed to help the class garner an understanding of input configuration requirements and execution of an unsupervised classifier and teach how to recode multiple spectral clusters generated by an unsupervised classifier into useful land use/land cover classes.


Methods:

Experimenting with Unsupervised ISODATA Classification Algorithm

The Iterative Self-Organizing Data Analysis Technique (ISODATA) is one available unsupervised classification algorithm.  The image to be classified was a satellite image of Eau Claire and Chippewa Counties in Wisconsin (Figure 1).  The image was loaded into ERDAS Imagine, then the unsupervised classification tool was opened.  The ISODATA option was selected, and the number of classes to be created was set to ten.  Running the tool produced a coded image; however, at this point it was impossible to tell what each coded value meant.

This is the original image of Eau Claire and Chippewa Counties to be classified.  The land use/land cover data will be extracted from this image later on in the write up.  (Figure 1)
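ISODATA itself isn't available in common Python libraries, but it behaves much like k-means with extra rules for splitting and merging clusters, so the sketch below uses k-means as a stand-in to show what the unsupervised step produces: ten spectral clusters with no meaning attached yet.  The file name is an assumption.

# k-means as a rough stand-in for ISODATA clustering.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("ec_cc_image.img") as src:        # hypothetical file name
    bands = src.read().astype(float)                 # (bands, rows, cols)

pixels = bands.reshape(bands.shape[0], -1).T         # one row per pixel
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pixels)
clusters = kmeans.labels_.reshape(bands.shape[1:])   # coded image, clusters 0-9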

Recoding of Unsupervised Clusters into Meaningful Land Use/Land Cover Classes:

The next step in the process was to recode the clusters into colors that suited the land use/land cover.  Water was to be set as blue, forest as dark green, agriculture as pink, urban/built up areas as red, and bare soil as sienna.  The raster editor table was opened and the various features were compared by linking a historic view of Google Earth to the ERDAS viewer.  Each land cover cluster was thoroughly analyzed until a final product was created that was recoded into the appropriate color classes (Figure 2).

Here is the classified image according to land use/land cover.  Blue is water, dark green is forest, red is urban built-up, sienna is bare soil, and pink is agriculture.  When compared to the actual land coverage on Google Earth, it appeared that the 10 classes originally generated by the unsupervised classification tool were too broad and didn't capture enough of the variability, as there were areas classified as bare soil that were actually forest, and zones classified as urban that were actually agricultural land or bare soil.  (Figure 2)

Improving the Accuracy of Unsupervised Classification:

To try to improve the accuracy of the ISODATA unsupervised classification, the unsupervised classification tool was run once again on the image of Eau Claire and Chippewa Counties (Figure 1).  However, this time the number of classes created was increased to twenty, while the convergence threshold was set to 0.92 instead of 0.95 (Figure 3).

This is the unsupervised classification tool with the new settings for the second attempt at running unsupervised classification.  (Figure 3)
The tool was run and the data was once again recoded as in the earlier part of the lab.  Only this time there were twice as many classes to recode, allowing more "gray" areas such as transition zones to be sorted into the correct classification (Figure 4).

This is the second classified image.  When comparing it with the first it appears as if there's less bare soil, more forested and agricultural areas, and the urban areas are more concentrated.  When comparing these classifications to the Google Earth historical imagery later on, it appeared as if this second classification was more accurate than the first.  (Figure 4)

Recoding Land Use/Land Cover Classes for Map Generation:

At this point, the image was once again recoded to give all of the blue (water) areas a value of 1, all of the green (forest) areas a value of 2, all of the pink (agriculture) areas a value of 3, all of the red (urban built-up) areas a value of 4, and all of the bare soil areas a value of 5 (Figure 5).  Doing this made it easy to bring the image into ArcMap, where a final land use/land cover map was generated (Figure 6).

This shows the process of recoding each class into one number in order to use the values to generate a map.  The New Value section is the section that had to be altered in order to create the desired effect.  (Figure 5)
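The recoding step amounts to a lookup from the twenty spectral clusters to the five land use/land cover codes.  The sketch below shows that lookup with numpy; the cluster-to-class assignments and the stand-in cluster image are invented, not the actual assignments made in the lab.

# Collapse twenty spectral clusters into the five land use/land cover codes.
import numpy as np

WATER, FOREST, AGRICULTURE, URBAN, BARE_SOIL = 1, 2, 3, 4, 5
lookup = {0: WATER, 1: WATER, 2: FOREST, 3: FOREST, 4: FOREST,
          5: AGRICULTURE, 6: AGRICULTURE, 7: AGRICULTURE, 8: URBAN,
          9: URBAN, 10: URBAN, 11: BARE_SOIL, 12: BARE_SOIL,
          13: AGRICULTURE, 14: FOREST, 15: WATER, 16: URBAN,
          17: AGRICULTURE, 18: BARE_SOIL, 19: FOREST}

clusters = np.random.randint(0, 20, (100, 100))      # stand-in for the 20-cluster image
lulc = np.vectorize(lookup.get)(clusters)            # recoded image with values 1-5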
This is the final land use/land cover map generated in ArcMap from the recoded image.  (Figure 6)


Conclusion:

Using unsupervised classification to find land use/land cover from satellite imagery is a relatively pain free process that can be accurate to a point.  This accuracy seems to increase with the number of classes created, as can be seen when comparing the ten-class image to the twenty-class image.  However, this method has its limitations.  It makes assumptions and relies on the user to ultimately determine the classes post-classification.  Ultimately, this seems to be a viable method for creating land use/land cover maps that can be used at smaller scales.

Tuesday, October 7, 2014

Lab 3: Radiometric and Atmospheric Correction

Introduction:

This lab exercise was designed to give experience to the class in correcting atmospheric interference in remotely sensed images.  It involves performing both relative atmospheric correction and absolute atmospheric correction of remotely sensed images.  The methods empirical line calibration, dark object subtraction, and multidate image normalization were used to perform the atmospheric correction.


Methods:

Empirical Line Calibration:

Empirical line calibration (ELC) is a method of performing atmospheric correction which forces remotely sensed data to match in situ spectral reflectance measurements.  These reference spectra are obtained from a spectral library.  In this lab a spectral library was used to perform ELC on a 2011 image of the Eau Claire area.

The first step was to bring the image into ERDAS Imagine and then open up the Spectral Analysis Work Station tool.  From there, the image was loaded into the tool and the atmospheric correction tool (Figure 1) was opened to begin collecting samples and referencing them to features within the spectral library.  Points were placed on the image in certain areas and then referenced to an ASTER spectral library and a USGS spectral library.  An example of a point selected was a point in the middle of Lake Wissota; this was referenced to a tap water feature in the ASTER spectral library, as tap water was the only freshwater feature available.  Another example was asphaltic concrete (Figure 2).  This example helps show the limited capabilities of the ELC method.

The atmospheric adjustment tool can be used in ELC to find points and relate them to land surface features in a spectral library.  In this case the features used were asphaltic concrete, pine wood, grass, alunite AL706 and tap water.             (Figure 1)

This graph compares the spectral reflectance of the point selected in the image, determined to be concrete, with the reference reflectance chosen from the spectral library.  From these pairs an equation was developed to bring the image values closer to the expected reflectance in the spectral library.  (Figure 2)

After all of the points were selected and referenced to a spectral signature in the library, equations were developed by the tool to bring the reflectance in the image closer to the expected reflectance from the libraries.  The regression equations were then applied by running the preprocess atmospheric adjustment tool.  Saving the preprocessed image was the last step in completing ELC to correct for atmospheric interference.
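At its core, ELC fits a per-band line relating the image values at the sample points to the library reflectance of the matched features, then applies that line to the whole band.  The sketch below shows the fit with numpy; the paired values are placeholders, not the points collected in the lab.

# Per-band empirical line fit: library reflectance = gain * image value + offset.
import numpy as np

image_dn = np.array([62.0, 78.0, 95.0, 120.0, 140.0])           # sample point values (example)
library_reflectance = np.array([0.03, 0.08, 0.15, 0.26, 0.34])  # matched library values (example)

gain, offset = np.polyfit(image_dn, library_reflectance, 1)

def calibrate_band(band_dn):
    """Apply the fitted line to an entire band (array of image values)."""
    return gain * band_dn + offset

print(f"band 1: reflectance = {gain:.5f} * DN + {offset:.5f}")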


Enhanced Image Based Dark Object Subtraction:

Enhanced image based dark object subtraction (DOS) is a relatively robust method of correcting for atmospheric interference.  DOS involves two steps: the first converts the image to an at-satellite spectral radiance image, and the second converts the at-satellite spectral radiance image to true surface reflectance.  This process was performed using the same 2011 image of the Eau Claire area as in the first part.

Model maker (Figure 3) played a large part in running DOS on the image.  The first step was to use the equation given to convert each band of the image separately into an at-satellite spectral radiance image.  Each band was brought into model maker and had the equation run on it.

The model maker window with all of the inputs, equations, and outputs is shown here.  This model was run on all six of the bands of the Eau Claire 2011 image to convert them to at-satellite radiance images, as the first part of DOS requires.  (Figure 3)

From here, model maker was once again used to convert all of the radiance images into true surface reflectance images (Figure 4).  The information needed for the equations, such as the pixel values and the atmospheric transmittance from ground to sensor, was all either obtained from the metadata, available online, or given.  At this point all of the layers were stacked to create the final true surface reflectance image.


This is a look at the equation to convert the radiance image of band one into a true surface reflectance image.  Completing these equations for each band of the image was a rather painstaking and involved process as different values exist for every band.  (Figure 4)
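Numerically, the two DOS steps for a single band look like the sketch below, which uses one common formulation of the reflectance equation (after subtracting the haze radiance estimated from the darkest objects).  Every constant shown is a placeholder to be replaced with the values from the image metadata and the lab handout, and the exact equation from the lab should take precedence.

# DN -> at-satellite radiance -> surface reflectance for one band (placeholder values).
import numpy as np

LMIN, LMAX = -1.52, 193.0           # spectral radiance range for the band
QCALMIN, QCALMAX = 1.0, 255.0       # quantized DN range
ESUN = 1983.0                       # exoatmospheric solar irradiance for the band
d = 1.0124                          # Earth-Sun distance in astronomical units
theta_s = np.deg2rad(90 - 56.3)     # solar zenith angle from the solar elevation
L_haze = 2.3                        # path radiance estimated from the dark object
Tv, Tz, E_down = 1.0, 0.70, 0.0     # transmittance and downwelling terms (given)

dn = np.random.randint(1, 256, (100, 100)).astype(float)     # stand-in for the band

# Step 1: DN to at-satellite spectral radiance.
radiance = (LMAX - LMIN) / (QCALMAX - QCALMIN) * (dn - QCALMIN) + LMIN

# Step 2: radiance to surface reflectance with the dark object subtracted.
reflectance = (np.pi * d**2 * (radiance - L_haze)) / (
    Tv * (ESUN * np.cos(theta_s) * Tz + E_down))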


Multidate Image Normalization:

Multidate image normalization is a relative atmospheric correction method that is normally used when it is impossible to obtain in situ measurements to perform atmospheric correction or when metadata isn't available for an image.  It is used to normalize interference between two different images taken at different dates.  Multidate image normalization is mainly used for image change detection.

This process was run on images from the Chicago area.  One of the images was from 2000, while the other was taken in 2009.  The first step was to open up spectral profile plots to gather pseudo-invariant features (PIFs), which function somewhat like ground control points.  These PIFs were gathered in each image, in the same spot, and only over features that experienced very little change, such as airports or water (Figure 5).  The spectral reflectance of the different points can then be viewed in the spectral profile windows (Figure 6).  A total of fifteen PIFs were gathered in this case.

All of the points were selected in the same spot in both images.  A total of fifteen points were selected over features that would've experienced little to no change between 2000 and 2009, such as airports and water features.  (Figure 5)

The spectral signatures for the fifteen PIFs in both images can be seen in the two spectral profile viewers here.  (Figure 6)

At this point, the data from the bands of each of the fifteen PIFs was extracted and brought into Excel (Figure 7) to graph (Figure 8) and find the equations to normalize the two images.  Image normalization correction models (Figure 9) were then developed to generate the final products.  The band layers were then stacked to complete the process.

This is the reflectance data for all of the PIFs in the various bands.  From here, graphs were made in order to find the necessary equations to convert each band.  (Figure 7)

This is an example of one of the graphs generated using the PIF data.  This graph is of band one and the line equation was used to run the model to create the normalized image.  (Figure 8)

This model was made in order to finish the image normalization.  The equations in the models were created from the equations generated in the Excel graphs.  (Figure 9)
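The normalization itself is just a per-band linear regression: the 2009 PIF values are regressed against the 2000 PIF values, and the resulting line is applied to the whole 2009 band so it matches the 2000 radiometry.  The sketch below shows that with numpy; the PIF values are placeholders, not the fifteen points collected in the lab.

# Per-band normalization line fitted from the pseudo-invariant features.
import numpy as np

pif_2000 = np.array([34, 38, 41, 45, 47, 52, 55, 58, 61, 66, 70, 74, 79, 84, 90], float)
pif_2009 = np.array([30, 35, 37, 42, 45, 49, 53, 55, 59, 63, 68, 71, 77, 81, 88], float)

slope, intercept = np.polyfit(pif_2009, pif_2000, 1)

def normalize_band(band_2009):
    """Bring a 2009 band onto the 2000 image's radiometric scale."""
    return slope * band_2009 + intercept

print(f"band 1: normalized value = {slope:.3f} * DN_2009 + {intercept:.3f}")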


Conclusion:

Most processes and analyses in remote sensing applications cannot be performed without first ensuring that the images to be used have low error.  This means that atmospheric correction is a hugely important topic that applies to almost all remote sensing applications.  This lab was an excellent way to introduce and compare several techniques for performing atmospheric corrections.  The most robust seemed to be DOS; however, it also required the most input values from the metadata.  In the end, the atmospheric correction technique used depends on the data available and, of course, the task at hand.