Tuesday, October 28, 2014

Lab 6: Classification Accuracy Assessment

Introduction: 

This lab was designed to teach the class some methods of evaluating the accuracy of classification results.  Checking the accuracy of a classification is a necessary step following any image classification.  The goals of this lab were to learn how to collect ground reference testing samples for an accuracy assessment and how to interpret the resulting accuracy report.


Methods:

The accuracy assessment in this lab was run on the unsupervised classified image created in Lab 4.  The aerial photo used as the reference image was from the National Agriculture Imagery Program of the United States Department of Agriculture.  The reference image was taken in 2005, while the classified image is from 2000.  A temporal gap of this size between the reference and classified images should typically be avoided when performing an accuracy assessment.

Both images were brought into two separate viewers in ERDAS Imagine (Figure 1).  The Accuracy Assessment tool was then opened, the classified image was loaded into it, and the reference image was selected as the image in which points would be generated.  From there, 125 stratified random points were generated, with a minimum of fifteen points in each classification category (Figure 2).

The reference image was used to run an accuracy assessment on the classified image created in Lab 4.  (Figure 1)

This is the Add Random Points tool.  As can be seen, 125 stratified random samples were selected.  (Figure 2)
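For anyone curious about what the Add Random Points tool is doing behind the scenes, the short Python sketch below shows one way stratified random point generation with a per-class minimum could work, assuming the classified image is available as a NumPy array of class codes.  The function name and the rule for allocating points beyond the per-class minimum are my own assumptions for illustration, not part of ERDAS Imagine.

```python
import numpy as np

def generate_stratified_points(class_raster, n_points=125, min_per_class=15, seed=0):
    """Pick stratified random pixel locations from a classified raster.

    class_raster: 2-D integer array of class codes (a stand-in for the
    Lab 4 unsupervised classification).
    """
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(class_raster, return_counts=True)

    # Start every class at the minimum, then distribute the rest in
    # proportion to class area (this allocation rule is an assumption;
    # rounding down may leave a point or two unallocated).
    alloc = {c: min_per_class for c in classes}
    remaining = n_points - min_per_class * len(classes)
    extra = np.floor(remaining * counts / counts.sum()).astype(int)
    for c, e in zip(classes, extra):
        alloc[c] += e

    samples = []
    for c in classes:
        rows, cols = np.where(class_raster == c)
        idx = rng.choice(len(rows), size=min(alloc[c], len(rows)), replace=False)
        samples.extend((int(rows[i]), int(cols[i]), int(c)) for i in idx)
    return samples

# Example with a fake 5-class raster standing in for the Lab 4 image.
demo = np.random.default_rng(1).integers(1, 6, size=(200, 200))
points = generate_stratified_points(demo, n_points=125, min_per_class=15)
print(len(points), "points generated")
```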

All of the random points then needed to be classified one by one based on where they fell in the reference image (Figure 3).  As each point was classified, it turned yellow to show that it needed no further attention (Figure 4).

This shows some of the random points that were generated.  The values in the 'Reference' column are the class numbers assigned according to the reference image.  (Figure 3)

As each of the 125 points was assigned a category according to the reference image, it turned yellow.  All 125 points are shown here on the reference image.  (Figure 4)
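This point-by-point comparison is essentially how an error matrix is built: each point adds one tally at the row of its classified value and the column of its reference value.  A minimal sketch of that tallying is below, using made-up class codes rather than the actual lab data.

```python
import numpy as np

def error_matrix(classified_labels, reference_labels, n_classes):
    """Tally paired (classified, reference) labels into an error matrix.

    classified_labels / reference_labels: sequences of class codes
    (1..n_classes), one pair per assessment point.
    Rows = classified class, columns = reference class.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    for c, r in zip(classified_labels, reference_labels):
        m[c - 1, r - 1] += 1
    return m

# Made-up labels for a handful of points (codes 1-5), not the real lab data.
classified = [1, 2, 2, 3, 4, 5, 1, 3]
reference  = [1, 2, 3, 3, 4, 4, 1, 3]
print(error_matrix(classified, reference, n_classes=5))
```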

Once all of the reference categories were assigned, the accuracy report was created.  This report showed the various accuracy values of the image: the overall classification accuracy, the producer's accuracy for each feature, the user's accuracy for each feature, and the overall Kappa statistic, which indicates how much of the agreement between the classified and reference values is due to the classification itself rather than to chance.  The accuracy report was then condensed into a manageable table for presentation (Figure 5).

These are the results of the accuracy report for the unsupervised classified image.  The overall classification accuracy is too low for this image to be used, and the classification should be reattempted.  (Figure 5)
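For reference, every quantity in the accuracy report can be computed directly from the error matrix.  The sketch below does so in Python, using the convention that rows are the classified categories and columns are the reference categories; the example matrix is made up for illustration and is not the Lab 4 or Lab 5 data.

```python
import numpy as np

def accuracy_report(error_matrix):
    """Compute the standard accuracy measures from an error matrix.

    Rows = classified categories, columns = reference categories.
    """
    m = np.asarray(error_matrix, dtype=float)
    total = m.sum()
    diag = np.diag(m)

    overall = diag.sum() / total          # overall classification accuracy
    producers = diag / m.sum(axis=0)      # producer's accuracy, per reference class
    users = diag / m.sum(axis=1)          # user's accuracy, per classified class

    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / total**2
    kappa = (overall - chance) / (1 - chance)
    return overall, producers, users, kappa

# Illustrative 3-class matrix (made-up numbers, not the lab results).
matrix = [[30,  5,  5],
          [ 4, 25,  6],
          [ 6,  5, 39]]
overall, producers, users, kappa = accuracy_report(matrix)
print(f"Overall accuracy: {overall:.2%}")
print("Producer's accuracy:", np.round(producers, 2))
print("User's accuracy:   ", np.round(users, 2))
print(f"Kappa: {kappa:.3f}")
```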

The supervised classification image from Lab 5 was then run through the same accuracy assessment process (Figure 6).  The accuracy values for this classification were even lower, particularly for the urban/built-up class.

The results from the accuracy assessment of the supervised classified image are rather troubling, as the overall accuracy is a putrid 52%.  The image should definitely be reclassified.  (Figure 6)


Conclusion:

Performing an accuracy assessment is a crucial part of land use/land cover classification.  It is a time-consuming process, but it is a necessary post-processing step before classified imagery can be used for anything else.  If accuracy assessments had not been run on these images, they might have been used to inform policy decisions despite their poor accuracy levels.  The assessments showed that these images are not suited for policy making in any way, and skipping that step could have had serious consequences for the people affected by those decisions.

Sources:

United States Department of Agriculture. (2005). National Agriculture Imagery Program. Retrieved October 23, 2014.
