Wednesday, October 8, 2014

Lab 4: Unsupervised Classification

Introduction:

The goal of this lab exercise was to teach the class how to extract sociocultural and biophysical information from remotely sensed imagery using an unsupervised classification algorithm.  Image classification is a major part of remote sensing, and this lab was designed to teach how to perform it.  Specifically, the lab was meant to build an understanding of the input configuration requirements and execution of an unsupervised classifier, and to show how to recode the multiple spectral clusters generated by an unsupervised classifier into useful land use/land cover classes.


Methods:

Experimenting with Unsupervised ISODATA Classification Algorithm

The Iterative Self-Organizing Data Analysis Technique (ISODATA) is one of the available unsupervised classification algorithms.  The image to be classified was a satellite image of Eau Claire and Chippewa Counties in Wisconsin (Figure 1).  The image was loaded into ERDAS Imagine, the unsupervised classification tool was opened, the ISODATA option was selected, and the number of classes to be generated was set to ten.  Running the tool produced a coded image; however, at this point it was impossible to tell what each coded value meant.

This is the original image of Eau Claire and Chippewa Counties to be classified.  The land use/land cover data will be extracted from this image later in the write-up.  (Figure 1)
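To illustrate the general idea behind this step, here is a short Python sketch that clusters the pixels of a multispectral image into ten spectral classes.  It is not the ERDAS Imagine workflow itself: plain k-means stands in for ISODATA (which also splits and merges clusters between iterations), and the file name is hypothetical.

```python
# Rough stand-in for the ISODATA step: cluster every pixel of a
# multispectral image into 10 spectral classes with k-means.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("eau_claire_chippewa.img") as src:   # hypothetical file name
    bands = src.read().astype(float)                     # shape: (bands, rows, cols)

n_bands, n_rows, n_cols = bands.shape
pixels = bands.reshape(n_bands, -1).T                    # one row of band values per pixel

# Assign each pixel to one of 10 spectral clusters (0-9), producing a coded image
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(pixels)
classified = labels.reshape(n_rows, n_cols)
```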

Recoding of Unsupervised Clusters into Meaningful Land Use/Land Cover Classes:

The next step in the process was to recode the clusters into colors that suited their land use/land cover: water as blue, forest as dark green, agriculture as pink, urban/built-up areas as red, and bare soil as sienna.  The raster attribute editor table was opened, and the various features were compared by linking a historical view of Google Earth to the ERDAS viewer.  Each land cover cluster was thoroughly analyzed until a final product was created with the clusters recoded into the appropriate color classes (Figure 2).

Here is the image classified according to land use/land cover: blue is water, dark green is forest, red is urban/built-up, sienna is bare soil, and pink is agriculture.  When compared to the actual land cover on Google Earth, the ten classes originally generated by the unsupervised classification tool appeared too broad to capture the variability: some areas classified as bare soil were actually forest, and some zones classified as urban were actually agricultural land or bare soil.  (Figure 2)
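As a rough sketch of the recoding idea, the snippet below groups the ten cluster values into the five land use/land cover classes and displays them with the colors used in the lab.  The cluster-to-class assignments here are made up for illustration; the real assignments came from inspecting each cluster against the Google Earth imagery.

```python
# Group the 10 spectral clusters into 5 LULC classes and display them with
# the lab's color scheme. The cluster assignments below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

cluster_to_class = {0: 1, 1: 2, 2: 2, 3: 3, 4: 3,    # 1 = water, 2 = forest, 3 = agriculture,
                    5: 4, 6: 4, 7: 3, 8: 5, 9: 5}    # 4 = urban/built-up, 5 = bare soil

# 'classified' is the coded cluster image (values 0-9) from the earlier sketch
lulc = np.vectorize(cluster_to_class.get)(classified)

colors = ["blue", "darkgreen", "pink", "red", "sienna"]   # classes 1-5 in order
plt.imshow(lulc, cmap=ListedColormap(colors), vmin=1, vmax=5)
plt.title("Clusters recoded to land use/land cover classes")
plt.show()
```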

Improving the Accuracy of Unsupervised Classification:

To try to improve on the accuracy of the ISODATA unsupervised classification, the unsupervised classification tool was run once again on the image of Eau Claire and Chippewa Counties (Figure 1).  This time, however, the number of classes was increased to twenty, and the convergence threshold was set to 0.92 instead of 0.95 (Figure 3).
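The convergence threshold controls when the classifier stops iterating: once at least that fraction of pixels keeps the same cluster label between passes, the algorithm is considered converged.  The simplified sketch below illustrates the parameter with a bare k-means-style loop; it omits the cluster splitting and merging that ISODATA performs and is not the ERDAS implementation.

```python
# Simplified ISODATA-style loop: stop once at least 'convergence' of the
# pixels keep the same cluster label between iterations (0.92 = 92% unchanged).
import numpy as np

def isodata_like(pixels, n_classes=20, convergence=0.92, max_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_classes, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # assign every pixel to its nearest spectral cluster center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        unchanged = (new_labels == labels).mean()
        labels = new_labels
        if unchanged >= convergence:          # convergence threshold reached
            break
        # move each center to the mean of its current members
        for k in range(n_classes):
            members = pixels[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels
```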

This is the unsupervised classification tool with the new settings for the second run of the unsupervised classification.  (Figure 3)
The tool was run and the output was once again recoded as in the earlier parts of the lab, only this time there were twice as many classes to recode, which allowed more "gray" areas such as transition zones to be sorted into the correct classification (Figure 4).

This is the second classified image.  When comparing it with the first, there appears to be less bare soil, more forested and agricultural area, and more concentrated urban areas.  When both classifications were compared to the Google Earth historical imagery, this second classification appeared to be more accurate than the first.  (Figure 4)

Recoding Land Use/Land Cover Classes for Map Generation:

At this point, the image was once again recoded, giving all of the blue (water) areas a value of 1, all of the green (forest) areas a value of 2, all of the pink (agriculture) areas a value of 3, all of the red (urban/built-up) areas a value of 4, and all of the bare soil areas a value of 5 (Figure 5).  Doing this allowed the image to be brought into ArcMap, where a finished land use/land cover map was generated (Figure 6).

This shows the process of recoding each class into a single value in order to use those values to generate a map.  The New Value column is the one that had to be altered to create the desired effect.  (Figure 5)
This is the final land use/land cover map, created in ArcMap.  (Figure 6)
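In code terms, this last step amounts to writing the recoded raster out to a format ArcMap can read.  The sketch below uses rasterio to save the 1-5 class values as a GeoTIFF; the variable names carry over from the earlier sketches and the file names are hypothetical.

```python
# Write the recoded 1-5 class raster to a GeoTIFF so it can be brought into
# ArcMap for the final map layout. 'lulc' comes from the earlier sketch.
import rasterio

with rasterio.open("eau_claire_chippewa.img") as src:    # hypothetical input file
    profile = src.profile                                 # carries georeferencing info

profile.update(driver="GTiff", count=1, dtype="uint8", nodata=0)
with rasterio.open("lulc_recode.tif", "w", **profile) as dst:
    dst.write(lulc.astype("uint8"), 1)
```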


Conclusion:

Using unsupervised classification to extract land use/land cover from satellite imagery is a relatively painless process that can be accurate up to a point.  That accuracy seems to increase with the number of classes created, as can be seen when comparing the ten-class image to the twenty-class image.  However, the method has its limitations: it makes assumptions about the data and relies on the user to determine the classes after classification.  Ultimately, it seems to be a viable method for creating land use/land cover maps that can be used at smaller scales.
