Monday, December 15, 2014

Lab 11: LIDAR Remote Sensing

Introduction:

The main purpose of this lab was to give the class an introduction to LIDAR data and how to process it.  Though this is only an introduction to LIDAR, learning how to process this advanced technology will put the class at a great advantage in the job market, since relatively few people know how to process this imagery correctly and it is an ever-growing, necessary skill.  Objectives of the lab included retrieval and processing of surface and terrain models, and processing a point cloud to create various products from it.


Methods:

The first portion of the lab had the class visualize a LIDAR point cloud in ERDAS Imagine.  The LAS files were brought in and viewed, yet ERDAS has limited LIDAR functionality when compared to ArcGIS.  

Knowing this, ArcMap was opened to analyze the LIDAR data.  The first step to using LIDAR point clouds with ArcGIS is creating an LAS dataset.  The LAS dataset was created using the ArcCatalog window in ArcMap.  The various LAS files were then imported into the dataset and their properties were analyzed to check for errors (Figure 1).  One quality assurance/quality control method that was learned was to look at the Min Z and the Max Z of each file to see whether they match the expected elevation range.  When importing the LAS files, the xy coordinate system and the z coordinate system needed to be selected.  This information was found in the metadata file that came with the LAS data.
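
The same Min Z / Max Z check can be scripted outside ArcGIS.  Below is a minimal QA/QC sketch in Python using the laspy library (assuming laspy 2.x); the tile names and the expected elevation range are hypothetical placeholders, with the real range coming from the metadata.

```python
# Minimal QA/QC sketch: compare each tile's Min Z / Max Z against the
# elevation range expected for the study area (values are placeholders).
import laspy

expected_min_z, expected_max_z = 200.0, 400.0        # hypothetical expected range (m)

for path in ["tile_01.las", "tile_02.las"]:          # hypothetical tile names
    las = laspy.read(path)
    min_z, max_z = las.header.mins[2], las.header.maxs[2]
    ok = expected_min_z <= min_z and max_z <= expected_max_z
    print(f"{path}: Min Z = {min_z:.2f}, Max Z = {max_z:.2f}, within expected range: {ok}")
```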

The LAS dataset properties dialog, showing the various LAS files that were imported into it.  (Figure 1)

From here the LAS dataset could be viewed in ArcMap.  At first it appeared as just several squares; these were the footprints of the individual LAS files, which are tiled to reduce file size.  The points themselves do not draw until the view is zoomed in far enough, which helps speed up processing.

At this point the LAS Dataset toolbar was explored to learn the various options for viewing and analyzing a LIDAR point cloud.  Some of these options included viewing the point cloud as a TIN or simply as elevation points.  Different side profiles of the data could also be viewed using the toolbar.

Several products were then generated using the LIDAR point cloud.  These included a digital surface model (DSM) that was generated using the first-return points, a digital terrain model (DTM) using the last return, and hillshades using both the DSM and DTM.  A LIDAR intensity image was also generated as a final product (Figure 2).
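
For readers without ERDAS or ArcGIS, the sketch below shows the general idea behind the first-return DSM and last-return DTM: filter the point cloud by return number and grid the points onto a raster.  It uses laspy and numpy; the file name and 1 m cell size are assumptions, and it is not the toolbar workflow used in the lab.

```python
# Conceptual sketch: grid first returns (highest per cell) into a DSM and
# last returns (lowest per cell) into a DTM.
import laspy
import numpy as np

las = laspy.read("tile_01.las")                  # hypothetical tile
x, y, z = np.array(las.x), np.array(las.y), np.array(las.z)
rn, nr = np.array(las.return_number), np.array(las.number_of_returns)
first = rn == 1                                  # first returns (canopy/roof tops)
last = rn == nr                                  # last returns (closest to bare earth)

cell = 1.0                                       # assumed cell size in map units
cols = np.floor((x - x.min()) / cell).astype(int)
rows = np.floor((y.max() - y) / cell).astype(int)
shape = (rows.max() + 1, cols.max() + 1)

def grid(mask, reducer, fill):
    """Reduce the z values of the selected points into a raster grid."""
    surf = np.full(shape, fill)
    reducer.at(surf, (rows[mask], cols[mask]), z[mask])
    surf[surf == fill] = np.nan                  # cells with no returns
    return surf

dsm = grid(first, np.maximum, -np.inf)           # digital surface model
dtm = grid(last, np.minimum, np.inf)             # digital terrain model
```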

This LIDAR intensity image records return strength at the sensor's near-infrared wavelength and is very fine in resolution.  (Figure 2)


Conclusion:

Learning how to utilize and process LIDAR data may seem like a daunting task at first, but it is not as complicated as some would make it seem.  ArcGIS and the LAS datasets within it are rather user-friendly and useful, as many different products can be generated from LIDAR point clouds.  LIDAR is a technology that will only continue to grow as demand increases due to its currently unparalleled precision.  Possessing the skills to utilize and process LIDAR data will be invaluable to the class in the future.

Lab 10: Object-Based Classification

Introduction:

This lab exercise was set up to teach the class how to properly perform object-based classification using eCognition, an object-based image processing tool.  Object-based classification integrates both spectral and spatial information to aid in extracting land use/land cover features from remotely sensed imagery.  Some of the objectives in this lab included:  segmenting an image into spatial and spectral clusters, selecting which of these clusters (objects) to use as training samples using a nearest neighbor classifier, and executing and refining the object-based classification output.


Methods:

As this was the first time the class had used eCognition, the first portion of the lab involved getting to know the software and then importing an image of the same Eau Claire and Chippewa County study area that was classified in previous lab exercises.  A project was then created, with the imagery displayed through layer mixing in the 4, 3, 2 band combination the class has become accustomed to.

At this point, it was time to create the different image objects (Figure 1).  This is a simple process that involved opening up the process tree and creating a new process.  Multiresolution segmentation was used along with a scale parameter of 10.  From there, the process was executed and the various objects were created.  It was possible to view the objects in several different ways including pixel view, object mean view, with or without the outline, and transparent or not.
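
eCognition's multiresolution segmentation algorithm is proprietary, so the sketch below uses scikit-image's Felzenszwalb segmentation as a rough conceptual analogue for generating and outlining image objects from a 4, 3, 2 composite.  The file name and parameter values are assumptions, not the lab's settings.

```python
# Conceptual analogue of image-object creation: segment a 3-band composite
# and outline the resulting objects (stand-in for multiresolution segmentation).
import numpy as np
import rasterio
from skimage.segmentation import felzenszwalb, mark_boundaries

with rasterio.open("ec_cc_432.tif") as src:          # hypothetical 4,3,2 composite
    img = np.moveaxis(src.read([1, 2, 3]).astype(float), 0, -1)
img = (img - img.min()) / (img.max() - img.min())    # scale to 0-1 for segmentation

objects = felzenszwalb(img, scale=50, sigma=0.5, min_size=20)   # assumed parameters
outlined = mark_boundaries(img, objects)             # objects outlined, as in Figure 1
print(f"{objects.max() + 1} image objects created")
```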

The different objects that were created can be seen here as outlined in blue.  (Figure 1)

From here, classes were created; these included agriculture, forest, green vegetation, urban, and water.  A nearest neighbor classifier using mean values was then assigned to the classes.  Sample objects were selected based on the class's knowledge of the spectral reflectance of the classes and the appearance of the land cover in a 4, 3, 2 band image (Figure 2).  After the samples were selected, the classification was run and the results could be reviewed.
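
The sketch below illustrates the nearest neighbor idea on a toy scale: each object is reduced to its mean band values, a few objects are labeled as samples, and every other object takes the class of its nearest sample in feature space.  It uses scikit-learn, and the numbers are made-up placeholders rather than values exported from eCognition.

```python
# Toy nearest-neighbor classification of image objects by their mean band values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Mean band values (e.g. bands 4, 3, 2) per object -- hypothetical numbers.
object_means = np.array([[0.45, 0.30, 0.20],   # object 0
                         [0.10, 0.12, 0.15],   # object 1
                         [0.50, 0.28, 0.22],   # object 2
                         [0.08, 0.11, 0.16]])  # object 3

sample_ids = [0, 1]                            # objects chosen as training samples
sample_classes = ["forest", "water"]

knn = KNeighborsClassifier(n_neighbors=1).fit(object_means[sample_ids], sample_classes)
print(knn.predict(object_means))               # class assigned to every object
```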

Here several green vegetation training samples and one urban training sample can be seen.  (Figure 2)

One interesting aspect of object-based classification is that objects known to be incorrectly classified can easily be edited manually.  After all manual editing was performed, the result was exported to a raster and a map was created from the classified image (Figure 3).


This is the final map of the image that was classified using object-based classification.  (Figure 3)


Conclusion:

Object-based classification is an interesting, newer way to perform image classification.  It is more useful in some applications than others and is especially customizable if the original result is not of high enough quality.  eCognition is also an extremely useful and rather user-friendly piece of software that could help make object-based classification more relevant in many future applications.

Monday, November 17, 2014

Lab 7: Digital Change Detection

Introduction:

This exercise was intended to teach the class different methods of measuring change in land use/land cover over time.  Different methods were used to qualify and quantify change through visual methods, post-classification change detection, and a model that mapped detailed from-to changes in land use/land cover.


Methods:

Write Function Memory Insertion:

Two images of Eau Claire and the surrounding counties, one from 1991 and one from 2011, were compared using write function memory insertion.  First, the red band of the 2011 image was stacked with the near infrared band of the 1991 image.  Then the color guns were set so that the red gun displayed the 2011 red band while the green and blue guns displayed the 1991 NIR band.  This gave an image that displays where most of the change occurred based on brightness and coloring (Figure 1).  The areas that are the brightest red are the areas that have experienced the most change.  This method shows where change has occurred but does not quantify it in any way, and no from-to change information is available.
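
A rough code equivalent of the same write function memory insertion, assuming the 2011 red band and 1991 NIR band are available as single-band GeoTIFFs (hypothetical file names): the red channel receives the 2011 red band and the green and blue channels receive the 1991 NIR band, so change shows up in bright red.

```python
# Build a write-function-memory-insertion style composite with numpy/rasterio.
import numpy as np
import rasterio

def stretch(band):
    """Linear 2-98 percentile stretch to 0-255 for display."""
    lo, hi = np.nanpercentile(band, (2, 98))
    return np.clip((band - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

with rasterio.open("ec_2011_red.tif") as r2011, rasterio.open("ec_1991_nir.tif") as r1991:
    red_2011 = stretch(r2011.read(1).astype(float))
    nir_1991 = stretch(r1991.read(1).astype(float))

composite = np.dstack([red_2011, nir_1991, nir_1991])   # R = 2011 red, G = B = 1991 NIR
```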

It can be seen from this image that most of the change that occurred in the area appears to have been near the rivers or in the urban areas.  (Figure 1)

Post-Classification Comparison Change Detection:

Post-classification comparison change detection provides the from-to information that write function memory insertion did not and is a much better method for assessing quantitative change.  Two classified images of the Milwaukee Metropolitan Statistical Area (MSA), for 2001 and 2006, were assessed using this method.

The first step involved looking at the measurements of the different classes to quantify the percentage changes and put them into a table (Figure 2).  This was done by reading the measurements from the attribute table and performing simple conversions to get them into hectares.

It was found that the majority of the change, at least in terms of percentage change, occurred in bare soil and open spaces.  This could be due to more development taking place in these areas, as they are much more easily affected than urban areas.  Also, because these classes cover less area to begin with, even small absolute changes produce large percentage changes.  (Figure 2)

Mapping this change is crucial in many applications, particularly environmental assessment and monitoring.  Of particular interest in this exercise was the change from wetlands to urban, forest to urban, agriculture to urban, wetlands to agriculture, and agriculture to bare soil.  These various changes were mapped out using the Wilson-Lula algorithm (Figure 3).  From here all of the images were brought into ArcMap and a map was generated to show the change that occurred (Figure 4).
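
The from-to logic behind the change model can be expressed very simply with numpy masks, as sketched below.  The class codes and file names are assumptions; this is the general idea, not the Wilson-Lula model itself.

```python
# From-to change masks: a pixel counts only if it held the "from" class in 2001
# and the "to" class in 2006.
import numpy as np
import rasterio

WETLAND, FOREST, AGRICULTURE, URBAN, BARE_SOIL = 1, 2, 3, 4, 5   # hypothetical codes

with rasterio.open("milwaukee_2001.tif") as a, rasterio.open("milwaukee_2006.tif") as b:
    lc2001, lc2006 = a.read(1), b.read(1)

changes = {
    "wetland_to_urban":       (lc2001 == WETLAND) & (lc2006 == URBAN),
    "forest_to_urban":        (lc2001 == FOREST) & (lc2006 == URBAN),
    "agriculture_to_urban":   (lc2001 == AGRICULTURE) & (lc2006 == URBAN),
    "wetland_to_agriculture": (lc2001 == WETLAND) & (lc2006 == AGRICULTURE),
    "agriculture_to_bare":    (lc2001 == AGRICULTURE) & (lc2006 == BARE_SOIL),
}
for name, mask in changes.items():
    print(name, int(mask.sum()), "pixels changed")
```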

The corresponding classes were all loaded into the algorithm at once in model maker in order to produce five different images of each of the desired changes.  (Figure 3)

This is the map generated to show the various types of change that occurred and where they occurred over the period from 2001 to 2006.  (Figure 4)


Conclusion:

Digital change detection is a very useful and applicable part of remote sensing.  Not only is it useful, but there are many different methods of performing it.  Different considerations should be made when deciding which type of digital change detection to run, particularly whether one wants a quantifiable result or simply wants to display where change occurred.  Through this lab, the class has become much more comfortable with performing different methods of digital change detection and interpreting the results.

Tuesday, October 28, 2014

Lab 6: Classification Accuracy Assessment

Introduction: 

This lab was designed to teach the class some methods of evaluating the accuracy of classification results.  Checking the accuracy of the classification is always necessary following image classification.  The goals in this lab involved learning how to collect ground reference testing samples to perform accuracy assessment and how to interpret the accuracy assessment.


Methods:

The accuracy assessment in this lab was run on the unsupervised classified image created in Lab 4.  The aerial photo used as the reference image was from the National Agriculture Imagery Program of the United States Department of Agriculture.  The image was taken in 2005, while the classified image is from 2000.  This temporal difference between reference and classified imagery should typically be avoided when performing accuracy assessment.

Both of the images were brought into two separate viewers in ERDAS Imagine (Figure 1).  From there the accuracy assessment tool was opened and the classified image was loaded into it.  At this point the reference image was selected as the image in which points would be generated.  From here, 125 stratified random points were generated, with a minimum of fifteen points per classification category (Figure 2).

The reference image was used to run accuracy assessment on the classified image created in Lab 4; both are shown here in separate viewers.  (Figure 1)

This is the Add Random Points tool.  As it can be seen, 125 stratified random samples were selected.  (Figure 2)

All of the random points then needed to be classified one by one based on where they fell in the reference image (Figure 3).  As each point was classified, it turned yellow to show that it didn't need further attention (Figure 4).

This shows some of the random points that were selected.  The numbers under the 'Reference' column were the numbers assigned according to the reference image.  (Figure 3)

As each of the 125 points was assigned a category according to the reference image, it turned yellow.  This shows all 125 points on the reference image.  (Figure 4)

Once all of the reference categories were assigned, the accuracy report was created.  This report showed the various accuracy values of the image.  The different accuracy values reported were the overall classification accuracy, the producer's accuracy for each feature, the user's accuracy for each feature, and the overall Kappa statistic, which measures how much of the agreement between the classification and the reference data is beyond what would be expected by chance.  The accuracy report was put into a manageable table for presentation (Figure 5).
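
The sketch below shows how those accuracy values fall out of an error (confusion) matrix with numpy: overall accuracy, producer's and user's accuracy per class, and the Kappa statistic.  The matrix is a made-up example, not the one behind Figure 5.

```python
# Accuracy metrics from an error (confusion) matrix.
import numpy as np

# rows = classified categories, columns = reference categories (made-up counts)
cm = np.array([[21,  3,  2],
               [ 4, 18,  5],
               [ 2,  6, 19]], dtype=float)

n = cm.sum()
overall = np.trace(cm) / n                         # overall classification accuracy
producers = np.diag(cm) / cm.sum(axis=0)           # correct / reference totals (omission)
users = np.diag(cm) / cm.sum(axis=1)               # correct / classified totals (commission)

expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall - expected) / (1 - expected)      # agreement beyond chance

print(f"overall = {overall:.2%}, kappa = {kappa:.3f}")
print("producer's:", np.round(producers, 2), "user's:", np.round(users, 2))
```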

These are the results of the accuracy report for the unsupervised classified image.  The classification accuracy is overall too low to use this image.  Classification should be reattempted.  (Figure 5)

The supervised classification image from Lab 5 was then run through the same accuracy assessment process (Figure 6).  The accuracy values for this classification were even lower, particularly the classification of urban/built-up areas.

The results from the accuracy assessment of the supervised classified image are actually rather troubling as the overall accuracy is a putrid 52%.  The image should definitely be reclassified.  (Figure 6)


Conclusion:

Performing accuracy assessment is an extremely crucial part of performing land use/land cover classification.  It is a time-consuming process, though it is a necessary post-processing step before classified imagery can be used for anything else.  If accuracy assessment had not been run on these images, they may have been used for policy decisions despite their horrid accuracy levels.  This could have vastly affected the lives of people if the accuracy assessment hadn't been run to show that the images weren't suited to be used in policy making in any way.

Sources:

United States Department of Agriculture. (2005). National Agriculture Imagery Program. Retrieved October 23, 2014.

Wednesday, October 22, 2014

Lab 5: Pixel-based Supervised Classification

Introduction:

This week's lab was designed to properly educate the class on how to extract sociocultural and biophysical data from remotely sensed images through pixel-based supervised classification.  The lab is designed to instruct the class how to properly select training samples in order to create a supervised classifier, how to analyze the quality of the training samples that were collected, and how to produce a useful and meaningful land use/land cover map from the data.  This method will be compared and contrasted with the unsupervised classification run in Lab 4.


Methods:

The first step in performing supervised classification is to collect training samples (Figure 1).  These training samples will be of the different classes that are desired in the final land use/land cover map.  They should have typical spectral signatures of the desired features.  For example, water training samples should be of both standing and turbid water, and forest samples should include both dry and riparian vegetation.  These samples are simply selected by drawing a polygon in the desired area to be sampled and then uploading it to the signature editor tool.  These training fields can more accurately be delineated by performing field work or by using high resolution aerial photos.  For this lab, the class was just asked to link Google Earth to an image of the Eau Claire and Chippewa County area.  Twelve water training samples were collected along with eleven forest, nine agriculture, eleven urban area, and seven bare soil.  The various sample signatures were organized, classified (Figure 2), and plotted (Figure 3).

This shows the first training sample collected for water.  As can be seen, a simple polygon is drawn in the desired area of the training sample.  Its spectral signature is then uploaded into the signature editor tool to be saved.  (Figure 1)
The various classes were all given similar colors after they were organized and named.  (Figure 2)

After the training samples were classified and colorized, they were plotted here.  One of the objectives of this plot is to make sure there is maximum separability between the classes.  (Figure 3)

Once training samples that may not have had enough separability were eliminated, it was time to put the classes together and merge the signatures (Figure 4).

This is the signature mean plot that resulted from merging the training samples.  The five desired classes in the land use/land cover map can be seen on the right.  (Figure 4)
The training samples collected were then saved as a signature file.  From here the supervised classification tool (Figure 5) was opened and the signature file was uploaded.  The tool was then run and a land use/land cover classified image was created.  A map was then generated from the image to show land use/land cover of the area (Figure 6).  The resulting map does not seem very accurate, as the urban/built-up area is much more spread out than it is in reality.  This error could be due to the lack of separability between bare soil, agricultural land, and urban/built-up areas.
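
ERDAS offers several decision rules for supervised classification; as a simple stand-in, the sketch below assigns each pixel to the class whose merged signature mean is nearest in spectral space (a minimum-distance classifier).  The file name and mean values are illustrative assumptions, not the lab's signature file.

```python
# Minimum-distance-to-means classification sketch (memory-hungry for full
# scenes; shown here for clarity, not efficiency).
import numpy as np
import rasterio

class_names = ["water", "forest", "agriculture", "urban", "bare_soil"]
# Hypothetical merged signature means (5 classes x 6 bands); in practice these
# would come from the saved signature file.
means = np.array([
    [35, 28, 20,  12,   8,   5],   # water
    [40, 35, 30, 110,  90,  45],   # forest
    [55, 50, 60, 120, 130,  80],   # agriculture
    [90, 85, 95, 100, 120, 110],   # urban
    [80, 75, 90,  95, 140, 120],   # bare soil
], dtype=float)

with rasterio.open("ec_cc_2000.tif") as src:         # hypothetical 6-band image
    bands = src.read().astype(float)                 # (6, rows, cols)

pixels = bands.reshape(bands.shape[0], -1).T         # (n_pixels, 6)
dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
classified = dist.argmin(axis=1).reshape(bands.shape[1:]) + 1   # class codes 1-5
```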

This is the supervised classification tool.  Running it is as easy as inputting the image and the signature file that was saved from the training samples.  (Figure 5)
This is the land use/land cover map that was generated from the supervised classification.  Unfortunately it appears as if the urban/built-up class covers much of the area that should be bare soil or agricultural land.  (Figure 6)


Conclusion:

The supervised classification in this case didn't produce a very good map.  The classes don't seem correct and just look unnatural.  This is likely due to user error in gathering training samples and a lack of separability.  Compared to the map of Lab 4, this map seems to misrepresent many features, particularly the urban, agricultural, and bare soil classes.  In the future, higher quality reference imagery, instead of just Google Earth, should be used to collect better training samples.  A higher degree of separability between the training samples should also be the goal in order to avoid this.  This was a good lesson in the errors that can occur in supervised classification and what should be done in the future to avoid them.

Wednesday, October 8, 2014

Lab 4: Unsupervised Classification

Introduction:

The goal of this lab exercise was to teach the class how to extract sociocultural and biophysical information from remotely sensed imagery by using an unsupervised classification algorithm.  Image classification is a huge part of remote sensing and this lab was designed to teach how to perform it.  The lab was specifically designed to help the class garner an understanding of input configuration requirements and execution of an unsupervised classifier and teach how to recode multiple spectral clusters generated by an unsupervised classifier into useful land use/land cover classes.


Methods:

Experimenting with Unsupervised ISODATA Classification Algorithm

The Iterative Self-Organizing Data Analysis Technique (ISODATA) is one available unsupervised classification algorithm.  The image to be classified was a satellite image of Eau Claire and Chippewa Counties in Wisconsin (Figure 1).  The image was loaded into ERDAS Imagine, then the unsupervised classification tool was opened.  The ISODATA option was selected and the number of classes to be created was set to ten.  Running the tool produced a coded image; however, at this point it was impossible to tell what each coded value represented.
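
ISODATA is essentially k-means clustering with extra rules for splitting and merging clusters between iterations.  ERDAS implements ISODATA itself; the sketch below uses scikit-learn's k-means as a rough analogue to cluster the image into ten spectral classes.  The file name is an assumption.

```python
# Rough analogue of unsupervised classification: cluster pixels into ten
# spectral classes with k-means.
import numpy as np
import rasterio
from sklearn.cluster import KMeans

with rasterio.open("ec_cc_2000.tif") as src:            # hypothetical input image
    bands = src.read().astype(float)                    # (n_bands, rows, cols)

pixels = bands.reshape(bands.shape[0], -1).T            # one row of band values per pixel
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(pixels)
coded = labels.reshape(bands.shape[1:]) + 1             # coded image, clusters 1-10
```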

This is the original image of Eau Claire and Chippewa Counties to be classified.  The land use/land cover data will be extracted from this image later on in the write up.  (Figure 1)

Recoding of Unsupervised Clusters into Meaningful Land Use/Land Cover Classes:

The next step in the process was to recode the clusters into colors that suited the land use/land cover.  Water was to be set as blue, forest as dark green, agriculture as pink, urban/built up areas as red, and bare soil as sienna.  The raster editor table was opened and the various features were compared by linking a historic view of Google Earth to the ERDAS viewer.  Each land cover cluster was thoroughly analyzed until a final product was created that was recoded into the appropriate color classes (Figure 2).

Here is the classified image according to land use/land cover.  Blue is water, dark green is forest, red is urban/built-up, sienna is bare soil, and pink is agriculture.  When compared to the actual land coverage on Google Earth, it appeared that the 10 classes originally generated by the unsupervised classification tool were too broad and didn't capture enough of the variability: there were areas classified as bare soil that were actually forest, and zones classified as urban that were actually agricultural land or bare soil.  (Figure 2)

Improving the Accuracy of Unsupervised Classification:

In order to try and improve on the accuracy of the Isodata unsupervised classification, the unsupervised classification tool was run once again on the image of Eau Claire and Chippewa Counties (Figure 1).  However, this time the number of classes created was increased to twenty, while the convergence threshold was set to 0.92 instead of 0.95 (Figure 3).

This is the unsupervised classification tool with the new settings for the second attempt at running unsupervised classification.  (Figure 3)
The tool was run and the data was once again recoded as in the earlier parts of the lab.  Only this time there were twice as many classes to recode, allowing more "gray" areas such as transition zones to be sorted into the correct classification (Figure 4).

This is the second classified image.  When comparing it with the first it appears as if there's less bare soil, more forested and agricultural areas, and the urban areas are more concentrated.  When comparing these classifications to the Google Earth historical imagery later on, it appeared as if this second classification was more accurate than the first.  (Figure 4)

Recoding Land Use/Land Cover Classes for Map Generation:

At this point, the image was once again recoded to give all of the blue (water) areas a value of 1, all of the green (forest) areas a value of 2, all of the pink (agriculture) areas a value of 3, all of the red (urban/built-up) areas a value of 4, and all of the bare soil areas a value of 5 (Figure 5).  Doing this made it easy to bring the image into ArcMap and generate a finished land use/land cover map (Figure 6).
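
The recode step amounts to a lookup table from cluster number to final class code, as sketched below with numpy.  The cluster-to-class assignments and the placeholder cluster image are hypothetical.

```python
# Recode twenty spectral clusters into five land use/land cover codes
# (1 = water ... 5 = bare soil) with a lookup table.
import numpy as np

# index = original cluster value (index 0 unused, clusters 1-20), value = new class code
lookup = np.array([0,
                   1, 1, 1, 2, 2, 2, 2, 3, 3, 3,
                   3, 4, 4, 4, 3, 5, 5, 2, 3, 5])

coded = np.random.default_rng(1).integers(1, 21, size=(500, 500))   # placeholder cluster image
recoded = lookup[coded]                                              # values 1-5, ready for mapping
```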

This shows the process of recoding each class into one number in order to use the values to generate a map.  The New Value section is the section that had to be altered in order to create the desired effect.  (Figure 5)
Figure 6


Conclusion:

Using unsupervised classification to find land use/land cover from satellite imagery is a relatively pain-free process that can be accurate to a point.  This accuracy seems to increase as more classes are created, as can be seen when comparing the ten-class image to the twenty-class image.  However, this method has its limitations.  It makes assumptions and relies on the user to ultimately determine the classes post-classification.  Ultimately, this seems to be a viable method for creating land use/land cover maps that can be used at smaller scales.

Tuesday, October 7, 2014

Lab 3: Radiometric and Atmospheric Correction

Introduction:

This lab exercise was designed to give the class experience in correcting atmospheric interference in remotely sensed images.  It involved performing both relative and absolute atmospheric correction of remotely sensed images.  The methods used were empirical line calibration, dark object subtraction, and multidate image normalization.


Methods:

Empirical Line Calibration:

Empirical line calibration (ELC) is a method of atmospheric correction which forces remotely sensed data to match in-situ spectral signatures.  These reference spectral signatures are obtained from a spectral library.  In this lab a spectral library was used to perform ELC on a 2011 image of the Eau Claire area.

The first step was to bring the image into ERDAS Imagine and then open up the Spectral Analysis Work Station tool.  From there, the image was loaded into the tool and the atmospheric correction tool (Figure 1) was opened to begin collecting samples and referencing them to features within the spectral library.  Points were placed on the image in certain areas and then referenced to an ASTER spectral library and a USGS spectral library.  An example of a point selected was a point in the middle of Lake Wissota; this was referenced to a tap water feature in the ASTER spectral library, as tap water was the only freshwater feature available.  Another example was asphaltic concrete (Figure 2).  This example helps show the limited capabilities of the ELC method.

The atmospheric adjustment tool can be used in ELC to find points and relate them to land surface features in a spectral library.  In this case the features used were asphaltic concrete, pine wood, grass, alunite AL706 and tap water.             (Figure 1)

This graph compares the spectral reflectance of the image point identified as asphaltic concrete with the corresponding reflectance selected from the spectral library.  From pairs like this an equation was developed to bring the measured values closer to the expected reflectance in the spectral library.  (Figure 2)

After all of the points were selected and referenced to a spectral signature in the library, equations were developed by the tool to bring the reflectance in the image closer to the expected reflectance from the libraries.  The regression equations were then applied by running the preprocess atmospheric adjustment tool.  Saving the preprocessed image was the last step in completing ELC to correct for atmospheric interference.
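
At its core, ELC is a per-band linear regression that maps the image digital numbers of the reference targets onto the matching library reflectance values and then applies that line to the whole band.  The sketch below shows the idea for one band with numpy; the sample values are made up.

```python
# Empirical line calibration idea for a single band: fit reflectance ≈ gain*DN + offset.
import numpy as np

dn = np.array([34.0, 62.0, 101.0, 143.0, 180.0])              # image DNs at the sample points
library = np.array([0.03, 0.08, 0.16, 0.27, 0.35])            # matching library reflectance

gain, offset = np.polyfit(dn, library, 1)                     # regression coefficients
band = np.random.default_rng(2).uniform(0, 255, (100, 100))   # placeholder band
corrected = gain * band + offset                              # apply the line to every pixel
print(f"gain = {gain:.5f}, offset = {offset:.4f}")
```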


Enhanced Image Based Dark Object Subtraction:

Enhanced image based dark object subtraction (DOS) is a relatively robust method of correcting for atmospheric interference.  DOS involves two steps: first converting the image to an at-satellite spectral radiance image, and second converting the at-satellite spectral radiance image to true surface reflectance.  This process was performed using the same 2011 image of the Eau Claire area as in the first part.

Model maker (Figure 3) played a large part in running DOS on the image.  The first step was to use the equation given to convert every band of the image separately into an at-satellite spectral radiance image.  Each band was brought into the model maker and had the equations run on them.

The model maker window with all of the inputs, equations, and outputs is shown here.  This model was run on all six of the bands of the Eau Claire 2011 image to convert them to at-satellite radiance images, as the first part of DOS requires.  (Figure 3)

From here, model maker was once again used to convert all of the radiance images into true surface reflectance images (Figure 4).  The information needed for the equations, such as the pixel values and the atmospheric transmittance from ground to sensor, was all either obtained from the metadata, found online, or given.  At this point all of the layers were stacked to create the final true surface reflectance image.
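
A hedged sketch of the two DOS steps for a single band is shown below: DN to at-satellite radiance, then radiance to surface reflectance with the haze (dark object) value subtracted.  All constants are placeholders that would normally come from the metadata and the lab handout, and the reflectance formula follows a commonly used enhanced-DOS form rather than the lab's equation verbatim.

```python
# Two-step DOS sketch for one band (placeholder values throughout).
import numpy as np

dn = np.random.default_rng(3).integers(1, 256, (100, 100)).astype(float)   # placeholder band

# Step 1: DN to at-satellite spectral radiance
lmin, lmax, qcalmin, qcalmax = -1.52, 193.0, 1.0, 255.0      # placeholder calibration values
radiance = (lmax - lmin) / (qcalmax - qcalmin) * (dn - qcalmin) + lmin

# Step 2: radiance to surface reflectance with the dark object (haze) removed
l_haze = 2.1                 # radiance of the dark object for this band -- placeholder
d = 1.01                     # Earth-Sun distance in astronomical units -- placeholder
esun = 1957.0                # exo-atmospheric solar irradiance for this band -- placeholder
theta_s = np.radians(35.0)   # solar zenith angle -- placeholder
tau_v, tau_z = 0.95, 0.90    # transmittance ground-to-sensor and sun-to-ground -- placeholders
e_down = 0.0                 # downwelling diffuse irradiance -- placeholder

reflectance = (np.pi * d**2 * (radiance - l_haze)) / (
    tau_v * (esun * np.cos(theta_s) * tau_z + e_down))
```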


This is a look at the equation to convert the radiance image of band one into a true surface reflectance image.  Completing these equations for each band of the image was a rather painstaking and involved process as different values exist for every band.  (Figure 4)


Multidate Image Normalization:

Multidate image normalization is a relative atmospheric correction method that is normally used when it is impossible to obtain in situ measurements to perform atmospheric correction or when metadata isn't available for an image.  It is used to normalize interference between two different images taken at different dates.  Multidate image normalization is mainly used for image change detection.

This process was run on images from the Chicago area.  One of the images was from 2000, while the other was taken in 2009.  The first step was to open up spectral profile plots to gather pseudo-invariant features (PIFs) which are like ground control points in a way.  These PIFs were gathered in each image, in the same spot, and only over features that experienced very little change such as airports or water (Figure 5).  The spectral reflectance of the different points can then be viewed in the spectral profile windows (Figure 6).  A total of fifteen PIFs were gathered in this case.

All of the points were selected in the same spot in both images.  A total of fifteen points were selected over features that would've experienced little to no change between 2000 and 2009, such as airports and water features.  (Figure 5)

The spectral signatures for the fifteen PIFs in both images can be seen in the two spectral profile viewers here.  (Figure 6)

At this point, the band values of each of the fifteen PIFs were extracted and brought into Excel (Figure 7) to graph (Figure 8) and to find the equations needed to normalize the two images.  Image normalization correction models (Figure 9) were then developed to generate the final products.  The band layers were then stacked to complete the process.
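
The normalization itself is a per-band linear regression of the subject image's PIF values against the base image's PIF values, applied to the whole band, as sketched below with numpy.  The PIF values shown are placeholders, not the fifteen points collected in the lab.

```python
# Multidate image normalization sketch for one band using PIF regression.
import numpy as np

pif_2000 = np.array([42.0, 55.0, 61.0, 70.0, 88.0, 95.0, 104.0])   # base image PIFs
pif_2009 = np.array([48.0, 60.0, 68.0, 75.0, 96.0, 101.0, 112.0])  # subject image PIFs

slope, intercept = np.polyfit(pif_2009, pif_2000, 1)                # normalize 2009 to 2000
band_2009 = np.random.default_rng(4).uniform(0, 255, (100, 100))    # placeholder band
normalized = slope * band_2009 + intercept
print(f"band normalized with y = {slope:.3f}x + {intercept:.2f}")
```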

This is data on the reflectance of all of the PIFs taken in the various bands.  From here graphs were made in order to find the necessary equations to convert each band.  (Figure 7)

This is an example of one of the graphs generated using the PIF data.  This graph is of band one and the line equation was used to run the model to create the normalized image.  (Figure 8)

This model was made in order to finish the image normalization.  The equations in the models were created from the equations generated in the Excel graphs.  (Figure 9)


Conclusion:

Most processes and analyses in remote sensing applications cannot be performed without first ensuring that the images to be used have low error.  This means that atmospheric correction is a hugely important topic that applies to almost all remote sensing applications.  This lab was an excellent way to introduce and compare several techniques for performing atmospheric correction.  The most robust seemed to be DOS; however, it also requires the most supporting data.  In the end, the atmospheric correction technique used depends on the data available and, of course, the task at hand.

Tuesday, September 9, 2014

Lab 1: Image Quality Assessment and Statistical Analysis

Introduction:

This is the first lab assigned in Geography 438, Advanced Remote Sensing, at the University of Wisconsin-Eau Claire.  It focuses on learning how to extract statistical information from satellite images, developing a model to calculate image correlation analysis, and interpreting the results of the correlation analysis for image classification.  The main focus of the lab is learning how to identify and eliminate data redundancy from satellite images by applying statistical techniques and analysis.  This is a key part of performing image preprocessing.

Methods:

Exploring Data Quality through Feature Space Plots:

The first technique used to explore data quality was looking at feature space plots (Figure 1).  These plots help show whether or not two bands may be highly correlated.  If they do appear to be highly correlated, there may be redundancy present, and one of the bands should be eliminated or further statistical tests should be run to confirm the correlation.

An image of the Eau Claire area taken in 2007 was added to the viewer in ERDAS Imagine.  From here feature space plots were created for combinations of all of the available bands by using the Raster toolbar and looking under Supervised.  By making feature space plots of all of the available band combinations, bands that may correlate and therefore be redundant can be identified (Figure 2).  Bands that have a high amount of variation can be located as well (Figure 3).
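
A feature space plot is simply a two-dimensional histogram of one band against another, which can also be produced with matplotlib as sketched below.  The file name is an assumption.

```python
# Feature space plot of bands 2 and 3 as a 2-D histogram.
import numpy as np
import rasterio
import matplotlib.pyplot as plt

with rasterio.open("ec_2007.tif") as src:          # hypothetical 2007 Eau Claire image
    b2 = src.read(2).astype(float).ravel()
    b3 = src.read(3).astype(float).ravel()

plt.hist2d(b2, b3, bins=256, cmap="inferno")
plt.xlabel("Band 2 DN")
plt.ylabel("Band 3 DN")
plt.title("Feature space plot: bands 2 vs 3")
plt.show()
```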

These are all of the feature space plots created by running the tool in ERDAS Imagine.  The plots work by plotting the reflectance of one of the bands on the x-axis and the other on the y-axis.  As can be seen some of the plots show a large amount of variation and little correlation between the bands, while others appear to be highly correlated with a one to one relationship.  (Figure 1)
This feature space plot shows the relationship between the reflectance in bands 2 and 3.  These two bands appear to be highly correlated and one of them may need to be eliminated in order to reduce redundancy in the image for future use.  (Figure 2)
This feature space plot shows the relationship between the reflectance in bands 4 and 6.  These two bands appear to be greatly varied and have a low correlation.  (Figure 3)


Assessing Image Quality through Correlation Analysis:

The next part of the lab involved creating a model to run correlation analysis on the same image for which feature space plots were created.  Creating feature space plots is a good way to explore whether or not correlation analysis should be run, while correlation analysis gives more definitive information about the bands and whether or not redundancy is present.

The first step was to open up model builder and begin constructing the necessary model (Figure 4).  This model was rather simple to construct as all that was required was an input, a function to calculate correlation (Figure 5), and an output matrix table.

This is the model that was designed in order to perform correlation analysis on the image.  As it can be seen, this model is rather simple and only has one input, function, and output.  (Figure 4)

This is the function that was performed to calculate the correlation of the various bands of the image.  The input can be seen here along with the option to ignore the value zero, which is necessary in this case.  (Figure 5)

After the model was run, the output matrix was cleaned up to look professional so the results could be easily seen and the bands with the highest correlation could be found (Figure 6).  Correlation is measured on a scale of -1 to 1.  The closer the value is to 1 or -1, the higher the correlation; the closer it is to zero, the less correlation is present.  If two bands have a correlation value greater than 0.95, one of them should be eliminated to avoid redundancy.
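
The same correlation analysis can be reproduced outside the ERDAS model with numpy, as sketched below: each band becomes a row of observations, zero (background) values are ignored, and np.corrcoef returns the band-to-band matrix.  The file name is an assumption.

```python
# Band-to-band correlation matrix, ignoring zero (background) pixels.
import numpy as np
import rasterio

with rasterio.open("ec_2007.tif") as src:                  # hypothetical image
    bands = src.read().astype(float)                       # (n_bands, rows, cols)

flat = bands.reshape(bands.shape[0], -1)
valid = ~np.any(flat == 0, axis=0)                         # ignore zero value, as in the model
corr = np.corrcoef(flat[:, valid])                         # n_bands x n_bands matrix

print(np.round(corr, 3))
pairs = np.argwhere(np.triu(np.abs(corr) > 0.95, k=1))     # band pairs above the 0.95 cutoff
print("highly correlated band pairs (1-based):", pairs + 1)
```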

This is the correlation matrix generated by the constructed model for the image of the area surrounding Eau Claire.  The two bands with the highest correlation and the most redundancy are bands 2 and 3.  This could mean that one of these bands should be eliminated before proceeding with more image processing/analysis.  However, the value isn't greater than 0.95, the typical cutoff, so elimination may not be necessary if the analyst feels that both of these bands will be instrumental in further analysis.  (Figure 6)

The same process was then run with high resolution images of the Florida Keys (Figure 7) and the Sundarbans (Figure 8).  The results can be seen in the matrix tables of Figures 9 and 10, respectively.

This is a high resolution image taken of an area in the Florida Keys which was analyzed using correlation analysis.     (Figure 7).

This is a high resolution image taken of an area in the Sundarbans which was analyzed using correlation analysis.       (Figure 8)

This is the final correlation matrix of the Florida Keys image.  It can be seen that Band 1 and Band 2 are highly correlated and it should be highly considered to eliminate one of them to reduce the redundancy before proceeding with other analysis of the image.  (Figure 9)

This is the final correlation matrix of the Sundarbans image.  It can be seen that Band 1 and Band 2 are highly correlated and it should be highly considered to eliminate one of them to reduce the redundancy before proceeding with other analysis of the image.  (Figure 10)

Conclusion:

It is important when performing image preprocessing to check for redundancy in an image.  This can be explored initially by creating feature space plots, which will show if correlation analysis may be necessary.  If correlation analysis does appear to be necessary, it can be easily run by creating a model.  Once the correlation analysis has been run and the output table created, it can be seen which bands may be redundant by looking at the correlation values and observing how near they are to 1.  From here, redundant bands should be eliminated before moving on.

Sunday, May 4, 2014

Lab 8: Spectral Signature Analysis

Introduction:

This assigned lab involved using satellite imagery in ERDAS to check the spectral reflectance signatures of various surfaces around the Eau Claire area.  The purpose of this is to observe the spectral reflectance patterns on a graph and analyze them.


Methods:

A satellite image of the area surrounding Eau Claire, provided by professor Cyril Wilson and derived from Landsat, was brought into a viewer in ERDAS Imagine.  From there, a set of twelve different Earth surfaces was listed to be found and have their spectral signatures analyzed in a graph.  The twelve surfaces were:  standing water, moving water, vegetation, riparian vegetation, crops, urban grass, dry soil, moist soil, rock, asphalt highway, airport runway, and a parking lot (concrete surface).

Collecting a spectral signature is a rather uncomplicated process.  It involves digitizing a polygon on the specific area of the surface feature desired.  From here, the signature editor can be opened from the raster processing tools, and the spectral mean plot of the area analyzed can be seen (Figure 1).  This shows the reflectance levels in the different bands, which cover various wavelength ranges of the electromagnetic spectrum.  Band one is the blue band, band two is green, band three is red, band four is near infrared (NIR), band five is shortwave infrared, and band six is thermal infrared.  The levels of these bands varied based on the type of surface analyzed, as reflectance varies greatly with the material.
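
Under the hood, a spectral signature is just the mean value of each band over the pixels inside the digitized polygon.  The sketch below reproduces that with rasterio and numpy; the file name and polygon coordinates are hypothetical, and treating zero as nodata is an assumption.

```python
# Compute a spectral signature as per-band means inside a digitized polygon.
import numpy as np
import rasterio
from rasterio.mask import mask

polygon = {"type": "Polygon",
           "coordinates": [[(620000, 4970000), (620300, 4970000),
                            (620300, 4970300), (620000, 4970300),
                            (620000, 4970000)]]}            # hypothetical AOI in map coordinates

with rasterio.open("ec_landsat.tif") as src:                 # hypothetical image
    clipped, _ = mask(src, [polygon], crop=True, nodata=0)

clipped = clipped.astype(float)
clipped[clipped == 0] = np.nan                               # drop pixels outside the polygon
signature = np.nanmean(clipped, axis=(1, 2))                 # mean value per band
print("spectral signature (band means):", np.round(signature, 2))
```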

The spectral signature of standing water in Lake Wissota was high in the blue band and nearly non-existent in the infrared bands, which is to be expected of water as it absorbs nearly all of the energy at the longer wavelengths.  (Figure 1)

All twelve of the aforementioned features were gathered methodically (Figure 2 and Figure 3).  The features were identified by linking Google Earth to ERDAS.  This use of Google Earth as a key aided greatly in locating features such as dry soil and concrete.

All twelve of the required features were gathered and analyzed to see if they matched what was expected as far as reflectance values.  (Figure 2)

The various spectral signatures can all be viewed in one window for comparison purposes.  The greatest variation among the different features appears to be in the shortwave infrared.  (Figure 3)

Conclusion:

The various surface signatures can be easily gathered using ERDAS and the signature editor tool.  These reflectance signatures at times contain error due to atmospheric or other forms of interference.  Correcting for this allows for proper viewing of spectral signatures and can help determine vegetation quality, surface moisture content, or simply what the surface is (among many other uses).

Tuesday, April 29, 2014

Lab 7: Introduction to Photogrammetric Tasks

Introduction:

The purpose of this lab exercise was to teach how to perform key photogrammetric tasks on aerial photographs and satellite imagery.  It covered the mathematics behind the calculation of photographic scales, measurement of areas and their perimeters, and calculating relief displacement in images.  The latter portion of the lab was designed to focus on performing orthorectification on satellite images.  These processes are extremely relevant in the field of remote sensing and as the instructor, Professor Cyril Wilson, explained, with the skills learned in this lab anyone in the class would be able to get a job working with remote sensing technologies.  This lab was much more technical than previous labs and therefore required more time and precision as well.


Methods:

Part 1:

This first section of the lab covered map scales, measurement of these scales and objects in the image, and calculating relief displacement of an image.

The first required task of this section was to calculate the scale of an aerial photograph of the city of Eau Claire (Figure 1).  The distance from point A to point B had been measured in the real world to be 8,822.49'.  From here the scale of the aerial photograph was found by measuring the distance from point A to point B in the image, which came out to be 2.65".  The scale was then calculated by comparing the two measurements: 8,822.49' equals 105,869.88", so every 2.65" on the image covers 105,869.88" on the ground.  Dividing both numbers by 2.65 gives a scale of approximately 1:39,950.
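
The same calculation in a few lines of Python, using the values given in the lab:

```python
# Photo scale from a measured photo distance and its known ground distance.
ground_distance_ft = 8822.49          # real-world distance from point A to point B
photo_distance_in = 2.65              # distance measured on the photo

ground_distance_in = ground_distance_ft * 12                 # 105,869.88 inches
scale_denominator = ground_distance_in / photo_distance_in
print(f"photo scale ≈ 1:{scale_denominator:,.0f}")           # roughly 1:39,950
```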

The scale of this aerial image of Eau Claire, WI was calculated to be 1:39,950 using the distance measured in the image from point A to point B and comparing it to the distance measured in real life which was given.  (Figure 1)

The scale of a similar aerial image of Eau Claire, WI was then calculated using the focal length of the camera lens, the altitude of the aircraft, and the elevation of Eau Claire County.  The focal length of the camera lens was given as 152 mm, which was converted to 0.152 m.  The altitude of the aircraft was given as 20,000 feet, which was converted to 6,096 m.  The elevation of Eau Claire County was given as 796 ft (242.6 m).  The scale was calculated by subtracting the elevation of Eau Claire County from the aircraft altitude and then dividing the camera focal length by this value.  The resulting scale came out to approximately 1:39,265.

The area and perimeter of a lagoon then needed to be measured.  This was done by digitizing a polygon in ERDAS (Figure 2) and reading off the measurements of that polygon.

The polygon was digitized around the lagoon in order to calculate the area and perimeter of the lagoon.  More or less precision can be used when using this technique, though it is important for the geometry of the image to be correct in order to obtain accurate measurements.  (Figure 2) 

It was then required to calculate the relief displacement of an object in a zoomed-in portion of an aerial photograph of an area near the University of Wisconsin-Eau Claire campus (Figure 3).  This involved knowing the scale of the aerial photograph (1:3,209), knowing the height of the camera above the datum (47,760"), measuring the distance of object A from the principal point on the image, the point on which the camera is centered (10.5"), and calculating the real-world height of object A, a smokestack (1,123.15"), which was found by measuring on the image and applying the image scale.  By taking the height of the smokestack, multiplying it by its distance from the principal point on the image, and dividing by the height of the airplane above the datum, the relief displacement of the smokestack was found to be .246" away from the principal point.
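
The relief displacement formula d = h × r / H, worked through in Python with the lab's values (h the object height, r its radial distance from the principal point on the photo, H the camera height above the datum):

```python
# Relief displacement of the smokestack using the values reported in the lab.
object_height_in = 1123.15     # real-world height of the smokestack, in inches
radial_distance_in = 10.5      # distance from the principal point on the photo, in inches
flying_height_in = 47760.0     # camera height above the datum, in inches

displacement = object_height_in * radial_distance_in / flying_height_in
print(f"relief displacement ≈ {displacement:.3f} inches")
```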


The smokestack (object A) in this image is shown leaning away from the principal point.  This is due to relief displacement, which was calculated to be .246".  A correction should be run so that the smokestack is shown from directly above.  (Figure 3)

Part 2:

This portion of the lab involved creating a stereoscopic image using an orthorectified image and a DEM (Figure 4) of the city of Eau Claire.  Using the Anaglyph Generation tool in ERDAS, the DEM and the image were input and an output of a stereoscopic image was created (Figure 5), which can be viewed when wearing Polaroid glasses.


The left is the orthorectified image of Eau Claire, WI while the right is a DEM of the same area.  These two images were input in the Anaglyph Generation tool to create Figure 5.  (Figure 4)

This is a screenshot of the anaglyphic image output.  When viewed with Polaroid glasses, it can be seen that the limited accuracy of the DEM may have caused this image to depart from reality in places.  The wooded area of Putnam Park is extremely steep and changes elevation rapidly, which was hard for the DEM and the stereoscopic image to account for.  (Figure 5)

Part 3:

This large portion of the lab involved orthorectification of images using the ERDAS Imagine Leica Photogrammetric Suite (LPS) (Figure 6).
This is the LPS window after the image to be orthorectified has been input, though the image can't be seen yet as Ground Control Points (GCPs) need to be collected. (Figure 6)

Two SPOT satellite images of Palm Springs, California were required to be orthorectified.  LPS Project Manager was opened and the imagery needing to be orthorectified was added after ensuring that all settings in the project were set to ideal values.  From here the Classic Point Measurement Tool (Figure 7) was opened to collect GCPs on the first image of Palm Springs.  The reference image was then brought in (Figure 8).  The reference image in this case was an image of Palm Springs which had already been orthorectified.


The Classic Point Measurement Tool can be used to orthorectify an image by creating GCPs and generating tie points.  Here, the input image that is required to be orthorectified can be seen.  (Figure 7)
The previously orthorectified image which was used to place GCPs can be seen to the left, while the first Palm Springs image that is required to be orthorectified can be seen to the right.  From here GCPs were added back and forth between the two images.  (Figure 8)

A total of nine GCPs were collected between these two images (nine on each image).  Another separate reference image was then added in order to ensure accuracy and a final two GCPs were added.  The GCPs then had their Type and Usage changed to Full and Control to properly designate the points.  A DEM was then brought in to create a z (elevation) value for the GCPs.

Some of the set control points are listed here.  The x and y references were set using the two orthorectified reference images while the z reference was set using a DEM.  (Figure 9)
The other image requiring orthorectification was then brought in, and GCPs were created to help orthorectify both images.  These GCPs were based on the GCPs already placed in the first image and were placed in the corresponding locations on the second image (Figure 10).  From here tie points could be created.

The triangles represent GCPs between the two images to be orthorectified.  The area where they overlap is where the GCPs will be drawn from to create one output image.  (Figure 10)
Automatic tie point generation (Figure 11) was then run, which used the GCPs and created more tie points in order to ensure the images matched up well (Figure 12).  Orthorectification was then run, which gave a single output image from the two original images (Figure 13).

The Triangulation Tool created tie points between the two images which can be seen in Figure 12.  (Figure 11)
The triangles represent the GCPs between the two images, while the squares represent the locations of tie points generated automatically.  From here the orthorectification was run and an output image was given.  (Figure 12)
The final orthorectified image of Palm Springs, California is a combination of the two original images that were required to be orthorectified.  GCPs and tie points were used to ensure geometric integrity between the two images.  (Figure 13)

Discussion:

The process of creating the final orthorectified image was rather painstaking and required the use of a large number of GCPs and tie points and a high degree of accuracy.  However, this paid off greatly as the two input images fused together almost seamlessly (Figure 14).

It is nearly impossible to tell where one input image ends and the other begins.  This is a zoomed-in view of the orthorectified image at the border of the two original input images, and all of the features clearly match up extremely well.  This is due to the high degree of accuracy of the GCPs and tie points and points to orthorectification being a powerful tool in remote sensing.  (Figure 14)

Conclusion:

All of the skills learned in this lab are extremely technical.  Not only that, they are extremely useful and powerful as well.  Knowing how to perform the tasks learned in this lab well can help in obtaining a job, as orthorectification and the other skills learned are extremely valuable and it is rare to find people who possess the knowledge to perform them.  This lab taught these skills well, and they should be repeatable in the future by any member of the class who completed them.