Evaluation of Texture as an Input of Spatial Context for Machine Learning Mapping of Wildland Fire Effects

Faculty Mentor Information

Dale Hamilton, Barry Myers

Presentation Date

July 2017

Abstract

A variety of machine learning algorithms have been used to map wildland fire effects, but previous attempts to map post-fire effects have relied on relatively low-resolution satellite imagery. Small unmanned aircraft systems (sUAS) provide opportunities to acquire imagery with much higher spatial resolution than is possible with satellites or manned aircraft. To investigate the accuracy improvements achievable when mapping post-fire effects with machine learning algorithms on hyperspatial (sub-decimeter) drone imagery, texture was added as an additional input to the machine learning classifier. A tool was created that measured the spatial context of imagery using a variety of texture metrics. Spatial context was then analyzed with information gain to identify the optimal texture metric to use as an additional classifier input. Classification accuracy using color imagery alone was compared against accuracy when texture was additionally included as a fourth input to the machine learning classifier. Adding texture as a fourth input was found to increase classifier accuracy when mapping post-fire effects.
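The abstract describes deriving a texture band from the imagery and stacking it with the three color bands as a fourth classifier input. A minimal sketch of that idea is shown below, using local standard deviation over a sliding window as one illustrative texture metric; the actual study evaluated several texture metrics and chose among them using information gain, and the window size and metric here are assumptions for illustration only.

```python
import numpy as np

def local_std(gray, win=3):
    """Local standard deviation over a square window.

    One illustrative texture metric (hypothetical choice); the study
    compared multiple texture metrics and selected the best one with
    information gain.
    """
    pad = win // 2
    padded = np.pad(gray, pad, mode="edge")  # replicate edges so output matches input size
    out = np.empty(gray.shape, dtype=float)
    for r in range(gray.shape[0]):
        for c in range(gray.shape[1]):
            out[r, c] = padded[r:r + win, c:c + win].std()
    return out

# Stack the texture band with RGB to form the four-band classifier input.
rgb = np.random.default_rng(0).integers(0, 256, (8, 8, 3)).astype(float)
gray = rgb.mean(axis=2)            # simple luminance proxy
texture = local_std(gray)
features = np.dstack([rgb, texture])  # shape (8, 8, 4): R, G, B, texture
```

A uniform region produces zero texture, while edges between fire-effect classes (e.g., ash versus unburned vegetation) produce high local standard deviation, which is the kind of spatial context a per-pixel color classifier lacks.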


