Fire Monitoring and Assessment Platform (FireMAP): Utilization of Machine Learning Classifiers for Mapping Wildland Fire Extent and Severity
Abstract
Most wildlands in the US have evolved with fire and depend on periodic blazes for health and regeneration. The Fire Monitoring and Assessment Platform (FireMAP) provides a responsive, affordable, and safe capability to monitor the severity of wildland fires. After a fire has been extinguished, an unmanned aerial system (UAS) flies over the affected area and acquires imagery, which is georeferenced and mosaicked into a composite image. The software then analyzes the composite, identifying both the extent and the severity of the burn.
FireMAP utilizes machine learning classifiers to label each pixel in an image as burned or unburned based on its electromagnetic (spectral) signature. Optionally, the classifiers may also classify unburned pixels by vegetation type.
While the classifiers have shown much promise for identifying burn extent and vegetation types, challenges remain in distinguishing the spectral signatures of visually similar surfaces. Furthermore, the classifiers currently run too slowly. Substantial improvement to the prototype classifiers is therefore needed.
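As a rough illustration of the per-pixel classification described above, the sketch below trains a classifier on pixel spectral signatures and predicts a burned/unburned label for every pixel. The choice of classifier (a random forest), the band set, and all array names are assumptions made purely for illustration; the abstract does not specify FireMAP's actual implementation.

```python
# Hypothetical sketch: per-pixel burn classification from band values.
# The classifier, bands, and data shown here are illustrative assumptions,
# not FireMAP's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assume `image` is an (H, W, B) array of band values (e.g., R, G, B, NIR)
# and `labels` is an (H, W) array of training labels (0 = unburned, 1 = burned).
image = np.random.rand(100, 100, 4)           # placeholder imagery
labels = np.random.randint(0, 2, (100, 100))  # placeholder training labels

# Flatten to one row per pixel so each pixel's spectral signature
# becomes a feature vector.
X = image.reshape(-1, image.shape[-1])
y = labels.reshape(-1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Predict a label for every pixel and reshape back to image dimensions.
burn_map = clf.predict(X).reshape(image.shape[:2])
```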