Remote Sensing, Vol. 18, Pages 141: Shape-Aware Refinement of Deep Learning Detections from UAS Imagery for Tornado-Induced Treefall Mapping
Remote Sensing doi: 10.3390/rs18010141
Authors:
Mitra Nasimi
Richard L. Wood
This study presents a geometry-based post-processing framework developed to refine deep-learning detections of tornado-damaged trees. A YOLO11-based instance segmentation framework served as the baseline, but its predictions often included multiple masks for a single tree or incomplete fragments of the same trunk, particularly in dense canopy areas or within tiled orthomosaics. Overlapping masks led to duplicated predictions of the same tree, while fragmentation broke a single fallen trunk into disconnected parts. Both issues reduced the accuracy of tree-count estimates and weakened orientation analysis, two factors that are critical for treefall-mapping methods. To resolve these problems, a Shape-Aware Non-Maximum Suppression (SA-NMS) procedure was introduced. The method evaluated each mask’s collinearity and, based on its geometric condition, decided whether segments should be merged, separated, or suppressed. A spatial assessment then aggregated prediction vectors within a defined Region of Interest (ROI), reconnecting trunks that were divided by obstacles or tile boundaries. The proposed method, applied to high-resolution orthomosaics from the December 2021 Land Between the Lakes tornado, achieved 76.4% and 77.1% instance-level orientation agreement accuracy in two validation zones.
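The abstract's core geometric test, deciding whether two mask fragments belong to the same fallen trunk based on elongation and collinearity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, thresholds, and the PCA-based collinearity measure are assumptions, and masks are represented as simple (N, 2) arrays of pixel coordinates.

```python
import numpy as np

def principal_axis(points):
    """Return the unit principal axis and elongation ratio of a point set via PCA."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    axis = vecs[:, -1]                 # direction of largest variance
    elongation = vals[-1] / max(vals[0], 1e-9)  # guard against zero minor variance
    return axis, elongation

def should_merge(mask_a, mask_b, angle_tol_deg=10.0, min_elongation=5.0):
    """Hypothetical merge rule: fuse two fragments only if both are elongated
    (trunk-like) and their principal axes, plus the offset between their
    centroids, are collinear within an angular tolerance."""
    ax_a, e_a = principal_axis(mask_a)
    ax_b, e_b = principal_axis(mask_b)
    if e_a < min_elongation or e_b < min_elongation:
        return False  # not trunk-like; leave to standard suppression
    cos_angle = abs(float(np.dot(ax_a, ax_b)))  # axial: 180-degree symmetric
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle > angle_tol_deg:
        return False
    # Require the centroid offset to lie roughly along the shared axis,
    # so parallel but side-by-side trunks are not fused.
    offset = mask_b.mean(axis=0) - mask_a.mean(axis=0)
    norm = np.linalg.norm(offset)
    if norm < 1e-9:
        return True  # coincident fragments: duplicate detection
    along = abs(float(np.dot(offset / norm, ax_a)))
    return along > np.cos(np.radians(angle_tol_deg))
```

Under this sketch, two collinear elongated fragments separated by a gap (e.g. a trunk split by a tile boundary) would merge, while a perpendicular or compact mask would be handled by ordinary suppression instead.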
