FUELVISION: A Multimodal Data Fusion and Multimodel Ensemble Algorithm for Wildfire Fuels Mapping
arXiv (2024)
Abstract
Accurate assessment of fuel conditions is a prerequisite for predicting fire
ignition and behavior, and for managing fire risk. The method proposed herein
leverages diverse data sources including Landsat-8 optical imagery, Sentinel-1
(C-band) Synthetic Aperture Radar (SAR) imagery, PALSAR (L-band) SAR imagery,
and terrain features to capture comprehensive information about fuel types and
distributions. An ensemble model was trained to predict landscape-scale fuels
such as the 'Scott and Burgan 40' using the as-received Forest Inventory and
Analysis (FIA) field survey plot data obtained from the USDA Forest Service.
However, this basic approach yielded relatively poor results due to the
inadequate amount of training data. Pseudo-labeled and fully synthetic datasets
were developed using generative AI approaches to address the limitations of
ground truth data availability. These synthetic datasets were used for
augmenting the FIA data from California to enhance the robustness and coverage
of model training. An ensemble of methods including deep learning
neural networks, decision trees, and gradient boosting achieved a fuel-mapping
accuracy of nearly 80%. Through extensive experimentation and evaluation, the
effectiveness of the proposed approach was validated for regions of the 2021
Dixie and Caldor fires. Comparative analyses against high-resolution data from
the National Agriculture Imagery Program (NAIP) and timber harvest maps
affirmed the robustness and reliability of the proposed approach, which is
capable of near-real-time fuel mapping.
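The multimodel ensemble described above can be sketched as a soft-voting combination of the three learner families the abstract names. This is a minimal illustrative sketch, not the authors' implementation: the feature stack, class labels, and all hyperparameters are placeholder assumptions, standing in for the per-pixel Landsat-8, Sentinel-1, PALSAR, and terrain bands and the 'Scott and Burgan 40' fuel classes.

```python
# Hypothetical sketch (not the paper's code): soft-voting ensemble of a
# neural network, a decision tree, and gradient boosting over a stacked
# multimodal feature vector.
import numpy as np
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder features: one row per pixel/plot, columns standing in for
# optical, SAR, and terrain bands (random values; shapes are illustrative).
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, size=200)  # illustrative fuel-class labels

ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
        ("tree", DecisionTreeClassifier(max_depth=8, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average the members' predicted class probabilities
)
ensemble.fit(X, y)
preds = ensemble.predict(X)  # one fuel-class prediction per input row
```

In a soft-voting scheme each member contributes a probability distribution over classes rather than a hard label, which tends to smooth over the individual models' errors; the paper's actual combination rule may differ.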