
How well do the volunteers label land cover types in manual interpretation of remote sensing imagery?

Yan Wang, Chenxi Li, Xueyi Liu, Hongdong Li, Zhiying Yao, Yuanyuan Zhao

International Journal of Digital Earth (2024)

Abstract
High-quality samples for training and validation are crucial for land cover classification, especially in complex scenarios. The reliability, representativeness, and generalizability of the sample set are important for further research. However, manual interpretation is subjective and prone to errors. Therefore, this study investigated the following questions: (1) How much does interpreters' performance differ across educational levels? (2) Do the accuracies of humans and AI (Artificial Intelligence) improve with increased training and supporting material? (3) How sensitive are the accuracies of land cover types to different supporting material? (4) Does interpretation accuracy change with interpreters' consistency? In the experiment, 50 interpreters completed five cycles of manual image interpretation. Interpreters with higher educational backgrounds performed better: accuracies were 52.22% and 58.61% before training, and 61.13% and 70.21% after training. Accuracy generally increased with more supporting material. Ultra-high-resolution images and background knowledge contributed the most to accuracy improvement, while the time series of the normalized difference vegetation index (NDVI) contributed the least. Group consistency was a reliable indicator of the reliability of volunteer samples. With limited samples, AI did not perform as well as manual interpretation. To ensure sample quality in manual interpretation, we recommend inviting educated volunteers, providing training, preparing effective supporting material, and filtering samples based on group consistency.
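The abstract does not specify how group consistency is computed or which cutoff is used for filtering; a minimal majority-vote sketch (the function names and the 0.7 threshold are illustrative assumptions, not the authors' method) could look like:

```python
from collections import Counter

def majority_consistency(labels):
    """Return the majority label and the fraction of interpreters who chose it."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(labels)

def filter_by_consistency(votes, threshold=0.7):
    """Keep only samples whose interpreter agreement meets the threshold.

    votes: {sample_id: [label assigned by each interpreter]}
    Returns {sample_id: consensus_label} for the retained samples.
    """
    kept = {}
    for sample_id, labels in votes.items():
        label, consistency = majority_consistency(labels)
        if consistency >= threshold:
            kept[sample_id] = label
    return kept
```

For example, a sample labeled "cropland" by 8 of 10 interpreters (consistency 0.8) would be retained with the consensus label, while a 5-versus-5 split (consistency 0.5) would be discarded as unreliable.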
Keywords
Image interpretation, Land cover, Training and test sample, Reference material, Crowdsourcing