Cite this article: Tian Tian, Wang Di, Wang Zhen, Li Huibin. Precise classification of crops in areas with complex planting structure based on deep learning models[J]. Chinese Journal of Agricultural Resources and Regional Planning, 2022, 43(12): 147-158.
Precise Classification of Crops in Areas with Complex Planting Structure Based on Deep Learning Models
Tian Tian1,2, Wang Di1,2, Wang Zhen3, Li Huibin1
1. Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing 100081, China; 2. Key Laboratory of Agricultural Remote Sensing, Ministry of Agriculture and Rural Affairs, Beijing 100081, China; 3. School of Chemistry and Life Science, Anshan Normal University, Anshan 114007, Liaoning, China
Abstract:
Objective: Satellite remote sensing offers wide coverage, short revisit periods, and low survey costs, and is therefore widely used for crop classification over large areas. In areas with complex planting structures (such as urban-rural fringes), however, fields are fragmented and many crop types grow concurrently in scattered patches, so traditional statistical classification and machine learning methods still classify crops with low accuracy. This study aims to improve crop classification accuracy in such areas. Methods: Guangyang District, Langfang City, Hebei Province was selected as the study area. Using GF-1 PMS fused panchromatic-multispectral imagery as the data source, three deep learning models (U-Net, PSPNet, and DeepLabv3+) were applied to crop classification. The influence of model parameters on classification accuracy was analyzed, the accuracy of the three models was evaluated, and the better-performing method for precise crop classification was selected. Results: (1) Learning rate was positively correlated with the classification accuracy of all three models: at the larger learning rates (0.01, 0.001), the models converged faster and classified more accurately. Batch size affected classification stability, which was best for all three models at a batch size of 100. (2) U-Net outperformed PSPNet and DeepLabv3+, achieving an overall classification accuracy of 89.32%. (3) GF-1 PMS imagery combined with the U-Net model effectively improved crop classification accuracy in the complex planting structure area: the staple crops spring corn and summer corn were classified with over 80% accuracy, and the minor crops peanut, sweet potato, and vegetables with over 60% accuracy. Conclusion: The study provides a reference for accurately obtaining crop type, area, and spatial distribution information in areas with complex planting structures.
Key words: deep learning; crop classification; remote sensing; convolutional neural network; GF-1 satellite
DOI:10.7621/cjarrp.1005-9121.20221216
CLC number: P237
Funding: National Natural Science Foundation of China key program "Research on 'trinity' spatial sampling theory and the construction of its dual lookup table" (41531179)
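The abstract reports that the three segmentation models converged fastest and classified best at learning rates of 0.01 and 0.001, and were most stable at a batch size of 100. As a minimal sketch (not the authors' code), the PyTorch snippet below shows where those two hyperparameters enter a typical semantic-segmentation training loop; the tiny encoder-decoder stand-in for U-Net, the class count, and the synthetic 4-band tiles are illustrative assumptions.

# Minimal sketch, assuming PyTorch; shows how learning rate (0.001) and
# batch size (100) from the paper plug into a segmentation training loop.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 7  # assumption: background + 6 crop classes

class TinySegNet(nn.Module):
    """A minimal encoder-decoder; a stand-in for the U-Net used in the paper."""
    def __init__(self, in_ch=4, n_classes=NUM_CLASSES):  # GF-1 PMS fused imagery has 4 bands
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 3, padding=1),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

# Synthetic stand-in for GF-1 PMS patches: 4-band 64x64 tiles with per-pixel labels.
images = torch.randn(200, 4, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (200, 64, 64))
loader = DataLoader(TensorDataset(images, labels), batch_size=100, shuffle=True)  # batch size from the paper

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # one of the two learning rates the paper found effective
criterion = nn.CrossEntropyLoss()

for epoch in range(2):  # a real run would train far longer
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)  # per-pixel cross-entropy
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")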
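The accuracy figures quoted in the abstract (overall accuracy of 89.32%; per-class accuracy above 80% for corn and above 60% for the minor crops) are the standard metrics derived from a classification confusion matrix. The NumPy sketch below shows one common way to compute them; the matrix values are made up for demonstration, and the paper does not publish its evaluation code.

# Minimal sketch, assuming NumPy: overall accuracy and per-class
# producer's accuracy from a confusion matrix (values are hypothetical).
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """cm[i, j] = number of pixels of true class i predicted as class j."""
    overall = np.trace(cm) / cm.sum()          # correctly classified / total
    per_class = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (recall) per class
    return overall, per_class

cm = np.array([[90,  5,  5],
               [ 8, 85,  7],
               [10, 15, 75]])  # hypothetical 3-class confusion matrix
oa, pa = accuracy_metrics(cm)
print(f"overall accuracy: {oa:.2%}", "per-class:", np.round(pa, 3))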