D-PAG: Cross-modal Wolfberry Pest Recognition Model Based on Parameter-Efficient Fine-Tuning
Received date: 2024-09-09
Accepted date: 2024-10-14
Online published: 2024-12-02
Funding: National Natural Science Foundation of China (32460444); Key Scientific Research Project of North Minzu University (2023ZRLG12); Graduate Innovation Project of North Minzu University (YCX24126)
XING Jialu, LIU Jianping, ZHOU Guomin, LIU Libo, WANG Jian. D-PAG: Cross-modal Wolfberry Pest Recognition Model Based on Parameter-Efficient Fine-Tuning[J]. Journal of Agricultural Big Data, 2024, 6(4): 509-521. DOI: 10.19788/j.issn.2096-6369.000067
With the development of multimodal foundation models (large models), efficiently transferring them to specific domains or tasks has become both a hot topic and a difficult problem. This study takes the multimodal large model CLIP as the base model and uses the parameter-efficient fine-tuning methods Prompt and Adapter to adapt CLIP to wolfberry pest recognition, proposing a cross-modal parameter-efficient fine-tuning model named D-PAG. D-PAG first embeds learnable Prompts and Adapters in the input and hidden layers of the CLIP encoders, and these are trained to capture pest features. Gated units then integrate the Prompts and Adapters into the CLIP encoder network, balancing their respective influence on feature extraction, and a GCS-Adapter attention module is designed within the Adapter to strengthen cross-modal semantic information fusion. To validate the method, experiments were conducted on a wolfberry pest dataset and the fine-grained dataset IP102. Training with only 20% of the samples achieves 98.8% accuracy on the wolfberry dataset, rising to 99.5% with 40% of the samples; on IP102, accuracy reaches 75.6%, on a par with ViT. Under few-shot conditions and with very few additional parameters, this approach efficiently transfers the general knowledge of a multimodal large model to the specific domain of pest recognition, providing a new technical route for using large models in agricultural image processing.
Key words: wolfberry; pest identification; parameter-efficient fine-tuning; large model; CLIP
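As a rough illustration of the mechanism the abstract describes, the PyTorch sketch below wraps a frozen CLIP transformer block with learnable prompt tokens and a gated bottleneck adapter. All names and design details (GatedPromptAdapterBlock, clip_block, num_prompts, bottleneck, the scalar sigmoid gate) are assumptions for illustration; the paper's actual D-PAG layout and its GCS-Adapter attention are not reproduced here.

```python
import torch
import torch.nn as nn


class GatedPromptAdapterBlock(nn.Module):
    """Wrap one frozen CLIP transformer block with learnable prompt tokens
    and a gated bottleneck adapter. Illustrative sketch only: not the
    paper's exact D-PAG or GCS-Adapter design."""

    def __init__(self, clip_block: nn.Module, dim: int = 768,
                 num_prompts: int = 8, bottleneck: int = 64):
        super().__init__()
        self.clip_block = clip_block
        for p in self.clip_block.parameters():
            p.requires_grad = False  # the large model stays frozen

        # Learnable prompt tokens prepended to the input token sequence.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

        # Bottleneck adapter: down-project, non-linearity, up-project.
        self.adapter = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.GELU(),
            nn.Linear(bottleneck, dim),
        )

        # Scalar gate balancing the adapter's contribution against the
        # frozen block's output; sigmoid keeps it in (0, 1).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); assumes clip_block maps this shape
        # to itself, as standard transformer blocks do.
        batch = x.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompts, x], dim=1)   # prepend prompt tokens

        h = self.clip_block(x)               # frozen CLIP computation
        g = torch.sigmoid(self.gate)
        h = h + g * self.adapter(h)          # gated adapter residual

        # Drop the prompt positions so downstream shapes are unchanged.
        return h[:, self.prompts.size(0):, :]
```

Training would then optimize only `self.prompts`, `self.adapter`, and `self.gate` (e.g., by passing just those parameters to the optimizer), which is how the number of extra trainable parameters stays tiny relative to CLIP itself.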
| [1] | DAI G, FAN J, TIAN Z, et al. PPLC-Net: Neural network-based plant disease identification model supported by weather data augmentation and multi-level attention mechanism[J]. Journal of King Saud University - Computer and Information Sciences, 2023, 35(5):101555. https://doi.org/10.1016/j.jksuci.2023.101555. |
| [2] | ZHOU G M. A review of the application progress of agricultural big data in China[J]. Journal of Agricultural Big Data, 2019, 1(1):16-23. DOI:10.19788/j.issn.2096-6369.190102. (in Chinese) |
| [3] | ZHANG L X, HAN R, LI W M, et al. Research progress of big data deep learning systems and typical agricultural applications[J]. Journal of Agricultural Big Data, 2019, 1(2):88-104. DOI:10.19788/j.issn.2096-6369.190208. (in Chinese) |
| [4] | HUANG M L, CHUANG T C, LIAO Y C. Application of transfer learning and image augmentation technology for tomato pest identification[J]. Sustainable Computing: Informatics and Systems, 2022, 33:100646. https://doi.org/10.1016/j.suscom.2021.100646. |
| [5] | NIGAM S, JAIN R, MARWAHA S, et al. Deep transfer learning model for disease identification in wheat crop[J]. Ecological Informatics, 2023, 75:102068. https://doi.org/10.1016/j.ecoinf.2023.102068. |
| [6] | BAO W, CHENG T, ZHOU X G, et al. An improved DenseNet model to classify the damage caused by cotton aphid[J]. Computers and Electronics in Agriculture, 2022, 203:107485. https://doi.org/10.1016/j.compag.2022.107485. |
| [7] | YU S, XIE L, HUANG Q. Inception convolutional vision transformers for plant disease identification[J]. Internet of Things, 2023, 21:100650. https://doi.org/10.1016/j.iot.2022.100650. |
| [8] | SUDHESH K M, SOWMYA V, SAINAMOLE KURIAN P, et al. AI based rice leaf disease identification enhanced by Dynamic Mode Decomposition[J]. Engineering Applications of Artificial Intelligence, 2023, 120:105836. https://doi.org/10.1016/j.engappai.2023.105836. |
| [9] | CHODEY M D, SHARIFF N C. Pest detection via hybrid classification model with fuzzy C-means segmentation and proposed texture feature[J]. Biomedical Signal Processing and Control, 2023, 84:104710. |
| [10] | LIANG W J, GUO Q W, WANG C T, et al. Few-shot pest classification using a spatial-attention-enhanced ResNeSt-101 network and transfer meta-learning[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(6):285-297. (in English) |
| [11] | RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]// International Conference on Machine Learning, PMLR, 2021:8748-8763. arXiv:2103.00020. |
| [12] | COULIBALY S, KAMSU-FOGUEM B, KAMISSOKO D, et al. Explainable deep convolutional neural networks for insect pest recognition[J]. Journal of Cleaner Production, 2022, 371:133638. https://doi.org/10.1016/j.jclepro.2022.133638. |
| [13] | NIGAM S, JAIN R, MARWAHA S, et al. Deep transfer learning model for disease identification in wheat crop[J]. Ecological Informatics, 2023, 75:102068. https://doi.org/10.1016/j.ecoinf.2023.102068. |
| [14] | ZHOU C, ZHONG Y, ZHOU S, et al. Rice leaf disease identification by residual-distilled transformer[J]. Engineering Applications of Artificial Intelligence, 2023, 121:106020. https://doi.org/10.1016/j.engappai.2023.106020. |
| [15] | DAI G, FAN J, DEWI C. ITF-WPI: Image and text based cross-modal feature fusion model for wolfberry pest recognition[J]. Computers and Electronics in Agriculture, 2023, 212:108129. https://doi.org/10.1016/j.compag.2023.108129. |
| [16] | SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[OL]. arXiv:1312.6199. |
| [17] | TRIPATHY S, TABASUM M. Autoencoder: An unsupervised deep learning approach[M]// DUTTA P, CHAKRABARTI S, BHATTACHARYA A, et al (Eds.). Emerging Technologies in Data Mining and Information Security. Springer, 2023:261-267. |
| [18] | KINGMA D P, WELLING M. Auto-encoding variational bayes[OL]. arXiv:1312.6114. |
| [19] | HE K, CHEN X, XIE S, et al. Masked autoencoders are scalable vision learners[OL]. 2021. arXiv:2111.06377. |
| [20] | DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019:4171-4186. DOI:10.18653/v1/N19-1423. |
| [21] | ZHONG Z, FRIEDMAN D, CHEN D. Factual probing is [MASK]: Learning vs. learning to recall[C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021:5017-5033. DOI:10.18653/v1/2021.naacl-main.398. |
| [22] | HOULSBY N, GIURGIU A, JASTRZEBSKI S, et al. Parameter-efficient transfer learning for NLP[C]// International Conference on Machine Learning, PMLR, 2019:2790-2799. https://proceedings.mlr.press/v97/houlsby19a.html. |
| [23] | LIU H, TAM D, MUQEETH M, et al. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, 2022. DOI:10.5555/3600270.3600412. |
| [24] | BEN ZAKEN E, GOLDBERG Y, RAVFOGEL S. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models[C]// MURESAN S, NAKOV P, VILLAVICENCIO A (Eds.). Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Dublin, Ireland, 2022:1-9. DOI:10.18653/v1/2022.acl-short.1. |
| [25] | HU E J, SHEN Y, WALLIS P, et al. LoRA: Low-rank adaptation of large language models[OL]. 2021. arXiv:2106.09685. |
| [26] | ZHOU K, YANG J, LOY C C, et al. Learning to prompt for vision-language models[J]. International Journal of Computer Vision, 2022, 130:2337-2348. https://doi.org/10.1007/s11263-022-01653-1. |
| [27] | JIA M, TANG L, CHEN B C, et al. Visual prompt tuning[C]// Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII, Springer-Verlag: 709-727. DOI:10.1007/978-3-031-19827-4_41. |
| [28] | XING J, LIU J, WANG J, et al. A survey of efficient fine-tuning methods for vision-language models - prompt and adapter[J]. Computers & Graphics, 2024, 119:103885. DOI:10.1016/j.cag.2024.01.012. |
| [29] | ROY S, ETEMAD A. Consistency-guided prompt learning for vision-language models[OL]. 2024. arXiv:2306.01195. |
| [30] | CHEN L, LIU L B, WANG X L. An image-text cross-modal retrieval dataset of Ningxia wolfberry pests in 2020[J]. China Scientific Data, 2022, 7(3):149-156. (in Chinese) |
| [31] | WU X, ZHAN C, LAI Y K, et al. IP102: A large-scale benchmark dataset for insect pest recognition[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019:8779-8788. DOI:10.1109/CVPR.2019.00899. |
| [32] | GAO P, GENG S, ZHANG R, et al. CLIP-Adapter: Better vision-language models with feature adapters[J]. International Journal of Computer Vision, 2023. DOI:10.1007/s11263-023-01891-x. |
| [33] | KHATTAK M U, RASHEED H, MAAZ M, et al. MaPLe: Multi-modal prompt learning[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023:19113-19122. DOI:10.1109/CVPR52729.2023.01832. |
| [34] | HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016:770-778. DOI:10.1109/CVPR.2016.90. |
| [35] | DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[OL]. arXiv:2010.11929. |