[1] DAI G, FAN J, TIAN Z, et al. PPLC-Net: Neural network-based plant disease identification model supported by weather data augmentation and multi-level attention mechanism[J]. Journal of King Saud University - Computer and Information Sciences, 2023, 35(5):101555. https://doi.org/10.1016/j.jksuci.2023.101555.
[2] ZHOU G M. A review of the application progress of agricultural big data in China[J]. Journal of Agricultural Big Data, 2019, 1(1):16-23. DOI:10.19788/j.issn.2096-6369.190102. (in Chinese)
[3] ZHANG L X, HAN R, LI W M, et al. Research progress of big data deep learning systems and typical agricultural applications[J]. Journal of Agricultural Big Data, 2019, 1(2):88-104. DOI:10.19788/j.issn.2096-6369.190208. (in Chinese)
[4] HUANG M L, CHUANG T C, LIAO Y C. Application of transfer learning and image augmentation technology for tomato pest identification[J]. Sustainable Computing: Informatics and Systems, 2022, 33:100646. https://doi.org/10.1016/j.suscom.2021.100646.
[5] NIGAM S, JAIN R, MARWAHA S, et al. Deep transfer learning model for disease identification in wheat crop[J]. Ecological Informatics, 2023, 75:102068. https://doi.org/10.1016/j.ecoinf.2023.102068.
[6] BAO W, CHENG T, ZHOU X G, et al. An improved DenseNet model to classify the damage caused by cotton aphid[J]. Computers and Electronics in Agriculture, 2022, 203:107485. https://doi.org/10.1016/j.compag.2022.107485.
[7] YU S, XIE L, HUANG Q. Inception convolutional vision transformers for plant disease identification[J]. Internet of Things, 2023, 21:100650. https://doi.org/10.1016/j.iot.2022.100650.
[8] SUDHESH K M, SOWMYA V, SAINAMOLE KURIAN P, et al. AI based rice leaf disease identification enhanced by Dynamic Mode Decomposition[J]. Engineering Applications of Artificial Intelligence, 2023, 120:105836. https://doi.org/10.1016/j.engappai.2023.105836.
[9] CHODEY M D, SHARIFF N C. Pest detection via hybrid classification model with fuzzy C-means segmentation and proposed texture feature[J]. Biomedical Signal Processing and Control, 2023, 84:104710.
[10] LIANG W J, GUO Q W, WANG C T, et al. Few-shot pest classification based on a spatial-attention-enhanced ResNeSt-101 network and transfer meta-learning (in English)[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(6):285-297.
[11] RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]// Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021:8748-8763. arXiv:2103.00020.
[12] COULIBALY S, KAMSU-FOGUEM B, KAMISSOKO D, et al. Explainable deep convolutional neural networks for insect pest recognition[J]. Journal of Cleaner Production, 2022, 371:133638. https://doi.org/10.1016/j.jclepro.2022.133638.
[13] NIGAM S, JAIN R, MARWAHA S, et al. Deep transfer learning model for disease identification in wheat crop[J]. Ecological Informatics, 2023, 75:102068. https://doi.org/10.1016/j.ecoinf.2023.102068.
[14] ZHOU C, ZHONG Y, ZHOU S, et al. Rice leaf disease identification by residual-distilled transformer[J]. Engineering Applications of Artificial Intelligence, 2023, 121:106020. https://doi.org/10.1016/j.engappai.2023.106020.
[15] DAI G, FAN J, DEWI C. ITF-WPI: Image and text based cross-modal feature fusion model for wolfberry pest recognition[J]. Computers and Electronics in Agriculture, 2023, 212:108129. https://doi.org/10.1016/j.compag.2023.108129.
[16] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[OL]. arXiv:1312.6199.
[17] TRIPATHY S, TABASUM M. Autoencoder: An unsupervised deep learning approach[M]// DUTTA P, CHAKRABARTI S, BHATTACHARYA A, et al (Eds.). Emerging Technologies in Data Mining and Information Security. Springer, 2023:261-267.
[18] KINGMA D P, WELLING M. Auto-encoding variational Bayes[OL]. arXiv:1312.6114.
[19] HE K, CHEN X, XIE S, et al. Masked autoencoders are scalable vision learners[OL]. 2021. arXiv:2111.06377.
[20] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019:4171-4186. DOI:10.18653/v1/N19-1423.
[21] ZHONG Z, FRIEDMAN D, CHEN D. Factual probing is [MASK]: Learning vs. learning to recall[C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021:5017-5033. DOI:10.18653/v1/2021.naacl-main.398.
[22] HOULSBY N, GIURGIU A, JASTRZEBSKI S, et al. Parameter-efficient transfer learning for NLP[C]// Proceedings of the 36th International Conference on Machine Learning, PMLR, 2019:2790-2799. https://proceedings.mlr.press/v97/houlsby19a.html.
[23] LIU H, TAM D, MUQEETH M, et al. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems, Curran Associates Inc., Red Hook, NY, USA, 2022. DOI:10.5555/3600270.3600412.
[24] BEN ZAKEN E, GOLDBERG Y, RAVFOGEL S. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models[C]// MURESAN S, NAKOV P, VILLAVICENCIO A (Eds.). Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Dublin, Ireland, 2022:1-9. DOI:10.18653/v1/2022.acl-short.1.
[25] HU E J, SHEN Y, WALLIS P, et al. LoRA: Low-rank adaptation of large language models[OL]. 2021. arXiv:2106.09685.
[26] ZHOU K, YANG J, LOY C C, et al. Learning to prompt for vision-language models[J]. International Journal of Computer Vision, 2022, 130:2337-2348. https://doi.org/10.1007/s11263-022-01653-1.
[27] JIA M, TANG L, CHEN B C, et al. Visual prompt tuning[C]// Computer Vision - ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIII. Springer-Verlag, 2022:709-727. DOI:10.1007/978-3-031-19827-4_41.
[28] XING J, LIU J, WANG J, et al. A survey of efficient fine-tuning methods for vision-language models - prompt and adapter[J]. Computers & Graphics, 2024, 119:103885. DOI:10.1016/j.cag.2024.01.012.
[29] ROY S, ETEMAD A. Consistency-guided prompt learning for vision-language models[OL]. 2024. arXiv:2306.01195.
[30] CHEN L, LIU L B, WANG X L. An image-text cross-modal retrieval dataset of Ningxia wolfberry pests in 2020[J]. China Scientific Data, 2022, 7(3):149-156. (in Chinese)
[31] WU X, ZHAN C, LAI Y K, et al. IP102: A large-scale benchmark dataset for insect pest recognition[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019:8779-8788. DOI:10.1109/CVPR.2019.00899.
[32] GAO P, GENG S, ZHANG R, et al. CLIP-Adapter: Better vision-language models with feature adapters[J]. International Journal of Computer Vision, 2024, 132:581-595. DOI:10.1007/s11263-023-01891-x.
[33] KHATTAK M U, RASHEED H, MAAZ M, et al. MaPLe: Multi-modal prompt learning[C]// 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023:19113-19122. DOI:10.1109/CVPR52729.2023.01832.
[34] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016:770-778. DOI:10.1109/CVPR.2016.90.
[35] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[OL]. arXiv:2010.11929.