REFERENCES

1. Zhou C. The status of citrus Huanglongbing in China. Trop Plant Pathol 2020;45:279-84.

2. York T, Jain R. Fundamentals of image sensor performance. Available from: https://www.semanticscholar.org/paper/Fundamentals-of-Image-Sensor-Performance/0011fde4eacafac957ae52d030dbb08202dca1b6. [Last accessed on 17 Oct 2023].

3. Mankoff KD, Russo TA. The Kinect: a low-cost, high-resolution, short-range 3D camera. Earth Surf Process Landforms 2013;38:926-36.

4. Lee S, Ahn H, Seo J, et al. Practical monitoring of undergrown pigs for IoT-based large-scale smart farm. IEEE Access 2019;7:173796-810.

5. Bernotas G, Scorza LC, Hansen MF, et al. A photometric stereo-based 3D imaging system using computer vision and deep learning for tracking plant growth. Gigascience 2019;8:giz056.

6. Veeranampalayam Sivakumar AN, Li J, Scott S, et al. Comparison of object detection and patch-based classification deep learning models on mid- to late-season weed detection in UAV imagery. Remote Sensing 2020;12:2136.

7. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2014. pp. 580-87.

8. Maimaitijiang M, Sagan V, Sidike P, et al. Crop monitoring using satellite/UAV data fusion and machine learning. Remote Sensing 2020;12:1357.

9. Lottes P, Khanna R, Pfeifer J, Siegwart R, Stachniss C. UAV-based crop and weed classification for smart farming. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE; 2017. pp. 3024-31.

10. Sharif N, Nadeem U, Shah SAA, Bennamoun M, Liu W. Vision to language: Methods, metrics and datasets. Machine Learning Paradigms: Advances in Deep Learning-based Technological Applications 2020:9-62.

11. Food and Agriculture Organization of the United Nations. Available from: https://www.fao.org/home/en/. [Last accessed on 16 Oct 2023].

12. Plant Disease Management Strategies. Available from: https://www.apsnet.org. [Last accessed on 16 Oct 2023].

13. Wang J, Yu L, Yang J, Dong H. DBA_SSD: a novel end-to-end object detection algorithm applied to plant disease detection. Information 2021;12:474.

14. Liu W, Anguelov D, Erhan D, et al. SSD: single shot multibox detector. In: Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14. Springer; 2016. pp. 21-37.

15. Hughes D, Salathé M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint arXiv:1511.08060; 2015.

16. Kaur P, Harnal S, Gautam V, Singh MP, Singh SP. An approach for characterization of infected area in tomato leaf disease based on deep learning and object detection technique. Eng Appl Artif Intel 2022;115:105210.

17. Dang F, Chen D, Lu Y, Li Z. YOLOWeeds: A novel benchmark of YOLO object detectors for multi-class weed detection in cotton production systems. Comput Electron Agr 2023;205:107655.

18. Correa JL, Todeschini M, Pérez D, et al. Multi species weed detection with Retinanet one-step network in a maize field. In: Precision agriculture’21. Wageningen Academic Publishers; 2021. pp. 79-86.

19. Sanchez PR, Zhang H, Ho SS, De Padua E. Comparison of one-stage object detection models for weed detection in mulched onions. In: 2021 IEEE International Conference on Imaging Systems and Techniques (IST). IEEE; 2021. pp. 1-6.

20. Cap QH, Uga H, Kagiwada S, Iyatomi H. LeafGAN: an effective data augmentation method for practical plant disease diagnosis. IEEE Trans Autom Sci Eng 2022;19:1258-67.

21. Douarre C, Crispim-Junior CF, Gelibert A, Tougne L, Rousseau D. Novel data augmentation strategies to boost supervised segmentation of plant disease. Comput Electron Agr 2019;165:104967.

22. Zeng Q, Ma X, Cheng B, Zhou E, Pang W. GANs-based data augmentation for citrus disease severity detection using deep learning. IEEE Access 2020;8:172882-91.

23. Nazki H, Lee J, Yoon S, Park DS. Image-to-image translation with GAN for synthetic data augmentation in plant disease datasets. Korean Institute Smart Media 2019;8:46-57.

24. Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature 1986;323:533-6.

25. Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 1980;36:193-202.

26. LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998;86:2278-324.

27. Mohanty SP, Hughes DP, Salathé M. Using deep learning for image-based plant disease detection. Front Plant Sci 2016;7:1419.

28. Saleem MH, Potgieter J, Arif KM. Plant disease detection and classification by deep learning. Plants 2019;8:468.

29. Li L, Zhang S, Wang B. Plant disease detection and classification by deep learning—a review. IEEE Access 2021;9:56683-98.

30. Singh V, Sharma N, Singh S. A review of imaging techniques for plant disease detection. Artificial Intelligence in Agriculture 2020;4:229-42.

31. Newhall A. Herbert Hice Whetzel: pioneer American plant pathologist. Annu Rev Phytopathol 1973;11:27-36.

32. Noble R, Coventry E. Suppression of soil-borne plant diseases with composts: a review. Biocontrol Sci Techn 2005;15:3-20.

33. Vegetable Diseases Cornell Home Page. Available from: http://vegetablemdonline.ppath.cornell.edu/index.html. [Last accessed on 11 May 2023].

34. Adhikari S, Unit D, Shrestha B, Baiju B. Tomato plant diseases detection system. Available from: https://kec.edu.np/wp-content/uploads/2018/10/15.pdf. [Last accessed on 17 Oct 2023].

35. Sen Y, van der Wolf J, Visser RG, van Heusden S. Bacterial canker of tomato: current knowledge of detection, management, resistance, and interactions. Plant Dis 2015;99:4-13.

36. Natarajan VA, Babitha MM, Kumar MS. Detection of disease in tomato plant using deep learning techniques. Available from: https://www.researchgate.net/publication/349860175_Detection_of_disease_in_tomato_plant_using_Deep_Learning_Techniques. [Last accessed on 17 Oct 2023].

37. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2015;39:1137-49.

38. Girshick R. Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision; 2015. pp. 1440-48.

39. Priyadharshini G, Dolly DRJ. Comparative investigations on tomato leaf disease detection and classification using CNN, R-CNN, fast R-CNN and faster R-CNN. In: 2023 9th International Conference on Advanced Computing and Communication Systems (ICACCS). IEEE; 2023. pp. 1540-45.

40. Qi J, Liu X, Liu K, et al. An improved YOLOv5 model based on visual attention mechanism: Application to recognition of tomato virus disease. Comput Electron Agr 2022;194:106780.

41. Wang X, Liu J. Tomato anomalies detection in greenhouse scenarios based on YOLO-Dense. Front Plant Sci 2021;12:634103.

42. Bochkovskiy A, Wang CY, Liao HYM. YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934; 2020.

43. Roy AM, Bose R, Bhaduri J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput & Applic 2022;34:3895.

44. Nagamani H, Sarojadevi H. Tomato leaf disease detection using deep learning techniques. In: 2020 5th International Conference on Communication and Electronics Systems (ICCES). IEEE; 2020. pp. 979-83.

45. Liu J, Wang X. Tomato diseases and pests detection based on improved Yolo V3 convolutional neural network. Front Plant Sci 2020;11:898.

46. Wang Q, Qi F, Sun M, et al. Identification of tomato disease types and detection of infected areas based on deep convolutional neural networks and object detection techniques. Comput Intell Neurosci 2019;2019.

47. Ranjana P, Reddy JPK, Manoj JB, Sathvika K. Plant Leaf Disease Detection Using Mask R-CNN. In: Hu Y, Tiwari S, Trivedi MC, Mishra KK, editors. Ambient Communications and Computer Systems. Singapore: Springer Nature; 2022. pp. 303-14.

48. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. IEEE Trans Pattern Anal Mach Intell 2020;42:386-97.

49. Citrus: identifying diseases and disorders of leaves and twigs, UC IPM. Available from: https://ipm.ucanr.edu/PMG/C107/m107bpleaftwigdis.html. [Last accessed on 17 Oct 2023].

50. Su H, Wen G, Xie W, et al. Research on citrus pest and disease recognition method in Guangxi based on regional convolutional neural network model. Southwest China Journal of Agricultural Sciences 2020;33:805-10.

51. Uijlings JR, Van De Sande KE, Gevers T, Smeulders AW. Selective search for object recognition. Int J Comput Vis 2013;104:154-71.

52. Dhiman P, Kukreja V, Manoharan P, et al. A novel deep learning model for detection of severity level of the disease in citrus fruits. Electronics 2022;11:495.

53. Dai F, Wang F, Yang D, et al. Detection method of citrus psyllids with field high-definition camera based on improved cascade region-based convolution neural networks. Front Plant Sci 2022;12:3136.

54. Syed-Ab-Rahman SF, Hesamian MH, Prasad M. Citrus disease detection and classification using end-to-end anchor-based deep learning model. Appl Intell 2022;52:927-38.

55. Uğuz S, Şikaroğlu G, Yağız A. Disease detection and physical disorders classification for citrus fruit images using convolutional neural network. Food Measure 2023;17:2353-62.

56. Song C, Wang C, Yang Y. Automatic detection and image recognition of precision agriculture for citrus diseases. In: 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE). IEEE; 2020. pp. 187-90.

57. Qiu RZ, Chen SP, Chi MX, et al. An automatic identification system for citrus greening disease (Huanglongbing) using a YOLO convolutional neural network. Front Plant Sci 2022;13:1002606.

58. Dananjayan S, Tang Y, Zhuang J, Hou C, Luo S. Assessment of state-of-the-art deep learning based citrus disease detection techniques using annotated optical leaf images. Comput Electron Agr 2022;193:106658.

59. da Silva JC, Silva MC, Luz EJ, Delabrida S, Oliveira RA. Using mobile edge AI to detect and map diseases in citrus orchards. Sensors 2023;23:2165.

60. Kundu N, Rani G, Dhaka VS, et al. Disease detection, severity prediction, and crop loss estimation in maize crop using deep learning. Artificial Intelligence in Agriculture 2022;6:276-91.

61. Kumar MS, Ganesh D, Turukmane AV, Batta U, Sayyadliyakat KK. Deep convolution neural network based solution for detecting plant diseases. J Pharm Negat Result 2022:464-71.

62. He J, Liu T, Li L, Hu Y, Zhou G. MFaster r-CNN for maize leaf diseases detection based on machine vision. Arab J Sci Eng 2023;48:1437-49.

63. Pillay N, Gerber M, Holan K, Whitham SA, Berger DK. Quantifying the severity of common rust in maize using Mask R-CNN. In: Artificial Intelligence and Soft Computing: 20th International Conference, ICAISC 2021, Virtual Event, June 21-23, 2021, Proceedings, Part I 20. Springer; 2021. pp. 202-13.

64. Gerber M, Pillay N, Holan K, Whitham SA, Berger DK. Automated hyper-parameter tuning of a mask R-CNN for quantifying common rust severity in maize. In: 2021 International Joint Conference on Neural Networks (IJCNN). IEEE; 2021. pp. 1-7.

65. Stewart EL, Wiesner-Hanks T, Kaczmar N, et al. Quantitative phenotyping of Northern leaf blight in UAV images using deep learning. Remote Sensing 2019;11:2209.

66. Li Y, Sun S, Zhang C, Yang G, Ye Q. One-stage disease detection method for maize leaf based on multi-scale feature fusion. Appl Sci 2022;12:7960.

67. Ahmad A, Aggarwal V, Saraswat D, El Gamal A, Johal G. Deep learning-based disease identification and severity estimation tool for tar spot in corn. In: 2022 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers; 2022. p. 1.

68. Jocher G, Chaurasia A, Stoken A, et al. ultralytics/yolov5: v7.0 - YOLOv5 SOTA realtime instance segmentation. Zenodo; 2022.

69. Austria YC, Mirabueno MCA, Lopez DJD, et al. EZM-AI: a YOLOv5 machine vision inference approach of the Philippine corn leaf diseases detection system. In: 2022 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET). IEEE; 2022. pp. 1-6.

70. Kiyohara T, Tokushige Y. Inoculation experiments of a nematode, Bursaphelenchus sp., onto pine trees. J Japan For Res 1971;53:210-18.

71. Ryss AY, Kulinich OA, Sutherland JR. Pine wilt disease: a short review of worldwide research. For Stud China 2011;13:132-8.

72. Wu K, Zhang J, Yin X, Wen S, Lan Y. An improved YOLO model for detecting trees suffering from pine wilt disease at different stages of infection. Remote Sens Lett 2023;14:114-23.

73. Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks. In: International conference on machine learning. PMLR; 2019. pp. 6105-14.

74. Misra D. Mish: a self regularized non-monotonic activation function. arXiv preprint arXiv:1908.08681; 2019.

75. Zhu X, Wang R, Shi W, Yu Q, Li X, Chen X. Automatic detection and classification of dead nematode-infested pine wood in stages based on YOLO v4 and GoogLeNet. Forests 2023;14:601.

76. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. pp. 1-9.

77. Sun Z, Ibrayim M, Hamdulla A. Detection of pine wilt nematode from drone images using UAV. Sensors 2022;22:4704.

78. Woo S, Park J, Lee JY, Kweon IS. CBAM: convolutional block attention module. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, editors. Computer Vision-ECCV 2018. Cham: Springer International Publishing; 2018. pp. 3-19.

79. Gong H, Ding Y, Li D, Wang W, Li Z. Recognition of pine wood affected by pine wilt disease based on YOLOv5. In: 2022 China Automation Congress (CAC). IEEE; 2022. pp. 4753-7.

80. Deng X, Tong Z, Lan Y, Huang Z. Detection and location of dead trees with pine wilt disease based on deep learning and UAV remote sensing. AgriEngineering 2020;2:294-307.

81. Hu G, Zhu Y, Wan M, et al. Detection of diseased pine trees in unmanned aerial vehicle images by using deep convolutional neural networks. Geocarto Int 2022;37:3520-39.

82. Hu G, Wang T, Wan M, Bao W, Zeng W. UAV remote sensing monitoring of pine forest diseases based on improved Mask R-CNN. Int J Remote Sens 2022;43:1274-305.

83. Qin B, Sun F, Shen W, Dong B, Ma S, et al. Deep learning-based pine nematode trees’ identification using multispectral and visible UAV imagery. Drones 2023;7:183.

84. Park HG, Yun JP, Kim MY, Jeong SH. Multichannel object detection for detecting suspected trees with pine wilt disease using multispectral drone imagery. IEEE J Sel Top Appl Earth Observations Remote Sensing 2021;14:8350-8.

85. Sedivy EJ, Wu F, Hanzawa Y. Soybean domestication: the origin, genetic architecture and molecular bases. New Phytol 2017;214:539-53.

86. Zhang K, Wu Q, Chen Y. Detecting soybean leaf disease from synthetic image using multi-feature fusion faster R-CNN. Comput Electron Agr 2021;183:106064.

87. Xin M, Wang Y. An image recognition algorithm of soybean diseases and insect pests based on migration learning and deep convolution network. In: 2020 International Wireless Communications and Mobile Computing (IWCMC); 2020. pp. 1977-80.

88. Li H, Shi H, Du A, et al. Symptom recognition of disease and insect damage based on Mask R-CNN, wavelet transform, and F-RNet. Front Plant Sci 2022;13:922797.

89. Soeb MJA, Jubayer MF, Tarin TA, et al. Tea leaf disease detection and identification based on YOLOv7 (YOLO-T). Sci Rep 2023;13:6078.

90. Bao W, Fan T, Hu G, Liang D, Li H. Detection and identification of tea leaf diseases based on AX-RetinaNet. Sci Rep 2022;12:2183.

91. Lin J, Bai D, Xu R, Lin H. TSBA-YOLO: an improved tea diseases detection model based on attention mechanisms and feature fusion. Forests 2023;14:619.

92. Lee SH, Wu CC, Chen SF. Development of image recognition and classification algorithm for tea leaf diseases using convolutional neural network. In: 2018 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers; 2018. pp. 1-7.

93. Liu S, Huang D, Wang Y. Receptive field block net for accurate and fast object detection. In: Proceedings of the European conference on computer vision (ECCV); 2018. pp. 385-400.

94. Bao W, Zhu Z, Hu G, Zhou X, Zhang D, Yang X. UAV remote sensing detection of tea leaf blight based on DDMA-YOLO. Comput Electron Agr 2023;205:107637.

95. Dwivedi R, Dey S, Chakraborty C, Tiwari S. Grape disease detection network based on multi-task learning and attention features. IEEE Sensors J 2021;21:17573-80.

96. Xie X, Ma Y, Liu B, He J, Li S, Wang H. A deep-learning-based real-time detector for grape leaf diseases using improved convolutional neural networks. Front Plant Sci 2020;11:751.

97. Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition; 2001.

98. Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05); 2005. pp. 886-93.

99. Felzenszwalb P, McAllester D, Ramanan D. A discriminatively trained, multiscale, deformable part model. In: 2008 IEEE conference on computer vision and pattern recognition; 2008. pp. 1-8.

100. Felzenszwalb PF, Girshick RB, McAllester D, Ramanan D. Object detection with discriminatively trained part-based models. IEEE Trans Pattern Anal Mach Intell 2010;32:1627-45.

101. Ott P, Everingham M. Shared parts for deformable part-based models. In: CVPR 2011; 2011. pp. 1513-20.

102. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM 2017;60:84-90.

103. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015;115:211-52.

104. Everingham M, Van Gool L, Williams CKI, Winn J, Zisserman A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) results. Available from: http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

105. Dai J, Li Y, He K, Sun J. R-FCN: object detection via region-based fully convolutional networks. Advances in neural information processing systems. Curran Associates, Inc; 2016;29.

106. Cai Z, Vasconcelos N. Cascade R-CNN: delving into high quality object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2018. pp. 6154-62.

107. Lin TY, Maire M, Belongie S, et al. Microsoft COCO: common objects in context. In: Computer Vision-ECCV 2014: 13th European Conference; 2014. pp. 740-55.

108. Pang J, Chen K, Shi J, Feng H, Ouyang W, Lin D. Libra R-CNN: towards balanced learning for object detection. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019. pp. 821-30.

109. Lin TY, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2017. pp. 2117-25.

110. Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767; 2018.

111. Wang K, Liew JH, Zou Y, Zhou D, Feng J. PANet: few-shot image semantic segmentation with prototype alignment. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019. pp. 9196-205.

112. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. pp. 779-88.

113. Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. pp. 6517-25.

114. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning. PMLR; 2015. pp. 448-56.

115. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. pp. 770-8.

116. Wang CY, Liao HYM, Wu YH, Chen PY, Hsieh JW, Yeh IH. CSPNet: a new backbone that can enhance learning capability of CNN. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2020. pp. 1571-80.

117. Wang CY, Bochkovskiy A, Liao HYM. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2023. pp. 7464-75.

118. Lin TY, Goyal P, Girshick R, He K, Dollár P. Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision; 2017. pp. 2980-8.

119. Law H, Deng J. CornerNet: detecting objects as paired keypoints. In: Computer Vision — ECCV 2018. Cham: Springer International Publishing; 2018. pp. 765-81.

120. Duan K, Bai S, Xie L, Qi H, Huang Q, Tian Q. CenterNet: keypoint triplets for object detection. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV); 2019. pp. 6568-77.

121. Vaswani A, Shazeer N, Parmar N, et al. Attention is All You Need; 2017. Available from: https://arxiv.org/pdf/1706.03762.pdf. [Last accessed on 17 Oct 2023].

122. Carion N, Massa F, Synnaeve G, Usunier N, Kirillov A, Zagoruyko S. End-to-end object detection with transformers. In: Computer Vision-ECCV 2020: 16th European Conference; 2020. pp. 213-29.

123. Zhang H, Li F, Liu S, et al. DINO: DETR with improved denoising anchor boxes for end-to-end object detection. arXiv preprint arXiv:2203.03605; 2022.

124. Zhu X, Su W, Lu L, Li B, Wang X, Dai J. Deformable DETR: deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159; 2020.

125. Huang Q, Wang D, Dong Z, et al. CoDeNet: efficient deployment of input-adaptive object detection on embedded FPGAs. In: The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays; 2021. pp. 206-16.

126. Xu Y, Chen Q, Kong S, et al. Real-time object detection method of melon leaf diseases under complex background in greenhouse. J Real-Time Image Pr 2022;19:985-95.

127. Sanga S, Mero V, Machuve D, Mwanganda D. Mobile-based deep learning models for banana diseases detection. arXiv preprint arXiv:2004.03718; 2020.

128. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. pp. 2818-26.

129. Agrio. Available from: https://agrio.app/. [Last accessed on 17 Oct 2023].

130. Plantix. Available from: https://plantix.net/en/. [Last accessed on 17 Oct 2023].

131. Siddiqua A, Kabir MA, Ferdous T, Ali IB, Weston LA. Evaluating plant disease detection mobile applications: Quality and limitations. Agronomy 2022;12:1869.

132. Rezatofighi H, Tsoi N, Gwak J, Sadeghian A, Reid I, et al. Generalized intersection over union: A metric and a loss for bounding box regression. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019. pp. 658-66.

133. Zheng Z, Wang P, Liu W, Li J, Ye R, Ren D. Distance-IoU loss: faster and better learning for bounding box regression. AAAI 2020;34:12993-3000.

134. Fitt BD, Huang YJ, van den Bosch F, West JS. Coexistence of related pathogen species on arable crops in space and time. Annu Rev Phytopathol 2006;44:163-82.

135. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, et al. Language models are few-shot learners. Advances in neural information processing systems 2020;33:1877-901.

136. Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I. Language models are unsupervised multitask learners. Available from: https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf. [Last accessed on 17 Oct 2023].

137. Wang XA, Tang J, Whitty M. Data-centric analysis of on-tree fruit detection: Experiments with deep learning. Comput Electron Agr 2022;194:106748.

138. Kirillov A, Mintun E, Ravi N, et al. Segment anything. arXiv preprint arXiv:2304.02643; 2023.

139. Mu D, Sun W, Xu G, Li W. Random blur data augmentation for scene text recognition. IEEE Access 2021;9:136636-46.

140. Atienza R. Data augmentation for scene text recognition. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); 2021. pp. 1561-70.

141. Zhu Z, Luo Y, Qi G, Meng J, Li Y, et al. Remote sensing image defogging networks based on dual self-attention boost residual octave convolution. Remote Sensing 2021;13:3104.

142. Wang YK, Fan CT. Single image defogging by multiscale depth fusion. IEEE Trans Image Process 2014;23:4826-37.

143. Liang W, Long J, Li KC, Xu J, Ma N, Lei X. A fast defogging image recognition algorithm based on bilateral hybrid filtering. ACM Trans Multimed Comput Commun Appl 2021;17:410.

144. Zeng Z, Wang Z, Wang Z, Zheng Y, Chuang Y, Satoh S. Illumination-adaptive person re-identification. IEEE Trans Multimedia 2020;22:3064-74.

145. Liu K, Ye Z, Guo H, Cao D, Chen L, Wang F. FISS GAN: A generative adversarial network for foggy image semantic segmentation. IEEE/CAA J Autom Sinica 2021;8:1428-39.

146. Jeong Y, Choi H, Kim B, Gwon Y. DefogGAN: predicting hidden information in the StarCraft fog of war with generative adversarial nets. AAAI 2020;34:4296-303.

147. Liu W, Yao R, Qiu G. A physics based generative adversarial network for single image defogging. Image Vision Comput 2019;92:103815.

148. Ma R, Shen X, Zhang S, Torres JM. Single image defogging algorithm based on conditional generative adversarial network. Math Probl Eng 2020;2020:1-8.

149. Liu W, Ren G, Yu R, Guo S, Zhu J, Zhang L. Image-adaptive YOLO for object detection in adverse weather conditions. AAAI 2022;36:1792-800.

150. Hsu HK, Yao CH, Tsai YH, Hung WC, Tseng HY, et al. Progressive domain adaptation for object detection. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision; 2020. pp. 749-57.

151. Li YJ, Dai X, Ma CY, Liu YC, Chen K, et al. Cross-domain adaptive teacher for object detection. In: IEEE Winter Conference on Applications of Computer Vision (WACV); 2022. pp. 738-46.

152. Fuentes A, Yoon S, Kim T, Park DS. Open set self and across domain adaptation for tomato disease recognition with deep learning techniques. Front Plant Sci 2021;12:2872.

153. Wu X, Fan X, Luo P, Choudhury SD, Tjahjadi T, Hu C. From laboratory to field: unsupervised domain adaptation for plant disease recognition in the wild. Plant Phenomics 2023;5:0038.

154. Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In: Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part IV 11. Springer; 2010. pp. 213-26.

155. Peng X, Usman B, Kaushik N, Wang D, Hoffman J, Saenko K. VisDA: a synthetic-to-real benchmark for visual domain adaptation. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); 2018. pp. 2102-25.

Intelligence & Robotics
ISSN 2770-3541 (Online)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/
