High-throughput soybean seeds phenotyping with convolutional neural networks and transfer learning

Plant Methods - Volume 17 - Pages 1-17 - 2021
Si Yang1,2, Lihua Zheng1,3, Peng He4, Tingting Wu5, Shi Sun5, Minjuan Wang1,6,2
1College of Information and Electrical Engineering, China Agricultural University, Beijing, China
2Key Laboratory of Agricultural Informatization Standardization, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing, China
3Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing, China
4College of Information Engineering, Northwest A&F University, Yangling, China
5Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing, China
6College of Information Science and Engineering, Shandong Agriculture and Engineering University, Jinan, China

Abstract

Effective soybean seed phenotyping demands accurate morphological parameters at large scale. Traditional manual acquisition of soybean seed morphological phenotype information is error-prone and time-consuming, and is therefore not feasible for large-scale collection. Segmentation of individual soybean seeds is the prerequisite step for obtaining phenotypic traits such as seed length and seed width. Nevertheless, traditional image-based methods for high-throughput soybean seed phenotyping are neither robust nor practical. Although deep learning-based algorithms can achieve accurate training and strong generalization, they require a large amount of ground-truth data, which is often the limiting step. We present a novel synthetic image generation and augmentation method based on domain randomization. Our method automatically synthesizes a large labeled image dataset to train an instance segmentation network for high-throughput soybean seed segmentation. This markedly reduces the cost of manual annotation and facilitates the preparation of the training dataset, and the convolutional neural network can be trained purely on this synthetic dataset while still achieving good performance. For training Mask R-CNN, we propose a transfer learning method that significantly reduces computing costs by fine-tuning the pre-trained model weights. We demonstrate the robustness and generalization ability of our method by analyzing results on synthetic test datasets of different resolutions and on a real-world soybean seed test dataset. The experimental results show that the proposed method achieves effective segmentation of individual soybean seeds and efficient calculation of the morphological parameters of each seed, and that the approach is practical for high-throughput object instance segmentation and high-throughput seed phenotyping.
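To make the two ideas in the abstract concrete, the sketch below illustrates, under stated assumptions, (1) a domain-randomization style synthesis step that pastes seed cut-outs onto backgrounds and obtains instance masks for free, and (2) measuring seed length and width from a predicted binary mask with a rotated minimum-area rectangle. This is not the authors' released code; the file paths, the pixel-to-millimetre calibration factor, and all random ranges are illustrative assumptions, and OpenCV/NumPy stand in for whatever tooling the paper actually used.

```python
"""Illustrative sketch of synthetic labelled-image generation and seed
length/width measurement. Paths, parameters and the mm-per-pixel scale
are assumptions, not values from the paper."""
import glob
import random

import cv2
import numpy as np


def synthesize_image(seed_paths, background, n_seeds=30):
    """Paste randomly rotated and scaled seed cut-outs (RGBA PNGs) onto a
    background image; return the composite plus one binary mask per seed.
    Overlaps between seeds are ignored for simplicity."""
    canvas = background.copy()
    h, w = canvas.shape[:2]
    masks = []
    for _ in range(n_seeds):
        seed = cv2.imread(random.choice(seed_paths), cv2.IMREAD_UNCHANGED)  # BGRA cut-out
        scale = random.uniform(0.7, 1.3)
        seed = cv2.resize(seed, None, fx=scale, fy=scale)
        sh, sw = seed.shape[:2]
        rot = cv2.getRotationMatrix2D((sw / 2, sh / 2), random.uniform(0, 360), 1.0)
        seed = cv2.warpAffine(seed, rot, (sw, sh))
        if sw >= w or sh >= h:
            continue  # cut-out larger than background: skip
        x = random.randint(0, w - sw)
        y = random.randint(0, h - sh)
        alpha = seed[:, :, 3] > 0
        canvas[y:y + sh, x:x + sw][alpha] = seed[:, :, :3][alpha]  # composite
        mask = np.zeros((h, w), dtype=np.uint8)
        mask[y:y + sh, x:x + sw][alpha] = 255  # ground-truth instance mask, for free
        masks.append(mask)
    return canvas, masks


def measure_seed(mask, mm_per_pixel=0.05):
    """Seed length/width (mm) from one binary instance mask via the rotated
    minimum-area rectangle; mm_per_pixel is an assumed calibration factor."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    (_, _), (rw, rh), _ = cv2.minAreaRect(c)
    return max(rw, rh) * mm_per_pixel, min(rw, rh) * mm_per_pixel


if __name__ == "__main__":
    bg = cv2.imread("backgrounds/tray.png")  # hypothetical paths
    image, masks = synthesize_image(glob.glob("seeds/*.png"), bg)
    print(measure_seed(masks[0]))
```

Because every pasted cut-out yields its own mask at synthesis time, the instance annotations needed to train a network such as Mask R-CNN come at no labelling cost, which is the point the abstract makes about avoiding manual annotation.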
