ABSTRACT
BACKGROUND: To evaluate the effectiveness of computed tomographic (CT) volumetric analysis for postoperative lung function assessment and its predictive value for postoperative complications in patients who underwent segmentectomy for lung cancer. METHODS: CT scanning and pulmonary function testing were performed for 100 patients with lung cancer. CT volumetric analyses were performed with dedicated software to obtain the volume at the inspiratory phase (Vin), the mean lung density at the inspiratory phase (MLDin), the volume at the expiratory phase (Vex), and the mean lung density at the expiratory phase (MLDex). Pulmonary function test results and CT volumetric analysis results were used to predict postoperative lung function. The concordance and correlations of these values were assessed by Bland-Altman analysis and Pearson correlation analysis, respectively. Multivariate binomial logistic regression analysis was performed to assess the associations of the CT data with complication occurrence. RESULTS: Correlations between the CT scanning data and the pulmonary function test results were significant both pre- and postoperatively (0.8083 ≤ r ≤ 0.9390). Forced vital capacity (FVC), forced expiratory volume in the first second (FEV1), and the FEV1/FVC ratio estimated by CT volumetric analysis showed high concordance with those measured by pulmonary function testing. The preoperative (Vin − Vex) and (MLDex − MLDin) values were identified as predictors of postoperative complications, with hazard ratios of 5.378 and 6.524, respectively. CONCLUSIONS: CT volumetric imaging analysis has the potential to determine pre- and postoperative lung function, as well as to predict postoperative complication occurrence, in lung cancer patients undergoing pulmonary lobectomy.
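As an illustration of the agreement statistics described above (a minimal sketch, not the study's actual analysis code), the following Python snippet computes the Pearson correlation and the Bland-Altman bias with 95% limits of agreement for paired measurements; the FEV1 values and variable names are hypothetical.

import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Return mean bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired FEV1 values (litres): CT-estimated vs. measured by
# pulmonary function testing. Real data would have one pair per patient.
fev1_ct = [2.1, 1.8, 2.5, 3.0, 2.2]
fev1_pft = [2.0, 1.9, 2.4, 3.1, 2.3]

r, p = stats.pearsonr(fev1_ct, fev1_pft)
bias, lo, hi = bland_altman(fev1_ct, fev1_pft)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
print(f"Bland-Altman bias = {bias:.3f} L, 95% LoA = [{lo:.3f}, {hi:.3f}] L")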
Subject(s)
Lung Neoplasms , Postoperative Complications , Respiratory Function Tests , Humans , Lung Neoplasms/surgery , Lung Neoplasms/diagnostic imaging , Male , Female , Middle Aged , Aged , Postoperative Complications/diagnostic imaging , Tomography, X-Ray Computed/methods , Pneumonectomy/adverse effects , Lung/diagnostic imaging , Lung/physiopathology , Adult , Postoperative Period , Aged, 80 and over , Vital Capacity
ABSTRACT
The automated harvesting of strawberries brings benefits such as reduced labor costs, improved sustainability, increased productivity, less waste, and better use of natural resources. Accurate detection of strawberries in a greenhouse can assist in their effective recognition and localization during harvesting. Furthermore, the ability to detect and characterize strawberries from field images is an essential component of the breeding pipeline for selecting high-yield varieties. The existing manual examination method is error-prone and time-consuming, which makes mechanized harvesting difficult. In this work, we propose a robust architecture, named "improved Faster-RCNN," to detect strawberries in ground-level RGB images captured by a self-developed "Large Scene Camera System." The purpose of this research is to develop a fully automatic detection and plumpness-grading system for living plants under field conditions that requires no prior information about the targets. The experimental results show that the proposed method achieved an average fruit extraction accuracy of more than 86%, higher than that obtained with three other methods. This demonstrates that image processing combined with the proposed deep learning architecture is highly feasible for counting strawberries and assessing their quality in ground-level images. Additionally, this work shows that deep learning techniques can serve as invaluable tools in larger field investigation frameworks, specifically for applications involving plant phenotyping.
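To make the detection step concrete, here is a minimal Python sketch using the stock Faster R-CNN detector shipped with torchvision as a stand-in for the paper's "improved Faster-RCNN" (whose modifications the abstract does not specify); the image path and score threshold are assumptions.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained generic detector; the paper's modified architecture is not public.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("strawberry_field.jpg").convert("RGB")  # hypothetical path
with torch.no_grad():
    pred = model([to_tensor(image)])[0]  # dict with "boxes", "labels", "scores"

keep = pred["scores"] > 0.5  # confidence threshold is an assumption
print(f"{int(keep.sum())} candidate fruits detected")
for box, score in zip(pred["boxes"][keep], pred["scores"][keep]):
    print([round(v) for v in box.tolist()], f"score={score:.2f}")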
ABSTRACT
Achieving non-contact, non-destructive observation of broccoli heads is the key step toward acquiring high-throughput phenotyping information for broccoli. However, rapid segmentation and grading of broccoli heads remain difficult in many parts of the world due to low levels of equipment development. In this paper, we combine an advanced computer vision technique with a deep learning architecture to allow the acquisition of real-time, accurate information about broccoli heads. Using a private dataset with hundreds of broccoli-head images (acquired with a self-developed imaging system) under controlled conditions, a deep convolutional neural network named "Improved ResNet" was trained to separate broccoli pixels from the background. A yield estimation model was then built from the number of extracted pixels and a corresponding per-pixel weight value. Additionally, the Particle Swarm Optimization Algorithm (PSOA) and the Otsu method were applied to grade the quality of each broccoli head according to our new standard. The trained model achieved an accuracy of 0.896 on the test set for broccoli head segmentation, demonstrating the feasibility of this approach. When tested on images with different light intensities or with added noise, the model still achieved satisfactory results. Overall, our approach of training a deep learning model using low-cost imaging devices represents a means to improve broccoli breeding and the vegetable trade.
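The PSOA refinement is paper-specific, so the hedged Python sketch below shows only the generic parts of the pipeline just described: Otsu segmentation via scikit-image and a pixel-count yield estimate. The image path and the per-pixel weight calibration are made-up placeholders.

import numpy as np
from skimage import color, io
from skimage.filters import threshold_otsu

image = io.imread("broccoli_head.jpg")  # hypothetical image path
gray = color.rgb2gray(image)

t = threshold_otsu(gray)   # global Otsu threshold over grey levels
mask = gray > t            # assume the head is the brighter region

GRAMS_PER_PIXEL = 0.002    # placeholder calibration; the paper fits its own
pixels = int(mask.sum())
print(f"{pixels} head pixels -> estimated weight {pixels * GRAMS_PER_PIXEL:.1f} g")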
ABSTRACT
The number of panicles per unit area is a common indicator of rice yield and is of great significance for yield estimation, breeding, and phenotype analysis. Traditional counting methods have various drawbacks: they are slow, highly subjective, and easily perturbed by noise. To improve the accuracy of rice panicle detection and counting in the field, we developed and implemented a panicle detection and counting system based on improved region-based fully convolutional networks (R-FCN), and we used the system to automate rice-phenotype measurements. Field experiments were conducted in target areas to train and test the system, with images collected by a light rotary-wing unmanned aerial vehicle equipped with a high-definition RGB camera. The trained model achieved a precision of 0.868 on a held-out test set, which demonstrates the feasibility of this approach. The algorithm can handle the irregular edges of rice panicles, the significant differences in appearance between varieties and growth periods, the interference caused by color overlap between panicles and leaves, and the variations in illumination intensity and shading in the field. The result is more accurate and efficient recognition of rice panicles, which facilitates rice breeding. Overall, the approach of training deep learning models on increasingly large, publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a global scale.
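R-FCN itself is not bundled with common libraries such as torchvision, so the sketch below illustrates only the generic detection-to-count step that any such detector feeds: non-maximum suppression over raw boxes and scores, followed by counting the surviving detections. All tensors and the IoU threshold here are illustrative assumptions.

import torch
from torchvision.ops import nms

# Hypothetical raw detector output for one UAV image: boxes as [x1, y1, x2, y2].
boxes = torch.tensor([[10.0, 10.0, 50.0, 60.0],
                      [12.0, 11.0, 52.0, 58.0],   # near-duplicate of the first
                      [80.0, 30.0, 120.0, 90.0]])
scores = torch.tensor([0.92, 0.85, 0.77])

keep = nms(boxes, scores, iou_threshold=0.5)  # IoU threshold is an assumption
print(f"{keep.numel()} panicles counted in this image")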