1.
Cancers (Basel) ; 15(3)2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36765573

ABSTRACT

BACKGROUND: Aberrant DNA methylation is an early event during tumorigenesis. In the present study, we aimed to construct a methylation-based diagnostic tool using urine sediment for the detection of urothelial bladder carcinoma (UBC), and to improve the diagnostic performance of the model by incorporating single-nucleotide polymorphism (SNP) sites. METHODS: A three-stage analysis was carried out to construct the model and evaluate its diagnostic performance. In stage I, two small cohorts from Xiangya Hospital were recruited to validate the collected methylation biomarkers and identify their informative regions. In stage II, proof-of-concept cohorts from multiple centers in Hunan were recruited to construct a diagnostic tool. In stage III, a blinded cohort of patients with suspected UBC was recruited from a single center in Beijing to further test the robustness of the model. RESULTS: In stage I, NRN1 alone exhibited the highest AUC compared with six other biomarkers and a Random Forest model. At the optimal cutoff value of 5.16, the single NRN1 biomarker yielded a sensitivity of 0.93 and a specificity of 0.97. In stage II, the Random Forest algorithm was applied to construct a diagnostic tool consisting of NRN1, TERT C228T and FGFR3 p.S249C. The tool exhibited AUC values of 0.953, 0.946 and 0.951 in the training, test and combined cohorts, respectively. At the optimal cutoff value, the model achieved a sensitivity of 0.871 and a specificity of 0.947. In stage III, the diagnostic tool achieved good discrimination in the external validation cohort, with an overall AUC of 0.935, a sensitivity of 0.864 and a specificity of 0.895. Additionally, the model exhibited superior sensitivity and comparable specificity relative to conventional cytology and FISH. CONCLUSIONS: The diagnostic tool exhibited highly specific and robust performance and may serve as an alternative approach for the detection of UBC.
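The panel-building step can be sketched with scikit-learn's Random Forest on synthetic data; the feature distributions, effect sizes and train/test split below are invented for illustration and are not the study's data.

```python
# Illustrative sketch (not the authors' code): a three-feature Random Forest
# panel analogous to the one described (NRN1 methylation level plus TERT C228T
# and FGFR3 p.S249C mutation status), trained and scored on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)                  # 1 = UBC, 0 = control (synthetic)
# Hypothetical feature generation: cases tend toward higher NRN1 methylation
# and are more likely to carry either hotspot mutation.
nrn1 = rng.normal(3 + 4 * labels, 1.5)          # methylation signal
tert = rng.random(n) < (0.05 + 0.4 * labels)    # TERT C228T present?
fgfr3 = rng.random(n) < (0.05 + 0.3 * labels)   # FGFR3 p.S249C present?
X = np.column_stack([nrn1, tert, fgfr3])

train, test = np.arange(n) < 300, np.arange(n) >= 300
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[train], labels[train])
scores = clf.predict_proba(X[test])[:, 1]       # probability of the UBC class
auc = roc_auc_score(labels[test], scores)
```

Thresholding `scores` at a chosen cutoff then yields the sensitivity/specificity trade-off the abstract reports.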

2.
Front Plant Sci ; 13: 885167, 2022.
Article in English | MEDLINE | ID: mdl-35909783

ABSTRACT

The measurement of grapevine phenotypic parameters is crucial for quantifying crop traits. However, individual differences among grape bunches make it challenging to measure their characteristic parameters accurately. This study therefore explores a method for estimating grape feature parameters from point cloud information: the grape point cloud is segmented by filtering and a region-growing algorithm, and the complete grape point cloud model is registered by an improved iterative closest point (ICP) algorithm. After the phenotypic size characteristics of the model were estimated, the grape bunch surface was reconstructed using the Poisson algorithm. In a comparative analysis against four existing methods (geometric model, 3D convex hull, 3D alpha-shape, and voxel-based), the estimates of the proposed algorithm were the closest to the measured parameters. Experimental data show that the coefficient of determination (R²) of the Poisson reconstruction algorithm is 0.9915, which is 0.2306 higher than that of the existing alpha-shape algorithm (R² = 0.7609). The method proposed in this study therefore provides a strong basis for the quantification of grape traits.
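The registration step can be illustrated with a minimal point-to-point ICP in NumPy. The paper uses an improved ICP variant; this sketch shows only the classic nearest-neighbour/SVD loop on a synthetic cloud.

```python
# Minimal point-to-point ICP: repeatedly match nearest neighbours, then solve
# the best rigid transform in closed form (Kabsch/SVD). Synthetic data only.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Align src to dst with unknown correspondences via nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]          # brute-force correspondences
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 3))
theta = 0.2                                       # modest initial misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
target = cloud @ Rz.T + np.array([0.2, -0.1, 0.05])
aligned = icp(cloud, target)
err = np.abs(aligned - target).max()
```

Because the two clouds here are identical point sets, the loop converges to a near-exact alignment; real scans need the filtering/denoising the paper describes first.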

3.
Cancers (Basel) ; 14(14)2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35884598

ABSTRACT

BACKGROUND: To improve the selection of patients for ureteroscopy, avoid excessive testing and reduce costs, we aimed to develop and validate a diagnostic urine assay for upper tract urothelial carcinoma (UTUC). METHODS: In this cohort study, we recruited 402 patients from six Hunan hospitals who underwent ureteroscopy for hematuria, including 95 patients with UTUC and 307 patients with non-UTUC findings. Midstream morning urine samples were collected before ureteroscopy and surgery. DNA was extracted, and qPCR was used to analyze mutations in TERT and FGFR3 and the methylation of NRN1. In the training set, the random forest algorithm was used to build an optimal panel. Lastly, a Beijing cohort (n = 76) was used to validate the panel. RESULTS: The panel combining the methylation marker with the mutation markers achieved an AUC of 0.958 (95% CI: 0.933-0.975), with a sensitivity of 91.58% and a specificity of 94.79%. The panel showed favorable diagnostic value for UTUC vs. other malignant tumors (AUC = 0.920) and for UTUC vs. benign disease (AUC = 0.975). Furthermore, combining the panel with age yielded satisfactory results: 93.68% sensitivity, 94.44% specificity, AUC = 0.970 and NPV = 98.6%. In external validation, the model showed an AUC of 0.971, a sensitivity of 95.83% and a specificity of 92.31%. CONCLUSIONS: A novel diagnostic model for assessing the risk of UTUC in patients with hematuria was developed, which could reduce the need for invasive examinations. Combining NRN1 methylation and gene mutations (FGFR3 and TERT) with age resulted in a validated, accurate prediction model.
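The reported metrics can be computed mechanically from scores and labels. The sketch below is a generic NumPy implementation of sensitivity, specificity and AUC at a cutoff on synthetic scores; only the 95/307 class sizes are taken from the abstract, everything else is invented.

```python
# Generic diagnostic metrics on synthetic test scores (not the study's data).
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when calling positives at scores >= cutoff."""
    pred = scores >= cutoff
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """AUC as the probability a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * ties

rng = np.random.default_rng(2)
labels = np.repeat([1, 0], [95, 307])   # UTUC vs non-UTUC counts from the study
scores = np.where(labels == 1, rng.normal(2, 1, 402), rng.normal(0, 1, 402))
se, sp = sens_spec(scores, labels, cutoff=1.0)
a = auc(scores, labels)
```

Sweeping `cutoff` over the score range traces the ROC curve that the reported AUCs summarize.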

4.
Sensors (Basel) ; 22(14)2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35891034

ABSTRACT

When performing robotic automatic sorting and assembly of multi-category hardware, existing convolutional neural network visual recognition algorithms suffer from high computing power consumption, low recognition efficiency, and high rates of missed and false detections. This paper proposes a novel, efficient convolutional neural network algorithm for recognizing multi-category aliased hardware. Building on SSD, the algorithm uses ResNet-50 instead of VGG16 as the backbone feature extraction network and integrates two attention mechanisms, ECA-Net and an Improved Spatial Attention Block (ISAB), to improve the ability to learn and extract target features. The weighted features are then passed to extra feature layers to build an improved SSD algorithm. Finally, to compare the performance of the novel algorithm with existing ones, three kinds of hardware of different sizes were chosen to constitute an aliasing scene simulating an industrial site, and comparative experiments were conducted. The experimental results show that the novel algorithm achieves an mAP of 98.20% and 78 FPS, outperforming Faster R-CNN, YOLOv4, YOLOXs, EfficientDet-D1, and the original SSD in comprehensive performance. The proposed algorithm can improve the efficiency of robotic sorting and assembly of multi-category hardware.


Subject(s)
Algorithms; Neural Networks, Computer; Computers
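The ECA-style channel attention used above can be sketched in NumPy: global average pooling, a small cross-channel 1-D convolution, and a sigmoid gate. Real ECA learns the convolution weights and adapts the kernel size to the channel count; the fixed averaging kernel here is a stand-in.

```python
# NumPy sketch of ECA-style channel attention (illustrative, not the paper's
# implementation): pool -> 1-D cross-channel mixing -> sigmoid -> reweight.
import numpy as np

def eca(feature_map, k=3):
    """feature_map: (C, H, W). Returns the channel-reweighted feature map."""
    pooled = feature_map.mean(axis=(1, 2))            # (C,) global average pool
    kernel = np.full(k, 1.0 / k)                      # stand-in for learned 1-D conv weights
    mixed = np.convolve(pooled, kernel, mode="same")  # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-mixed))               # sigmoid channel weights in (0, 1)
    return feature_map * gate[:, None, None]

x = np.random.default_rng(3).random((16, 8, 8))       # a toy 16-channel feature map
y = eca(x)
```

In the improved SSD described above, such gated feature maps would then feed the extra detection layers.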
5.
Front Plant Sci ; 13: 868745, 2022.
Article in English | MEDLINE | ID: mdl-35651761

ABSTRACT

As one of the representative algorithms of deep learning, the convolutional neural network (CNN), with its advantages of local perception and parameter sharing, has developed rapidly. CNN-based detection technology has been widely used in computer vision, natural language processing, and other fields. Fresh fruit production is an important socioeconomic activity, and CNN-based deep learning detection technology has been successfully applied to its key links. To the best of our knowledge, this review is the first to cover the whole production process of fresh fruit. We first introduce the network architecture and implementation principle of CNNs and describe the training process of a CNN-based deep learning model in detail. A large number of articles were investigated that have made breakthroughs, using CNN-based deep learning detection technology, in response to challenges in important links of fresh fruit production, including fruit flower detection, fruit detection, fruit harvesting, and fruit grading. Object detection based on CNN deep learning is elaborated from data acquisition to model training, and different CNN-based detection methods are compared for each link of fresh fruit production. The results of this review show that improved CNN deep learning models can realize their full detection potential when combined with the characteristics of each link of fruit production. They also imply that CNN-based detection may overcome the challenges posed by environmental issues, the exploration of new areas, and the execution of multiple tasks in fresh fruit production in the future.
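The local perception and parameter sharing attributed to CNNs can be made concrete with a minimal valid-mode 2-D convolution (cross-correlation, as CNN frameworks implement it): one small kernel is reused at every spatial position.

```python
# Minimal valid-mode 2-D convolution in NumPy. The same tiny kernel slides
# over every window (parameter sharing) and only sees a local patch at a time
# (local perception).
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # one shared kernel applied to the local patch at (i, j)
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)      # toy "image" with constant gradient
edge = np.array([[1.0, -1.0]])           # horizontal difference kernel
response = conv2d(img, edge)
```

On this toy image every horizontal neighbour differs by exactly 1, so the difference kernel produces a constant response map; deep learning frameworks stack many such (learned) kernels per layer.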

6.
Front Robot AI ; 8: 626989, 2021.
Article in English | MEDLINE | ID: mdl-34239899

ABSTRACT

Reliable and robust fruit-detection algorithms for nonstructural environments are essential for the efficient use of harvesting robots. The pose of a fruit is crucial for guiding a robot to approach the target for collision-free picking. To achieve accurate picking, this study investigates an approach to detect a fruit and estimate its pose. First, the state-of-the-art mask region-based convolutional neural network (Mask R-CNN) is deployed to segment binocular images and output the mask image of the target fruit. Next, the grape point cloud extracted from the images is filtered and denoised to obtain an accurate grape point cloud. Finally, the accurate grape point cloud is used with the RANSAC algorithm to fit a grape cylinder model, and the axis of the cylinder model is used to estimate the pose of the grape. A dataset was acquired in a vineyard to evaluate the performance of the proposed approach in a nonstructural environment. The fruit detection results on 210 test images show that the average precision, recall, and intersection over union (IoU) are 89.53%, 95.33%, and 82.00%, respectively. Detection and point cloud segmentation for each grape took approximately 1.7 s. The demonstrated performance indicates that the developed method can be applied to grape-harvesting robots.
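The RANSAC fitting step can be illustrated with a simpler robust model, a 3-D line standing in for the cylinder's axis; a full cylinder fit adds a radius but follows the same sample-score-repeat pattern. Points and tolerances below are synthetic, not the paper's data.

```python
# RANSAC sketch: robustly recover a 3-D line (an axis) from points
# contaminated with outliers, by repeated minimal sampling and inlier counting.
import numpy as np

def ransac_line(points, iters=200, tol=0.05, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers, best = None, None
    for _ in range(iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        d = d / np.linalg.norm(d)
        # perpendicular distance of every point to the candidate line
        v = points - a
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (a, d)
    return best, best_inliers

rng = np.random.default_rng(4)
t = rng.uniform(-1, 1, (300, 1))
axis = np.array([0.0, 0.0, 1.0])
line_pts = t * axis + rng.normal(0, 0.01, (300, 3))   # points near the z-axis
outliers = rng.uniform(-1, 1, (60, 3))                # clutter, e.g. leaves
pts = np.vstack([line_pts, outliers])
(_, direction), inliers = ransac_line(pts, rng=rng)
```

The winning direction approximates the true axis (up to sign), which is what the paper reads off as the grape's pose.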

7.
Prostate Cancer Prostatic Dis ; 24(1): 49-57, 2021 03.
Article in English | MEDLINE | ID: mdl-32873917

ABSTRACT

Benign prostatic hyperplasia (BPH) and the associated lower urinary tract symptoms are common clinical concerns affecting aging men all over the world. The underlying molecular and cellular mechanisms remain elusive. Over the past few years, a number of animal models of BPH have been established, including spontaneous, BPH-induction, xenograft, metabolic syndrome, mechanical obstruction, and transgenic models, which may provide useful tools to fill these critical knowledge gaps. In this review, we therefore outline the present status of animal models of BPH, comparing their pros and cons with respect to their ability to mimic the etiological, histological, and clinical hallmarks of BPH, and discuss their applicability for future research.


Subject(s)
Prostatic Hyperplasia/epidemiology; Risk Assessment/methods; Animals; Disease Models, Animal; Global Health; Incidence; Male
8.
Front Plant Sci ; 11: 510, 2020.
Article in English | MEDLINE | ID: mdl-32508853

ABSTRACT

The utilization of machine vision and its associated algorithms improves the efficiency, functionality, intelligence, and remote interactivity of harvesting robots in complex agricultural environments. Machine vision and its associated emerging technologies promise huge potential in advanced agricultural applications. However, machine vision and precise positioning still face many technical difficulties, making it difficult for most harvesting robots to achieve true commercial application. This article reports the application and research progress of harvesting robots and vision technology in fruit picking. It focuses on the potential applications of vision and quantitative methods for localization, target recognition, 3D reconstruction, and fault tolerance in complex agricultural environments, and also explores fault-tolerant technology designed for use with machine vision and robotic systems. The two main methods used in fruit recognition and localization are reviewed: digital image processing technology and deep learning-based algorithms. Future challenges concerning recognition and localization success rates are identified: target recognition under illumination changes and occlusion; target tracking in dynamic, interference-laden environments; 3D target reconstruction; and fault tolerance of the vision system for agricultural robots. Finally, several open research problems specific to recognition and localization for fruit-harvesting robots are mentioned, and the latest developments and future trends of machine vision are described.

9.
Sensors (Basel) ; 17(11)2017 Nov 07.
Article in English | MEDLINE | ID: mdl-29112177

ABSTRACT

Recognition and matching of litchi fruits are critical steps for litchi-harvesting robots to successfully grasp litchi. However, the randomness of litchi growth, such as clustered growth with an uncertain number of fruits and random occlusion by leaves, branches and other fruits, makes recognition and matching of the fruit a challenge. Therefore, this study first defined mature litchi fruit as three clustered categories. An approach for recognition and matching of clustered mature litchi fruit was then developed based on litchi color images acquired by binocular charge-coupled device (CCD) color cameras. The approach mainly comprises three steps: (1) calibration of the binocular color cameras and litchi image acquisition; (2) segmentation of litchi fruits using four kinds of supervised classifiers, and recognition of the pre-defined categories of clustered litchi fruit using a pixel threshold method; and (3) matching of the recognized clustered fruit using a geometric center-based matching method. The experimental results showed that the proposed recognition method is robust to varying illumination and occlusion conditions and precisely recognizes clustered litchi fruit. Among the 432 tested clustered litchi fruits, the highest and lowest average recognition rates were 94.17% and 92.00%, obtained under sunny back-lighting with partial occlusion and sunny front-lighting without occlusion, respectively. From 50 pairs of tested images, the highest and lowest matching success rates were 97.37% and 91.96%, obtained under sunny back-lighting without occlusion and sunny front-lighting with partial occlusion, respectively.
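Step (3), geometric center-based matching, can be sketched for a rectified stereo pair: match each left-image centroid to a right-image centroid on roughly the same row with positive disparity. The row tolerance and the smallest-disparity tie-break below are illustrative choices, not the paper's exact rule.

```python
# Toy geometric-center matching for a rectified binocular pair. Coordinates
# are synthetic; the paper's calibration and segmentation steps are omitted.
import numpy as np

def match_centers(left, right, row_tol=3.0):
    """left, right: (N, 2) arrays of (row, col) centroids. Returns index pairs."""
    pairs = []
    for i, (r, c) in enumerate(left):
        # candidates on roughly the same epipolar row, with positive disparity
        row_diff = np.abs(right[:, 0] - r)
        disparity = c - right[:, 1]
        ok = (row_diff < row_tol) & (disparity > 0)
        if ok.any():
            j = np.where(ok)[0][np.argmin(disparity[ok])]  # closest-disparity candidate
            pairs.append((i, int(j)))
    return pairs

left = np.array([[10.0, 50.0], [40.0, 80.0]])
right = np.array([[10.5, 42.0], [39.0, 70.0]])   # same fruits shifted by disparity
matches = match_centers(left, right)
```

Once centers are matched, the disparity of each pair gives the fruit's depth by triangulation with the calibrated baseline.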

10.
Sensors (Basel) ; 16(12)2016 Dec 10.
Article in English | MEDLINE | ID: mdl-27973409

ABSTRACT

Automatic fruit detection and precision picking in unstructured environments have always been difficult, frontline problems in the field of harvesting robots. To realize the accurate identification of grape clusters in a vineyard, an approach for the automatic detection of ripe grapes, combining the AdaBoost framework with multiple color components, was developed using a simple vision sensor. The approach comprises three main steps: (1) a dataset of classifier training samples was obtained by capturing images of grape planting scenes with a color digital camera, extracting the effective color components for grape clusters, and then constructing the corresponding linear classification models using the threshold method; (2) based on these linear models and the dataset, a strong classifier was constructed using the AdaBoost framework; and (3) all pixels of the captured images were classified by the strong classifier, noise was eliminated by the region threshold method and morphological filtering, and the grape clusters were finally marked using the enclosing-rectangle method. Nine hundred testing samples were used to verify the constructed strong classifier, and its classification accuracy reached 96.56%, higher than that of the individual linear classification models. Moreover, 200 images captured under three different illumination conditions in the vineyard were selected as testing images for the proposed approach, and the average detection rate was as high as 93.74%. The experimental results show that the approach can partly restrain the influence of complex backgrounds such as weather conditions, leaves and changing illumination.
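Steps (1)-(2) can be sketched with a hand-rolled AdaBoost over one-dimensional threshold classifiers; the features here are synthetic stand-ins for the paper's color components, and the stump search is a generic implementation rather than the authors' code.

```python
# AdaBoost over per-feature threshold classifiers (decision stumps), mirroring
# the idea of boosting simple per-color-component linear models. Synthetic data.
import numpy as np

def train_adaboost(X, y, rounds=15):
    """X: (n, d) features, y: labels in {-1, +1}. Returns weighted stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)                       # sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for f in range(X.shape[1]):               # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(X[:, f] >= thr, sign, -sign)
                    err = w[pred != y].sum()      # weighted training error
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(X[:, f] >= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)            # upweight the mistakes
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(X[:, f] >= t, s, -s) for a, f, t, s in stumps)
    return np.sign(score)

rng = np.random.default_rng(5)
X = rng.random((200, 3))                          # pretend color components in [0, 1)
y = np.where(X[:, 0] + 0.3 * X[:, 1] > 0.7, 1, -1)   # synthetic "grape" rule
model = train_adaboost(X, y)
acc = np.mean(predict(model, X) == y)
```

The boosted combination fits the oblique decision rule far better than any single threshold on one component, which is the paper's motivation for using AdaBoost over the individual linear models.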
