1.
Article in English | MEDLINE | ID: mdl-38489169

ABSTRACT

BACKGROUND: At present, most articles have focused mainly on the diagnosis of thyroid nodules using artificial intelligence (AI), and there has been little research on the detection performance of AI for thyroid nodules. OBJECTIVE: To explore the value of a real-time AI-based computer-aided diagnosis system in the detection of thyroid nodules and to analyze the factors influencing detection accuracy. METHODS: From June 1, 2022 to December 31, 2023, 224 consecutive patients with 587 thyroid nodules were prospectively collected. Based on the detection results determined by two experienced radiologists (both with more than 15 years of experience in thyroid diagnosis), the thyroid nodule detection ability of radiologists with different experience levels (a junior radiologist with 1 year of experience and a senior radiologist with 5 years of experience in thyroid diagnosis) and of the real-time AI were compared. Logistic regression analysis was used to identify the factors influencing real-time AI detection of thyroid nodules. RESULTS: The detection rate of thyroid nodules by the real-time AI was significantly higher than that of the junior radiologist (P = 0.013), but lower than that of the senior radiologist (P = 0.001). Multivariate logistic regression analysis showed that nodule size, superior pole location, outside location (near the carotid artery), proximity to vessels, echogenicity (isoechoic, hyperechoic, mixed-echoic), morphology (not very regular, irregular), margin (unclear), and ACR TI-RADS categories 4 and 5 were significant independent influencing factors (all P < 0.05). With the combination of real-time AI and radiologists, the junior and senior radiologists increased their detection rates to 97.4% (P < 0.001) and 99.1% (P = 0.015), respectively. CONCLUSIONS: Real-time AI performs well in thyroid nodule detection and can be a useful auxiliary tool in the clinical work of radiologists.
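The factor analysis described above is a standard multivariate logistic regression; a minimal sketch of how such an analysis might be run is shown below. The file name, column names, and data layout are hypothetical illustrations, not the study's actual dataset or code.

```python
# Minimal sketch of a multivariate logistic regression on nodule features.
# All column names and the input file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nodule_features.csv")           # hypothetical data file
y = df["detected_by_ai"]                          # 1 = detected, 0 = missed
X = pd.get_dummies(
    df[["size_mm", "location", "echogenicity", "morphology", "margin", "tirads"]],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

model = sm.Logit(y, X).fit()
odds_ratios = np.exp(model.params)                # odds ratio per predictor
print(model.summary())                            # coefficients and p-values
print(odds_ratios[model.pvalues < 0.05])          # significant factors
```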

2.
EClinicalMedicine ; 67: 102391, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38274117

ABSTRACT

Background: Clinical appearance and high-frequency ultrasound (HFUS) are indispensable for diagnosing skin diseases, providing internal and external information. However, their complex combination brings challenges for primary care physicians and dermatologists. We therefore developed a deep multimodal fusion network (DMFN) model that combines analysis of clinical close-up and HFUS images for binary and multiclass classification of skin diseases. Methods: Between Jan 10, 2017, and Dec 31, 2020, the DMFN model was trained and validated using 1269 close-up and 11,852 HFUS images from 1351 skin lesions. A monomodal convolutional neural network (CNN) model was trained and validated with the same close-up images for comparison. Subsequently, we conducted a prospective, multicenter study in China. Both CNN models were tested prospectively on 422 cases from 4 hospitals and compared with the results from human raters (general practitioners, general dermatologists, and dermatologists specialized in HFUS). Performance in binary classification (benign vs. malignant) and multiclass classification (specific diagnoses of 17 types of skin diseases) was evaluated using the area under the receiver operating characteristic curve (AUC). This study is registered with www.chictr.org.cn (ChiCTR2300074765). Findings: The performance of the DMFN model (AUC, 0.876) was superior to that of the monomodal CNN model (AUC, 0.697) in binary classification (P = 0.0063); it was also better than that of general practitioners (AUC, 0.651; P = 0.0025) and general dermatologists (AUC, 0.838; P = 0.0038). By integrating close-up and HFUS images, the DMFN model attained performance nearly identical to that of dermatologists (AUC, 0.876 vs. 0.891; P = 0.0080). For multiclass classification, the DMFN model (AUC, 0.707) exhibited superior prediction performance compared with general dermatologists (AUC, 0.514; P = 0.0043) and dermatologists specialized in HFUS (AUC, 0.640; P = 0.0083). Compared with dermatologists specialized in HFUS, the DMFN model showed better or comparable performance in diagnosing 9 of the 17 skin diseases. Interpretation: The DMFN model combining analysis of clinical close-up and HFUS images exhibited satisfactory performance in binary and multiclass classification compared with dermatologists. It may be a valuable tool for general dermatologists and primary care providers. Funding: This work was supported in part by the National Natural Science Foundation of China and the Clinical Research Project of Shanghai Skin Disease Hospital.
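The described fusion of a clinical close-up branch and an HFUS branch can be pictured as two image encoders whose features are concatenated before a shared classifier. The sketch below is purely illustrative: the ResNet-18 backbones, feature sizes, and 3-channel inputs are assumptions, not the published DMFN architecture.

```python
# Illustrative two-branch multimodal fusion classifier; not the authors' DMFN code.
import torch
import torch.nn as nn
import torchvision

class TwoBranchFusion(nn.Module):
    def __init__(self, num_classes=17):
        super().__init__()
        # One encoder per modality; ResNet-18 is an arbitrary stand-in backbone.
        self.closeup_enc = torchvision.models.resnet18(weights=None)
        self.hfus_enc = torchvision.models.resnet18(weights=None)
        self.closeup_enc.fc = nn.Identity()
        self.hfus_enc.fc = nn.Identity()
        self.classifier = nn.Sequential(
            nn.Linear(512 + 512, 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, closeup, hfus):
        # Concatenate the per-modality feature vectors before classification.
        f = torch.cat([self.closeup_enc(closeup), self.hfus_enc(hfus)], dim=1)
        return self.classifier(f)

model = TwoBranchFusion()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```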

3.
Article in English | MEDLINE | ID: mdl-37721886

ABSTRACT

Image classification plays an important role in remote sensing. Earth observation (EO) has inevitably arrived in the big data era, but the high demand for computational power has become a bottleneck for analyzing large amounts of remote sensing data with sophisticated machine learning models. Exploiting quantum computing may help tackle this challenge by leveraging quantum properties. This article introduces a hybrid quantum-classical convolutional neural network (QC-CNN) that applies quantum computing to effectively extract high-level critical features from EO data for classification purposes. In addition, the adoption of amplitude encoding reduces the required quantum bit (qubit) resources. The complexity analysis indicates that the proposed model can accelerate the convolutional operation in comparison with its classical counterpart. The model's performance is evaluated on several EO benchmarks, including Overhead-MNIST, So2Sat LCZ42, PatternNet, RSI-CB256, and NaSC-TG2, using the TensorFlow Quantum platform. It achieves better performance and higher generalizability than its classical counterpart, verifying the validity of the QC-CNN model for EO data classification tasks.
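Amplitude encoding packs a length-2^n feature vector into the amplitudes of an n-qubit state, which is why it reduces qubit requirements. The toy numpy sketch below only illustrates that bookkeeping with a made-up patch; it involves no quantum hardware and is not the authors' QC-CNN implementation.

```python
# Toy illustration of amplitude encoding: a 16x16 patch (256 values) fits
# into the amplitudes of an 8-qubit state, since 2**8 = 256.
import numpy as np

patch = np.random.rand(16, 16)                # hypothetical EO image patch
vec = patch.ravel().astype(float)
state = vec / np.linalg.norm(vec)             # amplitudes must be L2-normalised

n_qubits = int(np.log2(state.size))           # 8 qubits instead of 256 inputs
assert np.isclose(np.sum(state ** 2), 1.0)    # valid quantum-state amplitudes
print(f"{state.size} features encoded on {n_qubits} qubits")
```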

4.
EClinicalMedicine ; 60: 102027, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37333662

ABSTRACT

Background: Identifying patients with clinically significant prostate cancer (csPCa) before biopsy helps reduce unnecessary biopsies and improve patient prognosis. The diagnostic performance of traditional transrectal ultrasound (TRUS) for csPCa is relatively limited. This study aimed to develop a high-performance convolutional neural network (CNN) model (P-Net) based on a TRUS video of the entire prostate and to investigate its efficacy in identifying csPCa. Methods: Between January 2021 and December 2022, this study prospectively evaluated 832 patients from four centres who underwent prostate biopsy and/or radical prostatectomy. All patients had a standardised TRUS video of the whole prostate. A two-dimensional CNN (2D P-Net) and a three-dimensional CNN (3D P-Net) were constructed using the training cohort (559 patients) and tested on the internal validation cohort (140 patients) as well as on the external validation cohort (133 patients). The performance of 2D P-Net and 3D P-Net in predicting csPCa was assessed in terms of the area under the receiver operating characteristic curve (AUC), biopsy rate, and unnecessary biopsy rate, and compared with the TRUS 5-point Likert score system as well as the multiparametric magnetic resonance imaging (mp-MRI) Prostate Imaging Reporting and Data System (PI-RADS) v2.1. Decision curve analyses (DCAs) were used to determine the net benefits associated with their use. The study is registered at https://www.chictr.org.cn with the unique identifier ChiCTR2200064545. Findings: The diagnostic performance of 3D P-Net (AUC: 0.85-0.89) was superior to that of the TRUS 5-point Likert score system (AUC: 0.71-0.78, P = 0.003-0.040), and similar to that of the mp-MRI PI-RADS v2.1 score system interpreted by experienced radiologists (AUC: 0.83-0.86, P = 0.460-0.732) and 2D P-Net (AUC: 0.79-0.86, P = 0.066-0.678) in the internal and external validation cohorts. The biopsy rate decreased from 40.3% (TRUS 5-point Likert score system) and 47.6% (mp-MRI PI-RADS v2.1 score system) to 35.5% (2D P-Net) and 34.0% (3D P-Net). The unnecessary biopsy rate decreased from 38.1% (TRUS 5-point Likert score system) and 35.2% (mp-MRI PI-RADS v2.1 score system) to 32.0% (2D P-Net) and 25.8% (3D P-Net). 3D P-Net yielded the highest net benefit according to the DCAs. Interpretation: 3D P-Net, based on grayscale TRUS video of the prostate, achieved satisfactory performance in identifying csPCa and potentially reducing unnecessary biopsies. More studies to determine how AI models can be better integrated into routine practice, and randomized controlled trials to demonstrate the value of these models in real clinical applications, are warranted. Funding: The National Natural Science Foundation of China (Grants 82202174 and 82202153), the Science and Technology Commission of Shanghai Municipality (Grants 18441905500 and 19DZ2251100), the Shanghai Municipal Health Commission (Grants 2019LJ21 and SHSLCZDZK03502), the Shanghai Science and Technology Innovation Action Plan (21Y11911200), the Fundamental Research Funds for the Central Universities (ZD-11-202151), and the Scientific Research and Development Fund of Zhongshan Hospital of Fudan University (Grant 2022ZSQD07).
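A 3D CNN applied to a whole TRUS sweep treats the video as a (channels, frames, height, width) volume. The sketch below only shows that input convention with a deliberately tiny, arbitrary network; the layer choices, frame count, and image size are assumptions and not the published P-Net.

```python
# Minimal 3D CNN over a TRUS video volume; the architecture is an arbitrary sketch.
import torch
import torch.nn as nn

class Tiny3DNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)            # csPCa logit

    def forward(self, x):                       # x: (N, 1, frames, H, W)
        return self.head(self.features(x).flatten(1))

video = torch.randn(1, 1, 64, 128, 128)         # hypothetical 64-frame grayscale sweep
prob = torch.sigmoid(Tiny3DNet()(video))
```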

5.
J Vis Exp ; (194)2023 04 21.
Article in English | MEDLINE | ID: mdl-37154577

ABSTRACT

In recent years, the incidence of thyroid cancer has been increasing. Thyroid nodule detection is critical for the diagnosis and treatment of thyroid cancer. Convolutional neural networks (CNNs) have achieved good results in thyroid ultrasound image analysis tasks. However, due to the limited effective receptive field of convolutional layers, CNNs fail to capture long-range contextual dependencies, which are important for identifying thyroid nodules in ultrasound images. Transformer networks are effective in capturing long-range contextual information. Inspired by this, we propose a novel thyroid nodule detection method that combines a Swin Transformer backbone with Faster R-CNN. Specifically, an ultrasound image is first projected into a 1D sequence of embeddings, which are then fed into a hierarchical Swin Transformer. The Swin Transformer backbone extracts features at five different scales by using shifted windows for the computation of self-attention. Subsequently, a feature pyramid network (FPN) is used to fuse the features from the different scales. Finally, a detection head predicts bounding boxes and the corresponding confidence scores. Data collected from 2,680 patients were used to conduct the experiments, and the results showed that this method achieved a best mAP score of 44.8%, outperforming CNN-based baselines. In addition, it achieved higher sensitivity (90.5%) than the competing methods. This indicates that the context modeling in this model is effective for thyroid nodule detection.
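The pipeline described (hierarchical backbone, FPN fusion, region-proposal detection head) follows the standard torchvision Faster R-CNN assembly pattern. In the sketch below a simple MobileNet feature extractor stands in for the Swin Transformer plus FPN, so treat it as an illustrative skeleton under that assumption rather than the paper's model.

```python
# Generic Faster R-CNN assembly in torchvision; the MobileNet backbone is a
# stand-in for the hierarchical Swin Transformer + FPN described above.
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280                      # FasterRCNN needs this attribute

anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),), aspect_ratios=((0.5, 1.0, 2.0),)
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)
model = FasterRCNN(
    backbone, num_classes=2,                      # background + nodule
    rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler,
)
model.eval()                                      # ready for inference on images
```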


Subjects
Thyroid Neoplasms; Thyroid Nodule; Humans; Thyroid Nodule/diagnostic imaging; Thyroid Neoplasms/diagnostic imaging; Ultrasonography; Electric Power Supplies; Image Processing, Computer-Assisted
6.
Behav Brain Res ; 448: 114456, 2023 06 25.
Article in English | MEDLINE | ID: mdl-37116662

ABSTRACT

Chronic social defeat has been found to be stressful and to affect many aspects of brain function and behavior in males. However, relatively little is known about its effects on females. In the present study, we examined the effects of repeated social defeat on social approach and anxiety-like behaviors, as well as on neuronal activation in the brain, in sexually naïve female Mongolian gerbils (Meriones unguiculatus). Our data indicate that repeated social defeat for 20 days reduced social approach and social investigation but increased risk assessment or vigilance toward an unfamiliar conspecific. Such social defeat experience also increased anxiety-like behavior and reduced locomotor activity. Using ΔFosB-immunoreactive (ΔFosB-ir) staining as a marker of neuronal activation in the brain, we found that social defeat experience significantly elevated the density of ΔFosB-ir neurons in several brain regions, including the prelimbic (PL) and infralimbic (IL) subnuclei of the prefrontal cortex (PFC), the CA1 subfield of the hippocampus, the central nucleus of the amygdala (CeA), and the paraventricular nucleus (PVN), dorsomedial nucleus (DMH), and ventrolateral subdivision of the ventromedial nucleus (VMHvl) of the hypothalamus. As these brain regions have been implicated in social behaviors and stress responses, our data suggest that these specific patterns of neuronal activation may relate to the altered social and anxiety-like behaviors following chronic social defeat in female Mongolian gerbils.


Subjects
Brain; Social Defeat; Male; Animals; Female; Gerbillinae; Brain/metabolism; Social Behavior; Neurons/metabolism; Stress, Psychological; Proto-Oncogene Proteins c-fos/metabolism
7.
ISPRS J Photogramm Remote Sens ; 195: 192-203, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36726963

ABSTRACT

Remote sensing (RS) image scene classification has attracted increasing attention for its broad application prospects. Conventional fully supervised approaches usually require a large amount of manually labeled data. As more and more RS images become available, making full use of these unlabeled data has become an urgent topic. Semi-supervised learning, which uses a few labeled data to guide the self-training of numerous unlabeled data, is an intuitive strategy. However, it is hard to apply to cross-dataset (i.e., cross-domain) scene classification due to the significant domain shift among different datasets. To this end, semi-supervised domain adaptation (SSDA), which can reduce the domain shift and transfer knowledge from a fully labeled RS scene dataset (source domain) to a limited-labeled RS scene dataset (target domain), is a feasible solution. In this paper, we propose an SSDA method termed bidirectional sample-class alignment (BSCA) for RS cross-domain scene classification. BSCA consists of two alignment strategies, unsupervised alignment (UA) and supervised alignment (SA), both of which contribute to decreasing domain shift. UA reduces the maximum mean discrepancy between domains and requires no class labels. In contrast, SA aims to align the distributions both from source samples to the associated target class centers and from target samples to the associated source class centers, with awareness of their classes. To validate the effectiveness of the proposed method, extensive ablation, comparison, and visualization experiments are conducted on an RS-SSDA benchmark built upon four widely used RS scene classification datasets. Experimental results indicate that, in comparison with several state-of-the-art methods, our BSCA achieves superior cross-domain classification performance with compact feature representations and low-entropy classification boundaries. Our code will be available at https://github.com/hw2hwei/BSCA.
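The unsupervised-alignment term rests on the maximum mean discrepancy (MMD) between source and target features. A compact, biased RBF-kernel MMD estimator is sketched below; the bandwidth, feature dimensions, and batch handling are simplifying assumptions, not the BSCA code.

```python
# Simplified biased RBF-kernel MMD between source and target feature batches.
import torch

def rbf_mmd(x, y, sigma=1.0):
    # x: (n, d) source features, y: (m, d) target features
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

src = torch.randn(32, 256)      # hypothetical source-domain features
tgt = torch.randn(32, 256)      # hypothetical target-domain features
loss_ua = rbf_mmd(src, tgt)     # would be added to the supervised loss during training
```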

8.
Front Endocrinol (Lausanne) ; 13: 1018321, 2022.
Article in English | MEDLINE | ID: mdl-36237194

ABSTRACT

Background: The dynamic artificial intelligence (AI) ultrasound intelligent auxiliary diagnosis system (dynamic AI) is a joint application of AI technology and medical imaging data that can perform real-time, synchronous dynamic analysis of nodules. The aim of this study was to investigate the value of dynamic AI in differentiating benign and malignant thyroid nodules and its significance for guiding treatment strategies. Methods: The data of 607 patients with 1007 thyroid nodules who underwent surgical treatment were retrospectively reviewed and analyzed. Dynamic AI was used to differentiate benign and malignant nodules. The diagnostic efficacy of dynamic AI was evaluated by comparing the results of dynamic AI examination, preoperative fine needle aspiration cytology (FNAC), and postoperative pathology for nodules of different sizes and properties in patients of different sexes and ages. Results: The sensitivity, specificity, and accuracy of dynamic AI in the diagnosis of thyroid nodules were 92.21%, 83.20%, and 89.97%, respectively, and were highly consistent with the postoperative pathological results (kappa = 0.737, p < 0.001). There was no statistically significant difference in accuracy across patient ages and sexes or nodule sizes, indicating good stability. The accuracy of dynamic AI for malignant nodules (92.21%) was significantly higher than that for benign nodules (83.20%) (p < 0.001). The specificity and positive predictive value of dynamic AI were significantly higher, and its misdiagnosis rate significantly lower, than those of preoperative ultrasound ACR TI-RADS (p < 0.001). The accuracy of dynamic AI for nodules with diameter ≤ 0.50 cm was significantly higher than that of preoperative ultrasound (p = 0.044). The sensitivity (96.58%) and accuracy (94.06%) of dynamic AI were similar to those of FNAC. Conclusions: Dynamic AI examination has high diagnostic value for differentiating benign and malignant thyroid nodules and can effectively assist surgeons in formulating reasonable, individualized diagnosis and treatment strategies for patients.
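The reported agreement statistics (sensitivity, specificity, accuracy, and Cohen's kappa against pathology) can all be read off a confusion matrix. The snippet below is a generic sketch with made-up labels, not the study data or analysis code.

```python
# Generic computation of sensitivity, specificity, accuracy, and Cohen's kappa.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # pathology (1 = malignant); made-up labels
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # AI prediction; made-up labels

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
kappa = cohen_kappa_score(y_true, y_pred)
print(sensitivity, specificity, accuracy, kappa)
```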


Subjects
Thyroid Nodule; Artificial Intelligence; Biopsy, Fine-Needle; Humans; Retrospective Studies; Thyroid Nodule/diagnostic imaging; Thyroid Nodule/surgery; Ultrasonography/methods
9.
Remote Sens Environ ; 269: 112794, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35115734

ABSTRACT

Urbanization is the second-largest megatrend, right after climate change. Accurate measurements of urban morphological and demographic figures are at the core of many international endeavors to address issues of urbanization, such as the United Nations' call for "Sustainable Cities and Communities". In many countries, however - particularly developing countries - such a database does not yet exist. Here, we demonstrate a novel deep learning and big data analytics approach to fuse freely available global radar and multispectral satellite data acquired by the Sentinel-1 and Sentinel-2 satellites. Via this approach, we created the first-ever global, quality-controlled urban local climate zone classification covering all cities across the globe with a population greater than 300,000 and made it available to the community (https://doi.org/10.14459/2021mp1633461). Statistical analysis of the data quantifies a global inequality problem: approximately 40% of the area defined as compact or light/large low-rise accommodates about 60% of the total population, whereas approximately 30% of the area defined as sparsely built accommodates only about 10% of the total population. Beyond this, patterns of urban morphology were discovered in the global classification map, confirming a morphological relationship to geographical region and related cultural heritage. We expect open access to our dataset to encourage research on the global change process of urbanization, as a multidisciplinary community of researchers can use this baseline for the spatial perspective in their work. In addition, it can serve as a unique dataset for stakeholders such as the United Nations to improve their spatial assessments of urbanization.

10.
IEEE Trans Image Process ; 31: 678-690, 2022.
Article in English | MEDLINE | ID: mdl-34914588

ABSTRACT

Building extraction in very-high-resolution remote sensing images (VHR RSIs) remains a challenging task due to occlusion and boundary ambiguity. Although conventional convolutional neural network (CNN)-based methods are capable of exploiting local texture and context information, they fail to capture the shape patterns of buildings, which are a necessary constraint in human recognition. To address this issue, we propose an adversarial shape learning network (ASLNet) that models building shape patterns to improve the accuracy of building segmentation. In the proposed ASLNet, we introduce an adversarial learning strategy to explicitly model the shape constraints, as well as a CNN shape regularizer to strengthen the embedding of shape features. To assess the geometric accuracy of building segmentation results, we introduce several object-based quality assessment metrics. Experiments on two open benchmark datasets show that the proposed ASLNet improves both the pixel-based accuracy and the object-based quality measurements by a large margin. The code is available at: https://github.com/ggsDing/ASLNet.
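One way to read the adversarial shape constraint is as a discriminator that tries to distinguish predicted building masks from reference masks, while the segmenter is trained to fool it. The loss sketch below is a generic GAN-style formulation under that reading, with an arbitrary toy discriminator; it is not the released ASLNet code.

```python
# Generic adversarial loss for shape regularisation of segmentation masks (illustrative).
import torch
import torch.nn as nn

disc = nn.Sequential(                       # tiny mask discriminator; arbitrary layers
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()

pred_mask = torch.rand(4, 1, 64, 64)        # segmenter output (probabilities)
gt_mask = torch.randint(0, 2, (4, 1, 64, 64)).float()

# Discriminator step: reference masks -> 1, predicted masks -> 0.
real_out = disc(gt_mask)
fake_out = disc(pred_mask.detach())
d_loss = bce(real_out, torch.ones_like(real_out)) + bce(fake_out, torch.zeros_like(fake_out))

# Segmenter step: adversarial term pushes predictions toward plausible building shapes.
adv_out = disc(pred_mask)
g_adv = bce(adv_out, torch.ones_like(adv_out))
```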


Subjects
Image Processing, Computer-Assisted; Remote Sensing Technology; Humans; Neural Networks, Computer
11.
Eur J Radiol ; 128: 109061, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32442836

ABSTRACT

PURPOSE: To investigate the clinical value of low-energy images in dual-energy spectral CT (DEsCT) for improving diagnostic accuracy in the arteries of the lower extremities. METHOD: The data of 110 patients (mean age, 67 ± 10 years) who underwent lower-extremity CT angiography (CTA) in dual-energy mode and 72 patients (mean age, 65 ± 13 years) imaged in conventional (100 kVp) mode were retrospectively analyzed. The 50 keV monochromatic images were reconstructed in the DEsCT group for analysis. The quantitative and qualitative image quality of the two groups was compared using appropriate statistical methods, and the diagnostic accuracy for the degree of vessel stenosis was compared using digital subtraction angiography (DSA) as the gold standard. Consistency tests were used for intra-group evaluation. P < 0.05 was considered statistically significant. RESULTS: The use of 50 keV DEsCT images significantly increased enhancement in the arteries of the lower knee segment (LKS) (544.91 ± 106.37 HU vs. 339.65 ± 83.74 HU, P < 0.001) and provided higher SNR (19.92 ± 9.39 vs. 17.39 ± 4.99, P = 0.04) and CNR (45.60 ± 16.61 vs. 38.70 ± 18.17, P < 0.01) than conventional 100 kVp images. The Mann-Whitney test showed that the subjective image quality of the LKS arteries in the DEsCT group was higher than in the conventional group (P = 0.01). The diagnostic performance of the DEsCT group was better than that of the conventional group, mainly in the LKS arteries (95.91% vs. 87.85% for 50% stenosis, P < 0.001; 94.32% vs. 89.58% for occlusion, P = 0.02). CONCLUSIONS: The use of 50 keV DEsCT images enhances contrast in the lower-extremity arteries and improves diagnostic accuracy for the LKS arteries compared with conventional CTA protocols.
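SNR and CNR in a CT ROI comparison are simple ratios of mean attenuation to image noise. The sketch below spells out the usual definitions on made-up ROI statistics; the exact ROI protocol of the study is not reproduced.

```python
# Conventional SNR/CNR definitions for CT ROIs; all numbers are made up.
import numpy as np

artery_hu = np.array([540.0, 552.0, 548.0])     # hypothetical arterial ROI means (HU)
muscle_hu = np.array([55.0, 60.0, 58.0])        # hypothetical background ROI means (HU)
noise_sd = 27.0                                  # hypothetical image noise (SD, HU)

snr = artery_hu.mean() / noise_sd                        # signal-to-noise ratio
cnr = (artery_hu.mean() - muscle_hu.mean()) / noise_sd   # contrast-to-noise ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```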


Subjects
Computed Tomography Angiography/methods; Lower Extremity/blood supply; Lower Extremity/diagnostic imaging; Peripheral Arterial Disease/diagnostic imaging; Dual-Photon Emission Radiographic Imaging/methods; Aged; Arteries/diagnostic imaging; Constriction, Pathologic/diagnostic imaging; Female; Humans; Male; Prospective Studies; Reproducibility of Results; Retrospective Studies
12.
ISPRS J Photogramm Remote Sens ; 159: 184-197, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31929682

ABSTRACT

Automatic building extraction from optical imagery remains a challenge due to, for example, the complexity of building shapes. Semantic segmentation is an efficient approach for this task. The latest developments in deep convolutional neural networks (DCNNs) have made accurate pixel-level classification possible. Yet one central issue remains: the precise delineation of boundaries. Deep architectures generally fail to produce fine-grained segmentation with accurate boundaries due to their progressive down-sampling. Hence, we introduce a generic framework to overcome this issue, integrating a graph convolutional network (GCN) and deep structured feature embedding (DSFE) into an end-to-end workflow. Furthermore, instead of using a classic graph convolutional network, we propose a gated graph convolutional network, which enables the refinement of weak and coarse semantic predictions to generate sharp borders and fine-grained pixel-level classification. Taking the semantic segmentation of building footprints as a practical example, we compared different feature embedding architectures and graph neural networks. Our proposed framework with the new GCN architecture outperforms state-of-the-art approaches. Although our main task in this work is building footprint extraction, the proposed method can be generally applied to other binary or multi-label segmentation tasks.
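At its core, a single graph-convolution step propagates node features over a symmetrically normalised adjacency matrix. The numpy sketch below shows that basic operation on a tiny hypothetical graph; the gating mechanism and the DSFE embedding of the paper are not reproduced here.

```python
# One plain graph-convolution layer: X' = ReLU(D^-1/2 (A + I) D^-1/2 X W).
import numpy as np

A = np.array([[0, 1, 0],                 # hypothetical 3-node pixel/superpixel graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.rand(3, 8)                 # node features (e.g. CNN embeddings)
W = np.random.rand(8, 4)                 # learnable weights (random here)

A_hat = A + np.eye(3)                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
X_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)   # ReLU activation
```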

13.
J Clean Prod ; 196: 1188-1197, 2018 Sep 20.
Article in English | MEDLINE | ID: mdl-30245554

ABSTRACT

The rich nutrient content of human waste offers the prospect of turning it from a pollutant into a potential resource. A pilot-scale resource-oriented toilet with forward osmosis technology has been demonstrated to recover clean water, nitrogen, phosphorus, potassium, biogas, and heat from urine and feces. To explore full-scale implementation in different scenarios, six resource-oriented toilet systems and one conventional toilet system were designed in this study. Cost-benefit analysis and life cycle assessment were applied to analyze the life-cycle economic feasibility and environmental sustainability of these systems. The results indicated that resource-oriented toilets using forward osmosis to concentrate urine provide both economic and environmental benefits. The economic net present values of the new resource-oriented toilets were much better than that of the conventional toilet. Energy consumption in resource-oriented toilets contributes substantially to their environmental impacts, while resource recovery, such as fertilizer production and fresh-water harvesting, offsets much of this burden. Taking both life-cycle economic feasibility and environmental sustainability into consideration, the partial resource-oriented toilet (recovering nutrients only from urine) is the best choice, and the fully independent resource-oriented toilet could replace conventional toilets in areas without external facilities such as sewers and water supply systems.
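The economic comparison rests on a net present value (NPV) computed over the system's life cycle. The toy calculation below only illustrates the discounting arithmetic; the cash flows, discount rate, and lifetime are invented and are not the study's figures.

```python
# Toy net present value of a sanitation system; all cash flows are invented.
capital_cost = 50000.0                      # year-0 investment
annual_net_benefit = 6000.0                 # recovered fertiliser/water value minus O&M
discount_rate = 0.05
lifetime_years = 15

npv = -capital_cost + sum(
    annual_net_benefit / (1 + discount_rate) ** t
    for t in range(1, lifetime_years + 1)
)
print(f"NPV = {npv:.0f}")
```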

14.
Phys Rev E Stat Nonlin Soft Matter Phys ; 85(3 Pt 2): 036708, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22587210

ABSTRACT

We perform three-dimensional under-resolved direct numerical simulations of forced compressible turbulence using the smoothed particle hydrodynamics (SPH) method and investigate the Lagrangian intermittency of the resulting hydrodynamic fields. The analysis presented here is motivated by the presence of typical stretched tails in the probability density function (PDF) of the particle accelerations previously observed in two-dimensional SPH simulations of uniform shear flow [Ellero et al., Phys. Rev. E 82, 046702 (2010)]. In order to produce a stationary isotropic compressible turbulent state, the real-space stochastic forcing method proposed by Kida and Orszag is applied, and the statistics of particle quantities are evaluated. We validate our scheme by checking the behavior of the energy spectrum in the supersonic case, where the expected Burgers-like scaling is obtained. By discretizing the continuum equations along fluid particle trajectories, the SPH method allows us to extract Lagrangian statistics in a straightforward fashion without the need for extra tracer particles. In particular, the Lagrangian PDFs of the density and particle accelerations, as well as their Lagrangian structure functions and local scaling exponents, are analyzed. The results for low-order statistics of Lagrangian intermittency in compressible turbulence demonstrate the implicit subparticle-scale modeling of the SPH discretization scheme.
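Lagrangian structure functions are moments of velocity increments along particle trajectories, S_p(tau) = <|v(t + tau) - v(t)|^p>. The numpy sketch below computes them from a (particles x time) velocity record using synthetic data; the sampling, lags, and normalisation are illustrative assumptions, not the paper's post-processing.

```python
# Lagrangian structure functions S_p(tau) from particle velocity histories.
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal((1000, 512))        # synthetic velocities: (particles, time)

def structure_function(v, tau, p):
    dv = np.abs(v[:, tau:] - v[:, :-tau])   # velocity increments over lag tau
    return (dv ** p).mean()                 # average over particles and time origins

taus = [1, 2, 4, 8, 16]
S2 = [structure_function(v, t, 2) for t in taus]
print(dict(zip(taus, S2)))
```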
