Results 1 - 20 of 32,375
1.
PLoS One ; 15(9): e0239562, 2020.
Article in English | MEDLINE | ID: mdl-32966330

ABSTRACT

Reproducible and unbiased methods to quantify alveolar structure are important for research on many lung diseases. However, manually estimating alveolar structure through stereology is time consuming and inter-observer variability is high. The objective of this work was to develop and validate a fast, reproducible and accurate (semi-)automatic alternative. A FIJI macro was designed that automatically segments lung images to binary masks, and counts the number of test points falling on tissue and the number of intersections of the air-tissue interface with a set of test lines. Manual selection remains necessary for the recognition of non-parenchymal tissue and alveolar exudates. Volume density of alveolar septa ([Formula: see text]) and mean linear intercept of the airspaces (Lm) as measured by the macro were compared to theoretical values for 11 artificial test images and to manually counted values for 17 lung slides using linear regression and Bland-Altman plots. Inter-observer agreement between 3 observers, measuring 8 lungs both manually and automatically, was assessed using intraclass correlation coefficients (ICC). [Formula: see text] and Lm measured by the macro closely approached theoretical values for artificial test images (R2 of 0.9750 and 0.9573 and bias of 0.34% and 8.7%). The macro data in lungs were slightly higher for [Formula: see text] and slightly lower for Lm in comparison to manually counted values (R2 of 0.8262 and 0.8288 and bias of -6.0% and 12.1%). Visually, semi-automatic segmentation was accurate. Most importantly, manually counted [Formula: see text] and Lm had only moderate to good inter-observer agreement (ICC 0.859 and 0.643), but agreements were excellent for semi-automatically counted values (ICC 0.956 and 0.900). This semi-automatic method provides accurate and highly reproducible alveolar morphometry results.
Future efforts should focus on refining methods for automatic detection of non-parenchymal tissue or exudates, and for assessment of lung structure on 3D reconstructions of lungs scanned with microCT.
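The point- and intersection-counting that underlies this kind of stereology is easy to sketch. The following Python snippet is a minimal illustration, not the published FIJI macro: it estimates volume density from a 16x16 grid of test points and the mean linear intercept from airspace chord lengths along horizontal test lines. The grid size, the chord-based Lm formulation, and the function name are our assumptions for illustration.

```python
import numpy as np

def stereology_estimates(mask, n_lines=8, pixel_size_um=1.0):
    """Estimate septal volume density (Vv) and mean linear intercept (Lm)
    from a binary mask (True = tissue, False = airspace).

    Vv: fraction of a 16x16 grid of test points falling on tissue.
    Lm: mean length of airspace chords along horizontal test lines.
    """
    h, w = mask.shape
    ys = np.linspace(0, h - 1, 16, dtype=int)
    xs = np.linspace(0, w - 1, 16, dtype=int)
    vv = float(mask[np.ix_(ys, xs)].mean())

    chords = []
    for r in np.linspace(0, h - 1, n_lines, dtype=int):
        row = mask[r]
        # Run-length encode the row; every run of False pixels is one
        # airspace chord intercepted by the test line.
        changes = np.flatnonzero(np.diff(row.astype(int)))
        bounds = np.concatenate(([0], changes + 1, [w]))
        for a, b in zip(bounds[:-1], bounds[1:]):
            if not row[a]:
                chords.append((b - a) * pixel_size_um)
    lm = float(np.mean(chords)) if chords else 0.0
    return vv, lm
```

On a synthetic mask that is half tissue and half airspace, this returns Vv = 0.5 and an Lm equal to the airspace width, which is a quick sanity check for any such implementation.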


Subjects
Bronchopulmonary Dysplasia/pathology , Image Interpretation, Computer-Assisted/methods , Pulmonary Alveoli/pathology , Animals , Bronchopulmonary Dysplasia/diagnostic imaging , Disease Models, Animal , Female , Histological Techniques/statistics & numerical data , Observer Variation , Pregnancy , Pulmonary Alveoli/diagnostic imaging , Rabbits , Radiographic Image Interpretation, Computer-Assisted/methods , X-Ray Microtomography/statistics & numerical data
2.
Br J Radiol ; 93(1114): 20200543, 2020 Oct 01.
Article in English | MEDLINE | ID: mdl-32877210

ABSTRACT

OBJECTIVES: To evaluate interobserver agreement for T2 weighted (T2W) and diffusion-weighted MRI (DW-MRI) contours of locally advanced rectal cancer (LARC); and to evaluate manual and semi-automated delineations of restricted diffusion tumour subvolumes. METHODS: 20 cases of LARC were reviewed by 2 radiation oncologists and 2 radiologists. Contours of gross tumour volume (GTV) on T2W, DW-MRI and co-registered T2W/DW-MRI were independently delineated and compared using Dice Similarity Coefficient (DSC), mean distance to agreement (MDA) and other metrics of interobserver agreement. Restricted diffusion subvolumes within GTVs were manually delineated and compared to semi-automatically generated contours corresponding to intratumoral apparent diffusion coefficient (ADC) centile values. RESULTS: Observers were able to delineate subvolumes of restricted diffusion with moderate agreement (DSC 0.666, MDA 1.92 mm). Semi-automated segmentation based on the 40th centile intratumoral ADC value demonstrated moderate average agreement with consensus delineations (DSC 0.581, MDA 2.44 mm), with errors noted in image registration and luminal variation between acquisitions. A small validation set of four cases with optimised planning MRI demonstrated improvement (DSC 0.669, MDA 1.91 mm). CONCLUSION: Contours based on co-registered T2W and DW-MRI could be used for delineation of biologically relevant tumour subvolumes. Semi-automated delineation based on patient-specific intratumoral ADC thresholds may standardise subvolume delineation if registration between acquisitions is sufficiently accurate. ADVANCES IN KNOWLEDGE: This is the first study to evaluate the feasibility of semi-automated diffusion-based subvolume delineation in LARC. This approach could be applied to dose escalation or 'dose painting' protocols to improve delineation reproducibility.
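The Dice Similarity Coefficient used throughout this comparison is straightforward to compute from two binary delineations; a minimal sketch (the function name is ours, not from any particular imaging toolkit):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = none."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Mean distance to agreement (MDA), the study's other headline metric, additionally needs contour extraction and distance transforms, so it is omitted here.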


Subjects
Adenocarcinoma/diagnostic imaging , Diffusion Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Rectal Neoplasms/diagnostic imaging , Adenocarcinoma/pathology , Adenocarcinoma/therapy , Adult , Aged , Aged, 80 and over , Clinical Competence , Female , Humans , Male , Middle Aged , Neoplasm Staging , Observer Variation , Rectal Neoplasms/pathology , Rectal Neoplasms/therapy , Reproducibility of Results , Retrospective Studies , Tumor Burden
3.
BMC Bioinformatics ; 21(Suppl 11): 270, 2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32921304

ABSTRACT

BACKGROUND: Melanoma is one of the most aggressive types of cancer and has become a worldwide problem. According to World Health Organization estimates, 132,000 cases of melanoma and 66,000 deaths from malignant melanoma and other forms of skin cancer are reported annually worldwide ( https://apps.who.int/gho/data/?theme=main ), and those numbers continue to grow. In our opinion, the increasing incidence of the disease makes it necessary to find new, easy-to-use, and sensitive methods for the early diagnosis of melanoma in large numbers of people around the world. Over the last decade, neural networks have shown highly sensitive, specific, and accurate results. OBJECTIVE: This study presents a review of PubMed papers retrieved with the queries "melanoma neural network" and "melanoma neural network dermatoscopy". We review recent studies and discuss which of their capabilities are acceptable in clinical practice. METHODS: We searched the PubMed database for systematic reviews and original research papers published in English matching the queries "melanoma neural network" and "melanoma neural network dermatoscopy". Only papers that reported results, progress, and outcomes are included in this review. RESULTS: We found 11 papers matching our queries that examined convolutional and deep-learning neural networks, combined with fuzzy clustering or World Cup Optimization algorithms, for analyzing dermatoscopic images. All of them rely on the ABCD (asymmetry, border, color, and differential structures) algorithm or its derivatives (in combination with the ABCD algorithm or separately). They also require a large dataset of dermatoscopic images and optimized estimation parameters to provide high specificity, accuracy, and sensitivity. CONCLUSIONS: According to the analyzed papers, neural networks show higher specificity, accuracy, and sensitivity than dermatologists. Neural networks are able to evaluate features that might be unavailable to the naked human eye. Despite that, more datasets are needed to confirm these statements. Machine learning is becoming a helpful tool in the early diagnosis of skin diseases, especially melanoma.


Subjects
Deep Learning , Early Detection of Cancer , Image Interpretation, Computer-Assisted/methods , Melanoma/diagnostic imaging , Skin Neoplasms/diagnostic imaging , Data Accuracy , Humans , Sensitivity and Specificity
4.
Curr Opin Ophthalmol ; 31(5): 324-328, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32769696

ABSTRACT

PURPOSE OF REVIEW: To review four recent controversial topics arising from deep learning applications in ophthalmology. RECENT FINDINGS: Four recent controversies surrounding deep learning applications in ophthalmology are discussed: lack of explainability, limited generalizability, potential biases, and the protection of patient confidentiality in large-scale data transfer. SUMMARY: These controversial issues, which span clinical medicine, public health, computer science, ethics, and law, are complex and will likely benefit from an interdisciplinary approach if artificial intelligence in ophthalmology is to succeed over the next decade.


Subjects
Artificial Intelligence , Eye Diseases/diagnosis , Image Interpretation, Computer-Assisted/methods , Ophthalmology , Big Data , Humans
5.
Radiol Clin North Am ; 58(5): 875-884, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32792120

ABSTRACT

Indeterminate renal masses remain a diagnostic challenge when lesions are not initially characterized as angiomyolipoma or Bosniak I/II cysts. The differential diagnosis for indeterminate renal masses includes oncocytoma, fat-poor angiomyolipoma, and clear cell, papillary, and chromophobe renal cell carcinoma. Qualitative and quantitative techniques using data derived from multiphase contrast-enhanced imaging have provided methods for specific differentiation and subtyping of indeterminate renal masses, with emerging applications such as radiocytogenetics. Early and accurate characterization of indeterminate renal masses by multiphase contrast-enhanced imaging will optimize triage of these lesions into surgical, ablative, and active surveillance treatment plans.


Subjects
Image Interpretation, Computer-Assisted/methods , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Contrast Media , Diagnosis, Differential , Humans , Image Enhancement/methods , Kidney/diagnostic imaging , Kidney/pathology , Triage
6.
Radiol Clin North Am ; 58(5): 995-1008, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32792129

ABSTRACT

Radiomics allows for high-throughput extraction of quantitative data from images. This is an area of active research as groups try to capture and quantify imaging parameters and convert these into descriptive phenotypes of organs or tumors. Texture analysis is one radiomics tool that extracts information about heterogeneity within a given region of interest. It can be used with or without associated machine learning classifiers, or a deep learning approach can be applied to similar types of data. These tools have shown utility in characterizing renal masses and renal cell carcinoma, and in assessing response to targeted therapeutic agents in metastatic renal cell carcinoma.


Subjects
Artificial Intelligence , Carcinoma, Renal Cell/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Kidney Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Diagnosis, Differential , Humans , Kidney/diagnostic imaging
7.
Article in English | MEDLINE | ID: mdl-32746195

ABSTRACT

Recent works highlighted the significant potential of lung ultrasound (LUS) imaging in the management of subjects affected by COVID-19. In general, the development of objective, fast, and accurate automatic methods for LUS data evaluation is still at an early stage. This is particularly true for COVID-19 diagnostics. In this article, we propose an automatic and unsupervised method for the detection and localization of the pleural line in LUS data based on the hidden Markov model and the Viterbi algorithm. The pleural line localization step is followed by a supervised classification procedure based on the support vector machine (SVM). The classifier evaluates whether a patient is healthy and, if not, the severity of the pathology, i.e., the score value for each image of a given LUS acquisition. The experiments performed on a variety of LUS data acquired in Italian hospitals with both linear and convex probes highlight the effectiveness of the proposed method. The average overall accuracy in detecting the pleura is 84% and 94% for convex and linear probes, respectively. The accuracy of the SVM classification in correctly evaluating the severity of COVID-19 related pleural line alterations is about 88% and 94% for convex and linear probes, respectively. The results, as well as the visualization of the detected pleural line and the predicted score chart, provide significant support to medical staff in further evaluating the patient's condition.
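The detector pairs a hidden Markov model with the Viterbi algorithm. As a toy illustration of that idea (not the authors' model), the following dynamic program traces one bright, roughly horizontal line through an image column by column: states are row indices, the "emission" score is pixel intensity, and transitions penalize row jumps between adjacent columns. The penalty weight and function name are our assumptions.

```python
import numpy as np

def trace_line_viterbi(img, jump_penalty=1.0):
    """Viterbi-style trace of one bright horizontal-ish line: returns,
    for each column, the row index of the best-scoring path."""
    h, w = img.shape
    rows = np.arange(h)
    score = img[:, 0].astype(float).copy()
    back = np.zeros((h, w), dtype=int)
    for c in range(1, w):
        # trans[r, r_prev]: best score ending at r_prev in the previous
        # column, minus a cost proportional to the row jump |r - r_prev|.
        trans = score[None, :] - jump_penalty * np.abs(rows[:, None] - rows[None, :])
        back[:, c] = trans.argmax(axis=1)
        score = trans.max(axis=1) + img[:, c]
    # Backtrack from the best final state.
    path = np.empty(w, dtype=int)
    path[-1] = int(score.argmax())
    for c in range(w - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

On an image whose only bright pixels lie in a single row, the recovered path sits on that row in every column.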


Subjects
Coronavirus Infections/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Pleura/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , Ultrasonography/methods , Algorithms , Humans , Pandemics , Signal Processing, Computer-Assisted , Support Vector Machine
8.
Article in English | MEDLINE | ID: mdl-32784133

ABSTRACT

In this article, we present a novel method for line artifact quantification in lung ultrasound (LUS) images of COVID-19 patients. We formulate this as a nonconvex regularization problem involving a sparsity-enforcing, Cauchy-based penalty function, and the inverse Radon transform. We employ a simple local maxima detection technique in the Radon transform domain, associated with known clinical definitions of line artifacts. Despite being nonconvex, the proposed technique is guaranteed to converge through our proposed Cauchy proximal splitting (CPS) method, and accurately identifies both horizontal and vertical line artifacts in LUS images. To reduce the number of false and missed detections, our method includes a two-stage validation mechanism, which is performed in both the Radon and image domains. We evaluate the performance of the proposed method in comparison to the current state-of-the-art B-line identification method, and show a considerable performance gain with 87% correctly detected B-lines in LUS images of nine COVID-19 patients.
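Horizontal line artifacts map to peaks in the Radon domain. At a single angle (0 degrees) the Radon transform reduces to a row-sum projection, which makes the local-maxima idea easy to demonstrate. This one-angle sketch is ours and deliberately omits the paper's Cauchy regularization and two-stage validation:

```python
import numpy as np

def horizontal_line_rows(img, threshold=0.5):
    """Detect rows containing bright horizontal line artifacts as local
    maxima of the row-sum projection (the 0-degree slice of the Radon
    transform), after normalizing the projection to [0, 1]."""
    proj = img.sum(axis=1).astype(float)
    rng = proj.max() - proj.min()
    proj = (proj - proj.min()) / (rng if rng else 1.0)
    peaks = []
    for r in range(1, len(proj) - 1):
        # Local maximum above the threshold counts as one detected line.
        if proj[r] >= threshold and proj[r] >= proj[r - 1] and proj[r] > proj[r + 1]:
            peaks.append(r)
    return peaks
```

A full Radon-domain method would scan many angles so that oblique and vertical lines (e.g. B-lines) also produce detectable peaks.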


Subjects
Coronavirus Infections/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Lung/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , Ultrasonography/methods , Aged , Algorithms , Artifacts , Betacoronavirus , Female , Humans , Male , Middle Aged , Pandemics , Pleura/diagnostic imaging , ROC Curve
9.
Eur J Radiol ; 130: 109202, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32745895

ABSTRACT

BACKGROUND: So far, only a few studies have evaluated the correlation between CT features and clinical outcome in patients with COVID-19 pneumonia. PURPOSE: To evaluate the ability of CT to differentiate critically ill patients requiring invasive ventilation from patients with less severe disease. METHODS: We retrospectively collected data from patients admitted to our institution for COVID-19 pneumonia between March 5th and 24th. Patients were classified as critically ill or non-critically ill, depending on the need for mechanical ventilation. CT images from both groups were analyzed for the assessment of qualitative features and disease extension, using a quantitative semiautomatic method. We evaluated the differences between the two groups for clinical, laboratory and CT data. Analyses were conducted on a per-protocol basis. RESULTS: 189 patients were analyzed. PaO2/FIO2 ratio and oxygen saturation (SaO2) were decreased in critically ill patients. At CT, a mixed pattern (ground-glass opacities (GGO) and consolidation) and GGO alone were more frequent in critically ill and in non-critically ill patients, respectively (p < 0.05). Lung volume involvement was significantly higher in critically ill patients (38.5 % vs. 5.8 %, p < 0.05). A cut-off of 23.0 % of lung involvement showed 96 % sensitivity and 96 % specificity in distinguishing critically ill patients from patients with less severe disease. The fraction of involved lung was related to lactate dehydrogenase (LDH) levels, PaO2/FIO2 ratio and SaO2 (p < 0.05). CONCLUSION: Lung disease extension, assessed using quantitative CT, has a significant relationship with clinical severity and may predict the need for invasive ventilation in patients with COVID-19.
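The 96%/96% performance reported for the 23.0% cut-off follows from a standard confusion-matrix calculation; a small sketch with invented numbers (the function name and data are ours, not the study's):

```python
def sens_spec_at_cutoff(values, labels, cutoff):
    """Sensitivity and specificity of the rule 'value >= cutoff predicts
    the positive class (label 1)', e.g. percent lung involvement as a
    test for critical illness."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= cutoff)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < cutoff)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < cutoff)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= cutoff)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 - specificity yields the ROC curve from which such an optimal threshold is usually chosen.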


Subjects
Betacoronavirus , Coronavirus Infections/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/methods , Aged , Critical Illness , Evaluation Studies as Topic , Female , Humans , Lung/diagnostic imaging , Male , Middle Aged , Pandemics , Research Design , Retrospective Studies , Risk Factors , Sensitivity and Specificity
10.
PLoS One ; 15(8): e0234169, 2020.
Article in English | MEDLINE | ID: mdl-32810131

ABSTRACT

Toxoplasma gondii is an obligate intracellular parasite infecting up to one third of the human population. The central event in the pathogenesis of toxoplasmosis is the conversion of tachyzoites into encysted bradyzoites. Computational image analysis, which is increasingly used, may offer a novel approach to analyzing the structure of in vivo-derived tissue cysts. The objective of this study was to quantify the geometrical complexity of T. gondii cysts by morphological, particle, and fractal analysis, as well as to determine whether it is impacted by parasite strain, cyst age, and host type. A total of 31 images of T. gondii brain cysts of four type-2 strains (Me49, and local isolates BGD1, BGD14, and BGD26) were analyzed using ImageJ software. The parameters of interest included diameter, circularity, packing density (PD), fractal dimension (FD), and lacunarity. Although cyst diameter varied widely, it correlated negatively with PD. Circularity was remarkably close to 1, indicating a perfectly round shape of the cysts. PD and FD did not vary among cysts of different strains or ages, or among cysts derived from mice of different genetic background. Conversely, lacunarity, which is a measure of heterogeneity, was significantly lower for the BGD1 strain vs. all other strains, and higher for Me49 vs. BGD14 and BGD26, but did not differ among Me49 cysts of different ages, or those derived from genetically different mice. The results indicate a highly uniform structure and occupancy of the different T. gondii tissue cysts. This study furthers the use of image analysis in describing the structural complexity of T. gondii cyst morphology, and presents the first application of fractal analysis for this purpose. The presented results show that use of freely available software is a cost-effective approach to advance automated image scoring for T. gondii cysts.
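The fractal dimension mentioned here is typically estimated by box counting: cover the binary image with boxes of shrinking size s, count occupied boxes N(s), and fit log N(s) against log s. A hand-rolled sketch of that estimator (not the ImageJ implementation the study used):

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting fractal dimension of a binary image: the fitted
    slope of log N(s) vs. log s is -D. Box sizes halve from half the
    smaller image dimension down to one pixel."""
    mask = np.asarray(mask, dtype=bool)
    sizes, counts = [], []
    s = min(mask.shape) // 2
    while s >= 1:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        # Tile the image into s-by-s blocks and count occupied blocks.
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled square yields D close to 2 and a one-pixel-wide line D close to 1, which is the usual sanity check for such an implementation; lacunarity would additionally look at the variance, not just the count, of box occupancy.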


Subjects
Image Interpretation, Computer-Assisted/methods , Toxoplasma/cytology , Toxoplasmosis, Animal/pathology , Toxoplasmosis, Animal/parasitology , Animals , Brain/parasitology , Brain/pathology , Cysts/parasitology , Cysts/pathology , Female , Fractals , Host-Parasite Interactions , Humans , Mice , Mice, Inbred BALB C , Toxoplasma/pathogenicity , Toxoplasma/ultrastructure
11.
PLoS One ; 15(8): e0237213, 2020.
Article in English | MEDLINE | ID: mdl-32797099

ABSTRACT

Bone metastasis is one of the most frequent complications of prostate cancer; scintigraphy imaging is particularly important for the clinical diagnosis of bone metastasis. To date, minimal research has been conducted on applying machine learning, with emphasis on modern efficient convolutional neural network (CNN) algorithms, to the diagnosis of prostate cancer metastasis from bone scintigraphy images. The advantageous and outstanding capabilities of deep learning, the groundbreaking advance in machine learning, have not yet been fully investigated with regard to their application in computer-aided diagnosis systems for medical image analysis, such as the problem of bone metastasis classification in whole-body scans. In particular, CNNs are gaining great attention due to their ability to recognize complex visual patterns, in the same way as human perception operates. Considering all these advances in deep learning, a set of simpler, faster and more accurate CNN architectures, designed for classification of metastatic prostate cancer in bones, is explored. This research study has a two-fold goal: to create and also demonstrate a set of simple but robust CNN models for automatic classification of whole-body scans into two categories, malignant (bone metastasis) or healthy, using solely the scans at the input level. Through a meticulous exploration of CNN hyperparameter selection and fine-tuning, the best architecture is selected with respect to classification accuracy. Thus a CNN model with improved classification capabilities for bone metastasis diagnosis is produced, using bone scans from prostate cancer patients. The achieved classification testing accuracy is 97.38%, and the average sensitivity is approximately 95.8%. Finally, the best-performing CNN method is compared to other popular and well-known CNN architectures used for medical imaging, like VGG16, ResNet50, GoogleNet and MobileNet. The classification results show that the proposed CNN-based approach outperforms these popular CNN methods in nuclear medicine for metastatic prostate cancer diagnosis in bones.


Subjects
Bone Neoplasms/secondary , Neural Networks, Computer , Prostatic Neoplasms/pathology , Whole Body Imaging/methods , Bone Neoplasms/classification , Bone Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Humans , Image Interpretation, Computer-Assisted/methods , Machine Learning , Male , Radionuclide Imaging/methods , Software
12.
PLoS One ; 15(8): e0237587, 2020.
Article in English | MEDLINE | ID: mdl-32804986

ABSTRACT

In radiomics studies, researchers usually need to develop a supervised machine learning model to map image features onto the clinical conclusion. A classical machine learning pipeline consists of several steps, including normalization, feature selection, and classification. It is often tedious to find an optimal pipeline with appropriate combinations. We designed an open-source software package named FeAture Explorer (FAE). It was programmed in Python and uses the NumPy, pandas, and scikit-learn modules. FAE can be used to extract image features, preprocess the feature matrix, develop different models automatically, and evaluate them with common clinical statistics. FAE features a user-friendly graphical user interface that can be used by radiologists and researchers to build many different pipelines, and to compare their results visually. To prove the effectiveness of FAE, we developed a candidate model to classify clinically significant prostate cancer (CS PCa) versus non-CS PCa using the PROSTATEx dataset. We used FAE to try out different combinations of feature selectors and classifiers, compare the area under the receiver operating characteristic curve of different models on the validation dataset, and evaluate the model using independent test data. The final model, with analysis of variance as the feature selector and linear discriminant analysis as the classifier, was selected and evaluated conveniently by FAE. The area under the receiver operating characteristic curve on the training, validation, and test datasets achieved results of 0.838, 0.814, and 0.824, respectively. FAE allows researchers to build radiomics models and evaluate them using an independent testing dataset. It also provides easy model comparison and result visualization. We believe FAE can be a convenient tool for radiomics studies and other medical studies involving supervised machine learning.
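The winning pipeline, ANOVA feature selection followed by linear discriminant analysis, can be hand-rolled in a few lines of NumPy for intuition. This is our simplified sketch, not FAE's actual (scikit-learn based) implementation; the two-group F statistic, the small ridge term, and the function names are our assumptions.

```python
import numpy as np

def anova_select_lda(X, y, k=2):
    """Rank features by a one-way ANOVA F statistic for a binary label,
    keep the top k, then fit a Fisher linear discriminant on them.
    Returns (selected feature indices, decision function)."""
    g0, g1 = X[y == 0], X[y == 1]
    n0, n1 = len(g0), len(g1)
    grand = X.mean(axis=0)
    # Between-group vs. within-group variation per feature (2 groups).
    between = n0 * (g0.mean(0) - grand) ** 2 + n1 * (g1.mean(0) - grand) ** 2
    within = ((g0 - g0.mean(0)) ** 2).sum(0) + ((g1 - g1.mean(0)) ** 2).sum(0)
    f = between / (within / (n0 + n1 - 2) + 1e-12)
    idx = np.argsort(f)[::-1][:k]

    s0, s1 = g0[:, idx], g1[:, idx]
    # Fisher LDA: w = Sw^-1 (mu1 - mu0), threshold at the projected
    # midpoint of the two class means. Small ridge keeps Sw invertible.
    sw = ((s0 - s0.mean(0)).T @ (s0 - s0.mean(0))
          + (s1 - s1.mean(0)).T @ (s1 - s1.mean(0)))
    w = np.linalg.solve(sw + 1e-6 * np.eye(k), s1.mean(0) - s0.mean(0))
    t = 0.5 * (s0.mean(0) + s1.mean(0)) @ w

    def predict(X_new):
        return (X_new[:, idx] @ w > t).astype(int)

    return idx, predict
```

In scikit-learn the equivalent pipeline would chain `SelectKBest(f_classif)` with `LinearDiscriminantAnalysis`, plus the normalization and cross-validation steps FAE automates.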


Subjects
Image Interpretation, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Humans , Male , Multiparametric Magnetic Resonance Imaging , ROC Curve , Software , Supervised Machine Learning
14.
Curr Opin Ophthalmol ; 31(5): 303-311, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32740061

ABSTRACT

PURPOSE OF REVIEW: As artificial intelligence continues to develop new applications in ophthalmic image recognition, we provide here an introduction for ophthalmologists and a primer on the mechanisms of deep learning systems. RECENT FINDINGS: Deep learning has lent itself to the automated interpretation of various retinal imaging modalities, including fundus photography and optical coherence tomography. Convolutional neural networks (CNN) represent the primary class of deep neural networks applied to these image analyses. These have been configured to aid in the detection of diabetic retinopathy, AMD, retinal detachment, glaucoma, and ROP, among other ocular disorders. Predictive models for retinal disease prognosis and treatment are also being validated. SUMMARY: Deep learning systems have begun to demonstrate a reliable level of diagnostic accuracy equal or superior to that of human graders for narrow image recognition tasks. However, challenges regarding the use of deep learning systems in ophthalmology remain. These include trust of unsupervised learning systems and the limited ability to recognize broad ranges of disorders.


Subjects
Deep Learning , Diagnostic Imaging/methods , Eye Diseases/diagnosis , Image Interpretation, Computer-Assisted/methods , Ophthalmologists , Humans , Neural Networks, Computer
15.
Neurology ; 95(8): e943-e952, 2020 08 25.
Article in English | MEDLINE | ID: mdl-32646955

ABSTRACT

OBJECTIVE: To evaluate progressive white matter (WM) degeneration in amyotrophic lateral sclerosis (ALS). METHODS: Sixty-six patients with ALS and 43 healthy controls were enrolled in a prospective, longitudinal, multicenter study in the Canadian ALS Neuroimaging Consortium (CALSNIC). Participants underwent a harmonized neuroimaging protocol across 4 centers that included diffusion tensor imaging (DTI) for assessment of WM integrity. Three visits were accompanied by clinical assessments of disability (ALS Functional Rating Scale-Revised [ALSFRS-R]) and upper motor neuron (UMN) function. Voxel-wise whole-brain and quantitative tract-wise DTI assessments were done at baseline and longitudinally. Correction for site variance incorporated data from healthy controls and from healthy volunteers who underwent the DTI protocol at each center. RESULTS: Patients with ALS had a mean progressive decline in fractional anisotropy (FA) of the corticospinal tract (CST) and frontal lobes. Tract-wise analysis revealed reduced FA in the CST, corticopontine/corticorubral tract, and corticostriatal tract. CST FA correlated with UMN function, and frontal lobe FA correlated with the ALSFRS-R score. A progressive decline in CST FA correlated with a decline in the ALSFRS-R score and worsening UMN signs. Patients with fast vs slow progression had a greater reduction in FA of the CST and upper frontal lobe. CONCLUSIONS: Progressive WM degeneration in ALS is most prominent in the CST and frontal lobes and, to a lesser degree, in the corticopontine/corticorubral tracts and corticostriatal pathways. With the use of a harmonized imaging protocol and incorporation of analytic methods to address site-related variances, this study is an important milestone toward developing DTI biomarkers for cerebral degeneration in ALS. CLINICALTRIALSGOV IDENTIFIER: NCT02405182.


Subjects
Amyotrophic Lateral Sclerosis/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Pyramidal Tracts/diagnostic imaging , White Matter/diagnostic imaging , Adult , Aged , Amyotrophic Lateral Sclerosis/pathology , Cerebral Cortex/pathology , Diffusion Tensor Imaging , Disease Progression , Female , Humans , Image Interpretation, Computer-Assisted/methods , Longitudinal Studies , Male , Middle Aged , Nerve Degeneration/diagnostic imaging , Nerve Degeneration/pathology , Neuroimaging/methods , Prospective Studies , Pyramidal Tracts/pathology , White Matter/pathology
16.
Nat Commun ; 11(1): 3673, 2020 07 22.
Article in English | MEDLINE | ID: mdl-32699250

ABSTRACT

Causal reasoning can shed new light on the major challenges in machine learning for medical imaging: scarcity of high-quality annotated data and mismatch between the development dataset and the target environment. A causal perspective on these issues allows decisions about data collection, annotation, preprocessing, and learning strategies to be made and scrutinized more transparently, while providing a detailed categorisation of potential biases and mitigation techniques. Along with worked clinical examples, we highlight the importance of establishing the causal relationship between images and their annotations, and offer step-by-step recommendations for future studies.


Subjects
Diagnostic Imaging/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Causality , Humans
17.
Zhonghua Wai Ke Za Zhi ; 58(7): 520-524, 2020 Jul 01.
Article in Chinese | MEDLINE | ID: mdl-32610422

ABSTRACT

Objective: To investigate the effectiveness and clinical value of an enhanced-CT automatic recognition system for pancreatic cancer based on Faster R-CNN. Methods: In this study, 4,024 enhanced CT imaging sequences of 315 patients with pancreatic cancer treated from January 2013 to May 2016 at the Affiliated Hospital of Qingdao University were collected retrospectively; 2,614 imaging sequences were input into the Faster R-CNN system as the training dataset to create an automatic image recognition model, which was then validated by reading 1,410 enhanced CT images of 135 cases of pancreatic cancer. To assess its effectiveness, 3,750 CT images of 150 patients with pancreatic lesions were read and a follow-up was carried out. The accuracy and recall rate in detecting nodules were recorded and regression curves were generated. In addition, the accuracy, sensitivity and specificity of Faster R-CNN diagnosis were analyzed, ROC curves were generated and the areas under the curves were calculated. Results: Based on the enhanced CT images of the 135 cases, the area under the ROC curve calculated for Faster R-CNN was 0.927. The accuracy, specificity and sensitivity were 0.902, 0.913 and 0.801, respectively. After the data of the 150 patients were verified, 893 CT images were read as positive and 2,857 as negative. Ninety-eight patients were diagnosed with pancreatic cancer by Faster R-CNN. At follow-up, it was found that 53 of these cases were post-operatively proved to be pancreatic ductal carcinoma, 21 pancreatic cystadenocarcinoma, 12 pancreatic cystadenoma and 5 pancreatic cysts, while 7 cases were untreated. During the 5 to 17 months after operation, 6 patients died of abdominal tumor infiltration or liver and lung metastases. Of the 52 patients diagnosed as negative by Faster R-CNN, 9 were post-operatively proved to have pancreatic ductal carcinoma. Conclusion: The Faster R-CNN system has clinical value in helping imaging physicians diagnose pancreatic cancer.


Subjects
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Pancreatic Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Humans , Pancreatic Neoplasms/diagnosis , ROC Curve , Retrospective Studies , Sensitivity and Specificity
18.
IEEE Trans Med Imaging ; 39(8): 2595-2605, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32730212

ABSTRACT

The coronavirus disease (COVID-19) is rapidly spreading all over the world, and had infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper healthcare to the patients and also to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically differentiate COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions. Note that there exists an imbalanced distribution of the sizes of the infection regions between COVID-19 and CAP, partially due to fast progression of COVID-19 after symptom onset. Therefore, we develop a dual-sampling strategy to mitigate the imbalanced learning. Our method is evaluated on (to the best of our knowledge) the largest multi-center CT dataset for COVID-19, from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for a 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset including 2796 CT scans from 2057 patients. Results show that our algorithm can identify the COVID-19 images with an area under the receiver operating characteristic curve (AUC) of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With this performance, the proposed algorithm could potentially aid radiologists in distinguishing COVID-19 from CAP, especially in the early stage of the COVID-19 outbreak.
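The reported accuracy, sensitivity, specificity, and F1-score all derive from the four confusion-matrix counts; a small reference implementation (the counts in the example are invented, not the study's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), specificity and F1 from
    confusion-matrix counts of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1
```

The AUC, by contrast, is threshold-free: it integrates sensitivity over all operating points rather than reading off one confusion matrix.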


Subjects
Coronavirus Infections/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Algorithms , Betacoronavirus , Community-Acquired Infections/diagnostic imaging , Humans , Pandemics , ROC Curve , Radiography, Thoracic , Tomography, X-Ray Computed
19.
Curr Opin Ophthalmol ; 31(5): 312-317, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32694266

ABSTRACT

PURPOSE OF REVIEW: In this article, we review the current state of artificial intelligence applications in retinopathy of prematurity (ROP) and provide insight into the challenges, as well as strategies, for bringing these algorithms to the bedside. RECENT FINDINGS: In the past few years, artificial intelligence applications have shifted dramatically from machine learning approaches based on feature extraction to 'deep' convolutional neural networks. Several artificial intelligence approaches for ROP have demonstrated adequate proof-of-concept performance in research studies. The next steps are to determine whether these algorithms are robust to variable clinical and technical parameters in practice. Integration of artificial intelligence into ROP screening and treatment is limited by the generalizability of the algorithms (maintaining performance on unseen data) and by the integration of artificial intelligence technology into new or existing clinical workflows. SUMMARY: Real-world implementation of artificial intelligence for ROP diagnosis will require massive efforts targeted at developing standards for data acquisition, true external validation, and demonstration of feasibility. We must now focus on the ethical, technical, clinical, regulatory, and financial considerations needed to bring this technology to the infant bedside and realize its promise of reducing preventable blindness from ROP.


Subjects
Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Retinopathy of Prematurity/diagnosis , Algorithms , Humans , Infant, Newborn , Machine Learning , Neural Networks, Computer
20.
BMC Neurol ; 20(1): 262, 2020 Jun 30.
Article in English | MEDLINE | ID: mdl-32605601

ABSTRACT

BACKGROUND: In this study, we explored whether the proposed short-echo-time magnitude (setMag) image derived from quantitative susceptibility mapping (QSM) could resemble the neuromelanin MRI (NM-MRI) image in the substantia nigra (SN), by quantitatively comparing their spatial similarity and diagnostic performance for Parkinson's disease (PD). METHODS: QSM and NM-MRI were performed in 18 PD patients and 15 healthy controls (HCs). The setMag images were calculated from the short-echo-time magnitude images. Bilateral hyperintensity areas of the SN (SNhyper) were manually segmented on setMag and NM-MRI images by two raters in a blinded manner. Inter-rater reliability was evaluated with the intraclass correlation coefficient (ICC) and the Dice similarity coefficient (DSC). The inter-modality (i.e., setMag vs NM-MRI) spatial similarity was then quantitatively assessed using the DSC and the volume of the consensual voxels identified by both raters. The diagnostic performance of mean SNhyper volume for PD on setMag and NM-MRI images was evaluated using receiver operating characteristic (ROC) analysis. RESULTS: The SNhyper segmented by the two raters showed substantial to excellent inter-rater reliability for both setMag and NM-MRI images. The DSCs of SNhyper between setMag and NM-MRI images showed substantial to excellent voxel-wise overlap in HCs (0.80-0.83) and PD (0.73-0.76), and no significant difference was found between the SNhyper volumes of setMag and NM-MRI images in either HCs or PD patients (p > 0.05). The mean SNhyper volume was significantly decreased in PD patients compared with HCs on both setMag images (77.61 mm3 vs 95.99 mm3, p < 0.0001) and NM-MRI images (79.06 mm3 vs 96.00 mm3, p < 0.0001). Areas under the curve (AUCs) of mean SNhyper volume for PD diagnosis were 0.904 on setMag and 0.906 on NM-MRI images, with no significant difference between the two curves (p = 0.96).
CONCLUSIONS: SNhyper on setMag derived from QSM showed substantial spatial overlap with that on NM-MRI and provided comparable PD diagnostic performance, offering a new QSM-based multi-contrast imaging strategy for future PD studies.
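The Dice similarity coefficient used above to quantify voxel-wise overlap between two segmentations has a simple set-based definition: twice the intersection over the sum of the two mask sizes. A minimal sketch (an illustration of the standard formula, not the authors' code; masks are represented here as sets of voxel indices):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|); ranges from 0 (no overlap)
    to 1 (identical masks). Two empty masks are treated as identical.
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```

Values of roughly 0.7 or above are commonly read as substantial overlap, which is the convention the reported 0.73-0.83 range follows.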


Subjects
Image Interpretation, Computer-Assisted/methods , Melanins/analysis , Parkinson Disease/diagnostic imaging , Substantia Nigra/diagnostic imaging , Aged , Female , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , ROC Curve , Reproducibility of Results