Results 1 - 20 of 162
1.
Scand J Gastroenterol ; 59(8): 925-932, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38950889

ABSTRACT

OBJECTIVES: Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors and its risk classification performance. MATERIALS AND METHODS: A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas and 15 schwannomas, were examined. A total of 3,283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, including very-low-, low-, intermediate- and high-risk GISTs, 2,784 EUS images were used for training and three-fold cross-validation. RESULTS: For the differential diagnosis of GISTs among all SELs, the accuracy, sensitivity, specificity and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3% and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0% and 0.771, respectively. CONCLUSIONS: The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and demonstrated the feasibility of GIST risk classification. This system can determine whether treatment is necessary based on EUS imaging alone, without the need for additional invasive examinations.
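The area under the ROC curve reported above can be estimated directly from classifier scores with the rank-based (Mann-Whitney) statistic; the sketch below is an illustrative reference computation, not the authors' code:

```python
def roc_auc(labels, scores):
    # Rank-based (Mann-Whitney U) estimate of the area under the ROC curve.
    # labels: 0/1 ground truth; scores: classifier outputs, higher = more positive.
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to tied scores
        j = i
        while j + 1 < n and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    pos = [r for r, (_, y) in zip(ranks, pairs) if y == 1]
    n_pos, n_neg = len(pos), n - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

With perfectly separated scores the statistic reaches 1.0; ties are handled by average ranks.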


Subjects
Deep Learning ; Diagnosis, Computer-Assisted ; Endosonography ; Gastrointestinal Neoplasms ; Gastrointestinal Stromal Tumors ; ROC Curve ; Humans ; Diagnosis, Differential ; Gastrointestinal Stromal Tumors/diagnostic imaging ; Gastrointestinal Stromal Tumors/pathology ; Gastrointestinal Stromal Tumors/diagnosis ; Gastrointestinal Neoplasms/diagnostic imaging ; Gastrointestinal Neoplasms/diagnosis ; Female ; Middle Aged ; Male ; Aged ; Adult ; Risk Assessment ; Sensitivity and Specificity ; Aged, 80 and over
2.
Cell Biochem Funct ; 42(5): e4088, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38973163

ABSTRACT

The field of image processing is experiencing significant advancements to support professionals in analyzing histological images obtained from biopsies. The primary objective is to enhance the process of diagnosis and prognostic evaluation. Various forms of cancer can be diagnosed by employing different segmentation techniques followed by postprocessing approaches that can identify distinct neoplastic areas. Computational approaches give experts a more objective and efficient analysis. The progressive advancement of histological image analysis holds significant importance in modern medicine. This paper provides an overview of the current advances in segmentation and classification approaches for images of follicular lymphoma. This research analyzes the primary image processing techniques utilized in the various stages of preprocessing, segmentation of the region of interest, classification, and postprocessing as described in the existing literature, and examines the strengths and weaknesses associated with these approaches. Additionally, this study encompasses an examination of validation procedures and an exploration of prospective future research directions in the segmentation of neoplasms.


Subjects
Diagnosis, Computer-Assisted ; Image Processing, Computer-Assisted ; Lymphoma, Follicular ; Lymphoma, Follicular/diagnosis ; Lymphoma, Follicular/pathology ; Humans
3.
BMC Med Imaging ; 24(1): 253, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304839

ABSTRACT

BACKGROUND: Breast cancer is one of the leading diseases worldwide. According to estimates by the National Breast Cancer Foundation, over 42,000 women are expected to die from this disease in 2024. OBJECTIVE: The prognosis of breast cancer depends on the early detection of breast micronodules and the ability to distinguish benign from malignant lesions. Ultrasonography is a crucial radiological imaging technique for diagnosing the illness because it allows for biopsy and lesion characterization. The practitioner's level of experience and knowledge is vital, since ultrasonographic diagnosis relies on that expertise. Furthermore, computer-aided technologies contribute significantly by potentially reducing the workload of radiologists and complementing their expertise, especially when combined with a large patient volume in a hospital setting. METHOD: This work describes the development of a hybrid CNN system for diagnosing benign and malignant breast cancer lesions. The InceptionV3 and MobileNetV2 models serve as the foundation for the hybrid framework. Features are extracted from each model and concatenated, resulting in a larger feature set. Finally, various classifiers are applied for the classification task. RESULTS: The model achieved the best results using the softmax classifier, with an accuracy of over 95%. CONCLUSION: Computer-aided diagnosis greatly assists radiologists and reduces their workload. Therefore, this research can serve as a foundation for other researchers to build clinical solutions.
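The hybrid-feature idea described above (extract features from two backbones, concatenate them, then classify with a softmax head) can be sketched in miniature; the function names and toy dimensions are illustrative assumptions, not from the paper:

```python
import math

def concat_features(f1, f2):
    # Concatenate the feature vectors extracted by the two backbones.
    return list(f1) + list(f2)

def dense_layer(features, weights, bias):
    # One fully connected layer: logits[i] = sum_j W[i][j] * x[j] + b[i].
    return [sum(w * x for w, x in zip(row, features)) + b
            for row, b in zip(weights, bias)]

def softmax(logits):
    # Numerically stable softmax over class logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In the actual system the concatenated vector would feed a trained classifier; here the weights are placeholders.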


Subjects
Breast Neoplasms ; Ultrasonography, Mammary ; Humans ; Female ; Breast Neoplasms/diagnostic imaging ; Ultrasonography, Mammary/methods ; Neural Networks, Computer ; Image Interpretation, Computer-Assisted/methods ; Diagnosis, Computer-Assisted/methods
4.
Methods ; 203: 78-89, 2022 07.
Article in English | MEDLINE | ID: mdl-35436513

ABSTRACT

As a common cause of hydronephrosis in children, ureteropelvic junction obstruction (UPJO) may lead to progressive renal dysfunction. Ultrasonography is the primary screening tool for UPJO, yet the follow-up examinations are laborious, time-consuming, and mostly involve radiation. Deep learning based algorithms for the automatic diagnosis of UPJO or hydronephrosis from ultrasound images are still rare, and their performance remains unsatisfactory owing to the reliance on manually identified regions of interest, small datasets, and labels from a single institution. To relieve the burden on children, parents, and doctors, and to exploit all available information in the datasets, we designed a deep learning based mutual promotion model for the automatic diagnosis of UPJO. The model consists of a semantic segmentation branch and a classification branch that share a common transformation structure; the encoder and decoder are trained separately in an alternating loop. Thorough comparative experiments and ablation studies show that our method outperforms classic networks, with an accuracy of 0.891 and an F1-score of 0.895. Our design jointly utilizes different forms of supervision, maximizes the use of the characteristics of each dataset, and automatically grades the severity of UPJO from ultrasound images by first segmenting and then classifying them. Moreover, the intermediate segmentation results are accurate, with smooth edges that are easy for doctors to recognize visually. Overall, the proposed method can serve as a useful auxiliary tool for smart healthcare.


Subjects
Hydronephrosis ; Ureteral Obstruction ; Algorithms ; Child ; Humans ; Hydronephrosis/diagnostic imaging ; Hydronephrosis/etiology ; Ultrasonics ; Ultrasonography ; Ureteral Obstruction/complications ; Ureteral Obstruction/surgery
5.
Sensors (Basel) ; 23(3)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772510

ABSTRACT

The Internet of Medical Things (IoMT) has revolutionized Ambient Assisted Living (AAL) by interconnecting smart medical devices. These devices generate large amounts of data without human intervention, and sophisticated learning-based models are required to extract meaningful information from this surge of data. In this context, Deep Neural Networks (DNNs) have proven to be powerful tools for disease detection. Pulmonary Embolism (PE) is considered a leading cause of death, with a death toll of 180,000 per year in the US alone. It arises from a blood clot in the pulmonary arteries, which blocks the blood supply to the lungs or a part of a lung. Early diagnosis and treatment of PE could reduce the mortality rate. Doctors and radiologists prefer Computed Tomography (CT) scans as a first-hand tool, but a single study contains 200 to 300 images, and it is difficult to maintain concentration through all the scans, which can result in a misdiagnosis or false diagnosis. Given this, there is a need for an automatic Computer-Aided Diagnosis (CAD) system to assist doctors and radiologists in decision-making. To develop such a system, in this paper we propose a deep learning framework based on DenseNet201 to classify PE into nine classes in CT scans. We utilized DenseNet201 as a feature extractor with customized fully connected decision-making layers. The model was trained on the Radiological Society of North America (RSNA) Pulmonary Embolism Detection Challenge (2020) Kaggle dataset and achieved promising results of 88%, 88%, 89%, and 90% in terms of accuracy, sensitivity, specificity, and Area Under the Curve (AUC), respectively.


Subjects
Deep Learning ; Pulmonary Embolism ; Humans ; Tomography, X-Ray Computed/methods ; Diagnosis, Computer-Assisted/methods ; Pulmonary Embolism/diagnostic imaging ; Computers ; Sensitivity and Specificity
6.
J Digit Imaging ; 36(4): 1408-1418, 2023 08.
Article in English | MEDLINE | ID: mdl-37095310

ABSTRACT

The presence of cranial and facial bone fractures is an important finding on non-enhanced head computed tomography (CT) scans from patients who have sustained head trauma. Some prior studies have proposed automatic cranial fracture detection, but studies on facial fractures are lacking. We propose a deep learning system to automatically detect both cranial and facial bone fractures. Our system incorporates YOLOv4 for one-stage fracture detection and an improved ResUNet (ResUNet++) for the segmentation of cranial and facial bones. The results from the two models are mapped together to provide the location of the fracture and the name of the fractured bone as the final output. The training data for the detection model were the soft tissue algorithm images from a total of 1,447 head CT studies (16,985 images), and the training data for the segmentation model included 1,538 selected head CT images. The trained models were tested on a dataset of 192 head CT studies (5,890 images). The overall performance achieved a sensitivity of 88.66%, a precision of 94.51%, and an F1 score of 0.9149. Specifically, the cranial and facial regions were evaluated separately, resulting in sensitivities of 84.78% and 80.77%, precisions of 92.86% and 87.50%, and F1 scores of 0.8864 and 0.8400, respectively. The average accuracy of the segmentation labels over all predicted fracture bounding boxes was 80.90%. Our deep learning system can accurately detect cranial and facial bone fractures and identify the fractured bone region simultaneously.
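Mapping a detected fracture box onto the bone-segmentation output, as described above, amounts to naming the dominant bone label inside the box; a minimal sketch (the function name and the majority rule are assumptions, since the paper's exact mapping is not detailed here):

```python
from collections import Counter

def bone_label_for_box(seg_mask, box, background=0):
    # seg_mask: 2-D list of integer bone labels; box: (x1, y1, x2, y2), inclusive.
    # Returns the most frequent non-background label inside the detection box,
    # i.e. the bone the detected fracture most plausibly belongs to.
    x1, y1, x2, y2 = box
    counts = Counter(seg_mask[y][x]
                     for y in range(y1, y2 + 1)
                     for x in range(x1, x2 + 1)
                     if seg_mask[y][x] != background)
    return counts.most_common(1)[0][0] if counts else background
```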


Subjects
Artificial Intelligence ; Skull Fractures ; Humans ; Skull Fractures/diagnostic imaging ; Facial Bones/diagnostic imaging ; Facial Bones/injuries ; Tomography, X-Ray Computed/methods ; Algorithms
7.
J Xray Sci Technol ; 31(1): 167-180, 2023.
Article in English | MEDLINE | ID: mdl-36404568

ABSTRACT

BACKGROUND: Pancreatic cancer is a highly lethal disease. The preoperative distinction between pancreatic serous cystic neoplasm (SCN) and mucinous cystic neoplasm (MCN) remains a clinical challenge. OBJECTIVE: The goal of this study is to provide clinicians with supportive advice and avoid overtreatment by constructing a convolutional neural network (CNN) classifier to automatically identify pancreatic cancer using computed tomography (CT) images. METHODS: We construct a CNN model using a dataset of 6,173 CT images obtained from 107 pathologically confirmed pancreatic cancer patients at Shanghai Changhai Hospital from January 2017 to February 2022. We divide the CT slices into three categories, namely SCN, MCN, and no tumor, to train the DenseNet201-based CNN model with a multi-head spatial attention mechanism (MSAM-DenseNet201). The attention module enhances the network's attention to local features and effectively improves network performance. The trained model is applied to all CT image slices, and a joint voting strategy yields the final two-category classification of MCN and SCN patients. RESULTS: Using 10-fold cross-validation, the new MSAM-DenseNet201 model achieves a classification accuracy of 92.52%, a precision of 92.16%, a sensitivity of 92.16%, and a specificity of 92.86%. CONCLUSIONS: This study demonstrates the feasibility of using a deep learning network as a classification model to help diagnose MCN and SCN cases. Thus, the new method has great potential for developing new computer-aided diagnosis systems and for application in future clinical practice.
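The joint voting strategy that turns slice-level predictions into a patient-level diagnosis can be sketched as a simple majority vote; the exact voting rule is an assumption, as the abstract does not specify it:

```python
from collections import Counter

def patient_diagnosis(slice_predictions):
    # Majority vote over per-slice class predictions; the most frequent
    # class becomes the patient-level diagnosis.
    return Counter(slice_predictions).most_common(1)[0][0]
```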


Subjects
Neoplasms, Cystic, Mucinous, and Serous ; Pancreatic Neoplasms ; Humans ; China ; Tomography, X-Ray Computed/methods ; Pancreatic Neoplasms/diagnostic imaging ; Pancreatic Neoplasms/pathology ; Machine Learning
8.
J Digit Imaging ; 35(2): 281-301, 2022 04.
Article in English | MEDLINE | ID: mdl-35013827

ABSTRACT

Hypertensive retinopathy (HR) refers to changes in the morphological diameter of the retinal vessels due to persistent high blood pressure. Early detection of such changes helps in preventing blindness or even death due to stroke. These changes can be quantified by computing the arteriovenous ratio and the tortuosity severity in the retinal vasculature. This paper presents a decision support system for detecting and grading HR using morphometric analysis of retinal vasculature, particularly measuring the arteriovenous ratio (AVR) and retinal vessel tortuosity. In the first step, the retinal blood vessels are segmented and classified as arteries and veins. Then, the width of arteries and veins is measured within the region of interest around the optic disk. Next, a new iterative method is proposed to compute the AVR from the caliber measurements of arteries and veins using the Parr-Hubbard and Knudtson methods. Moreover, the retinal vessel tortuosity severity index is computed for each image using 14 tortuosity severity metrics. In the end, a hybrid decision support system is proposed for the detection and grading of HR using the AVR and the tortuosity severity index. Furthermore, we present a new publicly available retinal vessel morphometry (RVM) dataset to evaluate the proposed methodology. The RVM dataset contains 504 retinal images with pixel-level annotations for vessel segmentation, artery/vein classification, and optic disk localization. The image-level labels for the vessel tortuosity index and HR grade are also available. The proposed methods for iterative AVR measurement, the tortuosity index, and HR grading are evaluated using the new RVM dataset. The results indicate that the proposed method performs better than existing methods. The presented methodology is a novel advancement in the automated detection and grading of HR, which can potentially be used as a clinical decision support system.
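For the AVR, the Knudtson revision of the Parr-Hubbard approach combines vessel calibers by iteratively pairing the widest with the narrowest vessel until one summary value remains; the sketch below is illustrative, and the branching coefficients (0.88 for arterioles, 0.95 for venules) are the commonly cited Knudtson values, not taken from this paper:

```python
import math

def knudtson_summary(widths, c):
    # Iteratively pair the widest with the narrowest vessel caliber and
    # combine them as c * sqrt(a^2 + b^2), until one summary value remains.
    w = sorted(widths)
    while len(w) > 1:
        a, b = w.pop(0), w.pop(-1)   # narrowest and widest
        w.append(c * math.sqrt(a * a + b * b))
        w.sort()
    return w[0]

def avr(artery_widths, vein_widths, c_a=0.88, c_v=0.95):
    # Arteriovenous ratio = summary arteriolar caliber / summary venular caliber.
    return knudtson_summary(artery_widths, c_a) / knudtson_summary(vein_widths, c_v)
```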


Subjects
Hypertensive Retinopathy ; Optic Disk ; Humans ; Hypertensive Retinopathy/diagnostic imaging ; Image Processing, Computer-Assisted/methods ; Retinal Vessels/diagnostic imaging
9.
J Xray Sci Technol ; 30(1): 89-109, 2022.
Article in English | MEDLINE | ID: mdl-34842222

ABSTRACT

BACKGROUND: Coronavirus Disease 2019 (COVID-19) is a contagious respiratory tract infection caused by a newly discovered coronavirus. Its death toll is high, and early diagnosis is the main problem nowadays. Infected people show a variety of symptoms such as fatigue, fever, loss of taste, and dry cough. Other signs may be manifested through radiographic visual identification; therefore, Chest X-Rays (CXR) play a key role in the diagnosis of COVID-19. METHODS: In this study, we use chest X-ray images to develop a computer-aided diagnosis (CAD) system for the disease. These images are used to train two deep networks: a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network, an artificial Recurrent Neural Network (RNN). The proposed study involves three phases. First, the CNN model is trained on raw CXR images; next, on pre-processed CXR images; and finally, enhanced CXR images are used for training. Geometric transformations, color transformations, image enhancement, and noise injection are used for augmentation, yielding 3,220 augmented CXRs as the training dataset. In the final phase, the CNN extracts features from the CXR images, which are fed to the LSTM model. The performance of the four trained models is evaluated with accuracy, specificity, sensitivity, false-positive rate, and the receiver operating characteristic (ROC) curve. RESULTS: We compare our results with other benchmark CNN models. Our proposed CNN-LSTM model gives superior accuracy (99.02%) to the other state-of-the-art models. Improving the input helped the CNN model produce a very high true-positive rate (TPR of 1) with no false negatives, whereas false negatives were a major problem when using raw CXR images.
CONCLUSIONS: After performing different experiments, we conclude that image pre-processing and augmentation remarkably improve the results of CNN-based models. This will support better early detection of the disease and eventually reduce the mortality rate of COVID-19.
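The evaluation measures listed above (accuracy, sensitivity, specificity, false-positive rate) all derive from the binary confusion matrix; a minimal reference implementation, not the authors' code:

```python
def binary_metrics(y_true, y_pred):
    # Count confusion-matrix cells for a binary task (1 = positive class).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # true positive rate
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "fpr": fp / (fp + tn) if fp + tn else 0.0,          # false positive rate
    }
```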


Subjects
COVID-19 ; Deep Learning ; COVID-19 Testing ; Computers ; Humans ; SARS-CoV-2
10.
J Xray Sci Technol ; 30(2): 377-388, 2022.
Article in English | MEDLINE | ID: mdl-35095015

ABSTRACT

BACKGROUND: Pancreatic cancer is one of the most aggressive cancers, with an approximately 10% five-year survival rate. To reduce the mortality rate, accurate detection and diagnosis of suspicious pancreatic tumors at an early stage plays an important role. OBJECTIVE: To develop and test a new radiomics-based computer-aided diagnosis (CAD) scheme for computed tomography (CT) images to detect and classify suspicious pancreatic tumors. METHODS: A retrospective dataset of 77 patients with suspicious pancreatic tumors detected on CT images was assembled, of which 33 tumors are malignant. The CAD scheme comprises five steps: (1) apply an image pre-processing algorithm to filter and reduce image noise, (2) use a deep learning model to detect and segment the pancreas region, (3) apply a modified region-growing algorithm to segment the tumor region, (4) compute and select optimal radiomics features, and (5) train and test a support vector machine (SVM) model to classify the detected pancreatic tumor using a leave-one-case-out cross-validation method. RESULTS: Using the area under the receiver operating characteristic (ROC) curve (AUC) as an evaluation index, the SVM model yields AUC = 0.750 with a 95% confidence interval of [0.624, 0.885] for classifying pancreatic tumors. CONCLUSIONS: Study results indicate that radiomics features computed from CT images contain useful information associated with the risk of tumor malignancy. This study also builds a foundation for further efforts to develop and optimize CAD schemes with more advanced image processing and machine learning methods to detect and classify pancreatic tumors more accurately and robustly.
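Step (5), leave-one-case-out cross-validation, holds each case out once and trains on the rest; the sketch below pairs the loop with a toy nearest-mean classifier standing in for the paper's SVM (all names are illustrative):

```python
def leave_one_case_out(cases, train_fn, predict_fn):
    # cases: list of (feature, label) pairs; each case is held out exactly once.
    predictions = []
    for i, (x_test, _) in enumerate(cases):
        train = cases[:i] + cases[i + 1:]
        model = train_fn(train)
        predictions.append(predict_fn(model, x_test))
    return predictions

def train_nearest_mean(train):
    # Toy stand-in classifier: store the mean feature value per class.
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_nearest_mean(means, x):
    # Predict the class whose mean is closest to the held-out feature.
    return min(means, key=lambda y: abs(means[y] - x))
```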


Subjects
Diagnosis, Computer-Assisted ; Pancreatic Neoplasms ; Diagnosis, Computer-Assisted/methods ; Humans ; Pancreatic Neoplasms/diagnostic imaging ; ROC Curve ; Retrospective Studies ; Support Vector Machine ; Tomography, X-Ray Computed
11.
Expert Syst Appl ; 207: 118029, 2022 Nov 30.
Article in English | MEDLINE | ID: mdl-35812003

ABSTRACT

In the context of the global pandemic of Coronavirus disease 2019 (COVID-19), which threatens the lives of all human beings, it is of vital importance to achieve early detection of COVID-19 among symptomatic patients. In this paper, a computer aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, concentrating on powerful and robust feature learning. In particular, a modified residual network with asymmetric convolution and an embedded attention mechanism is selected as the backbone feature extractor, after which skip-connected dilated convolutions with varying dilation rates are applied to achieve sufficient feature fusion between high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of the proposed Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, the proposed Cov-Net outperforms six other state-of-the-art computer vision algorithms, which validates its superiority and competitiveness in building highly discriminative features from a methodological perspective. Hence, the proposed Cov-Net is deemed to have a good generalization ability and can be applied to other CAD scenarios. Consequently, this work has both practical value, in providing a reliable reference to the radiologist, and theoretical significance, in developing methods to build robust features with strong representation ability.
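Dilated convolution, used in Cov-Net's feature-fusion stage, inserts gaps between kernel taps so the receptive field grows without adding parameters; a 1-D illustration (the paper's networks operate on 2-D images, so this is a simplified sketch):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    # 'Valid' 1-D correlation with gaps of (dilation - 1) samples between
    # kernel taps; dilation=1 reduces to an ordinary convolution.
    span = (len(kernel) - 1) * dilation
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]
```

With the same two-tap kernel, raising the dilation rate widens the span each output covers, which is the feature-fusion idea behind using several rates in parallel.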

12.
Sensors (Basel) ; 21(21)2021 Oct 23.
Article in English | MEDLINE | ID: mdl-34770340

ABSTRACT

Parkinson's disease (PD) is the second most common neurodegenerative disorder affecting over 6 million people globally. Although there are symptomatic treatments that can increase the survivability of the disease, there are no curative treatments. The prevalence of PD and disability-adjusted life years continue to increase steadily, leading to a growing burden on patients, their families, society and the economy. Dopaminergic medications can significantly slow down the progression of PD when applied during the early stages. However, these treatments often become less effective with the disease progression. Early diagnosis of PD is crucial for immediate interventions so that the patients can remain self-sufficient for the longest period of time possible. Unfortunately, diagnoses are often late, due to factors such as a global shortage of neurologists skilled in early PD diagnosis. Computer-aided diagnostic (CAD) tools, based on artificial intelligence methods, that can perform automated diagnosis of PD, are gaining attention from healthcare services. In this review, we have identified 63 studies published between January 2011 and July 2021, that proposed deep learning models for an automated diagnosis of PD, using various types of modalities like brain analysis (SPECT, PET, MRI and EEG), and motion symptoms (gait, handwriting, speech and EMG). From these studies, we identify the best performing deep learning model reported for each modality and highlight the current limitations that are hindering the adoption of such CAD tools in healthcare. Finally, we propose new directions to further the studies on deep learning in the automated detection of PD, in the hopes of improving the utility, applicability and impact of such tools to improve early detection of PD globally.


Subjects
Deep Learning ; Parkinson Disease ; Artificial Intelligence ; Gait ; Humans ; Parkinson Disease/diagnosis ; Speech
13.
Sensors (Basel) ; 21(16)2021 Aug 20.
Article in English | MEDLINE | ID: mdl-34451072

ABSTRACT

Colorectal cancer has become the third most commonly diagnosed form of cancer, and has the second highest fatality rate among cancers worldwide. Currently, optical colonoscopy is the preferred tool for the diagnosis of polyps and for averting colorectal cancer. Colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and our proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will offer benefits in terms of the future development of CAD tools for polyp segmentation for colorectal cancer diagnosis and management. In the future, we intend to embed our proposed network into a medical capsule robot for practical usage and trial it in a hospital setting with clinicians.
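Two of the reported segmentation measures, the dice coefficient and intersection over union, can be computed directly from binary masks; a minimal reference implementation:

```python
def dice_and_iou(pred, target):
    # pred, target: flat binary masks (0/1) of equal length.
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Dice weights the overlap twice relative to the mask sizes, so it is always at least as large as IoU on the same prediction.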


Subjects
Colonoscopy ; Neural Networks, Computer ; Databases, Factual ; Diagnosis, Computer-Assisted ; Image Processing, Computer-Assisted ; Research Design
14.
J Xray Sci Technol ; 29(4): 617-633, 2021.
Article in English | MEDLINE | ID: mdl-33967076

ABSTRACT

The Tc-99m methylene diphosphonate (MDP) whole body bone scan (WBBS) has been widely accepted as a method of choice for the initial diagnosis of bone and joint changes in patients with oncologic diseases. The WBBS has shown high sensitivity but relatively low specificity due to bone variation. This study aims to use the self-developed irregular flux viewer (IFV) system to predict possible bone lesions in planar WBBS. The study uses gradient vector flow (GVF) and self-organizing map (SOM) methods to analyze blood fluid dynamics and evaluate hot spots. The evaluation includes a selection of 368 patients with bone metastasis from prostate cancer, lung cancer and breast cancer. Finally, we compare IFV values with BONENAVI data; BONENAVI is a computer-assisted diagnosis system that analyzes bone scintigraphy automatically. The analysis shows that the IFV system achieves sensitivities of 93% for prostate cancer, 91% for breast cancer, and 83% for lung cancer, which are higher than the corresponding results of BONENAVI version 2.0.5 for prostate cancer (88%), breast cancer (86%) and lung cancer (82%). The study results demonstrate that the high sensitivity and specificity of the IFV system can provide assistance for image interpretation and generate prediction values for WBBS.


Subjects
Bone Neoplasms ; Prostatic Neoplasms ; Bone Neoplasms/diagnostic imaging ; Bone and Bones/pathology ; Diagnosis, Computer-Assisted/methods ; Humans ; Male ; Prostatic Neoplasms/diagnostic imaging ; Prostatic Neoplasms/pathology ; Sensitivity and Specificity ; Technetium Tc 99m Medronate
15.
Optik (Stuttg) ; 241: 167199, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34028466

ABSTRACT

Today, COVID-19 has caused many deaths, and its spread is accelerating owing to virus mutation; this outbreak makes diagnosing infected people an important issue. Therefore, in this research, a computer-aided diagnosis (CAD) system called COV-CAD is proposed for diagnosing COVID-19. The COV-CAD system comprises a feature extractor, a classification method, and a content-based image retrieval (CBIR) system. The proposed feature extractor is built on a modified AlexNet CNN. The first modification replaces the ReLU activation functions with LeakyReLU to increase efficiency. The second replaces a fully connected (FC) layer of AlexNet with a new FC layer, reducing the number of learnable parameters and the training time. Another FC layer with dimensions 1 × 64 is added at the end of the feature extractor as the feature vector. In the classification section, a new classification method is defined in which a majority voting technique is applied to the outputs of CBIR, SVM, KNN, and Random Forest classifiers for the final diagnosis. Furthermore, the retrieval section uses CBIR because of its ability to retrieve the images most similar to a patient's image. Since this feature helps physicians find the most similar cases, they can conduct further statistical evaluations on the profiles of similar patients. The system has been evaluated with accuracy, sensitivity, specificity, F1-score, and mean average precision; its accuracy on CT and X-ray datasets is 93.20% and 99.38%, respectively. The results demonstrate that the proposed method is more efficient than other similar studies.
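The first modification, swapping ReLU for LeakyReLU, changes only the negative half of the activation; a minimal sketch (the slope value 0.01 is the common default and an assumption here, since the paper's setting is not stated in this abstract):

```python
def relu(x):
    # Standard ReLU: negative inputs are zeroed, killing their gradient.
    return x if x > 0 else 0.0

def leaky_relu(x, negative_slope=0.01):
    # LeakyReLU keeps a small, non-zero response for negative inputs,
    # which avoids "dead" units during training.
    return x if x >= 0 else negative_slope * x
```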

16.
Adv Exp Med Biol ; 1213: 59-72, 2020.
Article in English | MEDLINE | ID: mdl-32030663

ABSTRACT

For computer-aided diagnosis (CAD), detection, segmentation, and classification from medical imagery are three key components to efficiently assist physicians in making an accurate diagnosis. In this chapter, a completely integrated deep learning based CAD system is presented to diagnose breast lesions from digital X-ray mammograms, involving detection, segmentation, and classification. To automatically detect breast lesions from mammograms, a regional deep learning approach called You-Only-Look-Once (YOLO) is used. To segment breast lesions, the full resolution convolutional network (FrCN), a novel deep segmentation model, is implemented and used. Finally, three conventional deep learning models, a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are separately adopted to classify the detected and segmented breast lesions as either benign or malignant. To evaluate the integrated CAD system for detection, segmentation, and classification, the publicly available and annotated INbreast database is used over fivefold cross-validation tests. The YOLO-based detection achieved an accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. Moreover, breast lesion segmentation via FrCN achieved an overall accuracy of 92.97%, an MCC of 85.93%, a Dice score (F1-score) of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented breast lesions were classified via CNN, ResNet-50, and InceptionResNet-V2, achieving average overall accuracies of 88.74%, 92.56%, and 95.32%, respectively. The performance evaluation results through all stages of detection, segmentation, and classification show that the integrated CAD system outperforms the latest conventional deep learning methodologies. We conclude that our CAD system could assist radiologists over all stages of detection, segmentation, and classification in the diagnosis of breast lesions.
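The Matthews correlation coefficient and F1-score reported for the detection stage both follow from the confusion-matrix counts; a minimal reference computation, not the authors' code:

```python
import math

def mcc_f1(tp, fp, tn, fn):
    # Matthews correlation coefficient and F1-score from confusion counts.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return mcc, f1
```

MCC stays informative on imbalanced data because it uses all four confusion-matrix cells, unlike F1, which ignores true negatives.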


Subjects
Breast Neoplasms/diagnostic imaging , Deep Learning , Computer-Assisted Diagnosis , Computer-Assisted Image Interpretation , Mammography/methods , Humans
17.
Adv Exp Med Biol ; 1213: 47-58, 2020.
Article in English | MEDLINE | ID: mdl-32030662

ABSTRACT

Image-based computer-aided diagnosis (CAD) algorithms that use convolutional neural networks (CNNs), and therefore do not require a hand-crafted image-feature extractor, are powerful for the classification of lung abnormalities compared with conventional feature-based CAD algorithms, which do require one. Moreover, CNN-based computer-aided detection and segmentation algorithms are useful for the analysis of lung abnormalities. Deep learning will improve the performance of CAD systems dramatically and will therefore change the roles of radiologists in the near future. In this article, we introduce the development and evaluation of such image-based CAD algorithms for various kinds of lung abnormalities, such as lung nodules and diffuse lung diseases.


Subjects
Deep Learning , Computer-Assisted Diagnosis , Computer-Assisted Image Interpretation , Lung Diseases/diagnostic imaging , Lung/diagnostic imaging , Humans
18.
J Digit Imaging ; 33(2): 399-407, 2020 04.
Article in English | MEDLINE | ID: mdl-31388865

ABSTRACT

Bone age assessment (BAA) is a radiological process for identifying growth disorders in children. Although this is a frequent task for radiologists, it is cumbersome. The objective of this study is to assess the bone age of children from newborn to 18 years old automatically through computer vision methods, including the histogram of oriented gradients (HOG), local binary pattern (LBP), and scale-invariant feature transform (SIFT). Here, 442 left-hand radiographs from the University of Southern California (USC) hand atlas are used. In this experiment, for the first time, HOG-LBP-dense-SIFT features with background subtraction are applied to assess the bone age of the subject group. For this purpose, features are extracted from the carpal and epiphyseal regions of interest (ROIs). An SVM with 5-fold cross-validation is used for classification. The accuracy for female radiographs is 73.88% and for male radiographs 68.63%. The mean absolute error is 0.5 years for both genders' radiographs. The accuracy within a 1-year range is 95.32% for female and 96.51% for male radiographs. The accuracy within a 2-year range is 100% and 99.41% for female and male radiographs, respectively. Cohen's kappa statistics (0.71 for female and 0.66 for male radiographs, p value < 0.05) reveal that the proposed approach is in substantial agreement with the bone age assessed by experienced radiologists on the USC dataset. This approach is robust and easy to implement, and is thus qualified for computer-aided diagnosis (CAD). The reduced processing time and number of ROIs facilitate BAA.
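The agreement statistic reported above, Cohen's kappa, compares observed agreement between two raters against the agreement expected by chance. A minimal sketch, not the study's code, with illustrative age-class labels rather than USC-atlas data:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same class at random
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative bone-age classes (years) from an automated method vs. a radiologist
kappa = cohens_kappa([5, 6, 6, 7, 8, 8], [5, 6, 7, 7, 8, 9])
```

Values of roughly 0.61-0.80, such as the 0.71 and 0.66 reported here, are conventionally read as "substantial" agreement.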


Subjects
Bone and Bones/diagnostic imaging , Computer-Assisted Diagnosis , Child , Female , Hand , Humans , Newborn , Male , Radiography , Support Vector Machine
19.
J Digit Imaging ; 33(1): 243-251, 2020 02.
Article in English | MEDLINE | ID: mdl-31172331

ABSTRACT

The volume of pelvic hematoma at CT has been shown to be the strongest independent predictor of major arterial injury requiring angioembolization in trauma victims with pelvic fractures, and also correlates with transfusion requirement and mortality. Measurement of pelvic hematomas (unopacified extraperitoneal blood accumulated from time of injury) using semi-automated seeded region growing is time-consuming and requires trained experts, precluding routine measurement at the point of care. Pelvic hematomas are markedly variable in shape and location, have irregular ill-defined margins, have low contrast with respect to viscera and muscle, and reside within anatomically distorted pelvises. Furthermore, pelvic hematomas occupy a small proportion of the entire volume of a chest, abdomen, and pelvis (C/A/P) trauma CT. The challenges are many, and no automated methods for segmentation and volumetric analysis have been described to date. Traditional approaches using fully convolutional networks result in coarse segmentations and class imbalance with suboptimal convergence. In this study, we implement a modified coarse-to-fine deep learning approach-the Recurrent Saliency Transformation Network (RSTN) for pelvic hematoma volume segmentation. RSTN previously yielded excellent results in pancreas segmentation, where low contrast with adjacent structures, small target volume, variable location, and fine contours are also problematic. We have curated a unique single-institution corpus of 253 C/A/P admission trauma CT studies in patients with bleeding pelvic fractures with manually labeled pelvic hematomas. We hypothesized that RSTN would result in sufficiently high Dice similarity coefficients to facilitate accurate and objective volumetric measurements for outcome prediction (arterial injury requiring angioembolization). Cases were separated into five combinations of training and test sets in an 80/20 split and fivefold cross-validation was performed. 
Dice scores in the test set were 0.71 (SD ± 0.10) using RSTN, compared to 0.49 (SD ± 0.16) using a baseline Deep Learning Tool Kit (DLTK) reference 3D U-Net architecture. Mean inference segmentation time for RSTN was 0.90 min (± 0.26). Pearson correlation between predicted and manual labels was 0.95 with p < 0.0001. Measurement bias was within 10 mL. AUC of hematoma volumes for predicting need for angioembolization was 0.81 (predicted) versus 0.80 (manual). Qualitatively, predicted labels closely followed hematoma contours and avoided muscle and displaced viscera. Further work will involve validation using a federated dataset and incorporation into a predictive model using multiple segmented features.
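The AUC comparison above asks how well hematoma volume separates patients who needed angioembolization from those who did not. By the Mann-Whitney formulation, the AUC is the probability that a randomly chosen positive case has a larger volume than a randomly chosen negative one. A minimal sketch, not the study's code, with illustrative volumes and outcomes:

```python
def auc(volumes: list, needs_embolization: list) -> float:
    """Pairwise (Mann-Whitney) AUC; ties count as half a win."""
    pos = [v for v, y in zip(volumes, needs_embolization) if y]
    neg = [v for v, y in zip(volumes, needs_embolization) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative hematoma volumes (mL) and angioembolization outcomes
score = auc([500, 200, 410, 150, 90, 220], [1, 1, 1, 0, 0, 0])
```

Because this rank-based AUC depends only on the ordering of volumes, a small systematic measurement bias (here within 10 mL) leaves it nearly unchanged, which is consistent with the predicted and manual AUCs of 0.81 and 0.80.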


Subjects
Deep Learning , Hematoma , Hematoma/diagnostic imaging , Humans , Pelvis/diagnostic imaging , X-Ray Computed Tomography
20.
J Digit Imaging ; 32(3): 408-416, 2019 06.
Article in English | MEDLINE | ID: mdl-30324429

ABSTRACT

Ultrasound (US) is a valuable imaging modality used to detect primary breast malignancy. However, radiologists have a limited ability to distinguish between benign and malignant lesions on US, leading to false-positive and false-negative results, which limit the positive predictive value of lesions sent for biopsy (PPV3) and specificity. A recent study demonstrated that incorporating an AI-based decision support (DS) system into US image analysis could help improve US diagnostic performance. While the DS system is promising, its impact must also be measured when it is integrated into existing clinical workflows. The current study evaluates workflow schemas for DS integration and their impact on diagnostic accuracy. The impact on two different reading methodologies, sequential and independent, was assessed. This study demonstrates significant accuracy differences between the two workflow schemas as measured by the area under the receiver operating curve (AUC), as well as differences in inter-operator variability as measured by Kendall's tau-b. Compared with previous studies, this evaluation has practical implications for the utilization of such technologies in diagnostic environments.
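The inter-operator variability statistic used above, Kendall's tau-b, compares concordant and discordant pairs of readings while correcting the denominator for ties in either reader's scores. A minimal sketch, not the study's code, with illustrative reader scores:

```python
import math
from itertools import combinations

def kendall_tau_b(x: list, y: list) -> float:
    """Kendall's tau-b between two paired score lists, with tie correction."""
    c = d = ties_x = ties_y = n0 = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        n0 += 1
        dx, dy = x1 - x2, y1 - y2
        if dx == 0:
            ties_x += 1
        if dy == 0:
            ties_y += 1
        if dx * dy > 0:
            c += 1            # concordant pair
        elif dx * dy < 0:
            d += 1            # discordant pair
    # Denominator excludes pairs tied in x (resp. y) from each factor
    return (c - d) / math.sqrt((n0 - ties_x) * (n0 - ties_y))

# Illustrative assessment ranks from two readers of the same lesions
tau = kendall_tau_b([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])
```

Tau-b ranges from -1 to 1; values near 1 indicate that two operators rank lesions almost identically, so a drop in tau-b under one workflow schema signals higher inter-operator variability.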


Subjects
Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Clinical Decision Support Systems , Computer-Assisted Diagnosis/methods , Breast Ultrasonography , Differential Diagnosis , Humans , Predictive Value of Tests , Software , Workflow