Results 1 - 13 of 13
1.
Sci Rep ; 14(1): 851, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191606

ABSTRACT

The proposed AI-based diagnostic system aims to predict the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support provided to the patients. Computed tomography (CT) imaging will be used to analyze the three levels of respiratory support received by the patient: Level 0 (minimum support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system will begin by segmenting the COVID-19 lesions from the CT images and creating an appearance model for each lesion using a 2D, rotation-invariant, Markov-Gibbs random field (MGRF) model. Three MGRF-based models will be created, one for each level of respiratory support, allowing the system to differentiate between different levels of severity in COVID-19 patients. The system will make a decision for each patient using a neural network-based fusion system that combines the Gibbs energy estimates from the three MGRF-based models. The proposed system was assessed using 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of prediction accuracy.
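Below is a minimal, hypothetical sketch of the fusion step described above: a small neural network takes the three Gibbs-energy estimates (one per MGRF appearance model) and predicts the respiratory-support level. The feature values, labels, network size, and cross-validation setup are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: fuse per-level Gibbs-energy estimates with a small neural network.
# The energies, labels, and network size are illustrative assumptions, not the
# authors' actual configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each patient is described by three Gibbs energies, one per MGRF appearance
# model (Level 0, Level 1, Level 2 respiratory support).
X = rng.normal(size=(307, 3))            # placeholder energy features
y = rng.integers(0, 3, size=307)         # placeholder support-level labels

fusion_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(fusion_net, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```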


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Neural Networks, Computer , Oxygen , Patients
2.
Cancers (Basel) ; 15(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37958390

ABSTRACT

Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.

3.
Sci Rep ; 13(1): 166, 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36599906

ABSTRACT

Counting the number of triangles in a graph is a major task in many large-scale graph analytics problems, such as computing the clustering coefficient, transitivity ratio, and trusses. In recent years, MapReduce has become one of the most popular and powerful frameworks for analyzing large-scale graphs on clusters of machines. In this paper, we propose two new MapReduce algorithms based on graph partitioning. The two algorithms avoid the duplicate triangle counting that other algorithms suffer from. The experimental results show the high efficiency of the two algorithms in comparison with an existing algorithm, outperforming it in execution time, especially on very large-scale graphs.
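As a rough illustration of the duplicate-free triangle counting the abstract describes, the toy sketch below mimics a map/reduce decomposition in a single Python process. Counting each triangle only at its lowest-numbered vertex is an assumption used here to avoid duplicates; it is not necessarily the partitioning scheme proposed in the paper.

```python
# Toy, single-process sketch of duplicate-free triangle counting in the spirit of
# a map/reduce decomposition. The "count each triangle only at its lowest-id
# vertex" rule is an assumption, not necessarily the paper's partitioning scheme.
from collections import defaultdict
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]   # tiny example graph

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# "Map" stage: every vertex u emits candidate wedges (u, v, w) with u < v < w.
candidates = []
for u in adj:
    higher = sorted(n for n in adj[u] if n > u)
    for v, w in combinations(higher, 2):
        candidates.append((u, v, w))

# "Reduce" stage: a wedge closes into a triangle iff the edge (v, w) exists.
triangles = sum(1 for (u, v, w) in candidates if w in adj[v])
print("triangles:", triangles)   # -> 2 for the example graph
```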

4.
Sensors (Basel) ; 22(20)2022 Oct 15.
Article in English | MEDLINE | ID: mdl-36298186

ABSTRACT

Diabetic retinopathy (DR) is a major health problem that can lead to vision loss if not treated early. In this study, a three-step system for DR detection utilizing optical coherence tomography (OCT) is presented. First, the proposed system segments the retinal layers from the input OCT images. Second, 3D features are extracted from each retinal layer, including the first-order reflectivity and the 3D thickness of the individual OCT layers. Finally, backpropagation neural networks are used to classify the OCT images. Experimental studies on 188 cases confirm the advantages of the proposed system over related methods, achieving an accuracy of 96.81% using leave-one-subject-out (LOSO) cross-validation. These outcomes show the potential of the suggested method for DR detection using OCT images.
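A minimal sketch of the evaluation protocol mentioned above: a backpropagation neural network scored with leave-one-subject-out (LOSO) cross-validation over per-layer OCT features. The feature dimensionality, labels, and network size are placeholders, not the study's actual data or configuration.

```python
# Minimal sketch of LOSO evaluation of a backpropagation neural network on
# per-layer OCT features (reflectivity and thickness). All values are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_subjects, n_features = 188, 24          # e.g., 12 layers x (reflectivity, thickness)
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)   # 0 = normal, 1 = DR (placeholder labels)
groups = np.arange(n_subjects)            # one sample per subject -> LOSO

correct = 0
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    correct += (clf.predict(X[test_idx]) == y[test_idx]).sum()

print(f"LOSO accuracy: {correct / n_subjects:.3f}")
```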


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Neural Networks, Computer
5.
Bioengineering (Basel) ; 9(10)2022 Sep 22.
Article in English | MEDLINE | ID: mdl-36290461

ABSTRACT

Lung cancer is among the leading causes of cancer-related mortality worldwide. This article is a comprehensive review of current knowledge regarding screening, subtyping, imaging, staging, and management of treatment response for lung cancer. The traditional imaging modality for screening and initial lung cancer diagnosis is computed tomography (CT). Recently, dual-energy CT has been shown to enhance the characterization of various pulmonary lesions. The National Comprehensive Cancer Network (NCCN) recommends the use of fluorodeoxyglucose positron emission tomography (FDG PET) in concert with CT to properly stage lung cancer and to prevent fruitless thoracotomies. Diffusion MR is an alternative to FDG PET/CT that is radiation-free and has comparable diagnostic performance. For response evaluation after treatment, FDG PET/CT is a potent modality that predicts survival better than CT. Updated knowledge of lung cancer genomic abnormalities and treatment regimens helps radiologists improve their skills. Incorporating radiologic expertise is crucial for precise diagnosis, therapy planning, and surveillance of lung cancer.

6.
Sensors (Basel) ; 22(9)2022 May 04.
Article in English | MEDLINE | ID: mdl-35591182

ABSTRACT

Diabetic retinopathy (DR) is a devastating condition caused by progressive changes in the retinal microvasculature. It is a leading cause of retinal blindness in people with diabetes. Long periods of uncontrolled blood sugar levels result in endothelial damage, leading to macular edema, altered retinal permeability, retinal ischemia, and neovascularization. To facilitate rapid screening, diagnosis, and grading of DR, different retinal imaging modalities are utilized. Typically, a computer-aided diagnostic (CAD) system uses retinal images to aid ophthalmologists in the diagnosis process. These CAD systems use a combination of machine learning (ML) models (e.g., deep learning (DL) approaches) to speed up the diagnosis and grading of DR. This survey therefore provides a comprehensive overview of the different imaging modalities used with ML/DL approaches in the DR diagnosis process. The four imaging modalities we focus on are fluorescein angiography, fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We also discuss limitations of the literature that utilizes these modalities for DR diagnosis, identify research gaps, and suggest solutions for researchers to pursue. Lastly, we provide a thorough discussion of the challenges and future directions of current state-of-the-art DL/ML approaches, and we elaborate on how integrating different imaging modalities with clinical information and demographic data can lead to promising results when diagnosing and grading DR. The article's comparative analysis and discussion indicate that DL methods remain preferable to existing ML models for detecting DR across multiple modalities.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Diabetic Retinopathy/diagnostic imaging , Fluorescein Angiography/adverse effects , Humans , Retina/diagnostic imaging , Tomography, Optical Coherence/methods
7.
Cancers (Basel) ; 14(7)2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35406614

ABSTRACT

Pulmonary nodules are precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and to provide more applicable and easily used tools. In this review, we aim to briefly discuss the current applications of AI in lung segmentation, pulmonary nodule detection, and classification.

8.
Diagnostics (Basel) ; 12(3)2022 Mar 12.
Article in English | MEDLINE | ID: mdl-35328249

ABSTRACT

Early grading of coronavirus disease 2019 (COVID-19), as well as ventilator support machines, are prime ways to help the world fight this virus and reduce the mortality rate. To reduce the burden on physicians, we developed an automatic Computer-Aided Diagnostic (CAD) system to grade COVID-19 from Computed Tomography (CT) images. This system segments the lung region from chest CT scans using an unsupervised approach based on an appearance model, followed by 3D rotation-invariant Markov-Gibbs Random Field (MGRF)-based morphological constraints. The system then analyzes the segmented lung and generates precise, analytical imaging markers by estimating the MGRF-based analytical potentials. Three Gibbs energy markers were extracted from each CT scan by tuning the MGRF parameters on each lesion class (healthy/mild, moderate, and severe) separately. To represent these markers more reliably, a Cumulative Distribution Function (CDF) was generated, and statistical markers were extracted from it, namely the 10th through 90th CDF percentiles in 10% increments. Subsequently, the three extracted markers were combined and fed into a backpropagation neural network to make the diagnosis. The developed system was assessed on 76 COVID-19-infected patients using two metrics, namely accuracy and kappa. In this paper, the proposed system was trained and tested using three approaches. In the first approach, the MGRF model was trained and tested on the lungs; this approach achieved 95.83% accuracy and 93.39% kappa. In the second approach, we trained the MGRF model on the lesions and tested it on the lungs; this approach achieved 91.67% accuracy and 86.67% kappa. Finally, we trained and tested the MGRF model on the lesions, achieving 100% accuracy and 100% kappa. The results reported in this paper show the ability of the developed system to accurately grade COVID-19 lesions compared to other machine learning classifiers, such as k-Nearest Neighbor (KNN), decision tree, naïve Bayes, and random forest.
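A minimal sketch of the CDF-percentile descriptor described above, assuming the Gibbs energies over a lesion have already been estimated: the empirical distribution is summarized at the 10th through 90th percentiles in 10% increments. The synthetic energy values are placeholders.

```python
# Minimal sketch of the CDF-percentile marker: summarize the empirical distribution
# of Gibbs energies over a lesion at the 10th-90th percentiles in 10% increments.
# The synthetic energies below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
lesion_energies = rng.gamma(shape=2.0, scale=1.0, size=5000)  # placeholder energy map

percentiles = np.arange(10, 100, 10)                  # 10th .. 90th
cdf_marker = np.percentile(lesion_energies, percentiles)

print(dict(zip(percentiles.tolist(), np.round(cdf_marker, 3).tolist())))
```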

9.
Diagnostics (Basel) ; 12(2)2022 Feb 11.
Article in English | MEDLINE | ID: mdl-35204552

ABSTRACT

Early diagnosis of diabetic retinopathy (DR) is of critical importance to prevent severe damage to the retina and/or vision loss. In this study, an optical coherence tomography (OCT)-based computer-aided diagnosis (CAD) method is proposed to detect DR early using structural 3D retinal scans. The system uses prior shape knowledge to automatically segment all retinal layers of the 3D-OCT scans using an adaptive, appearance-based method. After the segmentation step, novel texture features are extracted from the segmented layers of the OCT B-scan volume for DR diagnosis. For every layer, a Markov-Gibbs random field (MGRF) model is used to extract the 2nd-order reflectivity. Cumulative distribution function (CDF) descriptors are employed to represent the extracted image-derived features. For layer-wise classification in the 3D volume, an artificial neural network (ANN) is fed the extracted Gibbs energy feature for every layer. Finally, the classification outputs for all twelve layers are fused using a majority voting scheme for a global subject diagnosis. A cohort of 188 3D-OCT subjects is used for system evaluation using different k-fold validation techniques and different validation metrics. Accuracies of 90.56%, 93.11%, and 96.88% are achieved using 4-, 5-, and 10-fold cross-validation, respectively. Additional comparison with deep learning networks, which represent the state of the art, documented the promise of our system's ability to diagnose DR early.
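A minimal sketch of the final fusion step: each of the twelve retinal layers contributes one per-layer ANN decision, and the subject-level diagnosis is taken by majority vote. The vote vector below is illustrative only.

```python
# Minimal sketch of majority-voting fusion across per-layer classifier outputs:
# each of the 12 retinal layers contributes one binary decision, and the
# subject-level diagnosis is the majority label. The votes are illustrative.
import numpy as np

def majority_vote(layer_decisions):
    """Return the subject-level label with the most per-layer votes."""
    votes = np.bincount(np.asarray(layer_decisions), minlength=2)
    return int(np.argmax(votes))

layer_decisions = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1]    # 12 per-layer ANN outputs
print("subject diagnosis:", majority_vote(layer_decisions))  # -> 1 (DR)
```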

10.
Med Phys ; 49(2): 988-999, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34890061

ABSTRACT

PURPOSE: To assess whether the integration of (a) functional imaging features extracted from diffusion-weighted imaging (DWI) and (b) shape, texture, and volumetric features extracted from T2-weighted magnetic resonance imaging (MRI) can noninvasively improve the diagnostic accuracy of thyroid nodule classification. PATIENTS AND METHODS: In a retrospective study of 55 patients with pathologically proven thyroid nodules, T2-weighted and diffusion-weighted MRI scans of the thyroid gland were acquired. Spatial maps of the apparent diffusion coefficient (ADC) were reconstructed in all cases. To quantify the nodules' morphology, we used spherical harmonics as a new parametric shape descriptor to describe the complexity of the thyroid nodules, in addition to traditional volumetric descriptors (e.g., tumor volume and cuboidal volume). To capture the inhomogeneity of the texture of the thyroid nodules, we used histogram-based statistics (e.g., kurtosis, entropy, skewness) of the T2-weighted signal. To achieve the main goal of this paper, a fusion system using an artificial neural network (NN) is proposed to integrate the functional imaging features (ADC) with the structural morphology and texture features. This framework was tested on 55 patients (20 patients with malignant nodules and 35 patients with benign nodules) using leave-one-subject-out (LOSO) training/testing validation. RESULTS: The functional, morphological, and texture imaging features were estimated for the 55 patients. The accuracy of the computer-aided diagnosis (CAD) system steadily improved as the proposed imaging features were integrated. The fusion system combining all biomarkers achieved a sensitivity, specificity, positive predictive value, negative predictive value, F1-score, and accuracy of 92.9% (confidence interval [CI]: 78.9%-99.5%), 95.8% (CI: 87.4%-99.7%), 93% (CI: 80.7%-99.5%), 96% (CI: 88.8%-99.7%), 92.8% (CI: 83.5%-98.5%), and 95.5% (CI: 88.8%-99.2%), respectively, using the LOSO cross-validation approach. CONCLUSION: The results demonstrated in this paper show the promise that integrating functional features with morphology and texture features using current state-of-the-art machine learning approaches will be extremely useful for identifying thyroid nodules and diagnosing their malignancy.
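A minimal sketch of the histogram-based texture statistics mentioned above (kurtosis, skewness, entropy) computed over T2-weighted intensities inside a nodule mask. The synthetic intensities and bin count are assumptions for illustration only.

```python
# Minimal sketch of histogram-based texture statistics (kurtosis, skewness,
# entropy) from T2-weighted signal intensities inside a nodule mask.
# The synthetic intensities are placeholders.
import numpy as np
from scipy.stats import kurtosis, skew, entropy

rng = np.random.default_rng(0)
nodule_intensities = rng.normal(loc=300.0, scale=40.0, size=2000)  # placeholder voxels

hist, _ = np.histogram(nodule_intensities, bins=64, density=True)
texture_features = {
    "kurtosis": kurtosis(nodule_intensities),
    "skewness": skew(nodule_intensities),
    "entropy": entropy(hist + 1e-12),   # Shannon entropy of the normalized histogram
}
print(texture_features)
```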


Subject(s)
Thyroid Nodule , Diffusion Magnetic Resonance Imaging , Humans , Machine Learning , Magnetic Resonance Imaging , Retrospective Studies , Thyroid Nodule/diagnostic imaging
11.
Sensors (Basel) ; 21(16)2021 Aug 14.
Article in English | MEDLINE | ID: mdl-34450923

ABSTRACT

A new segmentation technique is introduced for delineating the lung region in 3D computed tomography (CT) images. To accurately model the distribution of Hounsfield unit values within both the chest and lung regions, a new probabilistic model is developed that depends on a linear combination of Gaussians (LCG). Moreover, we modified the conventional expectation-maximization (EM) algorithm to run sequentially, estimating both the dominant Gaussian components (one for the lung region and one for the chest region) and the subdominant Gaussian components, which are used to refine the final estimated joint density. To estimate the marginal densities from the mixed density, a modified k-means clustering approach is employed to classify the subdominant Gaussian components, determining which components belong to the lung and which belong to the chest. The initial LCG-based segmentation is then refined by imposing 3D morphological constraints based on a 3D Markov-Gibbs random field (MGRF) with analytically estimated potentials. The proposed approach was tested on CT data from 32 coronavirus disease 2019 (COVID-19) patients. Segmentation quality was quantitatively evaluated using four metrics: Dice similarity coefficient (DSC), overlap coefficient, 95th-percentile bidirectional Hausdorff distance (BHD), and absolute lung volume difference (ALVD), achieving 95.67±1.83%, 91.76±3.29%, 4.86±5.01, and 2.93±2.39, respectively. The reported results show the capability of the proposed approach to accurately segment both healthy lung tissues and pathological lung tissues caused by COVID-19, outperforming four current, state-of-the-art deep learning-based lung segmentation approaches.
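For illustration, the sketch below models a Hounsfield-unit histogram as a two-component Gaussian mixture and labels the darker mode as lung. It uses scikit-learn's standard EM rather than the modified sequential EM and k-means refinement described in the abstract, and all intensities are synthetic placeholders.

```python
# Illustrative sketch: model Hounsfield-unit values as a two-component Gaussian
# mixture and separate lung from chest by component assignment. This uses
# scikit-learn's standard EM, not the paper's modified sequential EM, and the
# synthetic intensities are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
lung_hu = rng.normal(-750, 80, size=4000)   # placeholder lung voxels
chest_hu = rng.normal(30, 60, size=6000)    # placeholder soft-tissue voxels
voxels = np.concatenate([lung_hu, chest_hu]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
lung_component = int(np.argmin(gmm.means_.ravel()))   # lung is the darker mode
lung_mask = gmm.predict(voxels) == lung_component

print("estimated means (HU):", np.round(gmm.means_.ravel(), 1))
print("fraction labelled lung:", round(lung_mask.mean(), 3))
```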


Subject(s)
COVID-19 , Algorithms , Humans , Image Processing, Computer-Assisted , Lung/diagnostic imaging , SARS-CoV-2 , Tomography, X-Ray Computed
12.
Sci Rep ; 11(1): 12095, 2021 06 08.
Article in English | MEDLINE | ID: mdl-34103587

ABSTRACT

The primary goal of this manuscript is to develop a computer-assisted diagnostic (CAD) system to assess pulmonary function and risk of mortality in patients with coronavirus disease 2019 (COVID-19). The CAD system processes chest X-ray data and provides accurate, objective imaging markers to assist in identifying patients at higher risk of death, who are thus more likely to require mechanical ventilation and/or more intensive clinical care. To obtain an accurate stochastic model able to detect the severity of lung infection, we develop a second-order Markov-Gibbs random field (MGRF) model that is invariant under rigid transformation (translation or rotation of the image) as well as scale (i.e., pixel size). The parameters of the MGRF model are learned automatically, given a training set of X-ray images with affected lung regions labeled. An X-ray input to the system undergoes pre-processing to correct for non-uniformity of illumination and to delimit the boundary of the lung, using either a fully automated segmentation routine or manual delineation provided by the radiologist, prior to the diagnosis. The steps of the proposed methodology are: (i) estimate the Gibbs energy at several different radii to describe the inhomogeneity in lung infection; (ii) compute the cumulative distribution function (CDF) as a new representation of the local inhomogeneity in the infected region of the lung; and (iii) input the CDFs to a new neural network-based fusion system to determine whether the severity of lung infection is low or high. This approach was tested on 200 clinical X-rays from 200 COVID-19-positive patients, 100 of whom died and 100 of whom recovered, using multiple training/testing processes, including leave-one-subject-out (LOSO), tenfold, fourfold, and twofold cross-validation tests. The Gibbs energy for lung pathology was estimated at three concentric rings of increasing radii. The accuracy and Dice similarity coefficient (DSC) of the system steadily improved as the radius increased. The overall CAD system combined the estimated Gibbs energy information from all radii and achieved a sensitivity, specificity, accuracy, and DSC of 100%, 97% ± 3%, 98% ± 2%, and 98% ± 2%, respectively, by twofold cross-validation. Alternative classification algorithms, including support vector machine, random forest, naive Bayes classifier, k-nearest neighbors, and decision trees, all produced inferior results compared to the proposed neural network used in this CAD system. The experiments demonstrate the feasibility of the proposed system as a novel tool to objectively assess disease severity and predict mortality in COVID-19 patients. The proposed tool can assist physicians in determining which patients might require more intensive clinical care, such as mechanical respiratory support.
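A minimal sketch of the classifier comparison reported above: the same feature vectors are scored with a neural network and the listed baseline classifiers under twofold cross-validation. The feature layout (nine CDF points per radius, three radii) and the labels are assumptions used only to make the example runnable.

```python
# Minimal sketch of the classifier comparison under twofold cross-validation.
# Features and labels are synthetic placeholders; the feature layout is assumed.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 27))        # e.g., 9 CDF points x 3 radii (assumed layout)
y = rng.integers(0, 2, size=200)      # 0 = recovered, 1 = died (placeholder labels)

classifiers = {
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=2).mean()
    print(f"{name:15s} twofold CV accuracy: {acc:.3f}")
```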


Subject(s)
COVID-19/diagnostic imaging , COVID-19/physiopathology , Lung/diagnostic imaging , Lung/physiopathology , Radiography, Thoracic , Tomography, X-Ray Computed , Adult , Aged , Deep Learning , Female , Humans , Image Processing, Computer-Assisted , Male , Middle Aged , Stochastic Processes
13.
Am J Ophthalmol ; 216: 201-206, 2020 08.
Article in English | MEDLINE | ID: mdl-31982407

ABSTRACT

PURPOSE: To determine if combining clinical, demographic, and imaging data improves automated diagnosis of nonproliferative diabetic retinopathy (NPDR). DESIGN: Cross-sectional imaging and machine learning study. METHODS: This was a retrospective study performed at a single academic medical center in the United States. Inclusion criteria were age >18 years and a diagnosis of diabetes mellitus (DM). Exclusion criteria were non-DR retinal disease and inability to image the macula. Optical coherence tomography (OCT) and OCT angiography (OCTA) were performed, and data on age, sex, hypertension, hyperlipidemia, and hemoglobin A1c were collected. Machine learning techniques were then applied. Multiple pathophysiologically important features were automatically extracted from each layer on OCT and each OCTA plexus and combined with clinical data in a random forest classifier to develop the system, whose results were compared to the clinical grading of NPDR, the gold standard. RESULTS: A total of 111 patients with DM II were included in the study, 36 with DM without DR, 53 with mild NPDR, and 22 with moderate NPDR. When OCT images alone were analyzed by the system, accuracy of diagnosis was 76%, sensitivity 85%, specificity 87%, and area under the curve (AUC) was 0.78. When OCT and OCTA data together were analyzed, accuracy was 92%, sensitivity 95%, specificity 98%, and AUC 0.92. When all data modalities were combined, the system achieved an accuracy of 96%, sensitivity 100%, specificity 94%, and AUC 0.96. CONCLUSIONS: Combining common clinical data points with OCT and OCTA data enhances the power of computer-aided diagnosis of NPDR.
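A minimal sketch of the fusion idea: OCT-derived, OCTA-derived, and clinical/demographic feature blocks are concatenated and fed to a random forest, with cross-validated AUC reported. All features and labels are synthetic placeholders, and the problem is reduced to a binary split for simplicity (the study graded no DR, mild, and moderate NPDR).

```python
# Minimal sketch: concatenate OCT, OCTA, and clinical feature blocks and feed
# them to a random forest, reporting cross-validated AUC. All feature blocks and
# labels are synthetic placeholders; binary labels are a simplification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 111
oct_features = rng.normal(size=(n_patients, 20))    # per-layer OCT markers (placeholder)
octa_features = rng.normal(size=(n_patients, 10))   # per-plexus OCTA markers (placeholder)
clinical = rng.normal(size=(n_patients, 5))         # age, sex, HTN, HLD, HbA1c (placeholder)

X = np.hstack([oct_features, octa_features, clinical])
y = rng.integers(0, 2, size=n_patients)             # 0 = no DR, 1 = NPDR (placeholder)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```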


Subject(s)
Biomarkers/metabolism , Diabetic Retinopathy/diagnosis , Diagnosis, Computer-Assisted , Fluorescein Angiography , Tomography, Optical Coherence , Adult , Aged , Aged, 80 and over , Area Under Curve , Cross-Sectional Studies , Diabetic Retinopathy/metabolism , Female , Humans , Machine Learning , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Young Adult