Results 1 - 5 of 5
1.
Comput Methods Biomech Biomed Engin ; 26(2): 160-173, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35297747

ABSTRACT

Early prediction of COVID-19 mortality can reduce the risk of death by alerting healthcare personnel, ensuring efficient resource allocation and treatment planning. This study introduces a machine learning framework for predicting COVID-19 mortality from demographics, vital signs, and laboratory blood tests (complete blood count (CBC), coagulation, kidney, liver, blood gas, and general). Forty-one features from 244 COVID-19 patients were recorded on the first day of admission. First, the features in each of the eight categories were investigated. Features with an area under the receiver operating characteristic curve (AUC) above 0.6 and a Wilcoxon rank-sum test p-value below 0.005 were retained for further analysis. Five feature reduction methods (forward feature selection, minimum redundancy maximum relevance, ReliefF, linear discriminant analysis, and neighborhood component analysis) were then used to select the best combination of features. Finally, seven classifiers (random forest (RF), support vector machine, logistic regression, K-nearest neighbors, artificial neural network, bagging, and boosting) were used to predict the mortality outcome of COVID-19 patients. The results revealed that features from the CBC, followed by the vital signs, yielded the highest mortality classification performance. Furthermore, the RF classifier with hierarchical feature selection via forward feature selection achieved the highest classification power, with an accuracy of 92.08 ± 2.56%. The proposed method can therefore serve as a valuable assistive prognostic tool to triage patients at high mortality risk.
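The screen-then-classify pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's implementation: the dataset, the informative features, and the forest settings here are all invented for the example; only the AUC > 0.6 and Wilcoxon p < 0.005 thresholds and the sample/feature counts come from the abstract.

```python
import numpy as np
from scipy.stats import ranksums
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 244, 41                      # patient and feature counts from the abstract
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)      # synthetic mortality labels
X[y == 1, :5] += 1.5                # make the first 5 features informative

# Screening: keep features with per-feature AUC > 0.6 and Wilcoxon p < 0.005
keep = []
for j in range(p):
    auc = roc_auc_score(y, X[:, j])
    auc = max(auc, 1 - auc)         # direction-invariant AUC
    pval = ranksums(X[y == 0, j], X[y == 1, j]).pvalue
    if auc > 0.6 and pval < 0.005:
        keep.append(j)

# Classify the screened features with a random forest
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X[:, keep], y, cv=5).mean()
print(len(keep), round(acc, 2))
```

On this synthetic data, the screen recovers the injected informative features and the forest classifies well above chance; the 92% figure in the abstract of course reflects the real clinical features, not this toy setup.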


Subjects
COVID-19 , Humans , COVID-19/diagnosis , Random Forest , Algorithms , Neural Networks, Computer , ROC Curve
2.
Int J Imaging Syst Technol ; 32(1): 102-110, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35464345

ABSTRACT

Severity assessment of the novel coronavirus disease (COVID-19) from chest computed tomography (CT) scans is crucial for administering the right therapeutic drugs and for monitoring disease progression. However, determining COVID-19 severity requires visual assessment by a highly experienced radiologist, which is time-consuming, tedious, and subjective. This article introduces a machine learning tool that grades the severity of COVID-19 as mild, moderate, or severe from lung CT images. We used a set of quantitative first- and second-order statistical texture features from each image. The first-order texture features, extracted from the image histogram, are variance, skewness, and kurtosis. The second-order texture feature extraction methods are the gray-level co-occurrence matrix, gray-level run-length matrix, and gray-level size-zone matrix. Finally, using the extracted features, the CT images of each patient are assigned to one of four classes by a random forest (RF), an ensemble method based on majority voting over the outputs of its decision trees. We used a dataset of CT scans labeled by expert radiologists as normal (231), mild (563), moderate (120), or severe (42). The experimental results indicate that the combination of all feature extraction methods with RF achieves the highest result among the compared strategies, detecting the four severity classes of COVID-19 from CT images with an accuracy of 90.95%. The proposed system can serve as an assistive diagnostic tool for quantifying lung involvement in COVID-19 and monitoring disease progression.
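The first- and second-order texture features named above can be sketched from scratch as below. This is a generic illustration, not the paper's feature extractor: the co-occurrence matrix here uses a single horizontal offset and three common GLCM statistics for brevity, the grey-level count is an arbitrary choice, and the run-length and size-zone matrices are not reproduced; the input is a random stand-in for a CT slice.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def quantize(img, levels=8):
    """Requantize grey values into `levels` bins for the co-occurrence count."""
    img = img.astype(float)
    span = img.max() - img.min() + 1e-12
    return np.minimum(((img - img.min()) / span * levels).astype(int), levels - 1)

def glcm(q, levels=8):
    """Symmetric, normalised co-occurrence matrix of horizontal neighbour pairs."""
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1
    return m / m.sum()

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))      # stand-in for a CT slice

# First-order features from the image histogram
first_order = {"variance": img.astype(float).var(),
               "skewness": skew(img.ravel()),
               "kurtosis": kurtosis(img.ravel())}

# Second-order features from the grey-level co-occurrence matrix
p = glcm(quantize(img))
i, j = np.indices(p.shape)
second_order = {"contrast": ((i - j) ** 2 * p).sum(),
                "energy": (p ** 2).sum(),
                "homogeneity": (p / (1 + np.abs(i - j))).sum()}

features = {**first_order, **second_order}
print(sorted(features))
```

Each image then contributes one such feature vector, which is what the random forest votes on in the pipeline above.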

3.
J Med Signals Sens ; 12(1): 25-31, 2022.
Article in English | MEDLINE | ID: mdl-35265462

ABSTRACT

Background: Image fusion combines the information of several different images into a single image. In this paper, we present a deep learning approach for fusing magnetic resonance imaging (MRI) and positron emission tomography (PET) images. Methods: We fused MRI and PET images automatically with a pretrained convolutional neural network (CNN, VGG19). First, the PET image was converted from red-green-blue space to hue-saturation-intensity space to preserve the hue and saturation information. We then extracted features from the images using the pretrained CNN and used the weights derived from the two images to construct the fused image, formed as a weighted combination of the inputs. To counter the resulting loss of contrast, we added a constant coefficient of the original image to the final result. Finally, quantitative criteria (entropy, mutual information, discrepancy, and overall performance [OP]) were applied to evaluate the fusion results, and we compared our method with the most widely used methods in the spatial and transform domains. Results: The entropy, mutual information, discrepancy, and OP values were 3.0319, 2.3993, 3.8187, and 0.9899, respectively. By these quantitative assessments, our method was the best and simplest way to fuse the images, especially in the spatial domain. Conclusion: We conclude that the proposed method produces a more accurate MRI-PET image fusion.
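The weighted-combination step with contrast compensation can be sketched in a few lines of NumPy. This is only the blending arithmetic under toy assumptions: the weight maps here are simple squared-intensity saliencies and the compensation coefficient is an invented value, whereas the paper derives its weights from pretrained VGG19 features; the inputs are random stand-ins for registered MRI and PET-intensity slices.

```python
import numpy as np

rng = np.random.default_rng(2)
mri = rng.random((32, 32))            # stand-in for a registered MRI slice
pet_i = rng.random((32, 32))          # stand-in for the PET intensity channel

# Toy pixel-wise saliency weights (the paper derives these from CNN features)
w_mri, w_pet = mri ** 2, pet_i ** 2
s = w_mri + w_pet + 1e-12
w_mri, w_pet = w_mri / s, w_pet / s   # normalise so the weights sum to 1 per pixel

c = 0.2                               # contrast-compensation coefficient (assumed)
fused = w_mri * mri + w_pet * pet_i + c * mri
fused = np.clip(fused / fused.max(), 0.0, 1.0)
print(fused.shape)
```

The fused intensity channel would then be recombined with the preserved PET hue and saturation channels before converting back to RGB.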

4.
Radiol Phys Technol ; 11(4): 382-391, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30196479

ABSTRACT

In the forward intensity-modulated radiotherapy (FIMRT) technique of treatment planning, the isocenter cannot fulfill the requirements of the International Commission on Radiation Units and Measurements (ICRU) reference point. This study aimed to propose dose prescription points for breast and head/neck cancer patients treated with the FIMRT technique. Two hundred patients with head/neck (n = 100) and breast (n = 100) cancers were selected, and treatment plans were generated using the FIMRT technique. The suggested reference points (SRPs) for dose prescription were placed at selected locations in the tangential site and the lateral neck site in breast and head/neck cancer patients, respectively. Next, the doses at the SRPs (DSRPs) were compared with the planning target volume (PTV) equivalent uniform dose (EUD) and the mean dose in the PTV (Dmean) extracted from dose-volume histograms. The differences between DSRP and Dmean for the PTVs were less than 2% in 78 cases and less than 1% in 28 cases. The DSRPs were comparable with the EUD in both types of PTVs; moreover, paired t-test analyses indicated that the SRP doses and PTV EUD were statistically similar (p > 0.05). The discrepancy between DSRP and EUD was less than 2% in 116 cases and less than 1% in 74 cases. The SRPs satisfy the ICRU requirement that the reference point dose should be clinically relevant and representative of the dose throughout the PTV. A consensus on the definition of the reference point is expected to lead to accurate inter-comparison between results from different treatment centers, facilitating the optimization of treatment procedures.
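The PTV equivalent uniform dose that the SRP doses are compared against is conventionally computed from DVH data with Niemierko's generalised EUD, a volume-weighted power mean of the voxel doses. The abstract does not give the formula or the parameter value used, so the sketch below is the standard definition with invented example numbers; the exponent `a` is tissue-dependent (typically negative for targets).

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalised EUD: (sum_i v_i * D_i^a)^(1/a), with v_i fractional volumes."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                              # normalise to fractional volumes
    d = np.asarray(doses, dtype=float)
    return float((v * d ** a).sum() ** (1.0 / a))

# Sanity check: a uniform dose distribution must return that dose
print(geud([50.0, 50.0, 50.0], [1, 2, 1], a=-10))
# For a = 1 the gEUD reduces to the volume-weighted mean dose (Dmean)
print(geud([60.0, 40.0], [1, 1], a=1))
```

This is the sense in which DSRP can be "comparable with the EUD": a single point dose that tracks this volume-weighted summary of the whole PTV.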


Subjects
Breast Neoplasms/radiotherapy , Head and Neck Neoplasms/radiotherapy , Radiation Dosage , Radiotherapy, Intensity-Modulated/methods , Female , Humans , Male , Middle Aged , Radiotherapy Dosage
5.
J Med Eng Technol ; 38(4): 211-9, 2014 May.
Article in English | MEDLINE | ID: mdl-24758393

ABSTRACT

Image fusion integrates information from several images into one. By their nature, medical images are divided into structural (such as CT and MRI) and functional (such as SPECT and PET) modalities. This article fuses MRI and PET images, the purpose being to add the structural information of MRI to the functional information of PET. The images were decomposed with the nonsubsampled contourlet transform, and the two images were then fused by applying fusion rules: coefficients of the low-frequency band were combined by a maximal-energy rule, and coefficients of the high-frequency bands by a maximal-variance rule. Finally, visual and quantitative criteria were used to evaluate the fusion result. For the visual evaluation, the opinions of two radiologists were used; for the quantitative evaluation, the proposed fusion method was compared with six existing methods using entropy, mutual information, discrepancy, and overall performance as criteria.
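The two coefficient-selection rules named above can be sketched generically: for each position, take the coefficient from whichever source band wins on a local statistic, maximal energy for the low-frequency band and maximal variance for the high-frequency bands. This sketch applies the rules to random stand-in bands; the actual nonsubsampled contourlet decomposition and reconstruction are not reproduced here, and the 3x3 window size is an assumption.

```python
import numpy as np

def local_stat(band, fn, k=3):
    """Evaluate fn over each k x k neighbourhood (edges handled by padding)."""
    pad = k // 2
    padded = np.pad(band, pad, mode="edge")
    out = np.empty(band.shape, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = fn(padded[i:i + k, j:j + k])
    return out

def fuse(band_a, band_b, fn):
    """Pick each coefficient from whichever source wins on the local statistic."""
    pick_a = local_stat(band_a, fn) >= local_stat(band_b, fn)
    return np.where(pick_a, band_a, band_b)

rng = np.random.default_rng(3)
low_a, low_b = rng.random((16, 16)), rng.random((16, 16))       # low-frequency bands
hi_a, hi_b = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))  # high-frequency bands

fused_low = fuse(low_a, low_b, lambda w: np.sum(w ** 2))   # maximal-energy rule
fused_high = fuse(hi_a, hi_b, np.var)                      # maximal-variance rule
print(fused_low.shape, fused_high.shape)
```

In the full method, the fused low- and high-frequency bands would be fed to the inverse transform to reconstruct the fused image.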


Subjects
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Positron-Emission Tomography , Algorithms , Humans