Results 1 - 2 of 2
1.
J Pers Med; 12(4), 2022 Apr 11.
Article in English | MEDLINE | ID: mdl-35455730

ABSTRACT

Oral cavity cancer (OCC) is associated with high morbidity and mortality rates when diagnosed at late stages. Early detection of increased risk provides an opportunity to implement prevention strategies targeting modifiable risk factors and screening to promote early detection and intervention. Historical evidence has identified a gap in the training of primary care providers (PCPs) in examining the oral cavity. The absence of clinically applicable analytical tools to identify patients with high-risk OCC phenotypes at the point of care (POC) leads to missed opportunities for implementing patient-specific interventional strategies. This study developed a prototype OCC risk assessment tool by applying machine learning (ML) approaches to a rich, retrospectively collected data set abstracted from a clinical enterprise data warehouse. We compared the performance of six ML classifiers using 10-fold cross-validation. The accuracy, recall, precision, specificity, area under the receiver operating characteristic curve, and area under the precision-recall curve of the derived voting algorithm were 78%, 64%, 88%, 92%, 0.83, and 0.81, respectively. The performance of two classifiers, the multilayer perceptron and AdaBoost, closely mirrored that of the voting algorithm. Integrating this OCC risk assessment tool into an electronic health record as a clinical decision support tool could assist PCPs in targeting at-risk patients for personalized interventional care.
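
The abstract names only two of the six classifiers and does not describe the voting scheme or features; below is a minimal sketch of the evaluation setup, assuming scikit-learn, soft voting, synthetic stand-in data, and an illustrative logistic-regression third member. Everything beyond the MLP, AdaBoost, and 10-fold cross-validation is an assumption, not the authors' implementation.

# Minimal sketch: a voting ensemble scored with 10-fold cross-validation.
# Only the MLP and AdaBoost members are named in the abstract; the third
# member, the soft-voting scheme, the synthetic data, and all
# hyperparameters here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the retrospective EHR-derived feature set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

voter = VotingClassifier(
    estimators=[
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=1000, random_state=0))),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
    ],
    voting="soft",  # average the members' predicted probabilities
)

# Score the ensemble on the metrics reported in the abstract (specificity
# has no built-in scorer; it can be added as recall with pos_label=0).
scores = cross_validate(
    voter, X, y, cv=10,
    scoring=["accuracy", "recall", "precision", "roc_auc", "average_precision"],
)
for name, values in scores.items():
    if name.startswith("test_"):
        print(f"{name[5:]}: {values.mean():.2f}")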

2.
J Imaging; 5(1), 2018 Dec 30.
Article in English | MEDLINE | ID: mdl-34470183

ABSTRACT

Multi-modal image registration is the first step in integrating information stored in two or more images captured with different imaging modalities. Beyond intensity variations and structural differences between images, the images may overlap only partially, which adds an extra hurdle to successful registration. In this contribution, we propose a multi-modal to mono-modal transformation method that enables the direct application of well-founded mono-modal registration methods to obtain accurate alignment of multi-modal images with both complete (full) and incomplete (partial) overlap. The proposed transformation supports the recovery of large scale changes, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information-theoretic techniques on simulated and clinical human brain images with full overlap. On the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-weighted MRIs, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering partially overlapped multi-modal images.
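
For context on the information-theoretic baselines the abstract compares against: methods in that family typically score a candidate alignment by the mutual information of the two images' joint intensity histogram. A minimal sketch of that score, assuming NumPy, equally shaped images, and an arbitrary bin count (none of this is the paper's proposed transformation):

# Mutual information of two equally shaped images, estimated from their
# joint intensity histogram; the bin count is an arbitrary assumption.
import numpy as np

def mutual_information(fixed, moving, bins=32):
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of the moving image
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration loop would search over scales, rotations, and translations,
# resampling the moving image and maximizing this score against the fixed one.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(mutual_information(img, img))                    # high: identical images
print(mutual_information(img, rng.random((64, 64))))   # near zero: unrelated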
