Results 1 - 8 of 8
1.
IEEE J Biomed Health Inform ; 28(7): 4224-4237, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954562

ABSTRACT

Real-world Electronic Health Records (EHRs) present irregularities due to changes in patients' health status, resulting in varying time intervals between observations and different physiological variables examined at each observation point. Transformer-based models have recently been applied to irregular time series. However, the full attention mechanism in the Transformer focuses too heavily on distant information and ignores short-term correlations in the patient's condition, so the model cannot capture localized changes or short-term fluctuations. Therefore, we propose a novel end-to-end Deformable Neighborhood Attention Transformer (DNA-T) for irregular medical time series. The DNA-T captures local features by dynamically adjusting the receptive field of attention and aggregating relevant deformable neighborhoods in irregular time series. Specifically, we design a Deformable Neighborhood Attention (DNA) module that enables the network to attend to relevant neighborhoods by shifting the receptive field of neighborhood attention. The DNA module enhances the model's sensitivity to local information and its representation of local features, thereby capturing correlations among localized changes in patients' conditions. Extensive experiments validate the effectiveness of DNA-T, which outperforms existing state-of-the-art methods in predicting patients' mortality risk. Moreover, we visualize an example to illustrate the effectiveness of the proposed DNA module.
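
To make the mechanism concrete, here is a minimal, self-contained sketch of deformable neighborhood attention over a 1-D time axis: each query predicts a drift for its local window and attends only over the gathered neighborhood. The class name, window size k, and the rounding of offsets to integer steps are illustrative assumptions, not the paper's implementation (a trainable version would need a differentiable offset scheme such as interpolation).

```python
import torch
import torch.nn as nn

class DeformableNeighborhoodAttention(nn.Module):
    """Sketch: attention restricted to a query-dependent, drifted neighborhood."""
    def __init__(self, dim: int, k: int = 5):
        super().__init__()
        self.k = k                                  # neighborhood size per query
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.offset_net = nn.Linear(dim, k)         # predicts a drift per neighbor slot
        self.scale = dim ** -0.5

    def forward(self, x):                           # x: (batch, time, dim)
        B, T, D = x.shape
        q = self.q(x)
        k_, v_ = self.kv(x).chunk(2, dim=-1)
        # base window centered on each timestep, then drifted by the query
        base = (torch.arange(T).view(1, T, 1)
                + torch.arange(self.k).view(1, 1, self.k) - self.k // 2)
        drift = self.offset_net(q).round().long()   # non-differentiable simplification
        idx = (base + drift).clamp(0, T - 1)        # (B, T, k) neighbor indices
        gather_idx = idx.unsqueeze(-1).expand(B, T, self.k, D)
        k_n = torch.gather(k_.unsqueeze(1).expand(B, T, T, D), 2, gather_idx)
        v_n = torch.gather(v_.unsqueeze(1).expand(B, T, T, D), 2, gather_idx)
        attn = ((q.unsqueeze(2) * k_n).sum(-1) * self.scale).softmax(dim=-1)
        return (attn.unsqueeze(-1) * v_n).sum(dim=2)  # (batch, time, dim)

x = torch.randn(2, 48, 32)                          # 48 irregular observations, 32 features
print(DeformableNeighborhoodAttention(32)(x).shape) # torch.Size([2, 48, 32])
```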


Subject(s)
Electronic Health Records, Humans, Algorithms
2.
Comput Methods Programs Biomed ; 249: 108141, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38574423

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung tumor annotation is a key upstream task for further diagnosis and prognosis. Although deep learning techniques have promoted the automation of lung tumor segmentation, challenges remain that impede its application in clinical practice, such as a lack of prior annotations for model training and limited data sharing among centers. METHODS: In this paper, we use data from six centers to design a novel federated semi-supervised learning (FSSL) framework with dynamic model aggregation and improve segmentation performance for lung tumors. Specifically, we propose a dynamic update algorithm for model parameter aggregation in FSSL that takes advantage of both the quality and quantity of client data. Moreover, to increase the accessibility of data in the federated learning (FL) network, we incorporate the FAIR data principles, which previous federated methods have not addressed. RESULTS: The experimental results show that the segmentation performance of our model in the six centers is 0.9348, 0.8436, 0.8328, 0.7776, 0.8870, and 0.8460, respectively, which is superior to traditional deep learning methods and recent federated semi-supervised learning methods. CONCLUSION: The experimental results demonstrate that our method is superior to existing FSSL methods. In addition, our proposed dynamic update strategy effectively exploits the quality and quantity of client data and is efficient for lung tumor segmentation. The source code is released at https://github.com/GDPHMediaLab/FedDUS.
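
As a rough illustration of quantity- and quality-aware aggregation, the sketch below mixes client sample counts with a per-client quality score (for example, a validation Dice) into the federated averaging weights. The weighting formula and names are assumptions for illustration only; the paper's actual dynamic update strategy is in the repository linked above.

```python
import numpy as np

def aggregate(client_params, n_samples, quality_scores, alpha=0.5):
    """Average client parameter dicts with weights mixing data quantity and quality."""
    n = np.asarray(n_samples, dtype=float)
    q = np.asarray(quality_scores, dtype=float)
    w = alpha * n / n.sum() + (1 - alpha) * q / q.sum()   # convex mix, sums to 1
    return {key: sum(w_i * p[key] for w_i, p in zip(w, client_params))
            for key in client_params[0]}

# toy example: three clients, one weight tensor each
clients = [{"conv.weight": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
print(aggregate(clients, n_samples=[100, 50, 10], quality_scores=[0.93, 0.84, 0.78]))
```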


Subject(s)
Algorithms, Lung Neoplasms, Humans, Automation, Lung Neoplasms/diagnostic imaging, Software, Supervised Machine Learning, Tomography, X-Ray Computed, Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-38145509

ABSTRACT

Although federated learning (FL) has achieved outstanding results in privacy-preserving distributed learning, the requirement of model homogeneity among clients restricts its wide application in practice. This article investigates a more general case, namely model-heterogeneous FL (M-hete FL), where client models are independently designed and can be structurally heterogeneous. M-hete FL faces new challenges in collaborative learning because the parameters of heterogeneous models cannot be directly aggregated. In this article, we propose a novel allosteric feature collaboration (AlFeCo) method, which interchanges knowledge across clients and collaboratively updates heterogeneous models on the server. Specifically, an allosteric feature generator is developed to reveal task-relevant information from multiple client models. The revealed information is stored in client-shared and client-specific codes. We exchange client-specific codes across clients to facilitate knowledge interchange and generate allosteric features whose dimensionality can vary to suit each model update. To promote information communication between different clients, a dual-path (model-model and model-prediction) communication mechanism is designed to supervise the collaborative model updates using the allosteric features. Client models communicate fully through the knowledge interchange between models and between models and predictions. We further provide theoretical evidence and a convergence analysis to support the effectiveness of AlFeCo in M-hete FL. The experimental results show that the proposed AlFeCo method not only performs well on classical FL benchmarks but is also effective in model-heterogeneous federated anti-spoofing. Our code is publicly available at https://github.com/ybaoyao/AlFeCo.
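
The sketch below shows, in highly simplified form, the general idea of collaborating through exchanged feature codes rather than parameter averaging: a generator maps a client-shared code plus a client-specific code to features of whatever width a given client expects, and a client's head is then updated by distillation on those features. The names (FeatureGenerator, distill_step), the loss, and the training step are illustrative assumptions, not the AlFeCo implementation linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    """Maps a client-shared code and a client-specific code to a feature vector
    whose width matches what a particular (heterogeneous) client model expects."""
    def __init__(self, code_dim: int, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * code_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, shared_code, specific_code):
        return self.net(torch.cat([shared_code, specific_code], dim=-1))

def distill_step(client_head, generated_features, teacher_logits, lr=1e-3):
    """One collaborative update: nudge a client's head toward teacher predictions
    on the generated, dimension-matched features."""
    opt = torch.optim.SGD(client_head.parameters(), lr=lr)
    loss = F.kl_div(F.log_softmax(client_head(generated_features), dim=-1),
                    F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy usage: client A expects 32-d features and predicts 5 classes
gen_a = FeatureGenerator(code_dim=16, out_dim=32)
shared, specific = torch.randn(8, 16), torch.randn(8, 16)
feats = gen_a(shared, specific).detach()
print(distill_step(nn.Linear(32, 5), feats, teacher_logits=torch.randn(8, 5)))
```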

4.
Adv Sci (Weinh) ; 10(6): e2204717, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36575159

ABSTRACT

Deep learning (DL) on brain magnetic resonance imaging (MRI) data has shown excellent performance in differentiating individuals with Alzheimer's disease (AD). However, the value of DL in detecting progressive structural MRI (sMRI) abnormalities linked to AD pathology has yet to be established. In this study, an interpretable DL algorithm named Ensemble of 3-dimensional convolutional neural networks (Ensemble 3DCNN), with enhanced parsing techniques, is proposed to investigate the longitudinal trajectories of whole-brain sMRI changes denoting AD onset and progression. A set of 2369 T1-weighted images from the multi-centre Alzheimer's Disease Neuroimaging Initiative and Open Access Series of Imaging Studies cohorts is used for model derivation, validation, testing, and pattern analysis. An Ensemble-3DCNN-based P-score is generated, based on which multiple brain regions, including the amygdala, insula, parahippocampal gyrus, and temporal gyrus, exhibit early and connected progressive neurodegeneration. Complex individual variability in the sMRI changes is also observed. This study, combining non-invasive sMRI with interpretable DL, detected patterned sMRI changes that track AD pathological progression, shedding new light on predicting AD progression using whole-brain sMRI.
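
As a toy illustration of the ensembling step only, the sketch below averages the class probabilities of a few small 3-D CNNs applied to one volume; the paper's P-score, enhanced parsing, and region-wise analysis are not reproduced, and the architecture shown is an assumption.

```python
import torch
import torch.nn as nn

def small_3dcnn(num_classes: int = 2):
    # deliberately tiny stand-in for a 3-D CNN member of the ensemble
    return nn.Sequential(
        nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        nn.Linear(8, num_classes))

ensemble = [small_3dcnn() for _ in range(3)]      # e.g., trained on different folds
volume = torch.randn(1, 1, 64, 64, 64)            # one resampled T1-weighted volume
with torch.no_grad():
    probs = torch.stack([m(volume).softmax(dim=-1) for m in ensemble]).mean(dim=0)
print(probs)                                      # averaged AD vs. control probabilities
```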


Subject(s)
Alzheimer Disease, Deep Learning, Humans, Alzheimer Disease/diagnostic imaging, Magnetic Resonance Imaging/methods, Neuroimaging, Brain/diagnostic imaging, Brain/pathology
5.
JHEP Rep ; 4(3): 100441, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35198928

ABSTRACT

BACKGROUND & AIMS: Accurate hepatocellular carcinoma (HCC) risk prediction facilitates appropriate surveillance strategy and reduces cancer mortality. We aimed to derive and validate novel machine learning models to predict HCC in a territory-wide cohort of patients with chronic viral hepatitis (CVH) using data from the Hospital Authority Data Collaboration Lab (HADCL). METHODS: This was a territory-wide, retrospective, observational, cohort study of patients with CVH in Hong Kong in 2000-2018 identified from HADCL based on viral markers, diagnosis codes, and antiviral treatment for chronic hepatitis B and/or C. The cohort was randomly split into training and validation cohorts in a 7:3 ratio. Five popular machine learning methods, namely, logistic regression, ridge regression, AdaBoost, decision tree, and random forest, were performed and compared to find the best prediction model. RESULTS: A total of 124,006 patients with CVH with complete data were included to build the models. In the training cohort (n = 86,804; 6,821 HCC), ridge regression (area under the receiver operating characteristic curve [AUROC] 0.842), decision tree (0.952), and random forest (0.992) performed the best. In the validation cohort (n = 37,202; 2,875 HCC), ridge regression (AUROC 0.844) and random forest (0.837) maintained their accuracy, which was significantly higher than those of HCC risk scores: CU-HCC (0.672), GAG-HCC (0.745), REACH-B (0.671), PAGE-B (0.748), and REAL-B (0.712) scores. The low cut-off (0.07) of HCC ridge score (HCC-RS) achieved 90.0% sensitivity and 98.6% negative predictive value (NPV) in the validation cohort. The high cut-off (0.15) of HCC-RS achieved high specificity (90.0%) and NPV (95.6%); 31.1% of patients remained indeterminate. CONCLUSIONS: HCC-RS from the ridge regression machine learning model accurately predicted HCC in patients with CVH. These machine learning models may be developed as built-in functional keys or calculators in electronic health systems to reduce cancer mortality. LAY SUMMARY: Novel machine learning models generated accurate risk scores for hepatocellular carcinoma (HCC) in patients with chronic viral hepatitis. HCC ridge score was consistently more accurate than existing HCC risk scores. These models may be incorporated into electronic medical health systems to develop appropriate cancer surveillance strategies and reduce cancer death.

6.
IEEE Trans Image Process ; 31: 419-432, 2022.
Article in English | MEDLINE | ID: mdl-34874854

ABSTRACT

Many unsupervised domain adaptation (UDA) methods have been developed and have achieved promising results in various pattern recognition tasks. However, most existing methods assume that raw source data are available in the target domain when transferring knowledge from the source to the target domain. Due to the emerging regulations on data privacy, the availability of source data cannot be guaranteed when applying UDA methods in a new domain. The lack of source data makes UDA more challenging, and most existing methods are no longer applicable. To handle this issue, this paper analyzes the cross-domain representations in source-data-free unsupervised domain adaptation (SF-UDA). A new theorem is derived to bound the target-domain prediction error using the trained source model instead of the source data. On the basis of the proposed theorem, information bottleneck theory is introduced to minimize the generalization upper bound of the target-domain prediction error, thereby achieving domain adaptation. The minimization is implemented in a variational inference framework using a newly developed latent alignment variational autoencoder (LA-VAE). The experimental results show good performance of the proposed method in several cross-dataset classification tasks without using source data. Ablation studies and feature visualization also validate the effectiveness of our method in SF-UDA.
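
As a rough sketch of the source-free setting, the code below freezes a pretrained source classifier to supply pseudo-labels and trains a small variational encoder plus head on an unlabeled target batch, with a KL penalty loosely mirroring an information-bottleneck trade-off. The module names, loss weight, and single training step are assumptions for illustration, not the paper's LA-VAE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentEncoder(nn.Module):
    """Variational encoder: returns a sampled latent and its KL to a unit Gaussian."""
    def __init__(self, in_dim: int = 20, z_dim: int = 8):
        super().__init__()
        self.mu, self.logvar = nn.Linear(in_dim, z_dim), nn.Linear(in_dim, z_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterization
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        return z, kl

source_model = nn.Linear(20, 4)                   # pretrained source classifier (frozen)
for p in source_model.parameters():
    p.requires_grad_(False)

encoder, head = LatentEncoder(), nn.Linear(8, 4)
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

x_target = torch.randn(32, 20)                    # unlabeled target-domain batch
z, kl = encoder(x_target)
pseudo = source_model(x_target).argmax(dim=1)     # pseudo-labels without source data
loss = F.cross_entropy(head(z), pseudo) + 1e-2 * kl   # bottleneck-style trade-off
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```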

7.
IEEE Trans Cybern ; 52(5): 3394-3407, 2022 May.
Article in English | MEDLINE | ID: mdl-32795976

ABSTRACT

Medical time series of laboratory tests have been collected in electronic health records (EHRs) in many countries. Machine-learning algorithms have been proposed to analyze patients' conditions using these records. However, medical time series may be recorded with different laboratory parameters in different datasets, so a model pretrained on one dataset fails on a test dataset containing time series of different laboratory parameters. This article proposes to solve this problem with an unsupervised time-series adaptation method that generates time series across laboratory parameters. Specifically, a medical time-series generation network with similarity distillation is developed to reduce the domain gap caused by differing laboratory parameters. The relations among different laboratory parameters are analyzed, and the similarity information is distilled to guide the generation of target-domain-specific laboratory parameters. To further improve performance in cross-domain medical applications, a missingness-aware feature extraction network is proposed, where the missingness patterns reflect patients' health conditions and thus serve as auxiliary features for medical analysis. In addition, we introduce domain-adversarial networks at both the feature level and the time-series level to enhance adaptation across domains. Experimental results show that the proposed method achieves good performance on both private and publicly available medical datasets. Ablation studies and distribution visualizations are provided to further analyze the properties of the proposed method.
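
The sketch below isolates one ingredient named above, missingness-aware feature extraction: the binary mask of which lab tests were actually recorded is fed alongside the values into a recurrent encoder. The encoder design and names are illustrative assumptions and omit the generation network and adversarial components.

```python
import torch
import torch.nn as nn

class MissingnessAwareEncoder(nn.Module):
    """Encodes lab values together with their missingness mask."""
    def __init__(self, n_labs: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2 * n_labs, hidden_size=hidden, batch_first=True)

    def forward(self, values, mask):
        # values: (batch, time, n_labs), missing entries set to 0
        # mask:   (batch, time, n_labs), 1 where a lab test was actually recorded
        x = torch.cat([values * mask, mask], dim=-1)
        _, h = self.rnn(x)
        return h[-1]                               # patient-level representation

values = torch.randn(4, 30, 10)
mask = (torch.rand(4, 30, 10) > 0.6).float()       # sparse, irregular observations
print(MissingnessAwareEncoder(n_labs=10)(values, mask).shape)  # torch.Size([4, 32])
```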


Subject(s)
Algorithms, Distillation, Electronic Health Records, Humans, Machine Learning, Time Factors
8.
IEEE Trans Neural Netw Learn Syst ; 32(10): 4665-4679, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33055037

ABSTRACT

Influenced by dynamic changes in the severity of illness, patients usually undergo hospital examinations at irregular intervals, producing a large volume of irregular medical time-series data. Diagnosis prediction from such irregular medical time series is challenging because the intervals between consecutive records vary significantly over time. Existing methods often handle this problem by generating regular time series from the irregular medical records, without considering the uncertainty in the generated data induced by the varying intervals. Thus, a novel Uncertainty-Aware Convolutional Recurrent Neural Network (UA-CRNN) is proposed in this article, which incorporates the uncertainty of the generated data to improve risk prediction. To tackle complex medical time series with subseries of different frequencies, the uncertainty information is incorporated at the subseries level rather than over the whole sequence, so that different time intervals are handled seamlessly. Specifically, a hierarchical uncertainty-aware decomposition layer (UADL) is designed to adaptively decompose the time series into subseries and assign them weights according to their reliability. Meanwhile, an Explainable UA-CRNN (eUA-CRNN) is proposed that uses filters with different passbands to keep the components within each subseries homogeneous and the components across subseries diverse. Furthermore, eUA-CRNN incorporates an uncertainty-aware attention module that learns attention weights from the uncertainty information, providing explainable prediction results. Extensive experiments on three real-world medical datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
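
A hedged sketch of the uncertainty-aware attention idea follows: subseries representations are pooled with scores that are down-weighted when the estimated uncertainty is high, and the resulting weights can be inspected for explanation. The down-weighting rule, names, and shapes are assumptions; the decomposition layer and CRNN backbone are omitted.

```python
import torch
import torch.nn as nn

class UncertaintyAwareAttention(nn.Module):
    """Pools subseries features with attention scores penalized by uncertainty."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, subseries_feats, uncertainty):
        # subseries_feats: (batch, n_sub, dim); uncertainty: (batch, n_sub), > 0
        raw = self.score(subseries_feats).squeeze(-1)            # content-based score
        weights = torch.softmax(raw - torch.log1p(uncertainty), dim=-1)
        pooled = (weights.unsqueeze(-1) * subseries_feats).sum(dim=1)
        return pooled, weights                                   # weights aid explanation

feats = torch.randn(2, 4, 16)                  # 4 frequency subseries per patient
unc = torch.rand(2, 4)                         # higher value = less reliable subseries
pooled, w = UncertaintyAwareAttention(16)(feats, unc)
print(pooled.shape, w.sum(dim=-1))             # torch.Size([2, 16]), weights sum to 1
```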


Subject(s)
Deep Learning/trends, Electronic Health Records/trends, Neural Networks, Computer, Uncertainty, Humans, Time Factors