Results 1 - 20 of 583
1.
Pattern Recognit ; 157: 2025 Jan.
Article in English | MEDLINE | ID: mdl-39246820

ABSTRACT

Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to analyze neurological disorders, but there exists cross-site/domain data heterogeneity caused by site effects such as differences in scanners/protocols. Existing domain adaptation methods that reduce fMRI heterogeneity generally require accessing source domain data, which is challenging due to privacy concerns and/or data storage burdens. To address this, we propose a source-free collaborative domain adaptation (SCDA) framework that uses only a pretrained source model and unlabeled target data. Specifically, a multi-perspective feature enrichment method (MFE) is developed to dynamically exploit target fMRIs from multiple views. To facilitate efficient source-to-target knowledge transfer without accessing source data, we initialize MFE using parameters of the pretrained source model. We also introduce an unsupervised pretraining strategy using 3,806 unlabeled fMRIs from three large-scale auxiliary databases. Experimental results on three public datasets and one private dataset show the efficacy of our method in cross-scanner and cross-study prediction.
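Entry 1's source-free setting can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the authors' MFE implementation: a model initialized from the pretrained source network scores unlabeled target scans, and only low-entropy predictions are kept as pseudo-labels for adaptation, since no source data is available for supervision. The `select_confident_targets` helper and the 0.3-nat threshold are assumptions for the example.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def select_confident_targets(logits, max_entropy=0.3):
    """Keep unlabeled target samples whose predictive entropy is low.

    Source-free methods commonly self-train on such samples because no
    source data is available to supervise the adaptation.
    """
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    keep = entropy < max_entropy
    pseudo_labels = p.argmax(axis=1)
    return keep, pseudo_labels

# Toy target batch: two confident predictions, one ambiguous one.
logits = np.array([[8.0, 0.0], [0.0, 7.5], [0.1, 0.0]])
keep, labels = select_confident_targets(logits)  # keeps the first two only
```

In a full pipeline, the retained pseudo-labels would drive further fine-tuning of the target model while the ambiguous samples are revisited in later rounds.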

2.
Neuroimage Clin ; 43: 103663, 2024.
Article in English | MEDLINE | ID: mdl-39226701

ABSTRACT

Identifying biomarkers for computer-aided diagnosis (CAD) is crucial for early intervention in psychiatric disorders. Multi-site data have been utilized to increase the sample size and improve statistical power, while multi-modality classification offers significant advantages over traditional single-modality approaches for diagnosing psychiatric disorders. However, inter-site heterogeneity and intra-modality heterogeneity present challenges to multi-site, multi-modality classification. In this paper, brain functional and structural networks (BFNs/BSNs) from multiple sites were constructed to establish a joint multi-site multi-modality framework for psychiatric diagnosis. To do this, we developed a hypergraph-based multi-source domain adaptation (HMSDA) method that allowed us to transform source-domain subjects into a target domain. A local ordinal structure-based multi-task feature selection (LOSMFS) approach was developed by integrating the transformed functional and structural connections (FCs/SCs). The effectiveness of our method was validated on the diagnosis of both schizophrenia (SZ) and autism spectrum disorder (ASD), obtaining accuracies of 92.2% ± 2.22% and 84.8% ± 2.68%, respectively. We also compared our method with 6 domain adaptation (DA) methods, 10 multi-modality feature selection methods, and 8 multi-site and multi-modality methods. The results showed that the proposed HMSDA+LOSMFS effectively integrates multi-site and multi-modality data to enhance psychiatric diagnosis and identify disorder-specific diagnostic brain connections.


Subjects
Magnetic Resonance Imaging , Schizophrenia , Humans , Male , Female , Adult , Schizophrenia/diagnosis , Magnetic Resonance Imaging/methods , Autism Spectrum Disorder/diagnosis , Brain/physiopathology , Brain/diagnostic imaging , Young Adult , Mental Disorders/diagnosis , Adolescent , Diagnosis, Computer-Assisted/methods
3.
Int J Biostat ; 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39322995

ABSTRACT

Learning individualized treatment rules (ITRs) for a target patient population with mental disorders is confronted with many challenges. First, the target population may be different from the training population that provided data for learning ITRs. Ignoring differences between the training patient data and the target population can result in sub-optimal treatment strategies for the target population. Second, for mental disorders, a patient's underlying mental state is not observed but can be inferred from measures of high-dimensional combinations of symptomatology. Treatment mechanisms are unknown and can be complex, and thus treatment effect moderation can take complicated forms. To address these challenges, we propose a novel method that connects measurement models, efficient weighting schemes, and flexible neural network architecture through latent variables to tailor treatments for a target population. Patients' underlying mental states are represented by a compact set of latent state variables while preserving interpretability. Weighting schemes are designed based on lower-dimensional latent variables to efficiently balance population differences so that biases in learning the latent structure and treatment effects are mitigated. Extensive simulation studies demonstrated consistent superiority of the proposed method and the weighting approach. Applications to two real-world studies of patients with major depressive disorder have shown a broad utility of the proposed method in improving treatment outcomes in the target population.
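The weighting idea in entry 3 — balancing a training population against a different target population — is commonly implemented with density-ratio importance weights. Below is a toy sketch under strong assumptions (a one-dimensional latent score with known Gaussian densities; the paper builds its weights on learned latent variables instead):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Training population and a shifted target population (1-D latent score).
train = rng.normal(0.0, 1.0, 20000)
mu_t, sigma_t = 1.0, 1.0  # target distribution, assumed known for the sketch

# Importance weights: target density over training density.
w = gaussian_pdf(train, mu_t, sigma_t) / gaussian_pdf(train, 0.0, 1.0)
w /= w.mean()  # self-normalize for stability

# The weighted training sample now mimics target-population statistics.
weighted_mean = np.average(train, weights=w)
```

After reweighting, the weighted mean of the training scores moves from roughly 0 to roughly the target mean of 1, which is exactly the population-balancing effect the weighting scheme is designed to achieve before treatment effects are estimated.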

4.
Sci Rep ; 14(1): 22291, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333249

ABSTRACT

Fluorescence spectroscopy is a fundamental tool in life sciences and chemistry, with applications in environmental monitoring, food quality control, and biomedical diagnostics. However, analysis of spectroscopic data with deep learning, in particular of fluorescence excitation-emission matrices (EEMs), presents significant challenges due to the typically small and sparse datasets available. Furthermore, the analysis of EEMs is difficult due to their high dimensionality and overlapping spectral features. This study proposes a new approach that exploits domain adaptation with pretrained vision models, along with a novel interpretability algorithm to address these challenges. Thanks to specialised feature engineering of the neural networks described in this work, we are now able to provide deeper insights into the physico-chemical processes underlying the data. The proposed approach is demonstrated through the analysis of the oxidation process in extra virgin olive oil (EVOO), showing its effectiveness in predicting quality indicators and identifying the spectral bands and thus the molecules involved in the process. This work describes a significantly innovative approach to deep learning for spectroscopy, transforming it from a black box into a tool for understanding complex biological and chemical processes.


Subjects
Deep Learning , Olive Oil , Oxidation-Reduction , Spectrometry, Fluorescence , Olive Oil/chemistry , Spectrometry, Fluorescence/methods , Algorithms , Neural Networks, Computer
5.
Comput Biol Med ; 182: 109150, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39298884

ABSTRACT

Recent advancements in retinal vessel segmentation, which employ transformer-based and domain-adaptive approaches, show promise in addressing the complexity of ocular diseases such as diabetic retinopathy. However, current algorithms face challenges in effectively accommodating domain-specific variations and limitations of training datasets, which fail to represent real-world conditions comprehensively. Manual inspection by specialists remains time-consuming despite technological progress in medical imaging, underscoring the pressing need for automated and robust segmentation techniques. Additionally, these methods have deficiencies in handling unlabeled target sets, requiring extra preprocessing steps and manual intervention, which hinders their scalability and practical application in clinical settings. This research introduces a novel framework that employs semi-supervised domain adaptation and contrastive pre-training to address these limitations. The proposed model effectively learns from target data by implementing a novel pseudo-labeling approach and feature-based knowledge distillation within a temporal convolutional network (TCN), extracting robust, domain-independent features. This approach strengthens cross-domain adaptation, significantly enhancing the model's versatility and performance in clinical settings. The semi-supervised domain adaptation component overcomes the challenges posed by domain shifts, while pseudo-labeling exploits the data's inherent structure for enhanced learning, which is particularly beneficial when labeled data are scarce. Evaluated on the DRIVE and CHASE_DB1 datasets, which contain clinical fundus images, the proposed model achieves outstanding performance, with accuracy, sensitivity, specificity, and AUC values of 0.9792, 0.8640, 0.9901, and 0.9868 on DRIVE, and 0.9830, 0.9058, 0.9888, and 0.9950 on CHASE_DB1, respectively, outperforming current state-of-the-art vessel segmentation methods. The partitioning of datasets into training and testing sets ensures thorough validation, while extensive ablation studies, with careful sensitivity analysis of the model's parameters and of different percentages of labeled data, further validate its robustness.
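Knowledge distillation, one of the ingredients in entry 5, is most easily illustrated in its classic logit-based form (the paper uses a feature-based variant, where intermediate features rather than class distributions are matched). A minimal sketch with a temperature-softened KL objective:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, the usual knowledge-distillation objective."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1).mean())

teacher = np.array([[4.0, 1.0, 0.0]])
aligned = distillation_loss(teacher, teacher)            # identical logits -> ~0
off = distillation_loss(np.array([[0.0, 4.0, 1.0]]), teacher)  # mismatch -> large
```

The loss is zero when student and teacher agree and grows with disagreement; a feature-based variant replaces the class distributions with intermediate feature maps and an L2 or similar matching term.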

6.
Neural Netw ; 180: 106739, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39299038

ABSTRACT

The goal of Partial Domain Adaptation (PDA) is to transfer a neural network from a source domain (joint source distribution) to a distinct target domain (joint target distribution), where the source label space subsumes the target label space. To address the PDA problem, existing works have proposed to learn the marginal source weights to match the weighted marginal source distribution to the marginal target distribution. However, this is sub-optimal, since the neural network's target performance is concerned with the joint distribution disparity, not the marginal distribution disparity. In this paper, we propose a Joint Weight Optimization (JWO) approach that optimizes the joint source weights to match the weighted joint source distribution to the joint target distribution in the neural network's feature space. To measure the joint distribution disparity, we exploit two statistical distances: the distribution-difference-based L2-distance and the distribution-ratio-based χ2-divergence. Since these two distances are unknown in practice, we propose a Kernel Statistical Distance Estimation (KSDE) method to estimate them from the weighted source data and the target data. Our KSDE method explicitly expresses the two estimated statistical distances as functions of the joint source weights. Therefore, we can optimize the joint weights to minimize the estimated distance functions and reduce the joint distribution disparity. Finally, we achieve the PDA goal by training the neural network on the weighted source data. Experiments on several popular datasets are conducted to demonstrate the effectiveness of our approach. Intro video and Pytorch code are available at https://github.com/sentaochen/Joint-Weight-Optimation. Interested readers can also visit https://github.com/sentaochen for more source codes of the related domain adaptation, multi-source domain adaptation, and domain generalization approaches.
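Entry 6's weighted distribution matching can be illustrated with a stand-in statistical distance. The sketch below uses a weighted squared MMD with a Gaussian kernel rather than the paper's L2-distance or χ2-divergence estimators, and hand-set weights rather than optimized ones, purely to show how down-weighting source-only classes shrinks the source-target discrepancy in partial domain adaptation:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def weighted_mmd2(src, tgt, w):
    """Squared MMD between a weighted source sample and a target sample."""
    w = w / w.sum()
    Kss = gaussian_kernel(src, src)
    Ktt = gaussian_kernel(tgt, tgt)
    Kst = gaussian_kernel(src, tgt)
    return float(w @ Kss @ w + Ktt.mean() - 2 * (w @ Kst).mean())

# Source covers two classes; the target only contains the first (partial DA).
src = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
tgt = rng.normal(0, 0.3, (80, 2))

uniform = np.ones(200)
downweighted = np.concatenate([np.ones(100), np.full(100, 1e-3)])

d_uniform = weighted_mmd2(src, tgt, uniform)         # large: extra class hurts
d_reweighted = weighted_mmd2(src, tgt, downweighted)  # small: extra class muted
```

In the paper the weights are optimized jointly (hence "joint weight optimization") by minimizing the estimated distance as a function of the weights, rather than being set by hand as here.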

7.
Biomed Eng Lett ; 14(5): 1137-1146, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39220031

ABSTRACT

In clinical scenarios, for reasons such as patient privacy, information protection, and data migration, the source-domain data are often inaccessible when domain adaptation is needed, and only a model pretrained on the source domain is available. Existing solutions to this problem tend to forget the rich task experience previously learned on the source domain after adaptation, meaning the model simply overfits the target-domain data and does not learn robust features that support real task decisions. We address this problem by exploring source-free domain adaptation in medical image segmentation and propose a two-stage additive source-free adaptation framework. We generalize domain-invariant features by constraining the core pathological structure and semantic consistency between different perspectives, and we reduce segmentation errors by locating and filtering potentially erroneous elements through Monte-Carlo uncertainty estimation. We conduct comparison experiments with other methods on a cross-device polyp segmentation dataset and a cross-modal brain tumor segmentation dataset; the results in both the target and source domains verify that the proposed method effectively solves the domain shift problem and that the model retains its performance on the source domain after learning new knowledge of the target domain. This work provides valuable exploration toward additive learning on the target and source domains in the absence of source data and offers new ideas and methods for adaptation research in medical image segmentation.
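The Monte-Carlo uncertainty filtering mentioned in entry 7 is typically implemented by running several stochastic (e.g. MC-dropout) forward passes and discarding pixels whose predictions fluctuate. A hypothetical sketch with simulated per-pixel probabilities — the 0.02 variance threshold and the simulated distributions are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_uncertainty_filter(prob_samples, var_threshold=0.02):
    """Keep only pixels whose prediction is stable across T stochastic
    (e.g. MC-dropout) forward passes; uncertain pixels are masked out."""
    mean_prob = prob_samples.mean(axis=0)
    variance = prob_samples.var(axis=0)
    pseudo_mask = mean_prob > 0.5      # provisional foreground decision
    confident = variance < var_threshold
    return pseudo_mask & confident, confident

# Simulated foreground probabilities for a 4-pixel image over T=50 passes:
# pixels 0/1 stable foreground, pixel 2 stable background, pixel 3 unstable.
T = 50
stable_fg = np.clip(rng.normal(0.9, 0.02, (T, 2)), 0, 1)
stable_bg = np.clip(rng.normal(0.1, 0.02, (T, 1)), 0, 1)
unstable = rng.uniform(0.0, 1.0, (T, 1))
probs = np.concatenate([stable_fg, stable_bg, unstable], axis=1)

mask, confident = mc_uncertainty_filter(probs)
```

Only the two stable foreground pixels survive the combined mask; the unstable pixel is filtered out regardless of its mean probability, which is the error-filtering behavior the abstract describes.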

8.
Med Phys ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225652

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) image segmentation is crucial in prostate cancer radiotherapy, enabling precise delineation of the prostate gland for accurate treatment planning and delivery. However, the poor quality of CBCT images poses challenges in clinical practice, making annotation difficult due to factors such as image noise, low contrast, and organ deformation. PURPOSE: The objective of this study is to create a segmentation model for the label-free target domain (CBCT), leveraging valuable insights derived from the label-rich source domain (CT). This goal is achieved by addressing the domain gap across diverse domains through the implementation of a cross-modality medical image segmentation framework. METHODS: Our approach introduces a multi-scale domain adaptive segmentation method, performing domain adaptation simultaneously at both the image and feature levels. The primary innovation lies in a novel multi-scale anatomical regularization approach, which (i) aligns the target domain feature space with the source domain feature space at multiple spatial scales simultaneously, and (ii) exchanges information across different scales to fuse knowledge from multi-scale perspectives. RESULTS: Quantitative and qualitative experiments were conducted on pelvic CBCT segmentation tasks. The training dataset comprises 40 unpaired CBCT-CT images with only the CT images annotated. The validation and testing datasets consist of 5 and 10 CT images, respectively, all with annotations. The experimental results demonstrate the superior performance of our method compared to other state-of-the-art cross-modality medical image segmentation methods. The Dice similarity coefficient (DSC) for the CBCT image segmentation results is 74.6 ± 9.3%, and the average symmetric surface distance (ASSD) is 3.9 ± 1.8 mm. Statistical analysis confirms the significance of the improvements achieved by our method. CONCLUSIONS: Our method exhibits superior performance in pelvic CBCT image segmentation compared to its counterparts.
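The DSC reported in entry 8 is the standard overlap measure for segmentation masks; a minimal reference implementation on a toy example:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1            # 4-pixel ground-truth square
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1          # prediction overshoots by one column (6 pixels)

dsc = dice_coefficient(pred, gt)  # 2*4 / (6 + 4) = 0.8
```

The ASSD also reported in the abstract is complementary: it measures the average surface-to-surface distance in millimeters rather than volumetric overlap, so the two metrics together capture both bulk agreement and boundary accuracy.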

9.
Heliyon ; 10(17): e36823, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39263111

ABSTRACT

Human Pose Estimation (HPE) is a crucial step towards understanding people in images and videos. HPE provides geometric and motion information of the human body, which has been applied to a wide range of applications (e.g., human-computer interaction, motion analysis, augmented reality, virtual reality, healthcare, etc.). An extremely useful task of this kind is the 2D pose estimation of bedridden patients from infrared (IR) images. Here, the IR imaging modality is preferred due to privacy concerns and the need for monitoring both uncovered and covered patients at different levels of illumination. The major drawback of this research problem is the unavailability of covered examples, which are very costly to collect and time-consuming to label. In this work, a deep learning-based framework was developed for human sleeping pose estimation on covered images using only the uncovered training images. In the training scheme, two different image augmentation techniques, a statistical approach as well as a GAN-based approach, were explored for domain adaptation, where the statistical approach performed better. The accuracy of the model trained on the statistically augmented dataset was improved by 124 % as compared with the model trained on non-augmented images. To handle the scarcity of training infrared images, a transfer learning strategy was used by pre-training the model on an RGB pose estimation dataset, resulting in a further increment in accuracy of 4 %. Semi-supervised learning techniques, with a novel pose discriminator model in the loop, were adopted to utilize the unannotated training data, resulting in a further 3 % increase in accuracy. Thus, significant improvement has been shown in the case of 2D pose estimation from infrared images, with a comparatively small amount of annotated data and a large amount of unannotated data by using the proposed training pipeline powered by heavy augmentation.

10.
JMIR Aging ; 7: e53793, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283346

ABSTRACT

Background: Cognitive impairment and dementia pose a significant challenge to the aging population, impacting the well-being, quality of life, and autonomy of affected individuals. As the population ages, this will place enormous strain on health care and economic systems. While computerized cognitive training programs have demonstrated some promise in addressing cognitive decline, adherence to these interventions can be challenging. Objective: The objective of this study is to improve the accuracy of predicting adherence lapses to ultimately develop tailored adherence support systems to promote engagement with cognitive training among older adults. Methods: Data from 2 previously conducted cognitive training intervention studies were used to forecast adherence levels among older participants. Deep convolutional neural networks were used to leverage their feature learning capabilities and predict adherence patterns based on past behavior. Domain adaptation (DA) was used to address the challenge of limited training data for each participant, by using data from other participants with similar playing patterns. Time series data were converted into image format using Gramian angular fields, to facilitate clustering of participants during DA. To the best of our knowledge, this is the first effort to use DA techniques to predict older adults' daily adherence to cognitive training programs. Results: Our results demonstrated the promise and potential of deep neural networks and DA for predicting adherence lapses. In all 3 studies, using 2 independent datasets, DA consistently produced the best accuracy values. Conclusions: Our findings highlight that deep learning and DA techniques can aid in the development of adherence support systems for computerized cognitive training, as well as for other interventions aimed at improving health, cognition, and well-being. 
These techniques can improve engagement and maximize the benefits of such interventions, ultimately enhancing the quality of life of individuals at risk for cognitive impairments. This research informs the development of more effective interventions, benefiting individuals and society by improving conditions associated with aging.
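The Gramian angular field encoding used in entry 10 converts a time series into an image-like matrix so that convolutional networks can consume it. A sketch of the summation-field (GASF) variant; the example series is hypothetical:

```python
import numpy as np

def gramian_angular_field(x):
    """Convert a 1-D series to a Gramian angular summation field (GASF):
    rescale to [-1, 1], take phi = arccos(x), and form cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))               # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])       # pairwise temporal image

series = [3.0, 5.0, 4.0, 7.0]   # e.g. daily minutes of training played
gaf = gramian_angular_field(series)  # 4x4 symmetric matrix
```

The resulting matrix is symmetric with diagonal entries cos(2*phi_i) = 2x_i^2 - 1, and each off-diagonal cell encodes the temporal correlation between two time steps, which is what lets an image-oriented CNN pick up adherence patterns.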


Subjects
Cognitive Dysfunction , Deep Learning , Humans , Aged , Female , Male , Cognitive Dysfunction/psychology , Cognitive Dysfunction/therapy , Aged, 80 and over , Patient Compliance/psychology , Quality of Life/psychology , Cognitive Training
11.
Neural Netw ; 180: 106682, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39241436

ABSTRACT

In unsupervised domain adaptive object detection, learning target-specific features is pivotal in enhancing detector performance. However, previous methods have mostly concentrated on aligning domain-invariant features across domains and neglected integrating the specific features. To tackle this issue, we introduce a novel feature learning method called Joint Feature Differentiation and Interaction (JFDI), which significantly boosts the adaptability of the object detector. We construct a dual-path architecture based on our proposed feature differentiation modules: one path, guided by the source domain data, utilizes multiple discriminators to confuse and align domain-invariant features; the other path, specifically tailored to the target domain, learns its distinctive characteristics from pseudo-labeled target data. Subsequently, we implement an interactive enhancement mechanism between these paths to ensure stable feature learning and to mitigate interference from pseudo-label noise during iterative optimization. Additionally, we devise a hierarchical pseudo-label fusion module that consolidates more comprehensive and reliable results. We also analyze the generalization error bound of JFDI, which provides a theoretical basis for its effectiveness. Extensive empirical evaluations across diverse benchmark scenarios demonstrate that our method is advanced and efficient.

12.
Digit Health ; 10: 20552076241277440, 2024.
Article in English | MEDLINE | ID: mdl-39229464

ABSTRACT

Objective: Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target dataset follow the same probability distribution and when this assumption is not satisfied their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation. Methods: StAC-DA implements an image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, where images from the source domain are translated to the target domain in pixel space by implementing a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and target domain images in an adversarial manner to produce probable segmentations for the target domain. Results: The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, being ranked first in the segmentation of the ascending aorta when adapting from Magnetic Resonance Imaging (MRI) to Computed Tomography (CT) domain and from CT to MRI domain. Conclusions: The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.

13.
Radiol Artif Intell ; 6(5): e230521, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166972

ABSTRACT

Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS score of 3 or greater and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value). 
Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value Supplemental material is available for this article. © RSNA, 2024.


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Retrospective Studies , Middle Aged , Aged , Image Interpretation, Computer-Assisted/methods , Multiparametric Magnetic Resonance Imaging/methods , Diffusion Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology , Magnetic Resonance Imaging/methods
14.
Med Image Anal ; 97: 103287, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39111265

ABSTRACT

Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize well across different imaging modalities. This issue is particularly problematic due to the limited availability of annotated data, in both the target and the source modality, making it difficult to deploy these models on a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities is used to produce synthetic but annotated images and labels in the desired modality and improve generalization to the unannotated target modality. We also use powerful vision transformer architectures for both the image translation (TransUNet) and segmentation (Medformer) tasks and introduce an iterative self-training procedure in the latter task to further close the domain gap between modalities, thus also training on unlabeled images in the target modality. MoDATTS additionally allows the possibility to exploit image-level labels with a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised methodology helps in particular to maintain downstream segmentation performance when pixel-level label scarcity is also present in the source modality dataset, or when the source dataset contains healthy controls. The proposed model achieves superior performance compared to other methods from participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, as evidenced by its reported top Dice score of 0.87 ± 0.04 for the VS segmentation.
MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality adult brain gliomas segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where 95% of a target supervised model performance is reached when no target modality annotations are available. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data is additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.


Subjects
Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Deep Learning , Supervised Machine Learning , Image Processing, Computer-Assisted/methods
15.
Neural Netw ; 179: 106617, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39180976

ABSTRACT

Vigilance state is crucial for the effective performance of users in brain-computer interface (BCI) systems. Most vigilance estimation methods rely on a large amount of labeled data to train a satisfactory model for a specific subject, which limits their practical application. This study aimed to build a reliable vigilance estimation method using a small amount of unlabeled calibration data. We conducted a vigilance experiment in a designed BCI-based cursor-control task. Electroencephalogram (EEG) signals of eighteen participants were recorded in two sessions on two different days. We then propose a contrastive fine-grained domain adaptation network (CFGDAN) for vigilance estimation. Here, an adaptive graph convolution network (GCN) is built to project the EEG data of different domains into a common space, a fine-grained feature alignment mechanism is designed to weight and align the feature distributions across domains at the EEG channel level, and a contrastive information preservation module is developed to preserve useful target-specific information during feature alignment. The experimental results show that the proposed CFGDAN outperforms the compared methods on our BCI vigilance dataset and the SEED-VIG dataset. Moreover, the visualization results demonstrate the efficacy of the designed feature alignment mechanisms. These results indicate the effectiveness of our method for vigilance estimation. Our study is helpful for reducing calibration efforts and promoting the practical application potential of vigilance estimation methods.


Subjects
Arousal , Brain-Computer Interfaces , Electroencephalography , Neural Networks, Computer , Humans , Electroencephalography/methods , Male , Arousal/physiology , Female , Adult , Young Adult , Brain/physiology , Algorithms , Signal Processing, Computer-Assisted
16.
Comput Softw Big Sci ; 8(1): 15, 2024.
Article in English | MEDLINE | ID: mdl-39135680

ABSTRACT

Simulated events are key ingredients in almost all high-energy physics analyses. However, imperfections in the simulation can lead to sizeable differences between the observed data and simulated events. The effects of such mismodelling on relevant observables must be corrected either effectively via scale factors, with weights or by modifying the distributions of the observables and their correlations. We introduce a correction method that transforms one multidimensional distribution (simulation) into another one (data) using a simple architecture based on a single normalising flow with a boolean condition. We demonstrate the effectiveness of the method on a physics-inspired toy dataset with non-trivial mismodelling of several observables and their correlations.
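The boolean-conditioned flow in entry 16 can be illustrated at its simplest: a single transform whose parameters are selected by the condition bit, with an exact Jacobian log-determinant. The sketch below uses a fixed one-dimensional affine map with hand-set parameters; in the paper, a full normalising flow is trained on data and simulation jointly, and the parameters shown here would be learned by maximum likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)

def affine_flow(x, cond, params):
    """Minimal boolean-conditioned affine flow: the condition bit selects
    the (scale, shift) pair, so one transform body serves both directions
    of the correction (pass-through vs. simulation-to-data morphing)."""
    scale, shift = params[cond.astype(int)].T
    y = scale * x + shift
    log_det = np.log(np.abs(scale))  # exact Jacobian of an affine map
    return y, log_det

# Hypothetical per-condition parameters (learned in practice, not hand-set).
params = np.array([[1.0, 0.0],    # cond=0: identity (leave events unchanged)
                   [2.0, 1.0]])   # cond=1: morph the mismodelled observable

sim = rng.normal(0.0, 1.0, 5000)                        # mismodelled observable
corrected, _ = affine_flow(sim, np.ones(5000), params)  # morphed toward "data"
passthrough, logdet0 = affine_flow(sim, np.zeros(5000), params)
```

With the condition bit set, the simulated observable is mapped from a standard normal onto a distribution with mean 1 and width 2; with the bit cleared, events pass through unchanged and the log-determinant is zero, mirroring the boolean-condition design described in the abstract.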

17.
Int J Neural Syst ; 34(10): 2450055, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39136190

ABSTRACT

Automatic seizure detection from electroencephalography (EEG) is of great importance in aiding the diagnosis and treatment of epilepsy due to its convenience and economy. Existing seizure detection methods are usually patient-specific: training and testing are carried out on the same patient, limiting their scalability to other patients. To address this issue, we propose a cross-subject seizure detection method via unsupervised domain adaptation. The proposed method aims to obtain seizure-specific information through shallow and deep feature alignments. For shallow feature alignment, we use a convolutional neural network (CNN) to extract seizure-related features. The distribution gap of the shallow features between different patients is minimized by the multi-kernel maximum mean discrepancy (MK-MMD). For deep feature alignment, adversarial learning is utilized: the feature extractor learns representations that confuse the domain classifier, making the extracted deep features more generalizable to new patients. The performance of our method is evaluated on the CHB-MIT and Siena databases in epoch-based experiments. Additionally, event-based experiments are conducted on the CHB-MIT dataset. The results validate the feasibility of our method in diminishing the domain disparities among different patients.
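The MK-MMD used for shallow feature alignment in entry 17 averages Gaussian kernels at several bandwidths and measures how far apart two feature samples are in the induced feature space. A compact sketch with synthetic "patients"; the bandwidths, dimensions, and shift are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def mk_mmd2(x, y, sigmas=(0.5, 1.0, 2.0)):
    """Squared maximum mean discrepancy under a mixture of Gaussian
    kernels (MK-MMD): small when the two samples share a distribution."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

# EEG-like feature vectors from two "patients": same condition vs. shifted.
a = rng.normal(0.0, 1.0, (150, 4))
b = rng.normal(0.0, 1.0, (150, 4))   # same distribution as a
c = rng.normal(1.5, 1.0, (150, 4))   # domain-shifted patient

same = mk_mmd2(a, b)       # near zero
shifted = mk_mmd2(a, c)    # clearly larger
```

Minimizing such a multi-kernel discrepancy between patients' shallow features is what pulls their distributions together; the kernel mixture avoids committing to a single bandwidth, which is the point of the "multi-kernel" variant.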


Subjects
Electroencephalography , Neural Networks, Computer , Seizures , Unsupervised Machine Learning , Humans , Electroencephalography/methods , Seizures/diagnosis , Seizures/physiopathology , Deep Learning , Signal Processing, Computer-Assisted
18.
Neural Netw ; 180: 106626, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39173197

ABSTRACT

Recently, point cloud domain adaptation (DA) methods have been developed to improve the generalization ability of deep learning models on point cloud data, since variations across domains often degrade the performance of models trained on differently distributed data sources. Previous studies have focused on output-level domain alignment to address this challenge, but this approach can amplify alignment errors, particularly for targets that would otherwise be predicted incorrectly. In this study, we therefore propose input-level discretization-based matching to enhance the generalization ability of DA. Specifically, an efficient geometric deformation depth decoupling network (3DeNet) is implemented to learn knowledge from the source domain and embed it into an implicit feature space, which facilitates effective constraint of unsupervised predictions for downstream tasks. Secondly, we show that the sparsity of the implicit feature space varies between domains, making domain differences difficult to capture. Consequently, we match sets of neighboring points with different densities and biases by differentiating the adaptive densities. Finally, inter-domain differences are aligned by constraining the loss originating from and between the target domains. We conduct experiments on the point cloud DA datasets PointDA-10 and PointSegDA, improving on prior results by over 1.2% and 1% on average, respectively.
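The input-level discretization mentioned above can be pictured in its simplest form as quantizing a point cloud onto a voxel grid before matching domains. The sketch below is only that toy picture, not 3DeNet: it discretizes two clouds of different densities (mimicking the density mismatch between domains) and measures how much of the occupied space they share:

```python
import numpy as np

def voxel_occupancy(points, grid=8):
    """Discretize a unit-cube point cloud (n, 3) into a boolean occupancy
    grid -- a minimal stand-in for input-level discretization-based matching."""
    idx = np.clip((points * grid).astype(int), 0, grid - 1)
    occ = np.zeros((grid, grid, grid), dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ

rng = np.random.default_rng(1)
src = rng.random((2048, 3))   # dense source-domain cloud
tgt = rng.random((1024, 3))   # sparser target-domain cloud
# Fraction of voxels occupied by both domains: a crude input-level
# agreement score that is insensitive to per-point density differences.
overlap = (voxel_occupancy(src) & voxel_occupancy(tgt)).mean()
```

Working at the input level like this sidesteps the output-level failure mode the abstract describes, where alignment is driven by (possibly wrong) predictions.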

19.
Bioengineering (Basel) ; 11(8)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39199750

ABSTRACT

Accurate evaluation of retinopathy of prematurity (ROP) severity is vital for screening and proper treatment. Current deep-learning-based automated AI systems for assessing ROP severity do not follow clinical guidelines and are opaque. The aim of this study is to develop an interpretable AI system that mimics the clinical screening process to determine the ROP severity level. A total of 6100 RetCam III wide-field digital retinal images were collected from Guangdong Women and Children Hospital at Panyu (PY) and the Zhongshan Ophthalmic Center (ZOC). Of these, 3330 images of 520 pediatric patients from PY were annotated to train an object detection model that detects lesion type and location, and 2770 images of 81 pediatric patients from ZOC were annotated for stage, zone, and the presence of plus disease. Because clinical guidelines determine ROP severity by integrating stage, zone, and the presence of plus disease, the interpretable AI system derives the stage from the lesion type, the zone from the lesion location, and the presence of plus disease from a plus disease classification model; the ROP severity is then calculated accordingly and compared with the assessment of a human expert. Our method achieved an area under the curve (AUC) of 0.95 (95% confidence interval [CI] 0.90-0.98) in assessing the ROP severity level and, compared with clinical doctors, achieved the highest F1 score of 0.76. In conclusion, we developed an interpretable AI system for assessing the severity level of ROP that shows significant potential for use in clinical ROP screening.
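The interpretability of the system above comes from the final step being an explicit rule table over stage, zone, and plus disease rather than an opaque classifier. The sketch below illustrates that kind of rule integration; the thresholds follow the spirit of the widely used Type 1 / Type 2 ROP criteria, but the exact rules here are illustrative, not the paper's:

```python
def rop_severity(stage: int, zone: int, plus: bool) -> str:
    """Map detected stage (0-3), zone (1-3), and plus disease to a severity
    label, in the style of guideline-based (ETROP-like) integration.
    Treat these thresholds as illustrative placeholders."""
    if zone == 1 and (plus or stage == 3):
        return "type 1"                       # treatment-requiring
    if zone == 2 and stage in (2, 3) and plus:
        return "type 1"
    if (zone == 1 and stage in (1, 2) and not plus) or \
       (zone == 2 and stage == 3 and not plus):
        return "type 2"                       # close follow-up
    if stage == 0:
        return "no ROP"
    return "mild"                             # routine follow-up
```

Because each input (stage, zone, plus) is produced by a dedicated, auditable model, a clinician can trace any severity call back to the lesion detections that triggered it.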

20.
J Med Imaging (Bellingham) ; 11(4): 044006, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39185474

ABSTRACT

Purpose: We address the need for effective stain domain adaptation methods in histopathology to enhance the performance of downstream computational tasks, particularly classification. Existing methods exhibit varying strengths and weaknesses, prompting the exploration of a different approach. The focus is on improving stain color consistency, expanding the stain domain scope, and minimizing the domain gap between image batches. Approach: We introduce a new domain adaptation method, Stain Simultaneous Augmentation and Normalization (Stain SAN), designed to adjust the distribution of stain colors to align with a target distribution. Stain SAN combines the merits of established methods, such as stain normalization, stain augmentation, and stain mix-up, while mitigating their inherent limitations: it adapts stain domains by resampling stain color matrices from a well-structured target distribution. Results: Experimental evaluations of cross-dataset clinical estrogen receptor status classification demonstrate the efficacy of Stain SAN and its superior performance compared with existing stain adaptation methods; in one case, the area under the curve (AUC) increased by 11.4%. Overall, our results show steady improvement across successive generations of these methods, culminating in the substantial enhancement provided by Stain SAN. Furthermore, Stain SAN achieves results comparable with the state-of-the-art generative adversarial network-based approach, HistAuGAN, without requiring separate training for stain adaptation or access to the target domain during training, making it both effective and computationally efficient. Conclusions: Stain SAN emerges as a promising solution that addresses the shortcomings of contemporary stain adaptation methods. Its effectiveness is underscored by notable improvements in clinical estrogen receptor status classification, where it achieves the best AUC performance. These findings endorse Stain SAN as a robust approach for stain domain adaptation in histopathology images, with implications for advancing computational tasks in the field.
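The core operation described above, resampling stain color matrices from a target distribution, can be sketched in a few lines: unmix optical-density pixels with the source stain matrix, draw a new matrix near a target reference, and recompose. The reference vectors below are the commonly used Ruifrok-Johnston H&E values; the jitter scale and the random pixels are illustrative, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference H&E stain color matrix (rows: haematoxylin, eosin) as unit-like
# vectors in optical-density space (Ruifrok-Johnston values).
target_mean = np.array([[0.65, 0.70, 0.29],
                        [0.07, 0.99, 0.11]])

def stain_san(od, src_matrix, jitter=0.02):
    """Re-express optical-density pixels (n, 3) under a stain matrix resampled
    around a target distribution -- a sketch of Stain SAN's matrix resampling:
    normalization (move toward the target) and augmentation (jitter) at once."""
    # Unmix: least-squares stain concentrations under the source matrix.
    conc, *_ = np.linalg.lstsq(src_matrix.T, od.T, rcond=None)
    # Resample a stain matrix from a Gaussian around the target mean.
    new_matrix = target_mean + rng.normal(0.0, jitter, target_mean.shape)
    # Recompose pixels under the resampled matrix.
    return (new_matrix.T @ conc).T

od = rng.random((16, 3))                            # fake OD pixels
src = target_mean + rng.normal(0, 0.05, (2, 3))     # perturbed source matrix
adapted = stain_san(od, src)
```

Because both the normalization target and the augmentation noise live in the same resampling step, no separate adaptation training and no target-domain images are needed at training time, which is the efficiency argument made in the abstract.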
