ABSTRACT
In the domain of medical imaging, the advent of deep learning has marked a significant progression, particularly in the nuanced area of periodontal disease diagnosis. This study specifically targets the prevalent issue of scarce labeled data in medical imaging. We introduce a novel unsupervised few-shot learning algorithm, meticulously crafted for classifying periodontal diseases using a limited collection of dental panoramic radiographs. Our method leverages a UNet architecture to generate regions of interest (RoI) from radiographs, which are then processed through a Convolutional Variational Autoencoder (CVAE). This approach is pivotal in extracting critical latent features, which are subsequently clustered using an advanced algorithm. This clustering is key in our methodology, enabling the assignment of labels to images indicative of periodontal diseases, thus circumventing the challenges posed by limited datasets. Our validation process, involving a comparative analysis with traditional supervised learning and standard autoencoder-based clustering, demonstrates a marked improvement in both diagnostic accuracy and efficiency. On three real-world validation datasets, our UNet-CVAE architecture achieved, on average, up to 14% higher accuracy than state-of-the-art supervised models, including a vision transformer, when trained with 100 labeled images. This study not only highlights the capability of unsupervised learning in overcoming data limitations but also sets a new benchmark for diagnostic methodologies in medical AI, potentially transforming practices in data-constrained scenarios.
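A minimal sketch of the latent-feature-clustering stage described above (not the authors' implementation): RoI crops are assumed to be 64x64 grayscale tensors already extracted by a segmentation network such as a UNet, and the layer sizes, latent dimension, and use of k-means are illustrative assumptions.

```python
# Sketch: CVAE latent extraction followed by clustering to assign pseudo-labels.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten())
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 32 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid()) # 32 -> 64

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(self.fc_dec(z).view(-1, 32, 16, 16))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

rois = torch.rand(200, 1, 64, 64)        # stand-in for UNet-derived RoI crops
model = ConvVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                        # a few epochs for illustration
    opt.zero_grad()
    recon, mu, logvar = model(rois)
    vae_loss(recon, rois, mu, logvar).backward()
    opt.step()

with torch.no_grad():
    latent = model.fc_mu(model.enc(rois)).numpy()
labels = KMeans(n_clusters=3, n_init=10).fit_predict(latent)  # cluster-based pseudo-labels
```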
Subject(s)
Deep Learning, Periodontal Diseases, Panoramic Radiography, Humans, Periodontal Diseases/diagnostic imaging, Panoramic Radiography/methods, Algorithms, Unsupervised Machine Learning, Computer-Assisted Image Processing/methods
ABSTRACT
Solid-state NMR spectroscopy (SSNMR) is a powerful technique to probe structural and dynamic properties of biomolecules at an atomic level. Modern SSNMR methods employ multidimensional pulse sequences requiring data collection over a period of days to weeks. Variations in signal intensity or frequency due to environmental fluctuation introduce artifacts into the spectra. Therefore, it is critical to actively monitor instrumentation subject to fluctuations. Here, we demonstrate a method rooted in the unsupervised machine learning algorithm principal component analysis (PCA) to evaluate the impact of environmental parameters that affect sensitivity, resolution and peak positions (chemical shifts) in multidimensional SSNMR protein spectra. PCA loading spectra illustrate the unique features associated with each drifting parameter, while the PCA scores quantify the magnitude of parameter drift. This is demonstrated both for double (HC) and triple resonance (HCN) experiments. Furthermore, we apply this methodology to identify magnetic field B0 drift, and leverage PCA to "denoise" multidimensional SSNMR spectra of the membrane protein, EmrE, using several spectra collected over several days. Finally, we utilize PCA to identify changes in B1 (CP and decoupling) and B0 fields in a manner that we envision could be automated in the future. Overall, these approaches enable improved objectivity in monitoring NMR spectrometers, and are also applicable to other forms of spectroscopy.
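The following is an illustrative sketch of PCA-based drift monitoring and denoising on a stack of repeatedly acquired spectra (rows are scans, columns are spectral points). The synthetic peak, the drift model, and the number of retained components are demonstration-only assumptions, not the acquisition pipeline used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
points = np.linspace(0, 100, 512)                     # spectral axis (arbitrary units)
n_scans = 40
drift = np.linspace(0, 1.5, n_scans)                  # slow frequency drift over time
spectra = np.array([np.exp(-((points - 50 - d) / 2.0) ** 2) for d in drift])
spectra += 0.05 * rng.standard_normal(spectra.shape)  # measurement noise

pca = PCA(n_components=5).fit(spectra)
scores = pca.transform(spectra)

# Scores on the leading component track the magnitude of drift scan by scan;
# the corresponding loading shows the derivative-like "drift signature".
print("PC1 score range:", scores[:, 0].min(), scores[:, 0].max())

# "Denoise" by reconstructing each scan from the first few components only.
denoised = pca.inverse_transform(scores)
residual_rms = np.sqrt(((spectra - denoised) ** 2).mean())
print("residual RMS (mostly noise):", residual_rms)
```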
Subject(s)
Principal Component Analysis, Unsupervised Machine Learning, Biomolecular Nuclear Magnetic Resonance, Algorithms
ABSTRACT
In this study, we utilize genetic algorithms to develop a realistic implicit solvent ultra-coarse-grained (ultra-CG) membrane model comprising only three interaction sites. The key philosophy of the ultra-CG membrane model SMARTINI3 is its compatibility with realistic membrane proteins, for example, modeled within the Martini coarse-grained (CG) model, as well as with the widely used GROMACS software for molecular simulations. Our objective is to parameterize this ultra-CG model to accurately reproduce the experimentally observed structural and thermodynamic properties of phosphatidylcholine (PC) membranes in real units, including properties such as area per lipid, area compressibility, bending modulus, line tension, phase transition temperature, density profile, and radial distribution function. In our example, we specifically focus on the properties of a POPC membrane, although the developed membrane model can be regarded as a generic model of lipid membranes. To optimize the performance of the model (the fitness), we conduct a series of evolutionary runs with diverse random initial population sizes (ranging from 96 to 384). We demonstrate that the ultra-CG membrane model we developed exhibits authentic lipid membrane behaviors, including self-assembly into bilayers, vesicle formation, membrane fusion, and gel phase formation. Moreover, we demonstrate compatibility with the Martini coarse-grained model by successfully reproducing the behavior of a transmembrane domain embedded within a lipid bilayer. This facilitates the simulation of realistic membrane proteins within an ultra-CG bilayer membrane, enhancing the accuracy and applicability of our model in biophysical studies.
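A minimal genetic-algorithm loop of the kind described above, shown for orientation only: in the real workflow each candidate parameter set would be evaluated by running a CG membrane simulation (e.g. in GROMACS) and measuring area per lipid, compressibility, and so on, whereas here `evaluate` is a stand-in function and all GA settings (population size, mutation scale, generations) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.array([0.64, 240.0, 30.0])   # e.g. area/lipid (nm^2), K_A (mN/m), kappa (kT)

def evaluate(params):
    """Stand-in for 'simulate membrane, measure properties'."""
    predicted = params * np.array([1.0, 400.0, 50.0])       # fake structure-property map
    return -np.sum(((predicted - TARGET) / TARGET) ** 2)    # fitness: 0 is perfect

def run_ga(pop_size=96, n_params=3, generations=50, mut_scale=0.05):
    pop = rng.uniform(0.1, 1.0, size=(pop_size, n_params))
    for _ in range(generations):
        fitness = np.array([evaluate(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[::-1][: pop_size // 2]]   # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size, 2))     # random parent pairs
        mask = rng.random((pop_size, n_params)) < 0.5               # uniform crossover
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0, mut_scale, children.shape)        # Gaussian mutation
        pop = np.clip(children, 0.01, 2.0)
    return pop[np.argmax([evaluate(ind) for ind in pop])]

print("best parameter set:", run_ga())
```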
Subject(s)
Lipid Bilayers, Phosphatidylcholines, Phosphatidylcholines/chemistry, Lipid Bilayers/chemistry, Lipid Bilayers/metabolism, Unsupervised Machine Learning, Thermodynamics, Molecular Dynamics Simulation, Algorithms, Membrane Proteins/chemistry, Membrane Proteins/metabolism, Software
ABSTRACT
Although spatial transcriptomics data provide valuable insights into gene expression profiles and the spatial structure of tissues, most studies rely solely on gene expression information, underutilizing the spatial data. To fully leverage the potential of spatial transcriptomics and graph neural networks, the DGSI (Deep Graph Structure Infomax) model is proposed. This graph data processing model uses graph convolutional neural networks and employs an unsupervised learning approach. It maximizes the mutual information between graph-level and node-level representations, emphasizing flexible sampling and aggregation of nodes and their neighbors, which effectively captures and incorporates local information from nodes into the overall graph structure. Additionally, this paper develops the DGSIST framework, an unsupervised cell clustering method that integrates the DGSI model, an SVD dimensionality reduction algorithm, and the k-means++ clustering algorithm to identify cell types accurately. DGSIST makes full use of spatial transcriptomics data and outperforms existing methods in accuracy. Demonstrations across various tissue types and technological platforms show that DGSIST accurately identifies spatial domains in multiple tissue sections. Compared to other spatial clustering methods, DGSIST excels in cell clustering and effectively eliminates batch effects without needing batch correction. DGSIST also performs well in spatial clustering analysis, spatial variation identification, and differential gene expression detection, and applies directly to graph analysis tasks such as node classification, link prediction, and graph clustering. We anticipate that the DGSIST framework will contribute to a deeper understanding of the spatial organizational structures of diseases such as cancer.
Subject(s)
Algorithms, Transcriptome, Cluster Analysis, Transcriptome/genetics, Humans, Gene Expression Profiling/methods, Neural Networks (Computer), Unsupervised Machine Learning, Computational Biology/methods
ABSTRACT
Objective. Identifying the seizure occurrence period (SOP) in extended EEG recordings is crucial for neurologists to diagnose seizures effectively. However, many existing computer-aided diagnosis systems for epileptic seizure detection (ESD) primarily focus on distinguishing between ictal and interictal states in EEG recordings. This focus has limited their application in clinical settings, as these systems typically rely on supervised learning approaches that require labeled data. Approach. To address this, our study introduces an unsupervised learning framework for ESD using a 1D cascaded convolutional autoencoder (1D-CasCAE). In this approach, EEG recordings from selected patients in the CHB-MIT dataset are first segmented into 5 s epochs. Eight informative channels are chosen based on the correlation coefficient and Shannon entropy. The 1D-CasCAE is designed to autonomously learn the characteristic patterns of interictal (non-seizure) segments through downsampling and upsampling processes. The integration of adaptive thresholding and a moving window significantly enhances the model's robustness, enabling it to accurately identify ictal segments in long EEG recordings. Main results. Experimental results demonstrate that the proposed 1D-CasCAE effectively learns normal EEG signal patterns and efficiently detects anomalies (ictal segments) using reconstruction errors. When compared with other leading methods in anomaly detection, our model exhibits superior performance, as evidenced by its average Gmean, sensitivity, specificity, precision, and false positive rate scores of 98.00% ± 3.51%, 94.94% ± 6.92%, 99.60% ± 0.30%, 79.92% ± 13.56% and 0.0044 ± 0.0030 h-1, respectively, for a typical patient in the CHB-MIT dataset. Significance. The developed model framework can be employed in clinical settings, replacing the manual inspection process of EEG signals by neurologists. Furthermore, the proposed automated system can adapt to each patient's SOP through the use of variable time windows for seizure detection.
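A sketch of reconstruction-error anomaly detection with a small 1D convolutional autoencoder, in the spirit of the approach above but not the published 1D-CasCAE: 8 channels, 5 s epochs at 256 Hz (1280 samples), and the percentile-based threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Conv1dAE(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, 7, stride=2, padding=3), nn.ReLU(),  # 1280 -> 640
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU())        # 640 -> 320
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, channels, 7, stride=2, padding=3, output_padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# toy data: interictal epochs for training, mixed epochs for testing
train = torch.randn(64, 8, 1280)
test = torch.randn(32, 8, 1280)

model = Conv1dAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(10):
    opt.zero_grad()
    loss_fn(model(train), train).backward()
    opt.step()

with torch.no_grad():
    err = ((model(test) - test) ** 2).mean(dim=(1, 2))        # per-epoch reconstruction error
    train_err = ((model(train) - train) ** 2).mean(dim=(1, 2))
threshold = torch.quantile(train_err, 0.95)                    # adaptive threshold from training error
flagged = (err > threshold).nonzero(as_tuple=True)[0]          # candidate ictal epochs
print("flagged epochs:", flagged.tolist())
```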
Subject(s)
Electroencephalography, Epilepsy, Seizures, Humans, Electroencephalography/methods, Epilepsy/diagnosis, Epilepsy/physiopathology, Seizures/diagnosis, Seizures/physiopathology, Neural Networks (Computer), Male, Female, Adult, Unsupervised Machine Learning, Computer-Assisted Diagnosis/methods, Algorithms
ABSTRACT
OBJECTIVE: Phenotypes are important for patient classification, disease prognostication, and treatment customization. We aimed to identify distinct clinical phenotypes of children and adolescents hospitalized with SARS-CoV-2 infection, and to evaluate their prognostic differences. METHODS: The German Society of Pediatric Infectious Diseases (DGPI) registry is a nationwide, prospective registry for children and adolescents hospitalized with a SARS-CoV-2 infection in Germany. We applied hierarchical clustering for phenotype identification with variables including sex, SARS-CoV-2-related symptoms on admission, pre-existing comorbidities, clinically relevant coinfection, and SARS-CoV-2 risk factors. The outcomes of this study were discharge status and ICU admission. Discharge status was categorized as full recovery, residual symptoms, and unfavorable prognosis (including damage identified as potentially irreversible at the time of discharge, and SARS-CoV-2-related death). After deriving the phenotypes, we evaluated their association with discharge status using a multinomial logistic regression model, and their association with ICU admission using a binary logistic regression model. We conducted an analogous subgroup analysis for those aged < 1 year (infants) and those aged ⩾ 1 year (non-infants). RESULTS: The DGPI registry enrolled 6983 patients, in whom we identified six distinct phenotypes of children and adolescents with SARS-CoV-2 that can be characterized by their symptom pattern: phenotype A had a range of symptoms, while the predominant symptoms of patients with the other phenotypes were gastrointestinal (95.9%, B), asymptomatic (95.9%, C), lower respiratory tract (49.8%, D), lower respiratory tract and ear, nose and throat (86.2% and 41.7%, E), and neurological (99.2%, F). Regarding discharge status, patients with phenotypes D and E had the highest odds of residual symptoms (OR: 1.33 [1.11, 1.59] and 1.91 [1.65, 2.21], respectively), and patients with phenotype D were significantly more likely (OR: 4.00 [1.95, 8.19]) to have an unfavorable prognosis. Regarding ICU admission, patients with phenotype D had higher odds of being admitted to the ICU rather than staying on a normal ward (OR: 4.26 [3.06, 5.98]) compared with patients with phenotype A. The outcomes observed in infants and non-infants closely resembled those of the entire registered population, except that infants did not exhibit typical neurological/neuromuscular phenotypes. CONCLUSIONS: Phenotypes enable pediatric patient stratification by risk and thus assist in personalized patient care. Our findings in a SARS-CoV-2-infected population might also be transferable to other infectious diseases.
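A schematic of the phenotype-discovery step, hierarchical clustering on binary admission variables (symptoms, comorbidities, risk factors): the synthetic data, Jaccard distance, average linkage, and choice of six clusters are illustrative assumptions, not the registry analysis itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
n_patients, n_features = 500, 20          # e.g. one column per symptom/comorbidity flag
X = (rng.random((n_patients, n_features)) < 0.2).astype(int)

dist = pdist(X, metric="jaccard")         # pairwise dissimilarity of binary profiles
Z = linkage(dist, method="average")
labels = fcluster(Z, t=6, criterion="maxclust")   # cut the dendrogram into 6 phenotypes

for k in np.unique(labels):
    members = X[labels == k]
    print(f"phenotype {k}: n={len(members)}, most prevalent feature index={members.mean(0).argmax()}")
```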
Subject(s)
COVID-19, Phenotype, Registries, Unsupervised Machine Learning, Humans, COVID-19/epidemiology, COVID-19/mortality, COVID-19/diagnosis, Germany/epidemiology, Male, Female, Adolescent, Child, Prognosis, Preschool Child, Infant, Prospective Studies, Hospitalization/statistics & numerical data, SARS-CoV-2
ABSTRACT
Essential genes identified using shRNA and CRISPR screens are not always the same, raising questions about the choice between these two screening platforms. To address this, we systematically compared the performance of CRISPR and shRNA in identifying essential genes across different gene expression levels in 254 cell lines. Because both platforms have a notable false positive rate, we first developed a graph-based unsupervised machine learning model to predict common essential genes and correct for this confounding factor. Furthermore, to preserve the unique characteristics of individual cell lines, we intersected essential genes derived from the biological experiments with the predicted common essential genes. Finally, we employed statistical methods to compare the ability of the two screening platforms to identify essential genes that exhibit differential expression across cell lines. Our analysis yielded several noteworthy findings: (1) shRNA outperforms CRISPR in the identification of lowly expressed essential genes; (2) both screening methodologies perform well in identifying highly expressed essential genes but with limited overlap, so we suggest using a combination of the two platforms for highly expressed essential genes; (3) notably, we did not observe a single gene that is universally essential across all cancer cell lines.
Subject(s)
Essential Genes, Small Interfering RNA, Humans, Small Interfering RNA/genetics, Small Interfering RNA/metabolism, Unsupervised Machine Learning, Clustered Regularly Interspaced Short Palindromic Repeats/genetics, CRISPR-Cas Systems/genetics, Tumor Cell Line, Cell Line
ABSTRACT
Unsupervised learning, particularly clustering, plays a pivotal role in disease subtyping and patient stratification, especially with the abundance of large-scale multi-omics data. Deep learning models, such as variational autoencoders (VAEs), can enhance clustering algorithms by leveraging inter-individual heterogeneity. However, the impact on clustering of confounders (external factors unrelated to the condition, e.g. batch effects or age) is often overlooked, introducing bias and spurious biological conclusions. In this work, we introduce four novel VAE-based deconfounding frameworks tailored for clustering multi-omics data. These frameworks effectively mitigate confounding effects while preserving genuine biological patterns. The deconfounding strategies employed include (i) removal of latent features correlated with confounders, (ii) a conditional VAE, (iii) adversarial training, and (iv) adding a regularization term to the loss function. Using real-life multi-omics data from The Cancer Genome Atlas, we simulated various confounding effects (linear, nonlinear, categorical, mixed) and assessed model performance across 50 repetitions based on reconstruction error, clustering stability, and deconfounding efficacy. Our results demonstrate that our novel models, particularly the conditional multi-omics VAE (cXVAE), successfully handle simulated confounding effects and recover biologically driven clustering structures. cXVAE accurately identifies patient labels and unveils meaningful pathological associations among cancer types, validating the deconfounded representations. Furthermore, our study suggests that some of the proposed strategies, such as adversarial training, prove insufficient for confounder removal. In summary, our study contributes innovative frameworks for simultaneous multi-omics data integration, dimensionality reduction, and deconfounding in clustering. Benchmarking on open-access data offers guidance to end-users, facilitating meaningful patient stratification for optimized precision medicine.
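A minimal sketch of the conditional-VAE idea (strategy ii): the confounder c is fed to both the encoder and the decoder, so the latent code z is free to capture only confounder-independent variation. Layer sizes, the one-hot confounder, and the toy data are illustrative assumptions rather than the cXVAE model itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, n_features=200, n_conf=3, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features + n_conf, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent + n_conf, 128), nn.ReLU(),
                                 nn.Linear(128, n_features))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

x = torch.randn(256, 200)                                # e.g. concatenated multi-omics features
c = F.one_hot(torch.randint(0, 3, (256,)), 3).float()    # e.g. batch membership as confounder

model = ConditionalVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad()
    recon, mu, logvar = model(x, c)
    loss = (F.mse_loss(recon, x, reduction="sum")
            - 0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()))
    loss.backward()
    opt.step()

with torch.no_grad():
    z = model.mu(model.enc(torch.cat([x, c], dim=1)))    # deconfounded embedding for clustering
```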
Subject(s)
Algorithms, Humans, Cluster Analysis, Neoplasms/genetics, Neoplasms/classification, Deep Learning, Genomics/methods, Computational Biology/methods, Unsupervised Machine Learning, Multiomics
ABSTRACT
We can visually discriminate and recognize a wide range of materials. Meanwhile, we use language to describe what we see and communicate relevant information about the materials. Here, we investigate the relationship between visual judgment and language expression to understand how visual features relate to semantic representations in human cognition. We use deep generative models to generate images of realistic materials. Interpolating between the generative models enables us to systematically create material appearances in both well-defined and ambiguous categories. Using these stimuli, we compared the representations of materials from two behavioral tasks: visual material similarity judgments and free-form verbal descriptions. Our findings reveal a moderate but significant correlation between vision and language on a categorical level. However, analyzing the representations with an unsupervised alignment method, we discover structural differences that arise at the image-to-image level, especially among ambiguous materials morphed between known categories. Moreover, visual judgments exhibit more individual differences compared to verbal descriptions. Our results show that while verbal descriptions capture material qualities on the coarse level, they may not fully convey the visual nuances of material appearances. Analyzing the image representation of materials obtained from various pre-trained deep neural networks, we find that similarity structures in human visual judgments align more closely with those of the vision-language models than purely vision-based models. Our work illustrates the need to consider the vision-language relationship in building a comprehensive model for material perception. Moreover, we propose a novel framework for evaluating the alignment and misalignment between representations from different modalities, leveraging information from human behaviors and computational models.
Subject(s)
Language, Psychophysics, Unsupervised Machine Learning, Visual Perception, Humans, Visual Perception/physiology, Psychophysics/methods, Computational Biology, Semantics, Ocular Vision/physiology, Judgment/physiology, Deep Learning, Neural Networks (Computer), Cognition/physiology
ABSTRACT
Peripheral nerve interfaces (PNIs) can enable communication with the peripheral nervous system and have a broad range of applications including in bioelectronic medicine and neuroprostheses. They can modulate neural activity through stimulation or monitor conditions by recording from the peripheral nerves. The recent growth of Machine Learning (ML) has led to the application of a wide variety of ML techniques to PNIs, especially in circumstances where the goal is classification or regression. However, the extent to which ML has been applied to PNIs or the range of suitable ML techniques has not been documented. Therefore, a scoping review was conducted to determine and understand the state of ML in the PNI field. The review searched five databases and included 63 studies after full-text review. Most studies incorporated a supervised learning approach to classify activity, with the most common algorithms being some form of neural network (artificial neural network, convolutional neural network or recurrent neural network). Unsupervised, semi-supervised and reinforcement learning (RL) approaches are currently underutilized and could be better leveraged to improve performance in this domain.
Subject(s)
Algorithms, Machine Learning, Neural Networks (Computer), Peripheral Nerves, Humans, Peripheral Nerves/physiology, Supervised Machine Learning, Unsupervised Machine Learning, Reinforcement (Psychology)
ABSTRACT
Recent advances in measurement technologies, particularly single-cell RNA sequencing (scRNA-seq), have revolutionized our ability to acquire large amounts of omics-level data on cellular states. As measurement techniques evolve, there has been an increasing need for data analysis methodologies, especially those focused on cell-type identification and inference of gene regulatory networks (GRNs). We have developed a new method named BootCellNet, which employs smoothing and resampling to infer GRNs. Using the inferred GRNs, BootCellNet further infers the minimum dominating set (MDS), a set of genes that determines the dynamics of the entire network. We have demonstrated that BootCellNet robustly infers GRNs and their MDSs from scRNA-seq data and facilitates unsupervised identification of cell clusters using scRNA-seq datasets of peripheral blood mononuclear cells and hematopoiesis. It has also identified COVID-19 patient-specific cells and their potential regulatory transcription factors. BootCellNet not only identifies cell types in an unsupervised and explainable way but also provides insights into the characteristics of identified cell types through the inference of GRNs and MDS.
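A sketch of the minimum-dominating-set (MDS) step on an inferred GRN, using a standard greedy approximation that repeatedly picks the node whose closed neighborhood covers the most not-yet-dominated nodes. The toy edge list is an assumption standing in for a network produced by the GRN-inference step described above.

```python
import networkx as nx

def greedy_mds(G):
    dominated, mds = set(), set()
    while len(dominated) < G.number_of_nodes():
        # pick the node whose closed neighborhood covers most undominated nodes
        best = max(G.nodes,
                   key=lambda n: len(({n} | set(G.neighbors(n))) - dominated))
        mds.add(best)
        dominated |= {best} | set(G.neighbors(best))
    return mds

grn = nx.Graph([("TF1", "geneA"), ("TF1", "geneB"), ("TF2", "geneB"),
                ("TF2", "geneC"), ("TF3", "geneD")])
print("dominating regulators:", greedy_mds(grn))
```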
Subject(s)
COVID-19, Computational Biology, Gene Regulatory Networks, Single-Cell Analysis, Humans, Gene Regulatory Networks/genetics, Single-Cell Analysis/methods, COVID-19/genetics, Computational Biology/methods, SARS-CoV-2/genetics, Algorithms, Mononuclear Leukocytes/metabolism, Hematopoiesis/genetics, RNA Sequence Analysis/methods, Unsupervised Machine Learning, Gene Expression Profiling/methods
ABSTRACT
PURPOSE: Cardiac cine magnetic resonance imaging (MRI) is an important tool in assessing dynamic heart function. However, this technique requires long acquisition times and long breath holds, which presents difficulties. The aim of this study is to propose an unsupervised neural network framework that can perform cardiac cine interpolation in time, so that the temporal resolution of cardiac cine can be increased without increasing acquisition time. METHODS: In this study, a subject-specific unsupervised generative neural network is designed to perform temporal interpolation for cardiac cine MRI. The network takes in a 2D latent vector, in which each element corresponds to one cardiac phase of the cardiac cycle, and outputs the cardiac cine images acquired on the scanner. After training the generative network, we can interpolate the 2D latent vector and feed the interpolated latent vector into the network, which then outputs frame-interpolated cine images. The results of the proposed cine interpolation neural network (CINN) framework are compared quantitatively and qualitatively with other state-of-the-art methods, the ground truth training cine frames, and the ground truth frames removed from the original acquisition. Signal-to-noise ratio (SNR), structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), strain analysis, and sharpness calculated using the Tenengrad algorithm were used for image quality assessment. RESULTS: The proposed framework learns the generative task well and hence performs the temporal interpolation task well, as shown both quantitatively and qualitatively. Quantitative and qualitative comparison studies further demonstrate the effectiveness of the proposed framework for cardiac cine interpolation in time. CONCLUSION: The proposed generative model can effectively learn the generative task and perform high-quality cardiac cine interpolation in time.
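For orientation, a toy sketch of latent-space temporal interpolation: a small generator maps a per-phase latent code to an image, and new frames come from decoding codes interpolated between neighboring cardiac phases. The generator architecture, the circular phase codes, and the linear interpolation scheme are illustrative assumptions, not the published CINN design.

```python
import math
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent=2, size=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, size * size), nn.Sigmoid())
        self.size = size

    def forward(self, z):
        return self.net(z).view(-1, 1, self.size, self.size)

n_phases = 25
angles = torch.arange(n_phases).float() * 2 * math.pi / n_phases
phase_codes = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)  # one 2D code per phase

gen = Generator()
# ... after fitting gen so that gen(phase_codes[k]) reproduces acquired frame k ...
z_mid = 0.5 * (phase_codes[3] + phase_codes[4])   # code halfway between phases 3 and 4
interpolated_frame = gen(z_mid.unsqueeze(0))      # synthesized intermediate frame
print(interpolated_frame.shape)                   # torch.Size([1, 1, 64, 64])
```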
Subject(s)
Cine Magnetic Resonance Imaging, Neural Networks (Computer), Humans, Cine Magnetic Resonance Imaging/methods, Algorithms, Unsupervised Machine Learning, Computer-Assisted Image Processing/methods, Heart/diagnostic imaging
ABSTRACT
The data deluge in biology calls for computational approaches that can integrate multiple datasets of different types to build a holistic view of biological processes or structures of interest. An emerging paradigm in this domain is the unsupervised learning of data embeddings that can be used for downstream clustering and classification tasks. While such approaches for integrating data of similar types are becoming common, there is scarcer work on consolidating different data modalities such as network and image information. Here, we introduce DICE (Data Integration through Contrastive Embedding), a contrastive learning model for multi-modal data integration. We apply this model to study the subcellular organization of proteins by integrating protein-protein interaction data and protein image data measured in HEK293 cells. We demonstrate the advantage of data integration over any single modality and show that our framework outperforms previous integration approaches. Availability: https://github.com/raminass/protein-contrastive Contact: raminass@gmail.com.
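A minimal sketch of contrastive multi-modal integration of the kind described: two small encoders map each protein's interaction profile and image features into a shared space, and an InfoNCE-style loss pulls matched pairs together. Encoder sizes, feature dimensions, and the temperature are illustrative assumptions, not the DICE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc_ppi = nn.Sequential(nn.Linear(500, 128), nn.ReLU(), nn.Linear(128, 64))
enc_img = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))

def info_nce(a, b, temperature=0.1):
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / temperature          # similarity of every cross-modal pair
    targets = torch.arange(len(a))            # matched pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

ppi_feats = torch.randn(32, 500)   # e.g. rows of a PPI profile matrix
img_feats = torch.randn(32, 256)   # e.g. pooled image features for the same 32 proteins

opt = torch.optim.Adam(list(enc_ppi.parameters()) + list(enc_img.parameters()), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    info_nce(enc_ppi(ppi_feats), enc_img(img_feats)).backward()
    opt.step()

embedding = enc_ppi(ppi_feats).detach()   # joint embedding used for downstream clustering
```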
Subject(s)
Computational Biology, Humans, HEK293 Cells, Computational Biology/methods, Protein Interaction Mapping/methods, Proteins/metabolism, Proteins/chemistry, Unsupervised Machine Learning
ABSTRACT
Image stitching is a traditional but challenging computer vision task. The goal is to stitch together multiple images with overlapping areas into a single, natural-looking, high-resolution image without ghosts or seams. This article aims to increase the field of view of gastroenteroscopy and reduce the missed detection rate. To this end, an improved deep framework for unsupervised panoramic image stitching of the gastrointestinal tract is proposed. In addition, preprocessing for aberration correction of monocular endoscope images is introduced, and a C2f module is added to the image reconstruction network to improve the network's ability to extract features. A comprehensive real-image dataset, GASE-Dataset, is proposed to establish an evaluation benchmark and training framework for unsupervised deep gastrointestinal image stitching. Experimental results show that the MSE, RMSE, PSNR, SSIM and RMSE_SW indicators are improved, while the stitching time remains within an acceptable range. Compared with traditional image stitching methods, the performance of this method is enhanced. In addition, improvements are proposed to address the lack of annotated data, insufficient generalization ability and limited overall performance of image stitching schemes based on supervised learning. These improvements provide valuable aid in gastrointestinal examination.
Subject(s)
Algorithms, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Gastrointestinal Tract/diagnostic imaging, Deep Learning, Unsupervised Machine Learning, Gastrointestinal Endoscopy/methods
ABSTRACT
There is considerable evidence that action potentials are accompanied by "intrinsic optical signals", such as a nanometer-scale motion of the cell membrane. Here we present ChiSCAT, a technically simple imaging scheme that detects such signals with interferometric sensitivity. ChiSCAT combines illumination by a chaotic speckle pattern and interferometric scattering microscopy (iSCAT) to sensitively detect motion in any direction. The technique features reflective high-NA illumination, common-path suppression of vibrations, and a large field of view. This approach maximizes sensitivity to motion, but does not produce a visually interpretable image. We show that unsupervised learning based on matched filtering and motif discovery can recover underlying motion patterns and detect action potentials. We demonstrate these claims in an experiment on blebbistatin-paralyzed cardiomyocytes. ChiSCAT opens the door to action potential measurement in scattering tissue, including a living brain.
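The matched-filtering idea above can be illustrated with a toy one-dimensional example: correlate a noisy trace with a motif template and flag peaks in the filter output as candidate events. The synthetic trace, template shape, and threshold are demonstration-only assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1000                                     # samples per second
t = np.arange(0, 5, 1 / fs)
template = np.exp(-0.5 * (np.arange(-50, 50) / 10.0) ** 2)   # motif ~100 samples long
trace = 0.3 * rng.standard_normal(t.size)
true_events = [800, 2100, 3500]
for e in true_events:
    trace[e:e + template.size] += template    # bury three motif occurrences in noise

# matched filter = correlation with the (normalized) template
mf = np.correlate(trace, template / np.linalg.norm(template), mode="same")
threshold = mf.mean() + 5 * mf.std()
candidates = np.flatnonzero((mf > threshold) &
                            (mf >= np.roll(mf, 1)) & (mf >= np.roll(mf, -1)))
print("candidate event samples:", candidates)
```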
Subject(s)
Action Potentials, Cardiac Myocytes, Animals, Unsupervised Machine Learning, Cell Movement/drug effects, Interference Microscopy/methods
ABSTRACT
Unsupervised domain adaptation for medical image segmentation aims to segment unlabeled target-domain images using labeled source-domain images. However, different medical imaging modalities lead to large domain shifts between their images, so models well trained on one imaging modality often fail to segment images from another. In this paper, to mitigate the domain shift between source domain and target domain, a style consistency unsupervised domain adaptation image segmentation method is proposed. First, a local phase-enhanced style fusion method is designed to mitigate domain shift and produce locally enhanced organs of interest. Second, a phase consistency discriminator is constructed to distinguish the phase consistency of domain-invariant features between the source domain and the target domain, so as to enhance the disentanglement of the domain-invariant and style encoders and the removal of domain-specific features from the domain-invariant encoder. Third, a style consistency estimation method is proposed to obtain inconsistency maps from intermediate synthesized target-domain images with different styles, in order to measure the difficult regions, mitigate the domain shift between synthesized and real target-domain images, and improve the integrity of the organs of interest. Fourth, a style consistency entropy is defined for target-domain images to further improve the integrity of the organs of interest by concentrating on the inconsistent regions. Comprehensive experiments have been performed with an in-house dataset and a publicly available dataset. The experimental results demonstrate the superiority of our framework over state-of-the-art methods.
Subject(s)
Algorithms, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Unsupervised Machine Learning, X-Ray Computed Tomography/methods
ABSTRACT
Understanding the mechanism by which the brain achieves relatively consistent information processing contrary to its inherent inconsistency in activity is one of the major challenges in neuroscience. Recently, it has been reported that the consistency of neural responses to stimuli that are presented repeatedly is enhanced implicitly in an unsupervised way, and results in improved perceptual consistency. Here, we propose the term "selective consistency" to describe this input-dependent consistency and hypothesize that it will be acquired in a self-organizing manner by plasticity within the neural system. To test this, we investigated whether a reservoir-based plastic model could acquire selective consistency to repeated stimuli. We used white noise sequences randomly generated in each trial and referenced white noise sequences presented multiple times. The results showed that the plastic network was capable of acquiring selective consistency rapidly, with as little as five exposures to stimuli, even for white noise. The acquisition of selective consistency could occur independently of performance optimization, as the network's time-series prediction accuracy for referenced stimuli did not improve with repeated exposure and optimization. Furthermore, the network could only achieve selective consistency when in the region between order and chaos. These findings suggest that the neural system can acquire selective consistency in a self-organizing manner and that this may serve as a mechanism for certain types of learning.
Subject(s)
Computational Biology, Neurological Models, Neural Networks (Computer), Neuronal Plasticity, Neuronal Plasticity/physiology, Humans, Unsupervised Machine Learning, Nerve Net/physiology, Brain/physiology, Learning/physiology, Perception/physiology
ABSTRACT
MOTIVATION: In the realm of precision medicine, effective patient stratification and disease subtyping demand innovative methodologies tailored for multi-omics data. Clustering techniques applied to multi-omics data have become instrumental in identifying distinct subgroups of patients, enabling a finer-grained understanding of disease variability. Meanwhile, clinical datasets are often small and must be aggregated from multiple hospitals. Online data sharing, however, is seen as a significant challenge due to privacy concerns, potentially impeding big data's role in medical advancements using machine learning. This work establishes a powerful framework for advancing precision medicine through unsupervised random forest-based clustering in combination with federated computing. RESULTS: We introduce a novel multi-omics clustering approach utilizing unsupervised random forests. The unsupervised nature of the random forest enables the determination of cluster-specific feature importance, unraveling key molecular contributors to distinct patient groups. Our methodology is designed for federated execution, a crucial aspect in the medical domain where privacy concerns are paramount. We have validated our approach on machine learning benchmark datasets as well as on cancer data from The Cancer Genome Atlas. Our method is competitive with the state-of-the-art in terms of disease subtyping, but at the same time substantially improves the cluster interpretability. Experiments indicate that local clustering performance can be improved through federated computing. AVAILABILITY AND IMPLEMENTATION: The proposed methods are available as an R-package (https://github.com/pievos101/uRF).
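A sketch of unsupervised random-forest clustering via the classic synthetic-data trick: a forest is trained to separate real samples from a column-permuted copy, co-occurrence of real samples in the same leaves yields a proximity matrix, and the proximities are clustered. The data, forest size, and number of clusters are illustrative assumptions; this is not the uRF package itself, and the federated aspect is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])  # two hidden groups

X_perm = np.column_stack([rng.permutation(col) for col in X.T])  # destroys joint structure
X_all = np.vstack([X, X_perm])
y_all = np.r_[np.ones(len(X)), np.zeros(len(X_perm))]

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_all, y_all)
leaves = rf.apply(X)                                   # leaf index per real sample per tree
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

dist = squareform(1.0 - proximity, checks=False)       # condensed distance for linkage
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
# forest feature importances hint at which variables drive the grouping
print("top feature index:", rf.feature_importances_.argmax())
```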
Subject(s)
Precision Medicine, Humans, Cluster Analysis, Precision Medicine/methods, Unsupervised Machine Learning, Machine Learning, Neoplasms, Privacy, Algorithms, Random Forest
ABSTRACT
Objective. We investigated fluctuations of the photoplethysmography (PPG) waveform in patients undergoing surgery. There is an association between the morphologic variation extracted from arterial blood pressure (ABP) signals and short-term surgical outcomes. The underlying physiology may reflect the numerous regulatory mechanisms acting on the cardiovascular system. We hypothesized that similar information might exist in the PPG waveform. However, due to the principles of light absorption, the noninvasive PPG signal is more susceptible to artifacts and necessitates meticulous signal processing. Approach. Employing the unsupervised manifold learning algorithm dynamic diffusion map, we quantified multivariate waveform morphological variations from the continuous PPG waveform signal. Additionally, we developed several data analysis techniques to mitigate PPG signal artifacts and enhance performance, and subsequently validated them using a real-life clinical database. Main results. Our findings show similar associations between the PPG waveform during surgery and short-term surgical outcomes, consistent with the observations from ABP waveform analysis. Significance. The variation of morphological information in the PPG waveform during major surgery carries clinical meaning, which may open new opportunities for the PPG waveform in a wider range of biomedical applications, given its non-invasive nature.
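As a rough illustration of the manifold-learning step, the following is a minimal diffusion-map embedding of per-beat waveform vectors. The Gaussian kernel, fixed bandwidth, and synthetic beats are illustrative assumptions; the dynamic diffusion map used in the study is more involved.

```python
import numpy as np

rng = np.random.default_rng(5)
n_beats, n_samples = 300, 100
phase = np.linspace(0, np.pi, n_samples)
morph = np.linspace(0.8, 1.2, n_beats)                  # slow morphological drift across beats
beats = np.array([m * np.sin(phase) ** 2 for m in morph])
beats += 0.02 * rng.standard_normal(beats.shape)

# pairwise distances -> Gaussian affinity -> row-normalized diffusion operator
d2 = ((beats[:, None, :] - beats[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)                                     # simple bandwidth heuristic
K = np.exp(-d2 / eps)
P = K / K.sum(axis=1, keepdims=True)

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
embedding = vecs[:, order[1:3]].real * vals[order[1:3]].real  # first nontrivial coordinates
print("embedding shape:", embedding.shape)              # (300, 2) beat-wise coordinates
```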