Results 1 - 20 of 1,003
1.
Eur Heart J ; 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39322420

ABSTRACT

Digital twins, which are in silico replicas of an individual and their environment, have advanced clinical decision-making and prognostication in cardiovascular medicine. The technology enables personalized simulations of clinical scenarios, prediction of disease risk, and strategies for clinical trial augmentation. Current applications of cardiovascular digital twins have integrated multi-modal data into mechanistic and statistical models to build physiologically accurate cardiac replicas to enhance disease phenotyping, enrich diagnostic workflows, and optimize procedural planning. Digital twin technology is rapidly evolving in the setting of newly available data modalities and advances in generative artificial intelligence, enabling dynamic and comprehensive simulations unique to an individual. These twins fuse physiologic, environmental, and healthcare data into machine learning and generative models to build real-time patient predictions that can model interactions with the clinical environment to accelerate personalized patient care. This review summarizes digital twins in cardiovascular medicine and their potential future applications by incorporating new personalized data modalities. It examines the technical advances in deep learning and generative artificial intelligence that broaden the scope and predictive power of digital twins. Finally, it highlights the individual and societal challenges as well as ethical considerations that are essential to realizing the future vision of incorporating cardiology digital twins into personalized cardiovascular care.

2.
Front Neurorobot ; 18: 1442080, 2024.
Article in English | MEDLINE | ID: mdl-39323931

ABSTRACT

Physiological signal recognition is crucial in emotion recognition, and recent advances in multi-modal fusion have enabled the integration of various physiological signals for improved recognition tasks. However, current models for emotion recognition from highly complex multi-modal signals are limited by their fusion methods and insufficient attention mechanisms, preventing further gains in classification performance. To address these challenges, we propose a new model framework named Signal Channel Attention Network (SCA-Net), which comprises three main components: an encoder, an attention fusion module, and a decoder. For the attention fusion module, we developed five types of attention mechanisms inspired by existing research and performed comparative experiments on the public MAHNOB-HCI dataset. These experiments demonstrate that the attention modules added to our baseline model improve both accuracy and F1 score. We also conducted ablation experiments on the most effective attention fusion module to verify the benefits of multi-modal fusion. Additionally, we tuned the training process for each attention fusion module by employing different early stopping parameters to prevent overfitting.
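The abstract does not specify the architecture of the attention fusion module, so the following is only a minimal sketch (PyTorch) of attention-weighted fusion over per-modality signal embeddings, in the spirit described; the layer sizes and scoring network are illustrative assumptions, not the authors' published design.

```python
# Hypothetical attention fusion over stacked per-modality embeddings.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One scalar relevance score per modality embedding (assumed design).
        self.score = nn.Sequential(nn.Linear(dim, dim // 2), nn.Tanh(),
                                   nn.Linear(dim // 2, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim) stacked signal embeddings.
        w = torch.softmax(self.score(feats), dim=1)   # (batch, n_mod, 1)
        return (w * feats).sum(dim=1)                 # fused (batch, dim)

fusion = AttentionFusion(dim=128)
fused = fusion(torch.randn(8, 3, 128))  # e.g. EEG, ECG, GSR embeddings
print(fused.shape)  # torch.Size([8, 128])
```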

3.
Lang Resour Eval ; 58(3): 883-902, 2024.
Article in English | MEDLINE | ID: mdl-39323983

ABSTRACT

Dementia affects cognitive functions of adults, including memory, language, and behaviour. Standard diagnostic biomarkers such as MRI are costly, whilst neuropsychological tests suffer from sensitivity issues in detecting dementia onset. The analysis of speech and language has emerged as a promising and non-intrusive technology to diagnose and monitor dementia. Currently, most work in this direction ignores the multi-modal nature of human communication and interactive aspects of everyday conversational interaction. Moreover, most studies ignore changes in cognitive status over time due to the lack of consistent longitudinal data. Here we introduce a novel fine-grained longitudinal multi-modal corpus collected in a natural setting from healthy controls and people with dementia over two phases, each spanning 28 sessions. The corpus consists of spoken conversations, a subset of which are transcribed, as well as typed and written thoughts and associated extra-linguistic information such as pen strokes and keystrokes. We present the data collection process and describe the corpus in detail. Furthermore, we establish baselines for capturing longitudinal changes in language across different modalities for two cohorts, healthy controls and people with dementia, outlining future research directions enabled by the corpus.

4.
PeerJ Comput Sci ; 10: e2262, 2024.
Article in English | MEDLINE | ID: mdl-39314679

ABSTRACT

In crisis management, quickly identifying and helping affected individuals is key, especially when there is limited information about the survivors' conditions. Traditional emergency systems often face issues with reachability and handling large volumes of requests. Social media has become crucial in disaster response, providing important information and aiding in rescues when standard communication systems fail. Due to the large amount of data generated on social media during emergencies, there is a need for automated systems to process this information effectively and help improve emergency responses, potentially saving lives. Therefore, accurately understanding visual scenes and their meanings is important for identifying damage and obtaining useful information. Our research introduces a framework for detecting damage in social media posts, combining the Bidirectional Encoder Representations from Transformers (BERT) architecture with advanced convolutional processing. This framework includes a BERT-based network for analyzing text and multiple convolutional neural network blocks for processing images. The results show that this combination is very effective, outperforming existing methods in accuracy, recall, and F1 score. In the future, this method could be enhanced by including more types of information, such as human voices or background sounds, to improve its prediction efficiency.
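As a rough illustration of the described text-image combination, the sketch below pairs a BERT encoder for post text with a small convolutional branch for the attached image, fused by concatenation. The model name, branch sizes, and classification head are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch: BERT text branch + CNN image branch, late fusion.
import torch
import torch.nn as nn
from transformers import BertModel

class DamageClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())      # -> (batch, 32)
        self.head = nn.Linear(768 + 32, n_classes)      # 768 = BERT hidden

    def forward(self, input_ids, attention_mask, image):
        text = self.bert(input_ids=input_ids,
                         attention_mask=attention_mask).pooler_output
        return self.head(torch.cat([text, self.cnn(image)], dim=1))
```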

5.
Phys Med Biol ; 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39317235

ABSTRACT

OBJECTIVE: Joint segmentation of tumors in PET-CT images is crucial for precise treatment planning. However, current segmentation methods often use addition or concatenation to fuse PET and CT images, which potentially overlooks the nuanced interplay between these modalities. Additionally, these methods often neglect multi-view information that is helpful for more accurately locating and segmenting the target structure. This study aims to address these disadvantages and develop a deep learning-based algorithm for joint segmentation of tumors in PET-CT images. APPROACH: To address these limitations, we propose the Multi-view Information Enhancement and Multi-modal Feature Fusion Network (MIEMFF-Net) for joint tumor segmentation in three-dimensional PET-CT images. Our model incorporates a dynamic multi-modal fusion strategy to effectively exploit the metabolic and anatomical information from PET and CT images, and a multi-view information enhancement strategy to recover information lost during upsampling. A Multi-scale Spatial Perception Block is proposed to effectively extract information from different views and reduce redundant interference in the multi-view feature extraction process. MAIN RESULTS: The proposed MIEMFF-Net achieved a Dice score of 83.93%, a precision of 81.49%, a sensitivity of 87.89%, and an IoU of 69.27% on the STS dataset, and a Dice score of 76.83%, a precision of 86.21%, a sensitivity of 80.73%, and an IoU of 65.15% on the AutoPET dataset. SIGNIFICANCE: Experimental results demonstrate that MIEMFF-Net outperforms existing state-of-the-art (SOTA) models, which implies potential applications of the proposed method in clinical practice.

6.
Article in English | MEDLINE | ID: mdl-39303999

ABSTRACT

BACKGROUND: The ambiguous boundaries of tumors and organs at risk (OARs) seen in medical images pose challenges in treatment planning and other tasks in radiotherapy. METHODS: This study introduces an innovative analytical algorithm, Multi-Modal Image Confidence (MMC), which exploits the collective strengths of complementary multi-modal medical images to determine a confidence measure for each voxel belonging to the region of interest (ROI). MMC facilitates the creation of modality-specific ROI-enhanced images, enabling a detailed representation of the ROI's boundaries and internal features. By employing an interpretable mathematical model that propagates voxel confidence based on inter-voxel correlations, MMC avoids the need for model training, distinguishing it from deep learning (DL)-based methods. RESULTS: The performance of the proposed algorithm was qualitatively and quantitatively evaluated using 156 nasopharyngeal carcinoma cases and 1251 glioma cases. Qualitative assessments underscored the accuracy of MMC and ROI-enhanced images in estimating lesion boundaries and capturing internal tumor characteristics. Quantitative analyses revealed strong agreement between MMC and manual delineations. CONCLUSION: This paper introduces a novel analytical algorithm to identify and depict ROI boundaries based on complementary multi-modal 3D medical images. The applicability of the proposed method can extend to both targets and OARs at diverse anatomical sites across multiple image modalities, amplifying its potential for augmenting radiotherapy-related tasks.

7.
Exp Gerontol ; : 112585, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39306310

ABSTRACT

Parkinson's disease (PD) is a prevalent neurological disorder characterized by progressive dopaminergic neuron loss, leading to both motor and non-motor symptoms. Early and accurate diagnosis is challenging due to the subtle and variable nature of early symptoms. This study aims to address these diagnostic challenges by proposing a novel method, Localized Region Extraction and Multi-Modal Fusion (LRE-MMF), designed to enhance diagnostic accuracy through the integration of structural MRI (sMRI) and resting-state functional MRI (rs-fMRI) data. The LRE-MMF method utilizes the complementary strengths of sMRI and rs-fMRI: sMRI provides detailed anatomical information, while rs-fMRI captures functional connectivity patterns. We applied this approach to a dataset consisting of 20 PD patients and 20 healthy controls (HC), all scanned on a 3 T MRI scanner. The primary objective was to determine whether integrating sMRI and rs-fMRI through the LRE-MMF method improves classification accuracy between PD and HC subjects. LRE-MMF divides the imaging data into localized regions, followed by feature extraction and dimensionality reduction using principal component analysis (PCA). The resulting features were fused and processed through a neural network to learn high-level representations. The model achieved an accuracy of 75%, with a precision of 0.8125, a recall of 0.65, and an AUC of 0.8875. The validation accuracy curves indicated good generalization, and significant brain regions were identified, including the caudate, putamen, thalamus, supplementary motor area, and precuneus, per the AAL atlas. These results demonstrate the potential of the LRE-MMF method for improving early diagnosis and understanding of PD by effectively utilizing both sMRI and rs-fMRI data. This approach could contribute to the development of more accurate diagnostic tools.
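A minimal sketch of the per-modality PCA plus concatenation-fusion recipe described above, assuming the regional feature matrices have already been extracted; the array shapes, component counts, and classifier here are placeholders, not the authors' settings.

```python
# Hypothetical PCA-based fusion of sMRI and rs-fMRI feature matrices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_smri = rng.normal(size=(40, 500))   # 40 subjects x regional sMRI features
X_fmri = rng.normal(size=(40, 800))   # 40 subjects x rs-fMRI connectivity
y = np.repeat([0, 1], 20)             # 20 HC, 20 PD (synthetic labels)

# Reduce each modality separately, then fuse by concatenation.
fused = np.hstack([PCA(n_components=20).fit_transform(X_smri),
                   PCA(n_components=20).fit_transform(X_fmri)])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(cross_val_score(clf, fused, y, cv=5).mean())
```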

8.
Comput Methods Programs Biomed ; 257: 108400, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39270533

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate prognosis prediction for cancer patients plays a significant role in the formulation of treatment strategies, considerably impacting personalized medicine. Recent advancements in this field indicate that integrating information from various modalities, such as genetic and clinical data, and developing multi-modal deep learning models can enhance prediction accuracy. However, most existing multi-modal deep learning methods either overlook patient similarities that benefit prognosis prediction or fail to effectively capture diverse information due to measuring patient similarities from a single perspective. To address these issues, a novel framework called multi-modal multi-view graph convolutional networks (MMGCN) is proposed for cancer prognosis prediction. METHODS: Initially, we utilize the similarity network fusion (SNF) algorithm to merge patient similarity networks (PSNs), individually constructed using gene expression, copy number alteration, and clinical data, into a fused PSN for integrating multi-modal information. To capture diverse perspectives of patient similarities, we treat the fused PSN as a multi-view graph by considering each single-edge-type subgraph as a view graph, and propose multi-view graph convolutional networks (GCNs) with a view-level attention mechanism. Moreover, an edge homophily prediction module is designed to alleviate the adverse effects of heterophilic edges on the representation power of GCNs. Finally, comprehensive representations of patient nodes are obtained to predict cancer prognosis. RESULTS: Experimental results demonstrate that MMGCN outperforms state-of-the-art baselines on four public datasets, including METABRIC, TCGA-BRCA, TCGA-LGG, and TCGA-LUSC, with the area under the receiver operating characteristic curve achieving 0.827 ± 0.005, 0.805 ± 0.014, 0.925 ± 0.007, and 0.746 ± 0.013, respectively. CONCLUSIONS: Our study reveals the effectiveness of the proposed MMGCN, which deeply explores patient similarities related to different modalities from a broad perspective, in enhancing the performance of multi-modal cancer prognosis prediction. The source code is publicly available at https://github.com/ping-y/MMGCN.
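The SNF step can be sketched with the open-source `snfpy` package; using this particular library, and the synthetic data blocks below, are assumptions on our part (the authors' implementation may differ), but the pattern of per-modality affinity networks fused into one patient similarity network matches the description.

```python
# Hedged sketch of similarity network fusion across three data modalities.
import numpy as np
import snf  # pip install snfpy (assumed library choice)

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 2000))   # gene expression (patients x genes)
cna = rng.normal(size=(100, 500))     # copy number alteration
clin = rng.normal(size=(100, 20))     # clinical variables

# One affinity (patient similarity) network per modality, then fuse.
affinities = snf.make_affinity([expr, cna, clin],
                               metric='euclidean', K=20, mu=0.5)
fused_psn = snf.snf(affinities, K=20)  # fused patient similarity network
print(fused_psn.shape)                 # (100, 100)
```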

9.
Adv Cancer Res ; 163: 39-70, 2024.
Article in English | MEDLINE | ID: mdl-39271267

ABSTRACT

Unveiling the intricate interplay of cells in their native environment lies at the heart of understanding fundamental biological processes and unraveling disease mechanisms, particularly in complex diseases like cancer. Spatial transcriptomics (ST) offers a revolutionary lens into the spatial organization of gene expression within tissues, empowering researchers to study both cell heterogeneity and microenvironments in health and disease. However, current ST technologies often face limitations in either resolution or the number of genes profiled simultaneously. Integrating ST data with complementary sources, such as single-cell transcriptomics and detailed tissue staining images, presents a powerful solution to overcome these limitations. This review delves into the computational approaches driving the integration of spatial transcriptomics with other data types. By illuminating the key challenges and outlining the current algorithmic solutions, we aim to highlight the immense potential of these methods to revolutionize our understanding of cancer biology.


Subjects
Neoplasms; Humans; Neoplasms/pathology; Neoplasms/genetics; Computational Biology/methods; Gene Expression Profiling/methods; Transcriptome; Single-Cell Analysis/methods; Animals; Tumor Microenvironment; Algorithms
10.
Adv Mater ; : e2406977, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223900

ABSTRACT

The integration of visual simulation and biorehabilitation devices promises great applications for sustainable electronics, on-demand integration, and neuroscience. However, achieving a multifunctional synergistic biomimetic system with tunable optoelectronic properties at the individual device level remains a challenge. Here, an electro-optically configurable transistor is shown that employs a conjugated polymer as the semiconductor layer and an insulating polymer, poly(1,8-octanediol-co-citrate) (POC), with clusterization-triggered photoactive properties as the dielectric layer. These devices adeptly transition from electrical to optical synapses, featuring multiwavelength and multilevel optical synaptic memory exceeding 3 bits. Utilizing the enhanced optical memory, image learning and memory functions for visual simulation are achieved. Benefiting from a rapid electrical response akin to biological muscle activation, actuation increases with the stimulus frequency of the gate voltage. Additionally, the transistor on the POC substrate can be effectively degraded in NaOH solution owing to the degradation of POC. Notably, the electro-optical configurability stems from the light absorption and photoluminescence of the aggregation clusters in the POC layer after annealing at 200 °C. The enhancement of optical synaptic plasticity and the integration of motion-activation functions within a single device open new avenues at the intersection of optoelectronics, synaptic computing, and bioengineering.

11.
Med Phys ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39221589

ABSTRACT

BACKGROUND: Adult-type diffuse gliomas are among the central nervous system's most aggressive malignant primary neoplasms. Despite advancements in systemic therapies and technological improvements in radiation oncology treatment delivery, the survival outcome for these patients remains poor. Fast and accurate assessment of tumor response to oncologic treatments is crucial, as it can enable the early detection of recurrent or refractory gliomas, thereby allowing timely intervention with life-prolonging salvage therapies. PURPOSE: Radiomics is a developing field with great potential to improve medical image interpretation. This study aims to apply a radiomics-based predictive model for classifying response to radiotherapy within the first 3 months post-treatment. METHODS: Ninety-five patients were selected from the Burdenko Glioblastoma Progression Dataset. Tumor regions were delineated in the axial plane on contrast-enhanced T1(CE T1W) and T2 fluid-attenuated inversion recovery (T2_FLAIR) magnetic resonance imaging (MRI). Hand-crafted radiomic (HCR) features, including first- and second-order features, were extracted using PyRadiomics (3.7.6) in Python (3.10). Then, recursive feature elimination with a random forest (RF) classifier was applied for feature dimensionality reduction. RF and support vector machine (SVM) classifiers were built to predict treatment outcomes using the selected features. Leave-one-out cross-validation was employed to tune hyperparameters and evaluate the models. RESULTS: For each segmented target, 186 HCR features were extracted from the MRI sequence. Using the top-ranked radiomic features from a combination of CE T1W and T2_FLAIR, an optimized classifier achieved the highest averaged area under the curve (AUC) of 0.829 ± 0.075 using the RF classifier. The HCR features of CE T1W produced the worst outcomes among all models (0.603 ± 0.024 and 0.615 ± 0.075 for RF and SVM classifiers, respectively). CONCLUSIONS: We developed and evaluated a radiomics-based predictive model for early tumor response to radiotherapy, demonstrating excellent performance supported by high AUC values. This model, harnessing radiomic features from multi-modal MRI, showed superior predictive performance compared to single-modal MRI approaches. These results underscore the potential of radiomics in clinical decision support for this disease process.
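A sketch of the described pipeline follows: PyRadiomics feature extraction, recursive feature elimination with a random forest, and leave-one-out evaluation. The file paths, synthetic feature matrix, and hyperparameters are placeholders standing in for the Burdenko data, not the study's actual values.

```python
# Hedged sketch: radiomics -> RFE(random forest) -> LOOCV AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

# Per patient, features would come from PyRadiomics (illustrative call):
# from radiomics import featureextractor
# extractor = featureextractor.RadiomicsFeatureExtractor()
# vec = extractor.execute("ce_t1w.nii.gz", "tumor_mask.nii.gz")

rng = np.random.default_rng(0)
X = rng.normal(size=(95, 186))    # 95 patients x 186 HCR features (synthetic)
y = rng.integers(0, 2, size=95)   # treatment response label (synthetic)

pipe = make_pipeline(
    RFE(RandomForestClassifier(n_estimators=200, random_state=0),
        n_features_to_select=10, step=20),   # step=20 keeps LOOCV tractable
    RandomForestClassifier(n_estimators=200, random_state=0))
probs = cross_val_predict(pipe, X, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))
```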

12.
Sensors (Basel) ; 24(17)2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39275711

ABSTRACT

As a fundamental element of the transportation system, traffic signs are widely used to guide traffic behaviors. In recent years, drones have emerged as an important tool for monitoring the condition of traffic signs. However, existing image processing techniques rely heavily on image annotations, and building a high-quality dataset with diverse training images and human annotations is time consuming. In this paper, we introduce the use of vision-language models (VLMs) for the traffic sign detection task. Without the need for discrete image labels, rapid deployment is achieved through multi-modal learning and large-scale pretrained networks. First, we compile a keyword dictionary to describe traffic signs, with the Chinese national standard supplying the shape and color information. Our program uses Bootstrapping Language-Image Pretraining v2 (BLIPv2) to translate representative images into text descriptions. Second, a Contrastive Language-Image Pretraining (CLIP) framework is applied to characterize both drone images and text descriptions. Our method uses the pretrained encoder networks to create visual features and word embeddings. Third, the category of each traffic sign is predicted according to the similarity between drone images and keywords; cosine similarity and a softmax function are used to compute the class probability distribution. To evaluate performance, we apply the proposed method in a practical application: drone images captured in Guyuan, China, are used to record the condition of traffic signs. Further experiments include two widely used public datasets. The results indicate that our vision-language-model-based method achieves acceptable prediction accuracy at low training cost.
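The zero-shot classification step (scaled cosine similarity followed by softmax over keyword prompts) can be sketched with a pretrained CLIP model from Hugging Face transformers; the prompt texts and dummy image below are illustrative assumptions, not the paper's keyword dictionary.

```python
# Hedged sketch: zero-shot sign classification with pretrained CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a red circular prohibition traffic sign",
           "a yellow triangular warning traffic sign",
           "a blue circular mandatory traffic sign"]
# In practice this would be a sign crop from a drone image.
image = Image.new("RGB", (224, 224), "red")

inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)
# logits_per_image are scaled cosine similarities; softmax -> class probs.
probs = out.logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```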

13.
J Colloid Interface Sci ; 678(Pt B): 1088-1103, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39276517

ABSTRACT

One of the primary challenges for immune checkpoint blockade (ICB)-based therapy is the limited infiltration of T lymphocytes (T cells) into tumors, often referred to as immunologically "cold" tumors. A promising strategy to enhance the anti-tumor efficacy of ICB is to increase antigen exposure, thereby enhancing T cell activation and converting "cold" tumors into "hot" ones. Herein, we present an innovative all-in-one therapeutic nanoplatform to realize local mild photothermal- and photodynamic-triggered antigen exposure, thereby improving the anti-tumor efficacy of ICB. This nanoplatform conjugates programmed death-ligand 1 antibody (aPD-L1) with gadolinium-doped near-infrared (NIR)-emitting carbon dots (aPD-L1@GdCDs), which display negligible cytotoxicity in the absence of light. Under controlled NIR laser irradiation, however, the GdCDs produce combined photothermal and photodynamic effects. This not only results in tumor ablation but also induces immunogenic cell death (ICD), facilitating enhanced infiltration of CD8+ T cells in the tumor area. Importantly, the combination of aPD-L1 with photothermal and photodynamic therapies via aPD-L1@GdCDs significantly boosts CD8+ T cell infiltration, reduces tumor size, and improves anti-metastasis effects compared to either GdCDs-based phototherapy or aPD-L1 alone. In addition, the whole treatment process can be monitored by multi-modal fluorescence/photoacoustic/magnetic resonance imaging (FLI/PAI/MRI). Our study highlights a promising nanoplatform for cancer diagnosis and therapy and paves the way toward enhancing the efficacy of ICB therapy through mild photothermal- and photodynamic-triggered immunotherapy.

14.
Sci Justice ; 64(5): 485-497, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39277331

ABSTRACT

Verifying the speaker of a speech fragment can be crucial in attributing a crime to a suspect. The question can be addressed given disputed and reference speech material, adopting the recommended and scientifically accepted likelihood ratio framework for reporting evidential strength in court. In forensic practice, auditory and acoustic analyses are usually performed to carry out such a verification task, considering a diversity of features such as language competence, pronunciation, and other linguistic features. Automated speaker comparison systems can also be used alongside those manual analyses. State-of-the-art automatic speaker comparison systems are based on deep neural networks that take acoustic features as input. Additional information, though, may be obtained from linguistic analysis. In this paper, we aim to answer whether, when, and how modern acoustic-based systems can be complemented by an authorship technique based on frequent words, within the likelihood ratio framework. We consider three different approaches to derive a combined likelihood ratio: using a support vector machine algorithm, fitting bivariate normal distributions, and passing the score of the acoustic system as additional input to the frequent-word analysis. We apply our method to the forensically relevant FRIDA dataset and the FISHER corpus, and we explore under which conditions fusion is valuable. We evaluate our results in terms of log-likelihood-ratio cost (Cllr) and equal error rate (EER). We show that fusion can be beneficial, especially in the case of intercepted phone calls with background noise.


Subjects
Forensic Sciences; Humans; Forensic Sciences/methods; Likelihood Functions; Linguistics; Support Vector Machine; Speech Acoustics; Algorithms; Speech
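One of the three fusion approaches named in this abstract, fitting bivariate normal distributions to (acoustic, frequent-word) score pairs under the same-speaker and different-speaker hypotheses, can be sketched as below, together with the Cllr metric; the synthetic scores stand in for FRIDA/FISHER data and the distributional parameters are assumptions.

```python
# Hedged sketch: bivariate-normal score fusion and log-LR cost (Cllr).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Columns: (acoustic score, frequent-word score) per comparison (synthetic).
ss = rng.multivariate_normal([2.0, 1.5], [[1, .3], [.3, 1]], size=500)
ds = rng.multivariate_normal([-1.0, -0.5], [[1, .3], [.3, 1]], size=500)

f_ss = multivariate_normal(ss.mean(0), np.cov(ss.T))  # same-speaker model
f_ds = multivariate_normal(ds.mean(0), np.cov(ds.T))  # different-speaker model

def combined_lr(scores):
    return f_ss.pdf(scores) / f_ds.pdf(scores)

def cllr(lr_ss, lr_ds):
    # Log-likelihood-ratio cost: penalizes misleading LRs on both sides.
    return 0.5 * (np.mean(np.log2(1 + 1 / lr_ss))
                  + np.mean(np.log2(1 + lr_ds)))

print("Cllr:", cllr(combined_lr(ss), combined_lr(ds)))
```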
15.
Sci Rep ; 14(1): 20992, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251743

ABSTRACT

Humans express emotions through various modalities such as facial expressions and natural language. However, the relationships between emotions expressed through different modalities and their correlations with neural activities remain uncertain. Here, we aimed to address some of these uncertainties by investigating the similarity of emotion representations across modalities and brain regions. First, we represented various emotion categories as multi-dimensional vectors derived from visual (face), linguistic, and visio-linguistic data, and used representational similarity analysis to compare these modalities. Second, we examined the linear transferability of emotion representations from other modalities to the visual modality. Third, we compared the representational structure derived in the first step with those from brain activities across 360 regions. Our findings revealed that emotion representations share commonalities across modalities, with modality-type-dependent variations, and that they can be linearly mapped from other modalities to the visual modality. Additionally, uni-modal emotion representations showed relatively higher similarity with specific brain regions, while the multi-modal emotion representation was most similar to representations across the entire brain. These findings suggest that emotional experiences are represented differently across brain regions, with varying degrees of similarity to different modality types, and that they may be conveyed multi-modally in the visual and linguistic domains.


Subjects
Brain; Emotions; Facial Expression; Humans; Emotions/physiology; Brain/physiology; Male; Female; Adult; Brain Mapping; Young Adult; Magnetic Resonance Imaging; Language
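The cross-modal comparison step in the study above is representational similarity analysis; as a minimal sketch, one can build a representational dissimilarity matrix (RDM) per modality and compare their condensed forms with Spearman correlation. The emotion-vector dimensions below are synthetic placeholders, not the study's embeddings.

```python
# Hedged sketch: compare face-derived and text-derived emotion RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_emotions = 27
face_vecs = rng.normal(size=(n_emotions, 64))   # visual emotion vectors
text_vecs = rng.normal(size=(n_emotions, 300))  # linguistic emotion vectors

rdm_face = pdist(face_vecs, metric="correlation")  # condensed RDM
rdm_text = pdist(text_vecs, metric="correlation")

rho, p = spearmanr(rdm_face, rdm_text)
print(f"cross-modal RDM similarity: rho={rho:.3f}, p={p:.3f}")
```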
17.
IEEE Trans Med Robot Bionics ; 6(3): 1004-1016, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39280352

ABSTRACT

Catheter-based cardiac ablation is a minimally invasive procedure for treating atrial fibrillation (AF). Electrophysiologists perform the procedure under image guidance, during which the contact force between the heart tissue and the catheter tip determines the quality of the lesions created. This paper describes a novel multi-modal contact force estimator based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). The estimator takes the shape and optical flow of the deflectable distal section as two modalities, since frames and the motion between frames complement each other in capturing the long context of the catheter video. The angle between the tissue and the catheter tip is considered a complement to the extracted shape. The data acquisition platform measures the two-degree-of-freedom contact force and video data while the catheter motion is constrained to the imaging plane. The images are captured via a camera that simulates single-view fluoroscopy for experimental purposes. In this sensor-free procedure, features of the image and optical flow modalities are extracted through transfer learning. Long Short-Term Memory networks (LSTMs) with a memory fusion network (MFN) are implemented to account for time dependency and hysteresis due to friction. The architecture integrates spatial and temporal networks. Late fusion with the concatenation of LSTMs, transformer decoders, and Gated Recurrent Units (GRUs) is implemented to verify the feasibility of the proposed network-based approach and its superiority over single-modality networks. The resulting mean absolute error, which accounted for only 2.84% of the total magnitude, was obtained by collecting data under more realistic circumstances than in previous studies. The decrease in error is considerably better than that achieved by individual modalities and by late fusion with concatenation. These results emphasize the practicality and relevance of utilizing a multi-modal network in real-world scenarios.
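A hedged sketch of the late-fusion baseline mentioned above: one LSTM over shape features, one over optical-flow features, with the concatenated final hidden states regressed to the 2-DoF contact force. The dimensions are illustrative, and the paper's MFN variant is not reproduced here.

```python
# Hypothetical late fusion of two per-modality LSTM branches.
import torch
import torch.nn as nn

class LateFusionForceEstimator(nn.Module):
    def __init__(self, shape_dim=64, flow_dim=64, hidden=128):
        super().__init__()
        self.lstm_shape = nn.LSTM(shape_dim, hidden, batch_first=True)
        self.lstm_flow = nn.LSTM(flow_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # (F_x, F_y)

    def forward(self, shape_seq, flow_seq):
        _, (h_s, _) = self.lstm_shape(shape_seq)  # h_s: (1, batch, hidden)
        _, (h_f, _) = self.lstm_flow(flow_seq)
        fused = torch.cat([h_s[-1], h_f[-1]], dim=1)
        return self.head(fused)

est = LateFusionForceEstimator()
force = est(torch.randn(4, 30, 64), torch.randn(4, 30, 64))
print(force.shape)  # torch.Size([4, 2])
```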

18.
Crit Care ; 28(1): 294, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232842

ABSTRACT

BACKGROUND: Over recent decades, continuous multi-modal monitoring of cerebral physiology has gained increasing interest for its potential to help minimize secondary brain injury following moderate-to-severe acute traumatic neural injury (also termed traumatic brain injury; TBI). Despite this heightened interest, there has yet to be a comprehensive evaluation of the effects of derangements in multimodal cerebral physiology on global cerebral physiologic insult burden. In this study, we offer a multi-center descriptive analysis of the associations between deranged cerebral physiology and cerebral physiologic insult burden. METHODS: Using data from the Canadian High-Resolution TBI (CAHR-TBI) Research Collaborative, a total of 369 complete patient datasets were acquired for the purposes of this study. For various cerebral physiologic metrics, patients were trichotomized into low, intermediate, and high cohorts based on mean values. Jonckheere-Terpstra testing was then used to assess directional relationships between these cerebral physiologic metrics and various measures of cerebral physiologic insult burden. Contour plots were then created to illustrate the impact of preserved vs impaired cerebrovascular reactivity on these relationships. RESULTS: It was found that elevated intracranial pressure (ICP) was associated with more time spent with cerebral perfusion pressure (CPP) < 60 mmHg and more time with impaired cerebrovascular reactivity. Low CPP was associated with more time spent with ICP > 20 or 22 mmHg and more time spent with impaired cerebrovascular reactivity. Elevated cerebrovascular reactivity indices were associated with more time spent with CPP < 60 mmHg as well as ICP > 20 or 22 mmHg. Low brain tissue oxygenation (PbtO2) only demonstrated a significant association with more time spent with CPP < 60 mmHg. Low regional oxygen saturation (rSO2) failed to produce a statistically significant association with any particular measure of cerebral physiologic insult burden. CONCLUSIONS: Mean ICP, CPP, and cerebrovascular reactivity values demonstrate statistically significant associations with global cerebral physiologic insult burden; however, it is uncertain whether measures of oxygen delivery provide any significant insight into such insult burden.


Subjects
Brain Injuries, Traumatic; Humans; Canada/epidemiology; Brain Injuries, Traumatic/physiopathology; Male; Female; Adult; Middle Aged; Cerebrovascular Circulation/physiology; Intracranial Pressure/physiology; Aged
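The Jonckheere-Terpstra test used above probes an ordered trend across the low/intermediate/high cohorts. To our knowledge SciPy has no built-in JT test, so the sketch below computes the statistic directly and obtains a one-sided p-value by permutation; the data are synthetic placeholders, not CAHR-TBI values.

```python
# Hedged sketch: Jonckheere-Terpstra trend test with a permutation p-value.
import numpy as np

def jt_statistic(groups):
    # Sum, over ordered group pairs, of Mann-Whitney-style counts.
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            diff = groups[j][None, :] - groups[i][:, None]
            stat += (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return stat

def jt_perm_test(groups, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    obs = jt_statistic(groups)
    pooled = np.concatenate(groups)
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        parts = np.split(pooled, np.cumsum(sizes)[:-1])
        if jt_statistic(parts) >= obs:
            count += 1
    return obs, count / n_perm  # one-sided p for an increasing trend

low = np.array([4.1, 5.0, 3.8, 4.6])   # e.g. % time CPP < 60 mmHg (synthetic)
mid = np.array([5.2, 6.1, 5.8, 4.9])
high = np.array([7.0, 6.8, 8.1, 7.5])
print(jt_perm_test([low, mid, high]))
```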
19.
J Med Signals Sens ; 14: 19, 2024.
Article in English | MEDLINE | ID: mdl-39234592

ABSTRACT

Unexpected seizures significantly decrease the quality of life of epileptic patients. Seizure attacks are caused by hyperexcitability and anatomical lesions in particular regions of the brain, and cognitive impairments and memory deficits are their most common concomitant effects. In addition to seizure-reduction treatments, medical rehabilitation involving brain-computer interfaces and neurofeedback can improve cognition and quality of life in many patients with focal epilepsy, in particular when resective epilepsy surgery is considered as treatment for drug-resistant epilepsy. Source estimation and precise localization of epileptic foci can improve such rehabilitation and treatment. Electroencephalography (EEG) monitoring and multimodal noninvasive neuroimaging techniques, such as ictal/interictal single-photon emission computerized tomography (SPECT) imaging and structural magnetic resonance imaging, are common practices for the localization of epileptic foci and have been studied extensively. In this article, we review the most recent research on EEG-based localization of seizure foci and discuss various methods, their advantages, limitations, and challenges, with a focus on model-based data processing and machine learning algorithms. In addition, we survey whether combined analysis of EEG monitoring and neuroimaging techniques, known as multimodal brain data fusion, can potentially increase the precision of seizure focus localization. To this end, we further review and summarize the key parameters and challenges of processing, fusing, and analyzing multiple source data, in the framework of model-based signal processing, for the development of a multimodal brain data analysis system. This article can serve as a valuable resource for neuroscience researchers developing EEG-based rehabilitation systems based on multimodal data analysis related to focal epilepsy.

20.
Med Image Anal ; 99: 103331, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39243598

ABSTRACT

Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by long scanning times. To accelerate the whole acquisition process, reconstructing one modality from highly under-sampled k-space data with the guidance of another fully sampled reference modality is an efficient solution. However, misalignment between modalities, which is common in clinical practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed aligned cross-modal prior term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction tasks, we propose an effective algorithm to solve this model alternately. We then unfold the iterative stages of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and utilize the progressively aligned reference modality to provide an inter-modality prior that improves the reconstruction of the target modality. Comprehensive experiments on four real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.
