Results 1 - 20 of 860
1.
NMR Biomed ; : e5221, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39113170

ABSTRACT

Chemical exchange saturation transfer (CEST) MRI at 3 T suffers from low specificity due to overlapping CEST effects from multiple metabolites, while higher field strengths (B0) allow for better separation of Z-spectral "peaks," aiding signal interpretation and quantification. However, data acquisition at higher B0 is restricted by equipment access, field inhomogeneity and safety issues. Herein, we aim to synthesize higher-B0 Z-spectra from readily available data acquired with 3 T clinical scanners using a deep learning framework. Trained with simulation data generated from models based on the Bloch-McConnell equations, this framework comprised two deep neural networks (DNNs) and a singular value decomposition (SVD) module. The first DNN identified B0 shifts in Z-spectra and aligned them to the correct frequencies. After B0 correction, the lower-B0 Z-spectra were fed into the second DNN, which cast them into the key feature representations of higher-B0 Z-spectra obtained through SVD truncation. Finally, the complete higher-B0 Z-spectra were recovered by inverse SVD, exploiting the low-rank property of Z-spectra. This study constructed and validated two models, a phosphocreatine (PCr) model and a pseudo-in-vivo one. Each experimental dataset, including PCr phantoms, egg white phantoms, and in vivo rat brains, was sequentially acquired on a 3 T human and a 9.4 T animal scanner. Results demonstrated that the synthetic 9.4 T Z-spectra were almost identical to the experimental ground truth, showing low RMSE (0.11% ± 0.0013% for seven PCr tubes, 1.8% ± 0.2% for three egg white tubes, and 0.79% ± 0.54% for three rat slices) and high R2 (>0.99). The synthesized amide and NOE contrast maps, calculated using the Lorentzian difference, also matched the experimental maps well. Additionally, the synthesis model exhibited robustness to B0 inhomogeneities, noise, and other acquisition imperfections. In conclusion, the proposed framework enables synthesis of higher-B0 Z-spectra from lower-B0 ones, which may facilitate CEST MRI quantification and applications.
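As a rough illustration of the SVD step described above (not the authors' implementation), the sketch below truncates the SVD of a matrix of simulated higher-B0 Z-spectra to obtain a low-rank basis and then recovers a complete spectrum from the few coefficients that the second DNN would predict; the spectra matrix, truncation rank, and offset grid are placeholder assumptions.

```python
import numpy as np

# Hypothetical training matrix: rows = simulated 9.4 T Z-spectra, columns = saturation offsets.
# In practice these would come from Bloch-McConnell simulations.
rng = np.random.default_rng(0)
Z_high = rng.random((500, 101))          # placeholder for simulated higher-B0 Z-spectra
Z_mean = Z_high.mean(axis=0)

# Low-rank basis of higher-B0 Z-spectra via SVD truncation.
U, s, Vt = np.linalg.svd(Z_high - Z_mean, full_matrices=False)
k = 8                                    # assumed truncation rank (low-rank property of Z-spectra)
basis = Vt[:k]                           # key feature representations (k x n_offsets)

def coeffs_from_spectrum(z):
    """Project a higher-B0 Z-spectrum onto the truncated SVD basis."""
    return basis @ (z - Z_mean)

def spectrum_from_coeffs(c):
    """Inverse SVD: recover a complete higher-B0 Z-spectrum from k coefficients."""
    return Z_mean + basis.T @ c

# In the paper's framework, the second DNN would map a B0-corrected 3 T Z-spectrum
# to these k coefficients; here we simply round-trip one simulated spectrum.
z = Z_high[0]
z_rec = spectrum_from_coeffs(coeffs_from_spectrum(z))
print(np.sqrt(np.mean((z - z_rec) ** 2)))  # reconstruction RMSE of the rank-k approximation
```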

3.
Angew Chem Int Ed Engl ; : e202411849, 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39162073

ABSTRACT

Liquid water under nanoscale confinement has attracted intensive attention due to its pivotal role in understanding various phenomena across many scientific fields. MXenes serve as an ideal paradigm for investigating the dynamic behaviors of nanoconfined water in a hydrophilic environment. Combining deep neural networks with an active learning scheme, we elucidate the proton-driven dynamics of water molecules confined between V2CTx sheets using molecular dynamics simulation. First, we found that Eigen and Zundel cations can inhibit water-induced oxidation by adjusting the orientation of water molecules, suggesting a general antioxidant strategy. In addition, we identified a hexagonal ice phase with abnormal bonding rules at room temperature, rather than only at ultralow temperatures as reported in other studies, and further captured the proton-induced water phase transition. This highlights the importance of protons in maintaining a stable crystal phase of water and in its phase transitions. Furthermore, we discuss in detail how different water structures interconvert and how water diffusivity changes with proton concentration. The results provide useful guidance for practical applications of MXenes, including developing antioxidant strategies, identifying novel 2D water phases, and optimizing energy storage and conversion.

4.
J Cardiovasc Magn Reson ; : 101082, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39142567

ABSTRACT

BACKGROUND: Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. METHODS: Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). RESULTS: The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed the established approach on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). CONCLUSIONS: The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location or scanner vendor.
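The selection step can be pictured schematically as follows (this is an illustration, not the authors' code): each DNN in the pool produces a segmentation and a pixel-wise uncertainty map, and the solution with the lowest aggregate uncertainty is kept. The pool size, the entropy-based uncertainty, and the mean-over-myocardium aggregation are assumptions.

```python
import numpy as np

def entropy_uncertainty(prob_map, eps=1e-8):
    """Pixel-wise predictive entropy from per-class probabilities of shape (H, W, C)."""
    return -np.sum(prob_map * np.log(prob_map + eps), axis=-1)

def daugs_style_select(prob_maps):
    """Pick the segmentation with the lowest mean uncertainty among a pool of DNN outputs.

    prob_maps: list of (H, W, C) softmax outputs, one per pool member.
    Returns (best_mask, best_index).
    """
    scores = []
    for p in prob_maps:
        mask = p.argmax(axis=-1)
        unc = entropy_uncertainty(p)
        myo = mask == 1                      # class 1 assumed to be myocardium here
        scores.append(unc[myo].mean() if myo.any() else np.inf)
    best = int(np.argmin(scores))
    return prob_maps[best].argmax(axis=-1), best

# Toy example with a pool of 3 models on a 64x64 image with 3 classes.
rng = np.random.default_rng(1)
pool = [rng.dirichlet(np.ones(3), size=(64, 64)) for _ in range(3)]
mask, idx = daugs_style_select(pool)
print("selected pool member:", idx, "mask shape:", mask.shape)
```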

5.
ArXiv ; 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39148930

ABSTRACT

Background: Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods: Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results: The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.), whereas it significantly outperformed the established approach on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are noncontiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions: The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location or scanner vendor.

8.
Comput Med Imaging Graph ; 116: 102421, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39084165

ABSTRACT

Intracranial aneurysm (IA) is a prevalent disease that poses a significant threat to human health. The use of computed tomography angiography (CTA) as a diagnostic tool for IAs remains time-consuming and challenging. Deep neural networks (DNNs) have made significant advancements in the field of medical image segmentation. Nevertheless, training large-scale DNNs demands substantial quantities of high-quality labeled data, making the annotation of numerous brain CTA scans a challenging endeavor. To address these challenges and effectively develop a robust IA segmentation model from a large amount of unlabeled training data, we propose a triple learning framework (TLF). The framework primarily consists of three learning paradigms: pseudo-supervised learning, contrastive learning, and confident learning. This paper introduces an enhanced mean teacher model and a voxel-selective strategy to conduct pseudo-supervised learning on unreliably labeled training data. Concurrently, we construct positive and negative training pairs within the high-level semantic feature space to improve the overall learning efficiency of the TLF through contrastive learning. In addition, a multi-scale confident learning strategy is proposed to correct unreliable labels, which enables the acquisition of broader local structural information instead of relying on individual voxels. To evaluate the effectiveness of our method, we conducted extensive experiments on a self-built database of hundreds of brain CTA scans with IAs. Experimental results demonstrate that our method can effectively learn a robust CTA-based IA segmentation model using unreliably labeled data, outperforming state-of-the-art methods in terms of segmentation accuracy. Codes are released at https://github.com/XueShuangqian/TLF.
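For the contrastive-learning component, the sketch below shows one common way to build positive/negative pairs in a feature space and compute an InfoNCE-style loss; it is a generic illustration under assumed shapes, not the TLF implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(features_a, features_b, temperature=0.1):
    """InfoNCE loss for paired embeddings.

    features_a, features_b: (N, D) embeddings of two views of the same N samples;
    matching rows are positive pairs, all other rows act as negatives.
    """
    a = F.normalize(features_a, dim=1)
    b = F.normalize(features_b, dim=1)
    logits = a @ b.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random high-level semantic features (shapes are assumptions).
feats_view1 = torch.randn(16, 128)
feats_view2 = torch.randn(16, 128)
loss = info_nce(feats_view1, feats_view2)
print(float(loss))
```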

9.
J Pathol Inform ; 15: 100384, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39027045

ABSTRACT

Analysis of gene expression at the single-cell level could help predict the effectiveness of therapies in the field of chronic inflammatory diseases such as arthritis. Here, we demonstrate an adapted approach for processing images from the Slide-seq method. A puck, which consists of about 50,000 DNA-barcoded beads, is used to read the RNA sequences of cells. The pucks are repeatedly brought into contact with liquids and then recorded with a conventional epifluorescence microscope. The image analysis consists of stitching the partial images of each sequencing recording, registering the images from different sequencing rounds, and finally reading out the bases. The new method enables the use of an inexpensive epifluorescence microscope instead of a confocal microscope.
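A minimal sketch of the registration and base-readout steps is given below, assuming purely translational offsets between rounds and a naive brightest-channel base call; it is not the pipeline described in the paper, and the image shapes are placeholders.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_to_reference(reference, moving):
    """Estimate a translational offset between two sequencing-round images and align them."""
    shift, _, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
    return ndimage.shift(moving, shift)

def call_bases(channel_stack):
    """Naive base calling: for each bead pixel, pick the brightest of four base channels.

    channel_stack: (4, H, W) registered fluorescence images for bases A, C, G, T.
    """
    return np.argmax(channel_stack, axis=0)   # 0..3 -> A, C, G, T

# Toy usage with synthetic images (a real puck would yield one stack per sequencing round).
rng = np.random.default_rng(2)
ref = rng.random((256, 256))
mov = ndimage.shift(ref, (3.2, -1.7))
aligned = register_to_reference(ref, mov)
bases = call_bases(rng.random((4, 256, 256)))
print(bases.shape)
```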

10.
Entropy (Basel) ; 26(7), 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39056952

ABSTRACT

Even with manual verification by experts on crowdsourcing platforms, completely eliminating incorrect annotations (noisy labels) from collected training data is difficult and expensive. In dealing with datasets that contain noisy labels, over-parameterized deep neural networks (DNNs) tend to overfit, leading to poor generalization and classification performance. As a result, noisy label learning (NLL) has received significant attention in recent years. Existing research shows that although DNNs eventually fit all training data, they first prioritize fitting clean samples and then gradually overfit to noisy samples. Mainstream methods utilize this characteristic to divide training data but face two issues: class imbalance in the segmented data subsets and the optimization conflict between unsupervised contrastive representation learning and supervised learning. To address these issues, we propose a Balanced Partitioning and Training framework with Pseudo-Label Relaxed contrastive loss, called BPT-PLR, which includes two crucial processes: a balanced partitioning process with a two-dimensional Gaussian mixture model (BP-GMM) and a semi-supervised oversampling training process with a pseudo-label relaxed contrastive loss (SSO-PLR). The former utilizes both semantic feature information and model prediction results to identify noisy labels, introducing a balancing strategy to maintain class balance in the divided subsets as much as possible. The latter adopts the latest pseudo-label relaxed contrastive loss to replace the unsupervised contrastive loss, reducing optimization conflicts between semi-supervised and unsupervised contrastive losses to improve performance. We validate the effectiveness of BPT-PLR on four benchmark datasets in the NLL field: CIFAR-10/100, Animal-10N, and Clothing1M. Extensive experiments comparing BPT-PLR with state-of-the-art methods demonstrate that it can achieve optimal or near-optimal performance.
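As a generic illustration of the partitioning idea (not the exact BP-GMM), the snippet below fits a two-component Gaussian mixture to two per-sample signals, training loss and a prediction-agreement score, and treats the low-loss component as "clean"; the choice of inputs and the omission of the balancing step are simplifying assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_clean_noisy(losses, agreement):
    """Split samples into likely-clean / likely-noisy with a 2D, 2-component GMM.

    losses:    (N,) per-sample training losses.
    agreement: (N,) score combining semantic-feature and prediction information (assumed).
    Returns a boolean mask that is True for samples assigned to the low-loss component.
    """
    X = np.stack([losses, agreement], axis=1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    clean_component = int(np.argmin(gmm.means_[:, 0]))   # component with the lower mean loss
    probs = gmm.predict_proba(X)[:, clean_component]
    return probs > 0.5

# Toy usage: noisy samples tend to have larger loss and lower agreement.
rng = np.random.default_rng(3)
losses = np.concatenate([rng.normal(0.2, 0.05, 800), rng.normal(1.0, 0.2, 200)])
agreement = np.concatenate([rng.normal(0.9, 0.05, 800), rng.normal(0.4, 0.1, 200)])
clean_mask = partition_clean_noisy(losses, agreement)
print("estimated clean fraction:", clean_mask.mean())
```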

11.
Sensors (Basel) ; 24(14), 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39065893

ABSTRACT

We propose an artificial intelligence approach based on deep neural networks to tackle a canonical 2D scalar inverse source problem. The learned singular value decomposition (L-SVD) based on hybrid autoencoding is considered. We compare the reconstruction performance of L-SVD with that of truncated SVD (TSVD) regularized inversion, a canonical regularization scheme for solving ill-posed linear inverse problems. Numerical tests referring to far-field acquisitions show that L-SVD provides, with proper training on a well-organized dataset, superior performance in terms of reconstruction errors as compared to TSVD, allowing for the retrieval of faster spatial variations of the source. Indeed, L-SVD accommodates a priori information on the set of relevant unknown current distributions. Unlike TSVD, which performs linear processing on a linear problem, L-SVD operates non-linearly on the data. A numerical analysis also underlines how the performance of L-SVD degrades when the unknown source does not match the training dataset.
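For reference, the canonical TSVD baseline mentioned above can be written in a few lines: the pseudo-inverse is built from the leading singular triplets only, so the small singular values that amplify noise are discarded rather than inverted. The operator, data, and truncation index below are placeholders, not the paper's setup.

```python
import numpy as np

def tsvd_inverse(A, y, k):
    """Truncated-SVD regularized solution of the ill-posed linear problem A x = y.

    Keeps only the k largest singular values; the remaining ones are dropped.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# Toy ill-posed problem: a smoothing operator with rapidly decaying singular values.
rng = np.random.default_rng(4)
n = 100
A = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)

x_tsvd = tsvd_inverse(A, y, k=20)         # the truncation index controls the regularization
print(np.linalg.norm(x_tsvd - x_true) / np.linalg.norm(x_true))
```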

12.
Sensors (Basel) ; 24(14), 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066042

ABSTRACT

The aim of this study is to address the challenge of 12-lead ECG delineation with different encoder-decoder architectures of deep neural networks (DNNs). This study compares four encoder-decoder concepts based on a fully convolutional architecture (CED-Net) and its modifications with a recurrent layer (CED-LSTM-Net), residual connections between symmetrical encoder and decoder feature maps (CED-U-Net), and sequential residual blocks (CED-Res-Net). All DNNs transform 12-lead representative beats into three diagnostic ECG intervals (P-wave, QRS-complex, QT-interval) used for the global delineation of the representative beat (P-onset, P-offset, QRS-onset, QRS-offset, T-offset). All DNNs were trained and optimized using the large PhysioNet ECG database (PTB-XL) under identical conditions, applying an advanced approach for machine-based supervised learning with a reference algorithm for ECG delineation (ETM, Schiller AG, Baar, Switzerland). The test results indicate that all DNN architectures are equally capable of reproducing the reference delineation algorithm's measurements in the diagnostic PTB database, with an average P-wave detection accuracy of 96.6% and time and duration errors with mean values of -2.6 to 2.4 ms and standard deviations of 2.9 to 11.4 ms. Validation against the CSE database, following the standard-based evaluation practices for diagnostic electrocardiographs, highlights the CED-Net model, which measures P-duration (2.6 ± 11.0 ms), PQ-interval (0.9 ± 5.8 ms), QRS-duration (-2.4 ± 5.4 ms), and QT-interval (-0.7 ± 10.3 ms) within all standard tolerances. Noise tests with high-frequency, low-frequency, and power-line frequency noise (50/60 Hz) confirm that CED-Net, CED-Res-Net, and CED-LSTM-Net are robust to all types of noise, mostly presenting a mean duration error < 2.5 ms compared to measurements without noise. Reduced noise immunity is observed for the U-Net architecture. Comparative analysis with other published studies places this work within the lower range of reported time errors, highlighting its competitive performance.
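A minimal fully convolutional encoder-decoder for beat-to-interval mapping, in the spirit of CED-Net, is sketched below; the channel counts, depth, beat length, and the 12-lead input / 3-interval output shapes are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TinyCED(nn.Module):
    """Toy 1D conv encoder-decoder: 12-lead representative beat -> 3 interval masks."""

    def __init__(self, in_ch=12, out_ch=3, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, width, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(width, width * 2, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="linear", align_corners=False),
            nn.Conv1d(width * 2, width, kernel_size=9, padding=4), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="linear", align_corners=False),
            nn.Conv1d(width, out_ch, kernel_size=9, padding=4),
        )

    def forward(self, x):
        # x: (batch, 12, samples); output: per-sample logits for P-wave, QRS, QT masks.
        return self.decoder(self.encoder(x))

beats = torch.randn(4, 12, 512)      # 4 representative beats, 512 samples each (assumed length)
logits = TinyCED()(beats)
print(logits.shape)                  # torch.Size([4, 3, 512])
```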


Subjects
Algorithms , Electrocardiography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Electrocardiography/methods , Humans , Heart Rate/physiology , Databases, Factual
13.
Ophthalmic Physiol Opt ; 44(6): 1224-1236, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38980216

ABSTRACT

PURPOSE: To optimise the precision and efficacy of orthokeratology, this investigation evaluated a deep neural network (DNN) model for lens fitting. The objective was to refine the standardisation of fitting procedures and curtail subjective evaluations, thereby augmenting patient safety in the context of increasing global myopia. METHODS: A retrospective study of successful orthokeratology treatment was conducted on 266 patients, with 449 eyes being analysed. A DNN model with an 80%-20% training-validation split predicted lens parameters (curvature, power and diameter) using corneal topography and refractive indices. The model featured two hidden layers for precision. RESULTS: The DNN model achieved mean absolute errors of 0.21 D for alignment curvature (AC), 0.19 D for target power (TP) and 0.02 mm for lens diameter (LD), with R2 values of 0.97, 0.95 and 0.91, respectively. Accuracy decreased for myopia of less than 1.00 D, astigmatism exceeding 2.00 D and corneal curvatures >45.00 D. Approximately 2% of cases with unique physiological characteristics showed notable prediction variances. CONCLUSION: While the DNN model exhibited high accuracy, its limitations in cases with low myopia, high cylinder power and steep corneal curvature highlight the need for algorithmic refinement and clinical validation in orthokeratology practice.
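A two-hidden-layer regressor of the kind described, mapping topography-derived inputs to AC/TP/LD outputs, might look like the sketch below; the feature count, layer widths, and training details are assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn

class LensFitMLP(nn.Module):
    """Toy DNN with two hidden layers mapping corneal/refraction features to lens parameters."""

    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),            # outputs: alignment curvature, target power, lens diameter
        )

    def forward(self, x):
        return self.net(x)

model = LensFitMLP()
# Placeholder batch: 16 eyes, 8 topography/refraction features each (feature choice is assumed).
features = torch.randn(16, 8)
ac_tp_ld = model(features)
print(ac_tp_ld.shape)                    # torch.Size([16, 3])
```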


Subjects
Corneal Topography , Myopia , Neural Networks, Computer , Orthokeratologic Procedures , Refraction, Ocular , Humans , Orthokeratologic Procedures/methods , Retrospective Studies , Myopia/therapy , Myopia/physiopathology , Female , Male , Refraction, Ocular/physiology , Adolescent , Cornea/pathology , Cornea/diagnostic imaging , Contact Lenses , Young Adult , Child , Adult , Visual Acuity/physiology
14.
Dev Sci ; : e13538, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38949566

ABSTRACT

Impaired numerosity perception in developmental dyscalculia (low "number acuity") has been interpreted as evidence of reduced representational precision in the neurocognitive system supporting non-symbolic number sense. However, recent studies suggest that poor numerosity judgments might stem from stronger interference from non-numerical visual information, in line with alternative accounts that highlight impairments in executive functions and visuospatial abilities in the etiology of dyscalculia. To resolve this debate, we used a psychophysical method designed to disentangle the contribution of numerical and non-numerical features to explicit numerosity judgments in a dot comparison task and we assessed the relative saliency of numerosity in a spontaneous categorization task. Children with dyscalculia were compared to control children with average mathematical skills matched for age, IQ, and visuospatial memory. In the comparison task, the lower accuracy of dyscalculics compared to controls was linked to weaker encoding of numerosity, but not to the strength of non-numerical biases. Similarly, in the spontaneous categorization task, children with dyscalculia showed a weaker number-based categorization compared to the control group, with no evidence of a stronger influence of non-numerical information on category choice. Simulations with a neurocomputational model of numerosity perception showed that the reduction of representational resources affected the progressive refinement of number acuity, with little effect on non-numerical bias in numerosity judgments. Together, these results suggest that impaired numerosity perception in dyscalculia cannot be explained by increased interference from non-numerical visual cues, thereby supporting the hypothesis of a core number sense deficit. RESEARCH HIGHLIGHTS: A strongly debated issue is whether impaired numerosity perception in dyscalculia stems from a deficit in number sense or from poor executive and visuospatial functions. Dyscalculic children show reduced precision in visual numerosity judgments and weaker number-based spontaneous categorization, but no increasing reliance on continuous visual properties. Simulations with deep neural networks demonstrate that reduced neural/computational resources affect the developmental trajectory of number acuity and account for impaired numerosity judgments. Our findings show that weaker number acuity in developmental dyscalculia is not necessarily related to increased interference from non-numerical visual cues.

15.
Elife ; 13, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38968311

ABSTRACT

Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network models (DNNs) of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether ('invariance'), represented in non-interfering subspaces of population activity ('factorization') or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters - lighting, background, camera viewpoint, and object pose - in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
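One simple way to operationalize "factorization" versus "invariance" (an illustrative definition under assumed inputs, not necessarily the metric used in the paper) is to estimate the principal subspace of activity driven by a nuisance parameter such as pose and ask how much identity-driven variance lies outside that subspace:

```python
import numpy as np

def factorization_score(resp_by_pose, resp_by_identity, k=5):
    """Illustrative metric: fraction of identity-driven variance outside the pose subspace.

    resp_by_pose:     (n_pose, n_units) responses to one object under varying pose.
    resp_by_identity: (n_obj, n_units) responses to varying objects at a fixed pose.
    k: assumed dimensionality of the pose-driven subspace.
    """
    # Principal subspace of pose-driven variability.
    P = resp_by_pose - resp_by_pose.mean(axis=0)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    pose_basis = Vt[:k]                                   # (k, n_units)

    # Identity variance inside vs. outside that subspace.
    I = resp_by_identity - resp_by_identity.mean(axis=0)
    var_total = np.sum(I ** 2)
    var_in_pose = np.sum((I @ pose_basis.T) ** 2)
    return 1.0 - var_in_pose / var_total                  # 1 = fully factorized, 0 = fully entangled

# Toy usage with random population responses (200 units).
rng = np.random.default_rng(5)
print(factorization_score(rng.standard_normal((50, 200)), rng.standard_normal((40, 200))))
```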


When looking at a picture, we can quickly identify a recognizable object, such as an apple, applying a single word label to it. Although extensive neuroscience research has focused on how human and monkey brains achieve this recognition, our understanding of how the brain and brain-like computer models interpret other complex aspects of a visual scene, such as object position and environmental context, remains incomplete. In particular, it was not clear to what extent object recognition comes at the expense of other important scene details. For example, various aspects of the scene might be processed simultaneously. On the other hand, general object recognition may interfere with processing of such details. To investigate this, Lindsey and Issa analyzed 12 monkey and human brain datasets, as well as numerous computer models, to explore how different aspects of a scene are encoded in neurons and how these aspects are represented by computational models. The analysis revealed that preventing effective separation and retention of information about object pose and environmental context worsened object identification in monkey cortex neurons. In addition, the computer models that were the most brain-like could independently preserve the other scene details without interfering with object identification. The findings suggest that human and monkey high level ventral visual processing systems are capable of representing the environment in a more complex way than previously appreciated. In the future, studying more brain activity data could help to identify how rich the encoded information is and how it might support other functions like spatial navigation. This knowledge could help to build computational models that process the information in the same way, potentially improving their understanding of real-world scenes.


Subjects
Magnetic Resonance Imaging , Neural Networks, Computer , Animals , Humans , Male , Macaca mulatta/physiology , Visual Pathways/physiology , Visual Perception/physiology , Visual Cortex/physiology , Female , Photic Stimulation , Models, Neurological
16.
Neural Netw ; 179: 106507, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39003984

ABSTRACT

Segmentation and the subsequent quantitative assessment of the target object in computed tomography (CT) images provide valuable information for the analysis of intracerebral hemorrhage (ICH) pathology. However, most existing methods lack a reasonable strategy to explore the discriminative semantics of multi-scale ICH regions, making it difficult to address the challenge of complex morphology in clinical data. In this paper, we propose a novel multi-scale object equalization learning network (MOEL-Net) for accurate ICH region segmentation. Specifically, we first introduce a shallow feature extraction module (SFEM) for obtaining shallow semantic representations that maintain sufficient and effective detailed location information. Then, a deep feature extraction module (DFEM) is leveraged to extract the deep semantic information of the ICH region from the combination of SFEM and original image features. To further achieve equalization learning at different scales of ICH regions, we introduce a multi-level semantic feature equalization fusion module (MSFEFM), which explores the equalized fusion features of the described objects with the assistance of the shallow and deep semantic information provided by SFEM and DFEM. Driven by the above three designs, MOEL-Net shows a solid capacity to capture discriminative features across various ICH region segmentation scenarios. To promote research on clinical automatic ICH region segmentation, we collect two datasets, VMICH and FRICH (divided into Test A and Test B), for evaluation. Experimental results show that the proposed model achieves Dice scores of 88.28%, 90.92%, and 90.95% on VMICH, FRICH Test A, and FRICH Test B, respectively, outperforming fourteen competing methods.
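The SFEM -> DFEM -> fusion wiring described above can be pictured with a skeleton like the one below; the layer choices, channel counts, and single-scale fusion are placeholders and only the overall module composition follows the description.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU())

class MOELNetSketch(nn.Module):
    """Skeleton wiring of shallow/deep feature extraction and fusion for ICH segmentation."""

    def __init__(self, in_ch=1, width=16):
        super().__init__()
        self.sfem = conv_block(in_ch, width)                     # shallow features: detailed location info
        self.dfem = nn.Sequential(                               # deep features from [shallow, image]
            conv_block(width + in_ch, width * 2), conv_block(width * 2, width * 2)
        )
        self.msfefm = conv_block(width + width * 2, width)       # fuse shallow + deep semantics
        self.head = nn.Conv2d(width, 1, kernel_size=1)           # binary ICH-region logits

    def forward(self, x):
        shallow = self.sfem(x)
        deep = self.dfem(torch.cat([shallow, x], dim=1))
        fused = self.msfefm(torch.cat([shallow, deep], dim=1))
        return self.head(fused)

ct_slice = torch.randn(2, 1, 128, 128)    # toy CT slices
print(MOELNetSketch()(ct_slice).shape)    # torch.Size([2, 1, 128, 128])
```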

17.
Med Biol Eng Comput ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39012416

ABSTRACT

Pituitary adenomas (PA) represent the most common type of sellar neoplasm. Extracting relevant information from radiological images is essential for decision support in addressing various objectives related to PA. Given the critical need for an accurate assessment of the natural progression of PA, computer vision (CV) and artificial intelligence (AI) play a pivotal role in automatically extracting features from radiological images. The field of "radiomics" involves the extraction of high-dimensional features, often referred to as "radiomic features," from digital radiological images. This survey offers an analysis of the current state of research in PA radiomics. Our work comprises a systematic review of 34 publications focused on PA radiomics and other automated information mining pertaining to PA through the analysis of radiological data using computer vision methods. We begin with a theoretical exploration essential for understanding the background of radiomics, encompassing traditional approaches from computer vision and machine learning as well as the latest methodologies in deep radiomics utilizing deep learning (DL). The 34 research works under examination are comprehensively compared and evaluated. The overall results reported in the analyzed papers are high; for example, the best accuracy reaches 96% and the best AUC reaches 0.99, which supports optimism about the successful use of radiomic features. Methods based on deep learning seem to be the most promising for the future. With regard to these promising DL methods, several notable challenges remain: it is important to create high-quality and sufficiently extensive datasets for training deep neural networks, and the interpretability of deep radiomics is a major open challenge. It is necessary to develop and verify methods that explain how deep radiomic features reflect physically interpretable aspects of the underlying data.
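As a concrete, if minimal, illustration of what hand-crafted radiomic features are, the snippet below computes a few first-order statistics over a segmented region of interest; the image and ROI are placeholders, and real studies typically rely on dedicated toolkits with far larger feature sets.

```python
import numpy as np
from scipy import stats

def first_order_radiomics(image, mask, n_bins=64):
    """A few first-order radiomic features from voxels inside a binary ROI mask."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": stats.skew(voxels),
        "kurtosis": stats.kurtosis(voxels),
        "entropy": float(-(p * np.log2(p)).sum()),    # histogram-based intensity entropy
    }

# Toy MRI-like volume and a spherical ROI standing in for a segmented adenoma.
rng = np.random.default_rng(6)
vol = rng.normal(100, 15, size=(64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
roi = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 10 ** 2
print(first_order_radiomics(vol, roi))
```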

18.
Cureus ; 16(5): e61400, 2024 May.
Article in English | MEDLINE | ID: mdl-38953082

ABSTRACT

Artificial intelligence (AI) and machine learning (ML) show promise in various medical domains, including medical imaging, precise diagnoses, and pharmaceutical research. In neuroscience and neurosurgery, AI/ML advancements enhance brain-computer interfaces, neuroprosthetics, and surgical planning. They are poised to revolutionize neuroregeneration by unraveling the nervous system's complexities. However, research on AI/ML in neuroregeneration is fragmented, necessitating a comprehensive review. Adhering to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations, 19 English-language papers focusing on AI/ML in neuroregeneration were selected from a total of 247. Two researchers independently conducted data extraction and quality assessment using the Mixed Methods Appraisal Tool (MMAT) 2018. Eight studies were deemed high quality, 10 moderate, and four low. Primary goals included diagnosing neurological disorders (35%), robotic rehabilitation (18%), and drug discovery (12% each). Methods ranged from analyzing imaging data (24%) to animal models (24%) and electronic health records (12%). Deep learning accounted for 41% of AI/ML techniques, while standard ML algorithms constituted 29%. The review underscores the growing interest in AI/ML for neuroregenerative medicine, with increasing publications. These technologies aid in diagnosing diseases and facilitating functional recovery through robotics and targeted stimulation. AI-driven drug discovery holds promise for identifying neuroregenerative therapies. Nonetheless, addressing existing limitations remains crucial in this rapidly evolving field.

19.
Med Biol Eng Comput ; 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963467

ABSTRACT

Continuous blood pressure (BP) measurement provides essential information for monitoring one's health condition. However, BP is currently monitored using uncomfortable cuff-based devices, which do not support continuous BP monitoring. This paper introduces a blood pressure monitoring algorithm based only on photoplethysmography (PPG) signals using deep neural networks (DNNs). The PPG signals were obtained from 125 unique subjects with 218 records and filtered using signal processing algorithms to reduce the effects of noise, such as baseline wandering and motion artifacts. The proposed algorithm is based on pulse wave analysis of PPG signals: various domain features are extracted from the PPG signals and mapped to BP values. Four feature selection methods are applied, yielding four feature subsets. Therefore, an ensemble feature selection technique is proposed to obtain the optimal feature set based on majority voting scores from the four feature subsets. DNN models, combined with the ensemble feature selection technique, outperformed previously reported approaches that rely only on the PPG signal in estimating systolic blood pressure (SBP) and diastolic blood pressure (DBP). The coefficient of determination (R2) and mean absolute error (MAE) of the proposed algorithm are 0.962 and 2.480 mmHg, respectively, for SBP, and 0.955 and 1.499 mmHg, respectively, for DBP. The proposed approach meets the Association for the Advancement of Medical Instrumentation standard for SBP and DBP estimation. Additionally, according to the British Hypertension Society standard, the results attained Grade A for both SBP and DBP estimation. These results indicate that BP can be estimated more accurately using the optimal feature set and DNN models. The proposed algorithm has the potential to enable mobile healthcare devices to monitor BP continuously.
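The ensemble idea, keeping the features that most of the individual selectors agree on, can be sketched as follows; the four selectors and the vote threshold below are assumptions standing in for those used in the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE, SelectKBest, f_regression, mutual_info_regression
from sklearn.linear_model import Lasso, LinearRegression

def majority_vote_features(X, y, k=10, min_votes=3):
    """Combine four feature-selection methods by majority voting over the selected features."""
    votes = np.zeros(X.shape[1], dtype=int)

    votes += SelectKBest(f_regression, k=k).fit(X, y).get_support().astype(int)
    votes += SelectKBest(mutual_info_regression, k=k).fit(X, y).get_support().astype(int)
    votes += RFE(LinearRegression(), n_features_to_select=k).fit(X, y).get_support().astype(int)

    lasso = Lasso(alpha=0.05).fit(X, y)
    top_lasso = np.argsort(np.abs(lasso.coef_))[-k:]      # k largest Lasso coefficients
    votes[top_lasso] += 1

    return np.where(votes >= min_votes)[0]                # features selected by >= min_votes methods

# Toy stand-in for PPG pulse-wave features versus SBP targets.
X, y = make_regression(n_samples=200, n_features=30, n_informative=8, noise=5.0, random_state=0)
print("selected feature indices:", majority_vote_features(X, y))
```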

20.
Sci Rep ; 14(1): 15366, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965359

ABSTRACT

Traditionally, vision models have predominantly relied on spatial features extracted from static images, deviating from the continuous stream of spatiotemporal features processed by the brain in natural vision. While numerous video-understanding models have emerged, incorporating videos into image-understanding models with spatiotemporal features has been limited. Drawing inspiration from natural vision, which exhibits remarkable resilience to input changes, our research focuses on the development of a brain-inspired model for vision understanding trained with videos. Our findings demonstrate that models that are trained on videos instead of still images and that include temporal features become more resilient to various alterations of the input media.
