Results 1 - 4 of 4
1.
Res Sq ; 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38260278

ABSTRACT

Peripheral nerve injuries (PNI) affect more than 20 million Americans and severely impact quality of life by causing long-term disability. The onset of PNI is characterized by nerve degeneration distal to the injury, resulting in long periods of skeletal muscle denervation. During this period, muscle fibers atrophy and frequently become incapable of "accepting" innervation because of the slow speed of axon regeneration post-injury. We hypothesize that reprogramming the skeletal muscle to an embryonic-like state may preserve its reinnervation capability following PNI. To this end, we generated a mouse model in which NANOG, a pluripotency-associated transcription factor, can be expressed locally upon delivery of doxycycline (Dox) in a polymeric vehicle. NANOG expression in the muscle increased the percentage of Pax7+ nuclei and upregulated expression of eMYHC along with other genes involved in muscle development. In a sciatic nerve transection model, NANOG expression led to upregulation of key genes associated with myogenesis, neurogenesis, and neuromuscular junction (NMJ) formation, and downregulation of key muscle atrophy genes. Further, NANOG mice demonstrated extensive overlap between synaptic vesicles and NMJ acetylcholine receptors (AChRs), indicating restored innervation. Indeed, NANOG mice showed greater improvement in motor function compared to wild-type (WT) animals, as evidenced by improved toe-spread reflex, EMG responses, and isometric force production. In conclusion, we demonstrate that reprogramming the muscle can be an effective strategy to improve reinnervation and functional outcomes after PNI.

2.
ArXiv ; 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37664408

ABSTRACT

Introduction: Technical burdens and time-intensive review processes limit the practical utility of video capsule endoscopy (VCE). Artificial intelligence (AI) is poised to address these limitations, but the intersection of AI and VCE reveals challenges that must first be overcome. We identified five challenges to address. Challenge #1: VCE data are stochastic and contain significant artifact. Challenge #2: VCE interpretation is cost-intensive. Challenge #3: VCE data are inherently imbalanced. Challenge #4: Existing VCE AIMLT are computationally cumbersome. Challenge #5: Clinicians are hesitant to accept AIMLT that cannot explain their process. Methods: An anatomic landmark detection model was used to test the application of convolutional neural networks (CNNs) to the task of classifying VCE data. We also created a tool that assists in expert annotation of VCE data. We then created more elaborate models using different approaches, including a multi-frame approach, a CNN based on graph representation, and a few-shot approach based on meta-learning. Results: When used on full-length VCE footage, CNNs accurately identified anatomic landmarks (99.1%), with gradient-weighted class activation mapping showing the parts of each frame that the CNN used to make its decision. The graph CNN with weakly supervised learning (accuracy 89.9%, sensitivity 91.1%), the few-shot model (accuracy 90.8%, precision 91.4%, sensitivity 90.9%), and the multi-frame model (accuracy 97.5%, precision 91.5%, sensitivity 94.8%) performed well. Discussion: Each of these five challenges is addressed, in part, by one of our AI-based models. Our goal of producing high performance using lightweight models that aim to improve clinician confidence was achieved.
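A minimal sketch of the kind of lightweight single-frame CNN classifier described in this abstract, written in PyTorch; the layer sizes, the four-landmark label set, and the random input batch are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

NUM_LANDMARKS = 4  # hypothetical label set, e.g. esophagus, stomach, small bowel, colon

class FrameClassifier(nn.Module):
    """Small CNN that assigns a single capsule frame to one anatomic landmark."""
    def __init__(self, num_classes: int = NUM_LANDMARKS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling keeps the model lightweight
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = FrameClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random batch standing in for VCE frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_LANDMARKS, (8,))
loss = criterion(model(frames), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.3f}")
```

In practice the batch would contain preprocessed VCE frames rather than random tensors, and the head size would match the landmark taxonomy used for annotation.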

3.
BioData Min ; 15(1): 16, 2022 Aug 13.
Article in English | MEDLINE | ID: mdl-35964102

ABSTRACT

BACKGROUND: Cardiopulmonary exercise testing (CPET) provides a reliable and reproducible approach to measuring fitness in patients and diagnosing their health problems. However, CPET data consist of multiple time series that require training to interpret. Part of this training teaches the use of flow charts or nested decision trees to interpret CPET results. This paper investigates the use of two neural-network-based machine learning techniques to predict patient health conditions from CPET data, in contrast to flow charts. The data for this investigation come from a small sample of patients with known health problems who had CPET results. The small sample also allows us to investigate the use and performance of deep learning neural networks on health care problems with limited amounts of labeled training and testing data. METHODS: This paper compares the current standard for interpreting and classifying CPET data, flow charts, to two neural network techniques: autoencoders and convolutional neural networks (CNNs). The study also investigated the performance of principal component analysis (PCA) with logistic regression to provide an additional baseline of comparison to the neural network techniques. RESULTS: The patients in the sample had two primary diagnoses: heart failure and metabolic syndrome. All model-based testing was done with 5-fold cross-validation and the metrics of precision, recall, F1 score, and accuracy. As a baseline for comparison to our models, the highest-performing flow chart method achieved an accuracy of 77%. Both PCA with logistic regression and the CNN achieved an average accuracy of 90% and outperformed the flow chart methods on all metrics. The autoencoder with logistic regression performed best on each of the metrics, with an average accuracy of 94%. CONCLUSIONS: This study suggests that machine learning, and neural network techniques in particular, can provide higher accuracy with CPET data than traditional flow chart methods. Further, the CNN performed well on a small data set, showing that these techniques can be designed to perform well on the small-data problems often found in health care and the life sciences. Further testing with larger data sets is needed to continue evaluating the use of machine learning to interpret CPET data.
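To make the PCA-plus-logistic-regression baseline concrete, the following scikit-learn sketch assembles such a pipeline and scores it with 5-fold cross-validation on the same four metrics; the synthetic feature matrix, the 10-component setting, and the binary labels are placeholders, not the study's CPET data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Synthetic stand-in data: 60 hypothetical patients, 200 flattened CPET features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)  # 0 = heart failure, 1 = metabolic syndrome (stand-in labels)

# Scale, reduce the correlated CPET channels to a few components, then classify.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    LogisticRegression(max_iter=1000),
)

# 5-fold cross-validation with the four metrics used in the study.
scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "precision", "recall", "f1"])
for metric in ["accuracy", "precision", "recall", "f1"]:
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```

Replacing the PCA step with features produced by a trained autoencoder would, in outline, give the autoencoder-plus-logistic-regression variant reported above.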

4.
Proc Future Technol Conf (2020) ; 1288: 426-434, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34693407

ABSTRACT

Video capsule endoscopy (VCE) is an emerging technology that allows examination of the entire gastrointestinal (GI) tract with minimal invasiveness. While traditional endoscopy with biopsy remains the gold standard for diagnosing most GI diseases, it is invasive and limited by how far the scope can be advanced in the tract. VCE allows gastroenterologists to investigate GI tract abnormalities in detail, with visualization of all parts of the GI tract. It captures continuous real-time images as it is propelled through the GI tract by gut motility. Even though VCE allows for thorough examination, reviewing and analyzing up to eight hours of images (compiled as videos) is tedious and not cost-effective. To pave the way for automation of VCE-based GI disease diagnosis, detecting the location of the capsule would allow for a more focused analysis as well as abnormality detection in each region of the GI tract. In this paper, we compared four deep convolutional neural network models for feature extraction and detection of the anatomical part of the GI tract captured in VCE images. Our results showed that VGG-Net has superior performance, with the highest average accuracy, precision, recall, and F1-score compared to other state-of-the-art architectures: GoogLeNet, AlexNet, and ResNet.
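As an illustration of how the four named architectures can be compared on one classification task, the sketch below (assuming a recent torchvision where the pretrained-weight enums are available) swaps each model's classification head for a hypothetical four-way GI-region head and runs a stand-in batch of frames through it; the region count and input data are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_REGIONS = 4  # hypothetical GI regions, e.g. esophagus, stomach, small bowel, colon

def build(name: str) -> nn.Module:
    """Load a pretrained backbone and replace its head with a GI-region classifier."""
    if name == "VGG-16":
        net = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, NUM_REGIONS)
    elif name == "GoogLeNet":
        net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
        net.fc = nn.Linear(net.fc.in_features, NUM_REGIONS)
    elif name == "AlexNet":
        net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, NUM_REGIONS)
    else:  # "ResNet-50"
        net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        net.fc = nn.Linear(net.fc.in_features, NUM_REGIONS)
    return net

frames = torch.randn(4, 3, 224, 224)  # stand-in batch of VCE frames
for name in ["VGG-16", "GoogLeNet", "AlexNet", "ResNet-50"]:
    net = build(name).eval()
    with torch.no_grad():
        logits = net(frames)  # shape: (batch, NUM_REGIONS)
    print(name, tuple(logits.shape))
```

In an actual comparison, each backbone would be fine-tuned on annotated VCE frames with that architecture's recommended input normalization, and the metrics reported above (accuracy, precision, recall, F1-score) would be computed on a held-out set.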
