Results 1 - 20 of 1,619
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques, Microscopy, Animals, Flow Cytometry, Image Processing, Computer-Assisted
2.
J Undergrad Neurosci Educ ; 22(3): A273-A288, 2024.
Article in English | MEDLINE | ID: mdl-39355664

ABSTRACT

Functional magnetic resonance imaging (fMRI) has been a cornerstone of cognitive neuroscience since its invention in the 1990s. The methods that we use for fMRI data analysis allow us to test different theories of the brain; thus, different analyses can lead us to different conclusions about how the brain produces cognition. There has been a centuries-long debate about the nature of neural processing, with some theories arguing for functional specialization or localization (e.g., face and scene processing) while other theories suggest that cognition is implemented in distributed representations across many neurons and brain regions. Importantly, these theories have received support via different types of analyses; therefore, having students implement hands-on data analysis to explore the results of different fMRI analyses can allow them to take a firsthand approach to thinking about highly influential theories in cognitive neuroscience. Moreover, these explorations allow students to see that there are no clear-cut "right" or "wrong" answers in cognitive neuroscience; rather, we effectively instantiate assumptions within our analytical approaches that can lead us to different conclusions. Here, I provide Python code that uses freely available software and data to teach students how to analyze fMRI data using traditional activation analysis and machine-learning-based multivariate pattern analysis (MVPA). Altogether, these resources help teach students about the paramount importance of methodology in shaping our theories of the brain, and I believe they will be helpful for introductory undergraduate courses, graduate-level courses, and as a first analysis for people working in labs that use fMRI.
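The contrast between univariate activation analysis and MVPA can be made concrete with a short, self-contained sketch using scikit-learn on simulated data (the trial counts, effect sizes, and variable names below are illustrative, not taken from the article's materials):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Simulated "voxel" data: 100 trials x 50 voxels, two conditions
# (e.g., faces vs. scenes), with a weak signal distributed across voxels.
n_trials, n_voxels = 100, 50
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1] += 0.3  # small per-voxel effect, detectable mainly in aggregate

# MVPA: decode the condition from the whole multivoxel pattern.
scores = cross_val_score(LinearSVC(), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```

A univariate activation analysis would test each voxel separately; here no single voxel is strongly informative, yet the distributed pattern decodes well above chance, which is exactly the methodological contrast the exercises are designed to expose.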

3.
J Undergrad Neurosci Educ ; 22(3): A197-A206, 2024.
Article in English | MEDLINE | ID: mdl-39355672

ABSTRACT

Electroencephalography (EEG) has given rise to a myriad of new discoveries over the last 90 years. EEG is a noninvasive technique that has revealed insights into the spatial and temporal processing of brain activity across many neuroscience disciplines, including sensory, motor, sleep, and memory research. Most undergraduate students, however, lack laboratory access to EEG recording equipment or the skills to perform an experiment independently. Here, we provide easy-to-follow instructions to measure both ongoing wave and event-related EEG potentials using a portable, low-cost amplifier (Backyard Brains, Ann Arbor, MI) that connects to smartphones and PCs, independent of their operating system. Using open-source software (SpikeRecorder) and analysis tools (Python, Google Colaboratory), we demonstrate tractable and robust laboratory exercises for students to gain insight into the scientific method and discover multidisciplinary neuroscience research. We developed two laboratory exercises and ran them on participants within our research lab (N = 17, development group). In our first protocol, we analyzed power differences in the alpha band (8-13 Hz) when participants alternated between eyes-open and eyes-closed states (n = 137 transitions). We robustly observed an increase of over 50% in 59 (43%) of our sessions, suggesting that this would make a reliable introductory experiment. Next, we describe an exercise that uses a SpikerBox to evoke an event-related potential (ERP) during an auditory oddball task. This experiment measures the average EEG potential elicited during auditory presentation of either a highly predictable ("standard") or low-probability ("oddball") tone. Across all sessions in the development group (n = 81), we found that 64% (n = 52) showed a significant peak in the expected P300 response window, with an average peak latency of 442 ms. Finally, we tested the auditory oddball task in a university classroom setting. In 66% of the sessions (n = 30), a clear P300 was shown, and these signals were significantly above chance when compared to a Monte Carlo simulation. These laboratory exercises cover two methods of analysis (frequency power and ERP) that are routinely used in neurology diagnostics, brain-machine interfaces, and neurofeedback therapy. Arming students with these methods and analysis techniques will enable them to investigate variants of this laboratory exercise or test their own hypotheses.
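The first protocol's eyes-open versus eyes-closed comparison reduces to a power-spectral-density calculation in the alpha band; the following sketch uses SciPy on simulated signals (the sampling rate and amplitudes are assumed for illustration, not taken from the SpikerBox specification):

```python
import numpy as np
from scipy.signal import welch

fs = 250  # Hz, an assumed sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Simulated EEG: eyes-open is broadband noise; eyes-closed adds a 10 Hz rhythm.
eyes_open = rng.normal(size=t.size)
eyes_closed = rng.normal(size=t.size) + 1.5 * np.sin(2 * np.pi * 10 * t)

def alpha_power(x):
    """Mean power spectral density in the 8-13 Hz alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 13)
    return pxx[band].mean()

ratio = alpha_power(eyes_closed) / alpha_power(eyes_open)
print(f"alpha power increase: {(ratio - 1) * 100:.0f}%")
```

The >50% increase criterion from the protocol corresponds to `ratio > 1.5` on segments recorded before and after each transition.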

4.
Global Spine J ; : 21925682241290752, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39359113

ABSTRACT

STUDY DESIGN: Narrative review. OBJECTIVES: Artificial intelligence (AI) is being increasingly applied to the domain of spine surgery. We present a review of AI in spine surgery, including its use across all stages of the perioperative process and its applications for research. We also provide commentary regarding future ethical considerations of AI use and how it may affect surgeon-industry relations. METHODS: We conducted a comprehensive literature review of peer-reviewed articles that examined applications of AI during the pre-, intra-, or postoperative spine surgery process. We also discuss the relationship among AI, spine industry partners, and surgeons. RESULTS: Preoperatively, AI has mainly been applied to image analysis, patient diagnosis and stratification, and decision-making. Intraoperatively, AI has been used to aid image guidance and navigation. Postoperatively, AI has been used for outcomes prediction and analysis. AI can enable the curation and analysis of huge datasets that can enhance research efforts. Large amounts of data are being accrued by industry sources for use by their AI platforms, though the inner workings of these datasets and algorithms are not well known. CONCLUSIONS: AI has found numerous uses in the pre-, intra-, and postoperative spine surgery process, and its applications continue to grow. The clinical applications and benefits of AI will continue to be more fully realized, but so will certain ethical considerations. Making industry-sponsored databases open source, or at least somehow available to the public, will help alleviate potential biases and opacity between surgeons and industry and will benefit patient care.

5.
J Clin Monit Comput ; 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39356374

ABSTRACT

The main objective of this study is to evaluate the low-cost, open-source HEGduino device as a tissue oximetry monitor to advance research on somatic NIRS monitoring. Specifically, this study analyzes the use of this portable functional NIRS system for detecting the cessation of blood flow due to vascular occlusion in an upper limb. Nineteen healthy participants aged between 25 and 50 were recruited and monitored using the HEGduino device. Participants underwent a vascular occlusion test on one forearm. Raw values collected by the HEGduino, as well as processed variables derived from the measurements, were registered. Additional variables characterizing the signal noise during the tests were also recorded. The data distribution curves for all subjects in the study accurately detected the physiological events associated with transient tissue ischemia. Statistical analysis of the recorded data showed that the difference between the baseline values recorded by the red LED (RED) and its normalized minimum was always different from zero (p < 0.014). Furthermore, the difference between the normalized baseline values recorded by the infrared LED (IR) and the corresponding normalized minimum was also different from zero (p < 0.001). The R-squared coefficient of determination for the noise variables considered in this study on the normalized RED and IR values was 0.08 and 0.105, respectively. The study confirms the potential of the HEGduino system to detect an interruption of blood flow by means of variations in regional tissue oxygen saturation, and demonstrates it as a monitoring alternative for advancing the study of NIRS applicability in muscle tissue oximetry.
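The baseline-versus-minimum statistic described above can be reproduced on synthetic data; this sketch assumes a 1 Hz sampling rate, an arbitrary raw-intensity scale, and an invented occlusion window, none of which is specified by the HEGduino study:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(300)  # seconds, an assumed 1 Hz sampling rate

# Simulated RED-channel trace: stable baseline, a desaturation ramp during
# cuff occlusion (60-240 s), then recovery after cuff release.
red = 520.0 + rng.normal(0, 1.0, size=t.size)
red[60:240] -= np.linspace(0, 40, 180)   # ischemic fall
red[240:] -= np.linspace(38, 0, 60)      # reperfusion recovery

baseline = red[:60].mean()
red_norm = red / baseline            # normalize to the pre-occlusion baseline
drop = 1.0 - red_norm.min()          # baseline-vs-minimum difference

print(f"normalized desaturation depth: {drop:.3f}")
```

The study's test amounts to checking that `drop` is reliably different from zero across participants, which distinguishes occlusion from a flat baseline.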

7.
HardwareX ; 19: e00570, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39262424

ABSTRACT

The current lack of standardized testing methods to assess the binding isotherms of ions in cement and concrete research leads to uncontrolled variability in results. In this study, an open-source and low-cost apparatus, named OpenHW3, is proposed to accurately measure the binding isotherms of ions in various cementitious material systems. OpenHW3 provides two main options: a purpose-built temperature-controlled orbital shaker, and a retrofit that adds temperature control to a commercial orbital shaker. The effectiveness of these device options is validated via comparison with conventional binding isotherm experiments. The binding isotherm results were comparable to those from conventional water-bath shakers, while being more reliable than those from horizontal commercial shakers. OpenHW3 also provided accurate temperature control between 25 °C and 75 °C. These results are critical for enabling open access to scientific equipment and for providing high-quality binding isotherm data for reliable service-life models of urban infrastructure assets throughout the world.
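Once equilibrium concentrations are measured on such a shaker, the binding isotherm itself is typically obtained by curve fitting; a minimal sketch with SciPy, fitting a Langmuir isotherm to hypothetical chloride-binding data (all numbers below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_free, b_max, k):
    """Langmuir binding isotherm: bound = b_max * K * c / (1 + K * c)."""
    return b_max * k * c_free / (1 + k * c_free)

# Hypothetical equilibrium data: free chloride concentration (mol/L)
# vs. bound chloride (mg/g binder), as produced by a shaker-based test.
c_free = np.array([0.1, 0.3, 0.5, 1.0, 2.0, 3.0])
bound = np.array([2.1, 4.8, 6.2, 8.5, 10.1, 10.9])

(b_max, k), _ = curve_fit(langmuir, c_free, bound, p0=(12, 1))
print(f"b_max = {b_max:.1f} mg/g, K = {k:.2f} L/mol")
```

Temperature control matters here because both fitted parameters are temperature-dependent, which is why the 25-75 °C range validated above is significant.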

8.
J Cell Sci ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39258319

ABSTRACT

Environment-sensitive probes are frequently used in spectral/multi-channel microscopy to study alterations in cell homeostasis. However, the few open-source packages available for processing of spectral images are limited in scope. Here, we present VISION, a stand-alone software based on Python for spectral analysis with improved applicability. In addition to classical intensity-based analysis, our software can batch-process multidimensional images with an advanced single-cell segmentation capability and apply user-defined mathematical operations on spectra to calculate biophysical and metabolic parameters of single cells. VISION allows for 3D and temporal mapping of properties such as membrane fluidity and mitochondrial potential. We demonstrate the broad applicability of VISION by applying it to study the effect of various drugs on cellular biophysical properties; the correlation between membrane fluidity and mitochondrial potential; protein distribution in cell-cell contacts; and properties of nanodomains in cell-derived vesicles. Together with the code, we provide a graphical user interface for facile adoption.
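A typical user-defined spectral operation of the kind VISION supports is a per-pixel ratio metric; the sketch below computes a generalized polarization (GP) map, a common readout of membrane fluidity for polarity-sensitive probes such as Laurdan. The image, channel assignments, and intensity ranges are all synthetic assumptions, not VISION's API:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-channel spectral image (y, x, channel): channel 0 an
# ordered-phase emission band, channel 1 a disordered-phase band.
img = rng.uniform(100, 200, size=(64, 64, 2))

# Generalized polarization per pixel: GP = (I_ord - I_dis) / (I_ord + I_dis)
i_ord, i_dis = img[..., 0], img[..., 1]
gp = (i_ord - i_dis) / (i_ord + i_dis)

print(f"mean GP: {gp.mean():+.3f}, range: [{gp.min():.2f}, {gp.max():.2f}]")
```

Combined with single-cell segmentation masks, averaging such a map within each mask yields the per-cell biophysical parameters the abstract describes.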

9.
J Proteome Res ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39254081

ABSTRACT

The FragPipe computational proteomics platform is gaining widespread popularity among the proteomics research community because of its fast processing speed and user-friendly graphical interface. Although FragPipe produces well-formatted output tables that are ready for analysis, there is still a need for an easy-to-use and user-friendly downstream statistical analysis and visualization tool. FragPipe-Analyst addresses this need by providing an R shiny web server to assist FragPipe users in conducting downstream analyses of the resulting quantitative proteomics data. It supports major quantification workflows, including label-free quantification, tandem mass tags, and data-independent acquisition. FragPipe-Analyst offers a range of useful functionalities, such as various missing value imputation options, data quality control, unsupervised clustering, differential expression (DE) analysis using Limma, and gene ontology and pathway enrichment analysis using Enrichr. To support advanced analysis and customized visualizations, we also developed FragPipeAnalystR, an R package encompassing all FragPipe-Analyst functionalities that is extended to support site-specific analysis of post-translational modifications (PTMs). FragPipe-Analyst and FragPipeAnalystR are both open-source and freely available.
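FragPipe-Analyst performs its DE step with Limma in R; a rough Python analogue of the core comparison, per-protein t-tests with Benjamini-Hochberg correction but without Limma's variance moderation, can be sketched as follows (the matrix sizes and spike-in are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical log2-intensity matrix: 200 proteins x 8 samples (4 vs 4).
n_prot = 200
ctrl = rng.normal(20, 1, size=(n_prot, 4))
treat = rng.normal(20, 1, size=(n_prot, 4))
treat[:20] += 4  # spike in 20 truly regulated proteins

t_stat, p = stats.ttest_ind(treat, ctrl, axis=1)

# Benjamini-Hochberg adjustment (Limma additionally moderates variances)
order = np.argsort(p)
ranks = np.empty_like(order)
ranks[order] = np.arange(1, n_prot + 1)
q = np.minimum(1.0, p * n_prot / ranks)
q[order] = np.minimum.accumulate(q[order][::-1])[::-1]  # enforce monotonicity

print(f"proteins at q < 0.05: {(q < 0.05).sum()}")
```

Missing-value imputation, mentioned in the abstract, would occur upstream of this step, since t-tests require complete per-protein sample vectors.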

10.
Sensors (Basel) ; 24(17)2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39275696

ABSTRACT

Fusing data from many sources helps to achieve improved analysis and results. In this work, we present a new algorithm to fuse data from multiple cameras with data from multiple lidars. This algorithm was developed to increase the sensitivity and specificity of autonomous vehicle perception systems, where the most accurate sensors measuring the vehicle's surroundings are cameras and lidar devices. Perception systems based on data from only one type of sensor use incomplete information and deliver lower-quality results. The camera provides two-dimensional images; lidar produces three-dimensional point clouds. We developed a method for matching pixels on a pair of stereoscopic images using dynamic programming, inspired by an algorithm used in bioinformatics to match sequences of amino acids. We improve the quality of the basic algorithm using additional data from edge detectors, and we improve its performance by restricting the set of candidate pixel matches based on feasible vehicle speeds. In the final step of our method, we perform point cloud densification, fusing lidar output with stereo vision output. We implemented our algorithm in C++ with a Python API and provide it as an open-source library named Stereo PCD, which efficiently fuses data from multiple cameras and multiple lidars. In the article, we present the results of our approach on benchmark databases in terms of quality and performance, and we compare our algorithm with other popular methods.
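The sequence-alignment analogy can be made concrete: the same dynamic program that aligns amino acids (Needleman-Wunsch) aligns two stereo scanlines if the substitution cost becomes an intensity difference and gaps model occlusions. A toy sketch of this idea (the gap penalty and intensities are arbitrary, and this is not the Stereo PCD implementation):

```python
import numpy as np

def match_scanlines(left, right, gap=2.0):
    """Align two stereo scanlines with Needleman-Wunsch-style dynamic
    programming, scoring pixel pairs by absolute intensity difference and
    charging a fixed penalty for occluded (gapped) pixels."""
    n, m = len(left), len(right)
    cost = np.zeros((n + 1, m + 1))
    cost[:, 0] = np.arange(n + 1) * gap   # leading occlusion penalties
    cost[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + abs(left[i - 1] - right[j - 1])
            cost[i, j] = min(match, cost[i - 1, j] + gap, cost[i, j - 1] + gap)
    return cost[n, m]

# A right scanline that is the left one shifted by one pixel (disparity 1):
# the optimal alignment skips one pixel at each end (cost 2 * gap = 4).
left = np.array([10, 10, 80, 80, 10, 10], dtype=float)
right = np.array([10, 80, 80, 10, 10, 10], dtype=float)
print(f"alignment cost: {match_scanlines(left, right):.1f}")
```

Backtracking through the cost table (omitted here) yields the per-pixel correspondences, and hence the disparities, that a densification step would consume.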

11.
Sensors (Basel) ; 24(18)2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39338625

ABSTRACT

Recent advancements in vehicle technology have stimulated innovation across the automotive sector, from Advanced Driver Assistance Systems (ADAS) to autonomous driving and motorsport applications. Modern vehicles, equipped with sensors for perception, localization, and navigation, and with actuators for autonomous driving, generate vast amounts of data used for training and evaluating autonomous systems. Real-world testing is essential for validation but is complex, expensive, and time-intensive, requiring multiple vehicles and reference systems. To address these challenges, computer graphics-based simulators offer a compelling solution by providing high-fidelity 3D environments in which to simulate vehicles and road users. These simulators are crucial for developing, validating, and testing ADAS, autonomous driving systems, and cooperative driving systems, as well as for enhancing vehicle performance and driver training in motorsport. This paper reviews computer graphics-based simulators tailored for automotive applications. It begins with an overview of their applications and analyzes their key features. It then compares five open-source simulators (CARLA, AirSim, LGSVL, AWSIM, and DeepDrive) and ten commercial simulators. Our findings indicate that open-source simulators are best suited to the research community, offering realistic 3D environments, support for multiple sensors, APIs, co-simulation, and community support. Conversely, commercial simulators, while less extensible, provide a broader set of features and solutions.

12.
Sensors (Basel) ; 24(18)2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39338661

ABSTRACT

Information and decision support systems are essential to conducting scientific field campaigns in the atmospheric sciences. However, their development is costly and time-consuming, since each field campaign has its own research goals, which results in a unique set of sensors and varied analysis procedures. To reduce development costs, we present a software framework based on the Industrial Internet of Things (IIoT), along with an implementation using well-established and newly developed open-source components. The framework architecture and these components allow developers to customize the software to a campaign's specific needs while keeping coding to a minimum. The framework's applicability was tested in two scientific field campaigns investigating air quality, by developing a specialized IIoT application for each. Each application provided online monitoring of the acquired data and an intuitive interface for the scientific team to perform the analysis. The framework presented in this study is sufficiently robust and adaptable to meet the diverse requirements of field campaigns.

13.
Int J Cardiol ; : 132598, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-39341506

ABSTRACT

BACKGROUND: Quantitative coronary angiography (QCA) typically employs traditional edge detection algorithms that often require manual correction. This has important implications for the accuracy of downstream 3D coronary reconstructions and computed haemodynamic indices (e.g., angiography-derived fractional flow reserve). We developed AngioPy, a deep-learning model for coronary segmentation that employs user-defined ground-truth points to boost performance and minimise manual correction. We compared its performance, without correction, with an established QCA system. METHODS: Deep learning models integrating user-defined ground-truth points were developed using 2455 images from the Fractional Flow Reserve versus Angiography for Multivessel Evaluation 2 (FAME 2) study. External validation was performed on a dataset of 580 images. Vessel dimensions from 203 images with mild/moderate stenoses segmented by AngioPy (without correction) and an established QCA system (Medis QFR®) were compared (609 diameters). RESULTS: The top-performing model had an average F1 score of 0.927 (pixel accuracy 0.998, precision 0.925, sensitivity 0.930, specificity 0.999), with 99.2% of masks exhibiting an F1 score > 0.8. Similar results were seen on external validation (F1 score 0.924, pixel accuracy 0.997, precision 0.921, sensitivity 0.929, specificity 0.999). Vessel dimensions from AngioPy exhibited excellent agreement with QCA (r = 0.96 [95% CI 0.95-0.96], p < 0.001; mean difference −0.18 mm [limits of agreement (LOA): −0.84 to 0.49]), including the minimal luminal diameter (r = 0.93 [95% CI 0.91-0.95], p < 0.001; mean difference −0.06 mm [LOA: −0.70 to 0.59]). CONCLUSION: AngioPy, an open-source tool, performs rapid and accurate coronary segmentation without the need for manual correction. It has the potential to increase the accuracy and efficiency of QCA.
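The F1 score (equivalently, the Dice coefficient) used to evaluate segmentation masks is straightforward to compute; a minimal sketch on toy binary masks (the mask shapes below are invented for illustration):

```python
import numpy as np

def f1_score_masks(pred, truth):
    """F1 (Dice) score between two binary segmentation masks:
    2 * TP / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return 2 * tp / (pred.sum() + truth.sum())

# Hypothetical 8x8 vessel masks: the prediction misses one true pixel
# and adds one spurious pixel.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 3:5] = 1            # 8-pixel "vessel"
pred = truth.copy()
pred[2, 3] = 0                 # one missed pixel
pred[6, 3] = 1                 # one spurious pixel

print(f"F1 = {f1_score_masks(pred, truth):.3f}")
```

With 7 true positives against 8 predicted and 8 true pixels, the score is 14/16 = 0.875, illustrating how the 0.927 average above summarizes per-mask overlap.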

14.
BMC Med Imaging ; 24(1): 254, 2024 Sep 27.
Article in English | MEDLINE | ID: mdl-39333958

ABSTRACT

BACKGROUND: The impression section integrates the key findings of a radiology report but can be subjective and variable. We sought to fine-tune and evaluate an open-source Large Language Model (LLM) for automatically generating impressions from the remainder of a radiology report across different imaging modalities and hospitals. METHODS: In this institutional review board-approved retrospective study, we collated a dataset of CT, US, and MRI radiology reports from the University of California San Francisco Medical Center (UCSFMC) (n = 372,716) and the Zuckerberg San Francisco General (ZSFG) Hospital and Trauma Center (n = 60,049), both under a single institution. The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score, an automatic metric that measures word overlap, was used for natural language evaluation. A reader study with five cardiothoracic radiologists was performed to more strictly evaluate the model's performance on a specific modality (chest CT exams) against a subspecialist radiologist baseline. We stratified the reader study results by diagnosis category and original impression length to gauge case complexity. RESULTS: The LLM achieved ROUGE-L scores of 46.51, 44.2, and 50.96 on UCSFMC and, upon external validation, ROUGE-L scores of 40.74, 37.89, and 24.61 on ZSFG across the CT, US, and MRI modalities, respectively, implying a substantial degree of overlap between the model-generated impressions and those written by the subspecialist attending radiologists, with a degree of degradation upon external validation. In our reader study, the model-generated impressions achieved overall mean scores of 3.56/4, 3.92/4, 3.37/4, 18.29 s, 12.32 words, and 84, while the original impressions written by a subspecialist radiologist achieved overall mean scores of 3.75/4, 3.87/4, 3.54/4, 12.2 s, 5.74 words, and 89, for clinical accuracy, grammatical accuracy, stylistic quality, edit time, edit distance, and ROUGE-L score, respectively. The LLM achieved the highest clinical accuracy ratings for acute/emergent findings and for shorter impressions. CONCLUSIONS: An open-source fine-tuned LLM can generate impressions with a satisfactory level of clinical accuracy, grammatical accuracy, and stylistic quality. Our reader study demonstrates the potential of large language models in drafting radiology report impressions that can aid in streamlining radiologists' workflows.
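ROUGE-L, the automatic metric used above, scores the longest common subsequence (LCS) of tokens between generated and reference text; a minimal sketch (this simplified version uses a balanced F-measure and whitespace tokenization, whereas official implementations weight recall more heavily and normalize text):

```python
def rouge_l(candidate, reference):
    """ROUGE-L F-measure from the longest common subsequence of tokens."""
    a, b = candidate.split(), reference.split()
    # Dynamic-programming LCS table
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a):
        for j, wb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if wa == wb
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[-1][-1]
    prec, rec = lcs / len(a), lcs / len(b)
    return 2 * prec * rec / (prec + rec) if lcs else 0.0

# Invented example impressions (not from the study's data)
ref = "no acute cardiopulmonary abnormality"
gen = "no acute cardiopulmonary process"
print(f"ROUGE-L: {rouge_l(gen, ref):.2f}")
```

Because LCS respects word order but tolerates gaps, the metric rewards impressions that preserve the reference's phrasing structure, not just its vocabulary.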


Subject(s)
Natural Language Processing, Humans, Retrospective Studies, Magnetic Resonance Imaging/methods, Tomography, X-Ray Computed/methods, Observer Variation, Radiology Information Systems
15.
Cureus ; 16(8): e67119, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39290911

ABSTRACT

This study presents a detailed methodology for integrating three-dimensional (3D) printing technology into preoperative planning in neurosurgery. The increasing capabilities of 3D printing over the last decade have made it a valuable tool in medical fields such as orthopedics and dental practices. Neurosurgery can similarly benefit from these advancements, though the creation of accurate 3D models poses a significant challenge due to the technical expertise required and the cost of specialized software. This paper demonstrates a step-by-step process for developing a 3D physical model for preoperative planning using free, open-source software. A case involving a 62-year-old male with a large infiltrating tumor in the sacrum, originating from renal cell carcinoma, is used to illustrate the method. The process begins with the acquisition of a CT scan, followed by image reconstruction using InVesalius 3, an open-source software. The resulting 3D model is then processed in Autodesk Meshmixer (Autodesk, Inc., San Francisco, CA), where individual anatomical structures are segmented and prepared for printing. The model is printed using the Bambu Lab X1 Carbon 3D printer (Bambu Lab, Austin, TX), allowing for multicolor differentiation of structures such as bones, tumors, and blood vessels. The study highlights the practical aspects of model creation, including artifact removal, surface separation, and optimization for print volume. It discusses the advantages of multicolor printing for visual clarity in surgical planning and compares it with monochromatic and segmented printing approaches. The findings underscore the potential of 3D printing to enhance surgical precision and planning, providing a replicable protocol that leverages accessible technology. This work supports the broader adoption of 3D printing in neurosurgery, emphasizing the importance of collaboration between medical and engineering professionals to maximize the utility of these models in clinical practice.

16.
Neurosurg Clin N Am ; 35(4): 481-488, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39244320

ABSTRACT

Medical technology plays a significant role in the reduction of disability and mortality due to the global burden of disease. The lack of diagnostic technology has been identified as the largest gap in the global health care pathway, and the cost of this technology is a driving factor for its lack of proliferation. Technology developed in high-income countries is often focused on producing high-quality, patient-specific data at a cost high-income markets can pay. While machine learning plays an important role in this process, great care must be taken to ensure appropriate translation to clinical practice.


Subject(s)
Bioengineering, Global Health, Humans, Bioengineering/methods, Bioengineering/trends, Biomedical Technology/trends
17.
Heliyon ; 10(17): e36351, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39281629

ABSTRACT

Background: The ever-increasing volume of academic literature necessitates efficient and sophisticated tools for researchers to analyze, interpret, and uncover trends. Traditional search methods, while valuable, often fail to capture the nuance and interconnectedness of vast research domains. Results: TopicTracker, a novel software tool, addresses this gap by providing a comprehensive solution from querying PubMed databases to creating intricate semantic network maps. Through its functionalities, users can systematically search for desired literature, analyze trends, and visually represent co-occurrences in a given field. Our case studies, including support for the WHO on ethical considerations in infodemic management and mapping the evolution of ethics pre- and post-pandemic, underscore the tool's applicability and precision. Conclusions: TopicTracker represents a significant advancement in academic research tools for text mining. While it has its limitations, primarily tied to its alignment with PubMed, its benefits far outweigh the constraints. As the landscape of research continues to expand, tools like TopicTracker may be instrumental in guiding scholars in their pursuit of knowledge, ensuring they navigate the large amount of literature with clarity and precision.
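The co-occurrence mapping at TopicTracker's core can be sketched in a few lines; the keyword lists below are invented, and in practice they would come from PubMed records retrieved via the NCBI E-utilities (ESearch/EFetch):

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-article keyword lists (stand-ins for MeSH terms or
# extracted topics from fetched PubMed records).
articles = [
    ["ethics", "infodemic", "public health"],
    ["ethics", "pandemic", "infodemic"],
    ["pandemic", "public health"],
    ["ethics", "pandemic"],
]

# Count keyword co-occurrences: the weighted pairs become the edges of a
# semantic network map.
edges = Counter()
for kws in articles:
    for pair in combinations(sorted(set(kws)), 2):
        edges[pair] += 1

for (a, b), w in edges.most_common(3):
    print(f"{a} -- {b}: {w}")
```

Feeding such an edge list to a graph library then yields the visual network maps the tool produces; tracking the counts per publication year reveals trends such as the pre- versus post-pandemic shift described above.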

19.
Heliyon ; 10(17): e36998, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39296015

ABSTRACT

We introduce NMR-Onion, an open-source, computationally efficient algorithm based on Python and PyTorch, designed to facilitate the automatic deconvolution of 1D NMR spectra. NMR-Onion features two innovative time-domain models capable of handling asymmetric non-Lorentzian line shapes. Its core components for resolution-enhanced peak detection and digital filtering of user-specified key regions ensure precise peak prediction and efficient computation. The NMR-Onion framework includes three built-in statistical models, with automatic selection via the BIC criterion. Additionally, NMR-Onion assesses the repeatability of results by evaluating post-modeling uncertainty. Using the NMR-Onion algorithm helps to minimize excessive peak detection.
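BIC-based selection among candidate peak models, as the abstract describes, can be sketched as follows (the residual sums of squares and parameter counts are invented, and NMR-Onion's actual statistical models are considerably richer):

```python
import numpy as np

def bic(n_obs, rss, k):
    """Bayesian information criterion for a Gaussian residual model:
    BIC = n * ln(RSS / n) + k * ln(n); lower is better."""
    return n_obs * np.log(rss / n_obs) + k * np.log(n_obs)

# Hypothetical fits of one spectral region with 1-, 2-, and 3-peak models:
# RSS shrinks as peaks are added, but BIC penalizes the extra parameters.
n = 500  # number of data points in the region
fits = {"1 peak": (12.0, 4), "2 peaks": (5.1, 8), "3 peaks": (5.0, 12)}

scores = {name: bic(n, rss, k) for name, (rss, k) in fits.items()}
best = min(scores, key=scores.get)
print(f"selected model: {best}")
```

Here the third peak barely improves the fit, so its parameter penalty tips the criterion back toward the two-peak model, which is exactly the over-fitting (excessive peak detection) that automatic selection guards against.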

20.
Front Neurosci ; 18: 1385847, 2024.
Article in English | MEDLINE | ID: mdl-39221005

ABSTRACT

Diffusion-weighted imaging (DWI) is the primary method to investigate macro- and microstructure of neural white matter in vivo. DWI can be used to identify and characterize individual-specific white matter bundles, enabling precise analyses on hypothesis-driven connections in the brain and bridging the relationships between brain structure, function, and behavior. However, cortical endpoints of bundles may span larger areas than what a researcher is interested in, challenging presumptions that bundles are specifically tied to certain brain functions. Functional MRI (fMRI) can be integrated to further refine bundles such that they are restricted to functionally-defined cortical regions. Analyzing properties of these Functional Sub-Bundles (FSuB) increases precision and interpretability of results when studying neural connections supporting specific tasks. Several parameters of DWI and fMRI analyses, ranging from data acquisition to processing, can impact the efficacy of integrating functional and diffusion MRI. Here, we discuss the applications of the FSuB approach, suggest best practices for acquiring and processing neuroimaging data towards this end, and introduce the FSuB-Extractor, a flexible open-source software for creating FSuBs. We demonstrate our processing code and the FSuB-Extractor on an openly-available dataset, the Natural Scenes Dataset.
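The core FSuB idea, restricting a bundle to streamlines whose endpoints fall inside a functionally defined region, can be sketched with plain NumPy (the grid size, ROI, and streamline coordinates are all synthetic, and FSuB-Extractor itself operates on real tractography and fMRI-derived ROIs):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical functional ROI: a binary mask on a 10x10x10 voxel grid.
roi = np.zeros((10, 10, 10), dtype=bool)
roi[4:7, 4:7, 4:7] = True

# Hypothetical streamlines: each an (n_points, 3) array of voxel coordinates.
streamlines = [rng.uniform(0, 10, size=(20, 3)) for _ in range(100)]

def endpoint_in_roi(sl, mask):
    """Keep a streamline if either endpoint falls inside the functional ROI,
    the filtering step that defines a functional sub-bundle (FSuB)."""
    ends = np.floor(sl[[0, -1]]).astype(int)
    return bool(mask[tuple(ends[0])] or mask[tuple(ends[1])])

fsub = [sl for sl in streamlines if endpoint_in_roi(sl, roi)]
print(f"{len(fsub)} of {len(streamlines)} streamlines retained")
```

Real pipelines must additionally reconcile the diffusion and functional coordinate spaces via registration before this masking step, one of the acquisition and processing choices the paper's best practices address.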
