ABSTRACT
Phillip L. Geissler made important contributions to the statistical mechanics of biological polymers, heterogeneous materials, and chemical dynamics in aqueous environments. He devised analytical and computational methods that revealed the underlying organization of complex systems at the frontiers of biology, chemistry, and materials science. In this retrospective we celebrate his work at these frontiers.
Subjects
Physics, Male, Humans, Retrospective Studies, Physical Chemistry
ABSTRACT
Light-sheet fluorescence microscopy (LSFM), a prominent fluorescence microscopy technique, offers enhanced temporal resolution for imaging biological samples in four dimensions (4D; x, y, z, time). Some of the most recent implementations, including inverted selective plane illumination microscopy (iSPIM) and lattice light-sheet microscopy (LLSM), move the sample substrate at an oblique angle relative to the detection objective's optical axis. Data from such tilted-sample-scan LSFMs require subsequent deskewing and rotation for proper visualisation and analysis. Such data preprocessing operations currently demand substantial memory allocation and pose significant computational challenges for large 4D datasets. The consequence is a data preprocessing time that is prolonged relative to the data acquisition time, which limits the ability to view the data live as it is being captured by the microscope. To enable the fast preprocessing of large light-sheet microscopy datasets without significant hardware demand, we have developed WH-Transform, a memory-efficient transformation algorithm for deskewing and rotating the raw dataset, significantly reducing memory usage and runtime by more than 10-fold for large image stacks. Benchmarked against the conventional method and existing software, our approach demonstrates linear runtime, compared with the cubic and quadratic runtimes of the other approaches. Preprocessing a raw 3D volume of 2 GB (512 × 1536 × 600 pixels) can be accomplished in 3 s using a GPU with 24 GB of memory on a single workstation. Applied to 4D LLSM datasets of human hepatocytes, lung organoid tissue and brain organoid tissue, our method provided rapid and accurate preprocessing within seconds. Importantly, such preprocessing speeds now allow visualisation of the raw microscope data stream in real time, significantly improving the usability of LLSM in biology. In summary, this advancement holds transformative potential for light-sheet microscopy, enabling real-time, on-the-fly data preprocessing, visualisation, and analysis on standard workstations, thereby revolutionising biological imaging applications for LLSM and similar microscopes.
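For readers unfamiliar with the deskewing step mentioned above, the sketch below shows a minimal, conventional nearest-pixel deskew of a tilted-sample-scan stack. It is not the WH-Transform itself, and the tilt angle, stage step, and pixel size are assumed example values.

```python
# Minimal sketch of conventional deskewing (not WH-Transform); parameters are
# assumed example values for an obliquely scanned light-sheet stack.
import numpy as np

def deskew_nearest(volume: np.ndarray, angle_deg: float = 30.0,
                   dz_stage_um: float = 0.4, dxy_um: float = 0.104) -> np.ndarray:
    """Shift each z-plane along x to undo the oblique stage scan.

    volume      : raw stack with axes (z, y, x)
    angle_deg   : tilt between the stage-scan axis and the detection axis
    dz_stage_um : stage step between planes (micrometres)
    dxy_um      : pixel size in the image plane (micrometres)
    """
    nz, ny, nx = volume.shape
    shift_per_plane = dz_stage_um * np.cos(np.deg2rad(angle_deg)) / dxy_um
    pad = int(np.ceil(shift_per_plane * (nz - 1)))
    out = np.zeros((nz, ny, nx + pad), dtype=volume.dtype)
    for z in range(nz):
        offset = int(round(z * shift_per_plane))     # nearest-pixel lateral shift
        out[z, :, offset:offset + nx] = volume[z]
    return out

raw = np.random.randint(0, 4096, size=(64, 128, 256), dtype=np.uint16)  # toy stack
print(deskew_nearest(raw).shape)
```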
ABSTRACT
OBJECTIVE: This study aimed to 1) investigate algorithm enhancements for identifying patients eligible for genetic testing of hereditary cancer syndromes using family history data from electronic health records (EHRs); and 2) assess their impact on relative differences across sex, race, ethnicity, and language preference. MATERIALS AND METHODS: The study used EHR data from a tertiary academic medical center. A baseline rule-based algorithm, relying on structured family history data (structured data; SD), was enhanced using a natural language processing (NLP) component and a relaxed-criteria algorithm (partial match [PM]). The identification rates and differences were analyzed considering sex, race, ethnicity, and language preference. RESULTS: Among 120,007 patients aged 25-60, detection rate differences were found across all groups using the SD (all P < 0.001). Both enhancements increased identification rates; NLP led to a 1.9% increase and the relaxed-criteria algorithm (PM) led to an 18.5% increase (both P < 0.001). Combining SD with NLP and PM yielded a 20.4% increase (P < 0.001). Similar increases were observed within subgroups. Relative differences persisted across most categories for the enhanced algorithms, with disproportionately higher identification of patients who are White, female, non-Hispanic, and whose preferred language is English. CONCLUSION: Algorithm enhancements increased identification rates for patients eligible for genetic testing of hereditary cancer syndromes, regardless of sex, race, ethnicity, and language preference. However, differences in identification rates persisted, emphasizing the need for additional strategies to reduce disparities, such as addressing underlying biases in EHR family health information and selectively applying algorithm enhancements for disadvantaged populations. Systematic assessment of differences in algorithm performance across population subgroups should be incorporated into algorithm development processes.
Subjects
Algorithms, Hereditary Neoplastic Syndromes, Humans, Female, Genetic Testing, Electronic Health Records, Natural Language Processing
ABSTRACT
The purpose of this study was to develop and validate an algorithm for identifying Veterans with a history of traumatic brain injury (TBI) in the Veterans Affairs (VA) electronic health record using VA Million Veteran Program (MVP) data. Manual chart review (n = 200) was first used to establish 'gold standard' diagnosis labels for TBI ('Yes TBI' vs. 'No TBI'). To develop our algorithm, we used PheCAP, a semi-supervised pipeline that relied on the chart review diagnosis labels to train and create a prediction model for TBI. Cross-validation was used to train and evaluate the proposed algorithm, 'TBI-PheCAP.' TBI-PheCAP performance was compared to existing TBI algorithms and phenotyping methods, and the final algorithm was run on all MVP participants (n = 702,740) to assign a predicted probability of TBI and a binary classification status, with the decision threshold chosen to yield a specificity of 90%. The TBI-PheCAP algorithm had an area under the receiver operating characteristic curve of 0.92, sensitivity of 84%, and positive predictive value (PPV) of 98% at the 90% specificity threshold. TBI-PheCAP generally performed better than other classification methods, with equivalent or higher sensitivity and PPV than existing rules-based TBI algorithms and MVP TBI-related survey data. Given its strong classification metrics, the TBI-PheCAP algorithm is recommended for use in future population-based TBI research.
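As a rough illustration of the thresholding step described above (choosing a classification cutoff at a fixed specificity of 90%), the sketch below uses a generic probabilistic classifier on synthetic data. It is not the PheCAP pipeline, and all feature and variable names are placeholders.

```python
# Hedged sketch: pick the probability cutoff that achieves ~90% specificity on
# labeled data, then apply it to all participants. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

def cutoff_at_specificity(y_true, y_prob, target_spec=0.90):
    fpr, tpr, thresholds = roc_curve(y_true, y_prob)
    spec = 1 - fpr
    # Highest-sensitivity threshold that still meets the specificity target.
    ok = np.where(spec >= target_spec)[0]
    return thresholds[ok[-1]]

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(200, 5))          # chart-reviewed subset (hypothetical)
y_labeled = rng.integers(0, 2, 200)            # gold-standard TBI labels (hypothetical)
X_all = rng.normal(size=(1000, 5))             # all participants (hypothetical)

model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
thr = cutoff_at_specificity(y_labeled, model.predict_proba(X_labeled)[:, 1])

probs_all = model.predict_proba(X_all)[:, 1]   # predicted probability of TBI
tbi_status = (probs_all >= thr).astype(int)    # binary classification status
```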
Subjects
Algorithms, Traumatic Brain Injuries, Electronic Health Records, United States Department of Veterans Affairs, Veterans, Humans, Traumatic Brain Injuries/diagnosis, United States, Male, Female, Middle Aged, Adult, Reproducibility of Results
ABSTRACT
The ability to estimate lower-extremity mechanics in real-world scenarios may untether biomechanics research from a laboratory environment. This is particularly important for military populations, where outdoor ruck marches over variable terrain and the addition of external load are cited as leading causes of musculoskeletal injury. As such, this study aimed to examine (1) the validity of a minimal inertial measurement unit (IMU) sensor system for quantifying lower-extremity kinematics during treadmill walking and running compared with optical motion capture (OMC) and (2) the sensitivity of this IMU system to kinematic changes induced by load, grade, or a combination of the two. The IMU system was able to estimate hip and knee range of motion (ROM) with moderate accuracy during walking but not running. However, statistical parametric mapping (SPM) analyses revealed that IMU and OMC kinematic waveforms were significantly different at most gait phases. The IMU system was capable of detecting differences in knee kinematic waveforms that occur with added load but was not sensitive to the changes in grade that influence lower-extremity kinematics when measured with OMC. While IMUs may be able to identify hip and knee ROM during gait, they are not suitable for replicating lab-level kinematic waveforms.
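The hedged sketch below illustrates how hip or knee ROM can be computed from time-normalized joint-angle waveforms and compared between an IMU system and OMC. The arrays are synthetic stand-ins, not the study's data, and the study's SPM-based waveform comparison is not reproduced here.

```python
# Minimal sketch: per-cycle ROM and waveform-level error between IMU and OMC
# joint-angle waveforms (101 points per gait cycle). Data are synthetic.
import numpy as np

def range_of_motion(waveforms: np.ndarray) -> np.ndarray:
    """waveforms: (n_cycles, 101) joint angle in degrees -> ROM per cycle."""
    return waveforms.max(axis=1) - waveforms.min(axis=1)

rng = np.random.default_rng(1)
knee_omc = 30 + 30 * np.sin(np.linspace(0, np.pi, 101)) + rng.normal(0, 1, (20, 101))
knee_imu = knee_omc + rng.normal(0, 3, knee_omc.shape)   # IMU as a noisy estimate

rom_omc, rom_imu = range_of_motion(knee_omc), range_of_motion(knee_imu)
bias = np.mean(rom_imu - rom_omc)                        # mean ROM error (deg)
rmse = np.sqrt(np.mean((knee_imu - knee_omc) ** 2))      # waveform-level RMSE (deg)
print(f"ROM bias: {bias:.1f} deg, waveform RMSE: {rmse:.1f} deg")
```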
Subjects
Knee Joint, Walking, Biomechanical Phenomena, Gait, Articular Range of Motion, Humans
ABSTRACT
The xCures platform aggregates, organizes, structures, and normalizes clinical EMR data across care sites, utilizing advanced technologies for near real-time access. The platform generates data in a format that supports clinical care, accelerates research, and promotes artificial intelligence/machine learning algorithm development, highlighted by a clinical decision support algorithm for precision oncology.
Subjects
Artificial Intelligence, Electronic Health Records, Machine Learning, Neoplasms, Humans, Neoplasms/therapy, Algorithms, Medical Oncology/methods, Clinical Trials as Topic, Clinical Decision Support Systems
ABSTRACT
This study aimed to validate a 7-sensor inertial measurement unit system against optical motion capture to estimate bilateral lower-limb kinematics. Hip, knee, and ankle sagittal plane peak angles and range of motion (ROM) were compared during bodyweight squats and countermovement jumps in 18 participants. In the bodyweight squats, left peak hip flexion (intraclass correlation coefficient [ICC] = .51), knee extension (ICC = .68), and ankle plantar flexion (ICC = .55), and hip (ICC = .63) and knee (ICC = .52) ROM showed moderate agreement, and right knee ROM showed good agreement (ICC = .77). Relatively higher agreement was observed in the countermovement jumps than in the bodyweight squats, with moderate to good agreement for right peak knee flexion (ICC = .73) and for right (ICC = .75) and left (ICC = .83) knee ROM. Moderate agreement was observed for right ankle plantar flexion (ICC = .63) and ROM (ICC = .51). Moderate agreement (ICC > .50) was observed for all variables in the left limb except hip extension, knee flexion, and dorsiflexion. In general, there was poor agreement for peak flexion angles and at least moderate agreement for joint ROM. Future work will aim to optimize methodologies to increase usability and confidence in data interpretation by minimizing variance in system-based differences and may also benefit from expanding the planes of movement examined.
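A minimal sketch of the agreement analysis reported above: computing an ICC between IMU- and OMC-derived peak angles across participants using the pingouin package. The data are synthetic placeholders, and the specific ICC model used in the study is not stated here.

```python
# Hedged sketch: ICC between two measurement systems for one variable
# (peak knee flexion), with hypothetical values for 18 participants.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
omc = rng.normal(100, 10, 18)            # hypothetical OMC peak knee flexion (deg)
imu = omc + rng.normal(0, 5, 18)         # IMU estimate with added error

df = pd.DataFrame({
    "participant": np.tile(np.arange(18), 2),
    "system": ["OMC"] * 18 + ["IMU"] * 18,
    "peak_knee_flexion": np.concatenate([omc, imu]),
})

icc = pg.intraclass_corr(data=df, targets="participant", raters="system",
                         ratings="peak_knee_flexion")
print(icc[["Type", "ICC", "CI95%"]])     # inspect the ICC model of interest
```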
Subjects
Ankle, Lower Extremity, Humans, Biomechanical Phenomena, Ankle Joint, Knee Joint, Posture, Articular Range of Motion
ABSTRACT
The development of a medical image analysis (MIA) algorithm is a complex process that includes the multiple sub-steps of model training, data visualization, human-computer interaction, and graphical user interface (GUI) construction. To accelerate the development process, algorithm developers need a software tool that assists with all of these sub-steps so that they can focus on implementing the core functionality. In particular, for the development of deep learning (DL) algorithms, a software tool supporting training data annotation and GUI construction is highly desirable. In this work, we constructed AnatomySketch, an extensible open-source software platform with a friendly GUI and a flexible plugin interface for integrating user-developed algorithm modules. Through the plugin interface, algorithm developers can quickly create a GUI-based software prototype for clinical validation. AnatomySketch supports image annotation using a stylus and a multi-touch screen. It also provides efficient tools to facilitate collaboration between human experts and artificial intelligence (AI) algorithms. We demonstrate four exemplar applications: customized MRI image diagnosis, interactive lung lobe segmentation, human-AI collaborative spine disc segmentation, and Annotation-by-iterative-Deep-Learning (AID) for DL model training. Using AnatomySketch, the gap between laboratory prototyping and clinical testing is bridged and the development of MIA algorithms is accelerated. The software is openly available at https://github.com/DlutMedimgGroup/AnatomySketch-Software.
Subjects
Software, User-Computer Interface, Humans, Algorithms, Artificial Intelligence, Magnetic Resonance Imaging/methods
ABSTRACT
Introduction: Asthma is a common childhood respiratory disorder characterized by wheeze, cough, and respiratory distress responsive to bronchodilator therapy. Asthma severity can be determined by subjective, manual scoring systems such as the Pulmonary Score (PS). These systems require significant medical training and expertise to rate clinical findings such as wheeze characteristics and work of breathing. In this study, we report the development of an objective method of assessing acute asthma severity based on the automated analysis of cough sounds. Methods: We collected a cough sound dataset from 224 children: 103 without acute asthma and 121 with acute asthma. Using this database, coupled with clinical diagnoses and PS determined by a clinical panel, we developed a machine classifier algorithm to characterize the severity of airway constriction. The performance of our algorithm was then evaluated against the PS from a separate set of patients, independent of the training set. Results: The cough-only model discriminated no/mild disease (PS 0-1) from severe disease (PS 5-6) but required a modified respiratory rate calculation to separate very severe disease (PS > 6). Asymptomatic children (PS 0) were separated from moderate asthma (PS 2-4) by the cough-only model without the need for clinical inputs. Conclusions: The PS provides information for managing childhood asthma but is not readily usable by non-medical personnel. Our method offers an objective measurement of asthma severity that does not rely on clinician-dependent inputs. It holds potential for use in clinical settings, including improving the performance of existing asthma-rating scales, and in community-management programs. Abbreviations: AM, accessory muscle; BI, breathing index; CI, confidence interval; FEV1, forced expiratory volume in one second; LR, logistic regression; PEFR, peak expiratory flow rate; PS, pulmonary score; RR, respiratory rate; SD, standard deviation; SE, standard error; WA, Western Australia.
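As a purely illustrative sketch of the classification step (the abbreviation list above mentions logistic regression), the code below trains a logistic-regression model to separate no/mild from severe presentations using hypothetical cough-derived features. The feature set and labels are synthetic, not the study's acoustic features.

```python
# Illustrative sketch only: logistic regression on hypothetical cough-sound
# features, evaluated on a held-out set as described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 224
X = rng.normal(size=(n, 12))        # hypothetical per-child cough features
y = rng.integers(0, 2, n)           # 0 = no/mild (PS 0-1), 1 = severe (PS 5-6)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                           stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")   # evaluated on patients independent of training
```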
Subjects
Asthma/physiopathology, Cough/physiopathology, Severity of Illness Index, Age Factors, Algorithms, Australia, Child, Preschool Child, Female, Humans, Male, Prospective Studies, Respiratory Function Tests, Respiratory Sounds
ABSTRACT
BACKGROUND: Accurate, coded problem lists are valuable for data reuse, including clinical decision support and research. However, healthcare providers frequently modify coded diagnoses by including or removing common contextual properties in free-text diagnosis descriptions: uncertainty (suspected glaucoma), laterality (left glaucoma), and temporality (glaucoma 2002). These contextual properties can cause a difference in meaning between the underlying diagnosis codes and the modified descriptions, inhibiting data reuse. We therefore aimed to develop and evaluate an algorithm to identify these contextual properties. METHODS: A rule-based algorithm called UnLaTem (Uncertainty, Laterality, Temporality) was developed using a single-center dataset including 288,935 diagnosis descriptions, of which 73,280 (25.4%) were modified by healthcare providers. Internal validation of the algorithm was conducted with an independent sample of 980 unique records. A second validation was conducted with 996 records from a Dutch multicenter dataset including 175,210 modified descriptions from five hospitals. Two researchers independently annotated the two validation samples. Performance of the algorithm was determined using the recall and precision on the validation samples. The algorithm was applied to the multicenter dataset to determine the actual prevalence of the contextual properties within the modified descriptions per specialty. RESULTS: For the single-center dataset, recall (and precision) for removal of uncertainty, uncertainty, laterality, and temporality were 100 (60.0), 99.1 (89.9), 100 (97.3), and 97.6 (97.6), respectively. For the multicenter dataset, the corresponding values were 57.1 (88.9), 86.3 (88.9), 99.7 (93.5), and 96.8 (90.1). Within the modified descriptions of the multicenter dataset, 1.3% contained removal of uncertainty, 9.9% uncertainty, 31.4% laterality, and 9.8% temporality. CONCLUSIONS: We successfully developed a rule-based algorithm named UnLaTem to identify contextual properties in Dutch modified diagnosis descriptions. UnLaTem could be extended with more trigger terms, new rules, and the recognition of term order to increase performance even further. The algorithm's rules are available as additional file 2. Implementing UnLaTem in Dutch hospital systems can improve the precision of information retrieval and extraction from diagnosis descriptions, which can be used for data reuse purposes such as decision support and research.
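A hedged sketch of what a rule-based detector of the three contextual properties might look like. The trigger terms below are illustrative English/Dutch examples only and are not UnLaTem's published rule set.

```python
# Illustrative rule-based detection of uncertainty, laterality, and temporality
# in free-text diagnosis descriptions; trigger terms are placeholders.
import re

RULES = {
    "uncertainty": re.compile(r"\b(suspected|possible|verdenking|mogelijk)\b", re.I),
    "laterality":  re.compile(r"\b(left|right|bilateral|links|rechts)\b", re.I),
    "temporality": re.compile(r"\b(19|20)\d{2}\b"),   # a year mentioned in the text
}

def contextual_properties(description: str) -> dict:
    """Return which contextual properties a modified description contains."""
    return {name: bool(rx.search(description)) for name, rx in RULES.items()}

print(contextual_properties("suspected glaucoma left eye 2002"))
# {'uncertainty': True, 'laterality': True, 'temporality': True}
```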
Subjects
Electronic Health Records, Glaucoma, Algorithms, Humans, Information Storage and Retrieval, Uncertainty
ABSTRACT
BACKGROUND: Pathway analysis is widely applied in transcriptome analysis. Given certain transcriptomic changes, current pathway analysis tools tend to search for the most impacted pathways, which provides insight into underlying biological mechanisms. Further refinement of the enriched pathways and extraction of functional modules by "crosstalk" analysis have been proposed. However, the upstream/downstream relationships between the modules, which may provide extra biological insights such as the coordination of different functional modules and the flow of signal transduction, have been ignored. RESULTS: To quantitatively analyse the upstream/downstream relationships between functional modules, we developed a novel GEne Set Topological Impact Analysis (GESTIA), which can be used to assemble the enriched pathways and functional modules into a super-module with a topological structure. We showed the advantages of this analysis in the exploration of extra biological insight beyond the individual enriched pathways and functional modules. CONCLUSIONS: GESTIA can be applied to a broad range of pathway/module analysis results. We hope that GESTIA may help researchers get one additional step closer to understanding the molecular mechanism from pathway/module analysis results.
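As a conceptual sketch (not GESTIA itself), the code below shows one way upstream/downstream relationships between modules could be quantified: cross-module edges in a gene-level signaling graph are aggregated into a directed super-module graph and then topologically ordered. Gene names and module assignments are hypothetical.

```python
# Conceptual sketch: derive a super-module topology from gene-level directed
# signaling edges and a module assignment; all inputs are hypothetical.
import networkx as nx

g = nx.DiGraph([("EGFR", "RAS"), ("RAS", "RAF"), ("RAF", "ERK"), ("ERK", "MYC")])
module_of = {"EGFR": "M1", "RAS": "M1", "RAF": "M2", "ERK": "M2", "MYC": "M3"}

# Count directed edges crossing module boundaries.
cross = {}
for u, v in g.edges():
    mu, mv = module_of[u], module_of[v]
    if mu != mv:
        cross[(mu, mv)] = cross.get((mu, mv), 0) + 1

# A module is upstream of another if cross-module edges point toward the latter.
super_module = nx.DiGraph()
super_module.add_edges_from(cross)
print(list(nx.topological_sort(super_module)))   # e.g. ['M1', 'M2', 'M3']
```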
Subjects
Computational Biology, Gene Expression Profiling, Gene Regulatory Networks, Signal Transduction, Transcriptome
ABSTRACT
BACKGROUND: Patients hospitalized for suspected acute coronary syndrome (ACS) are at risk for transient myocardial ischemia. During the "rule-out" phase, continuous ECG ST-segment monitoring can identify transient myocardial ischemia, even when asymptomatic. However, current ST-segment monitoring software is vastly underutilized due to false positive alarms, with resultant alarm fatigue. Current ST algorithms may contribute to alarm fatigue because: (1) they are not designed with a delay (of minutes) and instead alarm on brief spikes (e.g., turning, heart rate changes), and (2) they alarm on changes in a single ECG lead rather than requiring contiguous leads. PURPOSE: This study was designed to determine the sensitivity and specificity of ST algorithms when accounting for ST magnitude (100 µV vs 200 µV), duration, and changes in contiguous ECG leads (i.e., aVL, I, -aVR, II, aVF, III; V1, V2, V3, V4, V5, V6). METHODS: This was a secondary analysis from the COMPARE Study, which assessed occurrence rates of transient myocardial ischemia in hospitalized patients with suspected ACS using 12-lead Holter. Transient myocardial ischemia was identified from Holter using >100 µV ST-segment elevation or depression, in >1 ECG lead, for >1 min. Algorithms tested against Holter transient myocardial ischemia used the University of California San Francisco (UCSF) ECG algorithm and included: (1) 100 µV vs 200 µV in any lead during a 5-min ST average; (2) 100 µV vs 200 µV in any lead for >5 min; (3) 100 µV vs 200 µV during a 5-min ST average in contiguous leads; and (4) 100 µV vs 200 µV for >5 min in contiguous leads (Table below). RESULTS: In 361 patients (mean age 63 ± 12 years; 63% male; 56% prior CAD), 43 (11%) had transient myocardial ischemia. Of the 43 patients with transient myocardial ischemia, 17 (40%) had ST-segment elevation events and 26 (60%) had ST-segment depression events. A higher proportion of patients with ST-segment depression had missed ischemic events. The Table shows sensitivity and specificity for the four algorithms tested. CONCLUSIONS: Sensitivity was highly variable due to the ST threshold selected, with the 100 µV measurement point being superior to the 200 µV amplitude threshold. Of all the algorithms tested, moderate sensitivity and specificity (70% and 68%) were achieved using the 100 µV ST-segment threshold with ST-segment changes integrated across contiguous leads during a 5-min average.
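The sketch below illustrates one of the tested rule variants (a 100 µV threshold applied to a 5-minute ST average in contiguous leads). It is a simplified stand-in, not the UCSF algorithm, and the sampling interval, lead ordering, and data layout are assumptions.

```python
# Simplified sketch: alarm only when the 5-minute average |ST deviation| exceeds
# 100 µV in two anatomically adjacent leads within the same lead group.
import numpy as np

LEAD_GROUPS = [["aVL", "I", "-aVR", "II", "aVF", "III"],
               ["V1", "V2", "V3", "V4", "V5", "V6"]]
ALL_LEADS = LEAD_GROUPS[0] + LEAD_GROUPS[1]

def ischemia_alarm(st_uv: np.ndarray, leads: list, threshold_uv: float = 100.0,
                   window_min: int = 5) -> bool:
    """st_uv: (n_minutes, n_leads) ST deviation in microvolts, one row per minute."""
    kernel = np.ones(window_min) / window_min
    for group in LEAD_GROUPS:
        cols = [leads.index(l) for l in group if l in leads]
        # 5-minute moving average of |ST deviation| for each lead in this group.
        avg = np.vstack([np.convolve(np.abs(st_uv[:, j]), kernel, mode="valid")
                         for j in cols]).T
        exceeded = avg >= threshold_uv
        # Require two adjacent leads to exceed the threshold in the same window.
        if np.any(exceeded[:, :-1] & exceeded[:, 1:]):
            return True
    return False

st = np.zeros((30, 12)); st[10:20, 2:4] = 150    # sustained deviation in -aVR and II
print(ischemia_alarm(st, ALL_LEADS))             # True
```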
Subjects
Acute Coronary Syndrome/diagnosis, Algorithms, Electrocardiography, Myocardial Ischemia/diagnosis, Acute Coronary Syndrome/physiopathology, Differential Diagnosis, False Positive Reactions, Female, Humans, Male, Middle Aged, Myocardial Ischemia/physiopathology, Sensitivity and Specificity
ABSTRACT
BACKGROUND: During gait training, physical therapists continuously supervise stroke survivors and provide physical support to the pelvis when they judge that the patient is unable to keep their balance. This paper is the first to provide quantitative data about the corrective forces that therapists use during gait training. It is assumed that changes in the acceleration of a patient's center of mass (COM) are a good predictor of therapeutic balance assistance during the training sessions. Therefore, this paper provides a method that predicts the timing of therapeutic balance assistance, based on acceleration data of the sacrum. METHODS: Eight sub-acute stroke survivors and seven therapists were included in this study. Patients were asked to perform straight-line walking as well as slalom walking in a conventional training setting. Acceleration of the sacrum was captured by an inertial magnetic measurement unit. Balance-assisting corrective forces applied by the therapist were collected from two force sensors positioned on both sides of the patient's hips. Measures to characterize the therapeutic balance assistance were the amount of force, duration, impulse, and the anatomical plane in which the assistance took place. Based on the acceleration data of the sacrum, an algorithm was developed to predict therapeutic balance assistance. To validate the developed algorithm, the events of balance assistance predicted by the algorithm were compared with the actual therapeutic assistance provided. RESULTS: The algorithm was able to predict the actual therapeutic assistance with a positive predictive value of 87% and a true positive rate of 81%. Assistance mainly took place along the medio-lateral axis, and corrective forces of about 2% of the patient's body weight (median (IQR) 15.9 N (11)) were provided by therapists in this plane. Median (IQR) duration of balance assistance was 1.1 s (0.6), and median (IQR) impulse was 9.4 Ns (8.2). Although therapists were specifically instructed to aim for the force sensors on the iliac crest, a different contact location was reported in 22% of the corrections. CONCLUSIONS: This paper presents insights into the behavior of therapists regarding their manual physical assistance during gait training. A quantitative dataset was presented, representing the characteristics of therapeutic balance-assisting forces. Furthermore, an algorithm was developed that predicts the events at which therapeutic balance assistance was provided. Prediction scores remained high when different therapists and patients were analyzed with the same algorithm settings. Both the quantitative dataset and the developed algorithm can serve as technical input in the development of (robot-controlled) balance-supportive devices.
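The sketch below is an illustrative stand-in for the prediction and validation steps described above: candidate assistance events are flagged when medio-lateral sacral acceleration exceeds a threshold, and the predictions are scored against force-sensor events with the same metrics (positive predictive value and true positive rate). The threshold, sampling rate, and signals are assumptions, not the authors' algorithm.

```python
# Illustrative sketch: threshold-based event prediction from sacral acceleration
# and sample-wise PPV / TPR against force-sensor-derived events. Data are synthetic.
import numpy as np

def predict_assistance(acc_ml: np.ndarray, threshold: float = 1.5) -> np.ndarray:
    """acc_ml: medio-lateral sacral acceleration (m/s^2); returns a boolean event mask."""
    return np.abs(acc_ml) > threshold

def ppv_tpr(predicted: np.ndarray, actual: np.ndarray) -> tuple:
    tp = np.sum(predicted & actual)                 # sample-wise true positives
    ppv = tp / max(predicted.sum(), 1)
    tpr = tp / max(actual.sum(), 1)
    return ppv, tpr

rng = np.random.default_rng(4)
acc = rng.normal(0, 0.5, 5000)                      # hypothetical 50 s at 100 Hz
acc[1000:1100] += 3.0                               # a simulated balance perturbation
actual = np.zeros(5000, dtype=bool)
actual[1005:1120] = True                            # force-sensor assistance event
print(ppv_tpr(predict_assistance(acc), actual))
```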
Subjects
Neurologic Gait Disorders/rehabilitation, Gait, Physical Therapists, Postural Balance, Stroke Rehabilitation/methods, Acceleration, Aged, Algorithms, Exercise Therapy, Female, Hip/physiology, Humans, Male, Middle Aged, Predictive Value of Tests, Reproducibility of Results, Sacrum/physiology, Survivors, Walking
ABSTRACT
Cysteine (Cys) is a critically important amino acid, serving a variety of functions within proteins, including structural roles, catalysis, and regulation of function through post-translational modifications. Predicting which Cys residues are likely to be reactive is therefore a highly sought-after capability. Few methods are currently available for the task, based either on evaluation of physicochemical features (e.g., pKa and exposure) or on similarity with known instances. In this study, we developed an algorithm (named HAL-Cy) that blends previous work with novel implementations to distinguish reactive from nonreactive Cys. HAL-Cy has two major components: (i) an energy-based part, rooted in the evaluation of hydrogen-bond network contributions, and (ii) a knowledge-based part, composed of different profiling approaches (including a newly developed weighting matrix for sequence profiling). In our evaluations, HAL-Cy provided significantly improved performance compared with existing approaches. We implemented our algorithm in a web service (Cy-preds), the ultimate product of our work, and provided it with a variety of additional features, tools, and options: Cy-preds is capable of performing fully automated calculations for a thorough analysis of Cys reactivity in proteins, ranging from reactivity predictions (e.g., with HAL-Cy) to functional characterization. We believe it represents an original, effective, and very useful addition to the current array of tools available to scientists involved in redox biology, Cys biochemistry, and structural bioinformatics.
Assuntos
Algoritmos , Biologia Computacional/métodos , Cisteína/análise , Cisteína/química , Internet , Sequência de Aminoácidos , Cisteína/metabolismo , Bases de Dados de Proteínas , Modelos Estatísticos , Oxirredução , Alinhamento de SequênciaRESUMO
OBJECTIVES: To develop, validate, and implement algorithms to identify diabetic retinopathy (DR) cases and controls from electronic health records (EHRs). MATERIALS AND METHODS: We developed and validated EHR-based algorithms to identify DR cases and individuals with type I or II diabetes without DR (controls) in 3 independent EHR systems: Vanderbilt University Medical Center Synthetic Derivative (VUMC), the VA Northeast Ohio Healthcare System (VANEOHS), and Massachusetts General Brigham (MGB). Cases were required to meet 1 of the following 3 criteria: (1) 2 or more dates with any DR ICD-9/10 code documented in the EHR, (2) at least 1 affirmative health-factor or EPIC code for DR along with an ICD-9/10 code for DR on a different day, or (3) at least 1 ICD-9/10 code for any DR occurring within 24 hours of an ophthalmology examination. Criteria for controls included affirmative evidence for diabetes as well as an ophthalmology examination. RESULTS: The algorithms, developed and evaluated in VUMC through manual chart review, resulted in a positive predictive value (PPV) of 0.93 for cases and a negative predictive value (NPV) of 0.91 for controls. Implementation of the algorithms yielded similar metrics in VANEOHS (PPV = 0.94; NPV = 0.86) and lower metrics in MGB (PPV = 0.84; NPV = 0.76). In comparison, the algorithm for DR implemented in phenome-wide association studies (PheWAS) in VUMC yielded a similar PPV (0.92) but a substantially reduced NPV (0.48). Implementation of the algorithms in the Million Veteran Program identified over 62,000 DR cases with genetic data, including 14,549 African Americans and 6,209 Hispanics with DR. CONCLUSIONS/DISCUSSION: We demonstrate the robustness of the algorithms at 3 separate healthcare centers, with a minimum PPV of 0.84 and a substantially improved NPV compared with existing automated methods. We strongly encourage independent validation and the incorporation of features unique to each EHR to enhance algorithm performance for DR cases and controls.
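A minimal sketch of the three-part case definition above, with simplified inputs (sets of dates for DR ICD codes and affirmative health-factor/EPIC codes, plus ophthalmology exam dates). Real EHR extraction, code lists, and timestamp handling are considerably more involved.

```python
# Simplified check of the three DR case criteria described above.
from datetime import date, timedelta

def is_dr_case(dr_icd_dates: set, affirmative_code_dates: set,
               ophtho_exam_dates: set) -> bool:
    """All inputs are sets of datetime.date objects."""
    # (1) DR ICD-9/10 codes documented on two or more distinct dates.
    if len(dr_icd_dates) >= 2:
        return True
    # (2) An affirmative health-factor/EPIC code and a DR ICD code on different days.
    if any(d1 != d2 for d1 in affirmative_code_dates for d2 in dr_icd_dates):
        return True
    # (3) A DR ICD code within ~24 hours of an ophthalmology examination
    #     (exact 24-hour windows would require timestamps rather than dates).
    one_day = timedelta(days=1)
    return any(abs(icd - exam) <= one_day
               for icd in dr_icd_dates for exam in ophtho_exam_dates)

print(is_dr_case({date(2020, 1, 5), date(2021, 3, 2)}, set(), set()))  # True (criterion 1)
```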
Subjects
Algorithms, Diabetic Retinopathy, Electronic Health Records, Humans, Diabetic Retinopathy/diagnosis, Male, Middle Aged, Female, Case-Control Studies, Aged, International Classification of Diseases, Type 2 Diabetes Mellitus/diagnosis, Type 1 Diabetes Mellitus/diagnosis, Adult
ABSTRACT
To enhance market demand and fish utilization, cutting is an essential processing step for fish. Bighead carp were cut into four primary cuts: head, dorsal, belly, and tail, collectively accounting for 77.03% of the fish's total weight. These cuts were refrigerated at 4 °C for 10 days, during which the muscle from each cut was analyzed. Pseudomonas fragi proliferated most rapidly and was most abundant in eye muscle (EM), while Aeromonas sobria showed similar growth patterns in tail muscle (TM). Notably, EM exhibited the highest rate of fat oxidation, and TM experienced the most rapid protein degradation. Furthermore, to facilitate the cutting step in mechanical processing, a machine vision-based algorithm was developed. This algorithm used color thresholding and morphological parameters to segment the image background and delineate the bighead carp region. In summary, each cut of bighead carp had a different storage quality, and the machine vision-based algorithm proved effective for processing bighead carp.
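The sketch below illustrates the kind of color-threshold plus morphology pipeline described above, using OpenCV. The HSV range, kernel size, and file names are placeholders rather than the study's parameters.

```python
# Illustrative color-threshold + morphology segmentation; parameter values are assumed.
import cv2
import numpy as np

def segment_fish(bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of the fish region in a BGR image."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Keep pixels whose hue/saturation/value fall inside an assumed fish-color range.
    mask = cv2.inRange(hsv, (0, 30, 40), (180, 255, 255))
    # Morphological opening then closing removes specks and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest connected component as the fish region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask

img = cv2.imread("bighead_carp.jpg")          # hypothetical input image
if img is not None:
    cv2.imwrite("carp_mask.png", segment_fish(img))
```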
Subjects
Algorithms, Carps, Food Storage, Seafood, Carps/growth & development, Animals, Seafood/analysis, Pseudomonas/growth & development, Aeromonas/growth & development
ABSTRACT
While artificial intelligence (AI) has the potential to transform the field of diagnostic radiology, important obstacles still inhibit its integration into clinical environments. Foremost among them is the inability to integrate clinical information and prior and concurrent imaging examinations, which can lead to diagnostic errors that could irreversibly alter patient care. For AI to succeed in modern clinical practice, model training and algorithm development need to account for relevant background information that may influence the presentation of the patient in question. While AI is often remarkably accurate in distinguishing binary outcomes (hemorrhage vs. no hemorrhage; fracture vs. no fracture), the narrow scope of current training datasets prevents AI from examining the entire clinical context of the image in question. In this article, we provide an overview of the ways in which failure to account for clinical data and prior imaging can adversely affect AI interpretation of imaging studies. We then showcase how emerging techniques such as multimodal fusion and combined neural networks can take advantage of both clinical and imaging data, and how development strategies such as domain adaptation can ensure greater generalizability of AI algorithms across diverse and dynamic clinical environments.
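As a conceptual sketch of the multimodal fusion idea mentioned above, the toy PyTorch model below concatenates CNN-derived image features with a small vector of clinical variables before classification. The architecture and dimensions are illustrative assumptions, not a model from the article.

```python
# Toy multimodal fusion model: image features + clinical variables -> classifier.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, n_clinical: int = 8, n_classes: int = 2):
        super().__init__()
        self.image_encoder = nn.Sequential(          # stand-in for a pretrained CNN backbone
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 16)
        )
        self.clinical_encoder = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.classifier = nn.Linear(16 + 16, n_classes)

    def forward(self, image, clinical):
        fused = torch.cat([self.image_encoder(image),
                           self.clinical_encoder(clinical)], dim=1)
        return self.classifier(fused)

model = FusionModel()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 8))
print(logits.shape)   # torch.Size([4, 2])
```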
ABSTRACT
Light-sheet fluorescence microscopy (LSFM), a prominent fluorescence microscopy technique, offers enhanced temporal resolution for imaging biological samples in four dimensions (4D; x, y, z, time). Some of the most recent implementations, including inverted selective plane illumination microscopy (iSPIM) and lattice light-sheet microscopy (LLSM), rely on tilting the sample plane by 30-45 degrees with respect to the light sheet to ease sample preparation. Data from such tilted-sample-plane LSFMs require subsequent deskewing and rotation for proper visualization and analysis. Such transformations currently demand substantial memory allocation, which poses computational challenges, especially with large datasets. The consequence is long processing times compared to data acquisition times, which currently limits the ability to view the data live as it is being captured by the microscope. To enable the fast preprocessing of large light-sheet microscopy datasets without significant hardware demand, we have developed WH-Transform, a novel GPU-accelerated, memory-efficient algorithm that integrates deskewing and rotation into a single transformation, significantly reducing memory requirements and reducing the preprocessing runtime by at least 10-fold for large image stacks. Benchmarked against conventional methods and existing software, our approach demonstrates linear scalability. Processing large 3D stacks of up to 15 GB is now possible within one minute using a single GPU with 24 GB of memory. Applied to 4D LLSM datasets of human hepatocytes, human lung organoid tissue, and human brain organoid tissue, our method outperforms alternatives, providing rapid, accurate preprocessing within seconds. Importantly, such processing speeds now allow visualization of the raw microscope data stream in real time, significantly improving the usability of LLSM in biology. In summary, this advancement holds transformative potential for light-sheet microscopy, enabling real-time, on-the-fly data processing, visualization, and analysis on standard workstations, thereby revolutionizing biological imaging applications for LLSM, SPIM, and similar light microscopes.
ABSTRACT
Coronavirus disease 2019 (COVID-19), the disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus, has had extensive economic, social, and public health impacts in the United States and around the world. To date, there have been more than 600 million reported infections worldwide, with more than 6 million reported deaths. Retrospective analysis, which identified comorbidities, risk factors, and treatments, has underpinned the response. As the situation transitions to an endemic phase, retrospective analyses using electronic health records will be important to identify the long-term effects of COVID-19. However, these analyses can be complicated by incomplete records, which makes it difficult to differentiate the visits at which the patient had COVID-19. To address this issue, we trained a random forest classifier to assign a probability of a patient having been diagnosed with COVID-19 during each visit. Using these probabilities, we found that higher COVID-19 probabilities were associated with a future diagnosis of myocardial infarction, urinary tract infection, acute renal failure, and type 2 diabetes.
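A hedged sketch of the classification step described above: a random forest assigns each visit a probability of being a COVID-19 visit. The features, labels, and column names are synthetic stand-ins for the EHR variables actually used.

```python
# Illustrative sketch: per-visit COVID-19 probability from a random forest;
# all columns and labels below are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
visits = pd.DataFrame({
    "fever": rng.integers(0, 2, n),
    "positive_pcr_documented": rng.integers(0, 2, n),
    "covid_icd_code": rng.integers(0, 2, n),
    "oxygen_saturation": rng.normal(96, 3, n),
})
# Imperfectly documented "true" label, to mimic incomplete records.
label = ((visits["positive_pcr_documented"] | visits["covid_icd_code"]).astype(bool)
         & (rng.random(n) > 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(visits, label, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
visits["covid_probability"] = rf.predict_proba(visits)[:, 1]   # per-visit probability
```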
ABSTRACT
Background: Continuous electrocardiographic (ECG) monitoring is used to identify ventricular tachycardia (VT), but false alarms occur frequently. Objective: The purpose of this study was to compare the rate of 30-day in-hospital mortality associated with VT alerts generated from bedside ECG monitors with that associated with alerts from a new algorithm among intensive care unit (ICU) patients. Methods: We conducted a retrospective cohort study in consecutive adult ICU patients at an urban academic medical center and compared current bedside monitor VT alerts, VT alerts from a new unannotated algorithm, and true (annotated) VT. We used survival analysis to explore the association between VT alerts and mortality. Results: We included 5679 ICU admissions (mean age 58 ± 17 years; 48% women), of which 503 (8.9%) experienced 30-day in-hospital mortality. A total of 30.1% had at least 1 current bedside monitor VT alert, 14.3% had a VT alert from the new unannotated algorithm, and 11.6% had true (annotated) VT. A bedside monitor VT alert was not associated with an increased rate of 30-day mortality (adjusted hazard ratio [aHR] 1.06; 95% confidence interval [CI] 0.88-1.27), but there was an association for VT alerts from our new unannotated algorithm (aHR 1.38; 95% CI 1.12-1.69) and for true (annotated) VT (aHR 1.39; 95% CI 1.12-1.73). Conclusion: VT alerts from the new unannotated algorithm and true (annotated) VT were associated with an increased rate of 30-day in-hospital mortality, whereas current bedside monitor VT alerts were not. Our new algorithm may accurately identify high-risk VT; however, prospective validation is needed.
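The sketch below shows the general form of such a survival analysis: a Cox proportional hazards model (via lifelines) estimating the adjusted hazard ratio for 30-day in-hospital mortality associated with a VT alert. The data and covariates are synthetic placeholders, not the study's cohort or adjustment set.

```python
# Illustrative Cox proportional hazards analysis with synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 1000
df = pd.DataFrame({
    "days_to_event": rng.integers(1, 31, n),       # follow-up capped at 30 days
    "died_in_hospital": rng.integers(0, 2, n),     # 1 = 30-day in-hospital death
    "vt_alert": rng.integers(0, 2, n),             # alert from the new algorithm
    "age": rng.normal(58, 17, n),                  # example adjustment covariates
    "female": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_event", event_col="died_in_hospital")
print(cph.hazard_ratios_["vt_alert"])              # adjusted HR for the VT alert
```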