Results 1 - 20 of 106
1.
J Biopharm Stat ; : 1-19, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889012

ABSTRACT

BACKGROUND: Positive and negative likelihood ratios (PLR and NLR) are important metrics of accuracy for diagnostic devices with a binary output. However, the properties of Bayesian and frequentist interval estimators of PLR/NLR have not been extensively studied and compared. In this study, we explore the potential use of the Bayesian method for interval estimation of PLR/NLR and, more broadly, for interval estimation of the ratio of two independent proportions. METHODS: We develop a Bayesian-based approach for interval estimation of PLR/NLR for use as a part of a diagnostic device performance evaluation. Our approach is applicable to a broader setting for interval estimation of any ratio of two independent proportions. We compare the score and Bayesian interval estimators for the ratio of two proportions in terms of coverage probability (CP) and expected interval width (EW) via extensive experiments and applications to two case studies. A supplementary experiment was also conducted to assess the performance of the proposed exact Bayesian method under different priors. RESULTS: Our experimental results show that the overall mean CP for Bayesian interval estimation is consistent with that for the score method (0.950 vs. 0.952), and the overall mean EW for the Bayesian method is shorter than that for the score method (15.929 vs. 19.724). Application to two case studies showed that the intervals estimated using the Bayesian and frequentist approaches are very similar. DISCUSSION: Our numerical results indicate that the proposed Bayesian approach has CP performance comparable to the score method while yielding higher precision (i.e., a shorter EW).
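The Bayesian interval estimation described in this abstract can be illustrated with a minimal Monte Carlo sketch. This is an illustration of the general idea, not the paper's exact method: the Jeffreys Beta(0.5, 0.5) prior, the sampling-based interval, and the function name `plr_credible_interval` are all assumptions.

```python
import numpy as np

def plr_credible_interval(tp, fn, fp, tn, alpha=0.05, n_draws=100_000, seed=0):
    """Monte Carlo credible interval for the positive likelihood ratio
    PLR = sensitivity / (1 - specificity), using independent Jeffreys
    Beta(0.5, 0.5) posteriors for the two underlying proportions."""
    rng = np.random.default_rng(seed)
    # Posterior draws for sensitivity (diseased) and specificity (non-diseased)
    sens = rng.beta(tp + 0.5, fn + 0.5, n_draws)
    spec = rng.beta(tn + 0.5, fp + 0.5, n_draws)
    plr = sens / (1.0 - spec)
    lo, hi = np.quantile(plr, [alpha / 2, 1 - alpha / 2])
    point = (tp / (tp + fn)) / (fp / (fp + tn))  # plug-in point estimate
    return point, lo, hi

point, lo, hi = plr_credible_interval(tp=90, fn=10, fp=20, tn=80)
```

Because the posterior draws for the two proportions are independent, the same sketch applies to any ratio of two independent proportions, including the NLR.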

2.
BJR Artif Intell ; 1(1): ubae006, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38828430

ABSTRACT

Innovation in medical imaging artificial intelligence (AI)/machine learning (ML) demands extensive data collection, algorithmic advancements, and rigorous performance assessments encompassing aspects such as generalizability, uncertainty, bias, fairness, trustworthiness, and interpretability. Achieving widespread integration of AI/ML algorithms into diverse clinical tasks will demand a steadfast commitment to overcoming issues in model design, development, and performance assessment. The complexities of AI/ML clinical translation present substantial challenges, requiring engagement with relevant stakeholders, assessment of cost-effectiveness for user and patient benefit, timely dissemination of information relevant to robust functioning throughout the AI/ML lifecycle, consideration of regulatory compliance, and feedback loops for real-world performance evidence. This commentary addresses several hurdles for the development and adoption of AI/ML technologies in medical imaging. Comprehensive attention to these underlying and often subtle factors is critical not only for tackling the challenges but also for exploring novel opportunities for the advancement of AI in radiology.

3.
JCO Precis Oncol ; 8: e2300687, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38635935

ABSTRACT

Radiomics, the science of extracting quantifiable data from routine medical images, is a powerful tool that has many potential applications in oncology. The Response Evaluation Criteria in Solid Tumors Working Group (RWG) held a workshop in May 2022, which brought together various stakeholders to discuss the potential role of radiomics in oncology drug development and clinical trials, particularly with respect to response assessment. This article summarizes the results of that workshop, reviewing radiomics for the practicing oncologist and highlighting the work that needs to be done to move forward the incorporation of radiomics into clinical trials.


Subject(s)
Neoplasms, Precision Medicine, Humans, Precision Medicine/methods, Response Evaluation Criteria in Solid Tumors, Radiomics, Medical Oncology, Neoplasms/diagnostic imaging, Neoplasms/drug therapy
4.
Gastrointest Endosc ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38639679

ABSTRACT

BACKGROUND AND AIMS: The American Society for Gastrointestinal Endoscopy (ASGE) AI Task Force, along with experts in endoscopy, the technology sector, regulatory authorities, and other medical subspecialties, initiated a consensus process that analyzed the current literature, highlighted potential areas, and outlined the necessary research in artificial intelligence (AI) to allow a clearer understanding of AI as it currently pertains to endoscopy. METHODS: A modified Delphi process was used to develop these consensus statements. RESULTS: Statement 1: Current advances in AI allow for the development of AI-based algorithms that can be applied to endoscopy to augment endoscopist performance in detection and characterization of endoscopic lesions. Statement 2: Computer vision-based algorithms provide opportunities to redefine quality metrics in endoscopy using AI, which can be standardized and can reduce subjectivity in reporting quality metrics. Natural language processing-based algorithms can help with the data abstraction needed for reporting current quality metrics in GI endoscopy effortlessly. Statement 3: AI technologies can support smart endoscopy suites, which may help optimize workflows in the endoscopy suite, including automated documentation. Statement 4: Using AI and machine learning helps in predictive modeling, diagnosis, and prognostication. High-quality data with multidimensionality are needed for risk prediction, prognostication of specific clinical conditions, and their outcomes when using machine learning methods. Statement 5: Big data and cloud-based tools can help advance clinical research in gastroenterology. Multimodal data are key to understanding the maximal extent of the disease state and unlocking treatment options. Statement 6: Understanding how to evaluate AI algorithms in the gastroenterology literature and clinical trials is important for gastroenterologists, trainees, and researchers, and hence education efforts by GI societies are needed.
Statement 7: Several challenges regarding integrating AI solutions into the clinical practice of endoscopy exist, including understanding the role of human-AI interaction. Transparency, interpretability, and explainability of AI algorithms play a key role in their clinical adoption in GI endoscopy. Developing appropriate AI governance, data procurement, and tools needed for the AI lifecycle are critical for the successful implementation of AI into clinical practice. Statement 8: For payment of AI in endoscopy, a thorough evaluation of the potential value proposition for AI systems may help guide purchasing decisions in endoscopy. Reliable cost-effectiveness studies to guide reimbursement are needed. Statement 9: Relevant clinical outcomes and performance metrics for AI in gastroenterology are currently not well defined. To improve the quality and interpretability of research in the field, steps need to be taken to define these evidence standards. Statement 10: A balanced view of AI technologies and active collaboration between the medical technology industry, computer scientists, gastroenterologists, and researchers are critical for the meaningful advancement of AI in gastroenterology. CONCLUSIONS: The consensus process led by the ASGE AI Task Force and experts from various disciplines has shed light on the potential of AI in endoscopy and gastroenterology. AI-based algorithms have shown promise in augmenting endoscopist performance, redefining quality metrics, optimizing workflows, and aiding in predictive modeling and diagnosis. However, challenges remain in evaluating AI algorithms, ensuring transparency and interpretability, addressing governance and data procurement, determining payment models, defining relevant clinical outcomes, and fostering collaboration between stakeholders. Addressing these challenges while maintaining a balanced perspective is crucial for the meaningful advancement of AI in gastroenterology.

5.
BJR Artif Intell ; 1(1): ubae003, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38476957

ABSTRACT

The adoption of artificial intelligence (AI) tools in medicine poses challenges to existing clinical workflows. This commentary discusses the necessity of context-specific quality assurance (QA), emphasizing the need for robust QA measures with quality control (QC) procedures that encompass (1) acceptance testing (AT) before clinical use, (2) continuous QC monitoring, and (3) adequate user training. The discussion also covers essential components of AT and QA, illustrated with real-world examples. We also highlight what we see as the shared responsibility of manufacturers or vendors, regulators, healthcare systems, medical physicists, and clinicians to enact appropriate testing and oversight to ensure a safe and equitable transformation of medicine through AI.

6.
J Med Imaging (Bellingham) ; 11(1): 017502, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38370423

ABSTRACT

Purpose: Endometrial cancer (EC) is the most common gynecologic malignancy in the United States, and atypical endometrial hyperplasia (AEH) is considered a high-risk precursor to EC. Hormone therapies and hysterectomy are practical treatment options for AEH and early-stage EC. Some patients prefer hormone therapies for reasons such as fertility preservation or being poor surgical candidates. However, accurate prediction of an individual patient's response to hormonal treatment would allow for personalized and potentially improved recommendations for these conditions. This study aims to explore the feasibility of using deep learning models on whole slide images (WSI) of endometrial tissue samples to predict the patient's response to hormonal treatment. Approach: We curated a clinical WSI dataset of 112 patients from two clinical sites. An expert pathologist annotated these images by outlining AEH/EC regions. We developed an end-to-end machine learning model with mixed supervision. The model is based on image patches extracted from pathologist-annotated AEH/EC regions. Either an unsupervised deep learning architecture (Autoencoder or ResNet50), or non-deep learning (radiomics feature extraction) is used to embed the images into a low-dimensional space, followed by fully connected layers for binary prediction, which was trained with binary responder/non-responder labels established by pathologists. We used stratified sampling to partition the dataset into a development set and a test set for internal validation of the performance of our models. Results: The autoencoder model yielded an AUROC of 0.80 with 95% CI [0.63, 0.95] on the independent test set for the task of predicting a patient with AEH/EC as a responder vs non-responder to hormonal treatment. Conclusions: These findings demonstrate the potential of using mixed supervised machine learning models on WSIs for predicting the response to hormonal treatment in AEH/EC patients.
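The abstract above reports an AUROC of 0.80 with a 95% CI on an independent test set. A minimal sketch of how such a figure can be computed is shown below; the rank-sum AUROC formula is standard, but the percentile-bootstrap CI is only one of several interval methods and is an assumption here, as are the function names.

```python
import numpy as np

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney) formulation, with tie handling."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Compare every positive score against every negative score; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUROC."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    n, stats = len(labels), []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # a resample must contain both classes
        stats.append(auroc(labels[idx], scores[idx]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

For a small test set such as the 112-patient cohort described here, the bootstrap interval will be wide, consistent with the [0.63, 0.95] CI the authors report.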

7.
J Med Imaging (Bellingham) ; 11(1): 014501, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38283653

ABSTRACT

Purpose: Understanding an artificial intelligence (AI) model's ability to generalize to its target population is critical to ensuring the safe and effective usage of AI in medical devices. A traditional generalizability assessment relies on the availability of large, diverse datasets, which are difficult to obtain in many medical imaging applications. We present an approach for enhanced generalizability assessment by examining the decision space beyond the available testing data distribution. Approach: Vicinal distributions of virtual samples are generated by interpolating between triplets of test images. The generated virtual samples leverage the characteristics already in the test set, increasing the sample diversity while remaining close to the AI model's data manifold. We demonstrate the generalizability assessment approach on the non-clinical tasks of classifying patient sex, race, COVID status, and age group from chest x-rays. Results: Decision region composition analysis for generalizability indicated that a disproportionately large portion of the decision space belonged to a single "preferred" class for each task, despite comparable performance on the evaluation dataset. Evaluation using cross-reactivity and population shift strategies indicated a tendency to overpredict samples as belonging to the preferred class (e.g., COVID negative) for patients whose subgroup was not represented in the model development data. Conclusions: An analysis of an AI model's decision space has the potential to provide insight into model generalizability. Our approach uses the analysis of composition of the decision space to obtain an improved assessment of model generalizability in the case of limited test data.
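The vicinal-distribution idea above, generating virtual samples by interpolating between triplets of test images, can be sketched as follows. This is a simplified illustration, not the authors' implementation: the Dirichlet mixing weights and the function name are assumptions.

```python
import numpy as np

def vicinal_samples(images, n_virtual=100, concentration=1.0, seed=0):
    """Generate virtual samples as convex combinations of random triplets of
    test images, with mixing weights drawn from a Dirichlet distribution, so
    each virtual sample stays close to the original data manifold."""
    rng = np.random.default_rng(seed)
    images = np.asarray(images, dtype=float)   # shape (n, H, W)
    n = len(images)
    virtual = np.empty((n_virtual,) + images.shape[1:])
    for i in range(n_virtual):
        a, b, c = rng.choice(n, size=3, replace=False)  # pick a triplet
        w = rng.dirichlet([concentration] * 3)          # weights sum to 1
        virtual[i] = w[0] * images[a] + w[1] * images[b] + w[2] * images[c]
    return virtual
```

Feeding these virtual samples through a trained classifier and tallying the predicted classes gives the decision-region composition that the abstract uses to probe generalizability.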

9.
Clin Pharmacol Ther ; 115(4): 745-757, 2024 04.
Article in English | MEDLINE | ID: mdl-37965805

ABSTRACT

In 2020, Novartis Pharmaceuticals Corporation and the U.S. Food and Drug Administration (FDA) started a 4-year scientific collaboration to approach complex new data modalities and advanced analytics. The scientific question was to find novel radio-genomics-based prognostic and predictive factors for HR+/HER- metastatic breast cancer under a Research Collaboration Agreement. This collaboration has been providing valuable insights to help successfully implement future scientific projects, particularly using artificial intelligence and machine learning. This tutorial aims to provide tangible guidelines for a multi-omics project that includes multidisciplinary expert teams, spanning across different institutions. We cover key ideas, such as "maintaining effective communication" and "following good data science practices," followed by the four steps of exploratory projects, namely (1) plan, (2) design, (3) develop, and (4) disseminate. We break each step into smaller concepts with strategies for implementation and provide illustrations from our collaboration to further give the readers actionable guidance.


Subject(s)
Artificial Intelligence, Multiomics, Humans, Machine Learning, Genomics
10.
3D Print Med ; 9(1): 32, 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37978094

ABSTRACT

BACKGROUND: Bone health and fracture risk are known to be correlated with stiffness. Both micro-finite element analysis (µFEA) and mechanical testing of additive manufactured phantoms are useful approaches for estimating mechanical properties of trabecular bone-like structures. However, it is unclear if measurements from the two approaches are consistent. The purpose of this work is to evaluate the agreement between stiffness measurements obtained from mechanical testing of additive manufactured trabecular bone phantoms and µFEA modeling. Agreement between the two methods would suggest 3D printing is a viable method for validation of µFEA modeling. METHODS: A set of 20 lumbar vertebrae regions of interests were segmented and the corresponding trabecular bone phantoms were produced using selective laser sintering. The phantoms were mechanically tested in uniaxial compression to derive their stiffness values. The stiffness values were also derived from in silico simulation, where linear elastic µFEA was applied to simulate the same compression and boundary conditions. Bland-Altman analysis was used to evaluate agreement between the mechanical testing and µFEA simulation values. Additionally, we evaluated the fidelity of the 3D printed phantoms as well as the repeatability of the 3D printing and mechanical testing process. RESULTS: We observed good agreement between the mechanically tested stiffness and µFEA stiffness, with R2 of 0.84 and normalized root mean square deviation of 8.1%. We demonstrate that the overall trabecular bone structures are printed in high fidelity (Dice score of 0.97 (95% CI, [0.96,0.98]) and that mechanical testing is repeatable (coefficient of variation less than 5% for stiffness values from testing of duplicated phantoms). However, we noticed some defects in the resin microstructure of the 3D printed phantoms, which may account for the discrepancy between the stiffness values from simulation and mechanical testing. 
CONCLUSION: Overall, the level of agreement achieved between the mechanical stiffness and µFEA indicates that our µFEA methods may be acceptable for assessing bone mechanics of complex trabecular structures as part of an analysis of overall bone health.
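The two agreement statistics used in this study, Bland-Altman limits of agreement and a normalized root mean square deviation, can be computed with a short sketch (function names are mine; the normalization by the reference range is an assumption, since other normalizations are also common):

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two measurement
    methods (e.g., mechanically tested vs. µFEA-simulated stiffness)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                    # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def nrmsd(a, b):
    """Root mean square deviation normalized by the range of the reference."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rmsd = np.sqrt(np.mean((a - b) ** 2))
    return rmsd / (a.max() - a.min())
```

Good agreement corresponds to a bias near zero with narrow limits of agreement, and a small NRMSD such as the 8.1% reported above.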

11.
J Med Imaging (Bellingham) ; 10(5): 051804, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37361549

ABSTRACT

Purpose: To introduce developers to medical device regulatory processes and data considerations in artificial intelligence and machine learning (AI/ML) device submissions and to discuss ongoing AI/ML-related regulatory challenges and activities. Approach: AI/ML technologies are being used in an increasing number of medical imaging devices, and the fast evolution of these technologies presents novel regulatory challenges. We provide AI/ML developers with an introduction to U.S. Food and Drug Administration (FDA) regulatory concepts, processes, and fundamental assessments for a wide range of medical imaging AI/ML device types. Results: The device type for an AI/ML device and appropriate premarket regulatory pathway is based on the level of risk associated with the device and informed by both its technological characteristics and intended use. AI/ML device submissions contain a wide array of information and testing to facilitate the review process with the model description, data, nonclinical testing, and multi-reader multi-case testing being critical aspects of the AI/ML device review process for many AI/ML device submissions. The agency is also involved in AI/ML-related activities that support guidance document development, good machine learning practice development, AI/ML transparency, AI/ML regulatory research, and real-world performance assessment. Conclusion: FDA's AI/ML regulatory and scientific efforts support the joint goals of ensuring patients have access to safe and effective AI/ML devices over the entire device lifecycle and stimulating medical AI/ML innovation.

12.
Article in English | MEDLINE | ID: mdl-37159719

ABSTRACT

Endometrial cancer (EC) is the most common gynecologic malignancy in the US, and complex atypical hyperplasia (CAH) is considered a high-risk precursor to EC. Treatment options for CAH and early-stage EC include hormone therapies and hysterectomy, with the former preferred by certain patients, e.g., for fertility preservation or because they are poor surgical candidates. Accurate prediction of response to hormonal treatment would allow for personalized and potentially improved recommendations for the treatment of these conditions. In this study, we investigate the feasibility of utilizing weakly supervised deep learning models on whole slide images of endometrial tissue samples for the prediction of patient response to hormonal treatment. We curated a clinical whole-slide-image (WSI) dataset of 112 patients from two clinical sites. We developed an end-to-end machine learning model using WSIs of endometrial specimens for the prediction of hormonal treatment response among women with CAH/EC. The model takes patches extracted from pathologist-annotated CAH/EC regions as input and utilizes an unsupervised deep learning architecture (Autoencoder or ResNet50) to embed the images into a low-dimensional space, followed by fully connected layers for binary prediction. Our autoencoder model yielded an AUC of 0.79 with 95% CI [0.61, 0.98] on a hold-out test set in the task of predicting a patient with CAH/EC as a responder vs non-responder to hormonal treatment. Our results demonstrate the potential for using weakly supervised machine learning models on WSIs for predicting response to hormonal treatment of CAH/EC patients.

13.
Br J Radiol ; 96(1150): 20220878, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36971405

ABSTRACT

Data drift refers to differences between the data used in training a machine learning (ML) model and that applied to the model in real-world operation. Medical ML systems can be exposed to various forms of data drift, including differences between the data sampled for training and used in clinical operation, differences between medical practices or context of use between training and clinical use, and time-related changes in patient populations, disease patterns, and data acquisition, to name a few. In this article, we first review the terminology used in ML literature related to data drift, define distinct types of drift, and discuss in detail potential causes within the context of medical applications with an emphasis on medical imaging. We then review the recent literature regarding the effects of data drift on medical ML systems, which overwhelmingly show that data drift can be a major cause for performance deterioration. We then discuss methods for monitoring data drift and mitigating its effects with an emphasis on pre- and post-deployment techniques. Some of the potential methods for drift detection and issues around model retraining when drift is detected are included. Based on our review, we find that data drift is a major concern in medical ML deployment and that more research is needed so that ML models can identify drift early, incorporate effective mitigation strategies and resist performance decay.
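One widely used pre-/post-deployment drift monitor of the kind surveyed above is the population stability index (PSI) over a model input or output. The review does not prescribe this particular statistic; it is shown here as a representative example, with conventional (rule-of-thumb) thresholds.

```python
import numpy as np

def population_stability_index(reference, current, n_bins=10):
    """PSI between a training-time reference distribution of a feature (or
    model score) and the distribution observed in deployment. Rule of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    # Decile bin edges from the reference, widened to cover the full real line
    edges = np.quantile(reference, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    eps = 1e-6                                # avoid log(0) / division by zero
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))
```

Tracking PSI over time on incoming clinical data gives an early, label-free signal that the deployed distribution has moved away from training conditions, which can trigger closer review or retraining.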


Subject(s)
Machine Learning, Medical Informatics Computing
14.
JAMA Netw Open ; 6(2): e230524, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36821110

ABSTRACT

Importance: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. Objectives: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. Design, Setting, and Participants: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. Main Outcomes and Measures: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. Results: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. Conclusions and Relevance: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. 
A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
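Lesion-level sensitivity, the headline metric of the challenge above, can be sketched in simplified form: a ground-truth lesion counts as detected if some predicted mark falls within a distance threshold of its center. The actual challenge scoring (confidence thresholds, tie-breaking over all volumes) is more involved; this sketch, including the threshold and function name, is illustrative only.

```python
import numpy as np

def lesion_sensitivity(lesions, detections, dist_thresh=10.0):
    """Fraction of ground-truth lesions hit by at least one detection whose
    center lies within dist_thresh (e.g., mm) of the lesion center."""
    lesions = np.asarray(lesions, float)        # (L, 3) lesion centers
    detections = np.asarray(detections, float)  # (D, 3) predicted centers
    if len(lesions) == 0:
        return 1.0   # nothing to find
    if len(detections) == 0:
        return 0.0   # everything missed
    # Pairwise Euclidean distances, lesions x detections
    d = np.linalg.norm(lesions[:, None, :] - detections[None, :, :], axis=-1)
    hit = (d <= dist_thresh).any(axis=1)
    return float(hit.mean())
```

Averaging this quantity over the biopsied-lesion volumes yields a mean sensitivity of the kind reported for the winning teams (0.957 and 0.926).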


Subject(s)
Artificial Intelligence, Breast Neoplasms, Humans, Female, Benchmarking, Mammography/methods, Algorithms, Radiographic Image Interpretation, Computer-Assisted/methods, Breast Neoplasms/diagnostic imaging
15.
Med Phys ; 50(7): 4255-4268, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36630691

ABSTRACT

PURPOSE: Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly-annotated - produced for use by humans rather than machines and lacking information machine learning depends upon - this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS: Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans, and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS: We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe using an independent expert-annotated dataset. We demonstrate that when availability of expert annotations is severely limited, the inclusion of weakly-labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates. 
CONCLUSIONS: Our proposed approach can effectively merge a weakly-annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
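The cross-checking stage in the second step above, comparing network-generated annotations against the weak clinical annotations to keep only agreeing pseudo-labels, can be sketched with a simple IoU-based filter. The agreement criterion, threshold, and function names here are assumptions, not the paper's exact procedure.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cross_check(network_boxes, weak_boxes, iou_thresh=0.5):
    """Keep network-generated pseudo-labels that agree (IoU above threshold)
    with at least one weak clinical annotation; disagreeing ones are dropped
    rather than fed into training."""
    return [nb for nb in network_boxes
            if any(box_iou(nb, wb) >= iou_thresh for wb in weak_boxes)]
```

The surviving higher-fidelity pseudo-labels are then pooled with the small expert-annotated set to enlarge the training data, as the abstract describes.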


Subject(s)
Algorithms, Tomography, X-Ray Computed, Humans, Machine Learning, Supervised Machine Learning, Image Processing, Computer-Assisted/methods
16.
Med Phys ; 50(2): e1-e24, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36565447

ABSTRACT

Rapid advances in artificial intelligence (AI) and machine learning, and specifically in deep learning (DL) techniques, have enabled broad application of these methods in health care. The promise of the DL approach has spurred further interest in computer-aided diagnosis (CAD) development and applications using both "traditional" machine learning methods and newer DL-based methods. We use the term CAD-AI to refer to this expanded clinical decision support environment that uses traditional and DL-based AI methods. Numerous studies have been published to date on the development of machine learning tools for computer-aided, or AI-assisted, clinical tasks. However, most of these machine learning models are not ready for clinical deployment. It is of paramount importance to ensure that a clinical decision support tool undergoes proper training and rigorous validation of its generalizability and robustness before adoption for patient care in the clinic. To address these important issues, the American Association of Physicists in Medicine (AAPM) Computer-Aided Image Analysis Subcommittee (CADSC) is charged, in part, to develop recommendations on practices and standards for the development and performance assessment of computer-aided decision support systems. The committee has previously published two opinion papers on the evaluation of CAD systems and issues associated with user training and quality assurance of these systems in the clinic. With machine learning techniques continuing to evolve and CAD applications expanding to new stages of the patient care process, the current task group report considers the broader issues common to the development of most, if not all, CAD-AI applications and their translation from the bench to the clinic. The goal is to bring attention to the proper training and validation of machine learning algorithms that may improve their generalizability and reliability and accelerate the adoption of CAD-AI systems for clinical decision support.


Subject(s)
Artificial Intelligence, Diagnosis, Computer-Assisted, Humans, Reproducibility of Results, Diagnosis, Computer-Assisted/methods, Diagnostic Imaging, Machine Learning
17.
BMC Bioinformatics ; 23(1): 544, 2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36526957

ABSTRACT

BACKGROUND: The Basic Local Alignment Search Tool (BLAST) is a suite of commonly used algorithms for identifying matches between biological sequences. The user supplies a database file and query file of sequences for BLAST to find identical sequences between the two. The typical millions of database and query sequences make BLAST computationally challenging but also well suited for parallelization on high-performance computing clusters. The efficacy of parallelization depends on the data partitioning, where the optimal data partitioning relies on an accurate performance model. In previous studies, a BLAST job was sped up by 27 times by partitioning the database and query among thousands of processor nodes. However, the optimality of the partitioning method was not studied. Unlike BLAST performance models proposed in the literature that usually have problem size and hardware configuration as the only variables, the execution time of a BLAST job is a function of database size, query size, and hardware capability. In this work, the nucleotide BLAST application BLASTN was profiled using three methods: shell-level profiling with the Unix "time" command, code-level profiling with the built-in "profiler" module, and system-level profiling with the Unix "gprof" program. The runtimes were measured for six node types, using six different database files and 15 query files, on a heterogeneous HPC cluster with 500+ nodes. The empirical measurement data were fitted with quadratic functions to develop performance models that were used to guide the data parallelization for BLASTN jobs. RESULTS: Profiling results showed that BLASTN contains more than 34,500 different functions, but a single function, RunMTBySplitDB, takes 99.12% of the total runtime. Among its 53 child functions, five core functions were identified to make up 92.12% of the overall BLASTN runtime. 
Based on the performance models, static load balancing algorithms can be applied to the BLASTN input data to minimize the runtime of the longest job on an HPC cluster. Four test cases were run on homogeneous and heterogeneous clusters. Experiment results showed that the runtime can be reduced by 81% on a homogeneous cluster and by 20% on a heterogeneous cluster by re-distributing the workload. DISCUSSION: Optimal data partitioning can improve BLASTN's overall runtime 5.4-fold in comparison with dividing the database and query into the same number of fragments. The proposed methodology can be used in other applications in the BLAST+ suite, or in any other application for which source code is available.
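The pipeline above, fitting a runtime model to measured data and then using it for static load balancing, can be sketched in two pieces. The quadratic model form matches the abstract's curve fitting; the least-squares fit, the longest-processing-time-first heuristic, and the function names are my assumptions about one reasonable realization.

```python
import numpy as np

def fit_runtime_model(db_sizes, query_sizes, runtimes):
    """Least-squares fit of a quadratic runtime model
    t = c0 + c1*d + c2*q + c3*d*q + c4*d^2 + c5*q^2
    from measured (database size, query size, runtime) triples."""
    d, q, t = map(np.asarray, (db_sizes, query_sizes, runtimes))
    X = np.column_stack([np.ones_like(d), d, q, d * q, d ** 2, q ** 2])
    coef, *_ = np.linalg.lstsq(X, t, rcond=None)
    return coef

def lpt_assign(predicted_times, n_nodes):
    """Longest-processing-time-first static balancing: assign jobs (largest
    predicted runtime first) to the currently least-loaded node, so the
    longest job finishes as early as possible."""
    loads = [0.0] * n_nodes
    assignment = {}
    for job in sorted(range(len(predicted_times)),
                      key=lambda j: -predicted_times[j]):
        node = min(range(n_nodes), key=loads.__getitem__)
        assignment[job] = node
        loads[node] += predicted_times[job]
    return assignment, max(loads)  # makespan = load of the busiest node
```

On a heterogeneous cluster the predicted times would additionally be scaled per node type before assignment, which is where the per-node-type profiling measurements come in.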


Subject(s)
Computing Methodologies, Software, Algorithms, Computational Biology/methods, Sequence Alignment
19.
Med Phys ; 48(7): 3741-3751, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33932241

ABSTRACT

PURPOSE: Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS: In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS: We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION: Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
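The cross-slice aggregation step above, dilated 1D convolutions over per-slice feature vectors, can be illustrated with a minimal numpy sketch (a single shared kernel with "same" padding; the real model uses learned multi-channel kernels inside a network, so this is only the core operation):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Same'-padded 1D convolution with dilation along the slice axis.
    x: (n_slices, n_features) per-slice feature vectors; kernel: (k,) weights
    shared across features. Larger dilation widens the receptive field across
    slices without adding parameters."""
    n, _ = x.shape
    k = len(kernel)
    span = dilation * (k - 1)          # total reach of the dilated kernel
    pad = span // 2
    xp = np.pad(x, ((pad, span - pad), (0, 0)))
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        # Tap i reads the slice offset by i*dilation positions
        out += kernel[i] * xp[i * dilation : i * dilation + n]
    return out
```

Stacking such layers with increasing dilation lets the model encode information from the entire volume while the expensive 2D feature extraction stays per-slice.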


Subject(s)
Lung Neoplasms, Computer Systems, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnostic imaging, Neural Networks, Computer, Tomography, X-Ray Computed
20.
J Med Imaging (Bellingham) ; 8(3): 034501, 2021 May.
Article in English | MEDLINE | ID: mdl-33987451

ABSTRACT

Purpose: The breast pathology quantitative biomarkers (BreastPathQ) challenge was a grand challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment. Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard. Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms. The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining comparable results to the average interrater PK of 0.927 from the two pathologists providing the reference TC scores. Conclusions: The SPIE-AAPM-NCI BreastPathQ challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ challenge can be accessed on the Grand Challenge website.
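The challenge's ranking metric, prediction probability concordance (PK), is a pairwise ranking statistic. A minimal sketch of one common PK-style formulation is below; the challenge's exact definition (in particular its treatment of reference ties) may differ, so this is illustrative only.

```python
from itertools import combinations

def concordance_pk(reference, predictions):
    """Prediction-probability concordance: over all pairs with distinct
    reference TC values, the fraction of pairs the predictions rank in the
    same order, counting prediction ties as half-concordant."""
    concordant = ties = usable = 0
    for (r1, p1), (r2, p2) in combinations(zip(reference, predictions), 2):
        if r1 == r2:
            continue  # pairs tied in the reference are not scored here
        usable += 1
        if p1 == p2:
            ties += 1
        elif (p1 < p2) == (r1 < r2):
            concordant += 1
    return (concordant + 0.5 * ties) / usable
```

A PK of 1.0 means the algorithm orders every patch pair the same way the pathologists' reference scores do; the interrater PK of 0.927 quoted above sets the practical ceiling for this task.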
