Results 1 - 17 of 17
1.
Article in English | MEDLINE | ID: mdl-39366765

ABSTRACT

BACKGROUND AND PURPOSE: Measurement of the mean upper cervical cord area (MUCCA) is an important biomarker in the study of neurodegeneration. However, dedicated high-resolution scans of the cervical spinal cord are rare in standard-of-care imaging due to timing and clinical usability. Most clinical cervical spinal cord imaging is sagittally acquired in 2D with thick slices and anisotropic voxels. As a solution, previous work describes high-resolution T1-weighted brain imaging for measuring the upper cord area, but this is still not common in clinical care. MATERIALS AND METHODS: We propose using a zero-shot super-resolution technique, SMORE, already validated in the brain, to enhance the resolution of 2D-acquired scans for upper cord area calculations. To incorporate super-resolution in spinal cord analysis, we validate SMORE against high-resolution research imaging and in a real-world longitudinal data analysis. RESULTS: Super-resolved images reconstructed using SMORE showed significantly greater similarity to the ground truth than low-resolution images across all tested resolutions (p<0.001 for all resolutions in PSNR and MSSIM). MUCCA results from super-resolved scans demonstrate excellent correlation with high-resolution scans (r>0.973 for all resolutions) compared to low-resolution scans. Additionally, super-resolved scans are consistent between resolutions (r>0.969), an essential factor in longitudinal analysis. When correlated with clinical outcomes such as walking speed or disease severity, MUCCA values from low-resolution scans have significantly lower correlations than those from high-resolution scans, whereas correlations from super-resolved scans show no significant difference from the high-resolution results. In a longitudinal real-world dataset, we show that these super-resolved volumes can be used in conjunction with T1-weighted brain scans to show a significant rate of atrophy (-0.790, p=0.020 vs. -0.438, p=0.301 with low-resolution). CONCLUSIONS: Super-resolution is a valuable tool for enabling large-scale studies of cord atrophy, as low-resolution images acquired in clinical practice are common and available. ABBREVIATIONS: MS=multiple sclerosis; MUCCA=mean upper cervical cord area; HR=high-resolution; LR=low-resolution; SR=super-resolved; CSC=cervical spinal cord; PMJ=pontomedullary junction; MSSIM=mean structural similarity; PSNR=peak signal-to-noise ratio; EDSS=expanded disability status scale.
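
A minimal sketch of how the similarity comparison reported above could be reproduced for a pair of co-registered volumes, using scikit-image for PSNR and mean SSIM; the function name, normalization, and synthetic arrays are illustrative assumptions, not the authors' pipeline.

```python
# Sketch only: PSNR and mean SSIM between a super-resolved and an HR volume.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare_to_ground_truth(sr_vol, hr_vol):
    """Both volumes are assumed co-registered, identically shaped, and in [0, 1]."""
    data_range = hr_vol.max() - hr_vol.min()
    psnr = peak_signal_noise_ratio(hr_vol, sr_vol, data_range=data_range)
    mssim = structural_similarity(hr_vol, sr_vol, data_range=data_range)
    return {"psnr": psnr, "mssim": mssim}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((64, 64, 64))
    sr = np.clip(hr + 0.01 * rng.standard_normal(hr.shape), 0, 1)  # stand-in for an SR output
    print(compare_to_ground_truth(sr, hr))
```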

2.
Article in English | MEDLINE | ID: mdl-39268202

ABSTRACT

Understanding the way cells communicate, co-locate, and interrelate is essential to understanding human physiology. Hematoxylin and eosin (H&E) staining is ubiquitously available both for clinical studies and research. The Colon Nucleus Identification and Classification (CoNIC) Challenge has recently innovated on robust artificial intelligence labeling of six cell types on H&E stains of the colon. However, this is a very small fraction of the number of potential cell classification types. Specifically, the CoNIC Challenge is unable to classify epithelial subtypes (progenitor, endocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), or connective subtypes (fibroblasts, stromal). In this paper, we propose to use inter-modality learning to label previously un-labelable cell types on virtual H&E. We leveraged multiplexed immunofluorescence (MxIF) histology imaging to identify 14 subclasses of cell types. We performed style transfer to synthesize virtual H&E from MxIF and transferred the higher density labels from MxIF to these virtual H&E images. We then evaluated the efficacy of learning in this approach. We identified helper T and progenitor nuclei with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02) respectively on virtual H&E. This approach represents a promising step towards automating annotation in digital pathology.
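
For readers who want to reproduce the style of evaluation quoted above, here is a hedged sketch of per-class positive predictive value and prevalence computed from matched true/predicted nucleus labels; the class names, matching step, and synthetic data are placeholders, not the study data.

```python
# Illustrative only: per-class PPV and prevalence from matched label arrays.
import numpy as np

def ppv_and_prevalence(y_true, y_pred, cls):
    pred_pos = y_pred == cls
    true_pos = np.logical_and(pred_pos, y_true == cls)
    ppv = true_pos.sum() / max(pred_pos.sum(), 1)   # avoid division by zero
    prevalence = (y_true == cls).mean()
    return ppv, prevalence

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    classes = np.array(["helper T", "progenitor", "goblet", "fibroblast"])
    y_true = rng.choice(classes, size=1000, p=[0.03, 0.07, 0.4, 0.5])
    # toy predictions: correct 70% of the time, otherwise a random class
    y_pred = np.where(rng.random(1000) < 0.7, y_true, rng.choice(classes, 1000))
    print(ppv_and_prevalence(y_true, y_pred, "helper T"))
```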

3.
J Med Imaging (Bellingham) ; 11(6): 067501, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39507410

ABSTRACT

Purpose: Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E. Approach: We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E. Results: On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. For the real H&E, we computed bounded metrics for the helper T cells and epithelial progenitors, with upper-bound positive predictive values of 0.43 ± 0.03 (parent class prevalence 0.21) and 0.94 ± 0.02 (parent class prevalence 0.49) when using ground truth centroid information. Conclusions: This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.

4.
Magn Reson Imaging ; 98: 155-163, 2023 05.
Article in English | MEDLINE | ID: mdl-36702167

ABSTRACT

To reduce scan time, magnetic resonance (MR) images are often acquired using 2D multi-slice protocols with thick slices that may also have gaps between them. The resulting image volumes have lower resolution in the through-plane direction than in the in-plane direction, and the through-plane resolution is in part characterized by the protocol's slice profile which acts as a through-plane point spread function (PSF). Although super-resolution (SR) has been shown to improve the visualization and down-stream processing of 2D multi-slice MR acquisitions, previous algorithms are usually unaware of the true slice profile, which may lead to sub-optimal SR performance. In this work, we present an algorithm to estimate the slice profile of a 2D multi-slice acquisition given only its own image volume without any external training data. We assume that an anatomical image is isotropic in the sense that, after accounting for a correctly estimated slice profile, the image patches along different orientations have the same probability distribution. Our proposed algorithm uses a modified generative adversarial network (GAN) where the generator network estimates the slice profile to reduce the resolution of the in-plane direction, and the discriminator network determines whether a direction is generated or real low resolution. The proposed algorithm, ESPRESO, which stands for "estimating the slice profile for resolution enhancement of a single image only", was tested with a state-of-the-art internally supervised SR algorithm. Specifically, ESPRESO is used to create training data for this SR algorithm, and results show improvements when ESPRESO is used over commonly-used PSFs.
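
The adversarial setup described above can be illustrated with a compact PyTorch sketch, assuming 1D intensity profiles sampled along the in-plane and through-plane axes: a learnable non-negative kernel (the slice-profile estimate) blurs in-plane profiles, and a discriminator tries to distinguish them from real through-plane profiles. This is an illustration of the idea, not the published ESPRESO architecture; all sizes and the toy data are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

KERNEL_LEN, PROFILE_LEN = 21, 64

class SliceProfileGenerator(nn.Module):
    """Holds the slice-profile estimate as a softmax-normalized 1D kernel."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(KERNEL_LEN))

    def kernel(self):
        return F.softmax(self.logits, dim=0)            # non-negative, sums to 1

    def forward(self, in_plane):                         # (B, 1, PROFILE_LEN)
        k = self.kernel().view(1, 1, -1)
        return F.conv1d(in_plane, k, padding=KERNEL_LEN // 2)

discriminator = nn.Sequential(                            # generated LR vs. real LR
    nn.Conv1d(1, 16, 5, stride=2), nn.LeakyReLU(0.2),
    nn.Conv1d(16, 32, 5, stride=2), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 13, 1),                  # 13 = length after two stride-2 convs
)

gen = SliceProfileGenerator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):                                   # toy training loop
    in_plane = torch.randn(8, 1, PROFILE_LEN)             # stand-ins for image rows
    through_plane = torch.randn(8, 1, PROFILE_LEN)        # stand-ins for image columns
    # discriminator update: through-plane profiles are the "real" low resolution
    fake = gen(in_plane).detach()
    loss_d = bce(discriminator(through_plane), torch.ones(8, 1)) + \
             bce(discriminator(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator update: make blurred in-plane profiles look like through-plane ones
    loss_g = bce(discriminator(gen(in_plane)), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("estimated slice profile:", gen.kernel().detach().numpy().round(3))
```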


Subjects
Algorithms; Magnetic Resonance Imaging; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Radionuclide Imaging; Image Processing, Computer-Assisted
5.
Ultrasonics ; 135: 107111, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37598499

ABSTRACT

Many organisms (including certain plant species) can be observed to emit sounds, potentially signifying threat alerts. Sensitivity to such sounds and vibrations may also play an important role in the lives of fungi. In this work, we explore the potential of ultrasound activity in dehydrating fungi, and discover that several species of fungi do not emit sounds (detectable with conventional instrumentation) in the frequency range of 10 kHz to 210 kHz upon dehydration. Over 5 terabytes of ultrasound recordings were collected and analysed. We conjecture that fungi interact via non-sound means, such as electrical or chemical signalling.
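
As a rough illustration of how such recordings could be screened for ultrasonic activity, the sketch below estimates a power spectral density with Welch's method and flags energy in the 10-210 kHz band that exceeds the local noise floor; the sampling rate, threshold, and synthetic traces are assumptions, not the instrumentation used in the study.

```python
import numpy as np
from scipy.signal import welch

FS = 500_000  # Hz; the capture rate must exceed 2 x 210 kHz (Nyquist)

def band_has_activity(trace, fs=FS, f_lo=10e3, f_hi=210e3, snr_db=10.0):
    """Return True if any PSD bin in [f_lo, f_hi] rises snr_db above the band median."""
    freqs, psd = welch(trace, fs=fs, nperseg=4096)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    noise_floor = np.median(psd[band])
    return bool(np.any(psd[band] > noise_floor * 10 ** (snr_db / 10)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    silence = rng.standard_normal(FS)                    # one second of noise
    t = np.arange(FS) / FS
    tone = silence + 5 * np.sin(2 * np.pi * 80e3 * t)    # synthetic 80 kHz emission
    print(band_has_activity(silence), band_has_activity(tone))
```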


Subjects
Sound; Vibration; Fungi; Ultrasonography
6.
Neurobiol Aging ; 124: 85-97, 2023 04.
Article in English | MEDLINE | ID: mdl-36446680

ABSTRACT

Enlarged perivascular spaces (ePVS) are difficult to quantify, and their etiologies and consequences are poorly understood. Vanderbilt Memory and Aging Project participants (n = 327, 73 ± 7 years) completed 3T brain MRI to quantify ePVS volume and count, longitudinal neuropsychological assessment, and cardiac MRI to quantify aortic stiffness (pulse wave velocity, PWV). Linear regressions related (1) PWV to ePVS burden and (2) ePVS burden to cross-sectional and longitudinal neuropsychological performance, adjusting for key demographic and medical factors. Higher aortic stiffness related to greater basal ganglia ePVS volume (β = 7.0×10⁻⁵, p = 0.04). Higher baseline ePVS volume was associated with worse baseline information processing (β = -974, p = 0.003), executive function (β = -81.9, p < 0.001), and visuospatial performances (β = -192, p = 0.02), and with worse longitudinal language (β = -54.9, p = 0.05), information processing (β = -147, p = 0.03), executive function (β = -10.9, p = 0.03), and episodic memory performances (β = -10.6, p = 0.02). Results were similar for ePVS count. Greater arterial stiffness relates to worse basal ganglia ePVS burden, suggesting cardiovascular aging as an etiology. ePVS burden is associated with adverse cognitive trajectory, emphasizing the clinical relevance of ePVS.
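
The adjusted regressions described above can be illustrated with a small statsmodels sketch; the covariates, column names, and synthetic data are placeholders and do not reflect the study's actual model specification.

```python
# Illustrative only: an adjusted linear model relating PWV to ePVS volume.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 327
df = pd.DataFrame({
    "pwv": rng.normal(9, 2, n),            # m/s, pulse wave velocity (stand-in)
    "age": rng.normal(73, 7, n),
    "sex": rng.integers(0, 2, n),
    "hypertension": rng.integers(0, 2, n),
})
# synthetic outcome with a small PWV effect, for demonstration only
df["epvs_volume"] = 7e-5 * df["pwv"] + 1e-3 * df["age"] + rng.normal(0, 0.01, n)

model = smf.ols("epvs_volume ~ pwv + age + sex + hypertension", data=df).fit()
print(model.params["pwv"], model.pvalues["pwv"])
```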


Subjects
Glymphatic System; Vascular Stiffness; Humans; Cross-Sectional Studies; Cognition; Brain/diagnostic imaging; Magnetic Resonance Imaging
7.
Comput Med Imaging Graph ; 109: 102285, 2023 10.
Article in English | MEDLINE | ID: mdl-37657151

ABSTRACT

The lack of standardization and consistency of acquisition is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations in the acquired images due to differences in hardware and acquisition parameters. In recent years, image synthesis-based MR harmonization with disentanglement has been proposed to compensate for the undesired contrast variations. The general idea is to disentangle anatomy and contrast information from MR images to achieve cross-site harmonization. Despite the success of existing methods, we argue that major improvements can be made from three aspects. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable, since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images), limiting their applicability. Lastly, existing methods are generally sensitive to imaging artifacts. In this paper, we present Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), a novel approach to address these three issues. HACA3 incorporates an anatomy fusion module that accounts for the inherent anatomical differences between MR contrasts. Furthermore, HACA3 can be trained and applied to any combination of MR contrasts and is robust to imaging artifacts. HACA3 is developed and evaluated on diverse MR datasets acquired from 21 sites with varying field strengths, scanner platforms, and acquisition protocols. Experiments show that HACA3 achieves state-of-the-art harmonization performance under multiple image quality metrics. We also demonstrate the versatility and potential clinical impact of HACA3 on downstream tasks including white matter lesion segmentation for people with multiple sclerosis and longitudinal volumetric analyses for normal aging subjects. Code is available at https://github.com/lianruizuo/haca3.
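
A toy sketch of the disentanglement idea, not HACA3 itself: one encoder keeps a spatial anatomy representation, a second encoder compresses the image to a contrast code, and a decoder recombines them, so harmonization amounts to decoding source anatomy with a target contrast code. All layer choices below are assumptions.

```python
import torch
import torch.nn as nn

class AnatomyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 8, 3, padding=1))
    def forward(self, x):                  # (B, 1, H, W) -> (B, 8, H, W) anatomy map
        return self.net(x)

class ContrastEncoder(nn.Module):
    def __init__(self, dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, dim))
    def forward(self, x):                  # (B, 1, H, W) -> (B, dim) contrast code
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(8 + dim, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, anatomy, contrast):  # broadcast the code over the spatial grid
        b, _, h, w = anatomy.shape
        c = contrast.view(b, -1, 1, 1).expand(b, contrast.shape[1], h, w)
        return self.net(torch.cat([anatomy, c], dim=1))

anat_enc, cont_enc, dec = AnatomyEncoder(), ContrastEncoder(), Decoder()
source = torch.randn(2, 1, 64, 64)         # image from the site to harmonize
target = torch.randn(2, 1, 64, 64)         # image with the desired contrast
harmonized = dec(anat_enc(source), cont_enc(target))
print(harmonized.shape)                    # torch.Size([2, 1, 64, 64])
```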


Subjects
Brain; White Matter; Humans; Brain/pathology; Magnetic Resonance Imaging/methods; Aging; Image Processing, Computer-Assisted/methods
8.
Article in English | MEDLINE | ID: mdl-36303574

ABSTRACT

Deep learning promises the extraction of valuable information from traumatic brain injury (TBI) datasets and depends on efficient navigation when using large-scale mixed computed tomography (CT) datasets from clinical systems. To ensure a cleaner signal while training deep learning models, removal of computed tomography angiography (CTA) and scans with streaking artifacts is sensible. On massive datasets of heterogeneously sized scans, time-consuming manual quality assurance (QA) by visual inspection is still often necessary, because even though CTA scans are expected to be annotated, artifacts are not. We propose an automatic QA approach for retrieving CT scans without artifacts by representing 3D scans as 2D axial slice montages and using a multi-headed convolutional neural network to detect CT vs. CTA and artifact vs. no artifact. We sampled 848 scans from a mixed CT dataset of TBI patients and performed 4-fold stratified cross-validation on 698 montages, followed by an ablation experiment; 150 stratified montages were withheld for external validation. Aggregate AUC for our main model was 0.978 for CT detection and 0.675 for artifact detection during cross-validation, and 0.965 for CT detection and 0.698 for artifact detection on the external validation set; the ablated model showed 0.946 for CT detection and 0.735 for artifact detection during cross-validation, and 0.937 for CT detection and 0.708 for artifact detection on the external validation set. While our approach is successful for CT detection, artifact detection performance is likely depressed by the heterogeneity of the streaking artifacts present and the suboptimal number of artifact scans in our training data.
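
The montage representation is straightforward to sketch: evenly spaced axial slices of a 3D scan are tiled into one 2D image that a 2D classifier can consume. The grid size and stand-in volume below are assumptions, not the paper's preprocessing.

```python
import numpy as np

def axial_montage(volume, grid=(4, 4)):
    """Tile evenly spaced axial slices of `volume` (Z, H, W) into one 2D array."""
    rows, cols = grid
    idx = np.linspace(0, volume.shape[0] - 1, rows * cols).astype(int)
    slices = volume[idx]                                   # (rows*cols, H, W)
    h, w = slices.shape[1:]
    montage = slices.reshape(rows, cols, h, w).transpose(0, 2, 1, 3)
    return montage.reshape(rows * h, cols * w)

if __name__ == "__main__":
    ct = np.random.rand(120, 64, 64)       # stand-in for a head CT volume
    print(axial_montage(ct).shape)          # (256, 256)
```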

9.
Front Neurosci ; 16: 768634, 2022.
Article in English | MEDLINE | ID: mdl-35368292

ABSTRACT

Manual classification of functional resting state networks (RSNs) derived from Independent Component Analysis (ICA) decomposition can be labor intensive and requires expertise, particularly in large multi-subject analyses. Hence, a fully automatic algorithm that can reliably classify these RSNs is desirable. In this paper, we present a deep learning approach based on a Siamese Network to learn a discriminative feature representation for single-subject ICA component classification. Advantages of this supervised framework are that it requires relatively few training data examples and it does not require the number of ICA components to be specified. In addition, our approach permits one-shot learning, which allows generalization to new classes not seen in the training set with only one example of each new class. The proposed method is shown to out-perform traditional convolutional neural network (CNN) and template matching methods in identifying eleven subject-specific RSNs, achieving 100% accuracy on a holdout data set and over 99% accuracy on an outside data set. We also demonstrate that the method is robust to scan-rescan variation. Finally, we show that the functional connectivity of default mode and salience networks identified by the proposed technique is altered in a group analysis of mild traumatic brain injury (TBI), severe TBI, and healthy subjects.
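
The one-shot classification step can be illustrated with a small PyTorch sketch in which an embedding network (a stand-in for a trained Siamese branch) maps components to feature vectors and a query is assigned the label of its nearest exemplar; the architecture and data are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy embedding network standing in for the trained Siamese branch
embed = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                      nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())

def one_shot_classify(query, exemplars):
    """Assign `query` (1, 1, H, W) to the class of the closest exemplar embedding."""
    with torch.no_grad():
        q = F.normalize(embed(query), dim=1)
        best, best_dist = None, float("inf")
        for label, ex in exemplars.items():
            d = torch.dist(q, F.normalize(embed(ex), dim=1)).item()
            if d < best_dist:
                best, best_dist = label, d
    return best

if __name__ == "__main__":
    exemplars = {"default mode": torch.randn(1, 1, 64, 64),   # one example per new class
                 "salience": torch.randn(1, 1, 64, 64)}
    print(one_shot_classify(torch.randn(1, 1, 64, 64), exemplars))
```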

10.
Simul Synth Med Imaging ; 13570: 55-65, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36326241

ABSTRACT

Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium deposition can accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of applying our bidirectional synthesis and show that BICEPS outperforms current methods.

11.
Med Phys ; 48(10): 6060-6068, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34287944

ABSTRACT

PURPOSE: Artificial intelligence diagnosis and triage of large vessel occlusion may quicken clinical response for a subset of time-sensitive acute ischemic stroke patients, improving outcomes. Differences in architectural elements within data-driven convolutional neural network (CNN) models impact performance. Foreknowledge of effective model architectural elements for domain-specific problems can narrow the search for candidate models and inform strategic model design and adaptation to optimize performance on available data. Here, we study CNN architectures that span a range of learnable parameters and architectural elements, such as parallel processing branches and residual connections with varying methods of recombining residual information. METHODS: We compare five CNNs: ResNet-50, DenseNet-121, EfficientNet-B0, PhiNet, and an Inception module-based network, on a computed tomography angiography large vessel occlusion detection task. The models were trained and preliminarily evaluated with 10-fold cross-validation on preprocessed scans (n = 240). An ablation study was performed on PhiNet due to its superior cross-validated test performance across accuracy, precision, recall, specificity, and F1 score. The final evaluation of all models was performed on a withheld external validation set (n = 60), and these predictions were subsequently calibrated with sigmoid curves. RESULTS: Uncalibrated results on the withheld external validation set show that DenseNet-121 had the best average performance on accuracy, precision, recall, specificity, and F1 score. After calibration, DenseNet-121 maintained superior performance on all metrics except recall. CONCLUSIONS: The number of learnable parameters in our five models and the best-ablated PhiNet directly related to cross-validated test performance: the smaller the model, the better. However, this pattern did not hold when looking at generalization on the withheld external validation set. DenseNet-121 generalized the best; we posit this was due to its heavy use of residual connections utilizing concatenation, which causes feature maps from earlier layers to be used deeper in the network while aiding gradient flow and regularization.
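
The sigmoid (Platt-style) calibration mentioned above can be sketched with a logistic regression fit on held-out scores; the synthetic scores below are placeholders for the models' uncalibrated outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, 200)                        # LVO present / absent
raw_scores = y_true * 2.0 + rng.normal(0, 1.5, 200)     # stand-in uncalibrated scores

calibrator = LogisticRegression()
calibrator.fit(raw_scores.reshape(-1, 1), y_true)        # fits a sigmoid to the scores
calibrated = calibrator.predict_proba(raw_scores.reshape(-1, 1))[:, 1]
print(calibrated[:5].round(3))
```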


Subjects
Brain Ischemia; Stroke; Artificial Intelligence; Computed Tomography Angiography; Humans; Neural Networks, Computer; Stroke/diagnostic imaging
12.
Simul Synth Med Imaging ; 12965: 14-23, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35291392

ABSTRACT

We propose a method to jointly super-resolve an anisotropic image volume along with its corresponding voxel labels without external training data. Our method is inspired by internally trained super-resolution, or self-super-resolution (SSR), techniques that target anisotropic, low-resolution (LR) magnetic resonance (MR) images. While resulting images from such methods are quite useful, their corresponding LR labels, derived from either automatic algorithms or human raters, are no longer in correspondence with the super-resolved volume. To address this, we develop an SSR deep network that takes both an anisotropic LR MR image and its corresponding LR labels as input and produces both a super-resolved MR image and its super-resolved labels as output. We evaluated our method with 50 T1-weighted brain MR images, 4× down-sampled, with 10 automatically generated labels. In comparison to other methods, our method had superior Dice across all labels and competitive metrics on the MR image. Our approach is the first reported method for SSR of paired anisotropic image and label volumes.

13.
Lect Notes Monogr Ser ; 12444, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34531637

ABSTRACT

Multi-site training methods for artificial neural networks are of particular interest to the medical machine learning community primarily due to the difficulty of data sharing between institutions. However, contemporary multi-site techniques such as weight averaging and cyclic weight transfer make theoretical sacrifices to simplify implementation. In this paper, we implement federated gradient averaging (FGA), a variant of federated learning without data transfer that is mathematically equivalent to single site training with centralized data. We evaluate two scenarios: a simulated multi-site dataset for handwritten digit classification with MNIST and a real multi-site dataset with head CT hemorrhage segmentation. We compare federated gradient averaging to single site training, federated weight averaging (FWA), and cyclic weight transfer. In the MNIST task, we show that training with FGA results in a weight set equivalent to centralized single site training. In the hemorrhage segmentation task, we show that FGA achieves on average superior results to both FWA and cyclic weight transfer due to its ability to leverage momentum-based optimization.
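
A small PyTorch sketch of federated gradient averaging as described above: each simulated site computes gradients on its own batch, only the gradients are averaged, and a single shared optimizer step is taken, which (for a full-batch step with identical models) matches centralized training. The model and data are toy stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                        # shared global model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# each tuple stands in for one site's private data
site_batches = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(3)]

for step in range(5):
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in site_batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for g, p in zip(grads, model.parameters()):
            g += p.grad / len(site_batches)      # only gradients leave the "site"
    for g, p in zip(grads, model.parameters()):
        p.grad = g                               # install the averaged gradient
    opt.step()

print([p.norm().item() for p in model.parameters()])
```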

14.
Med Phys ; 47(1): 89-98, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31660621

ABSTRACT

PURPOSE: As deep neural networks achieve more success in the wide field of computer vision, greater emphasis is being placed on the generalizations of these models for production deployment. With sufficiently large training datasets, models can typically avoid overfitting their data; however, for medical imaging it is often difficult to obtain enough data from a single site. Sharing data between institutions is also frequently nonviable or prohibited due to security measures and research compliance constraints, enforced to guard protected health information (PHI) and patient anonymity. METHODS: In this paper, we implement cyclic weight transfer with independent datasets from multiple geographically disparate sites without compromising PHI. We compare results between single-site learning (SSL) and multisite learning (MSL) models on testing data drawn from each of the training sites as well as two other institutions. RESULTS: The MSL model attains an average dice similarity coefficient (DSC) of 0.690 on the holdout institution datasets with a volume correlation of 0.914, respectively corresponding to a 7% and 5% statistically significant improvement over the average of both SSL models, which attained an average DSC of 0.646 and average correlation of 0.871. CONCLUSIONS: We show that a neural network can be efficiently trained on data from two physically remote sites without consolidating patient data to a single location. The resulting network improves model generalization and achieves higher average DSCs on external datasets than neural networks trained on data from a single source.
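
Cyclic weight transfer, as used above, can be sketched as training the same weights for a few local epochs at one site, handing the weights to the next site, and repeating the cycle so that raw data never leaves an institution; the simulated sites and model below are illustrative only.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
# each tuple stands in for one institution's local data
sites = [(torch.randn(64, 10), torch.randn(64, 1)) for _ in range(2)]

for cycle in range(3):
    for x, y in sites:                               # "travel" to each site in turn
        opt = torch.optim.SGD(model.parameters(), lr=0.05)
        for epoch in range(5):                       # local epochs at this site
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # only model.state_dict() would be transferred to the next site

print(loss_fn(model(sites[0][0]), sites[0][1]).item())
```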


Subjects
Deep Learning; Hemorrhage/diagnostic imaging; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Humans
15.
J Med Imaging (Bellingham) ; 7(6): 064004, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33381612

ABSTRACT

Purpose: Generalizability is an important problem in deep neural networks, especially with variability of data acquisition in clinical magnetic resonance imaging (MRI). Recently, the spatially localized atlas network tiles (SLANT) method has been shown to effectively segment whole-brain, non-contrast T1w MRI with 132 volumetric labels. Transfer learning (TL) is a commonly used domain adaptation tool to update the neural network weights for local factors, yet risks degradation of performance on the original validation/test cohorts. Approach: We explore TL using unlabeled clinical data to address these concerns in the context of adapting SLANT to scanning protocol variations. We optimize whole-brain segmentation on heterogeneous clinical data by leveraging 480 unlabeled pairs of clinically acquired T1w MRI with and without intravenous contrast. We use labels generated on the pre-contrast image to train on the post-contrast image in a five-fold cross-validation framework. We further validated on a withheld test set of 29 paired scans over a different acquisition domain. Results: Using TL, we improve reproducibility across imaging pairs measured by the reproducibility Dice coefficient (rDSC) between the pre- and post-contrast image. We showed an increase over the original SLANT algorithm (rDSC 0.82 versus 0.72) and the FreeSurfer v6.0.1 segmentation pipeline (rDSC = 0.53). We demonstrate the impact of this work by decreasing the root-mean-squared error of volumetric estimates of the hippocampus between paired images of the same subject by 67%. Conclusion: This work demonstrates a pipeline for unlabeled clinical data to translate algorithms optimized for research data to generalize toward heterogeneous clinical acquisitions.
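
The reproducibility Dice coefficient (rDSC) used above is simply the Dice overlap between the segmentations of the pre- and post-contrast images of the same subject, averaged over labels; a minimal sketch with synthetic label volumes follows.

```python
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def reproducibility_dsc(seg_pre, seg_post, labels):
    """Mean per-label Dice between two segmentations of the same subject."""
    return float(np.mean([dice(seg_pre == l, seg_post == l) for l in labels]))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    seg_pre = rng.integers(0, 4, (32, 32, 32))
    # synthetic post-contrast segmentation that agrees ~90% of the time
    seg_post = np.where(rng.random(seg_pre.shape) < 0.9, seg_pre,
                        rng.integers(0, 4, seg_pre.shape))
    print(reproducibility_dsc(seg_pre, seg_post, labels=range(1, 4)))
```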

16.
Article in English | MEDLINE | ID: mdl-34040280

ABSTRACT

Generalizability is an important problem in deep neural networks, especially in the context of the variability of data acquisition in clinical magnetic resonance imaging (MRI). Recently, the Spatially Localized Atlas Network Tiles (SLANT) approach has been shown to effectively segment whole brain non-contrast T1w MRI with 132 volumetric labels. Enhancing generalizability of SLANT would enable broader application of volumetric assessment in multi-site studies. Transfer learning (TL) is commonly used to update neural network weights for local factors; yet, it is commonly recognized to risk degradation of performance on the original validation/test cohorts. Here, we explore TL by data augmentation to address these concerns in the context of adapting SLANT to anatomical variation (e.g., adults versus children) and scanning protocol (e.g., non-contrast research T1w MRI versus contrast-enhanced clinical T1w MRI). We consider two datasets: first, 30 T1w MRI of young children with manually corrected volumetric labels, with the accuracy of automated segmentation defined relative to the manually provided truth; second, 36 paired datasets of pre- and post-contrast clinically acquired T1w MRI, with the accuracy of the post-contrast segmentations assessed relative to the pre-contrast automated assessment. For both studies, we augment the original TL step of SLANT with either only the new data or with both original and new data. Over baseline SLANT, both approaches yielded significantly improved performance (pediatric: 0.89 vs. 0.82 DSC, p<0.001; contrast: 0.80 vs. 0.76, p<0.001). The performance on the original test set decreased with the new-data-only transfer learning approach, so data augmentation was superior to strict transfer learning.

17.
Article in English | MEDLINE | ID: mdl-34040275

ABSTRACT

Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging. MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (>20 cm³), resulting in a learned 2D model without the need for 2D slice annotations. Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained in clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements to enable generalization of MIL by varying the amount of training data. Our results show that a training size of at least 400 patient image volumes was needed to achieve accurate per-slice hemorrhage detection. Over a five-fold cross-validation, the leading model, which made use of the maximum number of training volumes, had an average true positive rate of 98.10%, an average true negative rate of 99.36%, and an average precision of 0.9698. The models have been made available along with source code to enable continued exploration and adaptation of MIL in CT neuroimaging.
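
A compact PyTorch sketch of this MIL formulation, under the assumption of a toy slice encoder: a 2D network scores every axial slice (instance) and the volume-level (bag) logit is the maximum slice score, so only scan-level labels are needed for training.

```python
import torch
import torch.nn as nn

# toy 2D slice encoder standing in for the paper's network
slice_net = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                          nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

def bag_logit(volume):
    """volume: (num_slices, 1, H, W); returns one logit for the whole scan."""
    slice_logits = slice_net(volume)           # (num_slices, 1) instance scores
    return slice_logits.max(dim=0).values      # max pooling over instances

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(slice_net.parameters(), lr=1e-3)

volume = torch.randn(40, 1, 64, 64)            # one CT scan as a bag of slices
label = torch.tensor([1.0])                    # "contains a large hemorrhage"
loss = loss_fn(bag_logit(volume), label)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```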
