ABSTRACT
Artificial intelligence (AI) has the potential to bring transformative improvements to the field of radiology, yet there are barriers to widespread clinical adoption. One of the most important barriers has been access to large, well-annotated, widely representative medical image datasets that can be used to train AI programs accurately. Creating such datasets requires time and expertise and runs into constraints around data security and interoperability, patient privacy, and appropriate data use. Recognizing these challenges, several institutions have started curating and providing publicly available, high-quality datasets that researchers can access to advance AI models. The purpose of this work was to review the publicly available MRI datasets that can be used for AI research in radiology. Although this is an emerging field, a simple internet search for open MRI datasets returns an overwhelming number of results. We therefore surveyed the major publicly accessible MRI datasets in different subfields of radiology (brain, body, and musculoskeletal) and listed the features of greatest value to the AI researcher. To complete this review, we searched for publicly available MRI datasets and assessed them on several parameters (number of subjects, demographics, area of interest, technical features, and annotations). We reviewed 110 datasets across subfields, comprising 1,686,245 subjects in 12 different areas of interest ranging from spine to cardiac. This review is meant to serve as a reference for researchers and to help spur advancements in the field of AI for radiology. LEVEL OF EVIDENCE: Level 4. TECHNICAL EFFICACY: Stage 6.
Subjects
Artificial Intelligence; Radiology; Humans; Radiology/methods; Magnetic Resonance Imaging; Brain/diagnostic imaging
ABSTRACT
Cancer centers have an urgent and unmet clinical and research need for AI that can guide patient management. A core component of advancing cancer treatment research is assessing response to therapy. Doing so by hand, for example per RECIST or RANO criteria, is tedious and time-consuming and can miss important tumor response information. Most notably, the prevalent response criteria often exclude the so-called non-target lesions altogether. We wish to assess change in a holistic fashion that includes all lesions, obtaining simple, informative, and automated assessments of tumor progression or regression. Because genetic subtypes of cancer can be fairly specific, and patient enrollment in therapy trials is often limited in number and accrual rate, we wish to make response assessments with small training sets. Deep neuroevolution (DNE) is a novel radiology artificial intelligence (AI) optimization approach that performs well on small training sets. Here, we use a DNE parameter search to optimize a convolutional neural network (CNN) that predicts progression versus regression of metastatic brain disease. We analyzed 50 pairs of contrast-enhanced MRI images as our training set. Half of these pairs, separated in time, qualified as disease progression, while the other 25 image pairs constituted regression. We trained the parameters of a CNN via "mutations" that consisted of random CNN weight adjustments, and evaluated mutation "fitness" as summed training set accuracy. We then incorporated the best mutations into the next generation's CNN, repeating this process for approximately 50,000 generations. We applied the CNNs to our training set, as well as to a separate testing set with the same class balance of 25 progression and 25 regression cases. DNE achieved monotonic convergence to 100% training set accuracy, and likewise converged monotonically to 100% testing set accuracy. We have thus shown that DNE can accurately classify brain metastatic disease progression versus regression. Future work will extend the input from 2D image slices to full 3D volumes and include the category of "no change." We believe that an approach such as ours can ultimately provide a useful and informative complement to RANO/RECIST assessment and volumetric AI analysis.
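The mutate-evaluate-select loop described above can be sketched with a toy stand-in for the CNN; everything here (the linear model, the synthetic data, the mutation scale, the generation count) is invented for illustration and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's CNN: a linear classifier on 2D features.
# The DNE loop itself is the point: mutate weights, keep the fittest.
X = rng.normal(size=(50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic binary labels

def fitness(w):
    """Fitness = training-set accuracy, as in the paper."""
    preds = (X @ w > 0).astype(int)
    return (preds == y).mean()

w_best = rng.normal(size=2)
for generation in range(2000):
    mutant = w_best + 0.1 * rng.normal(size=2)   # random weight "mutation"
    if fitness(mutant) >= fitness(w_best):       # keep beneficial mutations
        w_best = mutant

print(f"final training accuracy: {fitness(w_best):.2f}")
```

Because only non-worsening mutations are kept, fitness is monotonically non-decreasing over generations, matching the monotonic convergence reported above.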
Subjects
Artificial Intelligence; Brain Neoplasms; Humans; Neural Networks, Computer; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/therapy; Brain/diagnostic imaging; Disease Progression
ABSTRACT
BACKGROUND: Background parenchymal enhancement (BPE) is assessed on breast MRI reports as mandated by the Breast Imaging Reporting and Data System (BI-RADS) but is prone to inter- and intrareader variation. Semiautomated and fully automated BPE assessment tools have been developed, but none has surpassed radiologist BPE designations. PURPOSE: To develop a deep learning model for automated BPE classification and to compare its performance with current standard-of-care radiology report BPE designations. STUDY TYPE: Retrospective. POPULATION: Consecutive high-risk patients (i.e., >20% lifetime risk of breast cancer) who underwent contrast-enhanced screening breast MRI from October 2013 to January 2019. The study included 5224 breast MRIs, divided into 3998 training, 444 validation, and 782 testing exams. On radiology reports, 1286 exams were categorized as high BPE (i.e., marked or moderate) and 3938 as low BPE (i.e., mild or minimal). FIELD STRENGTH/SEQUENCE: A 1.5 T or 3 T system; one precontrast and three postcontrast phases of fat-saturated T1-weighted dynamic contrast-enhanced imaging. ASSESSMENT: Breast MRIs were used to develop two deep learning models (Slab artificial intelligence [AI]; maximum intensity projection [MIP] AI) for BPE categorization using radiology report BPE labels. Models were tested on a held-out test set using radiology report BPE and three-reader averaged consensus as the reference standards. STATISTICAL TESTS: Model performance was assessed using receiver operating characteristic curve analysis. Associations between high BPE and BI-RADS assessments were evaluated using McNemar's chi-square test (α* = 0.025). RESULTS: The Slab AI model significantly outperformed the MIP AI model across the full test set (area under the curve of 0.84 vs. 0.79) using the radiology report reference standard. Using the three-reader consensus BPE labels as the reference standard, our AI model significantly outperformed radiology report BPE labels.
Finally, the AI model was significantly more likely than the radiologist to assign "high BPE" to suspicious breast MRIs and significantly less likely than the radiologist to assign "high BPE" to negative breast MRIs. DATA CONCLUSION: Fully automated BPE assessments for breast MRIs could be more accurate than BPE assessments from radiology reports. LEVEL OF EVIDENCE: 4 TECHNICAL EFFICACY STAGE: 3.
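As background to the two model variants, a maximum intensity projection collapses a contrast-enhanced volume into a single 2D image by taking the voxel-wise maximum along one axis. A minimal sketch on an invented DCE-MRI array (the shapes, values, and subtraction step are hypothetical, not the study's pipeline):

```python
import numpy as np

# Hypothetical DCE-MRI stack: (phases, slices, rows, cols); values made up.
rng = np.random.default_rng(1)
dce = rng.random((4, 32, 64, 64)).astype(np.float32)

# Subtraction image: first post-contrast phase minus pre-contrast phase,
# so that enhancing tissue stands out.
subtraction = dce[1] - dce[0]

# MIP along the slice axis collapses the 3D volume to a single 2D image,
# the input format used by a MIP-based classifier.
mip = subtraction.max(axis=0)
```

A slab-based model would instead consume groups of adjacent slices, retaining some through-plane context that the single MIP discards.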
Subjects
Breast Neoplasms; Deep Learning; Artificial Intelligence; Breast Neoplasms/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging/methods; Radiologists; Retrospective Studies
ABSTRACT
Image classification is probably the most fundamental task in radiology artificial intelligence. To reduce the burden of acquiring and labeling data sets, we employed a two-pronged strategy. We automatically extracted labels from radiology reports in Part 1. In Part 2, we used the labels to train a data-efficient reinforcement learning (RL) classifier. We applied the approach to a small set of patient images and radiology reports from our institution. For Part 1, we trained sentence-BERT (SBERT) on 90 radiology reports. In Part 2, we used the labels from the trained SBERT to train an RL-based classifier. We trained the classifier on a training set of [Formula: see text] images. We tested on a separate collection of [Formula: see text] images. For comparison, we also trained and tested a supervised deep learning (SDL) classification network on the same set of training and testing images using the same labels. Part 1: The trained SBERT model improved from 82 to [Formula: see text] accuracy. Part 2: Using Part 1's computed labels, SDL quickly overfitted the small training set. Whereas SDL showed the worst possible testing set accuracy of 50%, RL achieved [Formula: see text] testing set accuracy, with a [Formula: see text]-value of [Formula: see text]. We have shown the proof-of-principle application of automated label extraction from radiological reports. Additionally, we have built on prior work applying RL to classification using these labels, extending from 2D slices to entire 3D image volumes. RL has again demonstrated a remarkable ability to train effectively, in a generalized manner, and based on small training sets.
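The label-extraction idea in Part 1 amounts to comparing a report's sentence embedding against reference embeddings for each class. The sketch below uses tiny hand-made vectors in place of real sentence-BERT outputs, so the embeddings, dimensionality, and class names are purely illustrative:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder reference embeddings standing in for sentence-BERT outputs;
# in the paper, the model was fine-tuned on 90 radiology reports.
label_embeddings = {
    "normal": np.array([1.0, 0.1, 0.0]),
    "tumor":  np.array([0.1, 1.0, 0.2]),
}

def label_report(report_embedding):
    """Assign the class whose reference embedding is most similar."""
    return max(label_embeddings,
               key=lambda k: cosine(report_embedding, label_embeddings[k]))

print(label_report(np.array([0.2, 0.9, 0.1])))   # closest to "tumor"
```

The resulting labels would then feed the Part 2 classifier in place of manual annotations.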
Subjects
Artificial Intelligence; Neuroimaging; Humans; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional; Brain
ABSTRACT
Even though teeth are often included in the field of view for a variety of medical CT studies, dental pathology is often missed by radiologists. Given the myriad morbidity and occasional mortality associated with sequelae of dental pathology, an important goal is to decrease these false negatives. However, given the ever-increasing volume of studies that radiologists have to read and the number of structures and diseases they have to evaluate, it is important not to place undue time constraints on the radiologist to this end. We hypothesized that generating panoramic dental radiographs from non-dental CT scans can permit identification of key diseases while not adding much time to interpretation. The key advantage of panoramic dental radiographs is that they display the plane of the teeth in two dimensions, thereby facilitating fast and accurate assessment. We found that interpreting panoramic radiographic reconstructions rather than the full CT volumes reduced time-to-diagnosis of key dental pathology on average by roughly a factor of four. This speedup was statistically significant, and the average time-to-diagnosis for panoramic reconstructions was on the order of seconds, without a loss in accuracy compared to full CT. As such, we posit that panoramic reconstruction can serve as a one-slice additional series in any CT image stack that includes the teeth in its field of view.
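A panoramic reconstruction is essentially a curved planar reformation: the CT data are resampled along the dental arch. The sketch below uses a synthetic slice and a hypothetical parabolic arch with nearest-neighbor sampling; a real implementation would trace the arch per patient and stack rows across slices:

```python
import numpy as np

# Toy axial CT slice containing a parabolic "dental arch" (values made up).
ct_slice = np.zeros((128, 128), dtype=np.float32)

# Hypothetical arch: the parabola y = 0.01 * (x - 64)^2 + 30.
xs = np.arange(16, 112)
ys = (0.01 * (xs - 64.0) ** 2 + 30.0).astype(int)
ct_slice[ys, xs] = 1.0                      # mark arch voxels as "teeth"

# Curved reformation: read the slice back along the same arch.
panoramic_row = ct_slice[ys, xs]

# Stacking such rows over the cranio-caudal axis would give the full
# panoramic image; one row suffices to show the idea.
```

Because the arch samples exactly the marked voxels here, the reformatted row recovers the "teeth" intensities in a single line of pixels.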
Subjects
Tomography, X-Ray Computed; Humans; Radiography, Panoramic
ABSTRACT
Aneurysm size correlates with rupture risk and is important for treatment planning. User annotation of aneurysm size is slow and tedious, particularly for large data sets. Geometric shortcuts to compute size have been shown to be inaccurate, particularly for nonstandard aneurysm geometries. Our purpose was to develop and train a convolutional neural network (CNN) to detect and measure cerebral aneurysms from magnetic resonance angiography (MRA) automatically and without geometric shortcuts. In step 1, a CNN based on the U-net architecture was trained on 250 MRA maximum intensity projection (MIP) images, then applied to a testing set. In step 2, the trained CNN was applied to a separate set of 14 basilar tip aneurysms for size prediction. Step 1: the CNN successfully identified aneurysms in 85/86 (98.8% of) testing set cases, with a receiver operating characteristic (ROC) area under the curve of 0.87. Step 2: automated basilar tip aneurysm linear size differed from radiologist-traced aneurysm size on average by 2.01 mm, or 30%; the CNN aneurysm area differed from radiologist-derived area on average by 8.1 mm², or 27%. The CNN correctly predicted the area trend for the set of aneurysms. This approach is, to our knowledge, the first to use CNNs to derive aneurysm size. In particular, we demonstrate the clinically pertinent application of computing maximal aneurysm one-dimensional size and two-dimensional area. We propose that future work can apply this to facilitate pre-treatment planning and possibly identify previously missed aneurysms in retrospective assessment.
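Once a segmentation mask is available, linear size and area follow directly from the mask geometry: area is the pixel count scaled by pixel area, and maximal linear size is the largest pairwise distance between mask pixels. A sketch on a hypothetical binary mask (the mask shape and 0.5 mm spacing are invented):

```python
import numpy as np
from itertools import combinations

# Hypothetical binary aneurysm mask on a MIP image, 0.5 mm pixel spacing.
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 24:40] = True                  # a 10 x 16 pixel "aneurysm"
spacing_mm = 0.5

# Two-dimensional area: pixel count times per-pixel area.
area_mm2 = mask.sum() * spacing_mm ** 2

# Maximal one-dimensional size: largest pairwise distance between pixels.
ys, xs = np.nonzero(mask)
pts = np.stack([ys, xs], axis=1) * spacing_mm
size_mm = max(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
```

For large masks the brute-force pairwise search can be restricted to boundary pixels or replaced by a convex-hull diameter computation without changing the result.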
Subjects
Cerebral Angiography/methods; Image Interpretation, Computer-Assisted/methods; Intracranial Aneurysm/diagnostic imaging; Magnetic Resonance Angiography/methods; Neural Networks, Computer; Humans; Retrospective Studies
ABSTRACT
Ultrasound is notoriously plagued by high user dependence. There is a steep drop-off in information between what the sonographer sees during image acquisition and what the interpreting radiologist is able to view at the reading station. One countermeasure is probe localization and tracking, but current implementations are too difficult and expensive to use and/or do not provide adequate detail and perspective. The aim of this work was to demonstrate that a protocol combining surface three-dimensional photographic imaging with traditional ultrasound images may solve the problem of probe localization, an approach we term surface point cloud ultrasound (SPC-US). Ultrasound images were obtained of major vessels in an ultrasound training phantom while simultaneously obtaining surface point cloud (SPC) 3D photographic images, with additional scanning performed on the right forearm soft tissues, kidneys, chest, and pelvis. The resulting sets of grayscale/color Doppler ultrasound and SPC images are juxtaposed and displayed for interpretation in a manner analogous to current text-based annotation or computer-generated stick-figure probe position illustrations. The results clearly demonstrate that SPC-US better communicates probe position and orientation, providing much richer image representations of probe position on the patient than the current prevailing schemes. SPC-US turns out to be a rather general technique with many anticipated future applications, though only a few sample applications are illustrated in the present work.
Subjects
Blood Vessels/anatomy & histology; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Ultrasonography/methods; Algorithms; Equipment Design; Humans; Phantoms, Imaging
ABSTRACT
OBJECTIVES: Early identification and quantification of bladder damage in pediatric patients with congenital anomalies of the kidney and urinary tract (CAKUT) is crucial to guiding effective treatment and may affect the eventual clinical outcome, including progression of renal disease. We have developed a novel approach based on the convex hull to calculate bladder wall trabecularity in pediatric patients with CAKUT. The objective of this study was to test whether our approach can accurately predict bladder wall irregularity. METHODS: Twenty pediatric patients, half with renal compromise and CAKUT and half with normal renal function, were evaluated. We applied the convex hull approach to calculate T, a metric proposed to reflect the degree of trabeculation/bladder wall irregularity, in this set of patients. RESULTS: The average T value was roughly 3 times higher for diseased than healthy patients (0.14 [95% confidence interval, 0.10-0.17] versus 0.05 [95% confidence interval, 0.03-0.07] for normal bladders). This disparity was statistically significant (P < .01). CONCLUSIONS: We have demonstrated that a convex hull-based procedure can measure bladder wall irregularity. Because bladder damage is a reversible precursor to irreversible renal parenchymal damage, applying such a measure to at-risk pediatric patients can help guide prompt interventions to avert disease progression.
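The abstract does not give the exact formula for T; one plausible convex-hull-based irregularity measure is the fractional area deficit between the bladder contour and its convex hull, which is what the sketch below computes on an invented contour (a smooth wall gives T near 0, inward trabeculation notches raise it):

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain convex hull; points is an (N, 2) array."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    pts = sorted(map(tuple, points))
    def build(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = build(pts), build(list(reversed(pts)))
    return np.array(lower[:-1] + upper[:-1])

def polygon_area(poly):
    """Shoelace formula for the area of a simple polygon."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical bladder wall contour with one inward "trabeculation" notch;
# the published definition of T may differ -- this is an illustrative proxy.
contour = np.array([[0, 0], [4, 0], [4, 4], [2, 2], [0, 4]], dtype=float)

hull = convex_hull(contour)
T = 1.0 - polygon_area(contour) / polygon_area(hull)   # area deficit fraction
```

On this notched square, the contour covers 12 of the hull's 16 area units, giving T = 0.25.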
Subjects
Kidney/abnormalities; Ultrasonography/methods; Urinary Bladder Diseases/diagnostic imaging; Urinary Bladder/diagnostic imaging; Urinary Bladder/pathology; Urinary Tract/abnormalities; Adolescent; Child; Child, Preschool; Female; Humans; Infant; Male; Urinary Bladder Diseases/pathology
ABSTRACT
OBJECTIVES: To predict the chronic kidney disease (CKD) state for pediatric patients based on scaled renal cortical echogenicity. METHODS: Sonograms from a cohort of 26 patients, half of whom had stage 4 or 5 CKD, whereas the other half had normal renal function, were analyzed. For each patient image, a region of interest (ROI) was drawn around the renal cortex for comparison with an ROI drawn around the hepatic parenchyma. The latter ROI was shifted spatially to normalize the signal attenuations and time-gain compensations of the two organs' ROIs. Then the average pixel intensity of the renal ROI was divided by the corresponding hepatic value, resulting in scaled renal cortical echogenicity. RESULTS: The average scaled renal cortical echogenicity was higher for diseased than healthy kidneys by roughly a factor of 2 (2.01 [95% confidence interval, 1.62-2.40] versus 1.05 [95% confidence interval, 0.88-1.23] for normal kidneys). This difference was statistically significant (P < .001). CONCLUSIONS: Our results show that the pediatric CKD state correlates with rigorously calculated scaled renal cortical echogenicity.
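The core computation reduces to a ratio of mean ROI intensities. The sketch below uses synthetic pixel values and omits the paper's spatial shift that matches depth, attenuation, and time-gain compensation between the two ROIs:

```python
import numpy as np

# Synthetic B-mode intensities for two organ ROIs (values are invented).
rng = np.random.default_rng(2)
renal_cortex_roi = rng.normal(loc=120.0, scale=10.0, size=(40, 40))
hepatic_roi      = rng.normal(loc=60.0,  scale=10.0, size=(40, 40))

# Scaled echogenicity: mean renal-cortex intensity divided by the mean
# intensity of a hepatic reference ROI.
scaled_echogenicity = renal_cortex_roi.mean() / hepatic_roi.mean()
```

Dividing by a same-patient hepatic reference cancels gain and machine settings that would otherwise confound raw cortical brightness.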
Subjects
Cicatrix/complications; Cicatrix/diagnostic imaging; Renal Insufficiency, Chronic/complications; Renal Insufficiency, Chronic/diagnostic imaging; Ultrasonography; Adolescent; Child; Child, Preschool; Cicatrix/pathology; Cohort Studies; Female; Humans; Kidney/diagnostic imaging; Kidney/pathology; Male; Renal Insufficiency, Chronic/pathology
ABSTRACT
Surface morphology and shape in general are important predictors for the behavior of solid-type lung nodules detected on CT. More broadly, shape analysis is useful in many areas of computer-aided diagnosis and essentially all scientific and engineering disciplines. Automated methods for shape detection have all previously, to the author's knowledge, relied on some sort of geometric measure. I introduce Normal Mode Analysis Shape Detection (NMA-SD), an approach that measures shape indirectly via the motion it would undergo if one imagined the shape to be a pseudomolecule. NMA-SD allows users to visualize internal movements in the imaging object and thereby develop an intuition for which motions are important, and which geometric features give rise to them. This can guide the identification of appropriate classification features to distinguish among classes of interest. I employ normal mode analysis (NMA) to animate pseudomolecules representing simulated lung nodules. Doing so, I am able to assign a testing set of nodules into the classes circular, elliptical, and irregular with roughly 97% accuracy. This represents a proof-of-principle that one can obtain shape information by treating voxels as pseudoatoms in a pseudomolecule, and analyzing the pseudomolecule's predicted motion.
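The pseudomolecule idea builds on standard elastic-network normal mode analysis: connect nearby pseudoatoms with springs, assemble the Hessian, and diagonalize it; the eigenvectors are the normal modes. A minimal anisotropic-network-model sketch on four 2D points (the coordinates, cutoff, and stiffness are arbitrary, and real nodules would contribute many more pseudoatoms in 3D):

```python
import numpy as np

# Pseudomolecule: treat nodule voxels as pseudoatoms (here, four 2D points).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
n, d = coords.shape
cutoff, k = 1.5, 1.0                      # spring cutoff and stiffness

# Anisotropic-network-model Hessian: one unit-vector outer-product block
# per spring, accumulated into the (n*d) x (n*d) stiffness matrix.
H = np.zeros((n * d, n * d))
for i in range(n):
    for j in range(i + 1, n):
        rij = coords[j] - coords[i]
        dist = np.linalg.norm(rij)
        if dist < cutoff:
            block = k * np.outer(rij, rij) / dist**2
            H[i*d:(i+1)*d, i*d:(i+1)*d] += block
            H[j*d:(j+1)*d, j*d:(j+1)*d] += block
            H[i*d:(i+1)*d, j*d:(j+1)*d] -= block
            H[j*d:(j+1)*d, i*d:(i+1)*d] -= block

# Normal modes: eigenvectors of H. Near-zero eigenvalues are the rigid-body
# motions (two translations and one rotation in 2D); the rest are internal
# deformations whose character reflects the shape.
eigvals, eigvecs = np.linalg.eigh(H)
```

Animating a shape along its low-frequency internal modes is what lets one see which geometric features drive which motions, and hence which mode-derived features separate the shape classes.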
Subjects
Lung Neoplasms/diagnostic imaging; Pattern Recognition, Automated/methods; Radiographic Image Enhancement; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Humans; Models, Anatomic; Radiographic Image Enhancement/methods; Reference Values; Sensitivity and Specificity
ABSTRACT
PURPOSE: To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. MATERIALS AND METHODS: Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity, and entropy (all defined on T2WI), and average ADC. A forward selection scheme was used on a held-out 20% of training set supervoxels to identify important inputs. The trained model was tested on a different set of ten patients, half with TZ tumors. RESULTS: In training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier. Each of 100 iterations selected T2WI energy and average ADC, which therefore were deemed the optimal model inputs. The two-feature model was applied blindly to the separate set of test patients, again without operator input of suspicious foci. The software correctly predicted presence or absence of TZ tumor in all test patients. Furthermore, locations of predicted tumors corresponded spatially with locations of biopsies that had confirmed their presence. CONCLUSION: Preliminary findings suggest that this tool has potential to accurately predict TZ tumor presence and location without operator input.
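A two-feature Gaussian Naïve Bayes classifier of the kind described can be written in a few lines: fit a per-class mean and standard deviation for each feature, assume feature independence, and score classes by summed log-likelihood. The feature values below are invented stand-ins for supervoxel T2WI energy and mean ADC, not data from the study:

```python
import numpy as np

# Hypothetical per-supervoxel features: (T2WI energy, mean ADC).
rng = np.random.default_rng(3)
tumor    = np.column_stack([rng.normal(0.30, 0.05, 100),
                            rng.normal(0.70, 0.10, 100)])
nontumor = np.column_stack([rng.normal(0.15, 0.05, 100),
                            rng.normal(1.30, 0.10, 100)])

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian, evaluated per feature."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# "Training": per-class feature means and standard deviations.
stats = {label: (cls.mean(axis=0), cls.std(axis=0))
         for label, cls in [("tumor", tumor), ("nontumor", nontumor)]}

def classify(x):
    """Naive Bayes decision: class with the highest summed log-likelihood."""
    scores = {label: gaussian_logpdf(x, mu, sd).sum()
              for label, (mu, sd) in stats.items()}
    return max(scores, key=scores.get)
```

With well-separated class distributions like these, a supervoxel with high T2WI energy and low ADC scores as tumor, mirroring the two features the forward selection repeatedly chose.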
Subjects
Algorithms; Artificial Intelligence; Diffusion Magnetic Resonance Imaging/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Prostatic Neoplasms/pathology; Software; Aged; Humans; Image Enhancement/methods; Male; Middle Aged; Pilot Projects; Reproducibility of Results; Sensitivity and Specificity; Software Validation
ABSTRACT
Two significant obstacles hinder the advancement of radiology AI. The first is the challenge of overfitting, where small training data sets can result in unreliable outcomes. The second is limited generalizability, which creates difficulties in implementing the technology across various institutions and practices. A recent innovation, deep neuroevolution (DNE), has been introduced to tackle the overfitting issue by training on small data sets and producing accurate predictions. However, the generalizability of DNE has yet to be proven. This paper strives to overcome this barrier by demonstrating that DNE can achieve satisfactory results on diverse external validation sets. The main innovation of the work is thus showing that DNE can generalize to varied outside data. Our example use case is predicting brain metastasis from neuroblastoma, emphasizing the importance of AI for limited data sets. Despite image collection and labeling advancements, rare diseases will always constrain data availability. We optimized a convolutional neural network (CNN) with DNE to demonstrate generalizability. We trained the CNN with 60 MRI images and tested it on a separate, diverse collection of images from over 50 institutions. For comparison, we also trained with the more traditional stochastic gradient descent (SGD) method, in two variants: (1) training from scratch and (2) transfer learning. Our results show that DNE demonstrates excellent generalizability, with 97% accuracy on the heterogeneous testing set, while neither form of SGD could reach 60% accuracy. DNE's ability to generalize from small training sets to external and diverse testing sets suggests that it or similar approaches may play an integral role in improving the clinical performance of AI.
ABSTRACT
Reliable and trustworthy artificial intelligence (AI), particularly in high-stakes medical diagnoses, necessitates effective uncertainty quantification (UQ). Existing UQ methods using model ensembles often introduce invalid variability or computational complexity, rendering them impractical and ineffective in the clinical workflow. We propose a UQ approach based on deep neuroevolution (DNE), a data-efficient optimization strategy. Our goal is to replicate trends observed in expert-based UQ. We focused on language lateralization maps from resting-state functional MRI (rs-fMRI). Fifty rs-fMRI maps were divided into training/testing (30:20) sets, representing two labels: "left-dominant" and "co-dominant." DNE facilitated acquiring an ensemble of 100 models with high training and testing set accuracy. Model uncertainty was derived from distribution entropies over the 100 model predictions. Expert reviewers provided user-based uncertainties for comparison. Model (epistemic) and user-based (aleatoric) uncertainties were consistent in the independently and identically distributed (IID) testing set, mainly indicating low uncertainty. In a mostly out-of-distribution (OOD) holdout set, both model and user-based entropies correlated but displayed a bimodal distribution, with one peak representing low and another high uncertainty. We also found a statistically significant positive correlation between epistemic and aleatoric uncertainties. DNE-based UQ effectively mirrored user-based uncertainties, particularly highlighting increased uncertainty in OOD images. We conclude that DNE-based UQ correlates with expert assessments, making it reliable for our use case and potentially for other radiology applications.
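Deriving uncertainty from "distribution entropies over the 100 model predictions" can be sketched directly: tally the ensemble's votes for a case and compute the entropy of the resulting class distribution. The vote counts below are hypothetical:

```python
import numpy as np

def ensemble_entropy(votes, n_classes=2):
    """Entropy (bits) of the class-vote distribution across an ensemble."""
    counts = np.bincount(votes, minlength=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty classes (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# 100 hypothetical model predictions for one case
# (0 = "left-dominant", 1 = "co-dominant").
confident = ensemble_entropy(np.array([0] * 97 + [1] * 3))    # near-agreement
uncertain = ensemble_entropy(np.array([0] * 55 + [1] * 45))   # near-split
```

Near-unanimous ensembles yield entropy near 0 bits, while an even split yields the maximum of 1 bit for two classes, which is the behavior the bimodal OOD distribution above reflects.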
ABSTRACT
This study aims to assess the effectiveness of integrating Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework results in high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.
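Converting a mined line measurement into a bounding box, and applying the < 1 cm / > 7 cm exclusion, is a small geometric step; the endpoint coordinates below are invented:

```python
def line_to_box(p1, p2, pad=0.0):
    """Convert a line-measurement annotation (two endpoints, in mm) into an
    axis-aligned bounding box (x_min, y_min, x_max, y_max)."""
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2) - pad, min(y1, y2) - pad,
            max(x1, x2) + pad, max(y1, y2) + pad)

def box_long_axis(box):
    """Longer side of the box, a proxy for the measured lesion size."""
    x_min, y_min, x_max, y_max = box
    return max(x_max - x_min, y_max - y_min)

# Mined measurement: keep it only if the long axis is within 10-70 mm,
# mirroring the < 1 cm / > 7 cm exclusion described above.
box = line_to_box((12.0, 30.0), (40.0, 55.0))
keep = 10.0 <= box_long_axis(box) <= 70.0
```

Boxes that survive the size filter become weak labels for the detector, whose own predictions then label the remaining unannotated lesions in the teacher-student step.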
ABSTRACT
Surface morphology is an important indicator of malignant potential for solid-type lung nodules detected at CT but is difficult to assess subjectively. Automated methods for morphology assessment have previously been described using a common measure of nodule shape, representative of the broad class of existing methods, termed the area-to-perimeter-length ratio (APR). APR is static and thus highly susceptible to alteration by random noise and artifacts in image acquisition. We introduce and analyze the self-overlap (SO) method as a dynamic automated morphology detection scheme. SO measures the degree of change of nodule masks upon Gaussian blurring. We hypothesized that this new metric would afford accuracy equal to, and precision superior to, that of APR. Applying the two methods to a set of 119 patient lung nodules and a set of simulation nodules showed our approach to be slightly more accurate (on the patient nodules) and on the order of ten times as precise (on the simulated nodules). The dynamic quality of this new automated metric renders it less sensitive to image noise and artifacts than APR, and as such, SO is a potentially useful measure of cancer risk for solid-type lung nodules detected on CT.
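The SO idea, measuring how much a nodule mask changes under Gaussian blurring, can be sketched as a Dice overlap between the mask and its re-thresholded blur: smooth shapes survive blurring largely intact, while spiculated ones lose their fine protrusions. The published definition may differ in detail, and the shapes and sigma below are synthetic:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two passes of 1D convolution."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, rows)

def self_overlap(mask, sigma=2.0):
    """Dice overlap between a nodule mask and its re-thresholded blur;
    a sketch of the SO idea, not necessarily the published formula."""
    blurred = gaussian_blur(mask.astype(float), sigma) >= 0.5
    inter = np.logical_and(mask, blurred).sum()
    return 2.0 * inter / (mask.sum() + blurred.sum())

# Smooth disk vs. the same disk with thin "spiculations" (synthetic shapes).
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32)**2 + (xx - 32)**2 <= 15**2
spiky = disk | (((yy + xx) % 8 == 0) & ((yy - 32)**2 + (xx - 32)**2 <= 24**2))
```

Here `self_overlap(disk)` stays near 1 while the thin spokes of `spiky` vanish under the blur, lowering its score, which is exactly the smooth-versus-irregular separation the metric is meant to capture.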
Subjects
Image Processing, Computer-Assisted; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Solitary Pulmonary Nodule/diagnostic imaging; Solitary Pulmonary Nodule/pathology; Tomography, X-Ray Computed/methods; Algorithms; Artifacts; Automation; Biopsy, Needle; Diagnosis, Differential; False Positive Reactions; Humans; Immunohistochemistry; Phantoms, Imaging; Sensitivity and Specificity
ABSTRACT
PURPOSE: To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL). MATERIALS AND METHODS: In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. Lesion locations were used to train a keypoint detection convolutional neural network to find new lesions. The trained network was then used to localize lesions in an independent test set of 85 images. The statistical measure used to evaluate our method was percent accuracy. RESULTS: Eye tracking with speech recognition was 92% accurate in labeling lesion locations from the training dataset, demonstrating that fully simulated interpretation can yield reliable tumor location labels. These labels were then used to train the DL network. The detection network trained on these labels predicted lesion location in a separate testing set with 85% accuracy. CONCLUSION: The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation. © RSNA, 2020.
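Pairing gaze with dictation reduces to selecting the gaze samples recorded during the lesion-describing phrase and summarizing them into a keypoint; all timestamps and coordinates below are invented:

```python
# Hypothetical alignment of eye-tracking samples with dictation timestamps.
# Speech recognition supplies the start/end time of the lesion-describing
# phrase; gaze points recorded in that window become the lesion label.
gaze_samples = [        # (time_s, x, y) -- invented values
    (0.2, 50, 60), (0.9, 52, 61), (1.4, 120, 80), (2.1, 122, 83), (2.6, 40, 30),
]
phrase_window = (1.0, 2.3)   # window of the dictated lesion phrase

def gaze_during(samples, t0, t1):
    """Gaze points falling within the dictated phrase's time window."""
    return [(x, y) for (t, x, y) in samples if t0 <= t <= t1]

lesion_points = gaze_during(gaze_samples, *phrase_window)

# Averaging the in-window points gives one keypoint label per lesion.
label = (sum(x for x, _ in lesion_points) / len(lesion_points),
         sum(y for _, y in lesion_points) / len(lesion_points))
```

In practice the gaze stream is noisy, so a robust summary (such as the densest fixation cluster rather than a plain mean) may be preferable.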
ABSTRACT
Nuclear medicine imaging is an important modality for following up abnormalities of thyroid function tests and for uncovering and characterizing thyroid nodules, either de novo or as previously seen on other imaging modalities, namely ultrasound. In general, hypofunctioning 'cold' nodules pose a higher malignancy potential than hyperfunctioning 'hot' nodules, for which the risk is <1%. Hot nodules are detected by the radiologist as a region of focal increased radiotracer uptake, which appears as a density of pixels higher than the surrounding normal thyroid parenchyma. Similarly, cold nodules show decreased pixel density, corresponding to their decreased uptake of radiotracer, and are photopenic. Partly because nuclear medicine images have poor resolution, these density variations can sometimes be subtle, and a second-reader computer-aided detection (CAD) scheme that can highlight hot/cold nodules has the potential to reduce false negatives by bringing the radiologist's attention to occasionally overlooked nodules. Our approach subdivides thyroid images into small regions and employs a set of pixel density cutoffs, marking regions that fulfill the density criteria. Thresholding is a fundamental tool in image processing; in nuclear medicine, scroll bars to adjust standardized uptake value cutoffs are already in wide commercial use in PET/CT display systems. A similar system could be used for planar thyroid images, whereby the user varies the threshold and highlights suspect regions after an initial survey of the images. We hypothesized that a thresholding approach would accurately detect both hot and cold thyroid nodules relative to expert readers. Analyzing 22 nodules, half of them hot and the other half cold, we found good agreement between highlighted candidate nodules and the consensus selections of two expert readers, with nonzero overlap between expert and CAD selections in all cases.
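The region-thresholding scheme can be sketched as tiling the planar image into blocks and flagging blocks whose mean density crosses a cutoff in either direction; the block size, cutoffs, and synthetic image below are arbitrary choices for illustration:

```python
import numpy as np

def flag_regions(img, block=8, low=0.25, high=0.75):
    """Tile the image into blocks; flag blocks whose mean pixel density
    falls below the 'cold' cutoff or above the 'hot' cutoff."""
    h, w = img.shape
    flags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            m = img[r:r + block, c:c + block].mean()
            if m <= low:
                flags.append((r, c, "cold"))
            elif m >= high:
                flags.append((r, c, "hot"))
    return flags

# Synthetic planar thyroid image: uniform uptake 0.5 with one hot and one
# cold patch (normalized counts; values are invented).
img = np.full((64, 64), 0.5)
img[8:16, 8:16] = 0.9     # "hot" nodule
img[40:48, 40:48] = 0.1   # "cold" nodule

flags = flag_regions(img)
```

Exposing `low` and `high` as user-adjustable sliders would reproduce the scroll-bar interaction described above, with flagged blocks overlaid as highlights for the second-reader review.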
Subjects
Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Radionuclide Imaging/methods; Radiopharmaceuticals/analysis; Thyroid Gland/pathology; Thyroid Nodule/diagnosis; Diagnosis, Differential; Humans; Retrospective Studies; Thyroid Gland/diagnostic imaging; Thyroid Nodule/classification; Thyroid Nodule/diagnostic imaging
ABSTRACT
The empirical harmonic potential function of elastic network models (ENMs) is augmented by three- and four-body interactions as well as by a parameter-free connection rule. In the new bend-twist-stretch (BTS) model the complexity of the parametrization is shifted from the spatial level of detail to the potential function, enabling an arbitrary coarse graining of the network. Compared to distance cutoff-based Hookean springs, the approach yields a more stable parametrization of coarse-grained ENMs for biomolecular dynamics. Traditional ENMs give rise to unbounded zero-frequency vibrations when (pseudo)atoms are connected to fewer than three neighbors. A large cutoff is therefore chosen in an ENM (about twice the average nearest-neighbor distance), resulting in many false-positive connections that reduce the spatial detail that can be resolved. More importantly, the required three-neighbor connectedness also limits the coarse graining, i.e., the network must be dense, even in the case of low-resolution structures that exhibit few spatial features. The new BTS model achieves such coarse graining by extending the ENM potential to include three- and four-atom interactions (bending and twisting, respectively) in addition to the traditional two-atom stretching. Thus, the BTS model enables reliable modeling of any three-dimensional graph irrespective of the atom connectedness. The additional potential terms were parametrized using continuum elastic theory of elastic rods, and the distance cutoff was replaced by a competitive Hebb connection rule, setting all free parameters in the model. We validate the approach on a carbon-alpha representation of adenylate kinase and illustrate its use with electron microscopy maps of E. coli RNA polymerase, E. coli ribosome, and eukaryotic chaperonin containing T-complex polypeptide 1, which were difficult to model with traditional ENMs.
For adenylate kinase, we find excellent reproduction (>90% overlap) of the ENM modes and B factors when BTS is applied to the carbon-alpha representation as well as to coarser descriptions. For the volumetric maps, coarse BTS yields similar motions (70%-90% overlap) to those obtained from significantly denser representations with ENM. Our Python-based implementations of ENM and BTS are freely available.
Subjects
Elasticity; Models, Molecular; Movement; Adenylate Kinase/chemistry; Adenylate Kinase/metabolism; Biomechanical Phenomena; Chaperonins/chemistry; Chaperonins/metabolism; DNA-Directed RNA Polymerases/chemistry; DNA-Directed RNA Polymerases/metabolism; Escherichia coli/enzymology; Microscopy, Electron; Reproducibility of Results; Ribosomes/chemistry; Ribosomes/metabolism
ABSTRACT
We report a rare case of a patient with colorectal cancer with chest wall metastases. The development of bleeding at the site of the metastasis ultimately resulted in the development of a hematoma, necessitating resection of the tumor along with part of the chest wall. Literature on chest wall metastases of colonic adenocarcinoma is reviewed and discussed. The teaching point is that a chest wall mass seen on imaging should prompt consideration of metastatic cancer in the differential diagnosis. The colon is a rare though reported primary site.
Subjects
Adenocarcinoma/secondary; Colonic Neoplasms; Hematoma/etiology; Thoracic Neoplasms/secondary; Thoracic Wall; Aged; Diagnosis, Differential; Echocardiography; Humans; Magnetic Resonance Imaging; Male; Tomography, X-Ray Computed
ABSTRACT
Advances in multidetector technology have made dual-energy computed tomography (CT) imaging possible. Dual-energy CT imaging enables tissue characterization in addition to morphologic evaluation of imaged regions. This article reviews current and potential CT technology, technical and workflow considerations when performing dual-energy CT, and clinical applications in the thorax, with an emphasis on the knowledge gained so far.