1.
Int J Legal Med ; 138(3): 939-949, 2024 May.
Article in English | MEDLINE | ID: mdl-38147158

ABSTRACT

PURPOSE: We aimed to establish a model combining MRI volume measurements from the 1st, 2nd and 3rd molars for age prediction in sub-adults, and to compare the age prediction performance of different combinations of the three molars within the study cohort. MATERIAL AND METHOD: We examined 99 volunteers using a 1.5 T MR scanner with a customized high-resolution single T2 sequence. Segmentation was performed using SliceOmatic (Tomovision©). Age prediction was based on the tooth tissue ratio (high signal soft tissue + low signal soft tissue)/total volume. The model included three correlation parameters to account for statistical dependence between the molars. Age prediction performance of the different molar combinations was assessed using the interquartile range (IQR). RESULTS: We included data from the 1st molars of 87 participants (F/M 59/28), 2nd molars of 93 (F/M 60/33) and 3rd molars of 67 (F/M 45/22). The age range was 14-24 years with a median age of 18 years. The model with the best age prediction performance (smallest IQR) was 46-47-18 (lower right 1st and 2nd molars and upper right 3rd molar) in males. The estimated correlations between the molars were 0.620 (46 vs. 47), 0.430 (46 vs. 18), and 0.598 (47 vs. 18). The IQR was smallest for tooth combinations that included a 3rd molar. CONCLUSION: We have established a model for combining tissue volume measurements from the 1st, 2nd and 3rd molars for age prediction in sub-adults. The prediction performance was mostly driven by the 3rd molars, and all combinations involving a 3rd molar performed well.
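As an editorial illustration of the tissue-ratio outcome described above, the following Python sketch computes (high-signal soft tissue + low-signal soft tissue)/total from a segmented label volume. This is not the authors' code; the label coding is an assumption.

```python
# Hedged sketch (not the authors' code): tooth tissue ratio
# (high-signal soft tissue + low-signal soft tissue) / total volume
# from a segmented label map. The label values below are assumed.
import numpy as np

HIGH_SOFT, LOW_SOFT, HARD = 1, 2, 3   # assumed label coding

def tissue_ratio(label_map: np.ndarray) -> float:
    high = np.count_nonzero(label_map == HIGH_SOFT)
    low = np.count_nonzero(label_map == LOW_SOFT)
    total = np.count_nonzero(label_map > 0)   # all tooth-tissue voxels
    return (high + low) / total

# toy stand-in for one segmented molar
rng = np.random.default_rng(0)
demo = rng.integers(0, 4, size=(50, 50, 50))
print(f"tissue ratio: {tissue_ratio(demo):.3f}")
```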


Subject(s)
Magnetic Resonance Imaging, Molar, Adult, Male, Humans, Adolescent, Young Adult, Molar/diagnostic imaging
2.
Int J Legal Med ; 137(5): 1515-1526, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37402013

ABSTRACT

PURPOSE: To investigate prediction of age older than 18 years in sub-adults using tooth tissue volumes from MRI segmentation of the entire 1st and 2nd molars, and to establish a model for combining information from two different molars. MATERIALS AND METHODS: We acquired T2-weighted MRIs of 99 volunteers with a 1.5-T scanner. Segmentation was performed using SliceOmatic (Tomovision©). Linear regression was used to analyse the association between mathematical transformation outcomes of tissue volumes, age, and sex. Performance of the different outcomes and tooth combinations was assessed based on the p-value of the age variable, common or separate for each sex depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach using information from the 1st and 2nd molars both separately and combined. RESULTS: 1st molars from 87 participants and 2nd molars from 93 participants were included. The age range was 14-24 years with a median age of 18 years. The transformation outcome (high signal soft tissue + low signal soft tissue)/total had the strongest statistical association with age for the lower right 1st molar (p = 7.1 × 10⁻⁴ for males) and 2nd molar (p = 9.44 × 10⁻⁷ for males and p = 7.4 × 10⁻¹⁰ for females). Combining the lower right 1st and 2nd molars in males did not increase the prediction performance compared to using the best tooth alone. CONCLUSION: MRI segmentation of the lower right 1st and 2nd molars might prove useful in the prediction of age older than 18 years in sub-adults. We provided a statistical framework to combine the information from two molars.
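The Bayesian step described in this abstract can be sketched as follows. The regression coefficients, residual SD, and uniform 14-24-year prior are placeholders chosen for illustration, not values from the paper; combining two molars, as the authors do, would replace this univariate likelihood with a correlated joint likelihood.

```python
# Minimal sketch of the Bayesian step (not the authors' model): given a linear
# regression of a transformed tissue outcome y on age, invert it to obtain
# P(age > 18 | observed y) under a uniform age prior. All numbers are made up.
import numpy as np
from scipy import stats

a, b, sigma = 0.95, -0.015, 0.03      # hypothetical intercept, slope, residual SD
ages = np.linspace(14.0, 24.0, 2001)  # uniform prior over the study age range
prior = np.ones_like(ages) / ages.size

def prob_over_18(y_obs: float) -> float:
    likelihood = stats.norm.pdf(y_obs, loc=a + b * ages, scale=sigma)
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return posterior[ages > 18.0].sum()

print(f"P(age > 18 | y = 0.68) = {prob_over_18(0.68):.2f}")
```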


Subject(s)
Magnetic Resonance Imaging, Molar, Male, Female, Humans, Adult, Adolescent, Young Adult, Bayes Theorem, Molar/diagnostic imaging, Linear Models, Probability
3.
Int J Legal Med ; 137(3): 753-763, 2023 May.
Article in English | MEDLINE | ID: mdl-36811675

ABSTRACT

PURPOSE: Our aim was to investigate tissue volumes measured by MRI segmentation of the entire 3rd molar for prediction of a sub-adult being older than 18 years. MATERIAL AND METHOD: We used a 1.5-T MR scanner with a customized high-resolution single T2 sequence acquisition with 0.37 mm iso-voxels. Two dental cotton rolls soaked in water stabilized the bite and delineated the teeth from oral air. Segmentation of the different tooth tissue volumes was performed using SliceOmatic (Tomovision©). Linear regression was used to analyze the association between mathematical transformation outcomes of the tissue volumes, age, and sex. Performance of the different transformation outcomes and tooth combinations was assessed based on the p-value of the age variable, combined or separate for each sex depending on the selected model. The predictive probability of being older than 18 years was obtained by a Bayesian approach. RESULTS: We included 67 volunteers (F/M: 45/22), age range 14-24 years, median age 18 years. The transformation outcome (pulp + predentine)/total volume for the upper 3rd molars had the strongest association with age (p = 3.4 × 10⁻⁹). CONCLUSION: MRI segmentation of tooth tissue volumes might prove useful in the prediction of age older than 18 years in sub-adults.
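A minimal sketch of the association analysis on synthetic data, assuming an ordinary least squares model of a transformed outcome on age and sex; the variable names, coefficients, and simulated values are illustrative only and do not reproduce the study data.

```python
# Sketch of the association analysis (synthetic data): regress a transformed
# tissue outcome such as (pulp + predentine)/total on age and sex, then read
# off the p-value of the age term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 67
df = pd.DataFrame({
    "age": rng.uniform(14, 24, n),
    "sex": rng.choice(["F", "M"], n),
})
# synthetic outcome: ratio decreases with age plus noise
df["ratio"] = 0.35 - 0.01 * df["age"] + rng.normal(0, 0.02, n)

model = smf.ols("ratio ~ age + C(sex)", data=df).fit()
print(model.summary().tables[1])          # coefficient table
print("p-value for age:", model.pvalues["age"])
```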


Subject(s)
Age Determination by Teeth, Molar, Adolescent, Humans, Bayes Theorem, Linear Models, Magnetic Resonance Imaging, Molar/diagnostic imaging, Age Determination by Teeth/methods, Young Adult, Male, Female, Predictive Value of Tests
5.
Pediatr Radiol ; 52(6): 1104-1114, 2022 05.
Article in English | MEDLINE | ID: mdl-35107593

ABSTRACT

BACKGROUND: Manual assessment of bone marrow signal is time-consuming and requires meticulous standardisation to secure adequate precision of findings. OBJECTIVE: We examined the feasibility of using deep learning for automated segmentation of bone marrow signal in children and adolescents. MATERIALS AND METHODS: We selected knee images from 95 whole-body MRI examinations of healthy individuals and of children with chronic non-bacterial osteomyelitis, ages 6-18 years, from a longitudinal prospective multi-centre study cohort. Bone marrow signal on T2-weighted Dixon water-only images was divided into three color-coded intensity levels: 1 = slightly increased; 2 = mildly increased; 3 = moderately to highly increased, up to fluid-like signal. We trained a convolutional neural network on 85 examinations to perform bone marrow segmentation. Four readers manually segmented a test set of 10 examinations, and a ground truth was calculated using simultaneous truth and performance level estimation (STAPLE). We evaluated model and reader performance using the Dice similarity coefficient and by consensus scoring. RESULTS: The consensus score of model performance showed acceptable results for all but one examination. Model performance and reader agreement had the highest scores for level-1 signal (median Dice 0.68) and the lowest scores for level-3 signal (median Dice 0.40), particularly in examinations where this signal was sparse. CONCLUSION: It is feasible to develop a deep-learning-based model for automated segmentation of bone marrow signal in children and adolescents. Our model performed poorest for the highest signal intensity in examinations where this signal was sparse. Further improvement requires training on larger and more balanced datasets and validation against ground truth, which should be established in consensus by radiologists from several institutions.
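For reference, the per-level Dice similarity coefficient used to score agreement can be computed as in this sketch; the random masks here are stand-ins for a model output and a STAPLE-style consensus reference, not study data.

```python
# Illustrative sketch: Dice similarity coefficient per intensity level between
# a predicted segmentation and a consensus reference mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, level: int) -> float:
    p, r = (pred == level), (ref == level)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else np.nan

rng = np.random.default_rng(2)
pred = rng.integers(0, 4, size=(128, 128))   # 0 = background, 1-3 = signal levels
ref = rng.integers(0, 4, size=(128, 128))
for level in (1, 2, 3):
    print(f"level {level}: Dice = {dice(pred, ref, level):.2f}")
```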


Subject(s)
Deep Learning, Adolescent, Bone Marrow/diagnostic imaging, Child, Feasibility Studies, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Magnetic Resonance Spectroscopy, Prospective Studies
6.
Clin Nutr ESPEN ; 43: 360-368, 2021 06.
Article in English | MEDLINE | ID: mdl-34024541

ABSTRACT

BACKGROUND & AIMS: Excess adipose tissue may affect colorectal cancer (CRC) patients' disease progression and treatment. In contrast to commonly used anthropometric measurements, dual-energy X-ray absorptiometry (DXA) and computed tomography (CT) can differentiate adipose tissues. However, these modalities are rarely used in the clinic despite providing high-quality estimates. This study aimed to compare DXA's measurement of abdominal visceral adipose tissue (VAT) and fat mass (FM) against a corresponding volume by CT in a CRC population. Secondly, we aimed to identify the best single lumbar CT slice for abdominal VAT. Lastly, we investigated the associations between anthropometric measurements and VAT estimated by DXA and CT. METHODS: Non-metastatic CRC patients between 50 and 80 years of age from the ongoing randomized controlled trial CRC-NORDIET were included in this cross-sectional study. Corresponding abdominal volumes were acquired by Lunar iDXA and from clinically acquired CT examinations. Single CT slices at the L2, L3 and L4 levels were also obtained. Agreement between the methods was investigated using univariate linear regression and Bland-Altman plots. RESULTS: Sixty-six CRC patients were included. Abdominal volumetric VAT and FM measured by DXA explained up to 91% and 96% of the variance in VAT and FM by CT, respectively. Bland-Altman plots demonstrated an overestimation of VAT by DXA compared to CT (mean difference of 76 cm³) concurrent with an underestimation of FM (mean difference of -319 cm³). A higher overestimation of VAT (p = 0.015) and underestimation of FM (p = 0.036) were observed in obese relative to normal-weight subjects. VAT in a single slice at the L3 level showed the highest explained variance against the CT volume (R² = 0.97), but a combination of three slices (L2, L3, L4) explained significantly more of the variance than L3 alone (R² = 0.98, p < 0.006). The anthropometric measurements explained between 31% and 65% of the variance of volumetric VAT measured by DXA and CT. CONCLUSIONS: DXA and the combined use of three CT slices (L2-L4) are valid for predicting abdominal volumetric VAT and FM in CRC patients when volumetric CT is used as the reference method. Due to the poor performance of anthropometric measurements, we recommend exploring the added value of advanced body composition assessment by DXA and CT integrated into CRC care.
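A sketch of the agreement analysis on synthetic numbers: univariate regression of DXA VAT on CT VAT for explained variance, plus the Bland-Altman bias and limits of agreement. The simulated 76 cm³ offset echoes the reported mean difference, but all values are invented.

```python
# Hedged sketch (synthetic data, not study data): regression R^2 and
# Bland-Altman statistics comparing DXA and CT visceral adipose tissue volumes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
vat_ct = rng.normal(1500, 600, 66)               # cm^3, hypothetical CT volumes
vat_dxa = vat_ct + 76 + rng.normal(0, 120, 66)   # DXA overestimates on average

slope, intercept, r, p, se = stats.linregress(vat_ct, vat_dxa)
print(f"explained variance R^2 = {r**2:.2f}")

diff = vat_dxa - vat_ct
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bland-Altman bias = {bias:.0f} cm^3, limits of agreement = "
      f"({bias - loa:.0f}, {bias + loa:.0f}) cm^3")
```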


Subject(s)
Colorectal Neoplasms, Tomography, X-Ray Computed, Absorptiometry, Photon, Adipose Tissue, Aged, Aged, 80 and over, Colorectal Neoplasms/diagnostic imaging, Cross-Sectional Studies, Humans, Middle Aged
7.
IEEE Trans Neural Netw Learn Syst ; 32(3): 932-946, 2021 03.
Article in English | MEDLINE | ID: mdl-33544680

ABSTRACT

Chest computed tomography (CT) imaging has become indispensable for staging and managing coronavirus disease 2019 (COVID-19), and current evaluation of the abnormalities associated with COVID-19 has been performed largely by visual scoring. The development of automated methods for quantifying COVID-19 abnormalities in these CT images is invaluable to clinicians. The hallmark of COVID-19 in chest CT images is the presence of ground-glass opacities in the lung region, which are tedious to segment manually. We propose an anamorphic depth embedding-based lightweight CNN, called Anam-Net, to segment anomalies in COVID-19 chest CT images. The proposed Anam-Net has 7.8 times fewer parameters than the state-of-the-art UNet (or its variants), making it lightweight and capable of providing inference on mobile or resource-constrained (point-of-care) platforms. The results from chest CT images (test cases) across different experiments showed that the proposed method provides good Dice similarity scores for both abnormal and normal regions in the lung. We benchmarked Anam-Net against other state-of-the-art architectures, such as ENet, LEDNet, UNet++, SegNet, Attention UNet, and DeepLabV3+. The proposed Anam-Net was also deployed on embedded systems such as the Raspberry Pi 4 and NVIDIA Jetson Xavier, and in an Android application (CovSeg), to demonstrate its suitability for point-of-care platforms. The code, models, and mobile application are available at https://github.com/NaveenPaluru/Segmentation-COVID-19.
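To illustrate the parameter-count argument behind a "lightweight" encoder, the arithmetic below compares 3 × 3 convolution layers at two channel widths; the widths are invented for illustration and do not describe Anam-Net's actual architecture.

```python
# Back-of-the-envelope sketch: parameters in a 3x3 conv layer are
# k*k*in_ch*out_ch + out_ch, so halving or quartering channel widths shrinks
# the model dramatically. Channel widths below are hypothetical.
def conv_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    return k * k * in_ch * out_ch + out_ch

wide = [1, 64, 128, 256, 512]    # U-Net-like encoder widths
slim = [1, 16, 32, 64, 128]      # a much slimmer encoder

wide_total = sum(conv_params(a, b) for a, b in zip(wide, wide[1:]))
slim_total = sum(conv_params(a, b) for a, b in zip(slim, slim[1:]))
print(f"wide: {wide_total:,} params, slim: {slim_total:,} params, "
      f"ratio ~ {wide_total / slim_total:.1f}x")
```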


Asunto(s)
COVID-19/diagnóstico por imagen , Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador/métodos , Pulmón/diagnóstico por imagen , Redes Neurales de la Computación , Tomografía Computarizada por Rayos X/métodos , COVID-19/epidemiología , Humanos
8.
Radiographics ; 40(5): 1395-1411, 2020.
Article in English | MEDLINE | ID: mdl-32735475

ABSTRACT

Neuroimmune disorders in children are a complex group of inflammatory conditions of the central nervous system with diverse pathophysiologic mechanisms and clinical manifestations. Improvements in antibody analysis, genetics, neuroradiology, and clinical phenotyping have expanded knowledge of the different neuroimmune disorders. The authors focus on pediatric-onset myelin oligodendrocyte glycoprotein (MOG) antibody-associated disease, a new entity in the spectrum of inflammatory demyelinating diseases that is distinct from both multiple sclerosis (MS) and anti-aquaporin-4 (AQP4) antibody neuromyelitis optica spectrum disorders (NMOSDs). The authors review the importance of an optimized antibody-detection assay, the frequency of MOG antibodies in children with acquired demyelinating syndrome (ADS), the disease course, the clinical spectrum, proposed diagnostic criteria, and neuroimaging of MOG antibody-associated disease. They also outline the differential diagnosis from other neuroimmune disorders in children according to the putative primary immune mechanism. Finally, they recommend a diagnostic algorithm for the first manifestation of ADS or relapsing ADS that leads to four demyelinating syndromes: MOG antibody-associated disease, AQP4 antibody NMOSDs, MS, and seronegative relapsing ADS. This diagnostic approach provides a framework for the strategic role of neuroradiology in the diagnosis of ADS and in decision making, to optimize patient care and treatment outcome in concert with clinicians. Online supplemental material is available for this article. ©RSNA, 2020.


Subject(s)
Autoimmune Diseases of the Nervous System/diagnostic imaging, Molecular Imaging/methods, Neuroimaging/methods, Autoimmune Diseases of the Nervous System/therapy, Child, Diagnosis, Differential, Humans
9.
J Digit Imaging ; 32(4): 571-581, 2019 08.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment that enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods for annotating medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
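A conceptual sketch of the AID loop described here. The functions train_model, predict_contours, and review_and_correct are hypothetical stand-ins for the roles in the workflow (data scientist, model, analyst/radiologist); they are not RIL-Contour functions.

```python
# Conceptual sketch of annotation by iterative deep learning (AID).
# All functions below are placeholders invented for illustration.
def train_model(labels):                # placeholder: fit a segmentation model
    return {"trained_on": len(labels)}

def predict_contours(model, image):     # placeholder: model pre-annotates an image
    return f"proposed contours for {image}"

def review_and_correct(proposal):       # placeholder: human approves or edits
    return proposal

def aid_loop(images, seed_labels, rounds=3):
    labels = dict(seed_labels)          # manually annotated seed set
    model = None
    for _ in range(rounds):
        model = train_model(labels)     # retrain on everything labelled so far
        for image in images:
            if image not in labels:
                labels[image] = review_and_correct(predict_contours(model, image))
    return model, labels

model, labels = aid_loop(["ct_001", "ct_002", "ct_003"], {"ct_000": "manual contours"})
print(model, len(labels))
```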


Subject(s)
Datasets as Topic, Deep Learning, Diagnostic Imaging/methods, Image Processing, Computer-Assisted/methods, Radiology Information Systems, Humans
10.
Radiology ; 290(3): 669-679, 2019 03.
Article in English | MEDLINE | ID: mdl-30526356

ABSTRACT

Purpose To develop and evaluate a fully automated algorithm for segmenting the abdomen from CT to quantify body composition. Materials and Methods For this retrospective study, a convolutional neural network based on the U-Net architecture was trained to perform abdominal segmentation on a data set of 2430 two-dimensional CT examinations and was tested on 270 CT examinations. It was further tested on a separate data set of 2369 patients with hepatocellular carcinoma (HCC). CT examinations were performed between 1997 and 2015. The mean age of patients was 67 years; for male patients, it was 67 years (range, 29-94 years), and for female patients, it was 66 years (range, 31-97 years). Differences in segmentation performance were assessed by using two-way analysis of variance with Bonferroni correction. Results Compared with reference segmentation, the model for this study achieved Dice scores (mean ± standard deviation) of 0.98 ± 0.03, 0.96 ± 0.02, and 0.97 ± 0.01 in the test set, and 0.94 ± 0.05, 0.92 ± 0.04, and 0.98 ± 0.02 in the HCC data set, for the subcutaneous, muscle, and visceral adipose tissue compartments, respectively. Performance met or exceeded that of expert manual segmentation. Conclusion Model performance met or exceeded the accuracy of expert manual segmentation of CT examinations for both the test data set and the hepatocellular carcinoma data set. The model generalized well to multiple levels of the abdomen and may be capable of fully automated quantification of body composition metrics in three-dimensional CT examinations. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Chang in this issue.
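As a hedged illustration of turning a segmentation like this into body composition metrics, the sketch below converts labelled pixel counts in one axial slice to areas using the CT pixel spacing; the label coding and spacing values are assumptions, not details from the paper.

```python
# Illustrative sketch: convert a 2-D abdominal segmentation into tissue areas
# (cm^2) per compartment using the in-plane pixel spacing. Values are assumed.
import numpy as np

PIXEL_SPACING_MM = (0.78, 0.78)   # hypothetical in-plane spacing
LABELS = {1: "subcutaneous fat", 2: "muscle", 3: "visceral fat"}

def areas_cm2(mask: np.ndarray) -> dict:
    mm2_per_pixel = PIXEL_SPACING_MM[0] * PIXEL_SPACING_MM[1]
    return {name: np.count_nonzero(mask == label) * mm2_per_pixel / 100.0
            for label, name in LABELS.items()}

rng = np.random.default_rng(4)
demo_mask = rng.integers(0, 4, size=(512, 512))   # stand-in for a model output
print(areas_cm2(demo_mask))
```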


Subject(s)
Body Composition, Deep Learning, Pattern Recognition, Automated, Radiographic Image Interpretation, Computer-Assisted/methods, Radiography, Abdominal, Tomography, X-Ray Computed, Adult, Aged, Aged, 80 and over, Algorithms, Carcinoma, Hepatocellular/diagnostic imaging, Humans, Liver Neoplasms/diagnostic imaging, Middle Aged, Retrospective Studies