1.
Radiol Artif Intell ; 6(2): e230152, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38353633

ABSTRACT

Purpose To develop a Weakly supervISed model DevelOpment fraMework (WISDOM) model to construct a lymph node (LN) diagnosis model for patients with rectal cancer (RC) that uses preoperative MRI data coupled with postoperative patient-level pathologic information. Materials and Methods In this retrospective study, the WISDOM model was built using MRI (T2-weighted and diffusion-weighted imaging) and patient-level pathologic information (the number of postoperatively confirmed metastatic LNs and resected LNs) based on the data of patients with RC between January 2016 and November 2017. The incremental value of the model in assisting radiologists was investigated. The performances in binary and ternary N staging were evaluated using area under the receiver operating characteristic curve (AUC) and the concordance index (C index), respectively. Results A total of 1014 patients (median age, 62 years; IQR, 54-68 years; 590 male) were analyzed, including the training cohort (n = 589) and internal test cohort (n = 146) from center 1 and two external test cohorts (cohort 1: 117; cohort 2: 162) from centers 2 and 3. The WISDOM model yielded an overall AUC of 0.81 and C index of 0.765, significantly outperforming junior radiologists (AUC = 0.69, P < .001; C index = 0.689, P < .001) and performing comparably with senior radiologists (AUC = 0.79, P = .21; C index = 0.788, P = .22). Moreover, the model significantly improved the performance of junior radiologists (AUC = 0.80, P < .001; C index = 0.798, P < .001) and senior radiologists (AUC = 0.88, P < .001; C index = 0.869, P < .001). Conclusion This study demonstrates the potential of WISDOM as a useful LN diagnosis method using routine rectal MRI data. The improved radiologist performance observed with model assistance highlights the potential clinical utility of WISDOM in practice. Keywords: MR Imaging, Abdomen/GI, Rectum, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. 
Published under a CC BY 4.0 license.
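The two evaluation metrics in this abstract — AUC for binary N staging and the concordance index for ternary N staging — can be sketched in plain Python. This is an illustrative implementation, not the WISDOM code, and the labels and scores below are invented.

```python
from itertools import combinations

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive case scores above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def c_index(stages, scores):
    """Concordance index for an ordinal outcome (e.g., ternary N staging):
    among pairs with different true stages, the fraction the model score
    orders correctly; ties in score count as half."""
    concordant, comparable = 0.0, 0
    for (y1, s1), (y2, s2) in combinations(zip(stages, scores), 2):
        if y1 == y2:
            continue
        comparable += 1
        if (s1 - s2) * (y1 - y2) > 0:
            concordant += 1.0
        elif s1 == s2:
            concordant += 0.5
    return concordant / comparable

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))             # 0.75
print(c_index([0, 1, 2, 1, 0], [0.1, 0.5, 0.9, 0.4, 0.2]))  # 1.0
```

For binary outcomes the C index reduces to the AUC, which is why the paper can report both from one model.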


Subjects
Deep Learning; Rectal Neoplasms; Humans; Male; Middle Aged; Retrospective Studies; Magnetic Resonance Imaging/methods; Rectal Neoplasms/diagnostic imaging; Lymph Nodes/diagnostic imaging
2.
Radiol Artif Intell ; 5(3): e220146, 2023 May.
Article in English | MEDLINE | ID: mdl-37293340

ABSTRACT

Artificial intelligence (AI) tools may assist breast screening mammography programs, but limited evidence supports their generalizability to new settings. This retrospective study used a 3-year dataset (April 1, 2016-March 31, 2019) from a U.K. regional screening program. The performance of a commercially available breast screening AI algorithm was assessed with a prespecified and a site-specific decision threshold to evaluate whether its performance was transferable to a new clinical site. The dataset consisted of women (aged approximately 50-70 years) who attended routine screening, excluding self-referrals, those with complex physical requirements, those who had undergone a previous mastectomy, and screening examinations with technical recalls or without the four standard image views. In total, 55 916 screening attendees (mean age, 60 years ± 6 [SD]) met the inclusion criteria. The prespecified threshold resulted in a high recall rate (48.3%, 21 929 of 45 444), which fell to 13.0% (5896 of 45 444) after threshold calibration, closer to the observed service level (5.0%, 2774 of 55 916). Recall rates also increased approximately threefold following a software upgrade on the mammography equipment, requiring per-software-version thresholds. Using software-specific thresholds, the AI algorithm would have recalled 277 of 303 (91.4%) screen-detected cancers and 47 of 138 (34.1%) interval cancers. AI performance and thresholds should be validated for new clinical settings before deployment, and quality assurance systems should monitor AI performance for consistency. Keywords: Breast, Screening, Mammography, Computer Applications-Detection/Diagnosis, Neoplasms-Primary, Technology Assessment Supplemental material is available for this article. © RSNA, 2023.
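The site-specific calibration step described in this abstract — tuning the decision threshold so the AI's flag rate approaches the service's observed recall rate — can be sketched as follows. The scores and target rate are hypothetical; real calibration would use a large representative sample per software version.

```python
def calibrate_threshold(scores, target_rate):
    """Pick a decision threshold so that roughly `target_rate` of exams
    score at or above it -- a simple form of site-specific calibration."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

# Hypothetical AI suspicion scores for 20 screening exams.
scores = [0.02, 0.05, 0.07, 0.10, 0.12, 0.15, 0.20, 0.22, 0.30, 0.33,
          0.40, 0.45, 0.50, 0.55, 0.60, 0.70, 0.75, 0.80, 0.90, 0.95]
thr = calibrate_threshold(scores, target_rate=0.05)  # aim for ~5% recall
flagged = sum(s >= thr for s in scores)
print(thr, flagged)  # 0.95 1
```

A prespecified threshold from another site skips this step entirely, which is how the 48.3% recall rate in the abstract arises.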

3.
Radiol Artif Intell ; 4(5): e220055, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204531

ABSTRACT

Purpose: To train a deep natural language processing (NLP) model, using data mined structured oncology reports (SOR), for rapid tumor response category (TRC) classification from free-text oncology reports (FTOR) and to compare its performance with human readers and conventional NLP algorithms. Materials and Methods: In this retrospective study, databases of three independent radiology departments were queried for SOR and FTOR dated from March 2018 to August 2021. An automated data mining and curation pipeline was developed to extract Response Evaluation Criteria in Solid Tumors-related TRCs for SOR for ground truth definition. The deep NLP bidirectional encoder representations from transformers (BERT) model and three feature-rich algorithms were trained on SOR to predict TRCs in FTOR. Models' F1 scores were compared against scores of radiologists, medical students, and radiology technologist students. Lexical and semantic analyses were conducted to investigate human and model performance on FTOR. Results: Oncologic findings and TRCs were accurately mined from 9653 of 12 833 (75.2%) queried SOR, yielding oncology reports from 10 455 patients (mean age, 60 years ± 14 [SD]; 5303 women) who met inclusion criteria. On 802 FTOR in the test set, BERT achieved better TRC classification results (F1, 0.70; 95% CI: 0.68, 0.73) than the best-performing reference linear support vector classifier (F1, 0.63; 95% CI: 0.61, 0.66) and technologist students (F1, 0.65; 95% CI: 0.63, 0.67), had similar performance to medical students (F1, 0.73; 95% CI: 0.72, 0.75), but was inferior to radiologists (F1, 0.79; 95% CI: 0.78, 0.81). Lexical complexity and semantic ambiguities in FTOR influenced human and model performance, revealing maximum F1 score drops of -0.17 and -0.19, respectively. 
Conclusion: The developed deep NLP model reached the performance level of medical students but not radiologists in curating oncologic outcomes from radiology FTOR. Keywords: Neural Networks, Computer Applications-Detection/Diagnosis, Oncology, Research Design, Staging, Tumor Response, Comparative Studies, Decision Analysis, Experimental Investigations, Observer Performance, Outcomes Analysis Supplemental material is available for this article. © RSNA, 2022.
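The F1-based comparison of readers and models in this study can be illustrated with a small macro-averaged F1 implementation. The RECIST-style labels below (PD/SD/PR) are toy values, not the study's data, and the study's exact averaging scheme is not stated in the abstract.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 across tumor response categories (TRCs)."""
    scores = []
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy RECIST-style categories (invented for illustration).
y_true = ["PD", "SD", "PR", "SD"]
y_pred = ["PD", "SD", "SD", "SD"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.6
```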

4.
Radiol Artif Intell ; 4(5): e210268, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204530

ABSTRACT

Purpose: To evaluate the performance of a deep learning (DL) model that measures the liver segmental volume ratio (LSVR) (ie, the volumes of Couinaud segments I-III/IV-VIII) and spleen volumes from CT scans to predict cirrhosis and advanced fibrosis. Materials and Methods: For this Health Insurance Portability and Accountability Act-compliant, retrospective study, two datasets were used. Dataset 1 consisted of patients with hepatitis C who underwent liver biopsy (METAVIR F0-F4, 2000-2016). Dataset 2 consisted of patients who had cirrhosis from other causes who underwent liver biopsy (Ishak 0-6, 2001-2021). Whole liver, LSVR, and spleen volumes were measured with contrast-enhanced CT by radiologists and the DL model. Areas under the receiver operating characteristic curve (AUCs) for diagnosing advanced fibrosis (≥METAVIR F2 or Ishak 3) and cirrhosis (≥METAVIR F4 or Ishak 5) were calculated. Multivariable models were built on dataset 1 and tested on datasets 1 (hold out) and 2. Results: Datasets 1 and 2 consisted of 406 patients (median age, 50 years [IQR, 44-56 years]; 297 men) and 207 patients (median age, 50 years [IQR, 41-57 years]; 147 men), respectively. In dataset 1, the prediction of cirrhosis was similar between the manual versus automated measurements for spleen volume (AUC, 0.86 [95% CI: 0.82, 0.90] vs 0.85 [95% CI: 0.81, 0.89]; significantly noninferior, P < .001) and LSVR (AUC, 0.83 [95% CI: 0.78, 0.87] vs 0.79 [95% CI: 0.74, 0.84]; P < .001). The best-performing multivariable model achieved AUCs of 0.94 (95% CI: 0.89, 0.99) and 0.79 (95% CI: 0.71, 0.87) for cirrhosis and 0.80 (95% CI: 0.69, 0.91) and 0.71 (95% CI: 0.64, 0.78) for advanced fibrosis in datasets 1 and 2, respectively. Conclusion: The CT-based DL model performed similarly to radiologists. LSVR and splenic volume were predictive of advanced fibrosis and cirrhosis. Keywords: CT, Liver, Cirrhosis, Computer Applications-Detection/Diagnosis Supplemental material is available for this article.
© RSNA, 2022.
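The liver segmental volume ratio defined in this abstract — segments I-III over segments IV-VIII — is a simple ratio once per-segment volumes are available. A sketch with invented volumes:

```python
def lsvr(segment_volumes_ml):
    """Liver segmental volume ratio: volume of Couinaud segments I-III
    divided by volume of segments IV-VIII (volumes in mL)."""
    left = sum(segment_volumes_ml[s] for s in ("I", "II", "III"))
    right = sum(segment_volumes_ml[s] for s in ("IV", "V", "VI", "VII", "VIII"))
    return left / right

# Hypothetical per-segment volumes from an automated segmentation (mL).
vols = {"I": 30, "II": 90, "III": 120, "IV": 180, "V": 250,
        "VI": 220, "VII": 280, "VIII": 270}
print(lsvr(vols))  # 240 / 1200 = 0.2
```

Relative hypertrophy of the caudate and left lateral segments in cirrhosis drives the LSVR upward, which is why the ratio carries diagnostic signal.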

5.
Radiol Artif Intell ; 4(3): e210064, 2022 May.
Article in English | MEDLINE | ID: mdl-35652114

ABSTRACT

Purpose: To assess generalizability of published deep learning (DL) algorithms for radiologic diagnosis. Materials and Methods: In this systematic review, the PubMed database was searched for peer-reviewed studies of DL algorithms for image-based radiologic diagnosis that included external validation, published from January 1, 2015, through April 1, 2021. Studies using nonimaging features or incorporating non-DL methods for feature extraction or classification were excluded. Two reviewers independently evaluated studies for inclusion, and any discrepancies were resolved by consensus. Internal and external performance measures and pertinent study characteristics were extracted, and relationships among these data were examined using nonparametric statistics. Results: Eighty-three studies reporting 86 algorithms were included. The vast majority (70 of 86, 81%) reported at least some decrease in external performance compared with internal performance, with nearly half (42 of 86, 49%) reporting at least a modest decrease (≥0.05 on the unit scale) and nearly a quarter (21 of 86, 24%) reporting a substantial decrease (≥0.10 on the unit scale). No study characteristics were found to be associated with the difference between internal and external performance. Conclusion: Among published external validation studies of DL algorithms for image-based radiologic diagnosis, the vast majority demonstrated diminished algorithm performance on the external dataset, with some reporting a substantial performance decrease. Keywords: Meta-Analysis, Computer Applications-Detection/Diagnosis, Neural Networks, Computer Applications-General (Informatics), Epidemiology, Technology Assessment, Diagnosis, Informatics Supplemental material is available for this article. © RSNA, 2022.
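The review's bookkeeping — counting algorithms whose external performance dropped by at least 0.05 or 0.10 on the unit scale relative to internal performance — can be sketched as below. The (internal, external) AUC pairs are invented.

```python
def tally_drops(perf_pairs, modest=0.05, substantial=0.10):
    """Count algorithms whose external performance fell below internal
    performance on the unit scale (e.g., AUC)."""
    any_drop = sum(ext < internal for internal, ext in perf_pairs)
    modest_n = sum(internal - ext >= modest for internal, ext in perf_pairs)
    big_n = sum(internal - ext >= substantial for internal, ext in perf_pairs)
    return any_drop, modest_n, big_n

# Invented (internal, external) AUC pairs.
pairs = [(0.95, 0.93), (0.90, 0.84), (0.88, 0.76), (0.85, 0.86)]
print(tally_drops(pairs))  # (3, 2, 1)
```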

6.
Heliyon ; 8(4): e09311, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35520623

ABSTRACT

Purpose: This study aims to evaluate the potential of machine learning algorithms built with radiomics features from computed tomography urography (CTU) images to classify RB1 gene mutation status in bladder cancer. Method: The study enrolled CTU images of 18 patients with and 54 without RB1 mutation from a public database. Image and data preprocessing were performed after data augmentation. Feature selection consisted of a filter and a wrapper method: Pearson's correlation analysis served as the filter, and a wrapper-based sequential feature selection algorithm as the wrapper. Models with XGBoost, Random Forest (RF), and k-Nearest Neighbors (kNN) algorithms were developed, and their performance metrics were calculated and compared using Friedman's test. Results: Eight features were selected from 851 total extracted features. Accuracy, sensitivity, specificity, precision, recall, F1 measure, and AUC were 84%, 80%, 88%, 86%, 80%, 0.83, and 0.84 for XGBoost; 72%, 80%, 65%, 67%, 80%, 0.73, and 0.72 for RF; and 66%, 53%, 76%, 67%, 53%, 0.60, and 0.65 for kNN, respectively. The XGBoost model outperformed the kNN model in Friedman's test (p = 0.006). Conclusions: Machine learning algorithms with radiomics features from CTU images show promising results in classifying bladder cancer by RB1 mutation status noninvasively.
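The filter step described here — Pearson correlation to prune redundant radiomics features before the wrapper search — might look like the following sketch. The 0.9 cutoff and toy feature columns are assumptions; the subsequent wrapper would then greedily add surviving features by cross-validated model score.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def filter_redundant(features, threshold=0.9):
    """Filter step: keep a feature only if it is not strongly correlated
    (|r| >= threshold) with any feature already kept."""
    kept = []
    for name, col in features.items():
        if all(abs(pearson(col, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Toy feature columns; "f2" is a scaled copy of "f1" and should be dropped.
feats = {"f1": [1, 2, 3, 4], "f2": [2, 4, 6, 8], "f3": [4, 1, 3, 2]}
print(filter_redundant(feats))  # ['f1', 'f3']
```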

7.
Radiol Artif Intell ; 3(6): e200232, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34870211

ABSTRACT

PURPOSE: To investigate whether a deep learning convolutional neural network (CNN) could enable low-dose fluorine 18 (18F) fluorodeoxyglucose (FDG) PET/MRI for correct treatment response assessment of children and young adults with lymphoma. MATERIALS AND METHODS: In this secondary analysis of prospectively collected data (ClinicalTrials.gov identifier: NCT01542879), 20 patients with lymphoma (mean age, 16.4 years ± 6.4 [standard deviation]) underwent 18F-FDG PET/MRI between July 2015 and August 2019 at baseline and after induction chemotherapy. Full-dose 18F-FDG PET data (3 MBq/kg) were simulated to lower 18F-FDG doses based on the percentage of coincidence events (representing simulated 75%, 50%, 25%, 12.5%, and 6.25% 18F-FDG dose [hereafter referred to as 75%Sim, 50%Sim, 25%Sim, 12.5%Sim, and 6.25%Sim, respectively]). A U.S. Food and Drug Administration-approved CNN was used to augment input simulated low-dose scans to full-dose scans. For each follow-up scan after induction chemotherapy, the standardized uptake value (SUV) response score was calculated as the maximum SUV (SUVmax) of the tumor normalized to the mean liver SUV; tumor response was classified as adequate or inadequate. Sensitivity and specificity in the detection of correct response status were computed using full-dose PET as the reference standard. RESULTS: With decreasing simulated radiotracer doses, tumor SUVmax increased. A dose below 75%Sim of the full dose led to erroneous upstaging of adequate responders to inadequate responders (43% [six of 14 patients] for 75%Sim; 93% [13 of 14 patients] for 50%Sim; and 100% [14 of 14 patients] below 50%Sim; P < .05 for all). CNN-enhanced low-dose PET/MRI scans at 75%Sim and 50%Sim enabled correct response assessments for all patients.
Use of the CNN augmentation for assessing adequate and inadequate responses resulted in identical sensitivities (100%) and specificities (100%) between the assessment of 100% full-dose PET, augmented 75%Sim, and augmented 50%Sim images. CONCLUSION: CNN enhancement of PET/MRI scans may enable 50% 18F-FDG dose reduction with correct treatment response assessment of children and young adults with lymphoma. Keywords: Pediatrics, PET/MRI, Computer Applications Detection/Diagnosis, Lymphoma, Tumor Response, Whole-Body Imaging, Technology Assessment Clinical trial registration no.: NCT01542879 Supplemental material is available for this article. © RSNA, 2021.
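The SUV response score used in this study — tumor SUVmax normalized to mean liver SUV — is straightforward to compute. The decision threshold in this sketch is hypothetical, since the abstract does not state the exact cutoff separating adequate from inadequate response.

```python
def suv_response_score(tumor_suv_max, liver_suv_mean):
    """SUV response score: tumor SUVmax normalized to the mean liver SUV."""
    return tumor_suv_max / liver_suv_mean

def classify_response(score, threshold=1.0):
    """Hypothetical cutoff: uptake at or below liver background counts as
    an adequate response (the trial's exact criterion may differ)."""
    return "adequate" if score <= threshold else "inadequate"

score = suv_response_score(tumor_suv_max=1.8, liver_suv_mean=2.0)
print(score, classify_response(score))  # 0.9 adequate
```

Because simulated dose reduction inflates SUVmax, this ratio drifts upward at low doses — which is exactly the upstaging effect the CNN augmentation corrects.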

8.
Radiol Imaging Cancer ; 3(5): e200160, 2021 09.
Article in English | MEDLINE | ID: mdl-34559005

ABSTRACT

Purpose To compare the inter- and intraobserver agreement and reading times achieved when assigning Lung Imaging Reporting and Data System (Lung-RADS) categories to baseline and follow-up lung cancer screening studies by using a dedicated CT lung screening viewer with integrated nodule detection and volumetric support with those achieved by using a standard picture archiving and communication system (PACS)-like viewer. Materials and Methods Data were obtained from the National Lung Screening Trial (NLST). By using data recorded by NLST radiologists, scans were assigned to Lung-RADS categories. For each Lung-RADS category (1 or 2, 3, 4A, and 4B), 40 CT scans (20 baseline scans and 20 follow-up scans) were randomly selected for 160 participants (median age, 61 years; interquartile range, 58-66 years; 61 women) in total. Seven blinded observers independently read all CT scans twice in a randomized order with a 2-week washout period: once by using the standard PACS-like viewer and once by using the dedicated viewer. Observers were asked to assign a Lung-RADS category to each scan and indicate the risk-dominant nodule. Inter- and intraobserver agreement was analyzed by using Fleiss κ values and Cohen weighted κ values, respectively. Reading times were compared by using a Wilcoxon signed rank test. Results The interobserver agreement was moderate for the standard viewer and substantial for the dedicated viewer, with Fleiss κ values of 0.58 (95% CI: 0.55, 0.60) and 0.66 (95% CI: 0.64, 0.68), respectively. The intraobserver agreement was substantial, with a mean Cohen weighted κ value of 0.67. The median reading time was significantly reduced from 160 seconds with the standard viewer to 86 seconds with the dedicated viewer (P < .001). Conclusion Lung-RADS interobserver agreement increased from moderate to substantial when using the dedicated CT lung screening viewer. 
The median reading time was substantially reduced when scans were read by using the dedicated CT lung screening viewer. Keywords: CT, Thorax, Lung, Computer Applications-Detection/Diagnosis, Observer Performance, Technology Assessment Supplemental material is available for this article. © RSNA, 2021.
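Interobserver agreement in this study is measured with Fleiss κ, which compares observed multi-rater agreement with chance agreement. A minimal implementation; the rating matrix below is a toy example, not the study's data.

```python
def fleiss_kappa(ratings):
    """Fleiss kappa for multi-rater agreement. `ratings` holds, per subject,
    the count of raters choosing each category (same rater count per subject)."""
    n_sub = len(ratings)
    n_rat = sum(ratings[0])
    totals = [0] * len(ratings[0])
    p_bar = 0.0
    for counts in ratings:
        # Observed pairwise agreement on this subject.
        p_bar += (sum(c * c for c in counts) - n_rat) / (n_rat * (n_rat - 1))
        for j, c in enumerate(counts):
            totals[j] += c
    p_bar /= n_sub
    # Chance agreement from the marginal category frequencies.
    p_e = sum((t / (n_sub * n_rat)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Toy data: 3 raters, 2 categories, perfect agreement on every subject.
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # 1.0
```

Intraobserver agreement, by contrast, uses Cohen's weighted κ, since it compares two readings by the same rater on an ordinal scale.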


Subjects
Lung Neoplasms; Early Detection of Cancer; Female; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Mass Screening; Middle Aged; Tomography, X-Ray Computed
9.
Radiol Artif Intell ; 3(4): e200190, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34350409

ABSTRACT

PURPOSE: To assess the generalizability of a deep learning pneumothorax detection model on datasets from multiple external institutions and examine patient and acquisition factors that might influence performance. MATERIALS AND METHODS: In this retrospective study, a deep learning model was trained for pneumothorax detection by merging two large open-source chest radiograph datasets: ChestX-ray14 and CheXpert. It was then tested on six external datasets from multiple independent institutions (labeled A-F) in a retrospective case-control design (data acquired between 2016 and 2019 from institutions A-E; institution F consisted of data from the MIMIC-CXR dataset). Performance on each dataset was evaluated by using area under the receiver operating characteristic curve (AUC) analysis, sensitivity, specificity, and positive and negative predictive values, with two radiologists in consensus being used as the reference standard. Patient and acquisition factors that influenced performance were analyzed. RESULTS: The AUCs for pneumothorax detection for external institutions A-F were 0.91 (95% CI: 0.88, 0.94), 0.97 (95% CI: 0.94, 0.99), 0.91 (95% CI: 0.85, 0.97), 0.98 (95% CI: 0.96, 1.0), 0.97 (95% CI: 0.95, 0.99), and 0.92 (95% CI: 0.90, 0.95), respectively, compared with the internal test AUC of 0.93 (95% CI: 0.92, 0.93). The model had lower performance for small compared with large pneumothoraces (AUC, 0.88 [95% CI: 0.85, 0.91] vs AUC, 0.96 [95% CI: 0.95, 0.97]; P = .005). Model performance was not different when a chest tube was present or absent on the radiographs (AUC, 0.95 [95% CI: 0.92, 0.97] vs AUC, 0.94 [95% CI: 0.92, 0.05]; P > .99). 
CONCLUSION: A deep learning model trained with a large volume of data on the task of pneumothorax detection was able to generalize well to multiple external datasets with patient demographics and technical parameters independent of the training data. Keywords: Thorax, Computer Applications-Detection/Diagnosis See also commentary by Jacobson and Krupinski in this issue. Supplemental material is available for this article. © RSNA, 2021.
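The operating-point metrics reported for each external institution derive from a 2×2 confusion matrix. A sketch with invented counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Operating-point metrics for a binary detector from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Invented counts: 100 pneumothorax-positive and 100 negative radiographs.
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
print(m["sensitivity"], m["specificity"], m["ppv"], m["npv"])  # 0.9 each
```

In a case-control design like this one, PPV and NPV reflect the engineered case mix, not the true clinical prevalence.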

10.
Radiol Imaging Cancer ; 3(4): e210010, 2021 07.
Article in English | MEDLINE | ID: mdl-34241550

ABSTRACT

Purpose To identify distinguishing CT radiomic features of pancreatic ductal adenocarcinoma (PDAC) and to investigate whether radiomic analysis with machine learning can distinguish between patients who have PDAC and those who do not. Materials and Methods This retrospective study included contrast material-enhanced CT images in 436 patients with PDAC and 479 healthy controls from 2012 to 2018 from Taiwan that were randomly divided for training and testing. Another 100 patients with PDAC (enriched for small PDACs) and 100 controls from Taiwan were identified for testing (from 2004 to 2011). An additional 182 patients with PDAC and 82 healthy controls from the United States were randomly divided for training and testing. Images were processed into patches. An XGBoost (https://xgboost.ai/) model was trained to classify patches as cancerous or noncancerous. Patients were classified as either having or not having PDAC on the basis of the proportion of patches classified as cancerous. For both patch-based and patient-based classification, the models were characterized as either a local model (trained on Taiwanese data only) or a generalized model (trained on both Taiwanese and U.S. data). Sensitivity, specificity, and accuracy were calculated for patch- and patient-based analysis for the models. Results The median tumor size was 2.8 cm (interquartile range, 2.0-4.0 cm) in the 536 Taiwanese patients with PDAC (mean age, 65 years ± 12 [standard deviation]; 289 men). Compared with normal pancreas, PDACs had lower values for radiomic features reflecting intensity and higher values for radiomic features reflecting heterogeneity. The performance metrics for the developed generalized model when tested on the Taiwanese and U.S. test data sets, respectively, were as follows: sensitivity, 94.7% (177 of 187) and 80.6% (29 of 36); specificity, 95.4% (187 of 196) and 100% (16 of 16); accuracy, 95.0% (364 of 383) and 86.5% (45 of 52); and area under the curve, 0.98 and 0.91. 
Conclusion Radiomic analysis with machine learning enabled accurate detection of PDAC at CT and could identify patients with PDAC. Keywords: CT, Computer Aided Diagnosis (CAD), Pancreas, Computer Applications-Detection/Diagnosis Supplemental material is available for this article. © RSNA, 2021.
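The patient-level rule described here — classify patches, then call the patient on the proportion of cancerous patches — can be sketched as follows. Both cutoffs are hypothetical, as the abstract does not give the exact proportion threshold.

```python
def classify_patient(patch_probs, patch_thr=0.5, patient_thr=0.3):
    """Patient-level call from patch-level cancer probabilities: label each
    patch, then flag the patient when the proportion of cancerous patches
    reaches `patient_thr`. Both cutoffs here are hypothetical."""
    cancerous = sum(p >= patch_thr for p in patch_probs)
    frac = cancerous / len(patch_probs)
    return frac, frac >= patient_thr

# Invented XGBoost patch probabilities for one patient's pancreas patches.
frac, is_pdac = classify_patient([0.1, 0.8, 0.9, 0.2, 0.7, 0.05, 0.6, 0.3])
print(frac, is_pdac)  # 0.5 True
```

Aggregating over patches makes the patient-level call robust to a few misclassified patches, which matters for the small tumors in the enriched test set.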


Subjects
Carcinoma, Pancreatic Ductal; Pancreatic Neoplasms; Aged; Humans; Male; Pancreas/diagnostic imaging; Pancreatic Neoplasms/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed
11.
Med Phys ; 47(1): 64-74, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31449684

ABSTRACT

PURPOSE: Currently, radiologists use tumor-to-normal tissue contrast across multiphase computed tomography (MPCT) for lesion detection. Here, we developed a novel voxel-based enhancement pattern mapping (EPM) technique and investigated its ability to improve contrast-to-noise ratios (CNRs) in a phantom study and in patients with hepatobiliary cancers. METHODS: The EPM algorithm is based on the root mean square deviation between each voxel and a normal liver enhancement model using patient-specific (EPM-PA) or population data (EPM-PO). We created a phantom consisting of liver tissue and tumors with distinct enhancement signals under varying tumor sizes, motion, and noise. We also retrospectively evaluated 89 patients with hepatobiliary cancers who underwent active breath-hold MPCT between 2016 and 2017. MPCT phases were registered using a three-dimensional deformable image registration algorithm. For the patient study, CNRs of tumor to adjacent tissue across MPCT phases, EPM-PA and EPM-PO were measured and compared. RESULTS: EPM resulted in statistically significant CNR improvement (P < 0.05) for tumor sizes down to 3 mm, but the CNR improvement was significantly affected by tumor motion and image noise. Eighty-two of 89 hepatobiliary cases showed CNR improvement with EPM (PA or PO) over grayscale MPCT, by an average factor of 1.4, 1.6, and 1.5 for cholangiocarcinoma, hepatocellular carcinoma, and colorectal liver metastasis, respectively (P < 0.05 for all). CONCLUSIONS: EPM increases CNR compared with grayscale MPCT for primary and secondary hepatobiliary cancers. This new visualization method derived from MPCT datasets may have applications for early cancer detection, radiomic characterization, tumor treatment response, and segmentation. IMPLICATIONS FOR PATIENT CARE: We developed a voxel-wise enhancement pattern mapping (EPM) technique to improve the contrast-to-noise ratio (CNR) of multiphase CT. 
The improvement in CNR was observed in datasets of patients with cholangiocarcinoma, hepatocellular carcinoma, and colorectal liver metastasis. EPM has the potential to be clinically useful for cancers with regard to early detection, radiomic characterization, response, and segmentation.
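The EPM value for a single voxel — a root-mean-square deviation between that voxel's multiphase attenuation curve and the normal-liver enhancement model — might be computed as below. The HU curves are invented, and a real implementation would apply this voxelwise over deformably registered MPCT volumes.

```python
def epm_value(voxel_hu, model_hu):
    """EPM value for one voxel: root mean square deviation between the
    voxel's multiphase attenuation curve and the expected normal-liver
    enhancement curve (per-phase HU values)."""
    n = len(voxel_hu)
    return (sum((v - m) ** 2 for v, m in zip(voxel_hu, model_hu)) / n) ** 0.5

# Invented 4-phase HU curves (precontrast, arterial, portal venous, delayed).
normal_model = [55.0, 90.0, 120.0, 100.0]
tumor_voxel = [50.0, 140.0, 95.0, 80.0]    # deviates -> bright on the EPM map
normal_voxel = [56.0, 88.0, 121.0, 99.0]   # tracks the model -> dark
print(epm_value(tumor_voxel, normal_model) > epm_value(normal_voxel, normal_model))  # True
```

Mapping this deviation per voxel concentrates tumor-to-liver contrast into one image, which is the source of the CNR gain over any single grayscale phase.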


Subjects
Digestive System Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Algorithms; Humans; Male; Middle Aged; Phantoms, Imaging; Retrospective Studies
12.
Acad Radiol ; 27(3): 311-320, 2020 03.
Article in English | MEDLINE | ID: mdl-31126808

ABSTRACT

RATIONALE AND OBJECTIVES: To assess whether a fully automated deep learning system can accurately detect and analyze truncal musculature at multiple lumbar vertebral levels and muscle groupings on abdominal CT for potential use in the detection of central sarcopenia. MATERIALS AND METHODS: A computer system for automated segmentation of truncal muscle groups was designed and created. Abdominal CT scans of 102 sequential patients (mean age, 68 years; range, 59-81 years; 53 women, 49 men) obtained between January 2015 and February 2015 were assembled as a dataset. Truncal musculature was manually segmented on axial CT images at multiple lumbar vertebral levels as reference standard data, divided into training and testing subsets, and analyzed by the system. Dice similarity coefficients were calculated to evaluate system performance. IRB approval was obtained, with waiver of informed consent, in this retrospective study. RESULTS: Dice coefficients for detecting the total abdominal muscle cross-section at the levels of the third and fourth lumbar vertebrae were, respectively, 0.953 ± 0.015 and 0.953 ± 0.011 for the training set, and 0.938 ± 0.028 and 0.940 ± 0.026 for the testing set. Dice coefficients for detecting the total psoas muscle cross-section at the levels of the third and fourth lumbar vertebrae were, respectively, 0.942 ± 0.040 and 0.951 ± 0.037 for the training set, and 0.939 ± 0.028 and 0.946 ± 0.032 for the testing set. CONCLUSION: This system fully automatically and accurately segments multiple muscle groups at all lumbar spine levels on abdominal CT for the detection of sarcopenia.
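The Dice similarity coefficient used to score these segmentations compares the automated mask against the manual reference. A minimal sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks
    (flattened 0/1 sequences): 2|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0  # two empty masks agree

# Toy automated vs manual muscle masks (invented).
auto = [0, 1, 1, 1, 0, 0, 1, 0]
manual = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice(auto, manual))  # 0.75
```

Values near 0.95, as reported above, indicate near-complete overlap between the automated and manual muscle cross-sections.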


Subjects
Sarcopenia; Aged; Aged, 80 and over; Algorithms; Female; Humans; Machine Learning; Male; Middle Aged; Radiographic Image Interpretation, Computer-Assisted; Retrospective Studies; Sarcopenia/diagnostic imaging; Tomography, X-Ray Computed
13.
Int J Neurosci ; 126(7): 617-22, 2016.
Article in English | MEDLINE | ID: mdl-26005046

ABSTRACT

AIM OF THE STUDY: Recurrence is more common in bilateral chronic subdural hematomas (CSDHs) than in unilateral CSDHs. Our aim was to quantitatively compare the late phase of brain shifting postevacuation in unilateral and bilateral CSDHs. MATERIALS AND METHODS: We reviewed computed tomography (CT) scans and medical records of consecutive patients with CSDHs who underwent burr hole drainage. CT images (preoperative and postoperative days [PODs] 30 and 60) were imported into Adobe Photoshop, and temporal and spatial changes in brain shifting between PODs 30 and 60, as well as the subdural space on POD 60, were analyzed. RESULTS: The bilateral group exhibited a significantly greater late phase of brain shifting than the unilateral group between PODs 30 and 60 (P < 0.001). The median late phase of brain shifting in the bilateral group was 8.9 mm (interquartile range [IQR]: 8.3-9.0 mm) between PODs 30 and 60, while that of the unilateral group was 1.8 mm (IQR: 1.3-2.5 mm). CONCLUSIONS: The postevacuation late phase of brain shifting is significantly greater in bilateral CSDHs than in unilateral CSDHs, which might facilitate bridging vein tearing and consequent rebleeding. This may be one factor accounting for the higher recurrence rate of bilateral CSDHs.


Subjects
Hematoma, Subdural, Chronic/diagnostic imaging; Hematoma, Subdural, Chronic/surgery; Outcome Assessment, Health Care; Tomography, X-Ray Computed/methods; Adult; Aged; Aged, 80 and over; Craniotomy/methods; Drainage/methods; Female; Follow-Up Studies; Humans; Male; Middle Aged; Neurosurgical Procedures; Recurrence