Results 1 - 15 of 15
1.
Radiother Oncol ; 197: 110368, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38834153

ABSTRACT

BACKGROUND AND PURPOSE: To optimize our previously proposed TransRP, a model integrating a CNN (convolutional neural network) and a ViT (Vision Transformer) designed for recurrence-free survival prediction in oropharyngeal cancer, and to extend its application to the prediction of multiple clinical outcomes, including locoregional control (LRC), distant metastasis-free survival (DMFS), and overall survival (OS). MATERIALS AND METHODS: Data were collected from 400 patients (300 for training and 100 for testing) diagnosed with oropharyngeal squamous cell carcinoma (OPSCC) who underwent (chemo)radiotherapy at the University Medical Center Groningen. Each patient's data comprised pre-treatment PET/CT scans, clinical parameters, and the clinical outcome endpoints LRC, DMFS, and OS. The prediction performance of TransRP was compared with that of CNNs given image data only. Additionally, three distinct methods (m1-3) of incorporating clinical predictors into TransRP training were compared with one method (m4) that uses the TransRP prediction as one parameter in a clinical Cox model. RESULTS: TransRP achieved higher test C-index values than the CNNs: 0.61, 0.84, and 0.70 for LRC, DMFS, and OS, respectively. Furthermore, incorporating the TransRP prediction into a clinical Cox model (m4) yielded a higher C-index of 0.77 for OS. Compared with a routine clinical risk stratification model for OS, our model, which used clinical variables, radiomics, and the TransRP prediction as predictors, achieved larger separation of the survival curves between low-, intermediate-, and high-risk groups. CONCLUSION: TransRP outperformed the CNN models for all endpoints. Combining clinical data and the TransRP prediction in a Cox model achieved better OS prediction.
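
To make the m4-style combination concrete, here is a minimal sketch using the lifelines library: a deep learning risk score enters a clinical Cox model as one extra covariate, and the model is scored with the C-index on held-out patients. The file and column names (opscc_cohort.csv, age, hpv_status, transrp_score, os_time, os_event) are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of the m4-style combination: the DL model's risk score
# is one extra covariate in a clinical Cox model (lifelines).
# All file and column names below are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.read_csv("opscc_cohort.csv")  # hypothetical cohort file
train, test = df.iloc[:300], df.iloc[300:]

cph = CoxPHFitter()
cph.fit(train[["age", "hpv_status", "transrp_score", "os_time", "os_event"]],
        duration_col="os_time", event_col="os_event")

# C-index on the held-out test set; higher risk implies shorter survival,
# hence the negated partial hazard.
c_index = concordance_index(test["os_time"],
                            -cph.predict_partial_hazard(test),
                            test["os_event"])
print(f"Test C-index: {c_index:.2f}")
```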

2.
Comput Methods Programs Biomed ; 244: 107939, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38008678

ABSTRACT

BACKGROUND AND OBJECTIVE: Recently, deep learning (DL) algorithms have shown promise in predicting outcomes such as distant metastasis-free survival (DMFS) and overall survival (OS) from pre-treatment imaging in head and neck cancer. The segmentation of the Gross Tumor Volume of the primary tumor (GTVp) is often used as an additional input channel to DL algorithms to improve model performance. However, the binary segmentation mask of the GTVp directs the focus of the network to the defined tumor region only, and uniformly. DL models trained for tumor segmentation have also been used to generate predicted tumor probability maps (TPM), in which each pixel value corresponds to the degree of certainty that the pixel belongs to the tumor. The aim of this study was to explore the effect of using the TPM as an extra input channel of CT- and PET-based DL prediction models for oropharyngeal cancer (OPC) patients, in terms of local control (LC), regional control (RC), DMFS, and OS. METHODS: We included 399 OPC patients from our institute who were treated with definitive (chemo)radiation. For each patient, the CT and PET scans and GTVp contours used for radiotherapy treatment planning were collected. We first trained a previously developed 2.5D DL framework for tumor probability prediction by 5-fold cross-validation using 131 patients. Then, a 3D ResNet18 was trained for outcome prediction using the 3D TPM as one of the possible inputs. The endpoints were LC, RC, DMFS, and OS. We performed 3-fold cross-validation on 168 patients for each endpoint using different combinations of image modalities as input. The final prediction in the test set (n = 100) was obtained by averaging the predictions of the 3-fold models. The C-index was used to evaluate the discriminative performance of the models. RESULTS: The models trained with the TPM in place of the GTVp contours achieved the highest C-indexes for LC (0.74) and RC (0.60) prediction. For OS, using the TPM or the GTVp as the additional image modality resulted in comparable C-indexes (0.72 and 0.74). CONCLUSIONS: Adding predicted TPMs instead of GTVp contours as an additional input channel for DL-based outcome prediction models improved model performance for LC and RC.


Subject(s)
Deep Learning; Head and Neck Neoplasms; Oropharyngeal Neoplasms; Humans; Positron Emission Tomography Computed Tomography/methods; Oropharyngeal Neoplasms/diagnostic imaging; Prognosis
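
As an illustration of the input construction this record describes, the sketch below stacks CT, PET, and a soft tumor probability map as the three channels of a stock 3D ResNet18 (torchvision's r3d_18). The volume size and the single-logit head are placeholder choices, not the paper's configuration.

```python
# Sketch of the input construction: the soft tumor probability map (TPM)
# replaces the binary GTVp mask as a third channel alongside CT and PET.
# Uses torchvision's stock 3D ResNet18; sizes are illustrative only.
import torch
from torchvision.models.video import r3d_18

ct  = torch.randn(1, 96, 96, 96)   # normalized CT volume (toy data)
pet = torch.randn(1, 96, 96, 96)   # normalized PET volume (toy data)
tpm = torch.rand(1, 96, 96, 96)    # per-voxel tumor probability in [0, 1]

x = torch.cat([ct, pet, tpm], dim=0).unsqueeze(0)  # (1, 3, D, H, W)

model = r3d_18(num_classes=1)      # single risk score per patient
risk = model(x)
print(risk.shape)                  # torch.Size([1, 1])
```
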
3.
Phys Imaging Radiat Oncol ; 28: 100502, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38026084

ABSTRACT

Background and purpose: To compare the prediction performance of computed tomography (CT) image features extracted by radiomics, self-supervised learning, and end-to-end deep learning for local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), tumor-specific survival (TSS), overall survival (OS), and disease-free survival (DFS) of oropharyngeal squamous cell carcinoma (OPSCC) patients after (chemo)radiotherapy. Methods and materials: The OPC-Radiomics dataset was used for model development and independent internal testing, and the UMCG-OPC set for external testing. For the radiomics and self-supervised learning-based (autoencoder) methods, image features were extracted from the Gross Tumor Volume contours of the primary tumor (GTVt) regions in CT scans. Clinical and combined (radiomics, autoencoder, or end-to-end) models were built using multivariable Cox proportional-hazards analysis with clinical features only and with both clinical and image features, for the prediction of LC, RC, LRC, DMFS, TSS, OS, and DFS. Results: In the internal test set, the combined autoencoder models performed better than the clinical models and the combined radiomics models for LC, RC, LRC, DMFS, TSS, and DFS prediction (largest improvements in C-index: 0.91 vs. 0.76 for RC and 0.74 vs. 0.60 for DMFS). In the external test set, the combined radiomics models performed better than the clinical and combined autoencoder models for all endpoints (largest improvement for LC: 0.82 vs. 0.71). Furthermore, the combined models performed better in risk stratification than the clinical models and showed good calibration for most endpoints. Conclusions: Image features extracted using self-supervised learning showed the best internal prediction performance, while radiomics features generalized better externally.
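
To make the self-supervised route concrete, here is a toy 3D convolutional autoencoder in PyTorch: it is trained to reconstruct GTVt patches, and its bottleneck activations serve as the image features that would later enter a Cox model. The architecture, patch size, and feature count are illustrative assumptions, not the paper's design.

```python
# Toy sketch of self-supervised feature extraction: a small 3D
# convolutional autoencoder reconstructs GTVt patches; its bottleneck
# activations are the image features. Sizes are illustrative only.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, n_features),   # bottleneck features
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 32 * 8 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = PatchAutoencoder()
patch = torch.randn(4, 1, 32, 32, 32)        # CT patches around the GTVt
recon, features = ae(patch)
loss = nn.functional.mse_loss(recon, patch)  # self-supervised objective
print(features.shape)                        # torch.Size([4, 64])
```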

4.
Med Phys ; 50(10): 6190-6200, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37219816

ABSTRACT

BACKGROUND: Personalized treatment is increasingly required for oropharyngeal squamous cell carcinoma (OPSCC) patients due to emerging new cancer subtypes and treatment options. Outcome prediction models can help identify low- or high-risk patients who may be suitable for de-escalated or intensified treatment approaches. PURPOSE: To develop a deep learning (DL)-based model for predicting multiple, associated efficacy endpoints in OPSCC patients based on computed tomography (CT). METHODS: Two patient cohorts were used in this study: a development cohort of 524 OPSCC patients (70% for training and 30% for independent testing) and an external test cohort of 396 patients. Pre-treatment CT scans with the gross primary tumor volume contours (GTVt) and clinical parameters were available to predict endpoints, including 2-year local control (LC), regional control (RC), locoregional control (LRC), distant metastasis-free survival (DMFS), disease-specific survival (DSS), overall survival (OS), and disease-free survival (DFS). We propose DL outcome prediction models with a multi-label learning (MLL) strategy that integrates the associations of the different endpoints based on clinical factors and CT scans. RESULTS: The MLL models outperformed the models developed for a single endpoint on all endpoints, with high AUCs (≥ 0.80) for 2-year RC, DMFS, DSS, OS, and DFS in the internal independent test set, and on all endpoints except 2-year LRC in the external test set. Furthermore, with the developed models, patients could be stratified into high- and low-risk groups that differed significantly for all endpoints in the internal test set and for all endpoints except DMFS in the external test set. CONCLUSION: MLL models demonstrated better discriminative ability for all 2-year efficacy endpoints than single-outcome models in the internal test set and for all endpoints except LRC in the external set.


Subject(s)
Carcinoma, Squamous Cell; Head and Neck Neoplasms; Oropharyngeal Neoplasms; Humans; Squamous Cell Carcinoma of Head and Neck; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/therapy; Tomography, X-Ray Computed; Disease-Free Survival; Oropharyngeal Neoplasms/diagnostic imaging; Oropharyngeal Neoplasms/therapy; Retrospective Studies
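
The core of the multi-label strategy in this record can be sketched in a few lines: a shared image backbone emits one logit per endpoint, and a single binary cross-entropy loss averaged over all endpoints lets their correlations be learned jointly. The endpoint names match the abstract; the tiny backbone and tensor sizes are placeholders.

```python
# Hedged sketch of the multi-label learning (MLL) idea: one shared
# backbone predicts all 2-year endpoints jointly. Backbone and sizes
# are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

ENDPOINTS = ["LC", "RC", "LRC", "DMFS", "DSS", "OS", "DFS"]

backbone = nn.Sequential(
    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)
head = nn.Linear(16, len(ENDPOINTS))  # one logit per endpoint

ct = torch.randn(8, 1, 64, 64, 64)                          # toy CT batch
labels = torch.randint(0, 2, (8, len(ENDPOINTS))).float()   # toy labels

logits = head(backbone(ct))
# One BCE term per endpoint, averaged: the multi-label objective.
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```
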
5.
Radiother Oncol ; 180: 109483, 2023 03.
Article in English | MEDLINE | ID: mdl-36690302

ABSTRACT

BACKGROUND AND PURPOSE: The aim of this study was to develop and evaluate a prediction model for 2-year overall survival (OS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients who received definitive radiotherapy, considering clinical variables and image features from pre-treatment CT scans. MATERIALS AND METHODS: NSCLC patients who received stereotactic radiotherapy were prospectively collected at the UMCG and split into a training set of 189 patients and a hold-out test set of 81 patients. External validation was performed on 228 NSCLC patients who were treated with radiation or concurrent chemoradiation at the Maastro Clinic (Lung1 dataset). A hybrid model that integrated both image and clinical features was implemented using deep learning. Image features were learned from cubic patches containing lung tumours extracted from pre-treatment CT scans. Relevant clinical variables were selected by univariable and multivariable analyses. RESULTS: Multivariable analysis showed that age and clinical stage were significant prognostic clinical factors for 2-year OS. Using these two clinical variables in combination with image features from pre-treatment CT scans, the hybrid model achieved a median AUC of 0.76 [95% CI: 0.65-0.86] and 0.64 [95% CI: 0.58-0.70] on the complete UMCG and Maastro test sets, respectively. The Kaplan-Meier survival curves showed significant separation between low and high mortality risk groups on these two test sets (log-rank test: p < 0.001 and p = 0.012, respectively). CONCLUSION: We demonstrated that a hybrid model can achieve reasonable performance by utilizing both clinical and image features for 2-year OS prediction. Such a model has the potential to identify patients with high mortality risk and guide clinical decision making.


Subject(s)
Carcinoma, Non-Small-Cell Lung; Deep Learning; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/therapy; Carcinoma, Non-Small-Cell Lung/drug therapy; Lung Neoplasms/pathology; Neoplasm Staging; Tomography, X-Ray Computed/methods; Retrospective Studies
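
One plausible reading of the hybrid fusion described in this record: a CNN embedding of the tumour patch is concatenated with the two selected clinical variables (age, clinical stage) before the final prediction layer. The layer sizes below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of hybrid image + clinical fusion for 2-year OS.
# The tiny CNN and feature sizes are stand-ins, not the paper's model.
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    def __init__(self, n_clinical=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16 + n_clinical, 1)  # 2-year OS logit

    def forward(self, patch, clinical):
        feats = self.cnn(patch)
        return self.classifier(torch.cat([feats, clinical], dim=1))

model = HybridModel()
patch = torch.randn(4, 1, 48, 48, 48)          # tumour patch from CT
clinical = torch.randn(4, 2)                   # standardized age, stage
prob = torch.sigmoid(model(patch, clinical))   # P(2-year OS)
```
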
6.
J Pers Med ; 11(7)2021 Jul 01.
Article in English | MEDLINE | ID: mdl-34357096

ABSTRACT

Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize the mandible volumes and to evaluate particular mandible properties quantitatively. However, mandible segmentation remains challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth (fillings) or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the fully and semi-automatic mandible segmentation methods published in the scientific literature. This review provides clinicians and researchers in this field with an overview of these scientific advancements, to help develop novel automatic methods for clinical applications.

7.
J Pers Med ; 11(6)2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34208429

ABSTRACT

Because of its low radiation dose and short scanning duration, cone-beam computed tomography (CBCT) is attractive for maxillofacial surgery and orthodontic treatment planning, and accurate segmentation of the mandible from CBCT scans is an important step toward building a personalized 3D digital mandible model. However, CBCT images exhibit lower contrast and higher levels of noise and artifacts than conventional computed tomography (CT), owing to the extremely low radiation dose, which makes automatic mandible segmentation from CBCT data challenging. In this work, we propose a novel coarse-to-fine segmentation framework based on a 3D convolutional neural network and a recurrent SegUnet for mandible segmentation in CBCT scans. Specifically, the mandible segmentation is decomposed into two stages: localization of the mandible-like region by rough segmentation, followed by accurate segmentation of the mandible details. The method was evaluated on a dental CBCT dataset. In addition, we evaluated the proposed method and compared it with state-of-the-art methods on two CT datasets. The experiments indicate that, across these three datasets and imaging techniques, the proposed algorithm provides more accurate and robust segmentation results than the state-of-the-art models.
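
The two-stage idea can be sketched independently of the specific networks: a low-resolution pass localizes the mandible-like region, the volume is cropped around it, and a second pass segments the details. The stand-in "networks" below are plain intensity thresholds; the real framework uses a 3D CNN and a recurrent SegUnet.

```python
# Illustrative sketch of the coarse-to-fine strategy; the dummy "nets"
# are thresholds, standing in for the paper's 3D CNN and recurrent
# SegUnet.
import numpy as np

def coarse_to_fine(volume, coarse_net, fine_net, margin=8):
    # Stage 1: rough localization on a 4x downsampled volume.
    coarse = coarse_net(volume[::4, ::4, ::4]) > 0.5
    idx = np.argwhere(coarse)
    lo = np.maximum(idx.min(axis=0) * 4 - margin, 0)
    hi = np.minimum((idx.max(axis=0) + 1) * 4 + margin, volume.shape)
    # Stage 2: detailed segmentation inside the cropped region only.
    roi = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = np.zeros(volume.shape, dtype=bool)
    fine[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_net(roi) > 0.5
    return fine

dummy_net = lambda v: (v > 0.7).astype(float)  # toy stand-in network
cbct = np.random.rand(128, 128, 128)           # toy CBCT volume
mask = coarse_to_fine(cbct, dummy_net, dummy_net)
```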

8.
Am J Pathol ; 191(9): 1520-1525, 2021 09.
Article in English | MEDLINE | ID: mdl-34197776

ABSTRACT

The u-serrated immunodeposition pattern in direct immunofluorescence (DIF) microscopy is a recognizable feature that confirms the diagnosis of epidermolysis bullosa acquisita (EBA). Owing to unfamiliarity with serrated patterns, serration pattern recognition is still of limited use in routine DIF microscopy. The objective of this study was to investigate the feasibility of using convolutional neural networks (CNNs) to recognize u-serrated patterns and thereby assist in the diagnosis of EBA. The nine most commonly used CNNs were trained and validated on 220,800 manually delineated DIF image patches from 106 images of 46 different patients. The data set was split into 10 subsets: in each round, nine subsets from 42 patients were used to train the CNNs, and the remaining subset, from the four held-out patients, was used to validate diagnostic accuracy. This process was repeated 10 times with a different subset used for validation. The best-performing CNN achieved a specificity of 89.3% and a corresponding sensitivity of 89.3% in the classification of u-serrated DIF image patches, an expert level of diagnostic accuracy. The experiments show that CNN approaches can recognize u-serrated patterns with high accuracy. The proposed approach can assist clinicians and pathologists in recognizing u-serrated patterns in DIF images and facilitate the diagnosis of EBA.


Subject(s)
Epidermolysis Bullosa Acquisita/diagnosis; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Epidermolysis Bullosa Acquisita/pathology; Fluorescent Antibody Technique, Direct; Humans; Microscopy, Fluorescence/methods; Sensitivity and Specificity
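
The patient-wise split described in this record is exactly what scikit-learn's GroupKFold provides: patches from one patient never appear in both the training and validation folds, avoiding leakage. The arrays below are synthetic stand-ins for patch features and labels.

```python
# Sketch of patient-wise cross-validation: GroupKFold keeps all patches
# of a patient in a single fold. Data below are toy stand-ins.
import numpy as np
from sklearn.model_selection import GroupKFold

n_patches = 1000
X = np.random.rand(n_patches, 64)            # stand-in patch features
y = np.random.randint(0, 2, n_patches)       # u-serrated vs. other
patient_id = np.random.randint(0, 46, n_patches)

for train_idx, val_idx in GroupKFold(n_splits=10).split(X, y, groups=patient_id):
    # No patient contributes patches to both sides of the split.
    assert set(patient_id[train_idx]).isdisjoint(patient_id[val_idx])
    # ... train a CNN on X[train_idx], validate on X[val_idx] ...
```
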
9.
J Pers Med ; 11(6)2021 May 31.
Article in English | MEDLINE | ID: mdl-34072714

ABSTRACT

PURPOSE: Classic encoder-decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment the detailed anatomical structures of the mandible in computed tomography (CT), for instance the condyles and coronoid processes, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. METHODS: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes so that their connectivity is retained. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. The approach can perform 3D mandible segmentation on sequences of any length and does not incur a large computational cost. RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. Accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. RESULTS: RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared with the state-of-the-art approaches on the PDDCA dataset. RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. CONCLUSIONS: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in both quantitative and qualitative evaluation. RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
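
Of the three reported metrics, the Dice similarity coefficient is simple enough to show directly; ASD and 95HD require surface extraction and are typically computed with a dedicated library such as SimpleITK, so they are only named here. The masks below are random toy data.

```python
# Dice similarity coefficient (DSC) between two binary masks, as used
# to evaluate the segmentations above. Masks here are toy data.
import numpy as np

def dice(pred, ref):
    """DSC = 2 |A ∩ B| / (|A| + |B|) for boolean masks A, B."""
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

pred = np.random.rand(64, 64, 64) > 0.5   # automated segmentation
ref = np.random.rand(64, 64, 64) > 0.5    # reference standard
print(f"DSC: {dice(pred, ref):.4f}")
```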

10.
J Pers Med ; 11(5)2021 May 01.
Article in English | MEDLINE | ID: mdl-34062762

ABSTRACT

Accurate mandible segmentation is important in maxillofacial surgery to guide clinical diagnosis and treatment and to develop appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images containing metal parts, such as those used in oral and maxillofacial surgery (OMFS), often suffer from metal artifacts, including weak and blurred boundaries, caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates global anatomical knowledge of the mandible. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, and recurrent connections maintain the continuity of the mandible structure. The effectiveness of the proposed network is demonstrated on a dental CBCT dataset of 59 orthodontic patients. The experiments show that the proposed SASeg readily improves prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, SASeg achieves better segmentation performance.

11.
Eur J Epidemiol ; 35(1): 75-86, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31016436

ABSTRACT

Lung cancer, chronic obstructive pulmonary disease (COPD), and coronary artery disease (CAD) are expected to cause the most deaths by 2050. State-of-the-art computed tomography (CT) allows early detection of lung cancer and, at low radiation dose, simultaneous evaluation of imaging biomarkers for the early stages of COPD, based on pulmonary density and bronchial wall thickness, and of CAD, based on the coronary artery calcium score (CACS). Determining cut-off values for positive tests, indicating elevated risk or presence of disease, is one of the major tasks before implementation of CT screening in a general population can be considered. The ImaLife (Imaging in Lifelines) study, embedded in the Lifelines study, is designed to establish reference values of the imaging biomarkers for these "big three" diseases in a well-defined general population aged 45 years and older. In total, 12,000 participants will undergo CACS and chest acquisitions with the latest CT technology. The estimated percentage of individuals with lung nodules needing further workup is around 1-2%. Given the roughly 10% prevalence of COPD and CAD in the general population, the expected number of participants with COPD and with CAD is around 1000 each. So far, nearly 4000 participants have been included. The ImaLife study will allow differentiation between normal aging of the pulmonary and cardiovascular systems and the early stages of the big three diseases based on low-dose CT imaging. This information can ultimately be integrated into personalized precision health strategies for the general population.


Subject(s)
Coronary Artery Disease/diagnostic imaging; Early Detection of Cancer; Lung Neoplasms/diagnostic imaging; Lung/diagnostic imaging; Pulmonary Disease, Chronic Obstructive/diagnostic imaging; Tomography, X-Ray Computed/methods; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Mass Screening; Middle Aged; Population Surveillance; Predictive Value of Tests
12.
IEEE Trans Med Imaging ; 39(3): 797-805, 2020 03.
Article in English | MEDLINE | ID: mdl-31425026

ABSTRACT

Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice, despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation of computed tomography (CT) scans. Inspired by this clinical methodology, we explore the feasibility of applying MIP images to improve automatic lung nodule detection with convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. This augments the two-dimensional (2-D) CT slice images with more representative spatial information, which helps discriminate nodules from vessels through their morphologies. Our proposed method achieves a sensitivity of 92.7% at 1 false positive per scan and a sensitivity of 94.2% at 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images aids the detection of small pulmonary nodules (3-10 mm) and results in fewer false positives. The experimental results show that utilizing MIP images increases sensitivity and lowers the number of false positives, demonstrating the effectiveness of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The results also suggest that CNNs can benefit from incorporating clinical reading procedures into nodule detection.


Subject(s)
Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Databases, Factual; Early Detection of Cancer/methods; Humans; Imaging, Three-Dimensional/methods; Lung Neoplasms/pathology; Sensitivity and Specificity; Solitary Pulmonary Nodule/pathology
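
The MIP inputs themselves are easy to reproduce: for a given axial position, the slab maximum over a chosen thickness collapses neighbouring slices into one image, so tube-like vessels stay elongated while round nodules stay compact. The sketch below assumes 1 mm slice spacing, so thickness in millimetres equals thickness in slices; the volume is toy data.

```python
# Sketch of the MIP slab construction used as CNN input channels.
# Assumes 1 mm slice spacing, so thickness in mm == number of slices.
import numpy as np

def axial_mip(volume, center, thickness):
    """Maximum intensity projection of a slab centered on an axial slice."""
    half = thickness // 2
    lo, hi = max(center - half, 0), min(center + half + 1, volume.shape[0])
    return volume[lo:hi].max(axis=0)

ct = np.random.rand(300, 512, 512)        # toy CT volume, 1 mm slices
inputs = [ct[150]] + [axial_mip(ct, 150, t) for t in (5, 10, 15)]
stacked = np.stack(inputs)                # (4, 512, 512) multi-channel input
```
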
13.
Phys Med Biol ; 64(17): 175020, 2019 09 05.
Article in English | MEDLINE | ID: mdl-31239411

ABSTRACT

Segmentation of the mandibular bone in CT scans is crucial for 3D virtual surgical planning of craniofacial tumor resection and free flap reconstruction of the resection defect, in order to obtain a detailed surface representation of the bones. A major drawback of most existing mandibular segmentation methods is that they require a large amount of expert knowledge for manual or partially automatic segmentation; in practice, owing to the scarcity of experienced doctors and experts, such high-quality expert knowledge is hard to obtain. Furthermore, segmentation of mandibles in CT scans is seriously affected by metal artifacts and by large variations in mandible shape and size among individuals. To address these challenges, we propose an automatic mandible segmentation approach for CT scans that considers the continuity of anatomical structures across different planes. The approach adopts the U-Net architecture and combines the resulting 2D segmentations from three orthogonal planes into a 3D segmentation. We implemented this segmentation approach on two head and neck datasets and evaluated its performance. Experimental results show that our proposed approach for mandible segmentation in CT scans achieves high accuracy.


Subject(s)
Imaging, Three-Dimensional/methods; Mandible/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Humans
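
A common way to combine per-plane 2D predictions into one 3D result, and a plausible reading of the fusion step in this record, is majority voting over the three reassembled volumes; the boolean volumes below stand in for the U-Net outputs of the axial, coronal, and sagittal passes.

```python
# Sketch of orthogonal-plane fusion by majority voting. The three
# volumes stand in for per-plane U-Net predictions; the voting rule is
# an assumption, not necessarily the paper's exact combination method.
import numpy as np

vol_shape = (128, 128, 128)
seg_ax  = np.random.rand(*vol_shape) > 0.5   # axial-plane predictions
seg_cor = np.random.rand(*vol_shape) > 0.5   # coronal-plane predictions
seg_sag = np.random.rand(*vol_shape) > 0.5   # sagittal-plane predictions

votes = seg_ax.astype(int) + seg_cor + seg_sag
fused = votes >= 2   # a voxel is mandible if at least 2 of 3 planes agree
```
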
14.
Eur Radiol ; 29(10): 5441-5451, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30859281

ABSTRACT

OBJECTIVE: To predict the local recurrence of giant cell tumors of bone (GCTB) after curettage from MR features and clinical characteristics using a deep convolutional neural network (CNN). METHODS: MR images were collected from 56 patients with histopathologically confirmed GCTB after curettage who were followed up for 5.8 years (range, 2.0 to 9.5 years). The Inception v3 CNN architecture was fine-tuned on two categories of MR data (recurrent and non-recurrent GCTB), expanded through data augmentation, and was validated using fourfold cross-validation to evaluate its generalization ability. Twenty-eight cases (50%) were used as the training dataset for the CNN and for four radiologists, while the remaining 28 cases (50%) were used as the test dataset. A binary logistic regression model was established to predict recurrent GCTB by combining the CNN prediction with patient features (age and tumor location). Accuracy and sensitivity were used to evaluate prediction performance. RESULTS: Comparing the CNN, the CNN regression model, and the radiologists, the accuracies of the CNN and CNN regression models were 75.5% (95% CI 55.1 to 89.3%) and 78.6% (59.0 to 91.7%), respectively, higher than the 64.3% (44.1 to 81.4%) accuracy of the radiologists. The sensitivities were 85.7% (42.1 to 99.6%) and 87.5% (47.3 to 99.7%), respectively, higher than the 58.3% (27.7 to 84.8%) sensitivity of the radiologists (p < 0.05). CONCLUSION: The CNN has the potential to predict recurrent GCTB after curettage. A binary regression model combined with patient characteristics improves its prediction accuracy. KEY POINTS: • A convolutional neural network (CNN) can be trained successfully on a limited number of pre-surgery MR images by fine-tuning a pre-trained CNN architecture. • The CNN predicted post-surgery recurrence of giant cell tumors of bone with 75.5% accuracy, surpassing the 64.3% accuracy of human observers. • A binary logistic regression model combining the CNN prediction rate, patient age, and tumor location improves the accuracy of predicting post-surgery recurrence to 78.6%.


Subject(s)
Bone Neoplasms/diagnostic imaging; Giant Cell Tumor of Bone/diagnostic imaging; Neoplasm Recurrence, Local/diagnostic imaging; Neural Networks, Computer; Adolescent; Adult; Algorithms; Bone Neoplasms/surgery; Bone and Bones/pathology; Curettage; Female; Follow-Up Studies; Giant Cell Tumor of Bone/surgery; Humans; Image Interpretation, Computer-Assisted/methods; Logistic Models; Magnetic Resonance Imaging/methods; Male; Middle Aged; Neoplasm Staging; Preoperative Period; Prognosis; Young Adult
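
The combined model in this record is a standard construction: the CNN's predicted recurrence probability enters a binary logistic regression together with age and tumor location. The sketch below uses scikit-learn with synthetic stand-in data; the binary encoding of tumor location is a hypothetical choice.

```python
# Sketch of the CNN + clinical logistic regression combination.
# Data are synthetic stand-ins; the location encoding is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

n = 28                                        # training cases
cnn_prob = np.random.rand(n)                  # CNN-predicted recurrence prob.
age = np.random.randint(15, 60, n)            # patient age
location = np.random.randint(0, 2, n)         # e.g., 0 = limb, 1 = axial
X = np.column_stack([cnn_prob, age, location])
y = np.random.randint(0, 2, n)                # recurrence label

clf = LogisticRegression().fit(X, y)
risk = clf.predict_proba(X)[:, 1]             # combined recurrence risk
```
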
15.
Int J Med Inform ; 122: 27-36, 2019 02.
Article in English | MEDLINE | ID: mdl-30623781

ABSTRACT

Direct immunofluorescence (DIF) microscopy of a skin biopsy is used by physicians and pathologists to diagnose autoimmune bullous dermatoses (AIBD). This technique is the reference standard for the diagnosis of AIBD and is used worldwide in medical laboratories. For the diagnosis of subepidermal AIBD (sAIBD), two different types of serrated immunodeposition pattern can be recognized in DIF images, namely the n- and u-serrated patterns. The n-serrated pattern is typically found in bullous pemphigoid, the most common sAIBD. Presence of the u-serrated pattern indicates the sAIBD subtype epidermolysis bullosa acquisita (EBA), which has a different prognosis and requires different treatment. Manual identification of these serrated patterns is learnable but challenging. We propose an automatic technique that localizes u-serrated patterns for computer-assisted diagnosis of EBA. The distinctive feature of u-serrated patterns, as compared to n-serrated patterns, is the presence of ridge endings. We introduce a novel ridge-ending detector that uses inhibition-augmented trainable COSFIRE filters. We then apply a hierarchical clustering approach to detect suspicious u-serrated patterns from the detected ridge endings. For each detected u-serrated pattern we provide a score that indicates the reliability of its detection. To evaluate the proposed approach, we created a data set of 180 DIF images for serration pattern analysis, consisting of seven subsets obtained from various biopsy samples under different conditions. We achieve an average recognition rate of 82.2% for the u-serrated pattern on these 180 DIF images, comparable to the recognition rate achieved by experienced medical doctors and pathologists.


Subject(s)
Autoimmune Diseases/diagnosis; Epidermolysis Bullosa Acquisita/diagnosis; Fluorescent Antibody Technique, Direct/instrumentation; Fluorescent Antibody Technique, Direct/methods; Image Interpretation, Computer-Assisted/methods; Autoimmune Diseases/diagnostic imaging; Diagnosis, Differential; Epidermolysis Bullosa Acquisita/diagnostic imaging; Humans; Reproducibility of Results
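
The grouping step can be illustrated with SciPy's hierarchical clustering: detected ridge-ending coordinates are merged by distance, and sufficiently dense clusters become candidate u-serrated patterns. The COSFIRE detector itself is not reproduced; the points, the 15-pixel merge distance, and the 5-ending threshold are synthetic stand-ins.

```python
# Sketch of hierarchical clustering of ridge-ending locations into
# candidate u-serrated patterns. Points and thresholds are stand-ins.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

points = np.random.rand(50, 2) * 100          # (x, y) ridge-ending positions
Z = linkage(points, method="single")
labels = fcluster(Z, t=15.0, criterion="distance")  # merge endings < 15 px apart

for c in np.unique(labels):
    cluster = points[labels == c]
    if len(cluster) >= 5:                     # enough endings -> candidate
        print(f"candidate u-serrated pattern with {len(cluster)} endings")
```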