1.
Eur J Nucl Med Mol Imaging ; 48(4): 1005-1015, 2021 04.
Article in English | MEDLINE | ID: mdl-33006656

ABSTRACT

PURPOSE: Fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) is included in the International Myeloma Working Group (IMWG) imaging guidelines for the work-up at diagnosis and the follow-up of multiple myeloma (MM), notably because it is a reliable predictor of prognosis. Nevertheless, none of the published studies focusing on the prognostic value of PET-derived features at baseline consider tumor heterogeneity, which could be of high importance in MM. The aim of this study was to evaluate the prognostic value of baseline PET-derived features in transplant-eligible newly diagnosed (TEND) MM patients enrolled in two prospective independent European randomized phase III trials using an innovative statistical random survival forest (RSF) approach. METHODS: Imaging ancillary studies of the IFM/DFCI2009 and EMN02/HO95 trials formed part of the present analysis (IMAJEM and EMN02/HO95, respectively). Among all patients initially enrolled in these studies, those with positive baseline FDG-PET/CT imaging and focal bone lesions (FLs) and/or extramedullary disease (EMD) were included in the present analysis. A total of 17 image features (visual and quantitative, reflecting whole-image characteristics) and 5 clinical/histopathological parameters were collected. The statistical analysis was conducted using two RSF approaches (train/validation + test and an additional nested cross-validation) to predict progression-free survival (PFS). RESULTS: One hundred thirty-nine patients were considered for this study. The final model based on the first RSF (train/validation + test) approach selected 3 features (treatment arm, hemoglobin, and SUVmax of the bone marrow (BM)) among the 22 involved initially, and two risk groups of patients (good and poor prognosis) could be defined with a mean hazard ratio of 4.3 ± 1.5 and a mean log-rank p value of 0.01 ± 0.01. The additional RSF (nested cross-validation) analysis highlighted the robustness of the proposed model across different splits of the dataset: the features selected first with the train/validation + test approach remained the top-ranked ones across the folds of the nested approach. CONCLUSION: We propose a new prognostic model for TEND MM patients at diagnosis based on two RSF approaches. TRIAL REGISTRATION: IMAJEM: NCT01309334 and EMN02/HO95: NCT01134484.
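
As a rough illustration of the kind of analysis described above, the sketch below fits a random survival forest on synthetic stand-ins for the baseline features, splits held-out patients into two risk groups at the median predicted risk, and runs a log-rank test. It assumes the scikit-survival and lifelines packages; the data, feature count and hyperparameters are placeholders, not the study's.

```python
# Minimal sketch of an RSF-based two-risk-group analysis (not the authors' code).
# Assumes scikit-survival and lifelines are installed; data here are synthetic.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 139
X = rng.normal(size=(n, 22))                      # 22 baseline features (synthetic stand-ins)
time = rng.exponential(scale=36, size=n)          # PFS in months (synthetic)
event = rng.integers(0, 2, size=n).astype(bool)   # progression observed or censored
y = Surv.from_arrays(event=event, time=time)

# Train/test split, fit the forest, and score risk on the held-out set
idx = rng.permutation(n)
train, test = idx[:97], idx[97:]
rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=5, random_state=0)
rsf.fit(X[train], y[train])
risk = rsf.predict(X[test])                       # higher value = higher predicted risk

# Split test patients into good/poor prognosis groups at the median risk score
poor = risk > np.median(risk)
res = logrank_test(time[test][poor], time[test][~poor],
                   event_observed_A=event[test][poor],
                   event_observed_B=event[test][~poor])
print(f"log-rank p value: {res.p_value:.3f}")
```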


Subjects
Fluorodeoxyglucose F18; Multiple Myeloma; Humans; Multiple Myeloma/diagnostic imaging; Positron Emission Tomography Computed Tomography; Prognosis; Prospective Studies; Radiopharmaceuticals
2.
Scand J Gastroenterol ; 53(9): 1100-1106, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30270677

ABSTRACT

BACKGROUND AND AIMS: Clinical data suggest that the quality of optical diagnoses of colorectal polyps differs markedly among endoscopists. The aim of this study was to develop a computer program able to differentiate neoplastic from non-neoplastic polyps using unmagnified endoscopic pictures. METHODS: During colonoscopy procedures, polyp photographs were taken using unmagnified high-definition white light and narrow-band imaging modes. All detected polyps (n = 275) were resected and sent to pathology. Histopathological diagnoses served as the ground truth. Machine learning was used to generate a computer-assisted optical biopsy (CAOB) approach. In the test phase, pictures were presented to CAOB in order to obtain optical diagnoses. Altogether 788 pictures were available (602 for training the machine learning algorithm and 186 for CAOB testing). All test pictures were also presented to two experts in optical polyp characterization. The primary endpoint of the study was the accuracy of CAOB diagnoses in the test phase. RESULTS: A total of 100 polyps (52% of them neoplastic) were used in the CAOB test phase. The mean size of test polyps was 4 mm. Accuracy of the CAOB approach was 78.0%. Sensitivity and negative predictive value were 92.3% and 88.2%, respectively. Accuracy obtained by the two expert endoscopists was 84.0% and 77.0%. Regarding accuracy of optical diagnoses, CAOB predictions did not differ significantly from the experts (p = .307 and p = 1.000, respectively). CONCLUSIONS: CAOB showed good accuracy on the basis of unmagnified endoscopic pictures. Performance of CAOB predictions did not differ significantly from the experts' decisions. The concept of computer assistance for colorectal polyp characterization needs to evolve towards a real-time application prior to being used in a broader set-up.
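
For orientation, the reported accuracy, sensitivity and negative predictive value follow directly from a confusion matrix of optical diagnoses against histopathology; the snippet below illustrates the computation on made-up labels and is not the study's code or data.

```python
# Hedged illustration (not the study's code) of deriving accuracy, sensitivity and
# negative predictive value from a confusion matrix of optical diagnoses.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # histopathology: 1 = neoplastic, 0 = non-neoplastic
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1])   # optical diagnoses from CAOB or an endoscopist
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy    = {(tp + tn) / len(y_true):.1%}")
print(f"sensitivity = {tp / (tp + fn):.1%}")
print(f"NPV         = {tn / (tn + fn):.1%}")
```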


Subjects
Colonic Polyps/classification; Colonic Polyps/diagnosis; Colonoscopy/instrumentation; Machine Learning; Aged; Biopsy/methods; Colonoscopy/methods; Colorectal Neoplasms/pathology; Female; Germany; Hospitals, University; Humans; Image Processing, Computer-Assisted; Male; Middle Aged; Narrow Band Imaging; Predictive Value of Tests
3.
Article in English | MEDLINE | ID: mdl-38789884

ABSTRACT

PURPOSE: Segmenting ultrasound images is important for precise area and/or volume calculations, ensuring reliable diagnosis and effective treatment evaluation for diseases. Recently, many segmentation methods have been proposed and have shown impressive performance. However, there is currently no deeper understanding of how networks segment target regions or how they define the boundaries. In this paper, we present a new approach that analyzes ultrasound segmentation networks in terms of learned borders, because border delimitation is challenging in ultrasound. METHODS: We propose a way to split the boundaries of ultrasound images into distinct and completed. By exploiting the Grad-CAM of the split borders, we analyze the areas each network pays attention to. Further, we calculate the ratio of correct predictions for distinct and completed borders. We conducted experiments on an in-house leg ultrasound dataset (LEG-3D-US) as well as on two additional public datasets (thyroid and nerve ultrasound) and one private prostate dataset. RESULTS: Quantitatively, the networks exhibit around 10% improvement in handling completed borders compared to distinct borders. Similar to doctors, the networks struggle to define the borders in less visible areas. Additionally, the Seg-Grad-CAM analysis underscores how completion uses distinct borders and landmarks, while distinct prediction focuses mainly on the shiny structures. We also observe variations depending on the attention mechanism of each architecture. CONCLUSION: In this work, we highlight the importance of studying ultrasound borders differently than in other modalities such as MRI or CT. We split the borders into distinct and completed, similarly to clinicians, and show the quality of the network-learned information for these two types of borders. Additionally, we open-source a 3D leg ultrasound dataset to the community: https://github.com/Al3xand1a/segmentation-border-analysis.
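
To make the border analysis more concrete, below is a minimal, generic Grad-CAM sketch for a segmentation network in which the gradient is restricted to a chosen set of output pixels (e.g., one border type), which is the spirit of the Seg-Grad-CAM analysis used here. The model, target layer and region mask are assumed inputs; this is not the authors' implementation.

```python
# Generic Grad-CAM-style sketch for a segmentation network (illustrative only; the
# region mask restricts the explained score to a chosen set of output pixels).
import torch
import torch.nn.functional as F

def seg_grad_cam(model, image, target_layer, region_mask, target_class=1):
    """image: (1, C, H, W); region_mask: (H, W) bool tensor selecting e.g. border pixels."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image)                               # (1, n_classes, H, W)
    score = logits[0, target_class][region_mask].sum()  # restrict to the region of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads[0].mean(dim=(2, 3), keepdim=True)         # channel-wise weights
    cam = F.relu((w * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()         # normalized attention map
```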

4.
J Nucl Med ; 65(1): 156-162, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-37945379

ABSTRACT

The results of the GA in Newly Diagnosed Diffuse Large B-Cell Lymphoma (GAINED) study demonstrated the success of an 18F-FDG PET-driven approach to allow early identification, for intensification therapy, of diffuse large B-cell lymphoma patients with a high risk of relapse. In addition, several works have reported the prognostic value of baseline PET radiomics features (RFs). This work investigated the added value of such biomarkers on the survival of patients involved in the GAINED protocol. Methods: Conventional PET features and RFs were computed from 18F-FDG PET at baseline and extracted using different volume definitions (patient level, largest lesion, and hottest lesion). Clinical features and the consolidation treatment information were also considered in the model. Two machine-learning pipelines were trained with 80% of patients and tested on the remaining 20%. The training was repeated 100 times to highlight the test-set variability. For the 2-y progression-free survival (PFS) outcome, the pipeline included a data augmentation step and an elastic net logistic regression model. Results for different feature groups were compared using the mean area under the curve (AUC). For the survival outcome, the pipeline included a Cox univariate model to select the features. The model then split patients into high- and low-risk groups using the median of a regression score based on the coefficients of a penalized Cox multivariate approach. The log-rank test P values over the 100 loops were compared with a Wilcoxon signed-rank test. Results: In total, 545 patients were included for the 2-y PFS classification and 561 for survival analysis. Clinical features alone, consolidation features alone, conventional PET features, and RFs extracted at patient level achieved an AUC of, respectively, 0.65 ± 0.07, 0.64 ± 0.06, 0.60 ± 0.07, and 0.62 ± 0.07 (0.62 ± 0.07 for the largest lesion and 0.54 ± 0.07 for the hottest). Combining clinical features with the consolidation features led to the best AUC (0.72 ± 0.06). Adding conventional PET features or RFs did not improve the results. For survival, the log-rank P values of the model involving clinical and consolidation features together were significantly smaller than those of all combined-feature groups (P < 0.007). Conclusion: The results showed that a concatenation of multimodal features coupled with a simple machine-learning model does not seem to improve the results in terms of 2-y PFS classification and PFS prediction for patients treated according to the GAINED protocol.
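
A rough sketch of the 2-y PFS classification pipeline (repeated 80/20 splits, elastic net logistic regression, mean AUC over the splits) is given below. It uses scikit-learn, omits the data augmentation step, and runs on synthetic stand-in features; it is not the study's code.

```python
# Sketch of a repeated-split elastic net classification pipeline (hedged reconstruction,
# not the authors' code): 100 random 80/20 splits, mean and standard deviation of the AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(545, 40))            # clinical + consolidation + PET features (synthetic)
y = rng.integers(0, 2, size=545)          # 2-year PFS event (synthetic)

aucs = []
for seed in range(100):                   # 100 random splits to expose test-set variability
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, stratify=y, random_state=seed)
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000),
    )
    clf.fit(Xtr, ytr)
    aucs.append(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))

print(f"AUC = {np.mean(aucs):.2f} ± {np.std(aucs):.2f}")
```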


Subjects
Fluorodeoxyglucose F18; Lymphoma, Large B-Cell, Diffuse; Humans; Prognosis; Positron Emission Tomography Computed Tomography/methods; Radiomics; Neoplasm Recurrence, Local; Lymphoma, Large B-Cell, Diffuse/diagnostic imaging; Lymphoma, Large B-Cell, Diffuse/therapy; Retrospective Studies
5.
Comput Methods Programs Biomed ; 229: 107318, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36592580

ABSTRACT

BACKGROUND AND OBJECTIVE: For early breast cancer detection, regular screening with mammography imaging is recommended. Routine examinations result in datasets with a predominant amount of negative samples. The limited representativeness of positive cases can be problematic for learning Computer-Aided Diagnosis (CAD) systems. Collecting data from multiple institutions is a potential solution to mitigate this problem. Recently, federated learning has emerged as an effective tool for collaborative learning. In this setting, local models perform computation on their private data to update the global model. The order and the frequency of local updates influence the final global model. In the context of federated adversarial learning to improve multi-site breast cancer classification, we investigate the role of the order in which samples are locally presented to the optimizers. METHODS: We define a novel memory-aware curriculum learning method for the federated setting. We aim to improve the consistency of the local models by penalizing inconsistent predictions, i.e., forgotten samples. Our curriculum controls the order of the training samples, prioritizing those that are forgotten after the deployment of the global model. Our approach is combined with unsupervised domain adaptation to deal with domain shift while preserving data privacy. RESULTS: Two classification metrics, the area under the receiver operating characteristic curve (ROC-AUC) and the area under the precision-recall curve (PR-AUC), are used to evaluate the performance of the proposed method. Our method is evaluated with three clinical datasets from different vendors. An ablation study showed the contribution of each component of our method. The AUC and PR-AUC are improved on average by 5% and 6%, respectively, compared to the conventional federated setting. CONCLUSIONS: We demonstrated the benefits of curriculum learning for the first time in a federated setting. Our results verified the effectiveness of memory-aware curriculum federated learning for multi-site breast cancer classification. Our code is publicly available at: https://github.com/ameliajimenez/curriculum-federated-learning.
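
The sketch below illustrates, for one federated client and on synthetic flags, how a memory-aware curriculum could order training samples: samples the deployed global model now gets wrong but the previous local model got right are treated as forgotten and presented first. It is an interpretation of the idea, not the released implementation.

```python
# Hedged sketch of a memory-aware curriculum for one federated client (illustrative,
# not the authors' code): forgotten samples are presented first in the next local round.
import numpy as np

def curriculum_order(prev_correct, curr_correct, loss):
    """prev_correct/curr_correct: bool arrays of per-sample correctness before and
    after deploying the global model; loss: current per-sample loss."""
    forgotten = prev_correct & ~curr_correct
    # Forgotten samples get top priority; ties are broken by higher current loss.
    priority = forgotten.astype(float) + 1e-3 * (loss / (loss.max() + 1e-8))
    return np.argsort(-priority)

rng = np.random.default_rng(0)
prev = rng.random(8) > 0.3
curr = rng.random(8) > 0.3
order = curriculum_order(prev, curr, rng.random(8))
print(order)   # indices in the order they would be fed to the local optimizer
```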


Subjects
Awareness; Neoplasms; Cognition; Curriculum; Learning; Mammography
6.
JACC Cardiovasc Imaging ; 16(7): 951-961, 2023 07.
Article in English | MEDLINE | ID: mdl-37052569

ABSTRACT

BACKGROUND: Fluorine-18 fluorodeoxyglucose (18F-FDG) positron emission tomography (PET)/computed tomography (CT) improves sensitivity for prosthetic valve endocarditis (PVE) diagnosis, but visual image analysis suffers from relatively weak specificity and significant interobserver variability. OBJECTIVES: The primary objective of this study was to evaluate the performance of a radiomics and machine learning-based analysis of 18F-FDG PET/CT (PET-ML) used as a major imaging criterion within the European Society of Cardiology score (ESC-ML) for PVE diagnosis. The secondary objective was to assess the performance of PET-ML as a standalone examination. METHODS: All 18F-FDG PET/CT scans performed for suspected aortic PVE at a single center from 2015 to 2021 were retrospectively included. The gold standard was expert consensus after at least 3 months' follow-up. The machine learning (ML) method consisted of manually segmenting each prosthetic valve, extracting 31 radiomics features from the segmented region, and training a ridge logistic regressor to predict PVE. Training and hyperparameter tuning were done with a cross-validation approach, followed by an evaluation on an independent test database. RESULTS: A total of 108 patients were included, regardless of myocardial uptake, and were divided into training (n = 68) and test (n = 40) cohorts. In the test cohort, PET-ML findings were positive for 13 of 22 definite PVE cases and 3 of 18 rejected PVE cases (59% sensitivity, 83% specificity), leading to an ESC-ML sensitivity of 72% and a specificity of 83%. CONCLUSIONS: The use of ML for analyzing 18F-FDG PET/CT images in PVE diagnosis was feasible and beneficial, particularly when ML was included in the ESC 2015 criteria. Despite some limitations and the need for future developments, this approach seems promising to optimize the role of 18F-FDG PET/CT in PVE diagnosis.
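
A hedged sketch of such a PET-ML classifier is shown below: 31 synthetic stand-ins for the radiomics features, an L2-penalized (ridge) logistic regression tuned by cross-validation on the training cohort, and sensitivity/specificity computed on an independent test cohort. Feature values, parameter grid and labels are placeholders, not the study's.

```python
# Hedged sketch of a ridge logistic regression on radiomics features (not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(68, 31)), rng.integers(0, 2, 68)   # 31 radiomics features (synthetic)
X_test,  y_test  = rng.normal(size=(40, 31)), rng.integers(0, 2, 40)

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(penalty="l2", solver="liblinear", max_iter=1000))])
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]}, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)                        # cross-validated hyperparameter tuning

tn, fp, fn, tp = confusion_matrix(y_test, search.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.0%}, specificity = {tn / (tn + fp):.0%}")
```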


Subjects
Endocarditis, Bacterial; Endocarditis; Heart Valve Prosthesis; Humans; Fluorodeoxyglucose F18; Positron Emission Tomography Computed Tomography/methods; Retrospective Studies; Predictive Value of Tests; Endocarditis/diagnostic imaging; Endocarditis/etiology; Machine Learning; Radiopharmaceuticals
7.
Med Image Anal ; 75: 102273, 2022 01.
Article in English | MEDLINE | ID: mdl-34731773

ABSTRACT

An adequate classification of proximal femur fractures from X-ray images is crucial for the treatment choice and the patients' clinical outcome. We rely on the commonly used AO system, which describes a hierarchical knowledge tree classifying the images into types and subtypes according to the fracture's location and complexity. In this paper, we propose a method for the automatic classification of proximal femur fractures into 3 and 7 AO classes based on a Convolutional Neural Network (CNN). CNNs are known to need large and representative datasets with reliable labels, which are hard to collect for the application at hand. In this paper, we design a curriculum learning (CL) approach that improves on the performance of the basic CNN under such conditions. Our novel formulation unifies three curriculum strategies: individually weighting training samples, reordering the training set, and sampling subsets of data. The core of these strategies is a scoring function ranking the training samples. We define two novel scoring functions: one from domain-specific prior knowledge and an original self-paced uncertainty score. We perform experiments on a clinical dataset of proximal femur radiographs. The curriculum improves proximal femur fracture classification up to the performance of experienced trauma surgeons. The best curriculum method reorders the training set based on prior knowledge, resulting in a classification improvement of 15%. Using the publicly available MNIST dataset, we further discuss and demonstrate the benefits of our unified CL formulation for three controlled and challenging digit recognition scenarios: with limited amounts of data, under class imbalance, and in the presence of label noise. The code of our work is available at: https://github.com/ameliajimenez/curriculum-learning-prior-uncertainty.
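
As a small illustration of the unified formulation, the snippet below derives all three curriculum strategies (per-sample weights, a reordering, and a subset) from a single scoring function; the scores here are random placeholders rather than the paper's prior-knowledge or self-paced uncertainty scores.

```python
# Hedged illustration of a score-driven curriculum (not the authors' code): one scoring
# function over training samples yields weights, an ordering, and an "easy" subset.
import numpy as np

rng = np.random.default_rng(0)
score = rng.random(1000)                      # higher = easier (placeholder for prior knowledge
                                              # or a self-paced uncertainty score)
weights = score / score.sum()                 # strategy 1: weight samples in the loss
order = np.argsort(-score)                    # strategy 2: present easy samples first
subset = order[: int(0.5 * len(score))]       # strategy 3: early epochs train on the easiest half
print(weights[:3], order[:3], subset.shape)
```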


Subjects
Deep Learning; Curriculum; Femur/diagnostic imaging; Humans; Neural Networks, Computer; Uncertainty
8.
Phys Med Biol ; 67(15)2022 07 21.
Article in English | MEDLINE | ID: mdl-35785776

ABSTRACT

Objective. This paper proposes a novel approach for the longitudinal registration of PET imaging acquired for the monitoring of patients with metastatic breast cancer. Unlike with other image analysis tasks, the use of deep learning (DL) has not significantly improved the performance of image registration. With this work, we propose a new registration approach to bridge the performance gap between conventional and DL-based methods: medical image registration method regularized by architecture (MIRRBA). Approach. MIRRBA is a subject-specific deformable registration method which relies on a deep pyramidal architecture to parametrize the deformation field. Diverging from the usual deep-learning paradigms, MIRRBA does not require a learning database, but only the pair of images to be registered, which is used to optimize the network's parameters. We applied MIRRBA to a private dataset of 110 whole-body PET images of patients with metastatic breast cancer. We used different architecture configurations to produce the deformation field and studied the results obtained. We also compared our method to several standard registration approaches: two conventional iterative registration methods (ANTs and Elastix) and two supervised DL-based models (LapIRN and VoxelMorph). Registration accuracy was evaluated using the Dice score, the target registration error, the average Hausdorff distance and the detection rate, while the realism of the obtained registration was evaluated using the determinant of the Jacobian. The ability of the different methods to shrink disappearing lesions was also quantified with a disappearing rate. Main results. MIRRBA significantly improved all metrics when compared to the DL-based approaches. The organ and lesion Dice scores of VoxelMorph improved by 6% and 52%, respectively, while those of LapIRN increased by 5% and 65%. Compared to the conventional approaches, MIRRBA gave comparable results, showing the feasibility of our method. Significance. In this paper, we also demonstrate the regularizing power of deep architectures and present new elements to understand the role of the architecture in DL methods used for registration.
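
A toy sketch of the subject-specific idea is given below: a small CNN (standing in for the deep pyramidal architecture) is optimized on a single image pair only, and its output displacement field warps the moving image onto the fixed one. Written in PyTorch on 2D random images; it is an illustration of the principle, not the MIRRBA code.

```python
# Hedged toy sketch of architecture-regularized, subject-specific registration
# (spirit of the approach above, not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FieldNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),            # 2-channel (x, y) displacement field
        )
    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, disp):
    n = moving.shape[0]
    base = F.affine_grid(torch.eye(2, 3).unsqueeze(0).repeat(n, 1, 1), moving.shape,
                         align_corners=False)          # identity grid in [-1, 1]
    grid = base + disp.permute(0, 2, 3, 1)             # add predicted displacement
    return F.grid_sample(moving, grid, align_corners=False)

fixed, moving = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # toy image pair
model = FieldNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                   # optimize on this single pair only
    disp = model(fixed, moving)
    loss = F.mse_loss(warp(moving, disp), fixed) + 1e-2 * disp.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```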


Subjects
Breast Neoplasms; Image Processing, Computer-Assisted; Algorithms; Breast Neoplasms/diagnostic imaging; Female; Humans; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography
9.
IEEE Trans Med Imaging ; 40(10): 2711-2722, 2021 10.
Article in English | MEDLINE | ID: mdl-33417539

ABSTRACT

Early breast cancer screening through mammography produces millions of images worldwide every year. Despite the volume of the data generated, these images are not systematically associated with standardized labels. Current protocols encourage giving a malignancy probability to each studied breast but do not require the explicit and burdensome annotation of the affected regions. In this work, we address the problem of abnormality detection in the context of such weakly annotated datasets. We combine domain knowledge about the pathology and clinically available image-wise labels to propose a mixed self- and weakly supervised learning framework for abnormality reconstruction. We also introduce an auxiliary classification task based on the reconstructed regions to improve explainability. We work with high-resolution imaging that enables our network to capture different findings, including masses, micro-calcifications, distortions, and asymmetries, unlike most state-of-the-art works that mainly focus on masses. We use the popular INBreast dataset as well as our private multi-manufacturer dataset for validation, and we challenge our method in segmentation, detection, and classification against multiple state-of-the-art methods. Our results include an image-wise AUC of up to 0.86, an overall region-detection true positive rate of 0.93, and a pixel-wise F1 score of 64% on malignant masses.


Subjects
Breast Neoplasms; Neural Networks, Computer; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Early Detection of Cancer; Female; Humans; Mammography
10.
Front Radiol ; 1: 796078, 2021.
Article in English | MEDLINE | ID: mdl-37492176

ABSTRACT

In breast cancer screening, binary classification of mammograms is a common task aiming to determine whether a case is malignant or benign. A Computer-Aided Diagnosis (CADx) system based on a trainable classifier requires clean data and labels coming from a confirmed diagnosis. Unfortunately, such labels are not easy to obtain in clinical practice, since the histopathological biopsy reports may not be available alongside the mammograms, while normal cases may not have an explicit follow-up confirmation. Such ambiguities result either in reducing the number of samples eligible for training or in a label uncertainty that may decrease performance. In this work, we maximize the number of samples available for training by relying on multi-task learning. We design a deep-neural-network-based classifier yielding multiple outputs in one forward pass. The predicted classes include binary malignancy, cancer probability estimation, breast density, and image laterality. Since few samples have all classes available and confirmed, we propose to introduce the uncertainty related to the classes as a per-sample weight during training. Such weighting prevents updating the network's parameters when training on uncertain or missing labels. We evaluate our approach on the public INBreast and private datasets, showing statistically significant improvements compared to baseline and independent state-of-the-art approaches. Moreover, we use mammograms from the Susan G. Komen Tissue Bank for fine-tuning, further demonstrating the ability to improve performance in our multi-task learning setup from raw clinical data. We achieved a binary classification AUC of 80.46 on our private dataset and 85.23 on the INBreast dataset.
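
The per-sample weighting can be pictured as a masked multi-task loss; the sketch below (PyTorch, toy tensors, hypothetical task names) multiplies each task's per-sample loss by a certainty weight so that missing or uncertain labels do not update the network. It is an interpretation of the mechanism, not the authors' code.

```python
# Hedged sketch of per-sample certainty weighting in a multi-task loss (not the authors' code).
import torch
import torch.nn.functional as F

def multitask_loss(outputs, labels, certainty):
    """outputs/labels/certainty: dicts keyed by task name; certainty in [0, 1] per sample,
    0 where the label is missing or unconfirmed."""
    total = 0.0
    for task in outputs:
        per_sample = F.binary_cross_entropy_with_logits(
            outputs[task], labels[task], reduction="none")
        w = certainty[task]
        total = total + (w * per_sample).sum() / w.sum().clamp(min=1.0)
    return total

# Toy batch of 4 samples with two hypothetical tasks; laterality is unknown for samples 2 and 3.
out = {"malignancy": torch.randn(4), "laterality": torch.randn(4)}
lab = {"malignancy": torch.tensor([1., 0., 1., 0.]), "laterality": torch.tensor([0., 1., 0., 0.])}
cer = {"malignancy": torch.tensor([1., 1., .5, 1.]), "laterality": torch.tensor([1., 1., 0., 0.])}
print(multitask_loss(out, lab, cer))
```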

11.
IEEE Trans Med Imaging ; 40(10): 2615-2628, 2021 10.
Article in English | MEDLINE | ID: mdl-33560982

ABSTRACT

We present an accurate, fast and efficient method for segmentation and muscle mask propagation in 3D freehand ultrasound data, towards accurate volume quantification. A deep Siamese 3D Encoder-Decoder network that captures the evolution of the muscle appearance and shape across contiguous slices is deployed. We use it to propagate a reference mask annotated by a clinical expert. To handle longer-range changes of the muscle shape over the entire volume and to provide an accurate propagation, we devise a Bidirectional Long Short-Term Memory module. Also, to train our model with a minimal number of training samples, we propose a strategy combining learning from few annotated 2D ultrasound slices with sequential pseudo-labeling of the unannotated slices. We introduce a decremental update of the objective function to guide the model convergence in the absence of large amounts of annotated data. After training with a few volumes, the decremental update strategy switches from weakly supervised training to a few-shot setting. Finally, to handle the class imbalance between foreground and background muscle pixels, we propose a parametric Tversky loss function that learns to adaptively penalize false positives and false negatives. We validate our approach for the segmentation, label propagation, and volume computation of three lower-limb muscles on a dataset of 61600 images from 44 subjects. We achieve a Dice score of over 95% and a volumetric error of 1.6035 ± 0.587%.
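
Below is a generic sketch of a parametric Tversky loss with a learnable false-positive/false-negative trade-off, i.e., the general form of the loss named above, not the authors' exact parametrization.

```python
# Hedged sketch of a parametric Tversky loss: alpha and beta are learnable, so the penalty
# on false positives vs. false negatives adapts during training (generic form, not the paper's code).
import torch
import torch.nn as nn

class ParametricTverskyLoss(nn.Module):
    def __init__(self, eps=1e-6):
        super().__init__()
        self.logit_alpha = nn.Parameter(torch.zeros(1))  # alpha = sigmoid(.), beta = 1 - alpha
        self.eps = eps
    def forward(self, prob, target):
        """prob, target: (N, H, W) foreground probabilities and binary masks."""
        alpha = torch.sigmoid(self.logit_alpha)
        beta = 1.0 - alpha
        tp = (prob * target).sum()
        fp = (prob * (1 - target)).sum()
        fn = ((1 - prob) * target).sum()
        tversky = (tp + self.eps) / (tp + alpha * fp + beta * fn + self.eps)
        return 1.0 - tversky

loss_fn = ParametricTverskyLoss()
prob = torch.rand(2, 64, 64, requires_grad=True)
target = (torch.rand(2, 64, 64) > 0.7).float()
print(loss_fn(prob, target))   # loss_fn.parameters() would also be given to the optimizer
```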


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Muscles; Tomography, X-Ray Computed; Ultrasonography
12.
IEEE Trans Med Imaging ; 39(11): 3725-3736, 2020 11.
Article in English | MEDLINE | ID: mdl-32746117

ABSTRACT

In a low-statistics PET imaging context, the positive bias in regions of low activity is a well-known issue. To overcome this problem, algorithms without the built-in non-negativity constraint may be used. They allow negative voxels in the image, which reduce, or even cancel, the bias. However, such algorithms increase the variance and are difficult to interpret since the resulting images contain negative activities, which do not hold a physical meaning when dealing with radioactive concentration. In this paper, a post-processing approach is proposed to remove these negative values while preserving the local mean activities. Its original idea is to transfer the value of each voxel with negative activity to its direct neighbors under the constraint of preserving the local means of the image. In that respect, the proposed approach is formalized as a linear programming problem with a specific symmetric structure, which makes it solvable in a very efficient way by a dual-simplex-like iterative algorithm. The relevance of the proposed approach is discussed on simulated and experimental data. Acquired data from an yttrium-90 phantom show that on images produced by a non-constrained algorithm, a much lower variance in the cold area is obtained after the post-processing step, at the price of a slightly increased bias. More specifically, when compared with the classical OSEM algorithm, images are improved, both in terms of bias and of variance.
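
A toy 1D illustration of the underlying idea is sketched below with scipy.optimize.linprog: find a non-negative signal whose local means stay as close as possible to those of the original. The paper's actual formulation (neighbor-to-neighbor transfers with a specific symmetric structure solved by a dual-simplex-like scheme) is more specific than this sketch.

```python
# Hedged toy LP (1D): enforce non-negativity while keeping local means close to the original.
import numpy as np
from scipy.optimize import linprog

y = np.array([3.0, -1.0, 2.0, 0.5, -0.5, 4.0])      # reconstructed activities, some negative
n = len(y)
k = n - 2                                            # number of 3-voxel local-mean windows
M = np.zeros((k, n))
for i in range(k):
    M[i, i:i + 3] = 1.0 / 3.0                        # local-mean operator

# Variables z = [x (n non-negative voxels), u (k slacks bounding |M x - M y|)]
c = np.concatenate([np.zeros(n), np.ones(k)])        # minimize total local-mean deviation
A_ub = np.block([[ M, -np.eye(k)],
                 [-M, -np.eye(k)]])
b_ub = np.concatenate([M @ y, -(M @ y)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + k), method="highs")

x = res.x[:n]
print("corrected voxels:", np.round(x, 3))
print("local means before/after:", np.round(M @ y, 3), np.round(M @ x, 3))
```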


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Algorithms; Phantoms, Imaging
13.
Int J Comput Assist Radiol Surg ; 15(5): 847-857, 2020 May.
Article in English | MEDLINE | ID: mdl-32335786

ABSTRACT

PURPOSE: Demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to improve patient treatment planning and provide support for the training of trauma surgeon residents. MATERIAL AND METHODS: A database of 1347 clinical radiographic studies was collected. Radiologists and trauma surgeons annotated all fractures with bounding boxes and provided a classification according to the AO standard. In all experiments, the dataset was split patient-wise in three with the ratio 70%:10%:20% to build the training, validation and test sets, respectively. ResNet-50 and AlexNet architectures were implemented as deep learning classification and localization models, respectively. Accuracy, precision, recall and F1-score were reported as classification metrics. Retrieval of similar cases was evaluated in terms of precision and recall. RESULTS: The proposed CAD tool for the classification of radiographs into types "A," "B" and "not-fractured" reaches an F1-score of 87% and an AUC of 0.95. When classifying fractures versus not-fractured cases, these improve to 94% and 0.98, respectively. Prior localization of the fracture results in an improvement with respect to full-image classification. In total, 100% of the predicted centers of the region of interest are contained in the manually provided bounding boxes. The system retrieves on average 9 relevant images (from the same class) out of 10 cases. CONCLUSION: Our CAD scheme localizes, detects and further classifies proximal femur fractures, achieving results comparable to expert-level and state-of-the-art performance. Our auxiliary localization model was highly accurate in predicting the region of interest in the radiograph. We further investigated several strategies of verification for its adoption into the daily clinical routine. A sensitivity analysis of the size of the ROI and image retrieval as a clinical use case were presented.
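
The classification branch can be pictured as a standard fine-tuning setup; the sketch below adapts a torchvision ResNet-50 to the three coarse classes ("A", "B", "not-fractured") on random stand-in images. It assumes torchvision 0.13 or later for the weights API and is not the released code.

```python
# Hedged sketch of fine-tuning a ResNet-50 for 3-class fracture classification
# (illustrative setup, not the study's code; requires torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)        # "A", "B", "not-fractured"

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One toy training step on random data standing in for cropped hip radiographs
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```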


Subjects
Diagnosis, Computer-Assisted; Femoral Fractures/diagnostic imaging; Databases, Factual; Deep Learning; Femoral Fractures/classification; Femoral Fractures/surgery; Humans; Radiography
14.
Int J Comput Assist Radiol Surg ; 15(1): 129-139, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31256359

ABSTRACT

PURPOSE: Multiple myeloma (MM) is a bone marrow cancer that accounts for 10% of all hematological malignancies. It has been reported that FDG PET imaging provides prognostic information for both the baseline assessment and the therapeutic follow-up of MM patients using visual analysis. In this study, we aim to develop a computer-assisted method based on PET quantitative image features to assist diagnoses and treatment decisions for MM patients. METHODS: Our proposed model relies on a two-stage method with Random Survival Forest (RSF) and variable importance (VIMP) for both feature selection and prediction. The target variable for prediction is progression-free survival (PFS). We consider texture-based (radiomics), conventional (e.g., SUVmax) and clinical biomarkers. We evaluate PFS predictions in terms of C-index and final prognosis separation into two risk groups, on a database of 66 patients who were part of the prospective multi-centric French IMAJEM study. RESULTS: Our method (VIMP + RSF) provides better results (1-C-index of 0.36) than conventional methods such as Lasso-Cox and gradient-boosting Cox (0.48 and 0.56, respectively). We experimentally demonstrated the benefit of feature selection (0.61 for RSF without selection) and showed that VIMP selection is more stable and gives better results than minimal depth and variable hunting (0.47 and 0.43). The approach also gives better prognosis group separation (a p value of 0.05 against 0.11 to 0.4 for the others). CONCLUSION: Our results confirm the predictive value of radiomics for MM patients; in particular, they demonstrate that quantitative/heterogeneity image-based features reduce the error of progression prediction. To our knowledge, this is the first work using RSF on PET images for the progression prediction of MM patients. Moreover, we provide an analysis of the feature selection process, which points toward the identification of clinically relevant biomarkers.
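
A hedged sketch of VIMP-style feature selection with a random survival forest is shown below: importance is measured as the drop in concordance index when a feature is permuted, and the top-ranked features are kept for refitting. For brevity it is computed in-sample on synthetic data (the original VIMP is an out-of-bag quantity) and uses scikit-survival; it is not the authors' code.

```python
# Hedged sketch of permutation-based variable importance (VIMP-style) with an RSF.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 20))                    # radiomics + conventional + clinical (synthetic)
y = Surv.from_arrays(event=rng.integers(0, 2, 66).astype(bool),
                     time=rng.exponential(30, 66))

rsf = RandomSurvivalForest(n_estimators=300, min_samples_leaf=5, random_state=0).fit(X, y)
baseline = rsf.score(X, y)                       # concordance index

vimp = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])         # break the feature/outcome association
    vimp[j] = baseline - rsf.score(Xp, y)        # importance = loss of concordance

selected = np.argsort(-vimp)[:5]                 # keep the top-ranked features, then refit
print("selected feature indices:", selected)
```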


Subjects
Fluorodeoxyglucose F18/pharmacology; Multiple Myeloma/diagnosis; Positron-Emission Tomography/methods; Humans; Machine Learning; Prognosis; Prospective Studies; Radiopharmaceuticals/pharmacology
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1536-1539, 2020 07.
Article in English | MEDLINE | ID: mdl-33018284

ABSTRACT

Semi-automatic measurements are performed on 18FDG PET-CT images to monitor the evolution of metastatic sites in the clinical follow-up of metastatic breast cancer patients. Apart from being time-consuming and prone to subjective approximation, semi-automatic tools cannot tell the difference between cancerous regions and active organs, which present a high 18FDG uptake. In this work, we combine a deep learning-based approach with a superpixel segmentation method to segment the main active organs (brain, heart, bladder) from full-body PET images. In particular, we integrate a superpixel SLIC algorithm at different levels of a convolutional network. Results are compared with a deep learning segmentation network alone. The methods are cross-validated on full-body PET images of 36 patients and tested on the acquisitions of 24 patients from a different study center, in the context of the ongoing EPICUREseinmeta study. The similarity between the manually defined organ masks and the results is evaluated with the Dice score. Moreover, the amount of false positives is evaluated through the positive predictive value (PPV). According to the computed Dice scores, all approaches allow accurate segmentation of the target organs. However, the networks integrating superpixels are better suited to transfer knowledge across datasets acquired on multiple sites (domain adaptation) and are less likely to segment structures outside of the target organs, according to the PPV. Hence, combining deep learning with superpixels allows segmenting organs presenting a high 18FDG uptake on PET images without selecting cancerous lesions, and thus improves the precision of the semi-automatic tools monitoring the evolution of breast cancer metastasis. Clinical relevance: We demonstrate the utility of combining deep learning and superpixel segmentation methods to accurately find the contours of active organs in metastatic breast cancer images across different dataset distributions.
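
The superpixel side of the pipeline can be illustrated as follows: SLIC superpixels computed on a PET slice, a CNN probability map majority-voted within each superpixel, and the Dice score and PPV used for evaluation. All arrays are random stand-ins, scikit-image 0.19 or later is assumed for the channel_axis argument, and this is not the authors' integration of SLIC inside the network.

```python
# Hedged sketch: SLIC superpixels refining a CNN probability map, plus Dice and PPV.
import numpy as np
from skimage.segmentation import slic

def dice(pred, ref):
    return 2 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + 1e-8)

def ppv(pred, ref):
    return np.logical_and(pred, ref).sum() / (pred.sum() + 1e-8)

rng = np.random.default_rng(0)
slice_img = rng.random((128, 128))                        # stand-in for a PET slice
superpixels = slic(slice_img, n_segments=200, compactness=0.1, channel_axis=None)

cnn_prob = rng.random((128, 128))                         # stand-in for a CNN probability map
refined = np.zeros_like(cnn_prob, dtype=bool)
for label in np.unique(superpixels):                      # keep superpixels the CNN votes for
    region = superpixels == label
    refined[region] = cnn_prob[region].mean() > 0.5

reference = rng.random((128, 128)) > 0.5                  # stand-in for the manual organ mask
print(f"Dice = {dice(refined, reference):.2f}, PPV = {ppv(refined, reference):.2f}")
```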


Subjects
Breast Neoplasms; Deep Learning; Algorithms; Brain; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Humans; Neoplasm Metastasis; Positron Emission Tomography Computed Tomography
16.
Ultrasound Med Biol ; 44(1): 278-291, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29107355

ABSTRACT

A new method to address the problem of shadowing in fetal brain ultrasound volumes is presented. The proposed approach is based on the spatial composition of multiple 3-D fetal head projections using the weighted Euclidean norm as an operator. A support vector machine, trained with optimal textural features, was used to assign weights according to the posterior probabilities of brain tissue and shadows. Both phantom and real fetal head ultrasound volumes were compounded using previously reported operators and compared with the proposed composition method to validate it. The quantitative evaluations revealed increases of up to 35% in signal-to-noise ratio and up to 135% in contrast-to-noise ratio on real data. Qualitative comparisons made by obstetricians indicated that this novel method adequately recovers brain tissue and improves the visibility of the main cerebral structures. This may prove useful both for fetal monitoring and in the diagnosis of brain defects. Overall, this new approach outperforms previously reported spatial composition methods.


Subjects
Brain/diagnostic imaging; Brain/embryology; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Ultrasonography, Prenatal/methods; Algorithms; Female; Humans; Models, Statistical; Phantoms, Imaging; Pregnancy; Ultrasonography, Prenatal/statistics & numerical data
17.
Med Image Anal ; 41: 2-17, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28506641

ABSTRACT

In this paper, we address the multimodal registration problem from a novel perspective, aiming to predict the transformation aligning images directly from their visual appearance. We formulate the prediction as a supervised regression task, with joint image descriptors as input and, as output, the parameters of the transformation that guide the moving image towards alignment. We model the joint local appearance with context-aware descriptors that capture both local and global cues simultaneously in the two modalities, while the regression function is based on gradient boosted trees, which can handle the very large contextual feature space. The good properties of our predictions allow us to couple them with a simple gradient-based optimization for the final registration. Our approach can be applied to any transformation parametrization as well as a broad range of modality pairs. Our method learns the relationship between the intensity distributions of a pair of modalities by using prior knowledge in the form of a small training set of aligned image pairs (on the order of 1-5 in our experiments). We demonstrate the flexibility and generality of our method by evaluating its performance on a variety of multimodal imaging pairs obtained from two publicly available datasets, RIRE (brain MR, CT and PET) and IXI (brain MR). We also show results for the very challenging deformable registration of intravascular ultrasound and histology images. In these experiments, our approach has a larger capture range than other state-of-the-art methods, while improving registration accuracy in complex cases.
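
A minimal sketch of the regression step is given below: gradient boosted trees (scikit-learn, wrapped for multi-output regression) map joint contextual descriptors to the parameters of a hypothetical 2D rigid transform. Descriptors and targets are synthetic placeholders; the paper's descriptors and transformation parametrizations are richer.

```python
# Hedged sketch of transformation-parameter regression with gradient boosted trees
# (generic setup, not the authors' code; the (tx, ty, theta) target is a hypothetical example).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 128))          # joint context-aware descriptors (synthetic)
params = rng.normal(size=(500, 3))                 # ground-truth (tx, ty, theta) per sample

reg = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=200, max_depth=3))
reg.fit(descriptors, params)

# At test time, the predicted update can seed or drive a gradient-based refinement.
print(reg.predict(descriptors[:1]))
```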


Subjects
Multimodal Imaging/methods; Magnetic Resonance Imaging; Positron-Emission Tomography; Reproducibility of Results; Sensitivity and Specificity; Supervised Machine Learning; Tomography, X-Ray Computed
18.
Med Image Anal ; 35: 655-668, 2017 01.
Article in English | MEDLINE | ID: mdl-27750189

ABSTRACT

The examination of biopsy samples plays a central role in the diagnosis and staging of numerous diseases, including most cancer types. However, because of the large size of the acquired images, the localization and quantification of diseased portions of a tissue are usually time-consuming, as pathologists must scroll through the whole slide to look for objects of interest which are often only sparsely distributed. In this work, we introduce an approach to facilitate the visual inspection of large digital histopathological slides. Our method builds on a random forest classifier trained to segment the structures sought by the pathologist. However, moving beyond the pixelwise segmentation task, our main contribution is an interactive exploration framework including: (i) a region scoring function which is used to rank and sequentially display regions of interest to the user, and (ii) a relevance feedback capability which leverages human annotations collected on each suggested region. Thereby, an online domain adaptation of the learned pixelwise segmentation model is performed, so that the region scores adapt on the fly to possible discrepancies between the original training data and the slide at hand. Three real-time update strategies are compared, including a novel approach based on online gradient descent which supports faster user interaction than an accurate delineation of objects would allow. Our method is evaluated on the task of extramedullary hematopoiesis quantification within mouse liver slides. We quantitatively assess the retrieval abilities of our approach and the benefit of the interactive adaptation scheme. Moreover, we demonstrate the possibility of extrapolating, after a partial exploration of the slide, the surface covered by hematopoietic cells within the whole tissue.
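
The relevance-feedback loop can be sketched as an online logistic scorer: the highest-scoring unreviewed region is shown to the user, and their label triggers a single online gradient descent step on the scoring weights. Features, labels and the learning rate below are placeholders; this is an illustration of the update strategy, not the paper's implementation.

```python
# Hedged sketch of online gradient descent driven by relevance feedback (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(16)                                    # weights of the region scoring function
regions = rng.normal(size=(200, 16))                # per-region features (e.g., pooled pixel scores)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

reviewed = set()
for step in range(10):
    scores = regions @ w
    scores[list(reviewed)] = -np.inf                # do not re-suggest reviewed regions
    top = int(np.argmax(scores))                    # next region shown to the pathologist
    label = float(rng.random() > 0.5)               # stand-in for the user's relevant / not-relevant click
    w -= lr * (sigmoid(regions[top] @ w) - label) * regions[top]   # online logistic gradient step
    reviewed.add(top)
```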


Subjects
Algorithms; Pathology/methods; Animals; Hematopoiesis; Liver/pathology; Mice; Pathology/instrumentation
20.
Int J Comput Assist Radiol Surg ; 10(6): 773-81, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25976832

ABSTRACT

PURPOSE: The continuous integration of innovative imaging modalities into conventional vascular surgery rooms has led to an urgent need for computer assistance solutions that support the smooth integration of imaging within the surgical workflow. In particular, endovascular interventions performed under 2D fluoroscopic or angiographic imaging alone require reliable and fast navigation support for complex treatment procedures such as endovascular aortic repair. Despite the vast variety of image-based guide wire and catheter tracking methods, adopting these for detecting and tracking the stent graft delivery device is not possible due to its special geometry and intensity appearance. METHODS: In this paper, we present, for the first time, the automatic detection and tracking of the stent graft delivery device in 2D fluoroscopic sequences on the fly. The proposed approach is based on robust principal component analysis and extends conventional batch processing towards an online tracking system that is able to detect and track medical devices on the fly. RESULTS: The proposed method has been tested on interventional sequences of four different clinical cases. In the absence of publicly available ground truth data, we further initiated a crowd-sourcing strategy that resulted in 200 annotations by inexperienced users, 120 of which were used to establish a ground truth dataset for quantitatively evaluating our algorithm. In addition, we performed a user study amongst our clinical partners for qualitative evaluation of the results. CONCLUSIONS: Although we calculated an average error in the range of nine pixels, the fact that our tracking method functions on the fly and is able to detect stent grafts in all unfolding stages without fine-tuning of parameters has convinced our clinical partners, and they all agreed on the very high clinical relevance of our method.
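
For background, the sketch below runs generic robust PCA by principal component pursuit (inexact augmented Lagrangian with soft and singular-value thresholding) on a matrix of flattened frames, splitting it into a low-rank background and a sparse component that would capture moving devices. The paper's contribution is an online extension of this batch scheme; the code below is the textbook batch version, not the authors'.

```python
# Hedged sketch of batch robust PCA (principal component pursuit), not the paper's online method.
import numpy as np

def shrink(X, tau):                         # soft-thresholding operator
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                            # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, n_iter=200):
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / (np.abs(M).sum() + 1e-8)
    S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)   # low-rank background update
        S = shrink(M - L + Y / mu, lam / mu)  # sparse foreground update
        Y = Y + mu * (M - L - S)            # dual variable update
    return L, S

frames = np.random.rand(30, 64 * 64)        # 30 flattened frames standing in for a sequence
L, S = rpca(frames)                         # S would highlight the moving delivery device
print(np.linalg.matrix_rank(L, tol=1e-3), float(np.mean(np.abs(S) > 1e-3)))
```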


Subjects
Aorta/surgery; Endovascular Procedures/methods; Internet; Angiography/methods; Catheterization/methods; Fluoroscopy/methods; Humans; Stents; Surgery, Computer-Assisted