Results 1 - 20 of 58
1.
Life (Basel) ; 13(7)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37511816

ABSTRACT

The purpose of this investigation was to evaluate the diagnostic performance of two convolutional neural networks (CNNs), namely ResNet-152 and VGG-19, in analyzing, on panoramic images, the relationship between the lower third molar (MM3) and the mandibular canal (MC), and to compare this performance with that of an inexperienced observer (a sixth-year dental student). Utilizing the k-fold cross-validation technique, 142 MM3 images, cropped from 83 panoramic images, were split into 80% training and validation data and 20% test data, and were labeled by an experienced radiologist to provide the gold standard. To compare the diagnostic capabilities of the CNN algorithms and the inexperienced observer, the diagnostic accuracy, sensitivity, specificity, and positive predictive value (PPV) were determined. ResNet-152 achieved a mean sensitivity, specificity, PPV, and accuracy of 84.09%, 94.11%, 92.11%, and 88.86%, respectively. VGG-19 achieved 71.82%, 93.33%, 92.26%, and 85.28% for the same metrics. The dental student achieved 69.60%, 53.00%, 64.85%, and 62.53%, respectively. This work demonstrates the potential of deep CNN architectures for identifying and evaluating the contact between the MM3 and the MC on panoramic images. In addition, CNNs could be a useful tool to assist inexperienced observers in more accurately identifying contact relationships between the MM3 and the MC on panoramic images.
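The four metrics reported above follow directly from a binary confusion matrix. A minimal illustrative sketch (not the authors' code; labels and predictions are toy stand-ins for contact/no-contact classifications):

```python
# Sketch: sensitivity, specificity, PPV, and accuracy from binary labels (1 = contact).
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, ppv, accuracy

# Example: radiologist gold-standard labels vs. model predictions
print(diagnostic_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))
```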

2.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8372-8389, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37015430

ABSTRACT

Event cameras are novel bio-inspired sensors that measure per-pixel brightness differences asynchronously. Recovering brightness from events is appealing since the reconstructed images inherit the high dynamic range (HDR) and high-speed properties of events; hence they can be used in many robotic vision applications and to generate slow-motion HDR videos. However, state-of-the-art methods tackle this problem by training an event-to-image Recurrent Neural Network (RNN), which lacks explainability and is difficult to tune. In this work we show, for the first time, how tackling the combined problem of motion and brightness estimation leads us to formulate event-based image reconstruction as a linear inverse problem that can be solved without training an image reconstruction RNN. Instead, classical and learning-based regularizers are used to solve the problem and remove artifacts from the reconstructed images. The experiments show that the proposed approach generates images with visual quality on par with state-of-the-art methods despite only using data from a short time interval. State-of-the-art results are achieved using an image denoising Convolutional Neural Network (CNN) as the regularization function. The proposed regularized formulation and solvers have a unifying character because they can also be applied to reconstruct brightness from the second derivative. Additionally, the formulation is attractive because it can be naturally combined with super-resolution, motion segmentation, and color demosaicing. Code is available at https://github.com/tub-rip/event_based_image_rec_inverse_problem.
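For readers unfamiliar with regularized linear inverse problems, the sketch below solves a generic Tikhonov-regularized least-squares problem via the normal equations. It only illustrates the idea; the paper's forward model (built from event data and motion) and its learned regularizers are not reproduced here, and all quantities are synthetic:

```python
# Generic sketch: min_x ||A x - b||^2 + lam * ||x||^2, solved via the normal equations.
import numpy as np

def tikhonov_solve(A, b, lam=1e-2):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))            # stand-in forward operator
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = tikhonov_solve(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative error
```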

3.
Diagnostics (Basel) ; 13(7)2023 Mar 23.
Article in English | MEDLINE | ID: mdl-37046428

ABSTRACT

Radionuclides are unstable isotopes that mainly emit alpha (α), beta (β), or gamma (γ) radiation through radioactive decay. Therefore, they are used in the biomedical field to label biomolecules or drugs for diagnostic imaging applications, such as positron emission tomography (PET) and/or single-photon emission computed tomography (SPECT). A growing field of research is the development of new radiopharmaceuticals for use in cancer treatments. Preclinical studies are the gold standard for translational research. Specifically, in vitro radiopharmaceutical studies are based on the use of radiopharmaceuticals directly on cells. To date, radiometric β- and γ-counters are the only tools able to assess a preclinical in vitro assay with the aim of estimating uptake, retention, and release parameters, including time- and dose-dependent cytotoxicity and kinetic parameters. This review has been designed for researchers, such as biologists and biotechnologists, who would like to approach the radiobiology field and conduct in vitro assays for cellular radioactivity evaluations using radiometric counters. To demonstrate the importance of in vitro radiopharmaceutical assays using radiometric counters with a view to radiogenomics, many studies based on 64Cu-, 68Ga-, 125I-, and 99mTc-labeled radiopharmaceuticals have been reviewed and summarized in this manuscript.

4.
Life (Basel) ; 13(2)2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36836717

ABSTRACT

Polyphenols have gained widespread attention as they are effective in the prevention and management of various diseases, including cancer diseases (CD) and rheumatoid arthritis (RA). They are natural organic substances present in fruits, vegetables, and spices. Polyphenols interact with various kinds of receptors and membranes. They modulate different signal cascades and interact with the enzymes responsible for CD and RA. These interactions involve cellular machinery, from cell membranes to major nuclear components, and provide information on their beneficial effects on health. These actions provide evidence for their pharmaceutical exploitation in the treatment of CD and RA. In this review, we discuss different pathways modulated by polyphenols that are involved in CD and RA. A search of the most recent relevant publications was carried out with the following criteria: publication date, 2012-2022; language, English; study design, in vitro; and the investigation of polyphenols present in extra virgin olive oil, grapes, and spices in the context of RA and CD, including, when available, the underlying molecular mechanisms. This review is valuable for clarifying the mechanisms by which polyphenols target the pathways of senescence, supporting the development of CD and RA treatments. Herein, we focus on research reports that emphasize antioxidant properties.

5.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3617-3631, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35635811

ABSTRACT

We investigate a multiview shape reconstruction problem based on an active surface model whose geometric evolution is driven by radar measurements acquired at sparse locations. Building on our previous work in the context of variational methods for the reconstruction of a scene conceptualized as the graph of a function, we generalize this inversion approach to a general geometry, now described by an active surface, strongly motivated by prior variational computer vision approaches to multiview stereo reconstruction from camera images. While conceptually similar, the use of radar echoes within a variational scheme to drive the active surface evolution requires significant changes in regularization strategies, compared to prior image-based methodologies, for the active surface evolution to work effectively. We describe all of these aspects and how we addressed them. Our long-term objective is to develop a framework capable of fusing radar as well as other image-based information, in which the active surface becomes an explicit shared reference for data fusion; in this paper, we explore reconstruction using radar as a single modality, demonstrating that the presented approach can provide reconstructions of quality comparable to those from image-based methods, showing great potential for further development toward data fusion.

6.
Eur J Hybrid Imaging ; 6(1): 4, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-35165793

ABSTRACT

BACKGROUND: Positron emission tomography (PET)-derived left ventricular myocardial blood flow (MBF) quantification is usually measured in standard anatomical vascular territories, potentially averaging flow from normally perfused tissue with that from areas with abnormal flow supply. Previously, we reported on an image-based tool to noninvasively measure absolute myocardial blood flow at locations just below individual epicardial vessels to help guide revascularization. The aim of this work is to determine the robustness of vessel-specific flow measurements (MBFvs) extracted from the fusion of dynamic PET (dPET) with coronary computed tomography angiography (CCTA) myocardial segmentations, using flow measured from the fusion with CCTA manual segmentation as the reference standard. METHODS: Forty-three patients' 13NH3 dPET and CCTA image datasets were used to measure the agreement of the MBFvs profiles after the fusion of dPET data with three CCTA anatomical models: (1) a manual model, (2) a fully automated segmented model, and (3) a corrected model, where major inaccuracies in the automated segmentation were briefly edited. Pairwise accuracy of the normality/abnormality agreement of flow values along differently extracted vessels was determined by comparing, on a point-by-point basis, each vessel's flow to the corresponding vessel's normal limits, using the Dice coefficient (DC) as the metric. RESULTS: Of the 43 patients' fully automated CCTA mask models, 27 required manual correction before dPET/CCTA image fusion, but this editing process was brief (2-3 min), allowing a 100% success rate of extracting MBFvs in clinically acceptable times. In total, 124 vessels were analyzed after dPET fusion with the manual and corrected CCTA mask models, yielding 2225 stress and 2122 rest flow values. Forty-seven vessels were analyzed after fusion with the fully automatic masks, producing 840 stress and 825 rest flow samples. All DC values computed globally or by territory were ≥ 0.93. No statistical differences were found in the normal/abnormal flow classifications between manual and corrected or manual and fully automated CCTA masks. CONCLUSION: Fully automated and manually corrected myocardial CCTA segmentation provides anatomical masks for vessel-specific myocardial blood flow measurement using dynamic PET/CCTA image fusion that are not significantly different in flow accuracy from fully manually segmented CCTA myocardial masks, and does so within clinically acceptable processing times.
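The Dice coefficient used here as the agreement metric between point-by-point normal/abnormal flow classifications can be computed as in the minimal sketch below (illustrative only; the flag vectors are invented stand-ins, not study data):

```python
# Sketch: Dice agreement between two binary classifications along a vessel.
import numpy as np

def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual    = [1, 1, 0, 0, 1, 0]   # abnormal-flow flags from the manual CCTA mask
automated = [1, 1, 0, 1, 1, 0]   # abnormal-flow flags from the automated CCTA mask
print(dice(manual, automated))
```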

7.
Article in English | MEDLINE | ID: mdl-34337618

ABSTRACT

We propose Directionally Paired Principal Component Analysis (DP-PCA), a novel linear dimension-reduction model for estimating coupled yet partially observable variable sets. Unlike partial least squares methods (e.g., partial least squares regression and canonical correlation analysis) that maximize correlation/covariance between the two datasets, our DP-PCA directly minimizes, either conditionally or unconditionally, the reconstruction and prediction errors for the observable and unobservable parts, respectively. We demonstrate the optimality of the proposed DP-PCA approach, and we compare and evaluate relevant linear cross-decomposition methods with data reconstruction and prediction experiments on synthetic Gaussian data, multi-target regression datasets, and a single-channel image dataset. Results show that when only a single pair of bases is allowed, conditional DP-PCA achieves the lowest reconstruction error on the observable part and on the total variable set as a whole; meanwhile, unconditional DP-PCA reaches the lowest prediction error on the unobservable part. When an extra budget is allowed for the observable part's PCA basis, one can reach an optimal solution using a combined method: standard PCA for the observable part and unconditional DP-PCA for the unobservable part.
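To make the two error criteria concrete, the sketch below evaluates a simple baseline (not DP-PCA itself): standard PCA reconstruction error on the observable block and least-squares prediction error on the unobservable block, with a single retained component as in the "single pair of bases" comparison above. All data are synthetic:

```python
# Baseline sketch: reconstruction error on X (observable) and prediction error on Y (unobservable).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                                            # observable block
Y = X @ rng.standard_normal((20, 5)) + 0.1 * rng.standard_normal((500, 5))    # coupled block, unobservable at test time

pca = PCA(n_components=1).fit(X)          # a single basis vector
Z = pca.transform(X)
X_rec = pca.inverse_transform(Z)
reg = LinearRegression().fit(Z, Y)        # predict the unobservable part from the reduced code

print("reconstruction error:", np.mean((X - X_rec) ** 2))
print("prediction error:", np.mean((Y - reg.predict(Z)) ** 2))
```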

8.
Article in English | MEDLINE | ID: mdl-34350427

ABSTRACT

Principal Component Analysis (PCA) is a widely used technique for dimensionality reduction in various problem domains, including data compression, image processing, visualization, exploratory data analysis, pattern recognition, time-series prediction, and machine learning. Often, data are presented in a correlated, paired manner such that there exist observable and correlated unobservable measurements. Unfortunately, traditional PCA techniques generally fail to optimally capture the leverageable correlations between such paired data, as they do not yield a maximally correlated basis between the observable and unobservable counterparts. This instead is the objective of Canonical Correlation Analysis (and the more general Partial Least Squares methods); however, such techniques are still symmetric in maximizing correlation (covariance for PLSR) over all choices of the basis for both datasets, without differentiating between observable and unobservable variables (except for the regression phase of PLSR). Further, these methods deviate from PCA's formulation objective of minimizing approximation error, seeking instead to maximize correlation or covariance. While these are sensible optimization objectives, they are not equivalent to error minimization. We therefore introduce a new method of leveraging PCA between paired datasets in a dependently coupled manner, which is optimal with respect to approximation error during training. We generate a dependently coupled paired basis for which we relax orthogonality constraints when decomposing unreliable unobservable measurements. In doing so, we can optimally capture the variations of the observable data while conditionally minimizing the expected prediction error for the unobservable component. We show preliminary results that demonstrate improved learning of our proposed method compared to that of traditional techniques.
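The correlation-maximizing objective discussed above (CCA, and PLS for covariance) can be contrasted with PCA's error-minimizing one using off-the-shelf estimators. A small sketch on synthetic paired data, purely for illustration of the existing baselines, not of the proposed method:

```python
# Sketch: canonical correlations (CCA) and PLS prediction error on synthetic paired data.
import numpy as np
from sklearn.cross_decomposition import CCA, PLSRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 10))                                                   # observable block
Y = X[:, :3] @ rng.standard_normal((3, 4)) + 0.2 * rng.standard_normal((300, 4))     # unobservable block

cca = CCA(n_components=2).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
print("canonical correlations:", [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(2)])

pls = PLSRegression(n_components=2).fit(X, Y)
print("PLS prediction MSE:", np.mean((Y - pls.predict(X)) ** 2))
```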

9.
Med Phys ; 48(9): 5130-5141, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34245012

ABSTRACT

PURPOSE: In current clinical practice, noisy and artifact-ridden weekly cone beam computed tomography (CBCT) images are only used for patient setup during radiotherapy. Treatment planning is performed once at the beginning of the treatment using high-quality planning CT (pCT) images and manual contours for organs-at-risk (OAR) structures. If the quality of the weekly CBCT images can be improved while simultaneously segmenting OAR structures, this can provide critical information for adapting radiotherapy mid-treatment as well as for deriving biomarkers for treatment response. METHODS: Using a novel physics-based data augmentation strategy, we synthesize a large dataset of perfectly/inherently registered pCT and synthetic-CBCT pairs for a locally advanced lung cancer patient cohort, which are then used in a multitask three-dimensional (3D) deep learning framework to simultaneously segment and translate real weekly CBCT images to high-quality pCT-like images. RESULTS: We compared the synthetic CT and OAR segmentations generated by the model to the real pCT and manual OAR segmentations and showed promising results. The real week 1 (baseline) CBCT images, which had an average mean absolute error (MAE) of 162.77 HU compared to the pCT images, are translated to synthetic CT images that exhibit a drastically improved average MAE of 29.31 HU and an average structural similarity of 92% with the pCT images. The average Dice scores of the 3D OAR segmentations are: lungs 0.96, heart 0.88, spinal cord 0.83, and esophagus 0.66. CONCLUSIONS: We demonstrate an approach to translate artifact-ridden CBCT images to high-quality synthetic CT images while simultaneously generating good-quality segmentation masks for different OARs. This approach could allow clinicians to adjust treatment plans using only the routine low-quality CBCT images, potentially improving patient outcomes. Our code, data, and pre-trained models will be made available via our physics-based data augmentation library, Physics-ArX, at https://github.com/nadeemlab/Physics-ArX.
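The two image-quality figures quoted above (MAE in Hounsfield units and structural similarity) can be computed as in the sketch below. This is not the authors' pipeline; the arrays are random stand-ins for a planning CT and a synthetic CT:

```python
# Sketch: MAE (HU) and SSIM between a synthetic CT and the planning CT.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
pct  = rng.uniform(-1000, 1000, (64, 64)).astype(np.float32)    # planning CT slice (HU)
s_ct = pct + rng.normal(0, 30, pct.shape).astype(np.float32)    # synthetic CT slice (HU)

mae  = np.mean(np.abs(s_ct - pct))
ssim = structural_similarity(pct, s_ct, data_range=2000.0)      # data_range spans the HU window used
print(f"MAE = {mae:.2f} HU, SSIM = {ssim:.2%}")
```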


Subjects
Spiral Cone-Beam Computed Tomography; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted; Organs at Risk; Physics; Radiotherapy Planning, Computer-Assisted
10.
Biomed Eng Lett ; 11(1): 15-24, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33747600

ABSTRACT

Diagnosis of ascending thoracic aortic aneurysm (ATAA) is based on the measurement of the maximum aortic diameter, but size is not a good predictor of the risk of adverse events. There is growing interest in the development of novel image-derived risk strategies to improve patient risk management towards a highly individualized level. In this study, the feasibility and efficacy of deep learning for the automatic segmentation of ATAAs were investigated using the UNet, ENet, and ERFNet techniques. Specifically, CT angiography scans of 72 patients with ATAAs and different valve morphologies (i.e., tricuspid aortic valve, TAV, and bicuspid aortic valve, BAV) were semi-automatically segmented with Mimics software (Materialise NV, Leuven, Belgium) and then used for training of the tested deep learning models. The segmentation performance, in terms of accuracy and inference time, was compared using several parameters. All deep learning models reported a Dice score higher than 88%, suggesting good agreement between predicted and manual ATAA segmentations. We found that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. This study demonstrated that deep learning models can rapidly segment and quantify the 3D geometry of ATAAs with high accuracy, thereby facilitating the adoption in the clinical workflow of a personalized approach to the management of patients with ATAAs.

11.
Appl Sci (Basel) ; 11(2)2021 Jan 02.
Article in English | MEDLINE | ID: mdl-33680505

ABSTRACT

Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is a time-consuming task, we present three deep-learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized convNet (ERFNet), whose aim is to tackle the fully automated, real-time, and 3D delineation of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using the k-fold validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware to simulate real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even with small image training datasets, with potential benefits for the personalization of patient management.
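The Tversky loss mentioned above weights false positives and false negatives separately (with alpha = beta = 0.5 it reduces to a Dice loss). A minimal NumPy sketch for clarity, not the training code, with invented masks and probabilities:

```python
# Sketch: Tversky loss between predicted foreground probabilities and a binary mask.
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """pred: probabilities in [0, 1]; target: binary mask; alpha/beta weight FP/FN."""
    pred, target = pred.ravel(), target.ravel()
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

rng = np.random.default_rng(0)
mask = (rng.random((8, 64, 64)) > 0.7).astype(float)            # manual prostate mask (toy)
prob = np.clip(mask + rng.normal(0, 0.2, mask.shape), 0, 1)     # network output (toy)
print(tversky_loss(prob, mask, alpha=0.3, beta=0.7))
```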

12.
J Magn Reson Imaging ; 54(2): 452-459, 2021 08.
Article in English | MEDLINE | ID: mdl-33634932

ABSTRACT

BACKGROUND: Prostate volume, as determined by magnetic resonance imaging (MRI), is a useful biomarker for distinguishing between benign and malignant pathology and can be used either alone or combined with other parameters such as prostate-specific antigen. PURPOSE: This study compared different deep learning methods for whole-gland and zonal prostate segmentation. STUDY TYPE: Retrospective. POPULATION: A total of 204 patients (train/test = 99/105) from the PROSTATEx public dataset. FIELD STRENGTH/SEQUENCE: 3 T, TSE T2-weighted. ASSESSMENT: Four operators performed manual segmentation of the whole gland, central zone + anterior stroma + transition zone (TZ), and peripheral zone (PZ). U-net, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet) were trained and tuned on the training data through 5-fold cross-validation to segment the whole gland and TZ separately, while PZ automated masks were obtained by subtraction of the first two. STATISTICAL TESTS: Networks were evaluated on the test set using various accuracy metrics, including the Dice similarity coefficient (DSC). Model DSC was compared in both the training and test sets using the analysis of variance (ANOVA) test and post hoc tests. Parameter number, disk size, and training and inference times determined network computational complexity and were also used to assess the model performance differences. P < 0.05 was considered to indicate statistical significance. RESULTS: The best DSC (P < 0.05) in the test set was achieved by ENet: 91% ± 4% for the whole gland, 87% ± 5% for the TZ, and 71% ± 8% for the PZ. U-net and ERFNet obtained, respectively, 88% ± 6% and 87% ± 6% for the whole gland, 86% ± 7% and 84% ± 7% for the TZ, and 70% ± 8% and 65% ± 8% for the PZ. Training and inference times were lowest for ENet. DATA CONCLUSION: Deep learning networks can accurately segment the prostate using T2-weighted images. EVIDENCE LEVEL: 4. TECHNICAL EFFICACY: Stage 2.
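The mask arithmetic described above (PZ obtained by subtracting the TZ mask from the whole-gland mask) amounts to a simple boolean operation on binary volumes. A toy sketch with invented arrays, not the study's data:

```python
# Sketch: peripheral-zone mask = whole-gland mask minus transition-zone mask.
import numpy as np

whole_gland = np.zeros((4, 128, 128), dtype=bool)
tz          = np.zeros_like(whole_gland)
whole_gland[:, 30:100, 30:100] = True
tz[:, 45:85, 45:85] = True

pz = whole_gland & ~tz          # PZ voxels belong to the gland but not to the TZ
print(pz.sum(), whole_gland.sum(), tz.sum())
```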


Subjects
Deep Learning; Prostatic Neoplasms; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Prostatic Neoplasms/diagnostic imaging; Retrospective Studies
13.
J Opt Soc Am A Opt Image Sci Vis ; 37(4): 568-578, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32400529

ABSTRACT

Optical imaging systems are found everywhere in modern society. They are integral to computer vision, where the goal is often to infer geometric and radiometric information about a 3D environment given limited sensing resources. It is helpful to develop relationships between these real-world properties and the actual measurements that are taken, such as 2D images. To this end, we propose a new relationship between object radiance and image irradiance based on power conservation and a thin-lens imaging model. The relationship has a closed-form solution for in-focus points and can be solved via numerical integration for points that are not in focus. It can be thought of as a generalization of Horn's commonly accepted irradiance equation. Through both ray tracing simulations and comparison to the intensity values of actual images, we find that our equation provides better accuracy than Horn's equation. The improvement is most notable for large lenses and near-focused images, where the pinhole imaging model implicit in Horn's derivation breaks down. Outside of this regime, our model validates the use of Horn's approximation through a more thorough theoretical foundation.
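For reference, Horn's classical irradiance equation mentioned above is commonly stated as E = (π/4) · L · (d/f)² · cos⁴(α). The short sketch below evaluates that reference form with illustrative parameter values; it does not reproduce the paper's generalized thin-lens relation:

```python
# Sketch: Horn's image-irradiance relation E = (pi/4) * L * (d/f)^2 * cos^4(alpha).
import numpy as np

def horn_irradiance(L, d, f, alpha):
    """Image irradiance E from scene radiance L, aperture diameter d,
    focal length f, and off-axis angle alpha (radians)."""
    return (np.pi / 4.0) * L * (d / f) ** 2 * np.cos(alpha) ** 4

# Example values (assumed, for illustration only)
print(horn_irradiance(L=100.0, d=0.01, f=0.05, alpha=np.deg2rad(10)))
```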

14.
Diagnostics (Basel) ; 10(5)2020 May 15.
Article in English | MEDLINE | ID: mdl-32429182

ABSTRACT

BACKGROUND: Our study assesses the diagnostic value of different features extracted from high resolution computed tomography (HRCT) images of patients with idiopathic pulmonary fibrosis. These features are investigated over a range of HRCT lung volume measurements (in Hounsfield Units) for which no prior study has yet been published. In particular, we provide a comparison of their diagnostic value at different Hounsfield Unit (HU) thresholds, including corresponding pulmonary functional tests. METHODS: We consider thirty-two patients retrospectively for whom both HRCT examinations and spirometry tests were available. First, we analyse the HRCT histogram to extract quantitative lung fibrosis features. Next, we evaluate the relationship between pulmonary function and the HRCT features at selected HU thresholds, namely -200 HU, 0 HU, and +200 HU. We model the relationship using a Poisson approximation to identify the measure with the highest log-likelihood. RESULTS: Our Poisson models reveal no difference at the -200 and 0 HU thresholds. However, inferential conclusions change at the +200 HU threshold. Among the HRCT features considered, the percentage of normally attenuated lung at -200 HU shows the most significant diagnostic utility. CONCLUSIONS: The percentage of normally attenuated lung can be used together with qualitative HRCT assessment and pulmonary function tests to enhance the idiopathic pulmonary fibrosis (IPF) diagnostic process.
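The Poisson modelling step described above (fitting the relationship between pulmonary function and an HRCT histogram feature and ranking candidates by log-likelihood) can be sketched with a generalized linear model. Everything below is synthetic and illustrative, including the stand-in count outcome; it is not the study's data or code:

```python
# Sketch: Poisson GLM relating a functional outcome to an HRCT feature; compare fits by log-likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
normally_attenuated_pct = rng.uniform(40, 95, 32)                     # HRCT feature, one value per patient
X = sm.add_constant(normally_attenuated_pct)
counts = rng.poisson(np.exp(0.5 + 0.02 * normally_attenuated_pct))    # synthetic stand-in outcome

model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.llf)          # log-likelihood used to rank candidate features
```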

15.
Comput Biol Med ; 120: 103701, 2020 05.
Article in English | MEDLINE | ID: mdl-32217282

ABSTRACT

Delineation of tumours in Positron Emission Tomography (PET) plays a crucial role in accurate diagnosis and radiotherapy treatment planning. In this context, it is of utmost importance to devise efficient and operator-independent segmentation algorithms capable of reconstructing the tumour's three-dimensional (3D) shape. In previous work, we proposed a system for 3D tumour delineation on PET data (expressed in terms of Standardized Uptake Value, SUV), based on a two-step approach. Step 1 identified the slice enclosing the maximum SUV and generated a rough contour surrounding it. This contour was then used to initialize step 2, where the 3D shape of the tumour was obtained by separately segmenting 2D PET slices, leveraging the slice-by-slice marching approach. Additionally, we combined active contours and machine learning components to improve performance. Despite its success, the slice marching approach poses unnecessary limitations that are naturally removed by performing the segmentation directly in 3D. In this paper, we migrate our system into 3D. In particular, the segmentation in step 2 is now performed by evolving an active surface directly in the 3D space. The key points of this advancement are that it performs the shape reconstruction on the whole stack of slices simultaneously, naturally leveraging cross-slice information that could not be exploited before, and that it does not require any specific stopping condition, as the active surface naturally reaches a stable topology once convergence is achieved. The performance of this fully 3D approach is evaluated on the same dataset discussed in our previous work, which comprises fifty PET scans of lung, head and neck, and brain tumours. The results confirm that a benefit is indeed achieved in practice for all investigated anatomical districts, both quantitatively, through a set of commonly used quality indicators (Dice similarity coefficient > 87.66%, Hausdorff distance < 1.48 voxels, and Mahalanobis distance < 0.82 voxels), and qualitatively in terms of Likert score (> 3 in 54% of the tumours).
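Two of the quality indicators quoted above, the Dice similarity coefficient and the Hausdorff distance in voxels, can be computed on 3D masks as in the toy sketch below (illustrative only, not the authors' evaluation code):

```python
# Sketch: Dice and symmetric Hausdorff distance between two toy 3D masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

gt   = np.zeros((32, 32, 32), dtype=bool); gt[8:20, 8:20, 8:20] = True
pred = np.zeros_like(gt);                  pred[9:21, 8:20, 8:20] = True

dice = 2.0 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum())
p_gt, p_pred = np.argwhere(gt), np.argwhere(pred)
hd = max(directed_hausdorff(p_gt, p_pred)[0], directed_hausdorff(p_pred, p_gt)[0])
print(f"Dice = {dice:.2%}, Hausdorff = {hd:.2f} voxels")
```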


Subjects
Algorithms; Brain Neoplasms; Brain Neoplasms/diagnostic imaging; Humans; Imaging, Three-Dimensional; Machine Learning; Positron-Emission Tomography
16.
SIAM J Imaging Sci ; 13(4): 2029-2062, 2020.
Article in English | MEDLINE | ID: mdl-34336084

ABSTRACT

Following the seminal work of Nesterov, accelerated optimization methods have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with a basin of attraction large enough to contain the initial overshoot. This behavior has made accelerated and stochastic gradient search methods particularly popular within the machine learning community. In their recent PNAS 2016 paper, A Variational Perspective on Accelerated Methods in Optimization, Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated schemes can be cast in a variational framework formulated around the Bregman divergence, leading to continuum-limit ODEs. We show how their formulation may be further extended to infinite-dimensional manifolds (starting here with the geometric space of curves and surfaces) by substituting the Bregman divergence with inner products on the tangent space and explicitly introducing a distributed mass model which evolves in conjunction with the object of interest during the optimization process. The coevolving mass model, which is introduced purely for the sake of endowing the optimization with helpful dynamics, also links the resulting class of accelerated PDE-based optimization schemes to fluid dynamical formulations of optimal mass transport.
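The finite-dimensional starting point of this line of work, Nesterov-style accelerated gradient descent, is sketched below on a simple quadratic to illustrate the "overshoot and settle" behaviour described above. It does not reproduce the variational/Bregman construction or the infinite-dimensional extension; all parameters are assumed for illustration:

```python
# Sketch: Nesterov-accelerated gradient descent on f(x) = 0.5*x^T A x - b^T x.
import numpy as np

def grad(x, A, b):
    return A @ x - b

A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
step = 1.0 / 100.0                 # 1 / L, with L the largest eigenvalue of A

x = y = np.zeros(3)
for k in range(200):
    x_next = y - step * grad(y, A, b)            # gradient step from the look-ahead point
    y = x_next + (k / (k + 3)) * (x_next - x)    # momentum / look-ahead update
    x = x_next

print(np.linalg.norm(A @ x - b))   # residual after acceleration
```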

17.
J Imaging ; 6(11)2020 Nov 19.
Article in English | MEDLINE | ID: mdl-34460569

ABSTRACT

BACKGROUND: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach, applied to the lung parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology used by healthcare operators in radiomics studies, where operator-independent segmentation methods must be used to correctly identify the target and, consequently, the texture-based prediction model. METHODS: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation tasks in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset is composed of 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcome with the gold standard and in terms of their resource requirements. RESULTS: E-Net can be used to obtain accurate (Dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. CONCLUSIONS: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, in order to produce user-independent results.

18.
J Math Imaging Vis ; 62(1): 10-36, 2020 Jan.
Article in English | MEDLINE | ID: mdl-34079176

ABSTRACT

We further develop a new framework, called PDE acceleration, by applying it to calculus of variations problems defined for general functions on ℝⁿ, obtaining efficient numerical algorithms to solve the resulting class of optimization problems based on simple discretizations of their corresponding accelerated PDEs. While the resulting family of PDEs and numerical schemes are quite general, we give special attention to their application to regularized inversion problems, with particular illustrative examples on some popular image processing applications. The method is a generalization of momentum, or accelerated, gradient descent to the PDE setting. For elliptic problems, the descent equations are a nonlinear damped wave equation, instead of a diffusion equation, and the acceleration is realized as an improvement in the CFL condition from Δt ~ Δx² (for diffusion) to Δt ~ Δx (for wave equations). We work out several explicit schemes as well as a semi-implicit numerical scheme, together with their necessary stability constraints, and include recursive update formulations which allow minimal-effort adaptation of existing gradient descent PDE codes into the accelerated PDE framework. We explore these schemes more carefully for a broad class of regularized inversion applications, with special attention to quadratic, Beltrami, and total variation regularization, where the accelerated PDE takes the form of a nonlinear wave equation. Experimental examples demonstrate the application of these schemes for image denoising, deblurring, and inpainting, including comparisons against primal-dual, split Bregman, and ADMM algorithms.
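A minimal 1D sketch of the idea, under assumed parameters and not the paper's code: the quadratic denoising energy E(u) = ½∫(u − f)² + ½λ∫|u_x|² is minimized by evolving the damped wave equation u_tt + a·u_t = λ·u_xx − (u − f) with an explicit scheme whose stable step scales like Δt ~ Δx:

```python
# Sketch: explicit damped-wave ("accelerated") descent for quadratic-regularized 1D denoising.
import numpy as np

n = 200
dx = 1.0 / n
x = np.linspace(0, 1, n)
f = np.sign(np.sin(2 * np.pi * x)) + 0.3 * np.random.default_rng(0).standard_normal(n)  # noisy signal

lam, a = 1e-3, 4.0                          # regularization weight and damping (assumed)
dt = 0.5 * dx / np.sqrt(lam)                # wave-equation CFL: dt proportional to dx

def energy(u):
    return 0.5 * np.sum((u - f) ** 2) * dx + 0.5 * lam * np.sum(np.diff(u) ** 2) / dx

u_prev = u = f.copy()
for _ in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    force = lam * lap - (u - f)                                   # negative energy gradient
    u_next = (2 - a * dt) * u - (1 - a * dt) * u_prev + dt ** 2 * force
    u_prev, u = u, u_next

print(energy(f), energy(u))                 # energy drops as the damped wave settles
```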

19.
Artif Intell Med ; 94: 67-78, 2019 03.
Article in English | MEDLINE | ID: mdl-30871684

ABSTRACT

In the context of cancer delineation using positron emission tomography datasets, we present an innovative approach whose purpose is to tackle the real-time, three-dimensional segmentation task in a fully, or at least nearly fully, automated way. The approach comprises a preliminary initialization phase where the user highlights a region of interest around the cancer on just one slice of the tomographic dataset. The algorithm takes care of identifying an optimal and user-independent region of interest around the anomalous tissue, located on the slice containing the highest standardized uptake value, so as to start the subsequent segmentation task. The three-dimensional volume is then reconstructed using a slice-by-slice marching approach until a suitable automatic stop condition is met. On each slice, the segmentation is performed using an enhanced local active contour based on the minimization of a novel energy functional, which combines the information provided by a machine learning component, discriminant analysis in the present study. As a result, the whole algorithm is almost completely automatic and the output segmentation is independent of the input provided by the user. Phantom experiments comprising spheres and zeolites, and clinical cases comprising various body districts (lung, brain, and head and neck) and two different radio-tracers (18F-fluoro-2-deoxy-D-glucose and 11C-labeled methionine), were used to assess the algorithm's performance. Phantom experiments with spheres and with zeolites showed a Dice similarity coefficient above 90% and 80%, respectively. Clinical cases showed high agreement with the gold standard (R² = 0.98). These results indicate that the proposed method can be efficiently applied in the clinical routine, with potential benefits for treatment response assessment and targeting in radiotherapy.
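The initialization step described above, locating the slice with the maximum SUV and taking a box around it as the starting region of interest, can be sketched as follows. The SUV volume, lesion, and ROI half-width are invented for illustration; the ROI refinement and the active contour itself are omitted:

```python
# Sketch: find the max-SUV slice and crop an initial region of interest around it.
import numpy as np

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 0.5, size=(60, 128, 128))      # synthetic SUV volume (slices, rows, cols)
suv[30, 60:70, 60:70] += 8.0                        # synthetic hot lesion

z, r, c = np.unravel_index(np.argmax(suv), suv.shape)
half = 15                                           # assumed half-width of the initial ROI
roi = suv[z, max(r - half, 0):r + half, max(c - half, 0):c + half]
print(f"max-SUV slice = {z}, ROI shape = {roi.shape}")
```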


Subjects
Algorithms; Neoplasms/diagnostic imaging; Positron-Emission Tomography/methods; Discriminant Analysis; Humans; Retrospective Studies
20.
Comput Biol Med ; 102: 1-15, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30219733

ABSTRACT

Positron Emission Tomography (PET) imaging has an enormous potential to improve radiation therapy treatment planning, offering complementary functional information with respect to other anatomical imaging approaches. The aim of this study is to develop an operator-independent, reliable, and clinically feasible system for biological tumour volume delineation from PET images. Under this design hypothesis, we combine several known approaches in an original way to deploy a system with a high level of automation. The proposed system automatically identifies the optimal region of interest around the tumour and performs a slice-by-slice marching local active contour segmentation. It automatically stops when a "cancer-free" slice is identified. User intervention is limited to drawing an initial rough contour around the cancer region. By design, the algorithm performs the segmentation minimizing any dependence on the initial input, so that the final result is extremely repeatable. To assess its performance under different conditions, our system is evaluated on a dataset comprising five synthetic experiments and fifty oncological lesions located in different anatomical regions (i.e., lung, head and neck, and brain) using PET studies with 18F-fluoro-2-deoxy-d-glucose and 11C-labeled methionine radio-tracers. Results on synthetic lesions demonstrate enhanced performance when compared against the most common PET segmentation methods. In clinical cases, the proposed system produces accurate segmentations (average Dice similarity coefficient: 85.36 ± 2.94%, 85.98 ± 3.40%, 88.02 ± 2.75% in the lung, head and neck, and brain districts, respectively) with high agreement with the gold standard (determination coefficient R² = 0.98). We believe that the proposed system could be efficiently used in the everyday clinical routine as a medical decision tool, and could provide clinicians with additional information, derived from PET, which can be of use in radiation therapy treatment and planning.


Subjects
Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Positron-Emission Tomography; Algorithms; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/secondary; False Positive Reactions; Fluorodeoxyglucose F18; Head and Neck Neoplasms/diagnostic imaging; Humans; Lung Neoplasms/diagnostic imaging; Neoplasm Metastasis; Observer Variation; Pattern Recognition, Automated; Phantoms, Imaging; Predictive Value of Tests; Radiotherapy Planning, Computer-Assisted/methods; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity; Software; Tomography, X-Ray Computed