1.
Article in English | MEDLINE | ID: mdl-38746904

ABSTRACT

Image-enhanced endoscopy (IEE) has advanced gastrointestinal disease diagnosis and treatment. Traditional white-light imaging has limitations in detecting all gastrointestinal diseases, prompting the development of IEE. In this review, we explore the utility of IEE, including texture and color enhancement imaging and red dichromatic imaging, in pancreatobiliary (PB) diseases. IEE encompasses chromoendoscopy as well as optical-digital and digital methods. Chromoendoscopy, using dyes such as indigo carmine, aids in delineating lesions and structures, including pancreato-/cholangio-jejunal anastomoses. Optical-digital methods such as narrow-band imaging enhance mucosal details and vessel patterns, aiding in ampullary tumor evaluation and peroral cholangioscopy. Moreover, red dichromatic imaging, with its specific color allocation, improves the visibility of thick blood vessels in deeper tissues and enhances bleeding points with different colors and see-through effects, proving beneficial in managing bleeding complications after endoscopic sphincterotomy. Color enhancement imaging, a novel digital method, enhances tissue texture, brightness, and color, improving visualization of PB structures, such as PB orifices, anastomotic sites, ampullary tumors, and intraductal PB lesions. Advancements in IEE hold substantial potential for improving the accuracy of PB disease diagnosis and treatment. These innovative techniques offer advantages that pave the way for enhanced clinical management of PB diseases. Further research is warranted to establish their standard clinical utility and explore new frontiers in PB disease management.

2.
Article in English | MEDLINE | ID: mdl-39144408

ABSTRACT

Objectives: We aimed to conduct a systematic review and meta-analysis to assess the value of image-enhanced endoscopy, including blue laser imaging (BLI), linked color imaging, narrow-band imaging (NBI), and texture and color enhancement imaging, for detecting and diagnosing gastric cancer (GC) compared with white-light imaging (WLI). Methods: Studies meeting the inclusion criteria were identified through PubMed, Cochrane Library, and Japan Medical Abstracts Society database searches. The pooled risk ratio for dichotomous variables was calculated using a random-effects model to compare GC detection between WLI and image-enhanced endoscopy. A random-effects model was also used to calculate the overall diagnostic performance of WLI and magnifying image-enhanced endoscopy for GC. Results: Sixteen studies met the inclusion criteria. The detection rate of GC was significantly improved with linked color imaging compared with WLI (risk ratio, 2.20; 95% confidence interval [CI], 1.39-3.25; p < 0.01) with mild heterogeneity. Magnifying endoscopy with NBI (ME-NBI) obtained a pooled sensitivity, specificity, and area under the summary receiver operating characteristic curve of 0.84 (95% CI, 0.80-0.88), 0.96 (95% CI, 0.94-0.97), and 0.92, respectively. Similarly, ME-BLI showed a pooled sensitivity, specificity, and area under the curve of 0.81 (95% CI, 0.77-0.85), 0.85 (95% CI, 0.82-0.88), and 0.95, respectively. The diagnostic efficacy of ME-NBI/BLI for GC was evidently higher than that of WLI; however, significant heterogeneity among the NBI studies remained. Conclusions: Our meta-analysis showed a high detection rate for linked color imaging and a high diagnostic performance of ME-NBI/BLI for GC compared with WLI.
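The pooled risk ratio described above comes from a random-effects meta-analysis. As a point of reference, the sketch below implements standard DerSimonian-Laird random-effects pooling of log risk ratios; the per-study event counts are invented for illustration and are not the data from the sixteen included studies.

```python
import numpy as np

def pooled_risk_ratio(events_tx, n_tx, events_ctl, n_ctl):
    """DerSimonian-Laird random-effects pooling of per-study log risk ratios.
    Inputs are hypothetical event counts and sample sizes per study."""
    e_t, n_t = np.asarray(events_tx, float), np.asarray(n_tx, float)
    e_c, n_c = np.asarray(events_ctl, float), np.asarray(n_ctl, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1/e_t - 1/n_t + 1/e_c - 1/n_c                  # variance of each log RR
    w = 1 / var                                          # fixed-effect weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mu_fe) ** 2)                # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)                              # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([mu - 1.96 * se, mu + 1.96 * se])
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0  # heterogeneity (I^2, %)
    return np.exp(mu), ci, i2

# Hypothetical per-study counts: cancers detected / patients (image-enhanced vs. WLI arms)
rr, ci, i2 = pooled_risk_ratio([30, 25, 40], [400, 350, 500],
                               [14, 12, 19], [400, 350, 500])
print(f"pooled RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.0f}%")
```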

3.
J Biomed Opt; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics should be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. We then calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary by up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
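Since the paper's point is that SNR and contrast depend on the chosen formula and background region, the toy sketch below (synthetic arrays, not the authors' phantom data) contrasts a few definitions commonly seen in the literature; the region names and intensity values are assumptions for illustration only.

```python
import numpy as np

def snr_db(signal_roi, background_roi):
    """One common SNR definition: mean signal over background standard deviation, in dB."""
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

def contrast_weber(signal_roi, background_roi):
    """Weber contrast: (I_signal - I_background) / I_background."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.mean()

def contrast_ratio(signal_roi, background_roi):
    """Target-to-background ratio, another definition used in the literature."""
    return signal_roi.mean() / background_roi.mean()

rng = np.random.default_rng(0)
target  = rng.normal(200.0, 5.0, size=(50, 50))   # bright fluorescent well (synthetic)
bg_near = rng.normal(20.0, 2.0, size=(50, 50))    # background region next to the target
bg_far  = rng.normal(10.0, 8.0, size=(50, 50))    # background region far from the target

# The same image yields different scores depending on the background region and formula chosen.
print(f"SNR (near bg): {snr_db(target, bg_near):.1f} dB, SNR (far bg): {snr_db(target, bg_far):.1f} dB")
print(f"Weber contrast: {contrast_weber(target, bg_near):.1f}, ratio contrast: {contrast_ratio(target, bg_near):.1f}")
```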


Subjects
Benchmarking, Molecular Imaging, Optical Imaging, Phantoms, Imaging, Signal-to-Noise Ratio, Molecular Imaging/methods, Molecular Imaging/standards, Optical Imaging/methods, Optical Imaging/standards, Image Processing, Computer-Assisted/methods
4.
Sci Rep; 14(1): 17936, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095507

ABSTRACT

Recently, we developed an algorithm to quantitatively evaluate the roughness of spherical microparticles using scanning electron microscopy (SEM) images. The algorithm calculates the root-mean-squared profile roughness (RMS-RQ) of a single particle by analyzing the particle's boundary. However, the information extracted from a single SEM image yields only two-dimensional (2D) profile roughness data from the horizontal plane of a particle. The present study offers a practical procedure and the necessary software tools to gain quasi-three-dimensional (3D) information from 2D particle contours recorded at different particle inclinations by tilting the sample (stage). This new approach was tested on a set of polystyrene core-iron oxide shell-silica shell particles, a few micrometers in size, with different (tailored) surface roughness, providing the proof of principle that validates the applicability of the proposed method. SEM images of these particles were analyzed with the latest version of the algorithm, which allows roughness analysis of particles both within a batch and across batches as a routine quality-control procedure. A separate set of particles was analyzed by atomic force microscopy (AFM), a powerful complementary surface analysis technique integrated into the SEM, and the roughness results were compared.
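The exact RMS-RQ algorithm is not given in the abstract; the sketch below only illustrates the general idea of profile roughness from a 2D contour: fit a reference circle to the particle boundary by least squares and take the RMS of the radial deviations. The synthetic contour stands in for a boundary extracted from an SEM image.

```python
import numpy as np

def rms_profile_roughness(boundary_xy):
    """Estimate 2D profile roughness of a near-spherical particle from its boundary contour.
    Fits a circle by linear least squares and returns the RMS of the radial residuals."""
    x, y = boundary_xy[:, 0], boundary_xy[:, 1]
    # Circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c  (linear in a, b, c)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    radius = np.sqrt(c + a**2 + b**2)
    radial = np.hypot(x - a, y - b) - radius       # radial deviation of each contour point
    return np.sqrt(np.mean(radial**2))             # RMS roughness, in contour units

# Synthetic rough circle standing in for a contour extracted from an SEM image
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
r = 1000 + 5 * np.sin(12 * theta)                  # nm-scale radius with surface waviness
contour = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
print(f"RMS roughness = {rms_profile_roughness(contour):.2f} (contour units)")
```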

5.
BMC Med Imaging; 24(1): 201, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095688

ABSTRACT

Skin cancer stands as one of the foremost challenges in oncology, with its early detection being crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly convolutional neural networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, this study introduces a CNN model tailored for the nuanced task of skin lesion classification. The model's architecture comprises multiple convolutional, pooling, and dense layers, aimed at capturing the complex visual features of skin lesions. To address the challenge of class imbalance within the dataset, a data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. Together, the optimized layer configuration and data augmentation significantly boost diagnostic precision in skin cancer detection. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A ModelCheckpoint callback ensures the preservation of the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78% with a notable precision of 97.9%, recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool in the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
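A minimal Keras sketch of the training setup described (augmentation, Adam, 50 epochs, batch size 128, checkpointing of the best model). The layer sizes, input resolution, and variable names are assumptions, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Compact CNN for 7-class HAM10000 lesion classification; layer sizes are illustrative only.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.RandomFlip("horizontal"), layers.RandomRotation(0.1),   # on-the-fly augmentation
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"), layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),          # 7 HAM10000 lesion classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

ckpt = tf.keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True,
                                          monitor="val_accuracy")
# x_train/y_train and x_val/y_val: preprocessed HAM10000 images and labels (not shown here)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, batch_size=128, callbacks=[ckpt])
```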


Subjects
Deep Learning, Dermoscopy, Neural Networks, Computer, Skin Neoplasms, Humans, Skin Neoplasms/diagnostic imaging, Skin Neoplasms/pathology, Dermoscopy/methods, Image Interpretation, Computer-Assisted/methods
6.
Med Biol Eng Comput; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39098859

ABSTRACT

Glaucoma is one of the most common causes of blindness in the world. Screening for glaucoma from retinal fundus images based on deep learning is currently a common approach. In deep learning-based glaucoma diagnosis, the blood vessels within the optic disc interfere with the diagnosis, and fundus images also contain pathological information outside the optic disc. Therefore, integrating the original fundus image with the vessel-removed optic disc image can improve diagnostic efficiency. In this paper, we propose a novel multi-step framework named MSGC-CNN for glaucoma diagnosis. In the framework, (1) we combine glaucoma pathological knowledge with a deep learning model, fuse the features of the original fundus image and the optic disc region, in which blood vessel interference is specifically removed by U-Net, and make the glaucoma diagnosis based on the fused features; (2) in view of the characteristics of glaucoma fundus images, such as the small amount of data, high resolution, and rich feature information, we design a new feature extraction network, RA-ResNet, and combine it with transfer learning. To verify our method, we conduct binary classification experiments on three public datasets, Drishti-GS, RIM-ONE-R3, and ACRIMA, achieving accuracies of 92.01%, 93.75%, and 97.87%, respectively. The results demonstrate a significant improvement over earlier results.
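A minimal PyTorch sketch of the two-branch fusion idea: one branch processes the original fundus image and the other the vessel-removed optic-disc region, with the extracted features concatenated before classification. The module is a generic illustration, not RA-ResNet or the authors' MSGC-CNN.

```python
import torch
import torch.nn as nn

class FusionGlaucomaNet(nn.Module):
    """Illustrative two-branch classifier: one branch sees the full fundus image, the other
    the vessel-removed optic-disc crop (e.g. produced by a U-Net); features are fused."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                                 nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fundus_branch = branch()
        self.disc_branch = branch()
        self.classifier = nn.Linear(64, 2)          # glaucoma vs. normal

    def forward(self, fundus, disc_no_vessels):
        f = torch.cat([self.fundus_branch(fundus), self.disc_branch(disc_no_vessels)], dim=1)
        return self.classifier(f)

net = FusionGlaucomaNet()
logits = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 128, 128))
print(logits.shape)   # torch.Size([2, 2])
```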

7.
Article in English | MEDLINE | ID: mdl-39099146

ABSTRACT

Modeling the deflection of bevel-tipped flexible needles during insertion into soft tissue is crucial for robot-assisted needle insertion toward specific target locations within the human body in percutaneous biopsy surgery. This paper proposes a mechanical model based on cutting-force identification to predict the deflection of flexible needles in soft tissue. Unlike other models, this method does not require measuring the Young's modulus (E) and Poisson's ratio (ν) of the tissue, which require complex hardware to obtain. In the model, the needle puncture process is discretized into a series of uniform-depth puncture steps. The needle is simplified as a cantilever beam supported by a series of virtual springs, and the influence of tissue stiffness on needle deformation is represented by the spring stiffness coefficients of the virtual springs. Through theoretical modeling and experimental identification of the cutting force, the spring stiffness coefficients are obtained, thereby modeling the deflection of the needle. To verify the accuracy of the proposed model, the model predictions were compared with the deflections measured in puncture experiments on polyvinyl alcohol (PVA) gel samples; the average maximum error predicted by the model ranged between 0.606 ± 0.167 mm and 1.005 ± 0.174 mm, showing that the model can successfully predict needle deflection. This work will contribute to the design of automatic control strategies for needles.
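A minimal numerical sketch of the modeling idea: the needle is discretized into Euler-Bernoulli beam elements clamped at the base, virtual lateral springs represent tissue support, and a transverse cutting force acts at the tip. All stiffness and force values below are invented for illustration; in the paper the spring coefficients are identified from measured cutting forces.

```python
import numpy as np

def needle_deflection(n_elem=20, length=0.15, EI=2e-3, k_spring=200.0, tip_force=0.3):
    """Cantilever needle resting on virtual lateral springs (Winkler-type tissue support).
    EI: bending stiffness [N*m^2]; k_spring: spring stiffness per node [N/m];
    tip_force: transverse cutting force at the tip [N]. All values are illustrative."""
    L = length / n_elem
    ndof = 2 * (n_elem + 1)                        # [deflection, rotation] per node
    K = np.zeros((ndof, ndof))
    ke = (EI / L**3) * np.array([[ 12,   6*L,  -12,   6*L],
                                 [6*L, 4*L*L, -6*L, 2*L*L],
                                 [-12,  -6*L,   12,  -6*L],
                                 [6*L, 2*L*L, -6*L, 4*L*L]])
    for e in range(n_elem):                        # assemble Euler-Bernoulli beam elements
        i = 2 * e
        K[i:i+4, i:i+4] += ke
    for node in range(1, n_elem + 1):              # virtual tissue springs on inserted nodes
        K[2*node, 2*node] += k_spring
    F = np.zeros(ndof)
    F[-2] = tip_force                              # lateral cutting force at the needle tip
    free = np.arange(2, ndof)                      # clamp the base node (deflection + rotation)
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u[0::2]                                 # nodal deflections along the needle

print(f"tip deflection = {needle_deflection()[-1] * 1000:.2f} mm")
```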

8.
Adv Sci (Weinh); e2405667, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39101243

ABSTRACT

The risk of information leakage increases as images become a crucial medium for information sharing. There is a great need to further develop versatile image encryption technology to protect confidential and sensitive information. Herein, exploiting the high spatial redundancy (strong correlation of neighboring pixels) of images and the in situ encryption function of a quantum dot-functionalized encryption camera, in situ image encryption is achieved by designing quantum dot films (size, color, and full width at half maximum) that modify the correlation and reduce the spatial redundancy of the captured image during encryption. The correlation coefficients of the simulated encrypted images closely approach 0. High-quality decrypted images are achieved, with a PSNR of more than 35 dB, by a convolutional neural network-based algorithm that meets the resolution requirements of human visual perception. Compared with the traditional, chaotic, and neural network-based image encryption algorithms described previously, this approach provides a universal, efficient, and effective in situ image encryption method.
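The reported metrics (adjacent-pixel correlation near 0 for the encrypted image, decryption PSNR above 35 dB) can be computed as in the sketch below; the images here are synthetic stand-ins, and the formulas are the standard definitions rather than the authors' evaluation code.

```python
import numpy as np

def adjacent_pixel_correlation(img, axis=1):
    """Correlation coefficient between adjacent pixel pairs along one axis."""
    a = img.take(np.arange(img.shape[axis] - 1), axis=axis).ravel().astype(float)
    b = img.take(np.arange(1, img.shape[axis]), axis=axis).ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between the original and the decrypted image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
x = np.arange(256, dtype=float)
plain = (np.add.outer(x, x) / 2).astype(np.uint8)              # smooth, highly redundant image
cipher = rng.integers(0, 256, plain.shape, dtype=np.uint8)     # stand-in for the encrypted image
decrypted = np.clip(plain + rng.normal(0, 3, plain.shape), 0, 255).astype(np.uint8)

print("plain-image correlation:", adjacent_pixel_correlation(plain))    # close to 1
print("cipher-image correlation:", adjacent_pixel_correlation(cipher))  # close to 0
print("decryption PSNR (dB):", psnr(plain, decrypted))                  # > 35 dB target in the paper
```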

9.
J Ultrasound; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39102104

ABSTRACT

Intracerebral hemorrhage (ICH) is a common neurosurgical emergency that is associated with high morbidity and mortality. Minimally invasive or endoscopic hematoma evacuation has emerged in recent years as a viable alternative to conventional large craniotomies. However, accurate trajectory planning and placement of the tubular retractor remain a challenge. We describe a novel technique for handheld portable ultrasound-guided minimally invasive endoscopic evacuation of supratentorial hematomas. A 64-year-old male diagnosed with a right basal ganglia hematoma (48.5 mL) was treated with emergent ultrasound-guided endoscopic transtubular evacuation through a small craniotomy. Ultrasound guidance facilitated optimal placement of the tubular retractor along the long axis of the hematoma and allowed near-total evacuation, reducing iatrogenic tissue damage by mitigating the need for wanding or repositioning of the retractor. The emergence of a new generation of small portable phased-array ultrasound probes with improved resolution and clarity has broadened ultrasound's clinical applications.

10.
Sci Rep; 14(1): 17799, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090172

ABSTRACT

Aerial image target detection is essential for urban planning, traffic monitoring, and disaster assessment. However, existing detection algorithms struggle with small target recognition and accuracy in complex environments. To address this issue, this paper proposes an improved model based on YOLOv8, named MPE-YOLO. Initially, a multilevel feature integrator (MFI) module is employed to enhance the representation of small target features, which meticulously moderates information loss during the feature fusion process. For the backbone network of the model, a perception enhancement convolution (PEC) module is introduced to replace traditional convolutional layers, thereby expanding the network's fine-grained feature processing capability. Furthermore, an enhanced scope-C2f (ES-C2f) module is designed, utilizing channel expansion and stacking of multiscale convolutional kernels to enhance the network's ability to capture small target details. After a series of experiments on the VisDrone, RSOD, and AI-TOD datasets, the model has not only demonstrated superior performance in aerial image detection tasks compared to existing advanced algorithms but also achieved a lightweight model structure. The experimental results demonstrate the potential of MPE-YOLO in enhancing the accuracy and operational efficiency of aerial target detection. Code will be available online (https://github.com/zhanderen/MPE-YOLO).

11.
Sci Rep; 14(1): 17871, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090197

ABSTRACT

Besides providing a basis for assigning elements to clusters, cluster analysis can also detect abnormalities, and abnormality detection is a well-developed area of unsupervised learning. However, existing studies have mainly focused on discrete data rather than probability density functions (PDFs). This paper develops a possibilistic approach to clustering probability density functions that handles abnormal elements. First, the data are represented by their density functions. Then, they are passed through the proposed algorithm to produce a possibilistic partition. Finally, a decision rule is established to recognize which functions are abnormal. We compare the proposed algorithm with baseline algorithms for clustering PDFs, such as k-means, FCF, and Self-Updated Clustering. The results of three numerical examples illustrate the new method. The proposed algorithm reaches 100% accuracy on simulated benchmark data and outperforms the baseline methods. Additionally, the last two examples, applied to image data, reach G-means of 96-100% (sensitivity: 92-100%; specificity: 100%). The proposed method can be used to understand the internal structure of big data in the digital age through probability density functions.
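A baseline sketch of the overall pipeline: represent each element by an estimated density, cluster the discretized PDFs, and flag elements far from every centroid as abnormal. It uses k-means, one of the baseline clusterers mentioned in the abstract; the possibilistic partition itself is not reproduced here, and the data and threshold rule are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
grid = np.linspace(-6, 6, 200)

# Synthetic samples: two normal groups plus one abnormal (heavy-tailed) element
samples = [rng.normal(-2, 1, 300) for _ in range(5)] + \
          [rng.normal(2, 1, 300) for _ in range(5)] + \
          [rng.standard_t(1, 300)]

# Represent each element by its estimated probability density function on a common grid
pdfs = np.array([gaussian_kde(s)(grid) for s in samples])

# Baseline clustering of the discretized PDFs (the paper's possibilistic method refines this)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pdfs)
dist_to_centroid = np.linalg.norm(pdfs - km.cluster_centers_[km.labels_], axis=1)

# Simple robust decision rule (an assumption): flag elements far from every cluster centroid
med = np.median(dist_to_centroid)
mad = np.median(np.abs(dist_to_centroid - med))
print("flagged as abnormal:", np.where(dist_to_centroid > med + 5 * mad)[0])
```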

12.
Sci Rep; 14(1): 17807, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090344

ABSTRACT

In recent years, a novel x-ray imaging modality has emerged that reveals unresolved sample microstructure via a "dark-field image", which provides information complementary to conventional "bright-field" images, such as the attenuation and phase-contrast modalities. This x-ray dark-field signal is produced by unresolved microstructures scattering the x-ray beam, resulting in localised image blur. Dark-field retrieval techniques extract this blur to reconstruct a dark-field image. Unfortunately, the presence of non-dark-field blur, such as source-size blur or the detector point-spread function, can affect the dark-field retrieval, since these also blur the experimental image. In addition, dark-field images can be degraded by artefacts induced by large intensity gradients from attenuation and propagation-based phase contrast, particularly around sample edges. By measuring any non-dark-field blurring across the image plane and removing it from the experimental images, as well as removing attenuation and propagation-based phase contrast, we show that a directional dark-field image can be retrieved with fewer artefacts and more consistent quantitative measures. We present the details of these corrections and provide "before and after" directional dark-field images of samples imaged at a synchrotron source. This paper utilises single-grid directional dark-field imaging, but these corrections have the potential to be broadly applied to other x-ray imaging techniques.
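As a generic illustration of removing a known, non-dark-field blur before retrieval, the sketch below deconvolves a measured Gaussian point-spread function with a Wiener filter on synthetic data; it is not the authors' single-grid retrieval or correction pipeline, and the PSF width and balance parameter are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

rng = np.random.default_rng(0)

# Synthetic "experimental" image degraded only by the unwanted source/detector blur
truth = np.zeros((256, 256)); truth[96:160, 96:160] = 1.0
measured_psf_sigma = 2.0                                # e.g. characterised from a test pattern
blurred = gaussian_filter(truth, measured_psf_sigma) + rng.normal(0, 0.01, truth.shape)

# Explicit PSF kernel matching the measured blur
k = np.zeros((31, 31)); k[15, 15] = 1.0
psf = gaussian_filter(k, measured_psf_sigma); psf /= psf.sum()

# Wiener deconvolution removes the known non-dark-field blur before dark-field retrieval
deblurred = restoration.wiener(blurred, psf, balance=0.01)
print("mean residual error:", np.abs(deblurred - truth).mean())
```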

13.
EJNMMI Phys; 11(1): 70, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39090442

ABSTRACT

BACKGROUND: Accurately reorienting reconstructed positron emission tomography (PET) images into short-axis (SA) images is of great significance for subsequent clinical diagnosis. We developed a system for the automatic reorientation and quantitative analysis of myocardial PET images. METHODS: A total of 128 patients were enrolled for 18F-FDG PET/CT myocardial metabolic images (MMIs), covering 3 image classes: without defects, with defects, and excess uptake. The automatic reorientation system includes five modules: regional division, myocardial segmentation, ellipsoid fitting, image rotation, and quantitative analysis. First, left ventricular geometry-based Canny edge detection (LVG-CED) was developed and compared with 5 other common region segmentation algorithms, and the optimal partitioning was determined based on the partition success rate. Then, 9 myocardial segmentation methods and 4 ellipsoid fitting methods were combined to derive 36 cross combinations, whose diagnostic performance was evaluated in terms of the Pearson correlation coefficient (PCC), Kendall correlation coefficient (KCC), Spearman correlation coefficient (SCC), and determination coefficient. Finally, the deflection angles were computed by ellipsoid fitting, and the SA images were derived by affine transformation. Furthermore, polar maps were used for quantitative analysis of the SA images, and the reorientation results for the 3 image classes were analyzed using correlation coefficients. RESULTS: On the dataset, LVG-CED outperformed the other methods in the regional division module with a 100% success rate. Among the 36 cross combinations, PSO-FCM and LLS-SVD performed best in terms of correlation coefficients. The linear results indicate that our algorithm (LVG-CED, PSO-FCM, and LLS-SVD) has good consistency with the reference manual method. In the quantitative analysis, the similarities between our method and the reference manual method were higher than 96% for all 17 segments. Moreover, our method demonstrated excellent performance in all 3 image classes. CONCLUSION: Our algorithm system can realize accurate automatic reorientation and quantitative analysis of PET MMIs and is also effective for images suffering from interference.
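The agreement metrics named above (PCC, KCC, SCC, determination coefficient) and the final re-slicing step can be sketched as below; the angle values and the toy volume are invented for illustration and do not come from the study data.

```python
import numpy as np
from scipy import stats, ndimage

# Hypothetical deflection angles (degrees) from the automated pipeline vs. a manual reference
auto_angles = np.array([32.1, 28.4, 45.0, 39.7, 25.3, 41.2, 36.8, 30.5])
manual_angles = np.array([31.5, 29.0, 44.1, 40.3, 24.8, 42.0, 36.0, 31.1])

pcc, _ = stats.pearsonr(auto_angles, manual_angles)     # Pearson correlation coefficient
kcc, _ = stats.kendalltau(auto_angles, manual_angles)   # Kendall correlation coefficient
scc, _ = stats.spearmanr(auto_angles, manual_angles)    # Spearman correlation coefficient
r2 = np.corrcoef(auto_angles, manual_angles)[0, 1] ** 2 # determination coefficient (R^2)
print(f"PCC={pcc:.3f}, KCC={kcc:.3f}, SCC={scc:.3f}, R^2={r2:.3f}")

# Re-slicing a volume toward short-axis orientation once a deflection angle is known
volume = np.zeros((64, 64, 64)); volume[20:40, 25:45, 30:50] = 1.0   # toy "myocardium"
short_axis = ndimage.rotate(volume, angle=32.1, axes=(0, 2), reshape=False, order=1)
```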

14.
Radiat Oncol; 19(1): 100, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090614

ABSTRACT

BACKGROUND: We report the results of a retrospective analysis of localized prostate cancer (LPCa) treated with transperineal ultrasound image-guided radiotherapy (TPUS-IGRT). METHODS: A total of 124 patients (median age: 74 y, range: 46-84 y) with LPCa who underwent TPUS-IGRT (Clarity Autoscan system; CAS, Elekta; Stockholm, Sweden) between April 2016 and October 2021 with curative intent or after hormone induction were enrolled. The number of patients by risk (National Comprehensive Cancer Network 2019) was 7, 25, 42, and 50 for low (LR), good intermediate (good IR), poor intermediate (poor IR), and high (HR)/very high (VHR), respectively. Ninety-five patients were given neoadjuvant hormonal therapy. The planning target volume margin was 3 mm on the rectal side in most cases, 5-7 mm superiorly/inferiorly, and 5 mm anteriorly and laterally. The prescribed dose was, in principle, 74 Gy (LR), 76 Gy (good IR), and 76-78 Gy (poor IR or above). CAS was equipped with a real-time prostate intrafraction monitoring (RTPIFM) system. When a displacement of 2-3 mm or more was detected, irradiation was paused, and the patients were placed on standby for prostate reinstatement/recorrection. Of the 3135 fractions in 85 patients for whom RTPIFM was performed, 1008 fractions (32.1%) were recorrected at least once after starting irradiation. RESULTS: A total of 123 patients completed the radiotherapy course. The 5-year overall survival rate was 95.9%. The 5-year biochemical prostate-specific antigen relapse-free survival rate (bPFS) was 100% for LR, 92.9% for IR, and 93.2% for HR/VHR (Phoenix method). The 5-year rate of Grade 2+ late toxicity was 7.4% for genitourinary (GU) and 6.5% for gastrointestinal (GI) organs. Comparing the ≤76 Gy group with the 78 Gy group, the incidence of late toxicity was higher in the 78 Gy group for both GU and GI organs. CONCLUSION: These results suggest that TPUS-IGRT is well tolerated, as the bPFS and incidence of late toxicity are almost comparable to those reported for other forms of image-guided radiotherapy.


Subjects
Prostatic Neoplasms, Radiotherapy, Image-Guided, Humans, Male, Prostatic Neoplasms/radiotherapy, Prostatic Neoplasms/pathology, Prostatic Neoplasms/diagnostic imaging, Aged, Radiotherapy, Image-Guided/methods, Retrospective Studies, Aged, 80 and over, Middle Aged, Treatment Outcome, Radiotherapy Dosage, Radiotherapy, Intensity-Modulated/methods, Perineum, Radiotherapy Planning, Computer-Assisted/methods
15.
Cancer Imaging; 24(1): 101, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090668

ABSTRACT

OBJECTIVES: The roles of magnetic resonance imaging (MRI)-based radiomics and deep learning approaches in cervical adenocarcinoma (AC) have not been explored. Herein, we aim to develop prognosis-predictive models based on MRI radiomics and clinical features for AC patients. METHODS: Clinical and pathological information from 197 patients with cervical AC was collected and analyzed. For each patient, 107 radiomics features were extracted from T2-weighted MRI images. Feature selection was performed using Spearman correlation and random forest (RF) algorithms, and predictive models were built using the support vector machine (SVM) technique. Deep learning models were also trained with T2-weighted MRI images and clinicopathological features using a convolutional neural network (CNN). Kaplan-Meier curves were analyzed using the significant features. In addition, information from another group of 56 AC patients was used for independent validation. RESULTS: A total of 107 radiomics features and 6 clinicopathological features (age, FIGO stage, differentiation, invasion depth, lymphovascular space invasion (LVSI), and lymph node metastasis (LNM)) were included in the analysis. When predicting 3-year, 4-year, and 5-year disease-free survival (DFS), the model trained solely on radiomics features achieved AUC values of 0.659 (95% CI: 0.620-0.716), 0.791 (95% CI: 0.603-0.922), and 0.853 (95% CI: 0.745-0.912), respectively. The combined model, incorporating both radiomics and clinicopathological features, outperformed the radiomics model with AUC values of 0.934 (95% CI: 0.885-0.981), 0.937 (95% CI: 0.867-0.995), and 0.916 (95% CI: 0.857-0.970), respectively. For the deep learning models, the MRI-based models achieved AUCs of 0.857, 0.777, and 0.828 for 3-year, 4-year, and 5-year DFS prediction, respectively, and the combined deep learning models achieved improved performance, with AUCs of 0.903, 0.862, and 0.969. In the independent test set, the combined model achieved AUCs of 0.873, 0.858, and 0.914 for 3-year, 4-year, and 5-year DFS prediction, respectively. CONCLUSIONS: We demonstrated the prognostic value of integrating MRI-based radiomics and clinicopathological features in cervical adenocarcinoma. Both radiomics and deep learning models showed improved predictive performance when combined with clinical data, emphasizing the importance of a multimodal approach in patient management.
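A minimal scikit-learn sketch of the radiomics arm: feature selection followed by an RBF-kernel SVM, evaluated by AUC. The feature matrix below is synthetic; in practice the 107 radiomics features would come from a toolkit such as pyradiomics, and the Spearman-correlation filtering step is omitted here, so with random data the AUC is around 0.5.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(197, 113))     # 107 radiomics + 6 clinicopathological features (synthetic)
y = rng.integers(0, 2, size=197)    # 3-year DFS event label (synthetic)

# Feature selection: keep the top features ranked by random-forest importance
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```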


Subjects
Adenocarcinoma, Deep Learning, Magnetic Resonance Imaging, Radiomics, Uterine Cervical Neoplasms, Adult, Aged, Female, Humans, Middle Aged, Adenocarcinoma/diagnostic imaging, Adenocarcinoma/pathology, Adenocarcinoma/surgery, Lymphatic Metastasis/diagnostic imaging, Magnetic Resonance Imaging/methods, Neoplasm Staging, Prognosis, Retrospective Studies, Uterine Cervical Neoplasms/diagnostic imaging, Uterine Cervical Neoplasms/pathology
16.
Radiol Phys Technol; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39096446

ABSTRACT

Deep learning, particularly convolutional neural networks (CNNs), has advanced positron emission tomography (PET) image reconstruction. However, it requires extensive, high-quality training datasets. Unsupervised learning methods, such as deep image prior (DIP), have shown promise for PET image reconstruction. Although DIP-based PET image reconstruction methods demonstrate superior performance, they involve highly time-consuming calculations. This study proposed a two-step optimization method to accelerate end-to-end DIP-based PET image reconstruction and improve PET image quality. The proposed two-step method comprised a pre-training step using conditional DIP denoising, followed by an end-to-end reconstruction step with fine-tuning. Evaluations using Monte Carlo simulation data demonstrated that the proposed two-step method significantly reduced the computation time and improved the image quality, thereby rendering it a practical and efficient approach for end-to-end DIP-based PET image reconstruction.

17.
Clin Genitourin Cancer; 22(5): 102155, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39096564

ABSTRACT

INTRODUCTION: Treatment of men with metastatic prostate cancer can be difficult due to the heterogeneity of response across lesions. [68Ga]Ga-PSMA-11 (PSMA) PET/CT assists with monitoring and directing clinical intervention; however, the impact of response heterogeneity has yet to be related to outcome measures. The aim of this study was to assess the impact of quantitative imaging information on the value of PSMA PET/CT for assessing patient outcomes in response evaluation. PATIENTS AND METHODS: Baseline and follow-up (6-month) PSMA PET/CT scans of 162 men with oligometastatic PC treated with standard clinical care were acquired between 2015 and 2016 for analysis. An augmentative software medical device was used to track lesions between scans and quantify lesion change, categorizing each lesion as new, increasing, stable, decreasing, or disappeared. Quantitative imaging features describing the size, intensity, extent, change, and heterogeneity of change (based on percent change in SUVtotal) among lesions were extracted and evaluated for association with overall survival (OS) using Cox regression models. Model performance was evaluated using the c-index. RESULTS: Forty-one subjects (25%) demonstrated a heterogeneous response at follow-up, defined as having at least 1 new or increasing lesion and at least 1 decreasing or disappeared lesion. Subjects with a heterogeneous response demonstrated significantly shorter OS than subjects without (median OS = 76.6 months vs. median OS not reached, P < .05, c-index = 0.61). In univariate analyses, SUVtotal at follow-up was most strongly associated with OS (HR = 1.29 [1.19, 1.40], P < .001, c-index = 0.73). Multivariable models using heterogeneity-of-change features demonstrated higher performance (c-index = 0.79) than models without (c-index = 0.71-0.76, P < .05). CONCLUSION: Augmentative software tools enhance the evaluation of change on serial PSMA PET scans and can facilitate lesion-level evaluation between timepoints. This study demonstrates that a heterogeneous response at the lesion level may adversely impact patient outcomes and supports further investigation of the role of imaging in guiding individualized patient management to improve clinical outcomes.
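A minimal lifelines sketch of the survival analysis described (Cox regression of overall survival on lesion-change features, with the concordance index as the performance measure). The DataFrame columns and values below are hypothetical stand-ins, not the study data, so the fitted hazard ratios are not meaningful.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 162
# Hypothetical per-patient features derived from lesion tracking between baseline and follow-up
df = pd.DataFrame({
    "suv_total_followup": rng.lognormal(3.0, 0.8, n),
    "heterogeneous_response": rng.integers(0, 2, n),  # >=1 new/increasing AND >=1 decreasing lesion
    "os_months": rng.exponential(60, n).round(1),
    "death": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])        # hazard ratios and p-values per covariate
print("c-index:", cph.concordance_index_)     # discrimination of the fitted model
```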

18.
Comput Biol Med; 180: 108944, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39096609

ABSTRACT

BACKGROUND: A single learning algorithm can produce deep learning-based image segmentation models that vary in performance purely due to random effects during training. This study assessed the effect of these random performance fluctuations on the reliability of standard methods of comparing segmentation models. METHODS: The influence of random effects during training was assessed by running a single learning algorithm (nnU-Net) with 50 different random seeds for three multiclass 3D medical image segmentation problems, including brain tumour, hippocampus, and cardiac segmentation. Recent literature was sampled to find the most common methods for estimating and comparing the performance of deep learning segmentation models. Based on this, segmentation performance was assessed using both hold-out validation and 5-fold cross-validation, and the statistical significance of performance differences was measured using the paired t-test and the Wilcoxon signed-rank test on Dice scores. RESULTS: For the different segmentation problems, the seed producing the highest mean Dice score statistically significantly outperformed between 0% and 76% of the remaining seeds when estimating performance using hold-out validation, and between 10% and 38% when estimating performance using 5-fold cross-validation. CONCLUSION: Random effects during training can cause high rates of statistically significant performance differences between segmentation models from the same learning algorithm. Whilst statistical testing is widely used in contemporary literature, our results indicate that a statistically significant difference in segmentation performance is a weak and unreliable indicator of a true performance difference between two learning algorithms.
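The two significance tests named above can be applied to paired per-case Dice scores as in the sketch below; the Dice values are simulated to mimic two training runs of the same algorithm with different random seeds and are not the study's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Per-case Dice scores for two models trained with the same algorithm but different seeds
dice_seed_a = np.clip(rng.normal(0.85, 0.05, 50), 0, 1)
dice_seed_b = np.clip(dice_seed_a + rng.normal(0.01, 0.02, 50), 0, 1)  # small random shift

t_stat, p_t = stats.ttest_rel(dice_seed_a, dice_seed_b)   # paired t-test
w_stat, p_w = stats.wilcoxon(dice_seed_a, dice_seed_b)    # Wilcoxon signed-rank test
print(f"paired t-test p = {p_t:.4f}, Wilcoxon p = {p_w:.4f}")
```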

19.
Comput Biol Med; 180: 108970, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39096606

ABSTRACT

Huntington's disease (HD) is a complex neurodegenerative disorder with considerable heterogeneity in clinical manifestations. While CAG repeat length is a known predictor of disease severity, this heterogeneity suggests the involvement of additional genetic and environmental factors. Previously, we revealed that HD primary fibroblasts exhibit unique features, including distinct nuclear morphology and a perturbed actin cap, resembling characteristics seen in Hutchinson-Gilford Progeria Syndrome (HGPS). This study establishes a link between actin cap deficiency and cell motility in HD, which correlates with patient disease severity. Here, we examined single-cell motility imaging features in HD primary fibroblasts to explore in depth the relationship between cell migration patterns and the clinical severity status of the corresponding HD patients (premanifest, mild, and severe). The single-cell analysis revealed a decline in overall cell motility that correlated with HD severity and was most prominent in the severe HD subgroup and in HGPS. Moreover, we identified seven distinct spatial clusters of cell migration across all groups; the proportions of these clusters vary within each group and serve as a significant classifier of HD severity between subgroups. Next, we investigated the relationship between Lamin B1 expression, serving as a marker of nuclear envelope morphology, and cell motility, finding that changes in Lamin B1 levels are associated with specific motility patterns within HD subgroups. Based on these data, we present an accurate machine learning classifier that offers comprehensive exploration of cellular migration patterns and disease severity markers for future drug evaluation, opening new opportunities for personalized treatment approaches in this challenging disorder.

20.
Comput Biol Med; 180: 108933, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39096612

ABSTRACT

Medical image segmentation demands precise accuracy and the capability to assess segmentation uncertainty for informed clinical decision-making. Denoising diffusion probabilistic models (DDPMs), with their advances in image generation, can treat segmentation as a conditional generation task, providing accurate segmentation and uncertainty estimation. However, current DDPMs used in medical image segmentation suffer from low inference efficiency and prediction errors caused by excessive noise at the end of the forward process. To address this issue, we propose an accelerated denoising diffusion probabilistic model via truncated inverse processes (ADDPM) that is specifically designed for medical image segmentation. The inverse process of ADDPM starts from a non-Gaussian distribution and terminates early once a prediction with relatively low noise is obtained after multiple iterations of denoising. We employ a separate, powerful segmentation network to obtain a pre-segmentation and construct the non-Gaussian distribution of the segmentation based on the forward diffusion rule. By further adopting a separate denoising network, the final segmentation can be obtained with just one denoising step from the low-noise predictions. ADDPM greatly reduces the number of denoising steps to approximately one-tenth of that in vanilla DDPMs. Our experiments on four segmentation tasks demonstrate that ADDPM outperforms both vanilla DDPMs and existing representative accelerated DDPM methods. Moreover, ADDPM can be easily integrated with existing advanced segmentation models to improve segmentation performance and provide uncertainty estimation. Implementation code: https://github.com/Guoxt/ADDPM.
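A heavily simplified PyTorch sketch of the truncated-inverse-process idea: obtain a pre-segmentation from a separate network, forward-diffuse it to an intermediate step instead of starting from pure Gaussian noise, and recover the segmentation with a single denoising step. The networks are untrained placeholders and the schedule and step choices are assumptions, not the released ADDPM implementation (see the linked repository for the authors' code).

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Placeholder standing in for the pre-segmentation / denoising networks."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

# Standard DDPM forward-diffusion schedule
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

presegmenter = TinyNet(1, 1)   # separate segmentation network (untrained placeholder)
denoiser = TinyNet(2, 1)       # denoising network conditioned on the input image (placeholder)

@torch.no_grad()
def truncated_inference(image, t_trunc=100):
    """Truncated reverse process: start from a noised pre-segmentation rather than pure
    Gaussian noise, then denoise in (here) a single step. Illustrative only."""
    pre_seg = torch.sigmoid(presegmenter(image))                 # coarse pre-segmentation
    noise = torch.randn_like(pre_seg)
    a_bar = alphas_bar[t_trunc]
    x_t = a_bar.sqrt() * pre_seg + (1 - a_bar).sqrt() * noise    # forward-diffuse to step t_trunc
    eps_hat = denoiser(torch.cat([x_t, image], dim=1))           # predict the injected noise
    x0_hat = (x_t - (1 - a_bar).sqrt() * eps_hat) / a_bar.sqrt() # one-step estimate of x_0
    return torch.sigmoid(x0_hat)

seg = truncated_inference(torch.randn(1, 1, 64, 64))
print(seg.shape)   # torch.Size([1, 1, 64, 64])
```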
