Results 1 - 10 of 10
1.
BME Front ; 5: 0037, 2024.
Article in English | MEDLINE | ID: mdl-38515637

ABSTRACT

Objective and Impact Statement: High-intensity focused ultrasound (HIFU) therapy is a promising noninvasive method that induces coagulative necrosis in diseased tissues through thermal and cavitation effects while avoiding damage to surrounding normal tissues. Introduction: Accurate, real-time acquisition of the focal-region temperature field during HIFU treatment markedly enhances therapeutic efficacy, holding paramount scientific and practical value in clinical cancer therapy. Methods: In this paper, we first designed and assembled an integrated HIFU system incorporating diagnostic, therapeutic, and temperature measurement functionalities to collect ultrasound echo signals and temperature variations during HIFU therapy. Furthermore, we introduced a novel multimodal teacher-student model approach, which uses shared self-expressive coefficients and a deep canonical correlation analysis layer to aggregate the data from each modality and then transfers knowledge from the teacher model to the student model through knowledge distillation strategies. Results: By investigating the relationship between ultrasound echo signals and temperature in phantom, in vitro, and in vivo data, we successfully achieved real-time reconstruction of the 2D temperature field in the HIFU focal region with a maximum temperature error of less than 2.5 °C. Conclusion: Our method effectively monitored the distribution of the HIFU temperature field in real time, providing scientifically precise predictive schemes for HIFU therapy, laying a theoretical foundation for subsequent personalized treatment dose planning, and providing efficient guidance for noninvasive, nonionizing cancer treatment.
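As a rough illustration of the teacher-to-student transfer step described above, the sketch below shows a conventional distillation objective for a regression target such as a reconstructed temperature map; the multimodal fusion via shared self-expressive coefficients and the deep CCA layer is not reproduced, and the loss form and weighting are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of a teacher-to-student distillation loss for a
# regression target (e.g., a predicted 2D temperature field).
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Blend supervision from the measured temperature field with imitation
    of the (frozen) teacher's prediction; `alpha` is an assumed weight."""
    hard = F.mse_loss(student_out, target)                 # fit measured data
    soft = F.mse_loss(student_out, teacher_out.detach())   # imitate the teacher
    return alpha * soft + (1.0 - alpha) * hard
```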

2.
Phys Med Biol ; 69(5)2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38306968

ABSTRACT

Objective. Radiation therapy (RT) is a prevalent therapeutic modality for head and neck (H&N) cancer. A crucial phase in RT planning is the precise delineation of organs-at-risk (OARs) on computed tomography (CT) scans. Nevertheless, manual delineation of OARs is a labor-intensive process requiring individual scrutiny of each CT image slice, and a standard CT scan comprises hundreds of such slices. Furthermore, there is a significant domain shift between different institutions' H&N data, which makes traditional semi-supervised learning strategies susceptible to confirmation bias. Therefore, effectively using unlabeled datasets to support annotated datasets for model training has become a critical issue for preventing domain shift and confirmation bias. Approach. In this work, we proposed an innovative cross-domain orthogon-based-perspective consistency (CD-OPC) strategy within a two-branch collaborative training framework, which compels the two sub-networks to acquire valuable features from unrelated perspectives. More specifically, a novel generative pretext task, cross-domain prediction (CDP), was designed for learning inherent properties of CT images. This prior knowledge was then utilized to promote the independent learning of distinct features by the two sub-networks from identical inputs, thereby enhancing the perceptual capabilities of the sub-networks through orthogon-based pseudo-labeling knowledge transfer. Main results. Our CD-OPC model was trained on H&N datasets from nine different institutions and validated on four local institutions' H&N datasets. Across all datasets, CD-OPC achieved better performance than other semi-supervised semantic segmentation algorithms. Significance. The CD-OPC method successfully mitigates domain shift and prevents network collapse. In addition, it enhances the network's perceptual abilities and generates more reliable predictions, thereby further addressing the confirmation bias issue.
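The two-branch collaborative training idea can be pictured with a generic cross-pseudo-supervision loss, sketched below in PyTorch; the orthogon-based pseudo-labeling transfer and the CDP pretext task of CD-OPC are not reproduced here, so the loss form and function names are illustrative assumptions.

```python
# Generic cross-pseudo-supervision sketch for two segmentation branches.
import torch
import torch.nn.functional as F

def cross_pseudo_supervision(logits_a, logits_b):
    """Each branch is supervised by the other branch's hard pseudo-labels on
    unlabeled images; pseudo-labels are detached so gradients only flow
    through the supervised branch. Inputs are (B, C, H, W) logits."""
    pseudo_a = logits_a.argmax(dim=1).detach()
    pseudo_b = logits_b.argmax(dim=1).detach()
    loss_a = F.cross_entropy(logits_a, pseudo_b)   # branch B teaches branch A
    loss_b = F.cross_entropy(logits_b, pseudo_a)   # branch A teaches branch B
    return 0.5 * (loss_a + loss_b)
```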


Subject(s)
Deep Learning, Head and Neck Neoplasms, Humans, Semantics, X-Ray Computed Tomography, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Organs at Risk, Computer-Assisted Image Processing/methods
3.
Phys Med Biol ; 69(6)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38373347

ABSTRACT

Objective. Accurate delineation of organs-at-risk (OARs) is a critical step in radiotherapy. Deep learning-generated segmentations usually need to be reviewed and corrected manually by oncologists, which is time-consuming and operator-dependent. Therefore, an automated quality assurance (QA) and adaptive optimization correction strategy was proposed to identify and optimize 'incorrect' auto-segmentations. Approach. A total of 586 CT images and labels from nine institutions were used. The OARs included the brainstem, parotid, and mandible. The deep learning-generated contours were compared with the manual ground-truth delineations. In this study, we proposed a novel contour quality assurance and adaptive optimization (CQA-AO) strategy, which consists of the following three main components: (1) the contour QA module classifies the deep learning-generated contours as either accepted or unaccepted; (2) the unacceptable contour category analysis module provides the potential error reasons (five unacceptable categories) and locations (attention heatmaps); (3) the adaptive correction of unacceptable contours module integrates vision-language representations and utilizes convex optimization algorithms to achieve adaptive correction of 'incorrect' contours. Main results. In the contour QA tasks, the sensitivity (accuracy, precision) of the CQA-AO strategy reached 0.940 (0.945, 0.948), 0.962 (0.937, 0.913), and 0.967 (0.962, 0.957) for the brainstem, parotid, and mandible, respectively. In the unacceptable contour category analysis, the (FI, AccI, Fmicro, Fmacro) of the CQA-AO strategy reached (0.901, 0.763, 0.862, 0.822), (0.855, 0.737, 0.837, 0.784), and (0.907, 0.762, 0.858, 0.821) for the brainstem, parotid, and mandible, respectively. After adaptive optimization correction, the DSC values of the brainstem, parotid, and mandible improved by 9.4%, 25.9%, and 13.5%, and the Hausdorff distance values decreased by 62%, 70.6%, and 81.6%, respectively. Significance. The proposed CQA-AO strategy, which combines contour QA and adaptive optimization correction for OAR contouring, demonstrated superior performance compared to conventional methods. This method can be implemented in clinical contouring procedures and improves the efficiency of the delineation and review workflow.
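For reference, the two quality metrics quoted in the results (DSC and Hausdorff distance) can be computed from binary masks as in the short NumPy/SciPy sketch below; this is a generic implementation, not the authors' evaluation code.

```python
# Generic DSC and Hausdorff-distance computation for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the foreground voxel point sets."""
    p = np.argwhere(pred)
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```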


Subject(s)
Algorithms, X-Ray Computed Tomography, Computer-Assisted Radiotherapy Planning/methods, Organs at Risk, Computer-Assisted Image Processing/methods
4.
Technol Cancer Res Treat ; 23: 15330338231219366, 2024.
Article in English | MEDLINE | ID: mdl-38179668

ABSTRACT

Introduction: Currently, the incidence of liver cancer is on the rise annually. Precise identification of liver tumors is crucial for clinicians to strategize the treatment and combat liver cancer. Thus far, liver tumor contours have been derived through labor-intensive and subjective manual labeling. Computers have gained widespread application in the realm of liver tumor segmentation. Nonetheless, liver tumor segmentation remains a formidable challenge owing to the diverse range of volumes, shapes, and image intensities encountered. Methods: In this article, we introduce an innovative solution called the attention connect network (AC-Net) designed for automated liver tumor segmentation. Building upon the U-shaped network architecture, our approach incorporates 2 critical attention modules: the axial attention module (AAM) and the vision transformer module (VTM), which replace conventional skip-connections to seamlessly integrate spatial features. The AAM facilitates feature fusion by computing axial attention across feature maps, while the VTM operates on the lowest resolution feature maps, employing multihead self-attention, and reshaping the output into a feature map for subsequent concatenation. Furthermore, we employ a specialized loss function tailored to our approach. Our methodology begins with pretraining AC-Net using the LiTS2017 dataset and subsequently fine-tunes it using computed tomography (CT) and magnetic resonance imaging (MRI) data sourced from Hubei Cancer Hospital. Results: The performance metrics for AC-Net on CT data are as follows: dice similarity coefficient (DSC) of 0.90, Jaccard coefficient (JC) of 0.82, recall of 0.92, average symmetric surface distance (ASSD) of 4.59, Hausdorff distance (HD) of 11.96, and precision of 0.89. For AC-Net on MRI data, the metrics are DSC of 0.80, JC of 0.70, recall of 0.82, ASSD of 7.58, HD of 30.26, and precision of 0.84. Conclusion: The comparative experiments highlight that AC-Net exhibits exceptional tumor recognition accuracy when tested on the Hubei Cancer Hospital dataset, demonstrating highly competitive performance for practical clinical applications. Furthermore, the ablation experiments provide conclusive evidence of the efficacy of each module proposed in this article. For those interested, the code for this research article can be accessed at the following GitHub repository: https://github.com/killian-zero/py_tumor-segmentation.git.
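A minimal PyTorch sketch of axial attention, the mechanism underlying the AAM skip connections described above, is given below; it applies standard multi-head self-attention along the width axis and then the height axis, and is an illustrative stand-in rather than the AC-Net module (channel counts and head numbers are assumptions).

```python
# Illustrative axial-attention block: row attention followed by column attention.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention applied along one spatial axis at a time.
    `channels` must be divisible by `heads`."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, h, w = x.shape
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)   # attend along width
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2)
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)   # attend along height
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1)
```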


Asunto(s)
Neoplasias Hepáticas , Tomografía Computarizada por Rayos X , Humanos , Imagen por Resonancia Magnética , Neoplasias Hepáticas/diagnóstico por imagen , Instituciones Oncológicas , Suministros de Energía Eléctrica , Procesamiento de Imagen Asistido por Computador
5.
J Appl Clin Med Phys ; 25(1): e14248, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38128058

ABSTRACT

PURPOSE: Obvious inconsistencies exist among the auto-segmentations of various AI software packages. In this study, we developed a novel convolutional neural network (CNN) fine-tuning workflow to achieve precise and robust localized segmentation. METHODS: The datasets included the Hubei Cancer Hospital dataset, the Cetuximab Head and Neck Public Dataset, and the Québec Public Dataset. Seven organs-at-risk (OARs), including the brain stem, left parotid gland, esophagus, left optic nerve, optic chiasm, mandible, and pharyngeal constrictor, were selected. The auto-segmentation results from four commercial AI software packages were first compared with the manual delineations. Then a new multi-scale lightweight residual CNN model with an attention module (named HN-Net) was trained and tested on 40 samples and 10 samples from Hubei Cancer Hospital, respectively. To enhance the network's accuracy and generalization ability, the fine-tuning workflow used an uncertainty estimation method to automatically select worthwhile candidate samples from the Cetuximab Head and Neck Public Dataset for further training. Segmentation performance was evaluated on the Hubei Cancer Hospital dataset and/or the entire Québec Public Dataset. RESULTS: Maximum differences of 0.13 in average Dice value and 0.7 mm in Hausdorff distance for the seven OARs were observed among the four AI software packages. The proposed HN-Net achieved an average Dice value 0.14 higher than that of the AI software, and it also outperformed other popular CNN models (HN-Net: 0.79, U-Net: 0.78, U-Net++: 0.78, U-Net-Multi-scale: 0.77, AI software: 0.65). Additionally, the HN-Net fine-tuning workflow using the local datasets and external public datasets further improved the automatic segmentation, increasing the average Dice value by 0.02. CONCLUSION: The delineations of commercial AI software need to be carefully reviewed, and localized further training is necessary for clinical practice. The proposed fine-tuning workflow can feasibly be adopted to implement an accurate and robust auto-segmentation model using local datasets and external public datasets.
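The sample-selection step can be illustrated with a Monte-Carlo-dropout entropy ranking, sketched below in PyTorch; the paper's actual uncertainty estimator is not specified in the abstract, so the number of stochastic passes and the entropy criterion here are assumptions.

```python
# Assumed uncertainty-based ranking of unlabeled samples via MC dropout.
import torch

@torch.no_grad()
def select_uncertain_samples(model, images, passes=8, top_k=10):
    """Score each image by mean predictive entropy over several stochastic
    forward passes and return the indices of the `top_k` most uncertain ones."""
    model.train()                                   # keep dropout layers active
    probs = torch.stack([model(images).softmax(dim=1) for _ in range(passes)])
    mean = probs.mean(dim=0)                                  # (B, C, H, W)
    entropy = -(mean * (mean + 1e-8).log()).sum(dim=1)        # (B, H, W)
    score = entropy.mean(dim=(1, 2))                          # one score per image
    return score.topk(min(top_k, score.numel())).indices
```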


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, Humans, Workflow, Cetuximab, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Organs at Risk
6.
Front Oncol ; 13: 1177788, 2023.
Article in English | MEDLINE | ID: mdl-37927463

ABSTRACT

Introduction: Radiation therapy is a common treatment option for Head and Neck Cancer (HNC), where accurate segmentation of Head and Neck (HN) Organs-At-Risk (OARs) is critical for effective treatment planning. Manual labeling of HN OARs is time-consuming and subjective, so deep learning segmentation methods have been widely used. However, HN OAR segmentation remains challenging because of small OARs such as the optic chiasm and optic nerve. Methods: To address this challenge, we propose a parallel network architecture called PCG-Net, which incorporates both convolutional neural networks (CNNs) and a Gate-Axial-Transformer (GAT) to effectively capture local information and global context. Additionally, we employ a cascade graph module (CGM) to enhance feature fusion through message-passing functions and information aggregation strategies. We conducted extensive experiments to evaluate the effectiveness of PCG-Net and its robustness in three different downstream tasks. Results: The results show that PCG-Net outperforms other methods and improves the accuracy of HN OAR segmentation, which can potentially improve treatment planning for HNC patients. Discussion: In summary, the PCG-Net model effectively establishes the dependency between local information and global context and employs the CGM to enhance feature fusion for accurate segmentation of HN OARs. The results demonstrate the superiority of PCG-Net over other methods, making it a promising approach for HNC treatment planning.
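As a loose illustration of the message-passing and aggregation idea behind the CGM, the sketch below performs one round of similarity-weighted aggregation over node features in PyTorch; the abstract does not describe the actual graph construction or update functions, so this is a generic stand-in.

```python
# Generic one-step graph message passing over feature nodes.
import torch
import torch.nn.functional as F

def graph_message_passing(node_feats):
    """node_feats: (N, D) tensor of node embeddings. Builds a row-normalised
    affinity matrix from pairwise similarities and aggregates neighbours
    with a residual connection."""
    sim = node_feats @ node_feats.t()          # pairwise affinities
    adj = F.softmax(sim, dim=-1)               # soft adjacency
    return node_feats + adj @ node_feats       # aggregate and keep residual
```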

7.
Phys Med Biol ; 68(24)2023 Dec 12.
Article in English | MEDLINE | ID: mdl-37934040

ABSTRACT

Objective. Ultrasound localization microscopy (ULM) enables microvascular reconstruction by localizing microbubbles (MBs). Although ULM can obtain microvascular images beyond the ultimate resolution of the ultrasound (US) diffraction limit, it requires long data processing times, and the imaging accuracy is susceptible to the density of MBs. Deep learning (DL)-based ULM has been proposed to alleviate these limitations; it simulates MBs at low resolution and maps them to high-resolution coordinates by centroid localization. However, traditional DL-based ULMs are imprecise and computationally complex. Also, the performance of DL is highly dependent on the training datasets, which are difficult to simulate realistically. Approach. A novel architecture called the adaptive matching network (AM-Net) and a dataset generation method named multi-mapping (MMP) are proposed to overcome the above challenges. The imaging performance and processing time of AM-Net were assessed by simulation and in vivo experiments. Main results. Simulation results show that at high density (20 MBs/frame), AM-Net achieves higher localization accuracy in the lateral/axial directions than other DL-based ULMs. In vivo experiment results show that AM-Net can reconstruct ∼24.3 µm diameter microvessels and separate two ∼28.3 µm diameter microvessels. Furthermore, when processing a 128 × 128 pixel image in the simulation experiments and an 896 × 1280 pixel image in the in vivo experiment, the processing times of AM-Net are ∼13 s and ∼33 s, respectively, which are 0.3-0.4 orders of magnitude faster than other DL-based ULMs. Significance. We propose a promising solution for ULM with low computing costs and high imaging performance.
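The centroid-localization step that DL-based ULM replaces can be pictured with a simple classical baseline: threshold a frame, label connected MB spots, and map their centroids onto a finer grid. The NumPy/SciPy sketch below is that baseline under assumed threshold and upscaling values, not AM-Net itself.

```python
# Classical centroid localization of microbubble spots (illustrative baseline).
import numpy as np
from scipy import ndimage

def localize_microbubbles(frame, threshold=0.5, upscale=8):
    """Threshold the frame, label connected spots, and return their intensity-
    weighted centroids scaled onto an `upscale`-times finer (SR) grid."""
    mask = frame > threshold * frame.max()
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
    return np.asarray(centroids) * upscale      # coordinates on the SR grid
```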


Subject(s)
Deep Learning, Ultrasonography/methods, Microscopy/methods, Imaging Phantoms, Microbubbles, Microvessels/diagnostic imaging
8.
Phys Med Biol ; 68(20)2023 10 02.
Article in English | MEDLINE | ID: mdl-37703894

ABSTRACT

Objective. The addition of a denoising filter step in ultrasound localization microscopy (ULM) has been shown to effectively reduce erroneous localizations of microbubbles (MBs) and achieve resolution improvement for super-resolution ultrasound (SR-US) imaging. However, previous image-denoising methods (e.g. block-matching 3D, BM3D) require long data processing times, so ULM can only be processed offline. This work introduces a new way to reduce data processing time through deep learning. Approach. In this study, we propose deep learning (DL) denoising based on a contrastive semi-supervised network (CS-Net). The neural network is mainly trained with simulated MB data to extract MB signals from noise, and the denoising performance of CS-Net is evaluated in both an in vitro flow phantom experiment and an in vivo experiment on a New Zealand rabbit tumor. Main results. For the in vitro flow phantom experiment, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of a single microbubble image are 26.91 dB and 4.01 dB, respectively. For the in vivo animal experiment, the SNR and CNR were 12.29 dB and 6.06 dB. In addition, a single 24 µm microvessel and two microvessels separated by 46 µm could be clearly displayed. Most importantly, the CS-Net denoising speeds for the in vitro and in vivo experiments were 0.041 s frame⁻¹ and 0.062 s frame⁻¹, respectively. Significance. DL denoising based on CS-Net can improve the resolution of SR-US as well as reduce denoising time, thereby making further contributions to the clinical real-time imaging of ULM.
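The SNR and CNR figures quoted above can be computed from a signal region of interest (ROI) and a background ROI as in the sketch below; the exact ROI definitions used in the paper are not given in the abstract, so the formulas here are common dB-scale definitions and should be read as assumptions.

```python
# Assumed dB-scale SNR/CNR definitions from a signal ROI and a background ROI.
import numpy as np

def snr_cnr(signal_roi, background_roi):
    """Return (SNR, CNR) in dB for two NumPy arrays of pixel intensities."""
    noise = background_roi.std() + 1e-12
    snr = 20 * np.log10(signal_roi.mean() / noise)
    cnr = 20 * np.log10(abs(signal_roi.mean() - background_roi.mean()) / noise)
    return snr, cnr
```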


Subject(s)
Deep Learning, Computer-Assisted Image Processing, Animals, Rabbits, Computer-Assisted Image Processing/methods, Microscopy, Neural Networks (Computer), Ultrasonography/methods, Signal-to-Noise Ratio
9.
Technol Cancer Res Treat ; 22: 15330338231157936, 2023.
Article in English | MEDLINE | ID: mdl-36788411

ABSTRACT

Purpose/Objective(s): With the development of deep learning, more convolutional neural networks (CNNs) are being introduced in automatic segmentation to reduce oncologists' labor requirements. However, oncologists still must spend considerable time evaluating the quality of the contours generated by the CNNs. Moreover, all evaluation criteria, such as the Dice Similarity Coefficient (DSC), require a gold standard to assess contour quality. To address these problems, we propose an automatic quality assurance (QA) method using isotropic and anisotropic approaches to analyze contour quality without a gold standard. Materials/Methods: We used data from 196 individuals with 18 different head-and-neck organs-at-risk. The overall process has the following 4 main steps. (1) A CNN segmentation network was used to generate a series of contours, which were then used as organ masks and eroded and dilated to generate inner/outer shells for each 2D slice. (2) Thirty-eight radiomics features were extracted from these 2 shells, and the inner/outer shell radiomics feature ratios and DSCs were used as input for 12 machine learning models. (3) The DSC threshold was used to adaptively classify passing/un-passing slices. (4) Two different threshold analysis methods were used to quantitatively evaluate the un-passing slices and obtain location information for poor contours. Parts 1-3 were isotropic experiments, and part 4 was the anisotropic method. Results: In the isotropic experiments, almost all the predicted values were close to the labels. Through the anisotropic method, we obtained the contours' location information by assessing the thresholds of the peak-to-peak and area-to-area ratios. Conclusion: The proposed automatic segmentation QA method can qualitatively predict segmentation quality. Moreover, the method can analyze the location information for un-passing slices.
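Step (1) of the workflow, building inner and outer shells by eroding and dilating the organ mask, can be sketched in a few lines of SciPy; the shell width below is an assumed parameter, and the downstream radiomics feature ratios are not shown.

```python
# Inner/outer shell construction around a binary organ mask (assumed width).
import numpy as np
from scipy import ndimage

def make_shells(mask, width=3):
    """Return (inner_shell, outer_shell) bands around the contour of a 2D mask."""
    mask = mask.astype(bool)
    eroded = ndimage.binary_erosion(mask, iterations=width)
    dilated = ndimage.binary_dilation(mask, iterations=width)
    inner_shell = mask & ~eroded          # band just inside the contour
    outer_shell = dilated & ~mask         # band just outside the contour
    return inner_shell, outer_shell
```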


Subject(s)
Computer-Assisted Image Processing, X-Ray Computed Tomography, Humans, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Neck, Machine Learning
10.
Front Oncol ; 11: 680807, 2021.
Article in English | MEDLINE | ID: mdl-34434891

ABSTRACT

PURPOSE: Accurate segmentation of the liver and liver tumors is critical for radiotherapy. Liver tumor segmentation, however, remains a difficult and important problem in medical image processing because of factors such as the complex and variable location, size, and shape of liver tumors; the low contrast between tumors and normal tissues; and blurred or difficult-to-define lesion boundaries. In this paper, we proposed a neural network (S-Net) that incorporates attention mechanisms for end-to-end segmentation of liver tumors from CT images. METHODS: First, this study adopted a classical encoding-decoding structure to realize end-to-end segmentation. Next, we introduced an attention mechanism between the contraction path and the expansion path so that the network could encode a longer range of semantic information in the local features and find the corresponding relationships between different channels. Then, we introduced long-skip connections between the layers of the contraction path and the expansion path, so that the semantic information extracted in both paths could be fused. Finally, a morphological closing operation was applied to remove narrow interruptions and long, thin gaps; this eliminated small cavities and produced a noise-reduction effect. RESULTS: We used the MICCAI 2017 liver tumor segmentation (LiTS) challenge dataset, the 3DIRCADb dataset, and doctors' manual contours from the Hubei Cancer Hospital dataset to test the network architecture. We calculated the Dice Global (DG) score, Dice per Case (DC) score, volumetric overlap error (VOE), average symmetric surface distance (ASSD), and root mean square error (RMSE) to evaluate the accuracy of the architecture for liver tumor segmentation. The segmentation DG for tumors was 0.7555, DC was 0.613, VOE was 0.413, ASSD was 1.186, and RMSE was 1.804. For a small tumor, DG was 0.3246 and DC was 0.3082. For a large tumor, DG was 0.7819 and DC was 0.7632. CONCLUSION: S-Net obtained more semantic information through the introduction of an attention mechanism and long-skip connections. Experimental results showed that this method effectively improved tumor recognition in CT images and could be applied to assist doctors in clinical treatment.
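The post-processing and evaluation steps map onto a short SciPy sketch: apply a morphological closing to the predicted tumor mask, then score it against the reference contour with the Dice coefficient. The number of closing iterations is an assumption, and the authors' exact structuring element is not specified in the abstract.

```python
# Morphological closing of a predicted tumor mask followed by Dice scoring.
import numpy as np
from scipy import ndimage

def close_and_score(pred, gt, iterations=2):
    """Fill narrow interruptions and small cavities in `pred`, then return the
    closed mask and its Dice score against the reference mask `gt`."""
    closed = ndimage.binary_closing(pred.astype(bool), iterations=iterations)
    inter = np.logical_and(closed, gt).sum()
    dice = 2.0 * inter / (closed.sum() + gt.sum() + 1e-8)
    return closed, dice
```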
