Results 1 - 3 of 3
1.
IEEE Trans Med Imaging; PP, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38771692

ABSTRACT

Left ventricle (LV) endocardium segmentation in echocardiography video has received much attention as an important step in quantifying LV ejection fraction. Most existing methods are dedicated to exploiting temporal information on top of 2D convolutional networks. In addition to single-frame appearance semantic learning, some research has attempted to introduce motion cues through an optical flow estimation (OFE) task to enhance temporal consistency modeling. However, OFE in these methods is tightly coupled to LV endocardium segmentation, resulting in noisy inter-frame flow prediction, and post-optimization based on these flows accumulates errors. To address these drawbacks, we propose dynamic-guided spatiotemporal attention (DSA) for semi-supervised echocardiography video segmentation. We first fine-tune the off-the-shelf OFE network RAFT on echocardiography data to provide dynamic information. Taking inter-frame flows as additional input, we use a dual-encoder structure to extract motion and appearance features separately. Based on the connection between dynamic continuity and semantic consistency, we propose a bilateral feature calibration module to enhance both features. For temporal consistency modeling, the DSA aggregates neighboring frame context using deformable attention realized by offset grid attention. Dynamic information is introduced into DSA through a bilateral offset estimation module to combine effectively with appearance semantics and predict attention offsets, thereby guiding semantic-based spatiotemporal attention. We evaluated our method on two popular echocardiography datasets, CAMUS and EchoNet-Dynamic, and achieved state-of-the-art performance.
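A core ingredient above is using inter-frame optical flow as dynamic guidance between frames. The basic idea of propagating a segmentation mask from one frame to the next with a dense flow field can be sketched in a minimal, numpy-only form. This is a hypothetical illustration of flow-based mask warping, not the paper's DSA module; the function name and the nearest-neighbour warping scheme are assumptions for exposition.

```python
import numpy as np

def warp_mask_with_flow(mask, flow):
    """Propagate a binary segmentation mask to the next frame using a
    dense optical-flow field (nearest-neighbour backward warping).

    mask: (H, W) binary array for frame t.
    flow: (H, W, 2) array of (dy, dx) displacements mapping each pixel
          of frame t+1 back to its source location in frame t.
    """
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each pixel in frame t+1, look up where it came from in frame t.
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return mask[src_y, src_x]

# Toy example: a 2x2 square shifted one pixel right by a uniform flow.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 2:4] = 1
flow = np.zeros((6, 6, 2))
flow[..., 1] = -1.0  # pixel (y, x) in frame t+1 came from (y, x-1) in frame t
warped = warp_mask_with_flow(mask, flow)
```

In practice, a method like the one described would use sub-pixel (bilinear) warping and learned flow from a network such as RAFT, but the propagation principle is the same.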

2.
Phys Imaging Radiat Oncol; 29: 100546, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38369990

ABSTRACT

Background and Purpose: Online cone-beam-based adaptive radiotherapy (ART) adjusts for anatomical changes during external beam radiotherapy. However, limited cone-beam image quality complicates nodal contouring. Despite this challenge, artificial-intelligence-guided deformation (AID) can auto-generate nodal contours. Our study investigated the optimal use of such contours in cervical online cone-beam-based ART. Materials and Methods: From 136 adaptive fractions across 21 cervical cancer patients with nodal disease, we extracted 649 clinically delivered and AID clinical target volume (CTV) lymph node boost structures. We assessed geometric alignment between AID and clinical CTVs via the Dice similarity coefficient and the 95% Hausdorff distance, and geometric coverage of clinical CTVs by AID planning target volumes via the false-positive Dice. Coverage of clinical CTVs by AID contour-based plans was evaluated using D100, D95, V100%, and V95%. Results: Between AID and clinical CTVs, the median Dice similarity coefficient was 0.66 and the median 95% Hausdorff distance was 4.0 mm. The median false-positive Dice of clinical CTV coverage by AID planning target volumes was 0. The median D100 was 1.00, the median D95 was 1.01, the median V100% was 1.00, and the median V95% was 1.00. Increased nodal volume, fraction number, and daily adaptation were associated with reduced clinical CTV coverage by AID-based plans. Conclusion: In one of the first reports on pelvic nodal ART, AID-based plans could adequately cover nodal targets. However, physician review is required due to performance variation. Greater attention is needed for larger, daily-adapted nodes further into treatment.
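The two geometric agreement measures used above, the Dice similarity coefficient and the 95% Hausdorff distance, can be computed directly on binary masks. The sketch below is a minimal, numpy-only illustration (the brute-force pairwise-distance HD95 is only practical for small masks; the study itself would have used contour-level tooling, and these function names are assumptions):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_95(a, b):
    """95th-percentile symmetric Hausdorff distance (in pixels) between
    the foreground point sets of two binary masks (brute force)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    # Pairwise Euclidean distances between all foreground points.
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    # Symmetric: take the worse of the two directed 95th percentiles.
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Two overlapping 4x4 squares offset by one pixel diagonally.
a = np.zeros((10, 10), dtype=np.uint8); a[2:6, 2:6] = 1
b = np.zeros((10, 10), dtype=np.uint8); b[3:7, 3:7] = 1
```

For this toy pair, the Dice coefficient is 2·9/(16+16) = 0.5625, and the HD95 is a little over one pixel, since most boundary points of one mask lie within one pixel of the other.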

3.
Comput Intell Neurosci; 2022: 3470764, 2022.
Article in English | MEDLINE | ID: mdl-35498198

ABSTRACT

Breast cancer detection largely relies on imaging characteristics and the ability of clinicians to easily and quickly identify potential lesions. Magnetic resonance imaging (MRI) of breast tumors has recently shown great promise for enabling the automatic identification of breast tumors. Nevertheless, state-of-the-art MRI-based algorithms utilizing deep learning techniques are still limited in their ability to accurately separate tumor from healthy tissue. Therefore, in the current work, we propose an automatic and accurate two-stage U-Net-based segmentation framework for breast tumor detection using dynamic contrast-enhanced MRI (DCE-MRI). This framework was evaluated using T2-weighted MRI data from 160 breast tumor cases, and its performance was compared with that of the standard U-Net model. In the first stage of the proposed framework, a refined U-Net model was utilized to automatically delineate a breast region of interest (ROI) from the surrounding healthy tissue. Importantly, this automatic segmentation step reduced the impact of background chest tissue on breast tumor identification. For the second stage, we employed an improved U-Net model that combined a dense residual module based on dilated convolution with a recurrent attention module. This model was used to accurately and automatically segment the tumor tissue from healthy tissue in the breast ROI derived in the previous step. Overall, compared to the U-Net model, the proposed technique exhibited increases in the Dice similarity coefficient, Jaccard similarity, positive predictive value, sensitivity, and Hausdorff distance of 3%, 3%, 3%, 2%, and 16.2, respectively. The proposed model may in the future aid in the clinical diagnosis of breast cancer lesions and help guide individualized patient treatment.
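The Jaccard similarity, positive predictive value, and sensitivity reported above all derive from the same voxel-level confusion counts between a predicted mask and the ground truth. A minimal, numpy-only sketch (the function name is an assumption, not taken from the paper):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Overlap metrics for binary segmentation masks (pred vs. ground truth)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    return {
        "jaccard": tp / (tp + fp + fn),    # intersection over union
        "ppv": tp / (tp + fp),             # positive predictive value
        "sensitivity": tp / (tp + fn),     # recall
    }

# Toy example: prediction shifted one column right of the ground truth.
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=np.uint8);   gt[2:6, 3:7] = 1
m = segmentation_metrics(pred, gt)
```

Here tp = 12, fp = 4, and fn = 4, giving a Jaccard of 0.6 and PPV and sensitivity of 0.75 each, illustrating how a small spatial offset degrades all three metrics together.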


Subject(s)
Breast Neoplasms, Algorithms, Breast Neoplasms/diagnostic imaging, Female, Humans