Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38564354

ABSTRACT

High-frame-rate ultrasound imaging remains challenging to implement on compact systems, which must operate as sparse imaging configurations with limited array channels. One key issue is that the resulting image quality is known to be mediocre, not only because unfocused plane-wave excitations are used but also because grating lobes emerge in sparse-array configurations. In this article, we present the design and use of a new channel recovery framework to infer full-array plane-wave channel datasets for periodically sparse arrays that operate with as few as one-quarter of the full-array aperture. This framework is based on a branched encoder-decoder convolutional neural network (CNN) architecture, which was trained using full-array plane-wave channel data collected from human carotid arteries (59 864 training acquisitions; 5-MHz imaging frequency; 20-MHz sampling rate; plane-wave steering angles between -15° and 15° in 1° increments). Three branched encoder-decoder CNNs were separately trained to recover missing channels after differing degrees of channelwise downsampling (2, 3, and 4 times). The framework's performance was tested on full-array and downsampled plane-wave channel data acquired from an in vitro point target, human carotid arteries, and human brachioradialis muscle. Results show that when inferred full-array plane-wave channel data were used for beamforming, spatial aliasing artifacts in the B-mode images were suppressed for all degrees of channel downsampling. In addition, the image contrast was enhanced compared with B-mode images obtained from beamforming with downsampled channel data. When the recovery framework was implemented on an RTX-2080 GPU, the three investigated degrees of downsampling all achieved the same inference time of 4 ms. Overall, the proposed framework shows promise in enhancing the quality of high-frame-rate ultrasound images generated using a sparse-array imaging setup.
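The periodic channel-wise downsampling described in the abstract (factors of 2, 3, and 4 applied to the receive aperture) can be sketched with NumPy. This is a minimal illustration, not the authors' implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def decimate_channels(channel_data, factor):
    """Nullify all but every `factor`-th receive channel, mimicking a
    periodically sparse array that retains 1/factor of the aperture.

    channel_data: (num_channels, num_samples) plane-wave RF data.
    Returns the sparse-array data (nullified channels set to zero,
    same shape as the input) and a boolean mask of retained channels.
    """
    num_channels = channel_data.shape[0]
    mask = np.zeros(num_channels, dtype=bool)
    mask[::factor] = True  # keep channels 0, factor, 2*factor, ...
    sparse = np.where(mask[:, None], channel_data, 0.0)
    return sparse, mask

# Example: 128-channel aperture with 4x downsampling leaves
# one-quarter of the aperture (32 channels) active.
rf = np.random.randn(128, 2048)
sparse_rf, kept = decimate_channels(rf, 4)
print(kept.sum())  # 32
```

In the paper's framework, the branched encoder-decoder CNN would take data like `sparse_rf` as input and infer the RF signals of the zeroed channels before beamforming.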


Subject(s)
Carotid Arteries , Image Processing, Computer-Assisted , Neural Networks, Computer , Ultrasonography , Humans , Ultrasonography/methods , Image Processing, Computer-Assisted/methods , Carotid Arteries/diagnostic imaging , Algorithms
2.
Article in English | MEDLINE | ID: mdl-35862334

ABSTRACT

High-frame-rate ultrasound imaging uses unfocused transmissions to insonify an entire imaging view for each transmit event, thereby enabling frame rates over 1000 frames per second (fps). At these high frame rates, it is naturally challenging to realize real-time transfer of channel-domain raw data from the transducer to the system back end. Our work seeks to halve the total data transfer rate by uniformly decimating the receive channel count by 50% and, in turn, doubling the array pitch. We show that despite the reduced channel count and the inevitable use of a sparse array aperture, the resulting beamformed image quality can be maintained by designing a custom convolutional encoder-decoder neural network to infer the radio frequency (RF) data of the nullified channels. This deep learning framework was trained with in vivo human carotid data (5-MHz plane wave imaging, 128 channels, 31 steering angles over a 30° span, and 62 799 frames in total). After training, the network was tested on an in vitro point target scenario that was dissimilar to the training data, in addition to in vivo carotid validation datasets. In the point target phantom image beamformed from inferred channel data, spatial aliasing artifacts attributed to array pitch doubling were found to be reduced by up to 10 dB. For carotid imaging, our proposed approach yielded a lumen-to-tissue contrast that was on average within 3 dB compared to the full-aperture image, whereas without channel data inferencing, the carotid lumen was obscured. When implemented on an RTX-2080 GPU, the inference time to apply the trained network was 4 ms, which favors real-time imaging. Overall, our technique shows that with the help of deep learning, channel data transfer rates can be effectively halved with limited impact on the resulting image quality.
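The recovery step described above, where the network infers the RF data of the nullified channels so that the full aperture can be beamformed, can be sketched as a simple interleaving for the 2x (50% decimation) case. This is an illustrative sketch with hypothetical names; the paper's CNN produces the `inferred` array, which is simulated here with random data.

```python
import numpy as np

def reassemble_full_aperture(kept, inferred):
    """Interleave the transferred (even-index) channels with the
    network-inferred (odd-index) channels after 2x uniform decimation.

    kept, inferred: (C/2, num_samples) RF arrays.
    Returns the (C, num_samples) full-aperture channel dataset.
    """
    assert kept.shape == inferred.shape
    num_channels = kept.shape[0] * 2
    full = np.empty((num_channels,) + kept.shape[1:], dtype=kept.dtype)
    full[0::2] = kept      # channels physically read out (half the data rate)
    full[1::2] = inferred  # channels recovered by the encoder-decoder CNN
    return full

# 128-channel aperture: 64 channels transferred, 64 inferred
kept = np.random.randn(64, 2048)
inferred = np.random.randn(64, 2048)  # stand-in for the CNN output
full = reassemble_full_aperture(kept, inferred)
print(full.shape)  # (128, 2048)
```

Because only `kept` crosses the transducer-to-back-end link, the raw data transfer rate is halved, which is the design goal stated in the abstract.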


Subject(s)
Deep Learning , Artifacts , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Transducers , Ultrasonography/methods