Results 1 - 7 of 7
1.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785632

ABSTRACT

Finding the most interesting areas of an image is the aim of saliency detection. Conventional methods based on low-level features rely on biological cues like texture and color. These methods, however, have trouble processing complicated or low-contrast images. In this paper, we introduce a deep neural network-based saliency detection method. First, using semantic segmentation, we construct a pixel-level model that gives each pixel a saliency value depending on its semantic category. Next, we create a region feature model by combining hand-crafted and deep features, which extracts and fuses the local and global information of each superpixel region. Third, we combine the results of the previous two steps, along with the over-segmented superpixel images and the original images, to construct a multi-level feature model. We feed the model into a deep convolutional network, which generates the final saliency map by learning to integrate the macro and micro information based on the pixels and superpixels. We assess our method on five benchmark datasets and compare it against 14 state-of-the-art saliency detection algorithms. According to the experimental results, our method performs better than the other methods in terms of F-measure, precision, recall, and runtime. Additionally, we analyze the limitations of our method and propose potential future developments.
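To make the described pipeline concrete, here is a minimal NumPy sketch (not the paper's code) of how a pixel-level semantic-saliency map and a superpixel region-saliency map could be stacked with the original image as the multi-level input to the fusion network; the class priors and region scores are hypothetical placeholders.

```python
# Minimal sketch of assembling a multi-level input: a pixel-level saliency map
# from semantic classes, a region-level map from superpixel scores, stacked
# with the RGB image for a fusion network. All priors/scores are placeholders.
import numpy as np

def pixel_level_saliency(seg_labels, class_prior):
    """Map each pixel's semantic class to a saliency value."""
    sal = np.zeros(seg_labels.shape, dtype=np.float32)
    for cls, prior in class_prior.items():
        sal[seg_labels == cls] = prior
    return sal

def region_level_saliency(superpixel_labels, region_scores):
    """Broadcast one fused score per superpixel back to the pixel grid."""
    sal = np.zeros(superpixel_labels.shape, dtype=np.float32)
    for sp_id, score in region_scores.items():
        sal[superpixel_labels == sp_id] = score
    return sal

def build_multilevel_input(rgb, seg_labels, superpixel_labels,
                           class_prior, region_scores):
    """Stack image, pixel-level map, and region-level map as network input."""
    pix = pixel_level_saliency(seg_labels, class_prior)
    reg = region_level_saliency(superpixel_labels, region_scores)
    return np.concatenate([rgb.astype(np.float32) / 255.0,
                           pix[..., None], reg[..., None]], axis=-1)

# Toy usage with random arrays standing in for real segmentation output.
rgb = np.random.randint(0, 256, (64, 64, 3))
seg = np.random.randint(0, 3, (64, 64))
sp = np.random.randint(0, 50, (64, 64))
x = build_multilevel_input(rgb, seg, sp,
                           class_prior={0: 0.1, 1: 0.6, 2: 0.9},
                           region_scores={i: np.random.rand() for i in range(50)})
print(x.shape)  # (64, 64, 5): RGB + pixel-level + region-level channels
```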

2.
Opt Express ; 31(4): 5426-5442, 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36823823

ABSTRACT

An improved contrast source inversion (ICSI) method is proposed to solve the electromagnetic inverse scattering problem. Specifically, we first resort to Fourier bases to represent the contrast source, so that it can be solved in the frequency domain, which accelerates its solution. Then, a spatial-frequency domain constraint is designed to keep the computations in the two domains synchronously optimal. Afterwards, a multi-round optimization combined with singular value decomposition (SVD) is developed to alleviate the reliance on initial guesses. Finally, we utilize a frequency-domain filter to eliminate redundant inversion information and narrow the search scope of the solution. Extensive experiments on synthetic and real data show that ICSI achieves faster computation, better inversion of relative permittivities, and stronger robustness to noise.
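As an illustration of two ingredients named above, the Fourier-basis representation of the contrast source and the frequency-domain filter, the following is a hedged NumPy sketch under simplifying assumptions (a 2-D grid and a hard low-pass mask with a made-up keep_ratio parameter), not the authors' ICSI implementation.

```python
# Sketch: expand a gridded contrast source in Fourier bases and apply a
# frequency-domain low-pass filter to discard redundant high-frequency modes.
import numpy as np

def to_fourier(contrast_source):
    """Expand a gridded contrast source in 2-D Fourier bases."""
    return np.fft.fft2(contrast_source)

def frequency_filter(coeffs, keep_ratio=0.25):
    """Zero out high-frequency coefficients; keep_ratio is a made-up knob
    controlling how many low-frequency modes survive."""
    ny, nx = coeffs.shape
    shifted = np.fft.fftshift(coeffs)
    ky, kx = int(ny * keep_ratio) // 2, int(nx * keep_ratio) // 2
    mask = np.zeros_like(shifted)
    mask[ny // 2 - ky:ny // 2 + ky, nx // 2 - kx:nx // 2 + kx] = 1
    return np.fft.ifftshift(shifted * mask)

def from_fourier(coeffs):
    """Return the (filtered) contrast source back in the spatial domain."""
    return np.fft.ifft2(coeffs)

# Toy example: a smooth source plus noise; filtering narrows the search space.
grid = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(grid, grid)
source = np.exp(-(xx**2 + yy**2) / 0.1) + 0.05 * np.random.randn(64, 64)
filtered = from_fourier(frequency_filter(to_fourier(source))).real
print(np.abs(source - filtered).mean())
```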

3.
Sensors (Basel) ; 20(8)2020 Apr 24.
Article in English | MEDLINE | ID: mdl-32344686

ABSTRACT

Hyperspectral image reconstruction focuses on recovering spectral information from a single RGB image. In this paper, we propose two advanced Generative Adversarial Networks (GANs) for this heavily underconstrained inverse problem. We first propose the scale attention pyramid U-Net (SAPUNet), which uses a U-Net with dilated convolutions to extract features. We build a feature pyramid inside the network and use an attention mechanism for feature selection. The superior performance of this model is due to its modern architecture and its capture of spatial semantics. To provide a more accurate solution, we propose another distinct architecture, named W-Net, that adds one more branch to U-Net to perform boundary supervision. SAPUNet and the scale attention pyramid W-Net (SAPWNet) provide improvements on the Interdisciplinary Computational Vision Lab at Ben Gurion University (ICVL) dataset of 42% and 46.6%, and 45% and 50%, in terms of root mean square error (RMSE) and relative RMSE, respectively. The experimental results demonstrate that our proposed models are more accurate than state-of-the-art hyperspectral recovery methods.
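The scale attention idea can be sketched in a few lines of PyTorch; the module below is an assumed structure rather than the released SAPUNet/SAPWNet code: pyramid features are resized to a common resolution and fused with learned per-scale, per-location attention weights.

```python
# Hedged sketch of a scale attention block: multi-scale features are resized
# to a common size and weighted by learned per-scale attention before fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttention(nn.Module):
    def __init__(self, channels, num_scales):
        super().__init__()
        # A 1x1 conv on the concatenated scales produces one logit per scale.
        self.score = nn.Conv2d(channels * num_scales, num_scales, kernel_size=1)

    def forward(self, features):
        # features: list of tensors [B, C, Hi, Wi] from different pyramid levels.
        size = features[0].shape[-2:]
        resized = [F.interpolate(f, size=size, mode="bilinear",
                                 align_corners=False) for f in features]
        stacked = torch.cat(resized, dim=1)                   # [B, C*S, H, W]
        weights = torch.softmax(self.score(stacked), dim=1)   # [B, S, H, W]
        fused = sum(w.unsqueeze(1) * f
                    for w, f in zip(weights.unbind(dim=1), resized))
        return fused                                          # [B, C, H, W]

# Toy check with three pyramid levels of 32 channels each.
feats = [torch.randn(2, 32, s, s) for s in (64, 32, 16)]
print(ScaleAttention(32, 3)(feats).shape)  # torch.Size([2, 32, 64, 64])
```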

4.
Ultrasound Q ; 37(3): 278-286, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34478428

ABSTRACT

ABSTRACT: Segmentation of anatomical structures from ultrasound images requires the expertise of an experienced clinician, and developing an automated segmentation process is complicated by characteristic artifacts. In this article, we present a novel end-to-end network that enables automated measurements of the fetal head circumference (HC) and fetal abdomen circumference (AC) to be made from 2-dimensional (2D) ultrasound images during each pregnancy trimester. These measurements are necessary because the HC and AC are used to predict gestational age and to monitor fetal growth. Automated HC and AC assessments are valuable for providing independent and objective results and are particularly useful in developing countries where trained sonographers are in short supply. We propose a scale attention expanding network that builds a feature pyramid inside the network; the intermediate result of each scale is then concatenated to the feature with a fusion scheme for the next layer. Furthermore, a scale attention module is proposed for selecting the most useful scale and for reducing scale noise. To optimize the network, a deep supervision method based on boundary attention is employed. Results of experiments show that the scale attention expanding network obtained an absolute difference, Hausdorff distance, and Dice similarity coefficient of 1.81 ± 1.69%, 1.22 ± 0.77%, and 97.94%, respectively, which were the top results on the HC18 dataset, and the respective results on the abdomen set were 2.23 ± 2.38%, 0.42 ± 0.56%, and 98.04%. The experiments conducted demonstrate that our method provides superior performance to existing fetal ultrasound segmentation methods.
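For reference, here is a small NumPy sketch of two of the evaluation quantities reported above, the Dice similarity coefficient and the absolute difference of derived circumference measurements; the masks and measurement values are illustrative only, not data from the paper.

```python
# Sketch of the evaluation quantities: Dice similarity between a predicted and
# a reference mask, and absolute difference between circumference measurements.
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def absolute_difference(pred_measure_mm, gt_measure_mm):
    """Absolute difference between predicted and reference circumference."""
    return abs(pred_measure_mm - gt_measure_mm)

# Toy masks: two overlapping discs standing in for predicted / annotated heads.
yy, xx = np.mgrid[:128, :128]
gt = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
pred = (xx - 66) ** 2 + (yy - 63) ** 2 < 39 ** 2
print(f"Dice: {dice_coefficient(pred, gt):.4f}")
print(f"Abs. diff: {absolute_difference(172.3, 170.1):.1f} mm")  # made-up values
```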


Subject(s)
Artifacts, Prenatal Ultrasonography, Attention, Female, Fetus, Humans, Computer-Assisted Image Processing, Pregnancy, Ultrasonography
5.
Med Biol Eng Comput ; 58(11): 2879-2892, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32975706

ABSTRACT

Measurement of anatomical structures from ultrasound images requires the expertise of experienced clinicians. Moreover, artifacts and other factors complicate automated measurement. In this paper, we present a novel end-to-end deep learning network that automatically measures fetal head circumference (HC), biparietal diameter (BPD), and occipitofrontal diameter (OFD) length from 2D ultrasound images. Fully convolutional neural networks (FCNNs) have shown significant improvement in natural image segmentation. Therefore, to overcome the potential difficulties in automated segmentation, we present a novel FCNN and add a regression branch that predicts OFD and BPD in parallel. In the segmentation branch, a feature pyramid is built inside the network from low-level feature layers to handle the variety of fetal head appearances in ultrasound images, which differs from traditional feature pyramid construction. To select the most useful scale and reduce scale noise, an attention mechanism is applied to filter the features. In the regression branch, for accurate estimation of OFD and BPD lengths, a new region of interest (ROI) pooling layer is proposed to extract the elliptic feature map. We also evaluate the performance of our method on a large dataset, HC18. Our experimental results show that our method achieves better performance than existing fetal head measurement methods. Graphical Abstract: Deep Neural Network for Fetal Head Measurement.
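The elliptic ROI pooling idea can be illustrated with a simplified stand-in, masked average pooling over an ellipse defined by center, semi-axes, and rotation; this NumPy sketch is an assumption about the mechanism, not the paper's layer.

```python
# Simplified stand-in for "elliptic ROI pooling": average only the feature-map
# cells that fall inside a predicted ellipse (center, semi-axes, rotation).
import numpy as np

def elliptic_mask(h, w, cx, cy, a, b, theta):
    """Binary mask of an ellipse with semi-axes a, b rotated by theta (rad)."""
    yy, xx = np.mgrid[:h, :w]
    x, y = xx - cx, yy - cy
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (xr / a) ** 2 + (yr / b) ** 2 <= 1.0

def elliptic_roi_pool(feature_map, ellipse):
    """Average each channel over the elliptic region only."""
    c, h, w = feature_map.shape
    mask = elliptic_mask(h, w, *ellipse)
    return feature_map[:, mask].mean(axis=1)  # one pooled value per channel

# Toy example: pool a 16-channel feature map inside a hypothetical head ellipse.
feat = np.random.rand(16, 64, 64)
pooled = elliptic_roi_pool(feat, ellipse=(32, 30, 20, 14, np.deg2rad(15)))
print(pooled.shape)  # (16,) feature vector fed to the OFD/BPD regression branch
```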


Subject(s)
Head/diagnostic imaging, Head/embryology, Computer-Assisted Image Processing/methods, Prenatal Ultrasonography/methods, Deep Learning, Female, Humans, Pregnancy
6.
Artif Intell Med ; 66: 1-13, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26363682

ABSTRACT

OBJECTIVE: This paper aims at developing an automated gastroscopic video summarization algorithm to help clinicians go through the abnormal contents of a video more effectively. METHODS AND MATERIALS: To select the most representative frames from the original video sequence, we formulate gastroscopic video summarization as a dictionary selection problem. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate an attention cost by merging both gaze and content change into a prior cue to help select frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. RESULTS: For the experiments, we built a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compared our method with state-of-the-art methods using content consistency, index consistency, and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated on content consistency, 24 of 30 videos evaluated on index consistency, and all videos evaluated on content-index consistency. CONCLUSIONS: For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be adapted automatically to various real applications, such as the training of young clinicians, computer-aided diagnosis, or medical report generation.
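A hedged sketch of greedy dictionary selection with a similar-inhibition term follows: each step picks the frame that best represents the video, weighted by an attention prior, while being penalized for similarity to frames already chosen. The scoring below is a simplification for illustration, not the paper's exact objective.

```python
# Greedy key-frame (dictionary) selection with a similar-inhibition penalty.
import numpy as np

def select_key_frames(features, k, attention=None, inhibition=1.0):
    """features: (n_frames, d) descriptors; attention: optional per-frame prior."""
    n = features.shape[0]
    feats = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = feats @ feats.T                     # cosine similarity between frames
    prior = np.ones(n) if attention is None else attention
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            coverage = sim[i].mean()          # how well frame i represents the video
            redundancy = max((sim[i, j] for j in selected), default=0.0)
            score = prior[i] * coverage - inhibition * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Toy run on random frame descriptors with a random attention prior.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64))
print(select_key_frames(frames, k=5, attention=rng.random(200)))
```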


Subject(s)
Algorithms, Artificial Intelligence, Data Compression/methods, Gastroscopy, Computer-Assisted Image Interpretation/methods, Stomach Diseases/diagnosis, Video Recording, Case-Control Studies, Humans, Automated Pattern Recognition, Predictive Value of Tests, Reproducibility of Results
7.
IEEE Trans Biomed Eng ; 63(11): 2347-2358, 2016 11.
Article in English | MEDLINE | ID: mdl-26890528

ABSTRACT

GOAL: Most state-of-the-art computer-aided endoscopic diagnosis methods require pixelwise labeled data to train various supervised machine learning models. However, collecting sufficient precisely labeled image data is tedious and time-consuming. Fortunately, we can easily obtain large numbers of endoscopic medical reports, including diagnostic text and images, which can be treated as weakly labeled data. METHODS: In this paper, our motivation is to design a new computer-aided endoscopic diagnosis system without human-specific labeling; in comparison with most state-of-the-art systems, ours depends only on endoscopic images with weak labels mined from the diagnostic text. To achieve this, we first cast each endoscopic image folder and its included images as a bag and its instances, and represent each instance with a global bag-of-words model. We then adopt a feature mapping scheme to represent each bag by automatically mining the most suspicious lesion instance from each positive bag. To achieve online self-updating from sequentially arriving data, an online metric learning method is used to optimize the bag-level classification. RESULTS: Our computer-aided endoscopic diagnosis system achieves an AUC of 0.93 on a new endoscopic image dataset captured from 424 volunteers with more than 12k images. CONCLUSION: The system outperforms other state-of-the-art methods when we mine the most positive instances from positive bags and adopt the online phase to mine more information from unseen bags. SIGNIFICANCE: We present the first weakly labeled endoscopic image dataset for computer-aided endoscopic diagnosis and a novel system that is suitable for use in clinical settings.
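The bag-and-instance construction can be sketched as follows: each report folder is a bag of image descriptors, and the bag is represented by its most suspicious instance under a scoring function. The linear scorer and classifier below are placeholders for illustration; the paper learns an online metric instead.

```python
# Multiple-instance sketch: represent each report (bag) of endoscopic images
# (instances) by its most suspicious instance under a simple linear scorer.
import numpy as np

def bag_representation(instance_features, scorer_w):
    """Pick the instance with the highest suspicion score to represent the bag."""
    scores = instance_features @ scorer_w
    return instance_features[np.argmax(scores)]

def classify_bags(bags, labels_from_text, scorer_w, classifier_w, bias=0.0):
    """Weak labels come from report text; prediction is a linear rule on the
    mined most-suspicious instance of each bag."""
    correct = 0
    for instances, label in zip(bags, labels_from_text):
        rep = bag_representation(instances, scorer_w)
        pred = 1 if rep @ classifier_w + bias > 0 else 0
        correct += int(pred == label)
    return correct / len(bags)

# Toy data: 50 bags of bag-of-words histograms with text-mined 0/1 labels.
rng = np.random.default_rng(1)
bags = [rng.random((rng.integers(5, 30), 100)) for _ in range(50)]
labels = rng.integers(0, 2, 50)
w = rng.normal(size=100)
print(classify_bags(bags, labels, scorer_w=w, classifier_w=w))
```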


Subject(s)
Endoscopy/methods, Computer-Assisted Image Interpretation/methods, Machine Learning, Algorithms, Factual Databases, Humans, Theoretical Models