Results 1 - 20 of 62
1.
Proc Natl Acad Sci U S A ; 119(16): e2115064119, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35412891

ABSTRACT

Matrix completion problems arise in many applications including recommendation systems, computer vision, and genomics. Increasingly larger neural networks have been successful in many of these applications but at considerable computational costs. Remarkably, taking the width of a neural network to infinity allows for improved computational performance. In this work, we develop an infinite width neural network framework for matrix completion that is simple, fast, and flexible. Simplicity and speed come from the connection between the infinite width limit of neural networks and kernels known as neural tangent kernels (NTK). In particular, we derive the NTK for fully connected and convolutional neural networks for matrix completion. The flexibility stems from a feature prior, which allows encoding relationships between coordinates of the target matrix, akin to semisupervised learning. The effectiveness of our framework is demonstrated through competitive results for virtual drug screening and image inpainting/reconstruction. We also provide an implementation in Python to make our framework accessible on standard hardware to a broad audience.
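The NTK connection above reduces infinite-width training to kernel regression over the observed entries. As a toy illustration (not the authors' derived NTK), the sketch below performs kernel ridge regression with an RBF kernel over one-hot (row, column) coordinate features to fill missing matrix entries; the function name and kernel choice are our own.

```python
import numpy as np

def kernel_complete(M, mask, gamma=0.5, ridge=1e-6):
    """Fill missing entries of M (mask True where observed) by kernel ridge
    regression over one-hot (row, col) coordinate features.
    A toy stand-in for the NTK derived in the paper, not the authors' kernel."""
    n, m = M.shape
    feats = lambda i, j: np.concatenate([np.eye(n)[i], np.eye(m)[j]])
    obs = [(i, j) for i in range(n) for j in range(m) if mask[i, j]]
    X = np.stack([feats(i, j) for i, j in obs])          # (k, n+m)
    y = np.array([M[i, j] for i, j in obs])
    # RBF kernel between observed coordinate features
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    alpha = np.linalg.solve(K + ridge * np.eye(len(obs)), y)
    out = M.copy().astype(float)
    for i in range(n):
        for j in range(m):
            if not mask[i, j]:
                k_x = np.exp(-gamma * ((X - feats(i, j)) ** 2).sum(-1))
                out[i, j] = k_x @ alpha
    return out
```

Observed entries are kept exact; only unobserved entries are predicted from the kernel fit.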


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Computers; Image Processing, Computer-Assisted/methods; Machine Learning; Supervised Machine Learning
2.
Sensors (Basel) ; 24(11)2024 May 28.
Article in English | MEDLINE | ID: mdl-38894277

ABSTRACT

If we visit famous and iconic landmarks, we may want to take a photo of them. However, such sites are usually crowded, and capturing the landmark alone, without people, can be challenging. This paper aims to automatically remove people from a picture and produce a natural image of the landmark alone. To this end, it presents Thanos, a system to generate authentic human-removed images in crowded places. It is designed to produce high-quality images at reasonable computational cost using short video clips of a few seconds. For this purpose, a multi-frame-based recovery region minimization method is proposed. The key idea is to aggregate information partially available from multiple image frames to minimize the area to be restored. Evaluation results show that the proposed method outperforms alternatives, achieving lower Fréchet Inception Distance (FID) scores with comparable processing latency. The images produced by Thanos also achieve a lower FID score than those of existing applications: Thanos scores 242.8, while Retouch-photos and Samsung object eraser score 249.4 and 271.2, respectively.
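The recovery-region-minimization step can be sketched as follows: walk the frames and, for each pixel, keep the first value not covered by a person, so that only pixels occluded in every frame are left for the inpainting model. This is a hedged reconstruction of the aggregation idea, not Thanos's implementation; names and array shapes are assumptions.

```python
import numpy as np

def aggregate_frames(frames, person_masks):
    """Combine several frames of the same scene: for each pixel, take the
    first frame in which it is NOT covered by a person. Pixels covered in
    every frame stay missing and are left for an inpainting model.
    frames: (T, H, W) array; person_masks: (T, H, W) bool, True = person."""
    T, H, W = frames.shape
    out = np.zeros((H, W), dtype=frames.dtype)
    still_missing = np.ones((H, W), dtype=bool)
    for t in range(T):
        usable = still_missing & ~person_masks[t]
        out[usable] = frames[t][usable]
        still_missing &= person_masks[t]
    return out, still_missing
```

The returned `still_missing` mask is exactly the minimized region a generative model then has to restore.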

3.
Sensors (Basel) ; 24(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475140

ABSTRACT

Land Surface Temperature (LST) is an important resource for a variety of tasks. The data are mostly free of charge and combine high spatial and temporal resolution with reliable data collection over a historical timeframe. When remote sensing is used to provide LST data, such as the MOD11A1 product derived from the MODIS sensors aboard NASA satellites, data acquisition can be hindered by clouds or cloud shadows occluding the sensors' view of different areas of the world. This makes it difficult to take full advantage of the high resolution of the data. A common solution is statistical interpolation, such as fitting polynomials or thin plate spline interpolation. These methods have difficulty incorporating additional knowledge about the research area and learning local dependencies that can help with the interpolation process. We propose a novel approach to interpolating remote sensing LST data in a fixed research area using local ground-site air temperature measurements. The two-step approach consists of learning the LST from air temperature measurements where the ground-site weather stations are located, and interpolating the remaining missing values with partial convolutions within a U-Net deep learning architecture. Our approach improves the interpolation of LST for our research area by 44% in terms of RMSE compared to state-of-the-art statistical methods. Due to the use of air temperature, we can provide 100% coverage even when no valid LST measurements are available. The resulting gapless coverage of high-resolution LST data will help unlock the full potential of remote sensing LST data.
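A minimal sketch of the two-step idea, under our own simplifying assumptions: a single linear LST-air-temperature relation fitted across cloud-free station pixels, and an iterative neighbour-mean fill standing in for the partial-convolution U-Net.

```python
import numpy as np

def two_step_fill(lst, air_temp, station_mask):
    """Step 1: fit LST ~ a*T_air + b at cloud-free station pixels, then
    predict LST at clouded station pixels from air temperature.
    Step 2 (toy stand-in for the paper's partial-convolution U-Net): fill
    remaining gaps with the mean of valid 4-neighbours, iterated to closure.
    lst: (H, W) with np.nan where clouded; air_temp: (H, W); station_mask: bool."""
    out = lst.copy()
    valid = ~np.isnan(out)
    fit = valid & station_mask
    a, b = np.polyfit(air_temp[fit], out[fit], 1)       # step 1
    fill1 = station_mask & ~valid
    out[fill1] = a * air_temp[fill1] + b
    while np.isnan(out).any():                          # step 2
        nan = np.isnan(out)
        padded = np.pad(out, 1, constant_values=np.nan)
        neigh = np.stack([padded[1:-1, :-2], padded[1:-1, 2:],
                          padded[:-2, 1:-1], padded[2:, 1:-1]])
        means = np.nanmean(neigh, axis=0)
        fillable = nan & ~np.isnan(means)
        out[fillable] = means[fillable]
    return out
```

With air temperature available everywhere at stations, step 1 alone already yields the paper's claimed 100% coverage for those pixels; step 2 only handles whatever remains.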

4.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065868

ABSTRACT

An interpolation method, which estimates unknown values from constrained information, is based on mathematical calculation. In this study, we addressed interpolation from an image-based perspective and expanded the use of image inpainting to estimate values at unknown points. When a chemical gas is dispersed through a chemical attack or terrorism, the concentration of the gas at each location can be determined using the deployed sensors. By interpolating the concentrations, we can obtain contours of gas concentration. Accurately distinguishing the contours of a contaminated region on a map enables an optimal response that minimizes damage. However, areas with an insufficient number of sensors have less accurate contours than other areas. To achieve more accurate contour data, an image inpainting-based method is proposed that enhances reliability by erasing and reconstructing low-accuracy areas in the contour. Partial convolution is used as the machine learning approach for image inpainting, with a modified loss function for optimization. To train the model, we developed a gas diffusion simulation model and generated a gas concentration contour dataset comprising 100,000 contour images. The results were compared to those of Kriging interpolation, a conventional spatial interpolation method, ultimately demonstrating 13.21% higher accuracy. This suggests that interpolation from an image-based perspective can achieve higher accuracy than numerical interpolation on well-trained data. The proposed method was validated using gas concentration contour data from the verified gas dispersion modeling software Nuclear Biological Chemical Reporting And Modeling System (NBC_RAMS), developed by the Agency for Defense Development, South Korea.
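For context, the conventional spatial-interpolation family the paper compares against can be illustrated with inverse-distance weighting, a simpler relative of Kriging. This is not the paper's code; the function name and power parameter are our own.

```python
import numpy as np

def idw_contour(sensors, values, grid_shape, power=2.0):
    """Interpolate sparse sensor readings onto a grid with inverse-distance
    weighting -- a simple conventional baseline in the same family as the
    Kriging comparison in the paper.
    sensors: (N, 2) array of (row, col); values: (N,) concentrations."""
    H, W = grid_shape
    rr, cc = np.mgrid[0:H, 0:W]
    pts = np.stack([rr.ravel(), cc.ravel()], axis=1).astype(float)
    d = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at sensor sites
    w = d ** (-power)
    grid = (w @ values) / w.sum(axis=1)
    return grid.reshape(H, W)
```

Contour lines are then simply level sets of the returned grid; the paper's contribution is replacing this numerical step with learned image inpainting in low-sensor-density areas.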

5.
Neuroimage ; 265: 119787, 2023 01.
Article in English | MEDLINE | ID: mdl-36473647

ABSTRACT

Multiple sclerosis (MS) is a chronic inflammatory and neurodegenerative disease characterized by the appearance of focal lesions across the central nervous system. The discrimination of acute from chronic MS lesions may yield novel biomarkers of inflammatory disease activity which may support patient management in the clinical setting and provide endpoints in clinical trials. On a single timepoint and in the absence of a prior reference scan, existing methods for acute lesion detection rely on the segmentation of hyperintense foci on post-gadolinium T1-weighted magnetic resonance imaging (MRI), which may underestimate recent acute lesion activity. In this paper, we aim to improve the sensitivity of acute MS lesion detection in the single-timepoint setting, by developing a novel machine learning approach for the automatic detection of acute MS lesions, using single-timepoint conventional non-contrast T1- and T2-weighted brain MRI. The MRI input data are supplemented via the use of a convolutional neural network generating "lesion-free" reconstructions from original "lesion-present" scans using image inpainting. A multi-objective statistical ranking module evaluates the relevance of textural radiomic features from the core and periphery of lesion sites, compared within "lesion-free" versus "lesion-present" image pairs. Then, an ensemble classifier is optimized through a recursive loop seeking consensus both in the feature space (via a greedy feature-pruning approach) and in the classifier space (via model selection repeated after each pruning operation). This leads to the identification of a compact textural signature characterizing lesion phenotype. On the patch-level task of acute versus chronic MS lesion classification, our method achieves a balanced accuracy in the range of 74.3-74.6% on fully external validation cohorts.


Subject(s)
Multiple Sclerosis; Neurodegenerative Diseases; Humans; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Neurodegenerative Diseases/pathology; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Brain/pathology; Machine Learning
6.
Sensors (Basel) ; 23(15)2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37571471

ABSTRACT

Image inpainting is an active area of research in image processing that focuses on reconstructing damaged or missing parts of an image. The advent of deep learning has greatly advanced the field of image restoration in recent years. While there are many existing methods that can produce high-quality restoration results, they often struggle when dealing with images that have large missing areas, resulting in blurry and artifact-filled outcomes. This is primarily because of the presence of invalid information in the inpainting region, which interferes with the inpainting process. To tackle this challenge, the paper proposes a novel approach called separable mask update convolution. This technique automatically learns and updates the mask, which represents the missing area, to better control the influence of invalid information within the mask area on the restoration results. Furthermore, this convolution method reduces the number of network parameters and the size of the model. The paper also introduces a regional normalization technique that collaborates with separable mask update convolution layers for improved feature extraction, thereby enhancing the quality of the restored image. Experimental results demonstrate that the proposed method performs well in restoring images with large missing areas and outperforms state-of-the-art image inpainting methods significantly in terms of image quality.
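The mask-update mechanism this family of methods builds on can be sketched with a plain (non-separable) partial convolution: outputs are renormalised by the fraction of valid pixels in each window, and the mask grows wherever a window saw at least one valid pixel. The paper's separable variant and learned update are not reproduced here.

```python
import numpy as np

def partial_conv2d(img, mask, kernel):
    """One partial-convolution step with mask update. img, mask: (H, W);
    kernel: (k, k); mask is 1 where pixels are valid, 0 in the hole."""
    k = kernel.shape[0]
    pad = k // 2
    ip = np.pad(img * mask, pad)             # invalid pixels contribute zero
    mp = np.pad(mask.astype(float), pad)
    H, W = img.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            valid = mp[i:i+k, j:j+k].sum()
            if valid > 0:
                # renormalise by the fraction of valid pixels in the window
                out[i, j] = (ip[i:i+k, j:j+k] * kernel).sum() * (k * k / valid)
                new_mask[i, j] = 1.0          # hole shrinks by one ring
    return out, new_mask
```

Repeated application shrinks the hole from its boundary inward, which is why controlling how the mask is updated matters so much for large missing areas.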

7.
Sensors (Basel) ; 23(4)2023 Feb 19.
Article in English | MEDLINE | ID: mdl-36850914

ABSTRACT

The proliferation of deep learning has propelled image inpainting into an important research field. Although current image inpainting models have made remarkable achievements, two-stage methods are prone to structural errors because the rough inpainting stage receives insufficient treatment. To address this problem, we propose a multi-step structured image inpainting model that incorporates attention mechanisms. Unlike previous two-stage inpainting models, we divide the damaged area into four sub-areas, calculate the priority of each sub-area, specify the inpainting order accordingly, and complete the rough inpainting stage over several passes. The multi-step method enhances the stability of the model. The structural attention mechanism strengthens the expression of structural features and improves the quality of structure and contour reconstruction. Experimental evaluation on benchmark datasets shows that our method effectively reduces structural errors and improves image inpainting results.

8.
Sensors (Basel) ; 23(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36679769

ABSTRACT

Specular reflections often exist in endoscopic images; they not only hurt many computer vision algorithms but also seriously interfere with the surgeon's observation and judgment. Recovering the information behind specular reflection areas is therefore a necessary pre-processing step in medical image analysis and application. Existing highlight detection methods are usually suitable only for medium-brightness images, and existing highlight removal methods are applicable only to images without large specular regions: when dealing with high-resolution medical images with complex texture information, they not only recover poorly but also run inefficiently. To overcome these limitations, this paper proposes a specular reflection detection and removal method for endoscopic images based on brightness classification. It can effectively detect specular regions in endoscopic images of different brightness and improves the efficiency of the algorithm while restoring the texture structure of high-resolution images. In addition to classifying image brightness and enhancing the brightness component of low-brightness images, the method includes two new steps. In the highlight detection phase, an adaptive threshold function that changes with the brightness of the image is used to detect absolute highlights. In the highlight recovery phase, the priority function of the exemplar-based image inpainting algorithm is modified to ensure reasonable and correct repairs, while local priority computation and adaptive local search strategies improve algorithm efficiency and reduce error matching. Experimental results show that, compared with other state-of-the-art methods, our method performs better in both qualitative and quantitative evaluations, and algorithm efficiency is greatly improved when processing high-resolution endoscopy images.
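The brightness-adaptive threshold idea can be sketched as follows. The abstract does not give the paper's actual threshold function, so the form and constants below are illustrative guesses only.

```python
import numpy as np

def detect_highlights(gray, base=0.92, gain=0.25):
    """Detect 'absolute' specular highlights with a threshold that adapts to
    overall image brightness, in the spirit of the paper's adaptive threshold
    function (exact function and constants here are illustrative guesses).
    gray: (H, W) float image in [0, 1]. Returns a bool highlight mask."""
    mean_b = gray.mean()
    # darker images get a lower threshold so dim highlights are still caught
    thresh = np.clip(base - gain * (0.5 - mean_b), 0.5, 0.99)
    return gray > thresh
```

The detected mask would then feed the exemplar-based inpainting stage described in the abstract.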


Subject(s)
Algorithms; Endoscopy; Image Processing, Computer-Assisted/methods
9.
Sensors (Basel) ; 23(19)2023 Oct 08.
Article in English | MEDLINE | ID: mdl-37837143

ABSTRACT

Research on image-inpainting tasks has mainly focused on enhancing performance by augmenting various stages and modules. However, this trend does not consider the increase in the number of model parameters and operational memory, which increases the burden on computational resources. To solve this problem, we propose a Parametric Efficient Image InPainting Network (PEIPNet) for efficient and effective image-inpainting. Unlike other state-of-the-art methods, the proposed model has a one-stage inpainting framework in which depthwise and pointwise convolutions are adopted to reduce the number of parameters and computational cost. To generate semantically appealing results, we selected three unique components: spatially-adaptive denormalization (SPADE), dense dilated convolution module (DDCM), and efficient self-attention (ESA). SPADE was adopted to conditionally normalize activations according to the mask in order to distinguish between damaged and undamaged regions. The DDCM was employed at every scale to overcome the gradient-vanishing obstacle and gradually fill in the pixels by capturing global information along the feature maps. The ESA was utilized to obtain clues from unmasked areas by extracting long-range information. In terms of efficiency, our model has the lowest operational memory compared with other state-of-the-art methods. Both qualitative and quantitative experiments demonstrate the generalized inpainting of our method on three public datasets: Paris StreetView, CelebA, and Places2.
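The parameter savings from replacing a standard convolution with the depthwise + pointwise pair used here are easy to quantify (ignoring biases):

```python
def conv_params(c_in, c_out, k):
    """Parameter counts (no bias) for a standard k x k convolution versus the
    depthwise + pointwise factorisation used in efficient designs like PEIPNet."""
    standard = c_in * c_out * k * k
    separable = c_in * k * k + c_in * c_out   # depthwise, then 1x1 pointwise
    return standard, separable

s, d = conv_params(64, 128, 3)   # 73728 vs 8768 parameters
```

For a 3x3 layer with 64 input and 128 output channels, the factorised form needs roughly 8.4x fewer parameters, which is the efficiency lever the abstract describes.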

10.
Sensors (Basel) ; 23(16)2023 Aug 10.
Article in English | MEDLINE | ID: mdl-37631631

ABSTRACT

Deep-learning-based image inpainting methods have made remarkable advancements, particularly in object removal tasks. The removal of face masks has gained significant attention, especially in the wake of the COVID-19 pandemic, and while numerous methods have successfully addressed the removal of small objects, removing large and complex masks from faces remains demanding. This paper presents a novel two-stage network for unmasking faces considering the intricate facial features typically concealed by masks, such as noses, mouths, and chins. Additionally, the scarcity of paired datasets comprising masked and unmasked face images poses an additional challenge. In the first stage of our proposed model, we employ an autoencoder-based network for binary segmentation of the face mask. Subsequently, in the second stage, we introduce a generative adversarial network (GAN)-based network enhanced with attention and Masked-Unmasked Region Fusion (MURF) mechanisms to focus on the masked region. Our network generates realistic and accurate unmasked faces that resemble the original faces. We train our model on paired unmasked and masked face images sourced from CelebA, a large public dataset, and evaluate its performance on multi-scale masked faces. The experimental results illustrate that the proposed method surpasses the current state-of-the-art techniques in both qualitative and quantitative metrics. It achieves a Peak Signal-to-Noise Ratio (PSNR) improvement of 4.18 dB over the second-best method, with the PSNR reaching 30.96. Additionally, it exhibits a 1% increase in the Structural Similarity Index Measure (SSIM), achieving a value of 0.95.
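PSNR, the headline metric reported here (30.96 dB), is straightforward to compute:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    reconstruction; higher is better, identical images give infinity."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

A 4.18 dB gap, as claimed over the second-best method, corresponds to roughly a 2.6x reduction in mean squared error.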


Subject(s)
COVID-19; Masks; Humans; Pandemics; Personal Protective Equipment; Benchmarking
11.
Sensors (Basel) ; 22(8)2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35458840

ABSTRACT

Most existing image inpainting methods have achieved remarkable progress on small image defects. However, repairing large missing regions with insufficient context information is still an intractable problem. In this paper, a Multi-stage Feature Reasoning Generative Adversarial Network that gradually restores irregular holes is proposed. Specifically, dynamic partial convolution is used to adaptively adjust the restoration proportion during inpainting, which strengthens the correlation between valid and invalid pixels. In the decoding phase, the statistical nature of features in the masked areas differs from that of unmasked areas. To this end, a novel decoder is designed that not only dynamically assigns a scaling factor and bias on a per-feature-point basis using point-wise normalization, but also utilizes skip connections to solve the problem of information loss between the codec network layers. Moreover, to eliminate gradient vanishing and increase the number of reasoning steps, a hybrid weighted merging method consisting of a hard weight map and a soft weight map is proposed to ensemble the feature maps generated during the whole reconstruction process. Experiments on CelebA, Places2, and Paris StreetView show that the proposed model generates results with a PSNR improvement of 0.3 dB to 1.2 dB compared to other methods.


Subject(s)
Image Processing, Computer-Assisted; Semantics; Image Processing, Computer-Assisted/methods; Research Design
12.
Sensors (Basel) ; 22(12)2022 Jun 17.
Article in English | MEDLINE | ID: mdl-35746374

ABSTRACT

The quality of the veneer directly affects the quality and grade of a blockboard made of veneer. To improve the quality and utilization of a defective veneer, a novel deep generative model-based method is proposed, which can generate higher-quality inpainting results. A two-phase network is proposed to stabilize the network training process. Then, region normalization is introduced to solve the inconsistency problem between the mean and standard deviation, improve the convergence speed of the model, and prevent the model gradient from exploding. Finally, a hybrid dilated convolution module is proposed to reconstruct the missing areas of the panels, which alleviates the gridding problem by changing the dilation rate. Experiments on our dataset prove the effectiveness of the improved approach in image inpainting tasks. The results show that the PSNR of the improved method reaches 33.11 and the SSIM reaches 0.93, which are superior to other methods.


Subject(s)
Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods
13.
Entropy (Basel) ; 24(10)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-37420520

ABSTRACT

In this work, we formulate the image in-painting as a matrix completion problem. Traditional matrix completion methods are generally based on linear models, assuming that the matrix is low rank. When the original matrix is large scale and the observed elements are few, they will easily lead to over-fitting and their performance will also decrease significantly. Recently, researchers have tried to apply deep learning and nonlinear techniques to solve matrix completion. However, most of the existing deep learning-based methods restore each column or row of the matrix independently, which loses the global structure information of the matrix and therefore does not achieve the expected results in the image in-painting. In this paper, we propose a deep matrix factorization completion network (DMFCNet) for image in-painting by combining deep learning and a traditional matrix completion model. The main idea of DMFCNet is to map iterative updates of variables from a traditional matrix completion model into a fixed depth neural network. The potential relationships between observed matrix data are learned in a trainable end-to-end manner, which leads to a high-performance and easy-to-deploy nonlinear solution. Experimental results show that DMFCNet can provide higher matrix completion accuracy than the state-of-the-art matrix completion methods in a shorter running time.

14.
J Microsc ; 281(3): 177-189, 2021 03.
Article in English | MEDLINE | ID: mdl-32901937

ABSTRACT

The microscopic image is important data for recording the microstructure information of materials. Researchers usually use image-processing algorithms to extract material features from it and then characterise the material microstructure. However, the microscopic images obtained by a microscope often have random damaged regions, which cause a loss of information, inevitably influence the accuracy of microstructural characterisation, and can even lead to wrong results. To handle this problem, we provide a deep learning-based, fully automatic method for detecting and inpainting damaged regions in material microscopic images, which can automatically inpaint damaged regions with different positions and shapes; we also use a data augmentation method to improve the performance of the inpainting model. We evaluate our method on Al-La alloy microscopic images, showing that it achieves promising inpainting and microstructure-characterisation results compared with other image inpainting software, in terms of both accuracy and time consumption. LAY DESCRIPTION: A basic goal of materials data analysis is to extract useful information from materials datasets that can in turn be used to establish connections along the composition-processing-structure-properties chain. The microscopic images obtained by a microscope are the key carrier of material microstructural information. Researchers usually use image analysis algorithms to extract regions of interest or useful features from microscopic images, aiming to analyse material microstructure, organ tissues, or device quality, etc. Therefore, the integrity and clarity of the microscopic image are the most important attributes for image feature extraction. Scientists and engineers have been trying to develop various technologies to obtain perfect microscopic images.
However, in practice, some extrinsic defects are often introduced during the preparation and/or imaging processes, and eliminating these defects often requires massive effort and cost, or is even impossible at present. Take the microstructure image of a metallic material as an example: samples prepared for microstructure characterisation often need to go through several steps such as cutting, grinding with sandpaper, polishing, etching, and cleaning. During grinding and polishing, defects such as scratches can be introduced. During etching and cleaning, defects such as rust caused by substandard etching, stains, etc. may arise and persist. These defects can be treated as damaged regions with non-fixed positions, different sizes, and random shapes, resulting in a loss of information that seriously affects subsequent visual observation and microstructural feature extraction. To handle this problem, we provide a deep learning-based, fully automatic method for detecting and inpainting damaged regions in material microscopic images, which can automatically inpaint damaged regions with different positions and shapes; we also use a data augmentation method to improve the performance of the inpainting model. We evaluate our method on Al-La alloy microscopic images, showing that it achieves promising inpainting and microstructure-characterisation results compared with other image inpainting software, in terms of both accuracy and time consumption.

15.
Sensors (Basel) ; 21(19)2021 Sep 22.
Article in English | MEDLINE | ID: mdl-34640656

ABSTRACT

Image inpainting aims to fill in corrupted regions with visually realistic and semantically plausible contents. In this paper, we propose a progressive image inpainting method based on a forked-then-fused decoder network. A unit called PC-RN, a combination of partial convolution and region normalization, serves as the basic component for constructing the inpainting network. The PC-RN unit can extract useful features from the valid surroundings while suppressing incompleteness-caused interference. The forked-then-fused decoder network consists of a local reception branch, a long-range attention branch, and a squeeze-and-excitation-based fusing module. Two multi-scale contextual attention modules are deployed in the long-range attention branch for adaptively borrowing features from distant spatial positions. The progressive inpainting strategy allows the attention modules to use the previously filled region, reducing the risk of allocating wrong attention. We conduct extensive experiments on three benchmark databases: Places2, Paris StreetView, and CelebA. Qualitative and quantitative results show that the proposed inpainting model is superior to state-of-the-art works. Moreover, we perform ablation studies to reveal the functionality of each module for the image inpainting task.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer
16.
Sensors (Basel) ; 21(6)2021 Mar 18.
Article in English | MEDLINE | ID: mdl-33803661

ABSTRACT

This paper presents a multi-spectral photometric stereo (MPS) method based on image in-painting, which can reconstruct shape from a multi-spectral image containing a laser line. One of the difficulties in multi-spectral photometric stereo is extracting the laser line, because the illumination required for MPS, e.g., red, green, and blue light, may pollute the laser color. Unlike previous methods, we improve the network proposed by Isola to obtain a Generative Adversarial Network based on image in-painting that separates a multi-spectral image with a laser line into a clean laser image and an uncorrupted multi-spectral image without the laser line. These results are then substituted into the method proposed by Fan to obtain high-precision 3D reconstruction results. To make the proposed method applicable to real-world objects, a rendered image dataset obtained using the rendering models in ShapeNet was used to train the network. Evaluation on the rendered images and real-world images shows the superiority of the proposed approach over several previous methods.

17.
Sensors (Basel) ; 21(9)2021 May 10.
Article in English | MEDLINE | ID: mdl-34068573

ABSTRACT

Recently, deep learning-based techniques have shown great power in image inpainting, especially for square holes. However, they fail to generate plausible results inside missing regions for irregular and large holes, as there is a lack of understanding between missing regions and their existing counterparts. To overcome this limitation, we combine two non-local mechanisms, a contextual attention module (CAM) and an implicit diversified Markov random fields (ID-MRF) loss, with a multi-scale architecture that uses several dense fusion blocks (DFB) based on the dense combination of dilated convolutions, to guide the generative network in restoring discontinuous and continuous large masked areas. To prevent color discrepancies and grid-like artifacts, we apply the ID-MRF loss to improve the visual appearance by comparing similarities of long-distance feature patches. To further capture the long-term relationships of different regions within large missing regions, we introduce the CAM. Although the CAM can create plausible results by reconstructing refined features, it depends on the initially predicted results. Hence, we employ the DFB to obtain larger and more effective receptive fields, which helps predict more precise and fine-grained information for the CAM. Extensive experiments on two widely used datasets demonstrate that our proposed framework significantly outperforms state-of-the-art approaches both quantitatively and qualitatively.

18.
Sensors (Basel) ; 20(21)2020 Oct 30.
Article in English | MEDLINE | ID: mdl-33143187

ABSTRACT

Image inpainting networks can produce visually reasonable results in damaged regions. However, existing inpainting networks may fail to reconstruct proper structures or tend to generate results with color discrepancy. To solve this issue, this paper proposes an image inpainting approach using a proposed two-stage loss function. The loss function uses different Gaussian kernels at different stages of the network: applied to the coarse network, it helps focus on image structure, while applied to the refinement network, it helps restore image details. Moreover, we propose global and local PatchGANs (GAN: generative adversarial network), named GL-PatchGANs, in which global and local Markovian discriminators control the final results. This is beneficial for focusing on regions of interest (ROI) at different scales and tends to produce more realistic structural and textural details. We trained our network separately on three popular image inpainting datasets; both the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) between our results and ground truths on test images show that our network achieves better performance than recent works in most cases. The visual results on the three datasets also show that our network produces visually plausible results compared with recent works.
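The Gaussian kernels combined in such a two-stage loss can be generated as below; assigning a wider sigma to the coarse/structure stage and a narrower one to the refinement/detail stage is our reading of the abstract, not a detail it states.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalised 2-D Gaussian kernel. In a two-stage loss, a wide sigma
    blurs both images toward coarse structure, while a narrow sigma keeps
    fine detail (stage assignment here is our assumption)."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()
```

Convolving both the prediction and the ground truth with such a kernel before taking a pixel loss makes the loss sensitive only to structure at the chosen scale.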

19.
Sensors (Basel) ; 20(6)2020 Mar 24.
Article in English | MEDLINE | ID: mdl-32213982

ABSTRACT

In real applications, obtained depth images are incomplete; therefore, depth image inpainting is studied here. A novel model that is characterised by both a low-rank structure and nonlocal self-similarity is proposed. As a double constraint, the low-rank structure and nonlocal self-similarity can fully exploit the features of single-depth images to complete the inpainting task. First, according to the characteristics of pixel values, we divide the image into blocks, and similar block groups and three-dimensional arrangements are then formed. Then, the variable splitting technique is applied to effectively divide the inpainting problem into the sub-problems of the low-rank constraint and nonlocal self-similarity constraint. Finally, different strategies are used to solve different sub-problems, resulting in greater reliability. Experiments show that the proposed algorithm attains state-of-the-art performance.
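The similar-block grouping behind the nonlocal self-similarity constraint can be sketched as a brute-force patch search (grouping only; the low-rank solve on the grouped stack is not shown, and names are our own):

```python
import numpy as np

def match_blocks(img, ref_top, ref_left, size=2, k=3):
    """Find the k patches most similar to a reference patch (L2 distance),
    the grouping step behind nonlocal self-similarity priors.
    Returns the top-left corners of the k best matches (reference included)."""
    H, W = img.shape
    ref = img[ref_top:ref_top+size, ref_left:ref_left+size]
    scored = []
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            patch = img[i:i+size, j:j+size]
            scored.append((((patch - ref) ** 2).sum(), i, j))
    scored.sort()
    return [(i, j) for _, i, j in scored[:k]]
```

The matched patches are then stacked into a three-dimensional array, which is exactly the "similar block groups and three-dimensional arrangements" the abstract refers to before the low-rank constraint is applied.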

20.
Sensors (Basel) ; 20(6)2020 Mar 13.
Article in English | MEDLINE | ID: mdl-32183041

ABSTRACT

Clinical treatment of skin lesions is primarily dependent on timely detection and delimitation of lesion boundaries for accurate localization of cancerous regions. The prevalence of skin cancer is high, especially melanoma, which is aggressive in nature due to its high metastasis rate. Timely diagnosis is therefore critical for treatment before the onset of malignancy. To address this problem, medical imaging is used for the analysis and segmentation of lesion boundaries from dermoscopic images. Various methods have been used, ranging from visual inspection to textural analysis of the images. However, the accuracy of these methods is too low for proper clinical treatment because of the sensitivity involved in surgical procedures or drug application. This presents an opportunity to develop an automated model with good accuracy for use in a clinical setting. This paper proposes an automated method for segmenting lesion boundaries that combines two architectures, the U-Net and the ResNet, collectively called Res-Unet. Moreover, we also used image inpainting for hair removal, which improved the segmentation results significantly. We trained our model on the ISIC 2017 dataset and validated it on the ISIC 2017 test set as well as the PH2 dataset. Our proposed model attained a Jaccard Index of 0.772 on the ISIC 2017 test set and 0.854 on the PH2 dataset, which are comparable to current state-of-the-art techniques.
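The Jaccard Index reported here is the intersection-over-union of binary segmentation masks:

```python
import numpy as np

def jaccard(pred, gt):
    """Jaccard Index (IoU) between binary segmentation masks: intersection
    divided by union; defined as 1.0 when both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else np.logical_and(pred, gt).sum() / union
```

A score of 0.772 thus means the predicted and ground-truth lesion regions overlap on about 77% of their combined area.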


Subject(s)
Dermoscopy/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Skin/diagnostic imaging; Algorithms; Artifacts; Humans; Skin/pathology; Skin Diseases/diagnostic imaging; Skin Diseases/pathology; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology