Results 1 - 10 of 10
1.
Sensors (Basel) ; 24(7)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38610467

ABSTRACT

Lineaments are distinctive geological structures, and the study of lunar lineaments is of great significance for understanding the history and evolution of the lunar surface. However, existing geographic feature extraction methods are not well suited to extracting lunar lineaments. In this paper, a new lineament extraction method is proposed based on an improved UNet++ and YOLOv5. First, a new lineament dataset is created from CCD data acquired by the LROC. At the same time, the VGG blocks in the downsampling path of UNet++ are replaced with residual blocks, and attention blocks are added between the layers. Second, the YOLOv5 and improved UNet++ networks are trained to perform object detection and semantic segmentation of lineament structures, respectively. Finally, a polygon-match strategy is proposed to combine the results of object detection and semantic segmentation. The experimental results indicate that the new method achieves better and more stable performance than current mainstream networks and the original UNet++ in instance segmentation of lineament structures. In addition, the polygon-match strategy yields more precise edge detail in the resulting instance segmentations.
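As an illustration of the polygon-match idea described above, the sketch below pairs YOLO-style detection boxes with segmented lineament regions by overlap and keeps only the regions confirmed by a detection. The IoU threshold and helper names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def box_mask_iou(box, mask):
    """IoU between a detection box (x1, y1, x2, y2) and one binary lineament mask."""
    x1, y1, x2, y2 = [int(v) for v in box]
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    inter = mask[y1:y2, x1:x2].sum()            # mask pixels falling inside the box
    union = box_area + mask.sum() - inter
    return inter / union if union > 0 else 0.0

def polygon_match(boxes, masks, iou_thresh=0.5):
    """Keep a segmented region only if some detection box overlaps it enough."""
    return [m for m in masks if any(box_mask_iou(b, m) >= iou_thresh for b in boxes)]
```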

2.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 39(5): 897-908, 2022 Oct 25.
Article in Chinese | MEDLINE | ID: mdl-36310478

ABSTRACT

Cranial defects may result from brain tumor surgery or accidental trauma, and the defective skull then requires a custom-designed implant for repair. The edge of the skull implant must accurately match the boundary of the defect, and defects vary widely, so manual design of cranial implants is time-consuming, technically demanding, and of low accuracy. Therefore, an informer residual attention U-Net (IRA-Unet) for the automatic design of three-dimensional (3D) skull implants is proposed in this paper. The Informer mechanism is brought from natural language processing into computer vision for attention extraction: Informer attention makes the model focus more on the location of the skull defect while reducing the computation and parameter count from O(N²) to O(N log N). Furthermore, an informer residual attention block is constructed by combining Informer attention with a residual connection and placing it near the output layer, so that the model can select and synthesize the global receptive field and local information, improving accuracy and speeding up convergence. The open AutoImplant 2020 dataset was used for training and testing, and the effects of direct and indirect acquisition of skull implants on the results were compared and analyzed in the experiments. The results show that the model performs robustly on the 110-case AutoImplant 2020 test set, with a Dice coefficient of 0.9404 and a Hausdorff distance of 3.6866. The proposed model reduces the resources required to run it while maintaining the accuracy of the cranial implant shape, and it effectively helps surgeons automate the design of cranial repairs, thereby improving the quality of patients' postoperative recovery.
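A minimal sketch of a residual attention block placed near the output of a 3D network, in the spirit of the informer residual attention described above; it uses plain scaled dot-product attention and omits the ProbSparse sampling that gives Informer its O(N log N) cost. Layer sizes and placement are assumptions, not the IRA-Unet definition.

```python
import torch
import torch.nn as nn

class ResidualAttention3D(nn.Module):
    """Residual self-attention over flattened voxel tokens near the output layer.
    Plain softmax attention; Informer's ProbSparse sampling is not reproduced here."""
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Linear(channels, channels * 3)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x):                                # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, N, C), N = D*H*W
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / c ** 0.5, dim=-1)
        out = self.proj(attn @ v)                        # (B, N, C)
        out = out.transpose(1, 2).reshape(b, c, d, h, w)
        return x + out                                   # residual connection
```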


Subjects
Computer-Aided Design; Skull; Humans; Skull/surgery; Prostheses and Implants; Head
3.
Sensors (Basel) ; 20(6)2020 Mar 19.
Article in English | MEDLINE | ID: mdl-32204506

ABSTRACT

Synthetic Aperture Radar (SAR) images of targets are heavily affected by coherent speckle noise, so traditional deep learning models struggle to extract key target features and also carry high computational complexity. To address this, an effective lightweight Convolutional Neural Network (CNN) incorporating transfer learning is proposed for SAR target recognition. First, we propose the Atrous-Inception module, which combines atrous convolution with the Inception module to obtain rich global receptive fields while strictly controlling the number of parameters, yielding a lightweight network architecture. Second, a transfer learning strategy is used to transfer prior knowledge from optical, non-optical, and hybrid optical/non-optical domains to the SAR target recognition task, improving recognition performance on small-sample SAR target datasets. Finally, the proposed model achieves a recognition rate of 97.97% on the ten-class MSTAR dataset under standard operating conditions, on par with mainstream target recognition methods, and it shows strong robustness and generalization on small, randomly sampled SAR target datasets.
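A sketch of an Atrous-Inception-style module, assuming it means parallel dilated convolutions with different rates concatenated Inception-style alongside a 1x1 branch; the channel counts and dilation rates below are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AtrousInception(nn.Module):
    """Parallel atrous (dilated) 3x3 convolutions plus a 1x1 branch, concatenated
    Inception-style to widen the receptive field with few parameters."""
    def __init__(self, in_ch, branch_ch=32, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 branch keeps fine detail, as in a standard Inception block
        self.point = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(branch_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return torch.cat([self.point(x)] + [b(x) for b in self.branches], dim=1)
```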

4.
Comput Methods Programs Biomed ; 238: 107601, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37210926

ABSTRACT

BACKGROUND AND OBJECTIVE: Melanoma is a highly malignant skin tumor. Accurate segmentation of skin lesions from dermoscopy images is pivotal for computer-aided diagnosis of melanoma, but blurred lesion boundaries, variable lesion shapes, and other interference factors make this challenging. METHODS: This work proposes a novel framework called CFF-Net (Cross Feature Fusion Network) for supervised skin lesion segmentation. The encoder of the network has dual branches: a CNN branch that extracts rich local features and an MLP branch that establishes both global spatial dependencies and global channel dependencies for precise delineation of skin lesions. A feature-interaction module between the two branches strengthens the feature representation by allowing dynamic exchange of spatial and channel information, retaining more spatial detail and suppressing irrelevant noise. Moreover, an auxiliary prediction task is introduced to learn global geometric information and highlight the boundary of the skin lesion. RESULTS: Comprehensive experiments on four publicly available skin lesion datasets (ISIC 2018, ISIC 2017, ISIC 2016, and PH2) showed that CFF-Net outperformed state-of-the-art models. In particular, compared with U-Net, CFF-Net increased the average Jaccard Index from 79.71% to 81.86% on ISIC 2018, from 78.03% to 80.21% on ISIC 2017, from 82.58% to 85.38% on ISIC 2016, and from 84.18% to 89.71% on PH2. Ablation studies demonstrated the effectiveness of each proposed component, and cross-validation experiments on the ISIC 2018 and PH2 datasets verified the generalizability of CFF-Net under different skin lesion data distributions. Finally, comparison experiments on three public datasets demonstrated the superior performance of our model. CONCLUSION: The proposed CFF-Net performed well on four public skin lesion datasets, especially for challenging cases with blurred lesion edges and low contrast between lesion and background, and it can be employed for other segmentation tasks requiring accurate delineation of boundaries.
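A sketch of what a feature-interaction step between a CNN branch and an MLP branch could look like, with each branch reweighted by channel statistics computed from the other; the gating scheme, module name, and sizes are assumptions rather than the CFF-Net definition.

```python
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    """Exchange channel information between two branches: each branch is gated by
    a sigmoid weighting derived from the other branch's pooled features."""
    def __init__(self, channels):
        super().__init__()
        self.gate_from_mlp = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.gate_from_cnn = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, f_cnn, f_mlp):                      # both: (B, C, H, W)
        g_cnn = self.gate_from_mlp(f_mlp.mean(dim=(2, 3)))  # gate for CNN branch
        g_mlp = self.gate_from_cnn(f_cnn.mean(dim=(2, 3)))  # gate for MLP branch
        f_cnn = f_cnn * g_cnn[:, :, None, None]
        f_mlp = f_mlp * g_mlp[:, :, None, None]
        return f_cnn, f_mlp
```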


Subjects
Melanoma; Skin Diseases; Humans; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Dermoscopy/methods; Skin Diseases/diagnostic imaging; Melanoma/diagnostic imaging; Melanoma/pathology
5.
Polymers (Basel) ; 15(4)2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36850127

ABSTRACT

To accelerate the industrialization of bicomponent fibers, fiber-based flexible devices, and other technical fibers, and to protect the intellectual property of inventors, fast, economical, and easy-to-run test methods are needed to guide the formulation of relevant testing standards. In this study, a quantitative method based on in-situ observation of fiber cross-sections and image processing was developed. First, fiber cross-sections were rapidly prepared by a non-embedding method. Then, transmission and reflection metallographic microscopes were used for in-situ observation and to capture cross-section images of the fibers; this in-situ observation allows rapid identification of the type and spatial distribution of the components in a bicomponent fiber. Finally, the mass percentage of each component was calculated rapidly by AI software according to the density, cross-sectional area, and total test samples of each component. Compared with ultra-depth-of-field microscopy, differential scanning calorimetry (DSC), and the chemical dissolution method, the proposed quantitative analysis is fast, accurate, economical, simple to operate, energy-saving, and environmentally friendly. This method can be widely used in the intelligent qualitative identification and quantitative analysis of bicomponent fibers, fiber-based flexible devices, and blended textiles.
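The mass-percentage step reduces to weighting each component's cross-sectional area by its density, assuming the area ratio holds along the fiber axis. The densities and areas in the sketch below are illustrative values only, not measurements from the paper.

```python
def mass_percentages(densities, areas):
    """Mass fraction (%) of each component in a bicomponent fiber cross-section,
    computed from density * cross-sectional area per component."""
    masses = [rho * a for rho, a in zip(densities, areas)]
    total = sum(masses)
    return [100.0 * m / total for m in masses]

# e.g. a hypothetical PET/PA6 side-by-side fiber
# (densities in g/cm3, cross-sectional areas in um^2)
print(mass_percentages([1.38, 1.14], [450.0, 380.0]))
```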

6.
Comput Intell Neurosci ; 2022: 9986611, 2022.
Article in English | MEDLINE | ID: mdl-35634050

ABSTRACT

In deep convolutional neural networks, datasets often suffer from missing supervised information, and the resulting models can generalize poorly. In this paper, pseudolabels (PL) from Weakly Supervised Learning (WSL) are used to address missing supervision, while a Cross Network (CN) from Multitask Learning (MTL) is used to address weak generalization. In the PL step, labels are predicted for the data lacking supervision, generating pseudolabels for the corresponding samples. In the CN step, the pseudolabeled data and the labeled data are treated as two tasks and trained together. First, the labeled data were split into training and testing sets and preprocessed. Second, the network was initialized and trained, and the model with high accuracy and good generalization was selected as the optimal model. Then, the optimal model was used to predict the unlabeled data and generate pseudolabels. Finally, these steps were repeated several times to find a better optimal model. In experiments on the fused PL and CN model, Facial Beauty Prediction was treated as the main task and the others as auxiliary tasks. The results show that the model is suitable for multitask training across different or similar datasets, and the accuracy on the main Facial Beauty Prediction task reaches 64.76%, higher than the best accuracy achieved by conventional methods.
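A sketch of the pseudolabel-generation step: the current best model labels the unlabeled pool and only confident predictions are kept for joint training. The confidence threshold and loader name are assumptions, not values from the paper.

```python
import torch

def generate_pseudolabels(model, unlabeled_loader, threshold=0.9, device="cpu"):
    """Predict labels for unlabeled images and keep only confident predictions."""
    model.eval()
    pseudo = []
    with torch.no_grad():
        for images in unlabeled_loader:
            probs = torch.softmax(model(images.to(device)), dim=1)
            conf, labels = probs.max(dim=1)
            for img, c, y in zip(images, conf, labels):
                if c.item() >= threshold:
                    pseudo.append((img, y.item()))   # (image, pseudolabel) pair
    return pseudo
```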


Subjects
Generalization, Psychological; Neural Networks, Computer
7.
Comput Intell Neurosci ; 2022: 3470764, 2022.
Article in English | MEDLINE | ID: mdl-35498198

ABSTRACT

Breast cancer detection largely relies on imaging characteristics and on clinicians' ability to identify potential lesions easily and quickly. Magnetic resonance imaging (MRI) of breast tumors has recently shown great promise for enabling automatic identification of breast tumors. Nevertheless, state-of-the-art MRI-based algorithms using deep learning are still limited in their ability to accurately separate tumor from healthy tissue. In this work, we therefore propose an automatic, accurate two-stage U-Net-based segmentation framework for breast tumor detection using dynamic contrast-enhanced MRI (DCE-MRI). The framework was evaluated on T2-weighted MRI data from 160 breast tumor cases, and its performance was compared with that of the standard U-Net model. In the first stage, a refined U-Net model automatically delineates a breast region of interest (ROI) from the surrounding healthy tissue; this automatic segmentation step reduces the influence of background chest tissue on the identification of breast tumors. In the second stage, an improved U-Net model, combining a dense residual module based on dilated convolution with a recurrent attention module, accurately and automatically segments tumor tissue from healthy tissue within the breast ROI obtained in the previous step. Overall, compared with U-Net, the proposed technique exhibited increases of 3%, 3%, 3%, 2%, and 16.2 in the Dice similarity coefficient, Jaccard similarity, positive predictive value, sensitivity, and Hausdorff distance, respectively. The proposed model may in the future aid the clinical diagnosis of breast cancer lesions and help guide individualized patient treatment.
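A sketch of the two-stage pipeline described above: the first network isolates the breast ROI, the second segments tumor tissue within it. The network objects and the 0.5 thresholds are placeholders, not the paper's refined and improved U-Nets.

```python
import numpy as np

def two_stage_segment(breast_net, tumor_net, image):
    """Stage 1: segment the breast ROI from the whole DCE-MRI slice.
    Stage 2: segment tumor tissue only inside that ROI.
    breast_net and tumor_net are callables returning probability maps."""
    breast_mask = breast_net(image) > 0.5             # boolean ROI mask
    roi = image * breast_mask                         # suppress chest background
    tumor_mask = tumor_net(roi) > 0.5
    return np.logical_and(tumor_mask, breast_mask)    # tumor restricted to the ROI
```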


Subjects
Breast Neoplasms; Algorithms; Breast Neoplasms/diagnostic imaging; Female; Humans
8.
Comput Intell Neurosci ; 2019: 1910624, 2019.
Article in English | MEDLINE | ID: mdl-30809254

ABSTRACT

Because of the lack of discriminative face representations and the scarcity of labeled training data, facial beauty prediction (FBP), which aims to assess facial attractiveness automatically, has become a challenging pattern recognition problem. Inspired by recent promising work on fine-grained image classification that uses multiscale architectures to increase the diversity of deep features, we propose BeautyNet for unconstrained facial beauty prediction. First, a multiscale network is adopted to improve the discriminability of facial features. Second, to alleviate the computational burden of the multiscale architecture, the max-feature-map (MFM) is used as the activation function; it not only lightens the network and speeds up convergence but also benefits performance. Finally, a transfer learning strategy is introduced to mitigate the overfitting caused by the scarcity of labeled facial beauty samples and to further improve BeautyNet's performance. Extensive experiments on LSFBD demonstrate that the proposed scheme outperforms state-of-the-art methods, achieving 67.48% classification accuracy.
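The max-feature-map (MFM) activation splits the channels in half and keeps the element-wise maximum, halving the feature maps while acting as a built-in feature selector. A minimal PyTorch sketch of this operation (layer placement in BeautyNet is not shown):

```python
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    """MFM activation: split channels into two halves and keep the element-wise max."""
    def forward(self, x):                     # x: (B, 2C, H, W)
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)                # output: (B, C, H, W)
```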


Subjects
Beauty; Face/physiology; Facial Recognition/physiology; Pattern Recognition, Automated/methods; Transfer (Psychology); Humans; Neural Networks, Computer; Observer Variation
9.
Comput Intell Neurosci ; 2019: 9140167, 2019.
Article in English | MEDLINE | ID: mdl-31915430

ABSTRACT

Although Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) via Convolutional Neural Networks (CNNs) has made great progress with deep learning, some key issues remain unsolved because of insufficient samples and the lack of robust models. In this paper, we propose an efficient transferred Max-Slice CNN (MS-CNN) with L2 regularization for SAR ATR, which enriches the features and recognizes targets with superior performance. First, a data amplification method is presented to reduce computation time and enrich the raw features of SAR targets. Second, the proposed MS-CNN framework is trained with L2 regularization to extract robust features; the L2 term is incorporated to avoid overfitting and to further optimize the model. Third, transfer learning is introduced to enhance feature representation and discrimination, boosting the performance and robustness of the proposed model on small samples. Finally, various activation functions and dropout strategies are evaluated to further improve recognition performance. Extensive experiments demonstrate that the proposed method not only outperforms other state-of-the-art methods on the public and extended MSTAR datasets but also performs well on small, randomly sampled datasets.
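L2 regularization here amounts to adding a squared-norm penalty on the weights to the loss (equivalently, weight decay in the optimizer). The toy model and the 5e-4 coefficient below are illustrative, not the MS-CNN settings.

```python
import torch
import torch.nn as nn

def l2_penalty(model: nn.Module, weight: float = 5e-4) -> torch.Tensor:
    """Sum of squared weights, scaled by the regularization coefficient."""
    return weight * sum((p ** 2).sum() for p in model.parameters() if p.requires_grad)

# Toy classifier standing in for the CNN; the penalty is added to the task loss.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
x, y = torch.randn(4, 1, 64, 64), torch.randint(0, 10, (4,))
loss = nn.CrossEntropyLoss()(model(x), y) + l2_penalty(model)
loss.backward()
```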


Subjects
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pattern Recognition, Automated/methods; Radar; Humans; Machine Learning; Motor Vehicles; Warfare
10.
Comput Intell Neurosci ; 2018: 3803627, 2018.
Article in English | MEDLINE | ID: mdl-30210533

ABSTRACT

Face recognition (FR) with a single sample per person (SSPP) is a challenge in computer vision: with only one sample available for training, facial variations such as pose, illumination, and disguise are difficult to predict. To overcome this problem, this paper proposes a scheme that combines traditional and deep learning (TDL) methods. First, it proposes a sample-expansion method based on a traditional approach; compared with other sample-expansion methods, it is easy and convenient to use and can generate samples with disguise, expression, and mixed variations. Second, it uses transfer learning: a well-trained deep convolutional neural network (DCNN) model is introduced and fine-tuned on a selection of the expanded samples. Third, the fine-tuned model is used in the experiments. Experimental results on the AR, Extended Yale B, FERET, and LFW face databases demonstrate that TDL achieves state-of-the-art performance in SSPP FR.
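A sketch of the fine-tuning step: a pretrained DCNN backbone is frozen, its classifier head replaced, and only the new head trained on the expanded samples. The ResNet-18 backbone, identity count, and hyperparameters are assumptions; the paper does not specify them here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone and freeze it; only the new head will be fine-tuned.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

num_identities = 100                      # illustrative number of enrolled subjects
model.fc = nn.Linear(model.fc.in_features, num_identities)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# for images, labels in expanded_sample_loader:   # fine-tune on expanded samples
#     optimizer.zero_grad()
#     criterion(model(images), labels).backward()
#     optimizer.step()
```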


Subjects
Face; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Pattern Recognition, Automated/methods; Humans; Machine Learning