Results 1 - 6 of 6
1.
Biomedicines; 10(7), 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35885022

ABSTRACT

Infertility is one of the most important health concerns worldwide. It is characterized by the failure to achieve pregnancy after a period of regular unprotected sexual intercourse. In vitro fertilization (IVF) is an assisted reproduction technique that efficiently addresses infertility. IVF replaces natural conception with a manual procedure in which embryos are cultivated in a controlled laboratory environment until they reach the blastocyst stage. The standard IVF procedure includes the transfer of one or two blastocysts selected from several grown in a controlled environment. The morphometric properties of blastocysts and their compartments, such as the trophectoderm (TE), zona pellucida (ZP), inner cell mass (ICM), and blastocoel (BL), are analyzed through manual microscopic analysis to predict viability. Deep learning has been used extensively for medical diagnosis and analysis and can be a powerful tool for automating the morphological analysis of human blastocysts. However, existing approaches are inaccurate and require extensive preprocessing and expensive architectures. Thus, to cope with the automatic detection of blastocyst components, this study proposes a novel multiscale aggregation semantic segmentation network (MASS-Net) that combines four different scales via depth-wise concatenation. The extensive use of depthwise separable convolutions reduces the number of trainable parameters. Further, the multiscale design provides rich spatial information across different resolutions, achieving good segmentation performance without a very deep architecture. MASS-Net uses 2.06 million trainable parameters and accurately detects the TE, ZP, ICM, and BL without any preprocessing stages. Moreover, it provides a separate binary mask for each blastocyst component simultaneously, and these masks capture the structure of each component for embryological analysis. The proposed MASS-Net was evaluated on publicly available human blastocyst (microscopic) imaging data. The experimental results revealed that it can effectively detect the TE, ZP, ICM, and BL with mean Jaccard indices of 79.08%, 84.69%, 85.88%, and 89.28%, respectively, higher than those of state-of-the-art methods.
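
The two architectural ideas the abstract names, depthwise separable convolution and multiscale aggregation via depth-wise concatenation, can be sketched as follows. This is a minimal illustrative approximation, not the authors' MASS-Net: the channel counts, the four-scale input pyramid, and the segmentation head are assumptions.

```python
# Minimal PyTorch sketch: depthwise separable convolutions (fewer trainable
# parameters than standard convolutions) feeding a multiscale aggregation
# that fuses four scales by depth-wise (channel) concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return F.relu(self.pointwise(self.depthwise(x)))

class MultiscaleAggregation(nn.Module):
    """Extract features at four scales, upsample, and concatenate channels."""
    def __init__(self, in_ch=3, ch=16, n_classes=4):  # 4 masks: TE, ZP, ICM, BL
        super().__init__()
        self.blocks = nn.ModuleList(
            [DepthwiseSeparableConv(in_ch, ch) for _ in range(4)]
        )
        self.head = nn.Conv2d(4 * ch, n_classes, 1)  # per-component logits

    def forward(self, x):
        h, w = x.shape[2:]
        feats = []
        for i, block in enumerate(self.blocks):
            xs = F.interpolate(x, scale_factor=1 / 2 ** i) if i else x
            f = block(xs)  # features at scale 1, 1/2, 1/4, 1/8
            feats.append(F.interpolate(f, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return self.head(torch.cat(feats, dim=1))  # depth-wise concatenation

model = MultiscaleAggregation()
masks = torch.sigmoid(model(torch.randn(1, 3, 256, 256)))  # one mask per component
```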

2.
Sensors (Basel); 20(18), 2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32937774

ABSTRACT

Long-distance recognition methods in indoor environments are commonly divided into two categories: face recognition, and combined face and body recognition. Cameras are typically installed on ceilings, which makes it difficult to obtain a frontal image of an individual for face recognition. Therefore, many studies combine the face and body information of an individual. However, because the distance between the camera and an individual is shorter in indoor environments than in outdoor environments, the face information is distorted by motion blur. Several studies have examined deblurring of face images, but there is a paucity of studies on deblurring of body images. To tackle the blur problem, a recognition method is proposed wherein the blur of body and face images is restored using a generative adversarial network (GAN), and the face and body features obtained with a deep convolutional neural network (CNN) are combined by matching-score fusion. This study used our own database, the Dongguk face and body dataset version 2 (DFB-DB2), and the open ChokePoint dataset. The equal error rates (EER) of human recognition on DFB-DB2 and the ChokePoint dataset were 7.694% and 5.069%, respectively, and the proposed method exhibited better results than state-of-the-art methods.


Subjects
Automated Facial Recognition , Biometric Identification/instrumentation , Face , Image Processing, Computer-Assisted , Neural Networks, Computer , Databases, Factual , Humans , Motion (Physics)
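
As a rough illustration of the matching-score fusion step described above: distances computed separately on face and body features are combined with a weighted sum. The cosine distance and the fusion weight here are assumptions for illustration; the deblurring GAN and the actual CNN feature extractors are omitted.

```python
# Illustrative score-level fusion of face and body matching scores.
import numpy as np

def cosine_distance(a, b):
    """Matching score between two feature vectors: smaller = more similar."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def fused_score(face_probe, face_gallery, body_probe, body_gallery, w_face=0.6):
    """Weighted-sum fusion. w_face is a hypothetical weight; in practice it
    would be tuned on a validation set (e.g., to minimize the equal error rate)."""
    s_face = cosine_distance(face_probe, face_gallery)
    s_body = cosine_distance(body_probe, body_gallery)
    return w_face * s_face + (1.0 - w_face) * s_body
```
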
3.
J Clin Med; 8(9), 2019 Sep 11.
Article in English | MEDLINE | ID: mdl-31514466

ABSTRACT

Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these vessels is a time-consuming process that can be automated with deep learning. Vessel detection is difficult due to intensity variation and noise from non-ideal imaging. Although deep learning approaches for vessel segmentation exist, these methods require many trainable parameters, which increases network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method leverages semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart and Health Study in England (CHASE-DB1), and structured analysis of the retina (STARE). Experimental results show that Vess-Net achieved superior performance on all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for the STARE dataset.
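
For reference, the reported Se, Sp, and Acc values are pixel-level measures computed from the predicted and ground-truth vessel masks, roughly as in the generic sketch below (AUC additionally requires the raw per-pixel probabilities and is omitted). This is not the authors' evaluation code.

```python
# Pixel-level segmentation metrics from two binary masks.
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: boolean arrays of equal shape, True = vessel pixel."""
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background pixels correctly rejected
    fp = np.sum(pred & ~truth)    # background mistaken for vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    return {
        "sensitivity": tp / (tp + fn),               # Se
        "specificity": tn / (tn + fp),               # Sp
        "accuracy": (tp + tn) / (tp + tn + fp + fn), # Acc
    }
```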

4.
Sensors (Basel); 18(9), 2018 Sep 11.
Article in English | MEDLINE | ID: mdl-30208648

ABSTRACT

Most current research on human recognition focuses on the re-identification of body images taken by several cameras in outdoor environments; there is almost no research on indoor human recognition. Previous research on indoor recognition has mainly focused on face recognition, because the camera is usually closer to a person indoors than outdoors. However, indoor surveillance cameras are installed near the ceiling and capture images from above in a downward direction, so in most cases people do not look directly at them. It is therefore often difficult to capture frontal face images, and facial recognition accuracy is greatly reduced when they are unavailable. To overcome this problem, the face and body can be used together for human recognition. However, with indoor cameras, in many cases only part of the target body falls within the camera's viewing angle, which reduces the accuracy of human recognition. To address these problems, this paper proposes a multimodal human recognition method that uses both the face and body and is based on deep convolutional neural networks (CNNs). Specifically, to handle partially captured bodies, the results of recognizing the face and body through separate CNNs (VGG Face-16 and ResNet-50, respectively) are combined by score-level fusion using the weighted-sum rule to improve recognition performance. Experiments conducted using the custom-made Dongguk face and body database (DFB-DB1) and the open ChokePoint database demonstrate that the proposed method achieves high recognition accuracy (equal error rates of 1.52% and 0.58%, respectively) in comparison with face- or body-only recognition and other methods from previous studies.


Subjects
Biometric Identification/methods , Neural Networks, Computer , Body Constitution , Databases, Factual , Face/anatomy & histology , Female , Humans , Male
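
A minimal sketch of the two-branch feature extraction described above, using torchvision's generic vgg16 and resnet50 as stand-ins for VGG Face-16 and the body-trained ResNet-50 (the actual networks were trained on face and body data; the untrained weights and input sizes here are assumptions for illustration).

```python
# Two-branch face/body feature extraction with stand-in backbones.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=None)      # stand-in for VGG Face-16 (face branch)
resnet = models.resnet50(weights=None)  # stand-in for ResNet-50 (body branch)

# Drop the classification heads to obtain feature vectors.
vgg_features = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())
resnet.fc = torch.nn.Identity()

face_crop = torch.randn(1, 3, 224, 224)  # cropped face region (dummy input)
body_crop = torch.randn(1, 3, 224, 224)  # cropped body region (dummy input)
face_feat = vgg_features(face_crop)      # face descriptor
body_feat = resnet(body_crop)            # body descriptor
# Matching scores from each branch would then be combined by a weighted sum,
# as in the score-level fusion sketch under item 2.
```
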
5.
Sensors (Basel); 18(9), 2018 Sep 07.
Article in English | MEDLINE | ID: mdl-30205500

ABSTRACT

Conventional nighttime face detection studies mostly use near-infrared (NIR) or thermal cameras, which are robust to environmental illumination variation and low illumination. However, with an NIR camera it is difficult to adjust the intensity and angle of the additional NIR illuminator according to its distance from an object, and thermal cameras are expensive to use for surveillance. For these reasons, we propose a nighttime face detection method based on deep learning using a single visible-light camera. In a long-distance night image, it is difficult to detect faces directly from the entire image due to noise and image blur. Therefore, we propose a two-step Faster region-based convolutional neural network (R-CNN) applied to images preprocessed by histogram equalization (HE). The method sequentially runs body and face detectors, locating the face within the limited body area. This two-step scheme reduces the processing time of Faster R-CNN while maintaining its face detection accuracy. Using a self-constructed database called the Dongguk Nighttime Face Detection database (DNFD-DB1) and an open database from Fudan University, we showed that the proposed method performs better than other existing face detectors. In addition, the proposed two-step Faster R-CNN outperformed a single Faster R-CNN, and HE preprocessing yielded higher accuracy than detection without it.
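
The two-step scheme can be sketched as a small pipeline: histogram-equalize the nighttime frame, detect body regions first, then search for faces only inside each body box. The detect_bodies/detect_faces callables below are placeholders for the two Faster R-CNN detectors (assumed interfaces, not the authors' code).

```python
# Two-step detection: body first, then face within the body region.
import cv2

def preprocess(frame_gray):
    """Histogram equalization to raise contrast in low-light frames."""
    return cv2.equalizeHist(frame_gray)

def two_step_detect(frame_gray, detect_bodies, detect_faces):
    frame = preprocess(frame_gray)
    faces = []
    for (x, y, w, h) in detect_bodies(frame):       # step 1: body detector
        roi = frame[y:y + h, x:x + w]               # limit the search area
        for (fx, fy, fw, fh) in detect_faces(roi):  # step 2: face detector
            faces.append((x + fx, y + fy, fw, fh))  # map back to frame coords
    return faces
```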

6.
Sensors (Basel); 17(11), 2017 Oct 28.
Article in English | MEDLINE | ID: mdl-29143764

ABSTRACT

Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. Alongside traffic sign recognition, road lane detection is one of the most important inputs to lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous driving. Unlike traffic signs, road lane markings are easily degraded by internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition is therefore a difficult challenge. We propose a method that overcomes various illumination problems, particularly severe shadows, by using a fuzzy system and line segment detector algorithms to better detect road lanes with a visible-light camera sensor. Experimental results on three open databases, the Caltech dataset, the Santiago Lanes dataset (SLD), and the Road Marking dataset, showed that our method outperformed conventional lane detection methods.
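
As a rough illustration of the line-segment-detection stage: OpenCV's line segment detector can extract candidate segments, which are then filtered to lane-like orientations in the lower half of the frame. The fuzzy illumination/shadow handling the abstract describes is not reproduced here, and the region-of-interest and angle thresholds are assumptions for illustration. Note that cv2.createLineSegmentDetector availability depends on the OpenCV build.

```python
# Candidate lane segments from a grayscale frame via a line segment detector.
import cv2
import numpy as np

def lane_candidates(gray):
    lsd = cv2.createLineSegmentDetector()  # availability depends on OpenCV build
    lines = lsd.detect(gray)[0]            # detected segments, or None
    if lines is None:
        return []
    h = gray.shape[0]
    candidates = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if min(y1, y2) < h / 2:            # keep segments in the lower (road) half
            continue
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 20 < angle < 160:               # discard near-horizontal segments
            candidates.append((x1, y1, x2, y2))
    return candidates
```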
