Results 1 - 3 of 3
1.
Heliyon; 10(5): e26520, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38434298

ABSTRACT

Computational cell segmentation is a vital area of research, particularly in the analysis of images of cancer cells. Cell lines, such as the widely used HeLa line, are crucial for studying cancer. While deep learning algorithms are commonly employed for cell segmentation, their resource and data requirements can be impractical for many laboratories. In contrast, image processing algorithms provide a promising alternative because of their effectiveness and minimal resource demands. This article presents an algorithm based on digital image processing that segments the nucleus and shape of HeLa cells. The goal is to segment the shape of the cell at the image center and accurately identify its nucleus. The study processes 300 images obtained by Serial Block-Face Scanning Electron Microscopy (SBF-SEM). For cell segmentation, morphological erosion separates touching cells, a distance calculation selects the cell located at the center of the image, and the eroded shape is then used to restore the original cell shape. Nucleus segmentation relies on parameters such as distances and sizes, together with verification stages that ensure accurate detection. The accuracy of the algorithm is demonstrated by comparing it against another algorithm operating under the same conditions, using four segmentation similarity metrics; this evaluation ranks the proposed algorithm as the better choice. The algorithm represents an initial step toward more accurate disease analysis. In addition, it enables measuring shapes and identifying morphological alterations, damage, and changes in organelles within the cell, which can be valuable for diagnostic purposes.
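
For readers who want to experiment with the pipeline sketched in this abstract, the snippet below is a minimal, hedged illustration of the erode-select-restore idea: erode a binary cell mask to split touching cells, keep the connected component nearest the image centre, and grow it back inside the original mask by morphological reconstruction. The binary mask cell_mask, the erosion radius, and the use of scikit-image are illustrative assumptions; the abstract does not specify the paper's actual parameters or tooling.

# Hedged sketch (not the authors' code): separate touching cells by erosion,
# pick the cell nearest the image centre, restore its original extent by
# morphological reconstruction. `cell_mask` and `erosion_radius` are assumptions.
import numpy as np
from skimage.morphology import binary_erosion, disk, reconstruction
from skimage.measure import label, regionprops

def segment_center_cell(cell_mask: np.ndarray, erosion_radius: int = 5) -> np.ndarray:
    """Return a mask of the cell whose centroid lies closest to the image centre."""
    eroded = binary_erosion(cell_mask, disk(erosion_radius))   # split touching cells
    labels = label(eroded)
    if labels.max() == 0:
        return np.zeros_like(cell_mask)

    centre = np.array(cell_mask.shape) / 2.0
    props = regionprops(labels)
    # choose the connected component whose centroid is nearest the image centre
    best = min(props, key=lambda p: np.linalg.norm(np.array(p.centroid) - centre))
    seed = (labels == best.label)

    # grow the eroded seed back inside the original mask (geodesic dilation)
    restored = reconstruction(seed.astype(float), cell_mask.astype(float), method='dilation')
    return restored > 0

The reconstruction step (geodesic dilation) is what lets the shrunken, eroded component recover the full shape of the selected cell without re-merging it with its neighbours.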

2.
Healthcare (Basel); 10(4), 2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35455835

ABSTRACT

Humans express their emotions verbally and through actions, so emotions play a fundamental role in facial expressions and body gestures. Facial expression recognition is a popular topic in security, healthcare, entertainment, advertising, education, and robotics. Detecting facial expressions via gesture recognition is a complex and challenging problem, especially in people with facial impairments such as facial paralysis. Facial palsy, or paralysis, is the inability to move the facial muscles on one or both sides of the face. This work proposes a methodology based on neural networks and handcrafted features to recognize six gestures in patients with facial palsy. The proposed facial palsy gesture recognition system is designed and evaluated on a publicly available database with good results, as a first attempt at this task in the medical field. We conclude that recognizing facial gestures in patients with facial paralysis requires taking the severity of the damage into account, because paralyzed facial regions behave differently from healthy ones, and any recognition system must be able to discern these behaviors.
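
As a rough sketch of the kind of pipeline this abstract describes (handcrafted features feeding a neural network that labels six gestures), the code below derives scale-normalised pairwise landmark distances and trains a small multilayer perceptron. The landmark representation, feature choice, and network size are assumptions for illustration only; the abstract does not disclose the authors' actual features or architecture.

# Hedged sketch: handcrafted geometric features (normalised pairwise landmark
# distances) feeding a small neural network that predicts one of six gestures.
# The landmark pairs and the network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def landmark_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (N, 2) array of facial landmark coordinates for one face."""
    face_size = np.linalg.norm(landmarks.max(axis=0) - landmarks.min(axis=0))
    # pairwise distances between all landmarks, normalised by face size
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu] / face_size

def train_gesture_classifier(X: np.ndarray, y: np.ndarray) -> MLPClassifier:
    """X: one feature vector per image; y: gesture label in {0..5}."""
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    return clf.fit(X, y)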

3.
Diagnostics (Basel); 12(7), 2022 Jun 23.
Article in English | MEDLINE | ID: mdl-35885434

ABSTRACT

The inability to move the facial muscles is known as facial palsy, and it affects various abilities of the patient, for example, performing facial expressions. Recently, automatic approaches that diagnose facial palsy from images using machine learning algorithms have emerged, focusing on providing an objective evaluation of the paralysis severity. This research proposes an approach that analyzes and assesses lesion severity as a classification problem with three levels: healthy, slight, and strong palsy. The method exploits regional information, meaning that only certain areas of the face are of interest. Experiments on multi-class classification tasks are performed with four different classifiers to validate a set of proposed handcrafted features. Across a set of experiments on available image databases, the methodology achieves strong results (up to 95.61% correct detection of palsy patients and 95.58% correct assessment of the severity level). This leads us to believe that analysis of facial paralysis is possible even with partial occlusions, provided that face detection succeeds and facial features are extracted adequately. The results also show that the methodology transfers to other databases while maintaining high performance, even though the image conditions differ and the participants do not perform equivalent facial expressions.
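
The snippet below is a hedged sketch of the regional, three-level severity assessment described in this abstract: per-region descriptors are concatenated into one feature vector, and four classifiers are compared by cross-validated accuracy on the healthy / slight / strong labels. The region names, the specific classifier set, and the label encoding are illustrative assumptions, not the paper's reported configuration.

# Hedged sketch: concatenate per-region features and compare four classifiers
# on the three-class severity task. Regions, features, and classifiers are
# illustrative assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Assumed label encoding: 0 = healthy, 1 = slight palsy, 2 = strong palsy.

def regional_feature_vector(region_features: dict) -> np.ndarray:
    """Concatenate per-region descriptors (e.g. 'eyes', 'mouth') into one vector."""
    return np.concatenate([region_features[name] for name in sorted(region_features)])

def compare_classifiers(X: np.ndarray, y: np.ndarray) -> dict:
    """Cross-validated accuracy of four classifiers on the 3-class severity task."""
    models = {
        "SVM": SVC(kernel="rbf"),
        "Random forest": RandomForestClassifier(random_state=0),
        "k-NN": KNeighborsClassifier(),
        "Decision tree": DecisionTreeClassifier(random_state=0),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}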
