Results 1 - 14 of 14
1.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931747

ABSTRACT

The development of non-contact techniques for monitoring human vital signs has significant potential to improve patient care in diverse settings. By facilitating easier and more convenient monitoring, these techniques can prevent serious health issues and improve patient outcomes, especially for those unable or unwilling to travel to traditional healthcare environments. This systematic review examines recent advancements in non-contact vital sign monitoring techniques, evaluating publicly available datasets and signal preprocessing methods, and identifies potential future research directions in this rapidly evolving field.


Subject(s)
Vital Signs , Humans , Vital Signs/physiology , Physiological Monitoring/methods , Signal Processing, Computer-Assisted
2.
Sensors (Basel) ; 23(7)2023 Mar 30.
Article in English | MEDLINE | ID: mdl-37050678

ABSTRACT

An estimated 1 in 10 adults worldwide has diabetes. Diabetic foot ulcers are among the most common complications of diabetes, and they are associated with a high risk of lower-limb amputation and, as a result, reduced life expectancy. Timely detection and periodic ulcer monitoring can considerably decrease amputation rates. Recent research has demonstrated that computer vision can be used to identify foot ulcers and perform non-contact telemetry by using ulcer and tissue area segmentation. However, the applications are limited to controlled lighting conditions, and expert knowledge is required for dataset annotation. This paper reviews the latest publications on the use of artificial intelligence for ulcer area detection and segmentation. The PRISMA methodology was used to search for and select articles, and the selected articles were reviewed to collect quantitative and qualitative data. Qualitative data were used to describe the methodologies used in individual studies, while quantitative data were used for generalization in terms of dataset preparation and feature extraction. Publicly available datasets were accounted for, and methods for preprocessing, augmentation, and feature extraction were evaluated. It was concluded that public datasets can be combined to form bigger, more diverse datasets, and that the prospects of wider image preprocessing and the adoption of augmentation require further research.


Subject(s)
Diabetes Mellitus , Diabetic Foot , Humans , Diabetic Foot/diagnosis , Artificial Intelligence , Wound Healing , Ulcer
3.
Sensors (Basel) ; 23(7)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37050491

ABSTRACT

In this study, a novel method for automatic microaneurysm detection in color fundus images is presented. The proposed method is based on three main steps: (1) breaking the image down into smaller patches, (2) running inference with the segmentation models, and (3) reconstructing the predicted segmentation map from the output patches. The proposed segmentation method is based on an ensemble of three individual deep networks: U-Net, ResNet34-UNet, and UNet++. Performance is evaluated using the Dice score and IoU. The ensemble-based model achieved higher Dice score (0.95) and IoU (0.91) values than the individual network architectures, and thus shows high practical potential for the detection of early-stage diabetic retinopathy in color fundus images.


Subject(s)
Diabetic Retinopathy , Microaneurysm , Humans , Microaneurysm/diagnostic imaging , Fundus Oculi , Diabetic Retinopathy/diagnostic imaging , Image Processing, Computer-Assisted/methods
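
A rough illustration of the patch-split, ensemble-inference, and reconstruction pipeline described in this record, with the reported Dice and IoU metrics; the Keras-style `predict` interface, the patch size, and the non-overlapping tiling are assumptions for the sketch, not details from the paper.

```python
import numpy as np

def split_into_patches(image, patch=256):
    """Tile an H x W (x C) image into non-overlapping patch x patch pieces."""
    h, w = image.shape[:2]
    return [(y, x, image[y:y + patch, x:x + patch])
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

def ensemble_predict(models, tile):
    """Soft-vote: average the probability maps of all ensemble members."""
    return np.mean([m.predict(tile[None, ...])[0] for m in models], axis=0)

def reconstruct(pred_tiles, out_shape, patch=256):
    """Place predicted tiles back into a full-size segmentation map."""
    out = np.zeros(out_shape, dtype=np.float32)
    for y, x, p in pred_tiles:
        out[y:y + patch, x:x + patch] = p.squeeze()
    return out

def dice(gt, pred, eps=1e-7):
    inter = (gt * pred).sum()
    return (2 * inter + eps) / (gt.sum() + pred.sum() + eps)

def iou(gt, pred, eps=1e-7):
    inter = (gt * pred).sum()
    return (inter + eps) / (gt.sum() + pred.sum() - inter + eps)
```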
4.
Sensors (Basel) ; 22(6)2022 Mar 13.
Article in English | MEDLINE | ID: mdl-35336387

ABSTRACT

Intelligent video surveillance systems are rapidly being introduced in public places. The adoption of computer vision and machine learning techniques enables various applications of the collected video features, one of the major ones being safety monitoring. The efficacy of such systems is measured by the efficiency and accuracy of violent event detection. In this paper, we present a novel architecture for violence detection from video surveillance cameras. Our proposed model is a U-Net-like network for spatial feature extraction that uses MobileNet V2 as an encoder, followed by an LSTM for temporal feature extraction and classification. The proposed model is computationally light and still achieves good results: experiments showed an average accuracy of 0.82 ± 2% and an average precision of 0.81 ± 3% on a complex real-world security camera footage dataset based on RWF-2000.


Subject(s)
Machine Learning , Neural Networks, Computer , Violence
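
A minimal PyTorch sketch of the CNN-plus-LSTM idea in this record, simplified to a plain MobileNet V2 feature extractor over frames followed by an LSTM; the paper's U-Net-like spatial decoder is omitted, and the hidden size and classification head are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ViolenceDetector(nn.Module):
    """Per-frame MobileNet V2 features -> LSTM over time -> binary class."""
    def __init__(self, hidden=128):
        super().__init__()
        backbone = mobilenet_v2(weights="DEFAULT")
        self.features = backbone.features            # spatial encoder
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)              # violent / non-violent

    def forward(self, clips):                         # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        x = self.features(clips.flatten(0, 1))        # (B*T, 1280, h, w)
        x = self.pool(x).flatten(1).view(b, t, -1)    # (B, T, 1280)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                  # classify last time step
```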
5.
Sensors (Basel) ; 21(3)2021 Jan 28.
Article in English | MEDLINE | ID: mdl-33525420

ABSTRACT

BACKGROUND: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to variation in cell size and shape, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large number of image files of varying quality in a reasonable amount of time. METHODS: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed, which reduces the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. RESULTS: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times and comparing the results to those achieved by two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. CONCLUSION: The image fusion time is substantially improved across image resolutions, whilst ensuring the high quality of the fused image.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Automation , Image Enhancement
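
A toy PyTorch sketch of a U-Net-style fusion network, assuming the focal planes are stacked as input channels and fused into a single output image; the seven-plane input (matching the "up to 7 times" reduction), layer widths, and depth are illustrative assumptions, and the paper's actual network is deeper.

```python
import torch
import torch.nn as nn

class TinyFusionUNet(nn.Module):
    """Toy U-Net fusing N focal planes (stacked as channels) into one image."""
    def __init__(self, n_planes=7):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(n_planes, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 1))  # single fused image

    def forward(self, x):                  # x: (B, n_planes, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        u = self.up(e2)
        return self.dec(torch.cat([u, e1], dim=1))  # skip connection
```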
6.
Sensors (Basel) ; 21(1)2020 Dec 24.
Article in English | MEDLINE | ID: mdl-33374461

ABSTRACT

We propose a deep learning method based on the Region-Based Convolutional Neural Network (R-CNN) architecture for the evaluation of sperm head motility in human semen videos. The neural network performs the segmentation of sperm heads, while the proposed central coordinate tracking algorithm allows us to calculate the movement speed of sperm heads. We achieved 91.77% (95% CI, 91.11-92.43%) accuracy of sperm head detection on the VISEM (A Multimodal Video Dataset of Human Spermatozoa) sperm sample video dataset. The mean absolute error (MAE) of sperm head vitality prediction was 2.92 (95% CI, 2.46-3.37), while the Pearson correlation between actual and predicted sperm head vitality was 0.969. These results demonstrate the applicability of the proposed method in automated artificial insemination workflows.


Subject(s)
Deep Learning , Insemination, Artificial , Semen Analysis , Humans , Male , Neural Networks, Computer , Spermatozoa
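
The central-coordinate tracking step could look roughly like the greedy nearest-centroid matcher below; the detector output format, the `max_jump` gating threshold, and the speed definition (mean centroid displacement times frame rate) are assumptions made for the illustration, not details from the paper.

```python
import numpy as np

def track_speeds(frames_detections, fps, max_jump=20.0):
    """Greedy nearest-centroid matching across frames; mean speed per track.

    frames_detections: list over frames; each frame is a list of (x, y)
    sperm-head centroids in pixels (hypothetical detector output format).
    """
    tracks = {i: [np.asarray(c, float)] for i, c in enumerate(frames_detections[0])}
    next_id = len(tracks)
    for dets in frames_detections[1:]:
        dets = [np.asarray(c, float) for c in dets]
        for hist in tracks.values():
            if not dets:
                break
            dists = [np.linalg.norm(hist[-1] - c) for c in dets]
            j = int(np.argmin(dists))
            if dists[j] <= max_jump:         # extend track with closest detection
                hist.append(dets.pop(j))
        for c in dets:                       # leftover detections start new tracks
            tracks[next_id] = [c]
            next_id += 1
    # mean per-frame centroid displacement, converted to pixels per second
    return {tid: fps * np.mean([np.linalg.norm(h[k + 1] - h[k])
                                for k in range(len(h) - 1)])
            for tid, h in tracks.items() if len(h) > 1}
```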
7.
Biomed Eng Online ; 18(1): 120, 2019 Dec 12.
Article in English | MEDLINE | ID: mdl-31830988

ABSTRACT

BACKGROUND: Infertility and subfertility affect a significant proportion of humanity. Assisted reproductive technology has been proven capable of alleviating infertility issues. In vitro fertilisation is one such option whose success is highly dependent on the selection of a high-quality embryo for transfer. This is typically done manually by analysing embryos under a microscope. However, evidence has shown that the success rate of manual selection remains low. New incubators with integrated time-lapse imaging systems are providing new possibilities for embryo assessment. As such, we address this problem by proposing an approach based on deep learning for automated embryo quality evaluation through the analysis of time-lapse images. Automatic embryo detection is complicated by the topological changes of a tracked object. Moreover, the algorithm should process a large number of image files of varying quality in a reasonable amount of time. METHODS: We propose an automated approach to detect human embryo development stages during incubation and to highlight embryos with abnormal behaviour, focusing on five different stages. This method encompasses two major steps. First, the location of the embryo in the image is detected by employing a Haar feature-based cascade classifier and leveraging the radiating lines. Then, a multi-class prediction model based on deep learning is developed to identify the total cell number in the embryo. RESULTS: The experimental results demonstrate that the proposed method achieves an accuracy of at least 90% in the detection of embryo location. The implemented deep learning approach to identify the early stages of embryo development resulted in an overall accuracy of over 92% using the selected convolutional neural network architectures. The most problematic stage was the 3-cell stage, presumably due to its short duration during development. CONCLUSION: This research contributes to the field by proposing a model to automate the monitoring of early-stage human embryo development. Unlike in other imaging fields, only a few published attempts have involved leveraging deep learning in this field. Therefore, the approach presented in this study could be used in the creation of novel algorithms integrated into the assisted reproductive technology used by embryologists.


Subject(s)
Embryonic Development , Image Processing, Computer-Assisted/methods , Machine Learning , Automation , Humans , Molecular Imaging , Time-Lapse Imaging
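
The first step, Haar cascade-based embryo localisation, maps naturally onto OpenCV's `CascadeClassifier`; the cascade file name and detection parameters below are hypothetical, and the radiating-lines refinement and the CNN stage are not shown.

```python
import cv2

# Hypothetical cascade file: a classifier trained on embryo images, not a
# stock OpenCV cascade. Training such a cascade is a separate step.
cascade = cv2.CascadeClassifier("embryo_cascade.xml")

def detect_embryo(gray_frame):
    """Return bounding boxes of embryo candidates in a grayscale frame."""
    boxes = cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(80, 80))
    return boxes  # iterable of (x, y, w, h)
```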
8.
Sensors (Basel) ; 19(16)2019 Aug 16.
Article in English | MEDLINE | ID: mdl-31426441

ABSTRACT

We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, thus resulting in the creation of new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic cell image, but to ensure that it has all the necessary attributes of a real cell image, providing a fully realistic synthetic version. We use human embryo images obtained during cell development for training a deep neural network (DNN). The proposed algorithm uses a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while the expert evaluation showed a true recognition rate (TRR) of 80.0% for four-cell images, 86.8% for two-cell images, and 96.2% for one-cell images. Texture-based comparison using the Haralick features showed no statistically significant differences (Student's t-test, p < 0.01) between the real and synthetic embryo images, except for the sum of variance (one-cell and four-cell images) and the variance and sum of average (two-cell images) features. The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.
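
The texture evaluation described at the end, Haralick features compared per feature with Student's t-test, can be reproduced roughly as follows; the `mahotas`/`scipy` stack is an assumed tooling choice, not necessarily what the authors used.

```python
import numpy as np
import mahotas
from scipy import stats

def haralick_vec(gray_img):
    """13 Haralick texture features averaged over the four GLCM directions.
    gray_img: 2-D unsigned-integer grayscale array."""
    return mahotas.features.haralick(gray_img).mean(axis=0)

def compare_texture(real_imgs, synth_imgs, alpha=0.01):
    """Per-feature Student's t-test between real and synthetic image sets."""
    real = np.array([haralick_vec(im) for im in real_imgs])
    synth = np.array([haralick_vec(im) for im in synth_imgs])
    for k in range(real.shape[1]):
        t, p = stats.ttest_ind(real[:, k], synth[:, k])
        verdict = "differs" if p < alpha else "indistinguishable"
        print(f"feature {k}: t={t:.2f}, p={p:.4f}, {verdict}")
```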

9.
Animals (Basel) ; 14(9)2024 May 06.
Article in English | MEDLINE | ID: mdl-38731394

ABSTRACT

The aim of this study was to analyze the physical and chemical characteristics of chicken droppings (n = 73), which were collected during different age periods and classified by visual inspection into normal (N) and abnormal (A). Significant differences were found in the texture, pH, dry matter (DM), fatty acids (FAs), short-chain fatty acids (SCFAs), and volatile compounds (VCs) between the tested dropping groups (p ≤ 0.05). The age period of the chickens had a significant influence on the color coordinates, texture, pH, DM, and SCFA contents in N and A, as well as on the content of all FAs in N (p ≤ 0.05). Droppings from the N group had a harder texture, lower values of the a* and b* color coordinates, higher DM content, a higher level of linoleic FA, and a lower level of α-linolenic FA than droppings from the A group in each age period (p ≤ 0.05). The predominant SCFA was acetic acid, the content of which was significantly lower in the N group than in the A group. Alcohol and organic acid contents were highest in most of the A group across age periods, while ketones dominated in both the N and A groups. In conclusion, the majority of the tested dropping characteristics were influenced by the age period. While certain characteristics demonstrate differences between N and A, a broader range of droppings is likely required to reveal more distinct trends in the distribution of characteristics.

10.
Int J Retina Vitreous ; 10(1): 40, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783384

ABSTRACT

BACKGROUND: Diabetic retinopathy (DR) is the leading cause of adult blindness in the working-age population worldwide and can be prevented by early detection. Regular eye examinations are recommended and crucial for detecting sight-threatening DR. Use of artificial intelligence (AI) to lessen the burden on the healthcare system is needed. PURPOSE: To perform a pilot cost-analysis study for detecting DR in a cohort of minority women with diabetes mellitus (DM) in Oslo, Norway, who have the highest prevalence of DM in the country, using both manual (ophthalmologist) and autonomous (AI) grading. As far as we know, this is the first study in Norway to use AI in DR grading of retinal images. METHODS: On Minority Women's Day, November 1, 2017, in Oslo, Norway, 33 patients (66 eyes) over 18 years of age diagnosed with DM (T1D and T2D) were screened. The Eidon True Color Confocal Scanner (CenterVue, United States) was used for retinal imaging, and images were graded for DR after screening had been completed, both manually by an ophthalmologist and automatically using the EyeArt Automated DR Detection System, version 2.1.0 (EyeArt, EyeNuk, CA, USA). The gradings were based on the International Clinical Diabetic Retinopathy (ICDR) severity scale [1], detecting the presence or absence of referable DR. Cost-minimization analyses were performed for both grading methods. RESULTS: 33 women (64 eyes) were eligible for the analysis. Very good inter-rater agreement was found between the human and AI-based EyeArt grading systems for detecting DR: 0.98 (P < 0.01). The prevalence of DR was 18.6% (95% CI: 11.4-25.8%), and the sensitivity and specificity were both 100% (95% CI: 100-100%). The cost difference for AI screening compared to human screening was $143 lower per patient (cost-saving) in favour of AI. CONCLUSION: Our results indicate that the EyeArt AI system is a reliable, cost-saving, and useful tool for DR grading in clinical practice.
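
The abstract does not name the agreement statistic; if it is Cohen's kappa, a common choice for two raters, the inter-rater comparison between ophthalmologist and AI gradings reduces to a one-liner with scikit-learn. The per-eye gradings below are made up for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-eye gradings: 1 = referable DR present, 0 = absent.
human_grades = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
ai_grades    = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]

kappa = cohen_kappa_score(human_grades, ai_grades)
print(f"inter-rater agreement (kappa): {kappa:.2f}")
```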

11.
Diagnostics (Basel) ; 13(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37443533

ABSTRACT

Current artificial intelligence algorithms can classify melanomas at a level equivalent to that of experienced dermatologists. The objective of this study was to assess the accuracy of a smartphone-based "You Only Look Once" neural network model for the classification of melanomas, melanocytic nevi, and seborrheic keratoses. The algorithm was trained using 59,090 dermatoscopic images. Testing was performed on histologically confirmed lesions: 32 melanomas, 35 melanocytic nevi, and 33 seborrheic keratoses. The algorithm's decisions were compared with those of two skilled dermatologists and five beginners in dermatoscopy. The algorithm's sensitivity and specificity for melanomas were 0.88 (0.71-0.96) and 0.87 (0.76-0.94), respectively. The algorithm surpassed the beginner dermatologists, who achieved a sensitivity of 0.83 (0.77-0.87). For melanocytic nevi, the algorithm outclassed each group of dermatologists, attaining a sensitivity of 0.77 (0.60-0.90). The algorithm's sensitivity for seborrheic keratoses was 0.52 (0.34-0.69). The smartphone-based "You Only Look Once" neural network model achieved high sensitivity and specificity in the classification of melanomas and melanocytic nevi, with an accuracy similar to that of skilled dermatologists. However, a larger dataset is required to improve the algorithm's sensitivity for seborrheic keratoses.
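
The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch for the binary melanoma vs. non-melanoma case; the label encoding and the toy error counts (chosen to land near the reported 0.88/0.87) are assumptions.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on negatives)
    for binary labels; 1 = melanoma, 0 = other lesion (assumed encoding)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy check: 32 melanomas, 68 other lesions, a few errors each way.
y_true = np.array([1] * 32 + [0] * 68)
y_pred = y_true.copy()
y_pred[:4] = 0           # four melanomas missed
y_pred[-9:] = 1          # nine false alarms
print(sensitivity_specificity(y_true, y_pred))  # ~ (0.88, 0.87)
```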

12.
Animals (Basel) ; 13(19)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37835647

ABSTRACT

The use of artificial intelligence combined with advanced computer vision techniques offers great potential for non-invasive health assessments in the poultry industry. Evaluating the condition of poultry by monitoring their droppings can be highly valuable, as significant changes in consistency and color can be indicators of serious and infectious diseases. While most studies have prioritized the classification of droppings into two categories (normal and abnormal), with some relevant studies dealing with up to five categories, this investigation goes a step further by employing image processing algorithms to categorize droppings into six classes, based on visual information indicating some level of abnormality. To ensure a diverse dataset, data were collected in three different poultry farms in Lithuania by capturing droppings on different types of litter. With the implementation of deep learning, the object detection rate reached 92.41% accuracy. A range of machine learning algorithms, including different deep learning architectures, was explored, and, based on the obtained results, we propose a comprehensive solution combining different models for segmentation and classification. The segmentation task achieved its highest accuracy, a Dice coefficient of 0.88, using the K-means algorithm, while YOLOv5 demonstrated the highest classification accuracy, at 91.78%.
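
The K-means segmentation step can be sketched as pixel-colour clustering; the cluster count, the colour space, and the rule for deciding which cluster is the dropping are assumptions, since the abstract does not spell them out.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(rgb_img, k=2):
    """Cluster pixels by colour; return a label map separating dropping
    from litter. Which cluster is the dropping must be decided afterwards,
    e.g. by comparing mean cluster colours (an assumption, not from the paper)."""
    h, w, _ = rgb_img.shape
    pixels = rgb_img.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)
```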

13.
Waste Manag ; 140: 31-39, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35033802

ABSTRACT

Forecasting municipal solid waste (MSW) generation and composition plays an essential role in effective waste management, policy decision-making, and the MSW treatment process. An intelligent forecasting system could be used for short-term and long-term waste handling, ensuring a circular economy and a sustainable use of resources. This study contributes to the field by proposing a hybrid k-nearest neighbours (H-kNN) approach to forecasting municipal solid waste and its composition in regions that experience data incompleteness and inaccessibility, as is the case for Lithuania and many other countries. For this purpose, the average MSW generation of neighbouring municipalities, as a geographical factor, was used to impute missing values, and socioeconomic factors together with demographic indicators affecting waste collected in municipalities were identified and quantified using correlation analysis. The most influential of these factors, such as population density, GDP per capita, private property, foreign investment per capita, and tourism, were then incorporated in the hierarchical setting of the H-kNN approach. The results showed that, in forecasting MSW generation, H-kNN achieved a MAPE of 11.05%, on average, across all Lithuanian municipalities, which is 7.17 percentage points lower than that obtained using plain kNN. This implies that by finding relevant factors at the municipal level, we can compensate for data incompleteness and enhance the forecasting of MSW generation and composition.


Subject(s)
Refuse Disposal , Waste Management , Cities , Forecasting , Lithuania , Socioeconomic Factors , Solid Waste/analysis
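
A simplified, non-hierarchical sketch of the two ingredients named in the abstract: neighbour-based imputation of missing municipal values, then kNN regression over the socioeconomic features. The data, neighbour lists, and feature columns are toy assumptions; the hierarchical setting of H-kNN is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def impute_by_neighbours(values, neighbour_idx):
    """Replace missing (NaN) municipal MSW values with the mean of their
    geographic neighbours, per the imputation idea in the abstract."""
    values = np.array(values, dtype=float)
    for i, v in enumerate(values):
        if np.isnan(v):
            values[i] = np.nanmean(values[neighbour_idx[i]])
    return values

# Hypothetical socioeconomic features per municipality: population density,
# GDP per capita, private property, foreign investment per capita, tourism.
rng = np.random.default_rng(0)
X = rng.random((50, 5))                    # 50 municipalities (toy data)
y_raw = rng.random(50) * 400               # annual MSW, kg per capita (toy)
y_raw[7] = np.nan                          # one municipality lacks data
neighbours = [[(i - 1) % 50, (i + 1) % 50] for i in range(50)]  # toy adjacency
y = impute_by_neighbours(y_raw, neighbours)

model = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X, y)
print(model.predict(rng.random((3, 5))))   # forecasts for three new rows
```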
14.
Comput Methods Programs Biomed ; 177: 161-174, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31319944

ABSTRACT

BACKGROUND AND OBJECTIVE: Time-lapse microscopy has become an important tool for studying the embryo development process. Embryologists can monitor the entire embryo growth process and thus select the best embryos for transfer. This time- and resource-consuming process is among the key factors in the success of pregnancies. Tools for automated evaluation of embryo quality and development stage prediction are being developed to improve embryo selection. METHODS: We present a two-classifier vote-based method for embryo image classification. Our classification algorithms were trained on features extracted using a Convolutional Neural Network (CNN). Prediction of the embryo development stage is then completed by comparing the confidence of the two classifiers: images are labeled with the prediction of whichever classifier reports the higher confidence. RESULTS: The evaluation was done with imagery of four different developing embryos, taken in the ESCO Time Lapse incubator. The results identify the most effective combination of two classifiers, increasing prediction accuracy to an overall 97.62% on the test set. CONCLUSIONS: We have presented an approach for automated prediction of the embryo development stage from microscopy time-lapse incubator images. Our algorithm extracts high-complexity image features using a CNN. Classification is done by comparing the predictions of two classifiers and selecting the label of the one with the higher confidence value. This combination of two classifiers allowed us to increase the overall accuracy of the CNN from 96.58% to 97.62%, a 1.04 percentage point gain. The best results are achieved when combining the CNN and Discriminant classifiers. Practical implications include improvement of the embryo selection process for in vitro fertilization.


Subject(s)
Embryonic Development , Fertilization in Vitro , Image Processing, Computer-Assisted/methods , Incubators , Microscopy , Time-Lapse Imaging , Algorithms , Decision Trees , Discriminant Analysis , Embryo Transfer , False Positive Reactions , Female , Humans , Markov Chains , Neural Networks, Computer , Pattern Recognition, Automated , Pregnancy , Pregnancy Rate , Reproducibility of Results
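
The voting rule described in the conclusions, labelling each image with the prediction of the more confident classifier, is a few lines given two fitted classifiers exposing a scikit-learn-style `predict_proba`; that interface is an assumption, and in the paper one of the two models is a CNN paired with, e.g., a discriminant classifier over CNN features.

```python
import numpy as np

def vote_predict(clf_a, clf_b, features):
    """Label each sample with the prediction of whichever of the two
    classifiers reports the higher maximum class probability."""
    pa = clf_a.predict_proba(features)      # shape (N, n_classes)
    pb = clf_b.predict_proba(features)
    use_a = pa.max(axis=1) >= pb.max(axis=1)
    return np.where(use_a, pa.argmax(axis=1), pb.argmax(axis=1))
```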