Results 1 - 20 of 105
1.
Sensors (Basel) ; 24(19)2024 Sep 28.
Article in English | MEDLINE | ID: mdl-39409328

ABSTRACT

Urban Heat Islands are a major environmental and public health concern, causing temperatures to rise in urban areas. This study used satellite imagery and machine learning to analyze the spatial and temporal patterns of land surface temperature distribution in the Metropolitan Area of Merida (MAM), Mexico, from 2001 to 2021. The results show that land surface temperature in the MAM increased over the study period while the urban footprint expanded. The study also found a high correlation (r > 0.8) between changes in land surface temperature and land cover classes (urbanization/deforestation). If the current urbanization trend continues, the difference between the land surface temperature of the MAM and its surroundings is expected to reach 3.12 °C ± 1.11 °C by 2030. The findings suggest that the Urban Heat Island effect is a growing problem in the MAM and highlight the value of satellite imagery and machine learning for monitoring it and developing mitigation strategies.
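The 2030 figure in this abstract reads as a linear extrapolation of the urban-rural land-surface-temperature gap. A minimal sketch of that kind of projection, using hypothetical yearly values rather than the study's data:

```python
import numpy as np

# Hypothetical yearly urban-minus-rural land surface temperature
# differences (deg C), 2001-2021 -- illustrative values only.
years = np.arange(2001, 2022)
delta_lst = 1.0 + 0.08 * (years - 2001)

# Fit a linear trend on centred years and extrapolate to 2030.
t = years - 2001
slope, intercept = np.polyfit(t, delta_lst, 1)
projection_2030 = slope * (2030 - 2001) + intercept
```

The study's ±1.11 °C interval would come from the regression's prediction uncertainty, which this sketch omits.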

2.
PeerJ Comput Sci ; 10: e2052, 2024.
Article in English | MEDLINE | ID: mdl-39314724

ABSTRACT

Most natural disasters result from geodynamic events such as landslides and slope collapse. These failures cause catastrophes that directly impact the environment and inflict financial and human losses. Visual inspection is the primary method for detecting failures in geotechnical structures, but on-site visits can be risky due to unstable soil, and the structures' design, together with hostile and remote installation conditions, can make monitoring unviable. When a fast and secure evaluation is required, analysis by computational methods becomes feasible. In this study, a convolutional neural network (CNN) approach to computer vision is applied to identify defects on the surface of geotechnical structures, aided by unmanned aerial vehicles (UAVs) and mobile devices, aiming to reduce the reliance on human-led on-site inspections. Computer vision algorithms remain underexplored in this field due to particularities of geotechnical engineering, such as limited public datasets and redundant images. We therefore collected images of surface failure indicators from slopes near a Brazilian national road, assisted by UAVs and mobile devices, and propose a custom, low-complexity CNN architecture for a binary image classifier that detects faults on geotechnical surfaces. The model achieved a satisfactory average accuracy of 94.26%. An AUC of 0.99 from the receiver operating characteristic (ROC) curve and the confusion matrix on a test dataset also show satisfactory results, suggesting that the model distinguishes between the classes 'damage' and 'intact' very well and enables the identification of failure indicators. Early detection of failure indicators on slope surfaces can support proper maintenance and alarms and help prevent disasters, as the integrity of the soil directly affects the structures built around and above it.
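Accuracy and ROC-AUC, the two headline numbers above, can be computed directly from model scores; a self-contained sketch with toy labels and scores (not the paper's data; score ties are not handled):

```python
import numpy as np

def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a random positive ('damage')
    sample receives a higher score than a random negative ('intact') one."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 1, 0, 0, 0])                # 1 = damage, 0 = intact
scores = np.array([0.9, 0.8, 0.3, 0.35, 0.2, 0.1])   # classifier probabilities
preds = (scores >= 0.5).astype(int)
accuracy = (preds == labels).mean()
auc = roc_auc(labels, scores)
```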

3.
Data Brief ; 56: 110857, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39281012

ABSTRACT

This dataset results from controlled experiments that assess the tolerance of Urochloa spp. and Megathyrsus maximus grasses to nymphal and adult spittlebug damage, particularly from Aeneolamia varia, which significantly impacts forage production in Neotropical regions. Data were collected under standardized conditions using high-throughput phenotyping methods, integrating image-capture techniques and analyses to ensure precise and consistent data acquisition. The dataset serves as a foundational resource for developing and validating computer vision models aimed at automated phenotyping, enabling accurate and high-throughput assessment of plant tolerance to spittlebug damage. Researchers can use the dataset to benchmark and compare different methodologies for plant damage assessment, fostering standardization and reproducibility in phenotyping studies.

4.
Sensors (Basel) ; 24(15)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39123972

ABSTRACT

This study introduces an orbital monitoring system designed to quantify non-technical losses (NTLs) within electricity distribution networks. Leveraging Sentinel-2 satellite imagery alongside advanced techniques in computer vision and machine learning, this system focuses on accurately segmenting urban areas, facilitating the removal of clouds, and utilizing OpenStreetMap masks for pre-annotation. Through testing on two datasets, the method attained a Jaccard index (IoU) of 0.9210 on the training set, derived from the region of France, and 0.88 on the test set, obtained from the region of Brazil, underscoring its efficacy and resilience. The precise segmentation of urban zones enables the identification of areas beyond the electric distribution company's coverage, thereby highlighting potential irregularities with heightened reliability. This approach holds promise for mitigating NTL, particularly through its ability to pinpoint potential irregular areas.
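The Jaccard index (IoU) reported above compares a predicted urban-area mask against the ground truth; a minimal sketch on toy binary masks:

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary masks: |A & B| / |A | B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, target).sum() / union

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
iou = jaccard_index(pred, target)  # 2 shared pixels / 4 in the union = 0.5
```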

5.
Data Brief ; 56: 110780, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39211486

ABSTRACT

This paper presents Libras SignWriting Handshape (LSWH100), a new handshape dataset focused on Sign Language Recognition. The dataset includes 144,000 synthetic images of a realistic human hand, covering 100 distinct handshape classes used in Brazilian Sign Language (Libras). Handshapes are named using the convention from SignWriting, a writing system for sign languages. The dataset contains annotations for classification, detection, segmentation, depth estimation, and 3D hand keypoints. Images include indoor and outdoor scenes during different times of day, centered on a single hand that can change size, 3D rotation, and skin tone. We generated these images using Blender, a free and open-source 3D creation software. This is a challenging dataset that can be further explored. With a focus on sign language, this dataset has the potential to advance sign language recognition systems, positively impacting those who rely on sign language for communication.

6.
Animals (Basel) ; 14(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38997962

ABSTRACT

Aquaculture requires precise, non-invasive methods for biomass estimation. This research validates a novel computer vision methodology, applied specifically to tilapia (Oreochromis niloticus), that uses a signature-function-based feature extraction algorithm combining statistical morphological analysis of fish size and shape with machine learning to improve the accuracy of biomass estimation in fishponds. The automatically extracted features are tested against previously hand-extracted features by comparing results across three common machine learning methods under two different lighting conditions. The dataset for this analysis encompasses 129 tilapia samples. The results are promising: the multilayer perceptron model shows robust performance, consistently demonstrating superior accuracy across different features and lighting conditions. The interpretable nature of the model, rooted in the statistical features of the signature function, could provide insights into morphological and allometric changes at different developmental stages. A comparative analysis against the existing literature underscores the competitiveness of the proposed methodology, pointing to advances in precision, interpretability, and species versatility. This research contributes significantly to the quest for non-invasive fish biometrics that generalize across aquaculture species at different stages of development. Combined with detection, tracking, and posture recognition, deep learning methodologies such as those in the latest studies could yield a powerful method for real-time monitoring of fish morphology development, biomass estimation, and welfare, which are crucial for the effective management of fish farms.

7.
Data Brief ; 55: 110679, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39044903

ABSTRACT

Public digital image datasets for Precision Agriculture (PA) are still scarce. Many problems in this field have been studied in search of solutions, such as detecting weeds, counting fruits and trees, and detecting diseases and pests. One of the main fields of research in PA is detecting different crop types in aerial images. Crop detection is vital in PA to establish crop inventories, planting areas, and crop yields, and to make information available to food markets and public entities that provide technical help to small farmers. This work provides public access to a digital image dataset for detecting green onion and foliage-flower crops located in the rural area of Medellín, Colombia. The dataset consists of 245 images with their respective labels: green onion (Allium fistulosum), foliage flowers (Solidago canadensis and Aster divaricatus), and non-crop areas prepared for planting. A total of 4315 instances were obtained and divided into subsets for training, validation, and testing. The classes in the images were labeled with the polygon method, which allows training machine learning algorithms for detection using bounding boxes or segmentation in the COCO format.
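The COCO format mentioned above stores polygon segmentations from which bounding boxes can be derived. A sketch with a hypothetical annotation entry (the file name, ids, and coordinates are invented, not taken from the dataset):

```python
# Minimal COCO-style structure: one image, the dataset's three classes,
# and one polygon annotation (all values hypothetical).
coco = {
    "images": [{"id": 1, "file_name": "plot_001.jpg", "width": 640, "height": 480}],
    "categories": [{"id": 1, "name": "green onion"},
                   {"id": 2, "name": "foliage flowers"},
                   {"id": 3, "name": "prepared non-crop area"}],
    "annotations": [{
        "id": 1, "image_id": 1, "category_id": 1,
        "segmentation": [[10, 10, 120, 10, 120, 90, 10, 90]],  # flat x,y pairs
        "bbox": [10, 10, 110, 80],                              # x, y, w, h
        "area": 8800, "iscrowd": 0,
    }],
}

def bbox_from_polygon(poly):
    """Derive a COCO [x, y, width, height] box from a flat x,y polygon."""
    xs, ys = poly[0::2], poly[1::2]
    return [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)]

box = bbox_from_polygon(coco["annotations"][0]["segmentation"][0])
```

Polygon labels are the more general choice: boxes can always be recovered from them, as above, but not the reverse.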

8.
PeerJ ; 12: e17686, 2024.
Article in English | MEDLINE | ID: mdl-39006015

ABSTRACT

In the present investigation, we employ a novel, meticulously structured database assembled by experts, encompassing macrofungi field-collected in Brazil and comprising more than 13,894 photographs of 505 distinct species. The purpose of this database is twofold: first, to provide training and validation for convolutional neural networks (CNNs) capable of autonomously identifying macrofungal species; second, to develop a sophisticated mobile application with an advanced user interface. This interface is designed to acquire images and, using the image-recognition capabilities of the trained CNN, offer potential identifications for the macrofungal species depicted. Such technological advancements democratize access to the Brazilian Funga, enhancing public engagement and knowledge dissemination while facilitating contributions from the public to the expanding body of knowledge on the conservation of Brazil's macrofungal species.


Subjects
Deep Learning , Fungi , Brazil , Fungi/classification , Fungi/isolation & purification , Biodiversity , Computer Neural Networks , Factual Databases
9.
Sensors (Basel) ; 24(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066062

ABSTRACT

Marker-less hand-eye calibration permits the acquisition of an accurate transformation between an optical sensor and a robot in unstructured environments. Single monocular cameras, despite their low cost and modest computation requirements, present difficulties for this purpose due to their incomplete correspondence of projected coordinates. In this work, we introduce a hand-eye calibration procedure based on the rotation representations inferred by an augmented autoencoder neural network. Learning-based models that attempt to directly regress the spatial transform of objects such as the links of robotic manipulators perform poorly in the orientation domain, but this can be overcome through the analysis of the latent space vectors constructed in the autoencoding process. This technique is computationally inexpensive and can be run in real time in markedly varied lighting and occlusion conditions. To evaluate the procedure, we use a color-depth camera and perform a registration step between the predicted and the captured point clouds to measure translation and orientation errors and compare the results to a baseline based on traditional checkerboard markers.

10.
Front Robot AI ; 11: 1331249, 2024.
Article in English | MEDLINE | ID: mdl-38933083

ABSTRACT

Implementing and deploying advanced technologies is central to improving manufacturing processes, signifying a transformative stride in the industrial sector. Computer vision plays a crucial role in this technological advancement, demonstrating broad applicability and profound impact across various industrial operations. This pivotal technology is not merely an additive enhancement but a revolutionary approach that redefines quality control, automation, and operational-efficiency parameters in manufacturing. By integrating computer vision, industries are positioned to optimize their current processes significantly and to spearhead innovations that could set new standards for future industrial endeavors. However, integrating computer vision in these contexts necessitates comprehensive training programs for operators, given the system's complexity and abstract nature. Training modalities have historically grappled with concepts as advanced as computer vision. Despite these challenges, computer vision has recently surged to the forefront across various disciplines, attributed to its versatility and superior performance, often matching or exceeding the capabilities of other established technologies. Nonetheless, there is a noticeable knowledge gap among students, particularly in comprehending the application of Artificial Intelligence (AI) within computer vision. This disconnect underscores the need for an educational paradigm that transcends traditional theoretical instruction and cultivates a more practical understanding of the symbiotic relationship between AI and computer vision. To address this, the current work proposes a project-based instructional approach to bridge the educational divide. This methodology will enable students to engage directly with the practical aspects of computer vision applications within AI.
By guiding students through a hands-on project, they will learn how to effectively utilize a dataset, train an object detection model, and implement it within a microcomputer infrastructure. This immersive experience is intended to bolster theoretical knowledge and provide a practical understanding of deploying AI techniques within computer vision. The main goal is to equip students with a robust skill set that translates into practical acumen, preparing a competent workforce to navigate and innovate in the complex landscape of Industry 4.0. This approach emphasizes the criticality of adapting educational strategies to meet the evolving demands of advanced technological infrastructures. It ensures that emerging professionals are adept at harnessing the potential of transformative tools like computer vision in industrial settings.

11.
Sensors (Basel) ; 24(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38894161

ABSTRACT

Technological advancements have expanded the range of methods for capturing human body motion, including solutions involving inertial sensors (IMUs) and optical alternatives. However, the rising complexity and costs associated with commercial solutions have prompted the exploration of more cost-effective alternatives. This paper presents a markerless optical motion capture system using a RealSense depth camera and intelligent computer vision algorithms. It facilitates precise posture assessment, the real-time calculation of joint angles, and acquisition of subject-specific anthropometric data for gait analysis. The proposed system stands out for its simplicity and affordability in comparison to complex commercial solutions. The gathered data are stored in comma-separated value (CSV) files, simplifying subsequent analysis and data mining. Preliminary tests, conducted in controlled laboratory environments and employing a commercial MEMS-IMU system as a reference, revealed a maximum relative error of 7.6% in anthropometric measurements, with a maximum absolute error of 4.67 cm at average height. Stride length measurements showed a maximum relative error of 11.2%. Static joint angle tests had a maximum average error of 10.2%, while dynamic joint angle tests showed a maximum average error of 9.06%. The proposed optical system offers sufficient accuracy for potential application in areas such as rehabilitation, sports analysis, and entertainment.
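Joint angles like those evaluated above can be computed from three tracked keypoints; a sketch with hypothetical hip-knee-ankle coordinates in metres (not data from the paper):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by keypoints a-b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_ang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# Hypothetical 2D keypoints from one frame of a depth-camera stream.
hip, knee, ankle = (0.0, 1.0), (0.0, 0.5), (0.25, 0.1)
knee_angle = joint_angle(hip, knee, ankle)
```

Per-frame angles computed this way could be appended to the CSV files the system writes for later analysis and data mining.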


Subjects
Algorithms , Anthropometry , Gait Analysis , Gait , Humans , Anthropometry/methods , Gait/physiology , Gait Analysis/methods , Gait Analysis/instrumentation , Male , Biomechanical Phenomena , Adult , Motion Capture
12.
Heliyon ; 10(7): e27516, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38560155

ABSTRACT

The importance of radiology in modern medicine is acknowledged for its non-invasive diagnostic capabilities, yet the manual writing of unstructured medical reports is time-consuming and error-prone. This study addresses a common limitation of Artificial Intelligence applications in medical image captioning, which typically focus on classification and lack detailed information about the patient's condition. Generating reports that incorporate descriptive details from X-ray images, essential for comprehensive reporting, remains a challenge. The proposed solution is a multimodal model that uses Computer Vision for image representation and Natural Language Processing for textual report generation. A notable contribution is the use of the Swin Transformer as the image encoder, enabling hierarchical mapping and enhanced model perception without a surge in parameters or computational cost. The model uses GPT-2 as the textual decoder, integrating cross-attention layers, and is trained bilingually on datasets in Brazilian Portuguese (PT-BR) and English. Promising results are reported on the proposed database (ROUGE-L 0.748, METEOR 0.741) and on NIH Chest X-ray (ROUGE-L 0.404, METEOR 0.393).
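ROUGE-L, one of the metrics quoted above, scores the longest common subsequence (LCS) of tokens shared between a generated and a reference report; a compact sketch on invented sentence fragments:

```python
def rouge_l(candidate, reference):
    """ROUGE-L F-measure: LCS length relative to candidate and reference lengths."""
    c, r = candidate.split(), reference.split()
    # Dynamic-programming table for LCS length.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c):
        for j, rt in enumerate(r):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ct == rt else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

# Toy generated vs. reference report fragment (invented text).
score = rouge_l("no acute cardiopulmonary abnormality seen",
                "no acute cardiopulmonary abnormality")
```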

13.
Front Neurosci ; 18: 1340345, 2024.
Article in English | MEDLINE | ID: mdl-38445254

ABSTRACT

The study of brain connectivity has been a cornerstone in understanding the complexities of neurological and psychiatric disorders. It has provided invaluable insights into the functional architecture of the brain and how it is perturbed in disorders. However, a persistent challenge has been achieving the proper spatial resolution, and developing computational algorithms to address biological questions at the multi-cellular level, a scale often referred to as the mesoscale. Historically, neuroimaging studies of brain connectivity have predominantly focused on the macroscale, providing insights into inter-regional brain connections but often falling short of resolving the intricacies of neural circuitry at the cellular or mesoscale level. This limitation has hindered our ability to fully comprehend the underlying mechanisms of neurological and psychiatric disorders and to develop targeted interventions. In light of this issue, our review manuscript seeks to bridge this critical gap by delving into the domain of mesoscale neuroimaging. We aim to provide a comprehensive overview of conditions affected by aberrant neural connections, image acquisition techniques, feature extraction, and data analysis methods that are specifically tailored to the mesoscale. We further delineate the potential of brain connectivity research to elucidate complex biological questions, with a particular focus on schizophrenia and epilepsy. This review encompasses topics such as dendritic spine quantification, single neuron morphology, and brain region connectivity. We aim to showcase the applicability and significance of mesoscale neuroimaging techniques in the field of neuroscience, highlighting their potential for gaining insights into the complexities of neurological and psychiatric disorders.

14.
BMC Public Health ; 24(1): 640, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38424562

ABSTRACT

BACKGROUND: Computer vision syndrome has become a significant public health problem, especially in developing countries. Therefore, this study aims to identify the prevalence of computer vision syndrome during the COVID-19 pandemic. METHODS: A systematic review and meta-analysis of the literature was conducted using the databases PubMed, Scopus, Web of Science, and Embase up to February 22, 2023, using the search terms "Computer Vision Syndrome" and "COVID-19". Three authors independently performed study selection, quality assessment, and data extraction, and the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument was used to evaluate study quality. Heterogeneity was assessed using the I² statistic, and R version 4.2.3 was used for statistical analysis. RESULTS: A total of 192 studies were retrieved, of which 18 were included in the final meta-analysis. The total sample included 10,337 participants from 12 countries. The combined prevalence of computer vision syndrome was 74% (95% CI: 66, 81). Subgroup analysis based on country revealed a higher prevalence of computer vision syndrome in Pakistan (99%, 95% CI: 97, 100) and a lower prevalence in Turkey (48%, 95% CI: 44, 52). In addition, subgroup analysis based on study subjects showed a prevalence of 82% (95% CI: 74, 89) for computer vision syndrome in non-students and 70% (95% CI: 60, 80) among students. CONCLUSION: According to the study, 74% of the participants experienced computer vision syndrome during the COVID-19 pandemic. Given this finding, it is essential to implement preventive and therapeutic measures to reduce the risk of developing computer vision syndrome and improve the quality of life of those affected.
TRIAL REGISTRATION: The protocol for this systematic review and meta-analysis was registered in the international registry of systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), with registration number CRD42022345965.
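The pooled prevalence and I² heterogeneity reported above come from standard meta-analytic formulas; a simplified sketch on invented study counts (inverse-variance weighting on raw proportions, whereas published meta-analyses typically work on logit- or arcsine-transformed proportions with random-effects weights):

```python
import numpy as np

def pooled_prevalence(events, totals):
    """Inverse-variance pooled proportion plus Higgins' I^2 (in %)."""
    p = events / totals
    w = 1.0 / (p * (1 - p) / totals)        # inverse of each study's variance
    pooled = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - pooled) ** 2)       # Cochran's Q
    df = len(p) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, i2

events = np.array([700.0, 800.0, 450.0])     # CVS cases per study (invented)
totals = np.array([1000.0, 1000.0, 1000.0])  # participants per study
pooled, i2 = pooled_prevalence(events, totals)
```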


Subjects
COVID-19 , Vision Disorders , Humans , COVID-19/epidemiology , Pandemics , Prevalence , Research Design , Vision Disorders/epidemiology
15.
JMIR Serious Games ; 12: e52661, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38265856

ABSTRACT

This research letter presents the co-design process for RG4Face, a mime therapy-based serious game that uses computer vision for human facial movement recognition and estimation to help health care professionals and patients in the facial rehabilitation process.

16.
J Optom ; 17(1): 100482, 2024.
Article in English | MEDLINE | ID: mdl-37866176

ABSTRACT

PURPOSE: This review aimed to estimate the prevalence of computer vision syndrome (CVS) in the general population and subgroups. METHODS: A search was conducted in the following databases: PubMed, SCOPUS, EMBASE, and Web of Science until February 13, 2023. We included studies that assessed the prevalence of CVS in any population. The Joanna Briggs Institute's critical appraisal tool was used to evaluate methodological quality. A meta-analysis of the prevalence of CVS was done using a random-effects model, assessing the sources of heterogeneity using subgroup and meta-regression analyses. RESULTS: A total of 103 cross-sectional studies with 66,577 participants were included. The prevalence of CVS was 69.0% (95% CI: 62.3 to 75.3; I²: 99.7%), ranging from 12.1 to 97.3% across studies. Point prevalence was higher in women than in men (71.4 vs. 61.8%), in university students (76.1%), in Africa (71.2%) and Asia (69.9%), in contact lens wearers (73.1% vs. 63.8%), in studies conducted before the COVID-19 pandemic (72.8%), and in those that did not use the CVS-Q questionnaire (75.4%). In meta-regression, using the CVS-Q scale was associated with a lower prevalence of CVS. CONCLUSION: Seven out of ten people suffer from CVS. Preventive strategies and interventions are needed to decrease the prevalence of this condition, which can affect productivity and quality of life. Future studies should standardize a definition of CVS.


Subjects
Pandemics , Quality of Life , Female , Humans , Male , Computers , Cross-Sectional Studies , Prevalence , Syndrome , Asthenopia
17.
Article in Spanish | LILACS, CUMED | ID: biblio-1559799

ABSTRACT

Introduction: The excessive use of social media has generated various sequelae in mental and visual-ocular health, leading to Computer Vision Syndrome (CVS) due to screen overexposure. Objective: To analyze the influence of social media use on computer vision syndrome among adolescents in a Peruvian context. Methods: A quantitative, hypothetico-deductive, cross-sectional, correlational-causal study was conducted with a sample of 126 adolescents. To gather information about social media use, the Social Media Addiction questionnaire (ARS) by Escurra and Salas was used. To verify the prevalence of CVS, the Computer Vision Syndrome Questionnaire by Seguí and colleagues was employed; both instruments were adapted to Google Forms for online application. Results: A significant association was found between time spent on social media (SM) and CVS (p=0.027<0.05); additionally, a significant impact of Social Media Addiction (SMA) was evidenced on CVS (p=0.000<0.01) and on visual (p=0.000<0.01), ocular (p=0.000<0.01), and asthenopic (p=0.003<0.01) symptoms. Conclusions: The study demonstrates a clear connection between excessive social media use and CVS in Peruvian adolescents, highlighting the need for joint action by parents and educators to mitigate risks and promote healthy digital use.


Subjects
Humans , Adolescent , Syndrome , Asthenopia/epidemiology , Social Networking , Internet Addiction Disorder , Coping Skills , Peru , Mental Health , Internet Addiction Disorder/etiology
18.
Arq Bras Oftalmol; 87(6): e2022, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1520244

ABSTRACT

Purpose: As digital devices are increasingly used at work, valid and reliable tools are needed to assess their effect on visual health. This study aimed to translate, cross-culturally adapt, and validate the Computer Vision Syndrome Questionnaire (CVS-Q©) into Portuguese. Methods: A 5-phase process was followed: direct translation, synthesis of the translation, back-translation, consolidation by an expert committee, and pretest. For the pretest, a cross-sectional pilot study was conducted with 26 participants who completed the prefinal Portuguese version of the CVS-Q© and were asked about difficulties, comprehensibility, and suggestions to improve the questionnaire. To evaluate the reliability and validity of the Portuguese version, a cross-sectional validation study was performed on a different sample (280 workers). Results: In the pretest, 96.2% had no difficulty completing the questionnaire, and 84.0% rated it as clear and understandable. The CVS-Q© in Portuguese (Questionário da Síndrome Visual do Computador, CVS-Q PT©) was thus obtained. Validation revealed good internal consistency (Cronbach's alpha=0.793), good temporal stability (intraclass correlation coefficient=0.847; 95% CI 0.764-0.902; kappa=0.839), adequate sensitivity and specificity (78.5% and 70.7%, respectively), good discriminant capacity (area under the curve=0.832; 95% CI 0.784-0.879), and adequate convergent validity with the Ocular Surface Disease Index (Spearman correlation coefficient=0.728, p<0.001). Factor analysis yielded a single factor accounting for 37.7% of the explained common variance. A worker scoring ≥7 points is considered to have computer vision syndrome. Conclusions: The CVS-Q PT© can be considered an intuitive, easy-to-understand tool with good psychometric properties for measuring computer vision syndrome in Portuguese workers exposed to digital devices. This questionnaire will assist in making decisions on preventive measures, interventions, and treatment, and in comparing exposed populations across Portuguese-speaking countries.
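Cronbach's alpha, the internal-consistency figure reported above, can be computed from an item-score matrix; a sketch on an invented 5-respondent, 4-item sample (not CVS-Q PT© data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented questionnaire responses: rows = respondents, columns = items.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])
alpha = cronbach_alpha(scores)
```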



19.
Sensors (Basel) ; 23(21)2023 Oct 25.
Artigo em Inglês | MEDLINE | ID: mdl-37960401

RESUMO

The Internet of Things (IoT), projected to exceed 30 billion active device connections globally by 2025, presents an expansive attack surface. The frequent collection and dissemination of confidential data on these devices exposes them to significant security risks, including user information theft and denial-of-service attacks. This paper introduces a smart, network-based Intrusion Detection System (IDS) designed to protect IoT networks from distributed denial-of-service attacks. Our methodology involves generating synthetic images from flow-level traffic data of the Bot-IoT and LATAM-DDoS-IoT datasets and conducting experiments within both supervised and self-supervised learning paradigms. Self-supervised learning is identified in the state of the art as a promising way to replace the need for massive amounts of manually labeled data while providing robust generalization. Our results show that self-supervised learning surpassed supervised learning in classification performance for certain tests: it exceeded supervised learning's F1 score for attack detection by 4.83% and its accuracy on the multiclass protocol-classification task by 14.61%. Drawing from extensive ablation studies presented in our research, we recommend an optimal training framework for upcoming contrastive learning experiments that emphasize visual representations in the cybersecurity realm. This training approach has enabled us to highlight the broader applicability of self-supervised learning, which, in some instances, outperformed supervised learning transferability by over 5% in precision and nearly 1% in F1 score.
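The flow-to-image step described above can be sketched in a few lines. This is a minimal illustration of the general idea (packing normalized flow features into a fixed-size intensity grid), not the paper's actual encoding: the field names and the 8×8 grid size are assumptions.

```python
# Hypothetical sketch: turn flow-level records into a fixed-size
# grayscale "image" (a 2D grid of 0-255 intensities) suitable for a CNN.

def flows_to_image(flows, side=8):
    """Pack per-flow feature vectors into a side x side intensity grid."""
    # Flatten all feature values in arrival order.
    values = [v for flow in flows for v in flow]
    # Min-max normalize to the 0-255 pixel range.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    pixels = [int(255 * (v - lo) / span) for v in values]
    # Pad with zeros (or truncate) so the grid is filled exactly.
    n = side * side
    pixels = (pixels + [0] * n)[:n]
    return [pixels[r * side:(r + 1) * side] for r in range(side)]

# Example: three flows, each with (duration, packets, bytes, dst_port).
flows = [
    (0.5, 10, 1200, 80),
    (2.0, 300, 45000, 443),
    (0.1, 2, 120, 53),
]
img = flows_to_image(flows)
```

A real pipeline would normalize each feature column separately (a single global min-max, as here, lets large byte counts dominate), but the shape of the transformation is the same.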

20.
PeerJ ; 11: e16219, 2023.
Artigo em Inglês | MEDLINE | ID: mdl-37953792

RESUMO

Corals are colonial animals within the Phylum Cnidaria that form coral reefs, playing a significant role in marine environments by providing habitat for fish, mollusks, crustaceans, sponges, algae, and other organisms. Global climate changes are causing more intense and frequent thermal stress events, leading to corals losing their color due to the disruption of a symbiotic relationship with photosynthetic endosymbionts. Given the importance of corals to the marine environment, monitoring coral reefs is critical to understanding their response to anthropogenic impacts. Most coral monitoring activities involve underwater photographs, which can be costly to generate on large spatial scales and require processing and analysis that may be time-consuming. The Marine Ecology Laboratory (LECOM) at the Federal University of Rio Grande do Norte (UFRN) developed the project "#DeOlhoNosCorais" which encourages users to post photos of coral reefs on their social media (Instagram) using this hashtag, enabling people without previous scientific training to contribute to coral monitoring. The laboratory team identifies the species and gathers information on coral health along the Brazilian coast by analyzing each picture posted on social media. To optimize this process, we conducted baseline experiments for image classification and semantic segmentation. We analyzed the classification results of three different machine learning models using the Local Interpretable Model-agnostic Explanations (LIME) algorithm. The best results were achieved by combining EfficientNet for feature extraction and Logistic Regression for classification. Regarding semantic segmentation, the U-Net Pix2Pix model produced a pixel-level accuracy of 86%. Our results indicate that this tool can enhance image selection for coral monitoring purposes and open several perspectives for improving classification performance. 
Furthermore, our findings can be expanded by incorporating other datasets to create a tool that streamlines the time and cost associated with analyzing coral reef images across various regions.
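The best-performing configuration above pairs a deep feature extractor with a simple logistic-regression head. A minimal sketch of that head, assuming toy 4-dimensional "embeddings" in place of real EfficientNet features (which would have hundreds of dimensions):

```python
import math

# Illustrative classification head: logistic regression trained by
# gradient descent on the logistic loss. Not the study's code; the
# embeddings and labels below are fabricated toy data.

def train_logreg(X, y, lr=0.5, epochs=200):
    """Per-sample gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Toy embeddings: label 1 = "coral present", 0 = "no coral".
X = [[0.9, 0.1, 0.8, 0.2], [0.8, 0.2, 0.9, 0.1],
     [0.1, 0.9, 0.2, 0.8], [0.2, 0.8, 0.1, 0.9]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
```

Keeping the head this simple is a common design choice when the feature extractor is frozen: it trains in seconds and is easy to inspect with explanation tools such as LIME.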


Assuntos
Antozoários , Humanos , Animais , Antozoários/fisiologia , Recifes de Corais , Ecossistema , Crustáceos , Peixes