Results 1 - 20 of 449
1.
J Atten Disord ; 27(13): 1467-1487, 2023 11.
Article in English | MEDLINE | ID: mdl-37477014

ABSTRACT

BACKGROUND: The purpose of this systematic review is to synthesize the existing literature on the effects of computerized cognitive training on the executive functions of children with ADHD. METHOD: A systematic review was carried out following the PRISMA statement; the primary sources were five electronic databases (Scopus, ScienceDirect, PubMed, Springer, Taylor & Francis). RESULTS: Twenty articles met the eligibility criteria. Data on training characteristics and effects on executive functions were extracted, followed by an analysis of the risk of bias and the methodological quality of the studies. The results were widely heterogeneous, largely owing to the variety of training programs and measurement instruments used. The most studied executive functions were working memory and inhibitory control. Some studies reported that the intervention led to significant effects on working memory and attention (N = 7), and improvements in inhibitory control (N = 5) and planning (N = 4) were also reported; others, however, found no effects of the intervention on these processes. The assessment of the quality of the evidence revealed important risks of bias among the reviewed studies. CONCLUSION: Some computer-based training programs showed positive effects on working memory, attention, and inhibitory control in children with ADHD, whereas others showed no significant effects. Overall, the evidence shows mixed results, a high diversity of measurement instruments, and high risks of bias across studies; it is therefore not yet consistent regarding the general benefits of computerized training for the executive functions of children with ADHD.


Subject(s)
Attention Deficit Disorder with Hyperactivity, Executive Function, Child, Humans, Attention Deficit Disorder with Hyperactivity/therapy, Attention Deficit Disorder with Hyperactivity/psychology, Cognitive Training, Short-Term Memory, Computer Systems
2.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850707

ABSTRACT

New ways of interacting with computers are driving research, motivated mainly by the diversity of user profiles. Referred to as non-conventional interactions, these involve the use of the hands, voice, head, mouth, and feet, among others, and occur in scenarios where using a mouse and keyboard would be difficult. A constant challenge in adopting new forms of interaction based on pointer movement and the selection of interface components is the Midas Touch (MT) problem, defined as an involuntary selection action performed by the user while interacting with the computer system, causing unwanted actions and harming the user experience. This article therefore aims to mitigate the MT problem in interaction with web pages using a solution centered on the Head Tracking (HT) technique. For this purpose, a bar-shaped component called the Pactolo Bar (PB) was developed and inserted on the left side of the web page to enable or disable the click event during the interaction process. To analyze the effectiveness of the PB with respect to MT, two stages of tests were carried out with voluntary participants. The first stage aimed to find the data that would lead to the best configuration of the PB, while the second carried out a comparative analysis between the PB and the eViacam software, which is also based on the HT technique. The results obtained were considered promising: the quantitative data point to a significant prevention of involuntary clicks in the interaction interface, and the qualitative data showed an improved user experience owing to ease of use, noticeable in elements such as the PB's size, its triggering mechanism, and its positioning in the graphical interface. This study contributes to the user experience context because, when non-conventional interactions are used, basic items such as aspects of the graphical elements and interaction events motivate new studies that seek to mitigate the Midas Touch problem.
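The bar-based click gating described above can be sketched as a tiny edge-triggered state machine. Everything here (class name, bar width, coordinates) is an illustrative assumption, not the Pactolo Bar's actual implementation:

```python
# Sketch of a Pactolo-Bar-style click gate: head tracking moves the pointer,
# and click events are honored only while the user has toggled them on by
# passing the pointer through a bar at the left edge of the page.

class ClickGate:
    def __init__(self, bar_width=40):
        self.bar_width = bar_width   # assumed width (px) of the left-edge bar
        self.enabled = False         # clicks start disabled: no Midas Touch
        self._inside = False         # tracks whether the pointer is in the bar

    def on_pointer(self, x, y):
        inside = x < self.bar_width
        if inside and not self._inside:
            # Edge-triggered: toggle once per entry, so dwelling inside the
            # bar does not flip the state repeatedly.
            self.enabled = not self.enabled
        self._inside = inside

    def on_click(self, x, y):
        # A click anywhere on the page counts only when explicitly enabled.
        return self.enabled

gate = ClickGate()
gate.on_pointer(10, 100)           # pointer enters the bar: clicks enabled
print(gate.on_click(300, 200))     # -> True: an intended click goes through
```

A real page would wire `on_pointer` to head-tracking pointer events and consult `on_click` before dispatching DOM clicks; the edge-trigger is the design choice that prevents involuntary repeated toggling.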


Subject(s)
Computer Systems, User-Computer Interface, Humans, Data Accuracy
3.
Sensors (Basel) ; 23(4)2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36850925

ABSTRACT

The recognition of hypoxia symptoms is a critical part of physiological training in military aviation. Acute exposure protocols have been designed in hypobaric chambers to train aircrews to recognize hypoxia and quickly take corrective actions. The goal of the acute hypoxia test is to determine the time of useful consciousness and the minimal arterial oxygen saturation tolerated. Currently, no computer system is specifically designed to analyze the physiological variables obtained during the test. This paper reports the development and analytical capabilities of a computational tool specially designed for these purposes. The procedure was implemented in the Igor Pro 8.01 language and processes oxygen saturation and heart rate signals through three functional panels. The first allows the loading and processing of the data. The second generates graphs that allow a rapid visual examination to determine the validity of individual records and calculates slopes on selected segments of the recorded signal. Finally, the third can apply filters to generate data groups for analysis. In addition, this tool makes it possible to propose new study variables derived from the raw signals and to apply them simultaneously to large data sets. The program can generate graphs accompanied by basic statistical parameters and heat maps that facilitate data visualization. Moreover, other signals can be added during the test, such as the oxygenation level in vital organs, the electrocardiogram, or the electroencephalogram, which illustrates the tool's excellent potential for application in aerospace medicine and for helping us develop a better understanding of complex physiological phenomena.
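The slope calculation the tool applies to selected signal segments is, in essence, a least-squares linear fit. A minimal sketch with invented SpO2 samples (not data from the paper) is:

```python
def segment_slope(t, y):
    """Least-squares slope of a signal segment,
    e.g. an SpO2 desaturation rate in percentage points per second."""
    n = len(t)
    mt = sum(t) / n
    my = sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den

# Invented desaturation segment: SpO2 falling 1% every 2 s during acute hypoxia.
t = [0, 2, 4, 6, 8]          # seconds
spo2 = [98, 97, 96, 95, 94]  # %
print(segment_slope(t, spo2))   # -> -0.5
```

The same routine applies unchanged to heart-rate segments or any derived variable sampled at known times.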


Subject(s)
Aviation, Oximetry, Humans, Computer Systems, Data Visualization, Hypoxia/diagnosis
4.
Esc. Anna Nery Rev. Enferm ; 27: e20220143, 2023. tab, graf
Article in Portuguese | LILACS, BDENF - Nursing | ID: biblio-1421431

ABSTRACT

Abstract Objective To evaluate the performance of the Systematic Review Support web-based system for the identification of duplicate records compared with similar software tools. Methods A methodological study was conducted assessing the automated process of de-duplication performed by the Systematic Review Support web-based system (version 1.0) versus the EndNote X9® and Rayyan® systems, adopting hand-checking as the benchmark reference for comparisons. A set of studies on three topics related to cystic fibrosis retrieved from the Pubmed, Embase and Web of Science electronic databases was used for testing purposes. The sensitivity, specificity, accuracy and area under the ROC curve of the software systems were compared to the benchmark values for performance evaluation. Results The database searches retrieved 1332 studies, of which 273 (20.5%) were true duplicates. The Systematic Review Support tool identified a larger proportion of true duplicates than the other systems tested. The sensitivity, specificity and accuracy of the Systematic Review Support tool exceeded 98%. Conclusion and implications for practice The Systematic Review Support system provided a high level of sensitivity, specificity and accuracy in identifying duplicate studies, optimizing time and effort by reviewers in the health field.
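For reference, the reported metrics relate a de-duplicator's confusion counts to the manual benchmark as follows; the counts below are invented for illustration and are not the study's confusion matrix (the paper reports only the totals of 1332 records and 273 true duplicates):

```python
def dedup_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy of a duplicate detector
    scored against a manual hand-checking benchmark."""
    sensitivity = tp / (tp + fn)               # true duplicates correctly flagged
    specificity = tn / (tn + fp)               # unique records correctly kept
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with 1332 records / 273 true duplicates:
sens, spec, acc = dedup_metrics(tp=270, fp=5, tn=1054, fn=3)
print(round(sens, 3), round(spec, 3), round(acc, 3))   # all above 0.98
```

With such a table in hand, the "above 98%" claim is a direct read-off of the three ratios.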


Asunto(s)
Humanos , Sistemas de Computación , Validación de Programas de Computación , Bases de Datos como Asunto , Sensibilidad y Especificidad , Exactitud de los Datos , Revisiones Sistemáticas como Asunto
5.
Psicol. ciênc. prof ; 43: e278525, 2023.
Article in Portuguese | LILACS, Index Psicología - Revistas | ID: biblio-1529222

ABSTRACT

The System for the Evaluation of Psychological Tests (SATEPSI) received notoriety among Brazilians and foreigners for offering a complex system for the qualification of psychological tests, rarely seen worldwide. Its development depended on an autarchy (which financed, standardized, and maintains it) and on researchers who teach psychological assessment, who brought the area's expertise so that its parameters could be fully established. In the two decades since its launch, SATEPSI has been the subject of articles, chapters, live streams, and digital dialogues, which have generally highlighted the Resolutions of the Federal Council of Psychology that regulate psychological evaluation and their impacts, such as the increase in the number of qualified Brazilian tests. This study aims to recount its construction in the light of authors who experienced SATEPSI in different roles and at different times, giving special attention to Projective Methods, whose history remains largely untold.


Subject(s)
Humans, Male, Female, Brief Psychiatric Rating Scale, Psychological Tests, Psychometrics, Reference Standards, Reproducibility of Results, Personality Assessment, Personality Tests, Aptitude Tests, Professional Competence, Professional Practice, Psychoanalytic Interpretation, Psychology, Safety, Audiovisual Aids, Self-Evaluation Programs, Formal Social Control, Societies, Students, Vocational Guidance, Behavior, Professional Standards Review Organizations, Body Image, Computer Systems, Mental Health, Efficacy, Surveys and Questionnaires, Statistical Data Interpretation, Legal Liability, Treatment Outcome, Practice Guidelines as Topic, Total Quality Management, Commerce, Class, Behavioral Disciplines and Activities, Internet, Credentialing, Musculoskeletal Manipulations, Diagnosis, Employee Performance Appraisal, Science, Technology and Society, Ethics, Professional Training, Courses, Evaluation Studies as Topic, Expert Testimony, Self Report, Test Taking Skills, Quality Improvement, Pandemics, Social Skills, Data Accuracy, Behavior Rating Scale, Work Engagement, Internet Access, Web Archives as Topic, Internet-Based Intervention, Teleworking, COVID-19, Psychological Well-Being, Human Rights, Intelligence, Intelligence Tests, Manuals as Topic, Neuropsychological Tests
6.
São Paulo; s.n; 2023. 190 p.
Thesis in Portuguese | LILACS | ID: biblio-1551126

ABSTRACT

Introduction. Systematic reviews enable the scientific output on a specific subject to be pooled and summarized, but this can be painstaking, especially in the early stage. Thus, computer-based tools which semi or fully automate the manual work of investigators can be of great utility. Objective. To develop and assess a computer-based system for identifying duplicate references and aiding the eligibility stage of literature review studies. Methods. A methodological study presenting the Systematic Review Support web-based System (AReS), version 1.0, with assessment of the validity of the identification of duplicate references and usability evaluation. In the description of its functionalities, the initial stages of a systematic review of studies on cystic fibrosis were simulated. For the assessment of validity, a comparison of the sensitivity and specificity of the AReS, EndNote® and Rayyan® systems was made, adopting manual screening for duplicates as a benchmark. For the test of usability, volunteer post-graduate students performed 21 tasks using the AReS system. The data observed were recorded, allowing an assessment of task completion (effectiveness) and time required to complete each task (efficiency); usability was rated by participants using the System Usability Scale (SUS). Results: The AReS system allowed removal of duplicate references; displayed the abstracts on the same screen as the eligibility criteria, aiding the identification of eligible abstracts. The system compared the decisions of the researchers regarding the eligibility of the studies, showing the differences to be resolved. On the validity assessment, the AReS identified a larger proportion of true duplicates than the EndNote® and Rayyan® systems tested, yielding sensitivity, specificity and accuracy values exceeding 98%. 
The evaluation of usability resulted in a task completion rate of over 90%, as expected, with a mean time of 55.1 minutes within a maximum expected time of 60 minutes, and usability score of 82.4 on a scale of 0 to 100. Conclusion: The AReS system (version 1.0) enabled accurate exclusion of duplicate references in literature reviews and aided the screening of studies in the eligibility stage. The effectiveness, efficiency and level of usability on the SUS can be further enhanced by incorporating tweaks identified in the usability tests.
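The SUS score cited above follows the standard System Usability Scale formula (10 items rated 1-5; odd items contribute `score - 1`, even items `5 - score`, and the sum is scaled by 2.5 onto 0-100). The ratings below are for a hypothetical participant, not study data:

```python
def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5.
    Odd-numbered items are positively worded, even-numbered negatively."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5   # scale 0-40 raw sum onto 0-100

# One hypothetical participant's ratings (illustrative only):
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))   # -> 87.5
```

Averaging such per-participant scores is what yields an aggregate like the 82.4 reported in the study.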


Asunto(s)
Sistemas de Computación , Programas Informáticos , Revisiones Sistemáticas como Asunto , Diseño Centrado en el Usuario , Actividades Científicas y Tecnológicas
7.
Sensors (Basel) ; 22(24)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36560091

ABSTRACT

This paper introduces an analog notch filtering-based coupling circuit for receivers in ultra-narrowband and narrowband power line communication systems connected to low-voltage electric power grids. It is composed of a twin-T notch analog filter, responsible for imposing a significant attenuation at the mains frequency (i.e., f0 ∈ {50, 60} Hz), in cascade with an elliptic low-pass analog filter designed with a 3 dB cut-off frequency of fc ≫ f0. For f0 = 60 Hz and fc = 2 MHz, the prototype of the analog notch filtering-based coupling circuit attains an attenuation of 22 dB at the mains frequency and less than 2 dB over the rest of the frequency bandwidth when practical scenarios are considered. Lastly, the paper shows that the analog notch filtering-based coupling circuit is more effective than a typical capacitive coupling circuit when frequencies below 3 kHz are considered for data communication and sensing purposes.
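The notch behaviour can be reproduced in outline from the ideal twin-T transfer function H(s) = (1 + (sRC)^2) / (1 + 4sRC + (sRC)^2), which places the notch at f0 = 1/(2*pi*RC). This sketch ignores component tolerances and loading, which is why the real prototype attains a finite 22 dB rather than the ideal infinite rejection:

```python
import cmath
import math

f0 = 60.0                       # mains frequency to reject (Hz)
RC = 1.0 / (2 * math.pi * f0)   # choose RC so the twin-T notch sits at f0

def H(f):
    # Ideal (unloaded) twin-T notch transfer function at s = j*2*pi*f
    s = 1j * 2 * math.pi * f
    return (1 + (s * RC) ** 2) / (1 + 4 * s * RC + (s * RC) ** 2)

def gain_db(f):
    return 20 * math.log10(abs(H(f)))

print(f"|H| at 60 Hz: {abs(H(60.0)):.1e}")        # essentially zero: deep notch
print(f"gain at 3 kHz: {gain_db(3000.0):.2f} dB")  # passband barely attenuated
```

At 3 kHz the ideal gain works out to roughly -0.03 dB, consistent with the "less than 2 dB" passband figure reported for the prototype.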


Asunto(s)
Comunicación , Sistemas de Computación , Electricidad , Electrodos
8.
Sensors (Basel) ; 22(22)2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36433442

ABSTRACT

A Kalman filter can be used to reconstruct state-space dynamics from knowledge of a system and partial measurements. However, its performance relies on accurate modeling of the system dynamics and a proper characterization of the uncertainties, which can be hard to obtain in real-life scenarios. In this work, we explore how the values of the Kalman gain matrix can be estimated by spiking neural networks, through a combination of biologically plausible neuron models with spike-time-dependent plasticity learning algorithms. The performance of the proposed neural architecture is verified with simulations of several representative nonlinear systems, which show promising results. This approach traces a path toward implementation in neuromorphic analog hardware that can learn and reconstruct the partial and changing dynamics of a system without the massive power consumption typically needed in a von Neumann computer architecture.
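For context, the quantity the spiking network learns to approximate is the Kalman gain, which weights the measurement against the prediction. A textbook scalar filter (with illustrative model and noise values, not the paper's neuromorphic implementation) makes the gain computation explicit:

```python
# Scalar Kalman filter: k below is the gain that the paper's spiking
# network is trained to estimate. Model and noise parameters are invented.
import random

random.seed(0)
a, h = 0.95, 1.0        # state-transition and measurement coefficients
q, r = 0.01, 0.5        # process and measurement noise variances

x_true, x_est, p = 1.0, 0.0, 1.0
for _ in range(200):
    # Simulate the true system and a noisy partial measurement.
    x_true = a * x_true + random.gauss(0.0, q ** 0.5)
    z = h * x_true + random.gauss(0.0, r ** 0.5)
    # Predict.
    x_est = a * x_est
    p = a * a * p + q
    # Update: the gain k balances prediction uncertainty p against noise r.
    k = p * h / (h * h * p + r)
    x_est = x_est + k * (z - h * x_est)
    p = (1.0 - k * h) * p

print(0.0 < k < 1.0)   # the steady-state gain lies strictly between 0 and 1
```

When the model is wrong or drifting, this closed-form gain degrades, which is precisely the situation where a learned, adaptive gain estimate becomes attractive.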


Asunto(s)
Algoritmos , Redes Neurales de la Computación , Neuronas/fisiología , Computadores , Sistemas de Computación
9.
Rev Soc Bras Med Trop ; 55: e0451, 2022.
Article in English | MEDLINE | ID: mdl-35946632

ABSTRACT

BACKGROUND: The Neural Clinical Score for tuberculosis (NCS-TB) is a computer system developed to improve the triage of presumed pulmonary TB (pPTB). METHODS: A study was performed with cohorts of pPTB patients cared for at a reference hospital in Northeast Brazil. RESULTS: The NCS-TB sensitivity was 76.5% for TB diagnosis, which shortened the time from triage to smear microscopy results (3.3 to 2.5 days; p<0.001) and therapy initiation (6.7 to 4.1 days; p=0.045). CONCLUSIONS: Although the NCS-TB was not suitable as a screening tool, it was able to optimize laboratory diagnosis and shorten the time to treatment initiation.


Asunto(s)
Infecciones por VIH , Mycobacterium tuberculosis , Tuberculosis Pulmonar , Tuberculosis , Brasil , Sistemas de Computación , Humanos , Proteína de Unión al Tracto de Polipirimidina , Sensibilidad y Especificidad , Triaje , Tuberculosis/diagnóstico , Tuberculosis Pulmonar/diagnóstico , Tuberculosis Pulmonar/tratamiento farmacológico
10.
Sensors (Basel) ; 22(14)2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35891053

ABSTRACT

The goal of this work is to present a systematic literature mapping (SLM) identifying algorithms for data search, algorithms for determining the best path, the types of communication between the local server and the drone, and possible simulators for validating proposed solutions. The concept considered here, IoT Off-Grid, is characterized by an environment without commercial electrical infrastructure and without communication connected to the internet. IoT equipment generates data that are stored on a local server; a drone visits each local server to collect these data for later integration with the commercial internet environment. As results, we identify algorithms that determine the best path based on the travelling salesman problem (TSP), and different types of communication between the drone and the server holding the data, predominantly Wi-Fi 802.11. Among simulators, OMNeT++ stands out.
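A common baseline for the drone-routing side of this mapping is the nearest-neighbour TSP heuristic; the server coordinates below are invented for illustration, not taken from any surveyed study:

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP heuristic: from each server, fly to the closest unvisited one."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical off-grid server positions (km), drone base at index 0.
servers = [(0, 0), (2, 1), (5, 0), (1, 5), (6, 3)]
print(nearest_neighbour_tour(servers))   # -> [0, 1, 2, 4, 3]
```

Nearest-neighbour gives no optimality guarantee, which is why the surveyed works study stronger TSP formulations; it is nonetheless the usual starting point for comparing them.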


Asunto(s)
Comunicación , Sistemas de Computación , Algoritmos , Recolección de Datos
11.
Math Biosci Eng ; 19(3): 2403-2423, 2022 01 05.
Article in English | MEDLINE | ID: mdl-35240790

ABSTRACT

Demand response programs allow consumers to participate in the operation of a smart electric grid by reducing or shifting their energy consumption, helping to match energy consumption with power supply. This article presents a bio-inspired approach to the problem of colocation datacenters participating in demand response programs in a smart grid. The proposed approach allows the datacenter to negotiate with its tenants by offering monetary rewards in order to meet a demand response event on short notice. The objective of the underlying optimization problem is twofold: the datacenter seeks to minimize the rewards it offers, while the tenants seek to maximize their profit. A two-level hierarchy is proposed for modeling the problem: the upper level models the datacenter planning problem, and the lower level models the tenants' task scheduling problem. To address these problems, two bio-inspired algorithms are designed and compared for the datacenter planning problem, and an efficient greedy scheduling heuristic is proposed for the tenants' task scheduling problem. Results show that the proposed approach achieves average improvements between 72.9% and 82.2% compared to the business-as-usual approach.
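The negotiation between datacenter and tenants can be illustrated with a generic greedy rule, a sketch under simplified assumptions rather than the article's bio-inspired algorithms: cover the required load reduction by buying the cheapest tenant reductions first.

```python
def greedy_reward_allocation(offers, target_kw):
    """offers: list of (tenant, kW the tenant can shed, reward demanded per kW).
    Greedily buy the cheapest reductions until the demand-response target is met."""
    chosen, total_cost, remaining = [], 0.0, target_kw
    for tenant, kw, price in sorted(offers, key=lambda o: o[2]):
        if remaining <= 0:
            break
        take = min(kw, remaining)          # take only what is still needed
        chosen.append((tenant, take))
        total_cost += take * price
        remaining -= take
    if remaining > 0:
        raise ValueError("tenants cannot cover the requested reduction")
    return chosen, total_cost

# Hypothetical tenant offers for a 100 kW demand-response event.
offers = [("t1", 60, 0.10), ("t2", 80, 0.25), ("t3", 50, 0.15)]
print(greedy_reward_allocation(offers, target_kw=100))
```

This greedy rule minimizes the datacenter's payout only under these flat per-kW offers; the bi-level model in the article exists precisely because real tenants respond strategically to the offered rewards.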


Asunto(s)
Sistemas de Computación , Negociación , Algoritmos , Suministros de Energía Eléctrica
12.
Sensors (Basel) ; 22(6)2022 Mar 09.
Article in English | MEDLINE | ID: mdl-35336275

ABSTRACT

Recent theoretical studies demonstrate the advantages of decentralized architectures over traditional centralized architectures for real-time Power Distribution System (PDS) operation. These advantages include a reduction in the amount of data to be transmitted and processed when performing state estimation in PDSs. The main contribution of this paper is to provide lab validation of the advantages and feasibility of decentralized monitoring of PDSs, through an advanced trial emulating realistic conditions and a realistic hardware setup. More specifically, the paper proposes: (i) the laboratory development and implementation of an Advanced Metering Infrastructure (AMI) prototype to enable the simulation of a smart grid, in which communication modules using wireless networks send messages in real time to emulate the information traffic between smart meters and distribution operation centers, bridging concepts from both IoT and Edge Computing; and (ii) the laboratory development and implementation of a decentralized architecture based on Embedded State Estimator Modules (ESEMs), which manage information from smart meters in low-voltage networks and perform real-time state estimation in PDSs. Simulations performed on a real PDS with 208 buses (considering both medium- and low-voltage buses) met the aims of this paper. The results show that using ESEMs in a decentralized architecture considerably reduces both the data transiting the communication network and the computational requirements involved in monitoring PDSs in real time, without any loss of accuracy.
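The core computation each estimator module performs, estimating a local state from smart-meter readings, is classically a weighted least-squares solve, x = (H^T W H)^-1 H^T W z. A linearized toy version with an invented 2-state, 3-measurement model (nothing like the 208-bus feeder) looks like:

```python
# Linearized weighted-least-squares state estimation, the textbook core of a
# distribution-system state estimator. The tiny model below is invented.

def wls_estimate(H, W, z):
    """Solve x = (H^T W H)^-1 H^T W z for a 2-dimensional state."""
    # Accumulate A = H^T W H (2x2) and b = H^T W z (2x1) row by row.
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for (h1, h2), w, zi in zip(H, W, z):
        A[0][0] += w * h1 * h1; A[0][1] += w * h1 * h2
        A[1][0] += w * h2 * h1; A[1][1] += w * h2 * h2
        b[0] += w * h1 * zi;    b[1] += w * h2 * zi
    # Invert the 2x2 normal matrix directly.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # measurement matrix rows
W = [1.0, 1.0, 0.5]                         # confidence weight per meter
z = [1.02, 0.98, 2.00]                      # local measurements (e.g. p.u. volts)
print(wls_estimate(H, W, z))                # -> approximately [1.02, 0.98]
```

Running this solve locally, per module, over a handful of meters is what keeps both the transmitted data and the central computational load small in a decentralized scheme.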


Asunto(s)
Sistemas de Computación , Simulación por Computador , Medios de Cultivo
13.
Sensors (Basel) ; 22(2)2022 Jan 08.
Article in English | MEDLINE | ID: mdl-35062417

ABSTRACT

Analyzing data on the conditions of city streets and avenues can help improve decisions about public spending on mobility. Generally, streets and avenues are repaired only after a citizen files a report or a major incident occurs; it is uncommon for cities to have reactive real-time systems that detect the different problems to be fixed on the pavement. This work proposes a solution for detecting anomalies in streets through state analysis, using sensors in the vehicles that travel them daily and connecting these to a fog-computing architecture over a V2I network. The system detects and classifies the main road problems or abnormal conditions in streets and avenues using Machine Learning Algorithms (MLAs), comparing roughness against a flat reference. An instrumented vehicle obtained the reference through accelerometry sensors and then sent the data through a mid-range communication system. With these data, the system compared an Artificial Neural Network and a K-Nearest Neighbor classifier (both supervised MLAs) to select the best option for handling the acquired data. This system makes it possible to visualize street quality and map the areas with the most significant anomalies.
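The K-Nearest Neighbor option mentioned above can be sketched with a hand-rolled majority vote over roughness features; the feature values and labels are invented for illustration (the paper derives its features from vehicle accelerometry against a flat-road reference):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label). Majority vote of the k closest."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# (rms_accel, peak_accel) features relative to a flat reference -- invented.
train = [
    ((0.1, 0.3), "flat"), ((0.2, 0.4), "flat"),
    ((0.9, 2.1), "pothole"), ((1.1, 2.5), "pothole"),
    ((0.5, 1.0), "speed_bump"), ((0.6, 1.2), "speed_bump"),
]
print(knn_classify(train, (1.0, 2.2)))   # -> pothole
```

k-NN needs no training phase, which suits a fog node that continually receives labeled segments, at the cost of distance computations against the stored set at query time.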


Asunto(s)
Algoritmos , Aprendizaje Automático , Análisis por Conglomerados , Sistemas de Computación , Redes Neurales de la Computación
14.
Washington, D.C.; OPS; 2022-01-20. (OPS/EIH/IS/21-023).
Monograph in French | PAHO-IRIS | ID: phr-55634

ABSTRACT

Data management is currently a basic requirement for having evidence to inform decision-making in the field of health. It is therefore necessary to design strategies that master the specialized lexicon so that the clinical information stored in computer systems can be used for multiple purposes, which is achieved through the representation of health data. This knowledge capsule explains this and other concepts, as well as the importance of terminology coding tools and their relationship with the digital transformation of public health.


Asunto(s)
Sistemas de Salud , Sistemas de Información , Sistemas de Información en Salud , Telemedicina , Interoperabilidad de la Información en Salud , Toma de Decisiones , Sistemas de Computación , Codificación Clínica , Américas
15.
Washington, D.C.; PAHO; 2021-12-21. (PAHO/EIH/IS/21-023).
Monograph in English | PAHO-IRIS | ID: phr-55417

ABSTRACT

Managing data is currently a basic requirement to have evidence to inform decision-making in the field of health. Therefore, it is necessary to design strategies that dominate the specialized lexicon so that the clinical information stored in computer systems can be used for multiple purposes, which is achieved through the representation of health data. This knowledge capsule explains this and other concepts, as well as the importance of term coding tools and their relationship with the digital transformation of public health.


Asunto(s)
Sistemas de Información , Salud Digital , Sistemas de Información en Salud , Interoperabilidad de la Información en Salud , Sistemas de Salud , Toma de Decisiones , Sistemas de Computación
16.
Washington, D.C.; OPAS; 2021-12-21. (OPAS/EIH/IS/21-023).
Monografía en Portugués | PAHO-IRIS | ID: phr-55416

RESUMEN

Data management is currently a basic requirement for having evidence to inform decision-making in the field of health. It is therefore necessary to design strategies that master the specialized lexicon so that the clinical information stored in computer systems can be used for multiple purposes, which is achieved through the representation of health data. This knowledge capsule explains this and other concepts, as well as the importance of terminology coding tools and their relationship with the digital transformation of public health.


Asunto(s)
Sistemas de Información , Sistemas de Información en Salud , Salud Digital , Interoperabilidad de la Información en Salud , Américas , Toma de Decisiones , Sistemas de Computación , Codificación Clínica
17.
Sensors (Basel) ; 21(21)2021 Oct 21.
Artículo en Inglés | MEDLINE | ID: mdl-34770285

RESUMEN

Recently, the operation of distribution systems has ceased to depend on state- or utility-run centralized procedures, moving instead toward decentralized decisions by distribution companies whose objective is efficient interconnectivity. Distribution companies are therefore exposed to greater risks, and the need to make decisions based on increasingly reliable models has grown considerably. We present a survey of key aspects, technologies, protocols, and case studies of current and future trends in Smart Grids. This work proposes a taxonomy of a large number of Smart Grid technologies and their applications in scenarios involving Smart Networks, Neural Networks, Blockchain, the Industrial Internet of Things, and Software-Defined Networks. The survey summarizes the main features of 94 research articles from the last four years. We classify the surveyed work according to Smart Grid network topologies, since topology groups, as its main axis, the sensors applied to Smart Grids and shows how Smart Networks generally interconnect with the sensors found in a home or industry.


Asunto(s)
Cadena de Bloques , Sistemas de Computación , Industrias , Tecnología
18.
Sensors (Basel) ; 21(16)2021 Aug 22.
Artículo en Inglés | MEDLINE | ID: mdl-34451092

RESUMEN

The Advanced Metering Infrastructure (AMI) data represent a real-time source of information not only about electricity consumption but also as an indicator of other social, demographic, and economic dynamics within a city. This paper presents a Data Analytics/Big Data framework applied to AMI data as a tool to leverage the potential of these data within the applications of a Smart City. The framework includes three fundamental aspects. First, the architectural view places AMI within the Smart Grids Architecture Model (SGAM). Second, the methodological view describes the transformation of raw data into knowledge, represented by the DIKW hierarchy and the NIST Big Data interoperability model. Finally, a binding element between the two views is represented by human expertise and skills, which yield a deeper understanding of the results and transform knowledge into wisdom. Our view addresses the challenges emerging in energy markets by adding a binding element that supports optimal and efficient decision-making. To show how the framework works, we developed a case study that implements each component of the framework for a load forecasting application in a Colombian Retail Electricity Provider (REP). The MAPE for some of the REP's markets was less than 5%. In addition, the case shows the effect of the binding element, as it raises new development alternatives and becomes a feedback mechanism for more assertive decision-making.


Asunto(s)
Macrodatos , Ciencia de los Datos , Sistemas de Computación , Electricidad , Predicción , Humanos
19.
J Healthc Eng ; 2021: 3277988, 2021.
Artículo en Inglés | MEDLINE | ID: mdl-34150188

RESUMEN

The world has been facing the COVID-19 pandemic since December 2019. Timely and efficient diagnosis of suspected COVID-19 patients plays a significant role in medical treatment. Deep transfer learning-based automated COVID-19 diagnosis on chest X-rays is required to counter the COVID-19 outbreak. This work proposes a real-time Internet of Things (IoT) framework for early diagnosis of suspected COVID-19 patients using ensemble deep transfer learning. The proposed framework offers real-time communication and diagnosis of suspected COVID-19 cases. The proposed IoT framework ensembles four deep learning models: InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201. Medical sensors are utilized to obtain the chest X-ray modalities and diagnose the infection using the deep ensemble model stored on the cloud server. The proposed deep ensemble model is compared with six well-known transfer learning models over the chest X-ray dataset. Comparative analysis revealed that the proposed model can help radiologists diagnose suspected COVID-19 patients efficiently and promptly.
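The ensembling step can be sketched as averaging the per-class probabilities of several models and taking the argmax. This is one common ensembling scheme, assumed here for illustration; the abstract does not state the exact combination rule, and the four networks' outputs below are hypothetical numbers, not real model predictions.

```python
# Sketch: ensemble several classifiers by averaging their softmax outputs.
import numpy as np

def ensemble_predict(prob_list):
    """Average class probabilities from several models; return (avg, argmax)."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg, int(np.argmax(avg))

# Hypothetical softmax outputs of four models for one chest X-ray,
# classes = [normal, COVID-19].
probs = [np.array([0.30, 0.70]),
         np.array([0.20, 0.80]),
         np.array([0.45, 0.55]),
         np.array([0.25, 0.75])]
avg, label = ensemble_predict(probs)
print(avg, label)
```

Averaging probabilities (soft voting) tends to be more robust than majority voting when the member models output calibrated confidences.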


Asunto(s)
Inteligencia Artificial , Prueba de COVID-19 , COVID-19/diagnóstico , Internet de las Cosas , SARS-CoV-2 , Brasil , China , Simulación por Computador , Sistemas de Computación , Bases de Datos Factuales , Aprendizaje Profundo , Diagnóstico por Computador , Humanos , Reconocimiento de Normas Patrones Automatizadas , Radiografía Torácica , Estados Unidos , Rayos X
20.
E-Cienc. inf ; 11(1)jun. 2021.
Artículo en Inglés | LILACS, SaludCR | ID: biblio-1384744

RESUMEN

Abstract This work stems from the need for tools to analyze and make decisions about complex systems by applying the rules for linearly dependent sets, with the purpose of providing a visual tool that supports complexity-reduction processes. Two important precedents are Armstrong's axioms, applied from their publication to the present for database normalization, and set theory, a fundamental pillar of the Structured Query Language. Building on them, together with second-order logic, which adds qualifiers for subsets or properties, this work was prepared as an axiomatic system with an explanatory methodology and a qualitative approach. As a result, a support tool is provided to analyze complex systems naturally, by breaking cycles and detecting patterns, without interfering with existing models; however, large systems can be difficult to address in their entirety, so dividing them into subsystems is recommended. This work establishes a technique that anyone can repeat, yet one with a strong theoretical foundation. It is highly useful for the normalization of relational databases, has great potential for application in the design of systems beyond computational ones, and is also useful for understanding dependencies through its axiomatic nature.
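A concrete consequence of Armstrong's axioms, central to the normalization use case mentioned above, is the attribute-closure algorithm. The sketch below shows that standard algorithm, not the paper's visual technique; the schema and functional dependencies are illustrative assumptions.

```python
# Sketch: closure of an attribute set under functional dependencies,
# a direct application of Armstrong's axioms (reflexivity, augmentation,
# transitivity) used in relational database normalization.
def closure(attrs, fds):
    """Return the closure of `attrs` under the functional dependencies `fds`.

    fds: iterable of (lhs, rhs) pairs, each a set of attribute names.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left side is already determined, absorb the right side.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Hypothetical schema R(A, B, C, D) with A -> B and B -> C.
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(sorted(closure({"A"}, fds)))
```

Here A transitively determines B and C, so {A} is a candidate key for the projection over {A, B, C}; such closures are exactly what normalization procedures compute when checking key and dependency violations.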




Asunto(s)
Análisis de Sistemas , Sistemas de Computación , Bases de Datos como Asunto