1.
EPMA J; 15(2): 261-274, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841619

ABSTRACT

Purpose: Retinopathy of prematurity (ROP) is a retinal vascular proliferative disease common in low-birth-weight and premature infants and one of the main causes of childhood blindness. In the context of predictive, preventive and personalized medicine (PPPM/3PM), early screening, identification and treatment of ROP directly contribute to improving patients' long-term visual prognosis and reducing the risk of blindness. Our objective was therefore to combine an artificial intelligence (AI) algorithm with clinical demographics to build a risk model for ROP, including infants with treatment-requiring retinopathy of prematurity (TR-ROP).

Methods: Data from 22,569 infants who underwent routine ROP screening at Shenzhen Eye Hospital from March 2003 to September 2023 were collected, including 3335 infants with ROP, 1234 of whom had TR-ROP. Two machine learning methods (logistic regression and decision tree) and a deep learning method (multi-layer perceptron) were trained on combinations of risk factors, namely birth weight (BW), gestational age (GA), gender, multiple birth (MB) and mode of delivery (MD), to predict the risk of ROP and TR-ROP. Five evaluation metrics were used to assess the performance of the risk prediction models, with the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUCPR) as the main measures.

Results: For ROP risk prediction, BW + GA achieved the best performance (mean ± SD, AUCPR: 0.4849 ± 0.0175; AUC: 0.8124 ± 0.0033). For TR-ROP risk prediction, reasonable performance was achieved using GA + BW + Gender + MD + MB (AUCPR: 0.2713 ± 0.0214; AUC: 0.8328 ± 0.0088).

Conclusions: Combining risk factors with AI in ROP screening programs can predict the risk of ROP and TR-ROP, detect TR-ROP earlier, and reduce the number of ROP examinations and unnecessary physiological stress in low-risk infants. Combining ROP-related biometric information with AI is therefore a cost-effective strategy for predictive diagnostics, targeted prevention, and personalization of medical services in the early screening and treatment of ROP.
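To make the modelling step concrete, the following is a minimal sketch of a BW + GA logistic-regression risk model evaluated with AUC and AUCPR, as described above; the data are synthetic and the risk formula and variable names are illustrative assumptions, not the Shenzhen Eye Hospital dataset or the study's exact pipeline.

```python
# Minimal sketch of a BW + GA risk model with AUC/AUCPR evaluation.
# All data below are synthetic; column semantics are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 5000
bw = rng.normal(2400, 600, n)          # birth weight in grams (synthetic)
ga = rng.normal(34, 3, n)              # gestational age in weeks (synthetic)
# Lower BW and GA -> higher assumed ROP risk (illustrative relationship only).
risk = 1 / (1 + np.exp(0.004 * (bw - 1500) + 0.6 * (ga - 30)))
y = rng.binomial(1, risk)              # synthetic ROP labels

X = np.column_stack([bw, ga])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

# AUC and AUCPR, the two headline metrics used in the study.
print("AUC:  ", roc_auc_score(y_te, p))
print("AUCPR:", average_precision_score(y_te, p))
```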

2.
Sensors (Basel); 22(21), 2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36365925

ABSTRACT

Cognitive Radio (CR) is a practical technique for overcoming spectrum inefficiency by sensing and exploiting spectrum holes over a wide band. In particular, cooperative spectrum sensing (CSS) determines the state of primary users (PUs) through the cooperation of multiple secondary users (SUs) distributed around a Cognitive Radio Network (CRN), further mitigating noise and fading in the radio environment. However, balancing energy efficiency and good sensing performance remains challenging in existing CSS systems, especially when the CRN consists of battery-limited sensors. This article investigates the application of machine learning to cooperative spectrum sensing, in particular for solving a multi-dimensional optimization problem that traditional approaches cannot readily address. Specifically, we develop a neural network whose parameters are integral to CSS performance, including a sleeping rate for each sensor device and the thresholds used in the energy detection method, together with a customized loss function based on the energy consumption of the CSS system and multiple penalty terms reflecting the system requirements. Under this formulation, energy consumption is minimized while guaranteeing target probabilities of false alarm and detection in the CSS system. Comparison studies under different hard fusion rules ('OR' and 'AND') demonstrate the effectiveness of the proposed method in improving CSS system performance, as well as its robustness to changing global requirements. The paper also suggests combining the traditional and proposed schemes to circumvent the respective pitfalls of neural networks and traditional semi-analytic methods.
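As an illustration of the penalised loss described above, the sketch below treats per-sensor sleeping rates and energy-detection thresholds as trainable parameters and penalizes violations of target false-alarm and detection probabilities under an 'OR' fusion rule; the detector statistics, constants and fusion approximation are placeholder assumptions, not the paper's model.

```python
# Sketch of an energy-aware CSS loss: minimise sensing energy subject to
# soft constraints on global false-alarm / detection probabilities.
# Functional forms below are illustrative, not the paper's formulation.
import torch

n_sensors = 8
sleep_logit = torch.zeros(n_sensors, requires_grad=True)    # -> sleeping rate in (0, 1)
thr = torch.full((n_sensors,), 1.5, requires_grad=True)     # energy-detection thresholds

E_SENSE, PFA_MAX, PD_MIN, LAM = 1.0, 0.1, 0.9, 10.0

def loss():
    s = torch.sigmoid(sleep_logit)                  # sleeping rate per sensor
    energy = torch.sum((1 - s) * E_SENSE)           # expected sensing energy
    # Placeholder single-sensor probabilities (stand-ins for the true
    # energy-detector statistics under the noise / signal hypotheses).
    pfa_i = torch.sigmoid(2.0 - thr)
    pd_i = torch.sigmoid(4.0 - thr)
    # 'OR' hard-fusion over awake sensors (approximate, assuming independence).
    p_awake = 1 - s
    qfa = 1 - torch.prod(1 - p_awake * pfa_i)
    qd = 1 - torch.prod(1 - p_awake * pd_i)
    penalty = torch.relu(qfa - PFA_MAX) + torch.relu(PD_MIN - qd)
    return energy + LAM * penalty

opt = torch.optim.Adam([sleep_logit, thr], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    l = loss()
    l.backward()
    opt.step()

print("sleeping rates:", torch.sigmoid(sleep_logit).detach().numpy().round(2))
```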


Subjects
Computer Communication Networks, Wireless Technology, Algorithms, Machine Learning, Physical Phenomena
3.
Front Neurosci; 16: 1065366, 2022.
Article in English | MEDLINE | ID: mdl-36825214

ABSTRACT

Complexity is a key element of software quality. This article investigates the problem of measuring code complexity and discusses the results of a controlled experiment comparing different views and methods of measuring it. Participants (27 programmers) were asked to read and (try to) understand a set of programs while the complexity of those programs was assessed through different methods and perspectives: (a) classic code complexity metrics such as the McCabe and Halstead metrics; (b) cognitive complexity metrics based on scored code constructs; (c) cognitive complexity metrics from state-of-the-art tools such as SonarQube; (d) human-centered metrics relying on the direct assessment of programmers' behavioral features (e.g., reading time and revisits) using eye tracking; and (e) cognitive load/mental effort assessed using electroencephalography (EEG). The human-centered perspective was complemented by participants' subjective evaluation of the mental effort required to understand the programs, using the NASA Task Load Index (TLX). Additionally, code complexity was measured both at the program level and, whenever possible, at the very low level of code constructs/code regions, to identify the code elements and the code context that may trigger a complexity surge in programmers' perception of code comprehension difficulty. The programmers' cognitive load measured using EEG was used as a reference to evaluate how well the different metrics express the (human) difficulty of comprehending code. Extensive experimental results show that popular metrics such as V(g) and the complexity metric from SonarSource tools deviate considerably from programmers' perception of code complexity and often do not show the expected monotonic behavior. The article summarizes the findings in a set of guidelines to improve existing code complexity metrics, particularly state-of-the-art metrics such as the cognitive complexity metric from SonarSource tools.
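As a reminder of what a classic metric like V(g) measures, here is a rough sketch that approximates McCabe cyclomatic complexity for Python code by counting decision points in the AST; it is a simplified illustration, not the tooling or the languages used in the study.

```python
# Simplified approximation of McCabe cyclomatic complexity V(g) for Python:
# count decision points in the abstract syntax tree and add 1.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1        # V(g) = decision points + 1 for a single-entry graph

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        if x > 10 and x % 2 == 0:
            return "big even"
    return "positive"
"""
print(cyclomatic_complexity(example))   # counts if/elif/for/and branches
```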

4.
Sensors (Basel); 12(12): 16433-50, 2012 Nov 27.
Article in English | MEDLINE | ID: mdl-23443387

ABSTRACT

Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low-power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews quasi-static electromagnetic (EM) field modeling of a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with a correction factor using the minimum mean square error (MMSE) technique. The IBC channel characteristics are then studied by comparing theoretical calculations from this transfer function with experimental measurements in both the frequency and time domains. High-pass characteristics are observed in the channel gain analysis across different transmission distances. In addition, harmonic distortion is analyzed in both baseband and passband transmission of square input waves. The experimental results are consistent with the calculations from the transfer function with the correction factor. Furthermore, we explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in an IBC system with a carrier frequency of 500 kHz, finding that the theoretical results are in good agreement with the simulations.
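As a small illustration of the kind of BER comparison mentioned above, the sketch below simulates BPSK over a generic AWGN channel and compares the result with the theoretical Q-function curve; the plain AWGN channel is an assumption for illustration, not the paper's corrected IBC transfer function or its full set of modulation schemes.

```python
# BPSK over AWGN: simulated BER versus the theoretical curve Q(sqrt(2*Eb/N0)).
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
n_bits = 200_000
ebn0_db = np.arange(0, 9)

for snr_db in ebn0_db:
    ebn0 = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                             # BPSK mapping: 0 -> -1, 1 -> +1
    noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)
    detected = (symbols + noise) > 0
    ber_sim = np.mean(detected != bits)
    ber_theory = 0.5 * erfc(np.sqrt(ebn0))             # Q(sqrt(2*Eb/N0))
    print(f"Eb/N0 = {snr_db} dB  simulated {ber_sim:.5f}  theoretical {ber_theory:.5f}")
```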


Subjects
Biosensing Techniques, Extremities/physiology, Theoretical Models, Static Electricity, Electromagnetic Fields, Equipment Design, Humans, Telemetry