Results 1 - 9 of 9
1.
Neuroimage ; 156: 29-42, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28479475

ABSTRACT

Despite numerous important contributions, the investigation of brain connectivity with magnetoencephalography (MEG) still faces multiple challenges. One critical aspect of source-level connectivity, largely overlooked in the literature, is the putative effect of the choice of the inverse method on the subsequent cortico-cortical coupling analysis. We set out to investigate the impact of three inverse methods on source coherence detection using simulated MEG data. To this end, thousands of randomly located pairs of sources were created. Several parameters were manipulated, including inter- and intra-source correlation strength, source size and spatial configuration. The simulated pairs of sources were then used to generate sensor-level MEG measurements at varying signal-to-noise ratios (SNR). Next, the source-level power and coherence maps were calculated using three methods: (a) L2-Minimum-Norm Estimate (MNE), (b) Linearly Constrained Minimum Variance (LCMV) beamforming, and (c) Dynamic Imaging of Coherent Sources (DICS) beamforming. The performances of the methods were evaluated using Receiver Operating Characteristic (ROC) curves. The results indicate that beamformers perform better than MNE for coherence reconstruction if the interacting cortical sources consist of point-like sources. On the other hand, MNE provides better connectivity estimation than beamformers if the interacting sources are simulated as extended cortical patches, where each patch consists of dipoles with identical time series (high intra-patch coherence). However, the performance of the beamformers for interacting patches improves substantially if each patch of active cortex is simulated with only partly coherent time series (partial intra-patch coherence).
These results demonstrate that the choice of the inverse method impacts the results of MEG source-space coherence analysis, and that the optimal choice of the inverse solution depends on the spatial and synchronization profile of the interacting cortical sources. The insights revealed here can guide method selection and help improve data interpretation regarding MEG connectivity estimation.
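The ROC-based comparison described in this abstract can be illustrated with a minimal, self-contained sketch. The detection scores and ground-truth labels below are hypothetical stand-ins for per-pair coherence statistics, not data from the study:

```python
def roc_curve(scores, labels):
    """Compute ROC points (FPR, TPR) by sweeping a threshold over the scores.

    scores: detection statistic per candidate source pair (higher = more likely coupled)
    labels: 1 for a truly coupled pair, 0 otherwise
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Hypothetical coherence scores for 6 source pairs, 3 of them truly coupled
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auc(roc_curve(scores, labels)))
```

Comparing the AUC obtained with each inverse method on the same simulated pairs is, in essence, the evaluation the authors report.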


Subjects
Brain Mapping/methods, Brain/physiology, Magnetoencephalography/methods, Neural Pathways/physiology, Signal Processing, Computer-Assisted, Algorithms, Computer Simulation, Humans, Models, Neurological
2.
Sensors (Basel) ; 17(10)2017 Oct 02.
Article in English | MEDLINE | ID: mdl-28974037

ABSTRACT

Automatic visual inspection allows for the identification of surface defects in manufactured parts. Nevertheless, when defects are on a sub-millimeter scale, detection and recognition are a challenge, particularly when the defect generates topological deformations that do not show strong contrast in the 2D image. In this paper, we present a method for recognizing surface defects in 3D point clouds. Firstly, we propose a novel 3D local descriptor called the Model Point Feature Histogram (MPFH) for defect detection. Our descriptor is inspired by earlier descriptors such as the Point Feature Histogram (PFH). To construct the MPFH descriptor, the models that best fit the local surface and their normal vectors are estimated. For each surface model, its contribution weight to the formation of the surface region is calculated, and from the relative differences between models of the same region a histogram is generated that represents the underlying surface changes. Secondly, through a classification stage, the points on the surface are labeled according to five types of primitives and the defect is detected. Thirdly, the connected components of primitives are projected onto a plane, forming a 2D image. Finally, 2D geometrical features are extracted and the defects are recognized with a support vector machine. The database used is composed of 3D simulated surfaces and 3D reconstructions of defects in welding, artificial teeth, material indentations, ceramics and 3D models of defects. The quantitative and qualitative results showed that the proposed descriptor is robust to noise and scale, and is sufficiently discriminative for detecting some surface defects. The performance of the proposed method was evaluated on the task of classifying the 3D point cloud into primitives, reaching an accuracy of 95%, higher than other state-of-the-art descriptors. The defect recognition rate was close to 94%.
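The MPFH formulation itself is specific to this paper, but the general idea behind histogram-based local surface descriptors such as PFH can be sketched in a simplified form: bin the angles between local surface normals and a reference normal, so that flat and curved patches produce different histograms. This is an illustrative simplification, not the authors' descriptor:

```python
import math

def angle_histogram(normals, reference, n_bins=5):
    """Bin the angles between each local normal and a reference normal.

    normals: list of unit 3-vectors estimated on a local surface patch
    reference: unit 3-vector (e.g., the patch's average normal)
    Returns a normalized histogram describing how the surface bends.
    """
    hist = [0] * n_bins
    for n in normals:
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, reference))))
        angle = math.acos(dot)                      # angle in [0, pi]
        bin_idx = min(int(angle / math.pi * n_bins), n_bins - 1)
        hist[bin_idx] += 1
    total = len(normals)
    return [h / total for h in hist]

# A perfectly flat patch: all normals aligned with the reference,
# so all mass falls into the first bin.
flat = [(0.0, 0.0, 1.0)] * 4
print(angle_histogram(flat, (0.0, 0.0, 1.0)))
```

A classifier (e.g., an SVM, as in the paper) can then be trained on such histograms to label points into surface primitives.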

3.
NPJ Digit Med ; 7(1): 125, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744955

ABSTRACT

Scientific research on artificial intelligence (AI) in dermatology has increased exponentially. The objective of this study was to perform a systematic review and meta-analysis evaluating the performance of AI algorithms for skin cancer classification in comparison to clinicians with different levels of expertise. Following PRISMA guidelines, 3 electronic databases (PubMed, Embase, and Cochrane Library) were screened for relevant articles up to August 2022. The quality of the studies was assessed using QUADAS-2. A meta-analysis of sensitivity and specificity was performed for the accuracy of AI and clinicians. Fifty-three studies were included in the systematic review, and 19 met the inclusion criteria for the meta-analysis. Considering all studies and all subgroups of clinicians, we found a sensitivity (Sn) of 87.0% and specificity (Sp) of 77.1% for AI algorithms, versus a Sn of 79.78% and Sp of 73.6% for all clinicians overall; the differences were statistically significant for both Sn and Sp. The gap between AI (Sn 92.5%, Sp 66.5%) and generalist clinicians (Sn 64.6%, Sp 72.8%) was larger than the gap between AI and expert clinicians: performance of AI algorithms (Sn 86.3%, Sp 78.4%) and expert dermatologists (Sn 84.2%, Sp 74.4%) was clinically comparable. The limitations of AI algorithms in clinical practice should be considered, and future studies should focus on real-world settings and on AI assistance.
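Where the sensitivity and specificity figures above come from can be shown with a minimal sketch. A published meta-analysis would pool studies with a bivariate random-effects model; the crude count-summing below, with hypothetical 2x2 tables, only illustrates the definitions:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled(studies):
    """Crude pooled Sn/Sp by summing 2x2 counts across studies.

    Note: this naive pooling ignores between-study heterogeneity; a real
    meta-analysis would fit a bivariate random-effects model instead.
    """
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    tn = sum(s[2] for s in studies)
    fp = sum(s[3] for s in studies)
    return sens_spec(tp, fn, tn, fp)

# Hypothetical per-study counts (tp, fn, tn, fp) for two studies
studies = [(80, 20, 70, 30), (45, 5, 40, 10)]
print(pooled(studies))
```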

4.
Comput Biol Med ; 141: 105147, 2022 02.
Article in English | MEDLINE | ID: mdl-34929463

ABSTRACT

Recent advances in medical imaging have confirmed the presence of altered hemodynamics in bicuspid aortic valve (BAV) patients. Therefore, there is a need for new hemodynamic biomarkers to refine disease monitoring and improve patient risk stratification. This research aims to analyze and extract multiple correlation patterns of hemodynamic parameters from 4D Flow MRI data and to determine which parameters allow an accurate classification between healthy volunteers (HV) and BAV patients with dilated and non-dilated ascending aorta using machine learning. Sixteen hemodynamic parameters were calculated in the ascending aorta (AAo) and aortic arch (AArch) at peak systole from 4D Flow MRI. We used sequential forward selection (SFS) and principal component analysis (PCA) as feature-selection algorithms. Eleven machine-learning classifiers were then implemented to separate HV from BAV patients (with non-dilated and dilated ascending aorta). Multiple correlation patterns among hemodynamic parameters were extracted using hierarchical clustering. Linear discriminant analysis and random forest were the best-performing classifiers: using five hemodynamic parameters selected with SFS (velocity angle, forward velocity, vorticity, and backward velocity in the AAo; and helicity density in the AArch), they reached 96.31 ± 1.76% and 96.00 ± 0.83% accuracy, respectively. Hierarchical clustering revealed three groups of correlated features. According to this analysis, features selected by SFS performed better than those selected by PCA because the five selected parameters were distributed across the 3 clusters. Based on the proposed method, we conclude that the feature selection identified five potential hemodynamic biomarkers related to this disease.
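Sequential forward selection, as used above, is a generic greedy procedure and can be sketched with a pluggable scoring function. In the study the score would be cross-validated classifier accuracy; the fixed per-feature values below are a toy stand-in, and the feature names are only borrowed for illustration:

```python
def sfs(features, score_fn, k):
    """Greedy sequential forward selection.

    features: list of candidate feature names
    score_fn: maps a list of selected features to a quality score (higher = better)
    k: number of features to select
    """
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        # Add the single feature that most improves the score of the current set
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: each feature contributes a fixed value, so SFS simply picks
# the k largest. A real scorer would retrain a classifier per candidate set.
values = {"velocity_angle": 3.0, "vorticity": 2.0, "helicity_density": 1.0}
print(sfs(list(values), lambda s: sum(values[f] for f in s), 2))
```

Because the score is re-evaluated on the whole candidate set at each step, SFS can capture feature interactions that ranking features individually would miss.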


Subjects
Bicuspid Aortic Valve Disease, Heart Valve Diseases, Aortic Valve/diagnostic imaging, Biomarkers, Dilatation, Heart Valve Diseases/diagnostic imaging, Hemodynamics, Humans, Machine Learning, Magnetic Resonance Imaging/methods
5.
Biol Open ; 8(12)2019 Dec 24.
Article in English | MEDLINE | ID: mdl-31852668

ABSTRACT

Xenopus laevis frogs are widely used organisms in modern biology (Harland and Grainger, 2011). Their central nervous system is particularly interesting because, at certain stages of metamorphosis, the spinal cord can regenerate after injury and swimming can recover. With this in mind, automatic gait analysis could help evaluate regenerative performance by automatically and quantitatively measuring froglets' limb movement. Here, we present an algorithm that characterizes spinal cord damage in froglets. The proposed method tracks the position of the limbs throughout videos and extracts kinematic features, which subsequently serve to differentiate froglets with different levels of spinal cord damage. The detection algorithm and the chosen kinematic features were validated in a pattern recognition experiment in which 90 videos (divided equally into three classes: uninjured, hemisected and transected) were classified. We conclude that our system effectively characterizes damage to the spinal cord through video analysis of a swimming froglet, with 97% accuracy. These results potentially validate this methodology for automatically comparing the recovery of spinal cord function after different treatments without the need to process videos manually. In addition, the procedure could be used to measure the kinematics and behavioral responses of froglets under different experimental conditions such as nutritional state, stress, genetic background and age.
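Extracting a kinematic feature from a tracked limb trajectory can be sketched simply. The abstract does not list the features used, so mean speed below is an illustrative example, with a hypothetical pixel-coordinate track:

```python
import math

def mean_speed(track, fps):
    """Mean speed of a tracked limb point from frame-by-frame (x, y) positions.

    track: list of (x, y) pixel coordinates, one per video frame
    fps: frames per second of the recording
    Returns speed in pixels per second.
    """
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(track, track[1:]))
    duration = (len(track) - 1) / fps
    return dist / duration

# Hypothetical track: a limb point moving 3 px per frame at 30 fps
track = [(0, 0), (3, 0), (6, 0), (9, 0)]
print(mean_speed(track, 30))
```

A vector of such features per video can then feed a standard classifier to separate the uninjured, hemisected and transected classes.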

6.
Micron ; 117: 29-39, 2019 02.
Article in English | MEDLINE | ID: mdl-30458300

ABSTRACT

Fault-zone permeability and the real 3D spatial distribution of fault-related fracture networks are critical in assessing the behavior of fault zones with respect to fluids. Studying the real 3D spatial distribution of the microfracture network with X-ray micro-computed tomography is a crucial step toward unraveling the true structural permeability conditions of a fault zone. Despite the availability of several commercial software packages for estimating rock properties from X-ray micro-computed tomography scans, their high cost and lack of programmability encourage the use of open-source tools. This work presents the implementation of a workflow for quantifying both structural and geometrical parameters (fracture density, fracture aperture, fracture porosity, and fracture surface area) and for modeling the palaeopermeability of fault-related fractured samples, with a focus on the proper spatial orientation of both the sample and the results. This is achieved through an easy-to-follow, step-by-step implementation combining open-source software, newly implemented code, and numerical methods. The approach keeps track of the sample's spatial orientation from the physical to the virtual world, thus allowing any fault-related palaeopermeability anisotropy to be assessed.
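Of the parameters listed, fracture porosity is the simplest to compute from a segmented tomography volume: the fraction of voxels labeled as fracture. This minimal sketch assumes the segmentation step is already done and is not the paper's actual code:

```python
def fracture_porosity(volume):
    """Fracture porosity of a segmented micro-CT volume.

    volume: nested lists indexed (z, y, x), with 1 = fracture voxel
    and 0 = rock matrix. Returns the fraction of fracture voxels.
    """
    total = fracture = 0
    for plane in volume:
        for row in plane:
            for voxel in row:
                total += 1
                fracture += voxel
    return fracture / total

# Tiny 2x2x2 synthetic volume with 2 fracture voxels out of 8
vol = [[[1, 0], [0, 0]], [[1, 0], [0, 0]]]
print(fracture_porosity(vol))
```

The other parameters (aperture, surface area, density) require geometric analysis of the labeled fracture regions rather than a simple voxel count.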

8.
Front Comput Neurosci ; 11: 80, 2017.
Article in English | MEDLINE | ID: mdl-28943847

ABSTRACT

Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must handle sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented step by step, and a specific sequence of choices must be performed to produce the expected outcome. But as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? Beyond better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that facilitate interaction at different points on the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We recorded behavioral data from voluntary participants while they attempted to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then modeled their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
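The core computation behind any HMM-based behavioral model is the forward algorithm, which scores how likely an observed choice sequence is under a given strategy model. The two-state model below (state names, transition and emission probabilities) is entirely hypothetical, not the paper's fitted model:

```python
def forward(obs, start, trans, emit):
    """Forward algorithm: likelihood of an observation sequence under an HMM.

    start[s]: initial probability of hidden state s
    trans[s][t]: probability of moving from state s to state t
    emit[s][o]: probability of emitting observation o while in state s
    """
    # Initialize with the first observation
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    # Recursively fold in each subsequent observation
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t][o]
                 for t in start}
    return sum(alpha.values())

# Hypothetical 2-state model over binary left/right choices per decision step
start = {"explore": 0.5, "exploit": 0.5}
trans = {"explore": {"explore": 0.7, "exploit": 0.3},
         "exploit": {"explore": 0.2, "exploit": 0.8}}
emit = {"explore": {"L": 0.5, "R": 0.5},
        "exploit": {"L": 0.9, "R": 0.1}}
print(forward(["L", "L"], start, trans, emit))
```

Comparing such likelihoods across a small set of stereotyped strategy models is one way to assign a participant's session to a strategy, in the spirit of the approach described above.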

9.
Comput Intell Neurosci ; 2016: 3979547, 2016.
Article in English | MEDLINE | ID: mdl-27092179

ABSTRACT

Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series underlying magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step, and once set, it is common practice to keep the same coefficient throughout a study. However, it remains unknown whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte Carlo simulations of MEG data, in which we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratios (SNR), and coupling strengths. We then searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and the SNR, but not the strength of coupling, were the main parameters affecting the best choice of lambda. Our findings suggest using less regularization when measuring oscillatory coupling than when estimating power.
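The role lambda plays in the Tikhonov-regularized minimum-norm estimate can be sketched directly. The toy leadfield below is random and the dimensions are illustrative; real MEG problems have hundreds of sensors and thousands of sources:

```python
import numpy as np

def mne_inverse(G, y, lam):
    """Tikhonov-regularized minimum-norm estimate of source amplitudes.

    Computes x_hat = G.T @ inv(G @ G.T + lam * I) @ y, where G is the
    (sensors x sources) leadfield, y the sensor data, and lam the
    regularization coefficient (lambda).
    """
    n_sensors = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

# Toy leadfield: 3 sensors, 2 sources; noiseless data from sources x = [1, 2]
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 2))
x_true = np.array([1.0, 2.0])
y = G @ x_true

x_small_lam = mne_inverse(G, y, 1e-9)  # weak regularization: near-exact recovery
x_large_lam = mne_inverse(G, y, 10.0)  # strong regularization: estimate shrunk toward zero
print(np.round(x_small_lam, 3))
```

The shrinkage with large lambda is the trade-off the study quantifies: the lambda that best stabilizes power maps is not necessarily the one that best preserves the cross-source structure needed for coherence detection.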


Subjects
Brain Mapping/methods, Brain/physiology, Magnetoencephalography/methods, Signal Processing, Computer-Assisted, Algorithms, Computer Simulation, Humans, Monte Carlo Method