Results 1 - 20 of 24
1.
Entropy (Basel); 26(2), 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38392358

ABSTRACT

Despite their remarkable performance, deep learning models still lack robustness guarantees, particularly in the presence of adversarial examples. This significant vulnerability raises concerns about their trustworthiness and hinders their deployment in critical domains that require certified levels of robustness. In this paper, we introduce an information geometric framework to establish precise robustness criteria for ℓ2 white-box attacks in a multi-class classification setting. We endow the output space with the Fisher information metric and derive criteria on the input-output Jacobian to ensure robustness. We show that model robustness can be achieved by constraining the model to be partially isometric around the training points. We evaluate our approach on the MNIST and CIFAR-10 datasets against adversarial attacks, revealing substantial improvements over defensive distillation and Jacobian regularization for medium-sized perturbations, and superior robustness compared to adversarial training for large perturbations, all while maintaining the desired accuracy.
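
As an illustration of the constraint described above, here is a minimal sketch that penalizes deviation of the input-output Jacobian Gram matrix from a scaled identity, i.e., pushes the map toward a partial isometry at a training point. The paper's exact Fisher-metric formulation differs; `isometry_penalty` and its arguments are illustrative names, not the authors' code.

```python
import torch

def isometry_penalty(model, x, scale=1.0):
    """Penalize deviation of J J^T from a scaled identity at a single input x.

    A sketch only: the paper derives its criterion from the Fisher information
    metric on the output space; here we use a plain Frobenius penalty."""
    out_dim = model(x).numel()
    # create_graph=True so the penalty itself can be backpropagated in training
    J = torch.autograd.functional.jacobian(model, x, create_graph=True)
    J = J.reshape(out_dim, x.numel())          # flatten to (out_dim, in_dim)
    gram = J @ J.T                             # (out_dim, out_dim)
    eye = scale * torch.eye(gram.shape[0])
    return ((gram - eye) ** 2).sum()

# Training would add `lam * isometry_penalty(model, xi)` to the task loss.
```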

2.
PLoS Med; 16(5): e1002810, 2019 May.
Article in English | MEDLINE | ID: mdl-31136584

ABSTRACT

BACKGROUND: Low-grade gliomas cause significant neurological morbidity by brain invasion. There is no universally accepted objective technique available for detection of enlargement of low-grade gliomas in the clinical setting; subjective evaluation by clinicians using visual comparison of longitudinal radiological studies is the gold standard. The aim of this study is to determine whether a computer-assisted diagnosis (CAD) method helps physicians detect earlier growth of low-grade gliomas. METHODS AND FINDINGS: We reviewed 165 patients diagnosed with grade 2 gliomas, seen at the University of Alabama at Birmingham clinics from 1 July 2017 to 14 May 2018. MRI scans were collected during the spring and summer of 2018. Fifty-six gliomas met the inclusion criteria, including 19 oligodendrogliomas, 26 astrocytomas, and 11 mixed gliomas in 30 males and 26 females with a mean age of 48 years and a follow-up range (difference between longest and shortest follow-up) of 150.2 months. None received radiation therapy. We also studied 7 patients with an imaging abnormality without pathological diagnosis, who were clinically stable at the time of retrospective review (14 May 2018). This study compared growth detection by 7 physicians aided by the CAD method with retrospective clinical reports. The tumors of 63 patients (56 + 7) in 627 MRI scans were digitized, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. The CAD method consisted of tumor segmentation, computing volumes, and pointing to growth by the online abrupt change-of-point method, which considers only past measurements. Independent scientists have evaluated the segmentation method. In 29 of the 34 patients with progression, the median time to growth detection was only 14 months for CAD compared to 44 months for current standard-of-care radiological evaluation (p < 0.001). Using CAD, accurate detection of tumor enlargement was possible with a median of only 57% change in tumor volume, as compared to a median of 174% change in volume necessary to diagnose tumor growth using standard-of-care clinical methods (p < 0.001). In the radiologically stable group, CAD facilitated growth detection in 13 out of 22 patients. CAD did not detect growth in the imaging abnormality group. The main limitation of this study was its retrospective design; nevertheless, the results depict the current state of a gold standard in clinical practice that allowed a significant increase in tumor volumes from baseline before detection. Such large increases in tumor volume would not be permitted in a prospective design. The number of glioma patients (n = 56) is a limitation; however, it is equivalent to the number of patients in phase II clinical trials. CONCLUSIONS: The current practice of visual comparison of longitudinal MRI scans is associated with significant delays in detecting growth of low-grade gliomas. Our findings support the idea that physicians aided by CAD detect growth at significantly smaller volumes than physicians using visual comparison alone. This study does not answer the questions of whether to treat and of which treatment modality is optimal. Nonetheless, early growth detection sets the stage for future clinical studies that address these questions and whether early therapeutic interventions prolong survival and improve quality of life.
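
The abstract specifies only that the change-point detector is online and uses past measurements. As a stand-in for the idea, here is a minimal one-sided CUSUM sketch over successive tumor-volume changes; the function name, `drift`, and `threshold` are hypothetical and would need tuning to the segmentation noise, and the paper's actual method may differ.

```python
import numpy as np

def cusum_growth_alarm(volumes, drift=0.0, threshold=5.0):
    """One-sided CUSUM on successive volume changes (cm^3 assumed).

    Online in the sense that the statistic at time t uses only past
    measurements. Returns the index of the first alarm, or None."""
    s = 0.0
    diffs = np.diff(np.asarray(volumes, dtype=float))
    for i, dv in enumerate(diffs):
        s = max(0.0, s + dv - drift)   # accumulate only upward drift
        if s > threshold:
            return i + 1               # index in the original volume series
    return None

# Hypothetical longitudinal volume series; alarm fires once growth accumulates.
alarm = cusum_growth_alarm([10.2, 10.5, 10.1, 11.8, 13.9, 16.5], threshold=4.0)
```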


Subjects
Brain Neoplasms/diagnostic imaging; Cell Proliferation; Glioma/diagnostic imaging; Magnetic Resonance Imaging; Brain Neoplasms/pathology; Female; Glioma/pathology; Humans; Longitudinal Studies; Male; Middle Aged; Neoplasm Grading; Neoplasm Invasiveness; Predictive Value of Tests; Retrospective Studies; Time Factors; Tumor Burden
3.
Bull Math Biol; 78(7): 1450-76, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27417984

ABSTRACT

We address the problem of fully automated region discovery and robust image segmentation by devising a new deformable model based on the level set method (LSM) and probabilistic nonnegative matrix factorization (NMF). We describe the use of NMF to calculate the number of distinct regions in the image and to derive the local distribution of the regions, which is incorporated into the energy functional of the LSM. The results demonstrate that our NMF-LSM method is superior to other approaches when applied to synthetic binary and gray-scale images and to clinical magnetic resonance images (MRI) of the human brain with and without a malignant brain tumor, glioblastoma multiforme. In particular, the NMF-LSM method is fully automated, highly accurate, less sensitive to the initial selection of the contour(s) or initial conditions, more robust to noise and model parameters, and able to detect distinct regions as small as desired. These advantages stem from the fact that the proposed method relies on histogram information instead of intensity values and does not introduce nuisance model parameters. These properties provide a general approach for automated, robust region discovery and segmentation in heterogeneous images. Compared with the retrospective radiological diagnoses of two patients with non-enhancing grade 2 and 3 oligodendroglioma, the NMF-LSM method detects earlier progression times and appears suitable for monitoring tumor response. The NMF-LSM method fills an important need for automated segmentation of clinical MRI.
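
A minimal sketch of the histogram-plus-NMF ingredient described above: factor local-patch histograms at increasing rank and take the rank at which reconstruction error stops improving as a proxy for the number of distinct regions. This is an illustration under stated assumptions (intensities normalized to [0, 1], a crude elbow rule), not the paper's NMF-LSM pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.image import extract_patches_2d

def estimate_region_count(image, max_regions=6, patch=7, bins=32):
    """Pick the NMF rank where reconstruction error stops improving markedly."""
    patches = extract_patches_2d(image, (patch, patch),
                                 max_patches=2000, random_state=0)
    # one normalized local histogram per patch (rows of the data matrix)
    H = np.stack([np.histogram(p, bins=bins, range=(0.0, 1.0), density=True)[0]
                  for p in patches])
    errors = []
    for k in range(1, max_regions + 1):
        model = NMF(n_components=k, init='nndsvda', max_iter=500, random_state=0)
        model.fit(H)
        errors.append(model.reconstruction_err_)
    # crude elbow: first rank whose relative improvement falls below 5%
    for k in range(1, len(errors)):
        if (errors[k - 1] - errors[k]) / errors[k - 1] < 0.05:
            return k
    return max_regions
```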


Subjects
Brain/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/statistics & numerical data; Adult; Algorithms; Brain Neoplasms/diagnostic imaging; Computer Simulation; Early Diagnosis; Glioma/diagnostic imaging; Humans; Male; Mathematical Concepts; Models, Statistical; Neuroimaging/statistics & numerical data; Pattern Recognition, Automated/statistics & numerical data
4.
Diagnostics (Basel); 14(11), 2024 May 21.
Article in English | MEDLINE | ID: mdl-38893592

ABSTRACT

Patients diagnosed with glioblastoma multiforme (GBM) continue to face a dire prognosis. Developing accurate and efficient contouring methods is crucial, as they can significantly advance both clinical practice and research. This study evaluates the AI models developed by MRIMath© for GBM T1c and fluid attenuation inversion recovery (FLAIR) images by comparing their contours to those of three neuro-radiologists using a smart manual contouring platform. The mean overall Sørensen-Dice similarity coefficient (DSC) for the post-contrast T1 (T1c) AI was 95%, with a 95% confidence interval (CI) of 93% to 96%, closely aligning with the radiologists' scores. For true positive T1c images, AI segmentation achieved a mean DSC of 81%, compared to the radiologists' scores, which ranged from 80% to 86%. Sensitivity and specificity for the T1c AI were 91.6% and 97.5%, respectively. The FLAIR AI exhibited a mean DSC of 90% with a 95% CI of 87% to 92%, comparable to the radiologists' scores. It also achieved a mean DSC of 78% for true positive FLAIR slices versus radiologists' scores of 75% to 83%, and recorded a median sensitivity and specificity of 92.1% and 96.1%, respectively. The T1c and FLAIR AI models produced mean Hausdorff distances (<5 mm), volume measurements, kappa scores, and Bland-Altman differences that align closely with those measured by the radiologists. Moreover, the inter-user variability between radiologists using the smart manual contouring platform was under 5% for T1c and under 10% for FLAIR images. These results underscore the MRIMath© platform's low inter-user variability and the high accuracy of its T1c and FLAIR AI models.
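
The DSC, sensitivity, and specificity reported above follow standard definitions; a minimal numpy sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def sensitivity_specificity(pred, truth):
    """Voxel-wise sensitivity (TP rate) and specificity (TN rate)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)
```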

5.
IEEE Trans Signal Process; 61(7): 1733-1742, 2013 Apr.
Article in English | MEDLINE | ID: mdl-24027380

ABSTRACT

In this paper, we develop a comprehensive framework for optimal perturbation control of dynamic networks. The aim of the perturbation is to drive the network away from an undesirable steady-state distribution and to force it to converge towards a desired steady-state distribution. The proposed framework does not make any assumptions about the topology of the initial network, and is thus applicable to general-topology networks. We define the optimal perturbation control as the minimum-energy perturbation, measured in terms of the Frobenius norm between the initial and perturbed probability transition matrices of the dynamic network. We subsequently demonstrate that there exists at most one optimal perturbation that forces the network into the desirable steady-state distribution. In the event where the optimal perturbation does not exist, we construct a family of suboptimal perturbations, and show that the suboptimal perturbation can be used to approximate the optimal limiting distribution arbitrarily closely. Moreover, we investigate the robustness of the optimal perturbation control to errors in the probability transition matrix, and demonstrate that the proposed optimal perturbation control is robust to data and inference errors in the probability transition matrix of the initial network. Finally, we apply the proposed optimal perturbation control method to the human melanoma gene regulatory network in order to force the network from an initial steady-state distribution associated with melanoma into a desirable steady-state distribution corresponding to a benign cell.
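
A sketch of the optimization described above, assuming the cvxpy modeling library: minimize the Frobenius norm of the perturbation subject to row-stochasticity and the desired stationary distribution. The paper additionally characterizes existence and uniqueness analytically, which this numerical sketch does not capture.

```python
import numpy as np
import cvxpy as cp

def optimal_perturbation(P, pi_desired):
    """Minimum-Frobenius-norm perturbed transition matrix Q with a desired
    stationary distribution pi_desired (a sketch of the formulation only)."""
    n = P.shape[0]
    Q = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(Q, axis=1) == 1,        # rows remain stochastic
                   pi_desired @ Q == pi_desired]  # pi_desired stationary for Q
    prob = cp.Problem(cp.Minimize(cp.norm(Q - P, 'fro')), constraints)
    prob.solve()
    return Q.value
```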

6.
Bioinformatics; 27(1): 103-10, 2011 Jan 01.
Article in English | MEDLINE | ID: mdl-21062762

ABSTRACT

MOTIVATION: Analysis and intervention in the dynamics of gene regulatory networks is at the heart of emerging efforts in the development of modern treatment of numerous ailments including cancer. The ultimate goal is to develop methods to intervene in the function of living organisms in order to drive cells away from a malignant state into a benign form. A serious limitation of much of the previous work in cancer network analysis is the use of external control, which requires intervention at each time step, for an indefinite time interval. This is in sharp contrast to the proposed approach, which relies on the solution of an inverse perturbation problem to introduce a one-time intervention in the structure of regulatory networks. This isolated intervention transforms the steady-state distribution of the dynamic system to the desired steady-state distribution. RESULTS: We formulate the optimal intervention problem in gene regulatory networks as a minimal perturbation of the network that forces it to converge to a desired steady-state distribution of gene regulation. We cast optimal intervention in gene regulation as a convex optimization problem, thus providing a globally optimal solution that can be efficiently computed using standard toolboxes for convex optimization. The optimality criterion is chosen to minimize potential adverse effects as a consequence of the intervention strategy. We consider a perturbation that minimizes (i) the overall energy of change between the original and controlled networks and (ii) the time needed to reach the desired steady-state distribution of gene regulation. Furthermore, we show that there is an inherent trade-off between minimizing the energy of the perturbation and the convergence rate to the desired distribution. We apply the proposed control to the human melanoma gene regulatory network. AVAILABILITY: The MATLAB code for optimal intervention in gene regulatory networks can be found online: http://syen.ualr.edu/nxbouaynaya/Bioinformatics2010.html.
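
One way to see the energy-versus-convergence trade-off mentioned above: the convergence rate of a Markov chain to its steady state is governed by the second-largest eigenvalue modulus (SLEM) of the transition matrix, so candidate perturbed networks can be scored on both perturbation energy and SLEM. A minimal numpy sketch (the paper itself works in MATLAB, per the availability note):

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a transition matrix.

    Smaller SLEM means faster convergence to the steady-state distribution."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return mags[1]

def perturbation_energy(P, Q):
    """Frobenius-norm 'energy' of the change from P to Q."""
    return np.linalg.norm(Q - P, 'fro')
```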


Subjects
Gene Regulatory Networks; Gene Expression Regulation; Humans; Markov Chains; Melanoma/genetics; Models, Statistical; Stochastic Processes
7.
Front Med Technol; 4: 919046, 2022.
Article in English | MEDLINE | ID: mdl-35958121

ABSTRACT

Deep neural networks (DNNs) have started to find their role in the modern healthcare system. DNNs are being developed for diagnosis, prognosis, treatment planning, and outcome prediction for various diseases. With the increasing number of applications of DNNs in modern healthcare, their trustworthiness and reliability are becoming increasingly important. An essential aspect of trustworthiness is detecting the performance degradation and failure of deployed DNNs in medical settings. The softmax output values produced by DNNs are not a calibrated measure of model confidence: softmax probabilities are generally higher than the model's actual confidence, and the confidence-accuracy gap widens further for wrong predictions and noisy inputs. We employ recently proposed Bayesian deep neural networks (BDNNs) to learn uncertainty in the model parameters. These models simultaneously output the predictions and a measure of confidence in the predictions. By testing these models under various noisy conditions, we show that the (learned) predictive confidence is well calibrated. We use these reliable confidence values for monitoring performance degradation and failure detection in DNNs. We propose two different failure detection methods. In the first method, we define a fixed threshold value based on the behavior of the predictive confidence with changing signal-to-noise ratio (SNR) of the test dataset. The second method learns the threshold value with a neural network. The proposed failure detection mechanisms seamlessly abstain from making decisions when the confidence of the BDNN is below the defined threshold and hold the decision for manual review. As a result, the accuracy of the models improves on the unseen test samples. We tested our proposed approach on three medical imaging datasets: PathMNIST, DermaMNIST, and OrganAMNIST, under different levels and types of noise. An increase in the noise of the test images increases the number of abstained samples. BDNNs are inherently robust and show more than 10% accuracy improvement with the proposed failure detection methods. An increased number of abstained samples or an abrupt increase in the predictive variance indicates model performance degradation or possible failure. Our work has the potential to improve the trustworthiness of DNNs and enhance user confidence in the model predictions.
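
A minimal sketch of the fixed-threshold abstention mechanism, assuming Monte Carlo samples of softmax outputs (e.g., from MC dropout as a stand-in for a full BDNN); the function and threshold are illustrative, not the paper's implementation:

```python
import numpy as np

def predict_or_abstain(prob_samples, threshold=0.8):
    """prob_samples: (num_mc_samples, num_classes) softmax outputs for one
    input, drawn from a Bayesian/MC-dropout model.

    Returns (predicted_class, confidence), with predicted_class None when the
    model abstains and the case is held for manual review."""
    mean_probs = prob_samples.mean(axis=0)       # predictive mean
    pred = int(np.argmax(mean_probs))
    confidence = float(mean_probs[pred])
    if confidence < threshold:
        return None, confidence                  # abstain: manual review
    return pred, confidence
```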

8.
BioData Min; 12: 5, 2019.
Article in English | MEDLINE | ID: mdl-30774716

ABSTRACT

BACKGROUND: Most existing algorithms for modeling and analyzing molecular networks assume a static or time-invariant network topology. Such a view, however, does not capture the temporal evolution of the underlying biological process, as molecular networks are typically "re-wired" over time in response to cellular development and environmental changes. In our previous work, we formulated the inference of time-varying or dynamic networks as a tracking problem, where the target state is the ensemble of edges in the network. We used the Kalman filter to track the network topology over time. Unfortunately, the output of the Kalman filter does not reflect known properties of molecular networks, such as sparsity. RESULTS: To address the problem of inferring sparse time-varying networks from a set of under-sampled measurements, we propose the Approximate Kernel RecONstruction (AKRON) Kalman filter. AKRON supersedes the Lasso regularization by starting from the Lasso-Kalman inferred network and judiciously searching the space for a sparser solution. We derive theoretical bounds for the optimality of AKRON. We evaluate our approach against the Lasso-Kalman filter on synthetic data. The results show that not only does AKRON-Kalman provide better reconstruction errors, but it is also better at identifying whether edges exist within a network. Furthermore, we perform a real-world benchmark on the lifecycle (embryonic, larval, pupal, and adult stages) of Drosophila melanogaster. CONCLUSIONS: We show that the networks inferred by the AKRON-Kalman filter are sparse and can detect more known gene-to-gene interactions for Drosophila melanogaster than the Lasso-Kalman filter. Finally, all of the code reported in this contribution will be publicly available.

9.
Front Comput Neurosci; 13: 44, 2019.
Article in English | MEDLINE | ID: mdl-31354462

ABSTRACT

Magnetic resonance images of brain tumors are routinely used in neuro-oncology clinics for diagnosis, treatment planning, and post-treatment tumor surveillance. Currently, physicians spend considerable time manually delineating different structures of the brain. Spatial and structural variations, as well as intensity inhomogeneity across images, make the problem of computer-assisted segmentation very challenging. We propose a new image segmentation framework for tumor delineation that benefits from two state-of-the-art machine learning architectures in computer vision, i.e., Inception modules and the U-Net image segmentation architecture. Furthermore, our framework includes two learning regimes, i.e., learning to segment intra-tumoral structures (necrotic and non-enhancing tumor core, peritumoral edema, and enhancing tumor) or learning to segment glioma sub-regions (whole tumor, tumor core, and enhancing tumor). These learning regimes are incorporated into a newly proposed loss function which is based on the Dice similarity coefficient (DSC). In our experiments, we quantified the impact of introducing the Inception modules in the U-Net architecture, as well as changing the objective function for the learning algorithm from segmenting the intra-tumoral structures to glioma sub-regions. We found that incorporating Inception modules significantly improved the segmentation performance (p < 0.001) for all glioma sub-regions. Moreover, in architectures with Inception modules, the models trained with the learning objective of segmenting the intra-tumoral structures outperformed the models trained with the objective of segmenting the glioma sub-regions for the whole tumor (p < 0.001). The improved performance is linked to the multiscale features extracted by the newly introduced Inception modules and the modified loss function based on the DSC.
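
A minimal sketch of a DSC-based objective like the one the learning regimes build on: a standard soft-Dice loss in PyTorch. The paper's multi-region weighting across the two regimes may differ from this plain channel average.

```python
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - soft Dice, averaged over channels.

    logits, target: (N, C, H, W); target is a one-hot segmentation map."""
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3)                                  # sum over batch and space
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()
```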

10.
BMC Bioinformatics; 9 Suppl 9: S14, 2008 Aug 12.
Article in English | MEDLINE | ID: mdl-18793459

ABSTRACT

BACKGROUND: Over the past decade, many investigators have used sophisticated time series tools for the analysis of genomic sequences. Specifically, the correlation of the nucleotide chain has been studied by examining the properties of the power spectrum. The main limitation of the power spectrum is that it is restricted to stationary time series. However, it has been observed over the past decade that genomic sequences exhibit non-stationary statistical behavior. Standard statistical tests have been used to verify that the genomic sequences are indeed not stationary. More recent analysis of genomic data has relied on time-varying power spectral methods to capture the statistical characteristics of genomic sequences. Techniques such as the evolutionary spectrum and evolutionary periodogram have been successful in extracting the time-varying correlation structure. The main difficulty in using time-varying spectral methods is that they are extremely unstable: large deviations in the correlation structure result from very minor perturbations in the genomic data and experimental procedure. A fundamentally new approach is needed in order to provide a stable platform for the non-stationary statistical analysis of genomic sequences. RESULTS: In this paper, we propose to model non-stationary genomic sequences by a time-dependent autoregressive moving average (TD-ARMA) process. The model is based on a classical ARMA process whose coefficients are allowed to vary with time. A series expansion of the time-varying coefficients is used to form a generalized Yule-Walker-type system of equations. A recursive least-squares algorithm is subsequently used to estimate the time-dependent coefficients of the model. The estimated non-stationary parameters are used as a basis for statistical inference and biophysical interpretation of genomic data. In particular, we rely on the TD-ARMA model of genomic sequences to investigate the statistical properties of, and differentiate between, coding and non-coding regions in the nucleotide chain. Specifically, we define a quantitative measure of randomness to assess how far a process deviates from white noise. Our simulation results on various gene sequences show that both the coding and non-coding regions are non-random. However, coding sequences are "whiter" than non-coding sequences, as attested by a higher index of randomness. CONCLUSION: We demonstrate that the proposed TD-ARMA model can be used to provide a stable time series tool for the analysis of non-stationary genomic sequences. The estimated time-varying coefficients are used to define an index of randomness, in order to assess the statistical correlations in coding and non-coding DNA sequences. It turns out that the statistical differences between coding and non-coding sequences are more subtle than previously thought using stationary analysis tools: both coding and non-coding sequences exhibit statistical correlations, with the coding regions being "whiter" than the non-coding regions. These results corroborate the evolutionary periodogram analysis of genomic sequences and refute the conclusion, drawn from stationary analysis, that coding DNA behaves like a random sequence.
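
A simplified stand-in for the estimator described above: exponentially weighted recursive least squares tracking time-varying AR(p) coefficients of a numeric sequence. The paper's TD-ARMA model also includes a moving-average part and a basis expansion of the coefficients, both omitted here; parameter names are illustrative.

```python
import numpy as np

def rls_tvar(x, p=2, lam=0.98, delta=100.0):
    """Track time-varying AR(p) coefficients of x with exponentially
    weighted recursive least squares (forgetting factor lam)."""
    x = np.asarray(x, dtype=float)
    theta = np.zeros(p)                 # current coefficient estimate
    Pmat = delta * np.eye(p)            # inverse-correlation estimate
    coeffs = []
    for t in range(p, len(x)):
        phi = x[t - p:t][::-1]                          # past-sample regressor
        k = Pmat @ phi / (lam + phi @ Pmat @ phi)       # RLS gain
        err = x[t] - phi @ theta                        # one-step prediction error
        theta = theta + k * err
        Pmat = (Pmat - np.outer(k, phi @ Pmat)) / lam
        coeffs.append(theta.copy())
    return np.array(coeffs)             # coefficient trajectory over time
```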


Subjects
Algorithms; Chromosome Mapping/methods; Models, Genetic; Models, Statistical; Sequence Analysis, DNA/methods; Base Sequence; Computer Simulation; Molecular Sequence Data; Regression Analysis
11.
IEEE Trans Pattern Anal Mach Intell; 30(5): 837-50, 2008 May.
Article in English | MEDLINE | ID: mdl-18369253

ABSTRACT

In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator that is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
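
A minimal sketch of SV flat gray-level erosion, where the structuring element is a function of pixel position (the dual SV dilation replaces the minimum with a maximum over the reflected structuring element). The `se_at` example is illustrative, not the paper's construction.

```python
import numpy as np

def sv_erosion(img, se_at):
    """Spatially-variant flat erosion: se_at(i, j) returns the (di, dj)
    offsets of the structuring element at pixel (i, j)."""
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            vals = [img[i + di, j + dj] for di, dj in se_at(i, j)
                    if 0 <= i + di < H and 0 <= j + dj < W]
            out[i, j] = min(vals) if vals else img[i, j]
    return out

# Illustrative example for a 128x128 image: the structuring element grows
# with distance from the image center.
def se_at(i, j, H=128, W=128):
    r = 1 + int(2 * np.hypot(i - H / 2, j - W / 2) / max(H, W))
    return [(di, dj) for di in range(-r, r + 1) for dj in range(-r, r + 1)]
```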


Subjects
Algorithms; Artificial Intelligence; Colorimetry/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Color; Computer Simulation; Image Enhancement/methods; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity
12.
IEEE Trans Pattern Anal Mach Intell; 30(5): 823-36, 2008 May.
Article in English | MEDLINE | ID: mdl-18369252

ABSTRACT

We develop a general theory of spatially-variant (SV) mathematical morphology for binary images in the Euclidean space. The basic SV morphological operators (i.e., SV erosion, SV dilation, SV opening, and SV closing) are defined. We demonstrate the ubiquity of SV morphological operators by providing an SV kernel representation of increasing operators. The latter representation is a generalization of Matheron's representation theorem for increasing and translation-invariant operators. The SV kernel representation is redundant, in the sense that a smaller subset of the SV kernel is sufficient for the representation of increasing operators. We provide sufficient conditions for the existence of the minimal basis representation in terms of upper-semi-continuity in the hit-or-miss topology. The latter minimal basis representation is a generalization of Maragos' minimal basis representation for increasing and translation-invariant operators. Moreover, we investigate the upper-semi-continuity property of the basic SV morphological operators. Several examples are used to demonstrate that the theory of spatially-variant mathematical morphology provides a general framework for the unification of various morphological schemes based on spatially-variant geometrical structuring elements (e.g., circular, affine, and motion morphology). Simulation results illustrate the theory of the proposed spatially-variant morphological framework and show its potential power in various image processing applications.


Subjects
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Computer Simulation; Image Enhancement/methods; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity
14.
IEEE J Biomed Health Inform; 21(2): 573-581, 2017 Mar.
Article in English | MEDLINE | ID: mdl-26761909

ABSTRACT

We consider a high-dimension, low-sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined, as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is ill-posed because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of Drosophila melanogaster (the fruit fly).


Subjects
Computational Biology/methods; Gene Regulatory Networks/genetics; Animals; Drosophila melanogaster/genetics; Drosophila melanogaster/metabolism; Multivariate Analysis; Muscles/metabolism; Wings, Animal/metabolism
15.
IEEE Trans Image Process; 15(11): 3579-91, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17076415

ABSTRACT

The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.


Subjects
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Information Storage and Retrieval/methods
16.
IEEE Trans Neural Syst Rehabil Eng; 24(1): 98-108, 2016 Jan.
Article in English | MEDLINE | ID: mdl-25769166

ABSTRACT

We present a novel formulation that employs task-specific muscle synergies and a state-space representation of neural signals to tackle the challenging myoelectric control problem for lower-arm prostheses. The proposed framework incorporates information about muscle configurations, e.g., muscles acting synergistically or in agonist/antagonist pairs, using the hypothesis of muscle synergies. The synergy activation coefficients are modeled as the latent system state and are estimated using a constrained Kalman filter. These task-dependent synergy activation coefficients are estimated in real time from the electromyogram (EMG) data and are used to discriminate between various tasks. The task discrimination is helped by a post-processing algorithm that uses posterior probabilities. The proposed algorithm is robust as well as computationally efficient, yielding a decision with >90% discrimination accuracy in approximately 3 ms. The real-time performance and controllability of the algorithm were evaluated using the targeted achievement control (TAC) test. The proposed algorithm outperformed common machine learning algorithms for single- as well as multi-degree-of-freedom (DOF) tasks in both off-line discrimination accuracy and real-time controllability (p < 0.01).
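
A minimal sketch of the constrained-Kalman idea: a standard predict/update step followed by projection of the synergy activations onto the nonnegativity constraint (here, simple clipping). The paper's constraint set and EMG observation model are more detailed; all matrix names are the usual Kalman notation, assumed rather than taken from the paper.

```python
import numpy as np

def constrained_kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a linear Kalman filter; the updated state
    (synergy activations) is kept nonnegative by projection (clipping)."""
    # predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return np.maximum(x_new, 0.0), P_new     # nonnegativity projection
```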


Subjects
Algorithms; Electromyography/methods; Muscle Contraction/physiology; Muscle, Skeletal/physiology; Postural Balance/physiology; Task Performance and Analysis; Adult; Computer Simulation; Computer Systems; Discriminant Analysis; Female; Humans; Male; Models, Biological; Models, Statistical; Movement/physiology; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted; Wrist Joint/physiology
17.
IEEE J Biomed Health Inform; 20(3): 880-892, 2016 May.
Article in English | MEDLINE | ID: mdl-25794405

ABSTRACT

Electroencephalography (EEG)-based brain computer interface (BCI) is the most studied noninvasive interface for building a direct communication pathway between the brain and an external device. However, correlated noise in EEG measurements still constitutes a significant challenge. Alternatively, building BCIs on filtered brain-activity source signals, rather than on their surface projections obtained from the noisy EEG signals, is a promising and not well-explored direction. In this context, finding the locations and waveforms of inner brain sources represents a crucial task for advancing source-based noninvasive BCI technologies. In this paper, we propose a novel multicore beamformer particle filter (multicore BPF) to estimate the EEG brain source spatial locations and their corresponding waveforms. In contrast to conventional (single-core) beamforming spatial filters, the developed multicore BPF explicitly considers temporal correlation among the estimated brain sources by suppressing activation from regions with interfering coherent sources. The hybrid multicore BPF brings together the advantages of both deterministic and Bayesian inverse-problem algorithms in order to improve the estimation accuracy. It solves the brain activity localization problem without prior information about approximate areas of source locations. Moreover, the multicore BPF reduces the dimensionality of the problem by half compared with the PF solution, thus alleviating the curse of dimensionality. The results, based on simulated and real EEG data, show that the proposed framework correctly recovers the dominant sources of brain activity.


Subjects
Brain-Computer Interfaces; Electroencephalography/methods; Signal Processing, Computer-Assisted; Visual Cortex/physiology; Adult; Algorithms; Bayes Theorem; Evoked Potentials, Visual/physiology; Female; Humans; Male; Young Adult
18.
J Bioinform Comput Biol; 12(1): 1450001, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24467759

ABSTRACT

Non-negative matrix factorization (NMF) has proven to be a useful decomposition technique for multivariate data, where the non-negativity constraint is necessary to have a meaningful physical interpretation. NMF reduces the dimensionality of non-negative data by decomposing it into two smaller non-negative factors with physical interpretation for class discovery. The NMF algorithm, however, assumes a deterministic framework. In particular, the effect of data noise on the stability of the factorization and the convergence of the algorithm is unknown. Collected data, on the other hand, is stochastic in nature due to measurement noise and sometimes inherent variability in the physical process. This paper presents new theoretical and applied developments for the problem of non-negative matrix factorization. First, we generalize the deterministic NMF algorithm to include a general class of update rules that converges towards an optimal non-negative factorization. Second, we extend the NMF framework to the probabilistic case (PNMF). We show that the maximum a posteriori (MAP) estimate of the non-negative factors is the solution to a weighted regularized non-negative matrix factorization problem. We subsequently derive update rules that converge towards an optimal solution. Third, we apply the PNMF to cluster and classify DNA microarray data. The proposed PNMF is shown to outperform the deterministic NMF and the sparse NMF algorithms in clustering stability and classification accuracy.
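
For reference, a sketch of the deterministic multiplicative updates that the paper generalizes (Lee-Seung style, Frobenius loss), with optional L2 terms mimicking the weighted regularized (MAP) view of PNMF. The regularization weights and their placement are illustrative, not the paper's derived update rules.

```python
import numpy as np

def nmf_multiplicative(V, k, iters=200, lam_w=0.0, lam_h=0.0, eps=1e-9):
    """Multiplicative updates for V ~ W @ H under the Frobenius loss.

    lam_w / lam_h add L2 regularization to the denominators, a common
    regularized variant; set them to 0 for the classical updates."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam_h * H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + lam_w * W + eps)
    return W, H
```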


Subjects
Microarray Analysis/methods; Models, Statistical; Oligonucleotide Array Sequence Analysis/methods; Algorithms; Cerebellar Neoplasms/genetics; Cluster Analysis; Humans; Leukemia/genetics; Male; Medulloblastoma/genetics; Prostatic Neoplasms/genetics
19.
EURASIP J Bioinform Syst Biol; 2014(1): 3, 2014 Feb 12.
Article in English | MEDLINE | ID: mdl-24517200

ABSTRACT

It is widely accepted that cellular requirements and environmental conditions dictate the architecture of genetic regulatory networks. Nonetheless, the status quo in regulatory network modeling and analysis assumes an invariant network topology over time. In this paper, we refocus on a dynamic perspective of genetic networks, one that can uncover substantial topological changes in network structure during biological processes such as developmental growth. We propose a novel outlook on the inference of time-varying genetic networks, from a limited number of noisy observations, by formulating the network estimation as a target tracking problem. We overcome the limited number of observations (the small n, large p problem) by performing tracking in a compressed domain. Assuming linear dynamics, we derive the LASSO-Kalman smoother, which recursively computes the minimum mean-square sparse estimate of the network connectivity at each time point. The LASSO operator, motivated by the sparsity of genetic regulatory networks, allows simultaneous signal recovery and compression, thereby reducing the number of required observations. The smoothing improves the estimation by incorporating all observations. We track the time-varying networks during the life cycle of Drosophila melanogaster. The recovered networks show that few genes are permanent, whereas most are transient, acting only during specific developmental phases of the organism.
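
A minimal sketch of the sparsity mechanism: one Kalman filtering step on the vectorized network with identity (random-walk) dynamics, followed by soft-thresholding, the LASSO proximal operator. The smoother's backward pass and the compressed-domain tracking are omitted; scalar noise variances q, r are a simplifying assumption.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the L1 norm (the LASSO shrinkage step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lasso_kalman_step(a, P, y, C, q, r, tau):
    """One filtering step for the vectorized network a_t under a random-walk
    state model; y = C @ a + noise is the (under-sampled) measurement."""
    P_pred = P + q * np.eye(len(a))                     # predict (identity dynamics)
    S = C @ P_pred @ C.T + r * np.eye(C.shape[0])       # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)                 # Kalman gain
    a_new = a + K @ (y - C @ a)
    P_new = (np.eye(len(a)) - K @ C) @ P_pred
    return soft_threshold(a_new, tau), P_new            # sparsify the estimate
```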

20.
IEEE Trans Pattern Anal Mach Intell; 34(4): 805-13, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22184254

ABSTRACT

In this paper, we present a comprehensive analysis of self-dual and m-idempotent operators. We refer to an operator as m-idempotent if it converges after m iterations. We focus on an important special case of the general theory of lattice morphology: spatially variant morphology, which captures the geometrical interpretation of spatially variant structuring elements. We demonstrate that every increasing self-dual morphological operator can be viewed as a morphological center. Necessary and sufficient conditions for the idempotence of morphological operators are characterized in terms of their kernel representation. We further extend our results to the representation of the kernel of m-idempotent morphological operators. We then rely on the derived kernel-representation conditions to establish methods for the construction of m-idempotent and self-dual morphological operators. Finally, we illustrate the importance of the self-duality and m-idempotence properties through an application to speckle-noise removal in radar images.
