ABSTRACT
Output from imaging sensors based on CMOS and CCD devices is prone to noise due to inherent electronic fluctuations and low photon counts. The resulting noise in the acquired image can be effectively modelled as signal-dependent Poisson noise or as a mixture of Poisson and Gaussian noise. To that end, we propose a generalized framework based on detection theory and hypothesis testing, coupled with the variance-stabilizing transformation (VST), for Poisson or Poisson-Gaussian denoising. The VST transforms signal-dependent Poisson noise into signal-independent Gaussian noise with stable variance. Subsequently, multiscale transforms are applied to the noisy image to segregate signal and noise into separate coefficients. This facilitates local binary hypothesis testing at multiple scales using the empirical distribution function (EDF) to detect and remove noise. We demonstrate the effectiveness of the proposed framework with different multiscale transforms and on a wide variety of input datasets.
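The variance-stabilizing step described above can be sketched with the classical Anscombe transform, one common choice of VST for Poisson data; the inverse shown is the simple algebraic one (not the optimal unbiased inverse), and the Poisson mean of 20 is an illustrative assumption:

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: maps Poisson counts to data with approximately
    unit variance, independent of the underlying signal level."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (slightly biased for low counts)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20.0, size=100_000)   # signal-dependent noise
stabilized = anscombe(counts)
# For moderate means, the stabilized variance is close to 1.
print(f"stabilized variance: {stabilized.var():.3f}")
```

After denoising in the stabilized domain, the inverse transform maps the estimate back to the photon-count scale.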
ABSTRACT
We study a binary hypothesis testing problem in which a defender must decide whether a test sequence has been drawn from a given memoryless source P0, while an attacker strives to impede the correct detection. With respect to previous works, the adversarial setup addressed in this paper considers an attacker who is active under both hypotheses, namely, a fully active attacker, as opposed to a partially active attacker who is active under one hypothesis only. In the fully active setup, the attacker distorts sequences drawn both from P0 and from an alternative memoryless source P1, up to a certain distortion level, which is possibly different under the two hypotheses, to maximize the confusion in distinguishing between the two sources, i.e., to induce both false positive and false negative errors at the detector, also referred to as the defender. We model the defender-attacker interaction as a game and study two versions of this game, the Neyman-Pearson game and the Bayesian game. Our main result is the characterization of an attack strategy that is asymptotically both dominant (i.e., optimal no matter what the defender's strategy is) and universal, i.e., independent of P0 and P1. From the analysis of the equilibrium payoff, we also derive the best achievable performance of the defender, by relaxing the requirement on the exponential decay rate of the false positive error probability in the Neyman-Pearson setup and the tradeoff between the error exponents in the Bayesian setup. Such analysis permits characterizing the conditions for the distinguishability of the two sources given the distortion levels.
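As background for the Neyman-Pearson setup above, a classical universal defender strategy in the non-adversarial case is the Hoeffding test: accept H0 when the KL divergence between the empirical distribution of the test sequence and P0 falls below a threshold, which requires no knowledge of P1. The binary alphabet, source probabilities, and threshold below are illustrative assumptions, not the paper's adversarial strategies:

```python
import numpy as np
from collections import Counter

def kl(p, q):
    """KL divergence D(p||q) over a finite alphabet, in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hoeffding_test(seq, p0, alphabet, lam):
    """Accept H0 (seq drawn from p0) iff D(empirical || p0) <= lam.
    Universal: the decision rule never references the alternative P1."""
    counts = Counter(seq)
    emp = np.array([counts.get(a, 0) for a in alphabet], dtype=float)
    emp /= emp.sum()
    return kl(emp, p0) <= lam

alphabet = [0, 1]
p0 = np.array([0.7, 0.3])
rng = np.random.default_rng(0)
x0 = rng.choice(alphabet, size=2000, p=p0)            # drawn from P0
x1 = rng.choice(alphabet, size=2000, p=[0.4, 0.6])    # drawn from some P1
print(hoeffding_test(x0, p0, alphabet, lam=0.01),
      hoeffding_test(x1, p0, alphabet, lam=0.01))
```

The threshold lam controls the false-positive exponent; the fully active attacker studied in the paper would additionally distort both x0 and x1 before they reach this test.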
ABSTRACT
Objective. The diagnosis of attention deficit hyperactivity disorder (ADHD) subtypes is important for the refined treatment of children with ADHD. Although automated diagnosis methods based on machine learning use structural and functional magnetic resonance imaging (sMRI and fMRI) data with full observation of the brain, their accuracy for ADHD subtype diagnosis remains unsatisfactory, at less than 80%. Approach. To improve the accuracy and obtain biomarkers of ADHD subtypes, we propose a hierarchical binary hypothesis testing (H-BHT) framework that uses brain functional connectivity (FC) as the input bio-signal. The framework comprises a two-stage procedure with a decision-tree strategy, making it suitable for subtype classification. Typical FC is extracted in both stages of identifying ADHD subtypes, so the FC important for subtype recognition is identified. Main results. We apply the proposed H-BHT framework to resting-state fMRI datasets from the ADHD-200 consortium. The framework achieves an average accuracy of 97.1% and an average kappa score of 0.947. Discriminative FC between ADHD subtypes is found by comparing the P-values of the typical FC. Significance. The proposed framework is not only an effective structure for ADHD subtype classification but also provides a useful reference for multiclass classification of mental disease subtypes.
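The two-stage decision-tree strategy can be sketched as follows; the scalar scores, thresholds, and subtype names are hypothetical placeholders standing in for the hypothesis tests trained at each stage:

```python
# A minimal sketch of a hierarchical (two-stage) decision procedure:
# stage 1 separates controls from ADHD, stage 2 decides the subtype
# among the subjects that stage 1 passed on. Scores and thresholds
# are placeholders for trained stage classifiers.
def hierarchical_classify(stage1_score, stage2_score, t1=0.5, t2=0.5):
    if stage1_score < t1:
        return "control"           # stage 1: control vs ADHD
    if stage2_score < t2:
        return "ADHD-inattentive"  # stage 2: subtype decision
    return "ADHD-combined"

print(hierarchical_classify(0.8, 0.2))
```

Because each stage is a binary test, errors made at stage 1 propagate to stage 2, which is why both stages must extract discriminative FC.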
Subjects
Attention Deficit Disorder with Hyperactivity , Child , Humans , Attention Deficit Disorder with Hyperactivity/diagnosis , Brain/diagnostic imaging , Machine Learning , Recognition, Psychology , Research Design
ABSTRACT
Attention Deficit Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental disease of school-age children. Early diagnosis is crucial for ADHD treatment, wherein neurobiological diagnosis (or classification) is helpful and provides objective evidence to clinicians. Existing ADHD classification methods suffer from two problems: insufficient data and feature-noise disturbance from other associated disorders. As an attempt to overcome these difficulties, a novel deep-learning classification architecture based on a binary hypothesis testing framework and a modified auto-encoding (AE) network is proposed in this paper. The binary hypothesis testing framework is introduced to cope with the insufficient data of the ADHD database. Brain functional connectivities (FCs) of test data (without seeing their labels) are incorporated during feature selection along with those of training data and affect the sequential deep learning procedure under binary hypotheses. On the other hand, the modified AE network is developed to capture more effective features from training data, such that the difference of inter- and intra-class variability scores between the binary hypotheses can be enlarged, effectively alleviating the disturbance of feature noise. On the test of the ADHD-200 database, our method significantly outperforms the existing classification methods. The average accuracy reaches 99.6% with leave-one-out cross validation. Our method is also more robust and practically convenient for ADHD classification due to its uniform parameter setting across various datasets.
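The binary-hypothesis idea of comparing inter- and intra-class variability under each candidate label of a test sample can be illustrated with a simple Fisher-style score. The paper's actual pipeline operates on auto-encoded FC features; the raw features and the unweighted score below are simplified assumptions for illustration:

```python
import numpy as np

def fisher_ratio(features, labels):
    """Ratio of inter-class to intra-class scatter (unweighted,
    for simplicity); larger means better class separation."""
    classes = np.unique(labels)
    overall = features.mean(axis=0)
    inter = sum(np.sum((features[labels == c].mean(axis=0) - overall) ** 2)
                for c in classes)
    intra = sum(np.sum((features[labels == c]
                        - features[labels == c].mean(axis=0)) ** 2)
                for c in classes)
    return inter / intra

def binary_hypothesis_label(train_x, train_y, test_x):
    """Try both hypothesized labels for the unlabeled test sample and
    keep the one yielding the larger inter/intra variability score."""
    scores = []
    for h in (0, 1):
        x = np.vstack([train_x, test_x])
        y = np.append(train_y, h)
        scores.append(fisher_ratio(x, y))
    return int(np.argmax(scores))
```

Under the correct hypothesis the test sample tightens its own class cluster, so the score under that hypothesis dominates.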
Subjects
Attention Deficit Disorder with Hyperactivity , Attention Deficit Disorder with Hyperactivity/diagnosis , Brain , Child , Databases, Factual , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer
ABSTRACT
Infrared (IR) spectroscopic imaging instruments' performance can be characterized and optimized by an analysis of their limit of detection (LOD). Here we report a systematic analysis of the LOD for Fourier transform IR (FT-IR) and discrete frequency IR (DFIR) imaging spectrometers. In addition to traditional measurements of sample and blank data, we propose a decision theory perspective that poses the determination of the LOD as a binary classification problem under different assumptions of noise uniformity and correlation. We also examine three spectral analysis approaches, namely absorbance at a single frequency, average absorbance over selected frequencies, and total spectral distance, to suit instruments that acquire discrete or contiguous spectral bandwidths. The analysis is validated by refining the fabrication of a bovine serum albumin protein microarray to provide eight uniform spots from ~2.8 nL of solution for each concentration over a wide range (0.05-10 mg/mL). Using scanning parameters that are typical for each instrument, we estimate an LOD of 0.16 mg/mL and 0.12 mg/mL for widefield and line-scanning FT-IR imaging systems, respectively, using the spectral distance approach, and 0.22 mg/mL and 0.15 mg/mL using an optimal set of discrete frequencies. As expected, averaging and the use of post-processing techniques such as minimum noise fraction transformation result in LODs as low as ~0.075 mg/mL, corresponding to a spotted protein mass of ~112 fg/pixel. We emphasize that these measurements were conducted at typical imaging parameters for each instrument and can be improved using the usual trading rules of IR spectroscopy. This systematic analysis and methodology for determining the LOD allows for quantitative measures of confidence in imaging an analyte's concentration and provides a basis for further improving IR imaging technology.
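One common way to cast LOD determination as a binary decision between "blank" and "analyte present" is the IUPAC-style criterion, where the factor k of about 3.3 roughly balances the false-positive and false-negative probabilities near 5% each under Gaussian noise. The calibration line and blank replicates below are synthetic, not the paper's measurements:

```python
import numpy as np

def lod_from_calibration(conc, signal, blank_signal, k=3.3):
    """IUPAC-style limit of detection: k * std(blank) / slope.

    Viewing detection as a binary hypothesis test between blank and
    analyte, k ~ 3.3 keeps both error probabilities near 5% for
    Gaussian noise of equal variance under both hypotheses."""
    slope, _ = np.polyfit(conc, signal, 1)
    return k * np.std(blank_signal, ddof=1) / slope

# Synthetic calibration line and blank replicates (illustrative only).
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])             # mg/mL
signal = 0.05 * conc                                    # absorbance units
blank = np.array([0.001, -0.001, 0.002, -0.002, 0.0])   # blank replicates
print(f"LOD = {lod_from_calibration(conc, signal, blank):.3f} mg/mL")
```

Averaging spectra or pixels reduces the blank standard deviation and therefore lowers the estimated LOD, consistent with the trends reported above.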