ABSTRACT
This study examines the relationships among verbal ability (VA), executive function (EF), and theory of mind (ToM) in young Chinese children with cochlear implants (CCI). All participants completed a set of nine measures: one VA, one nonverbal ability, three EF, and four ToM measures. The cohort comprised 82 children aged 3.8 to 6.9 years, including 36 CCI and 46 children with normal hearing (CNH). CNH outperformed CCI on measures of VA, EF, and ToM. One of the EF tasks, inhibitory control, was significantly associated with ToM after controlling for VA. VA was the primary predictor of EF, while inhibitory control significantly predicted ToM. Our findings suggest that inhibitory control explains the association between EF and ToM, thereby supporting the hypothesis that EF may be a prerequisite for ToM.
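As a minimal illustration of the statistical control described above (not the authors' analysis code), the sketch below computes a partial correlation between inhibitory control and ToM after residualizing both on VA; all variable values are hypothetical.

```python
# Minimal sketch (not the authors' analysis code): partial correlation between
# inhibitory control and theory of mind, controlling for verbal ability.
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Correlate the parts of x and y not explained by the covariate."""
    # Residualize x and y on the covariate with simple linear regression.
    bx = np.polyfit(covariate, x, 1)
    by = np.polyfit(covariate, y, 1)
    rx = x - np.polyval(bx, covariate)
    ry = y - np.polyval(by, covariate)
    return stats.pearsonr(rx, ry)  # (r, p-value)

# Hypothetical scores for illustration only.
rng = np.random.default_rng(0)
va = rng.normal(size=82)                          # verbal ability
ic = 0.5 * va + rng.normal(size=82)               # inhibitory control
tom = 0.4 * ic + 0.3 * va + rng.normal(size=82)   # theory of mind
print(partial_corr(ic, tom, va))
```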
Subjects
Child Behavior; Cochlear Implantation/instrumentation; Cochlear Implants; Disabled Children/rehabilitation; Executive Function; Persons With Hearing Impairments/rehabilitation; Theory of Mind; Verbal Behavior; Age Factors; Auditory Perception; Case-Control Studies; Child; Child, Preschool; Disabled Children/psychology; Female; Humans; Inhibition, Psychological; Male; Neuropsychological Tests; Persons With Hearing Impairments/psychology
ABSTRACT
The male annihilation technique (MAT) plays a crucial role in pest management programs for the oriental fruit fly, Bactrocera dorsalis (Hendel) (Diptera: Tephritidae). However, a suitable method for real-time and accurate assessment of MAT's control efficiency has not been established. Laboratory investigations found that motile sperm could be observed clearly under the microscope when spermathecae dissected from mated females were torn open, whereas no sperm were found in the spermathecae of virgin females. Furthermore, it was confirmed that sperm can be preserved in the spermathecae for more than 50 days once females have mated. Laboratory results also indicated that the proportion of mated females decreased from 100% to 2% when the sex ratio (female:male) was increased from 1:1 to 100:1. Further observation revealed no significant differences in the superficial area of the ovary or spermatheca between mated and virgin females. Field investigations revealed that the proportion of mated females (PMF) could reach 81.2% in abandoned mango orchards, whereas the PMF was less than 36.4% in mango orchards where MAT was applied. This indicates that the PMF of a field population can be determined by examining the spermathecae for the presence of sperm. Therefore, we suggest that this method can be used to monitor control efficiency when MAT is applied in the field.
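A small sketch of how the monitoring quantity itself can be computed: the proportion of mated females among dissected females, with a simple normal-approximation confidence interval. The counts are illustrative only, chosen to echo the proportions reported above.

```python
# Illustrative sketch: estimate the proportion of mated females (PMF) from
# spermatheca dissections and attach a normal-approximation 95% CI.
import math

def pmf_estimate(n_mated, n_dissected, z=1.96):
    p = n_mated / n_dissected
    se = math.sqrt(p * (1 - p) / n_dissected)
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

# Hypothetical counts for an abandoned orchard vs. a MAT-treated orchard.
print(pmf_estimate(65, 80))   # about 0.81
print(pmf_estimate(29, 80))   # about 0.36
```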
ABSTRACT
Estimating diffusion tensors is an essential step in many applications, such as diffusion tensor image (DTI) registration, segmentation, and fiber tractography. Most methods proposed in the literature for this task are not simultaneously statistically robust and feature preserving. In this paper, we propose a novel and robust variational framework for simultaneous smoothing and estimation of diffusion tensors from diffusion MRI. Our variational principle makes use of the recently introduced total Kullback-Leibler (tKL) divergence for DTI regularization. tKL is a statistically robust dissimilarity measure for diffusion tensors, and regularizing with tKL automatically ensures that the tensors remain symmetric positive definite. Further, the regularization is weighted by a non-local factor adapted from conventional non-local means filters. Finally, for the data fidelity, we use the nonlinear least-squares term derived from the Stejskal-Tanner model. We present experimental results depicting the favorable performance of our method in comparison to competing methods on synthetic and real data.
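For illustration, the sketch below fits a single diffusion tensor to diffusion-weighted signals using only the Stejskal-Tanner nonlinear least-squares data-fidelity term; the tKL regularization and non-local weighting of the proposed method are not reproduced, and the acquisition scheme is hypothetical.

```python
# Sketch of the Stejskal-Tanner data-fidelity term only: fit one diffusion
# tensor D (symmetric 3x3) to diffusion-weighted signals by nonlinear least
# squares.  The tKL regularization and non-local weighting are not shown.
import numpy as np
from scipy.optimize import least_squares

def tensor_from_params(p):
    # Cholesky-style parameterization: D = L @ L.T stays positive semidefinite.
    L = np.array([[p[0], 0.0, 0.0],
                  [p[1], p[2], 0.0],
                  [p[3], p[4], p[5]]])
    return L @ L.T

def residuals(p, signals, s0, bvals, bvecs):
    D = tensor_from_params(p)
    pred = s0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D, bvecs))
    return pred - signals

# Hypothetical acquisition: 6 gradient directions, b = 1000 s/mm^2.
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(6, 1000.0)
true_D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])      # prolate tensor (mm^2/s)
s0 = 100.0
signals = s0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, true_D, bvecs))

fit = least_squares(residuals, x0=[0.03, 0, 0.03, 0, 0, 0.03],
                    args=(signals, s0, bvals, bvecs))
print(tensor_from_params(fit.x))
```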
Subjects
Algorithms; Brain/cytology; Diffusion Tensor Imaging/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Nerve Fibers, Myelinated/ultrastructure; Pattern Recognition, Automated/methods; Humans; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
Chimeric antigen receptor T-cell (CAR-T) therapy has demonstrated potent clinical efficacy in the treatment of hematopoietic malignancies. However, the application of CAR-T cells in solid tumors has been limited, due in part to the expression of inhibitory molecules in the tumor microenvironment, which leads to T-cell exhaustion. To overcome this limitation, we developed a synthetic T-cell receptor (TCR) that targets programmed death-ligand 1 (PD-L1), a molecule that is widely expressed in various solid tumors and plays a pivotal role in T-cell exhaustion. Our novel TCR platform is based on an antibody-derived binding domain, typically a single-chain variable fragment (scFv), fused to γδ TCRs (TCRγδ). We used a T-cell receptor alpha constant (TRAC) locus editing approach to express a cell-surface anti-PD-L1 scFv fused to the constant region of the TCRγ or TCRδ chain in activated T cells derived from peripheral blood mononuclear cells (PBMCs). Our results indicate that these reconfigured receptors, both γ-TCRγδ and δ-TCRγδ, can transduce signals, produce inflammatory cytokines, degranulate, and exert tumor-killing activity upon engagement with PD-L1 antigen in vitro. Additionally, we showed that γ-TCRγδ exhibited superior efficacy to δ-TCRγδ in an in vivo xenograft model.
ABSTRACT
BACKGROUND: In China, substance use disorders represent a significant burden on public health and the economy. However, as the range of drugs and drug markets expands and diversifies, the instruments available to evaluate users' dependence status across multiple dimensions have become insufficient. Accordingly, the present study presents the Chinese version of the Addiction Profile Index (API), explores its reliability and validity, and investigates measurement invariance between males and females with substance use disorders. METHODS: The API, a self-report questionnaire, was administered to 2252 people with substance use disorders who were undergoing treatment in compulsory detoxification institutions located in five provinces of China (943 females; mean age = 33.5 years, SD = 8.6). Additionally, to verify the collected data, the study's volunteers completed the Drug Use Disorders Identification Test (DUDIT), DUDIT-Extended (DUDIT-E), and the Health Scale for Drug Abusers (HSDA). RESULTS: The revised API, with its updated substance list, comprised 34 items. The new four-factor model, incorporating behavioral symptoms of dependence, impact on social life, cravings, and motivation for detoxification, explained 55.30% of the total variance, indicating a good fit. Moreover, Cronbach's α and mean inter-item correlation values indicated good internal consistency reliability. Regarding criterion validity, the revised factors were moderately to highly correlated with their corresponding subscales in the DUDIT, DUDIT-E, and HSDA. In addition, multigroup confirmatory factor analysis supported measurement invariance of the revised four-factor model across genders at several levels of invariance. The three factors of symptoms, social life, and motivation showed significant differences between male and female participants in t-tests (p < 0.01). CONCLUSIONS: The Chinese version of the API shows good psychometric properties in terms of reliability and validity and exhibits measurement equivalence across genders. Therefore, it can be used to comprehensively assess the severity of drug dependence in people with substance use disorders.
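As a small illustration of the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix; the data are simulated, not from the API sample.

```python
# Minimal sketch: Cronbach's alpha for a respondents-by-items score matrix,
# one ingredient of the internal-consistency analysis described above.
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) array of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + 0.8 * rng.normal(size=(200, 10))   # 10 correlated items
print(round(cronbach_alpha(items), 3))
```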
Subjects
Behavior, Addictive/psychology; Substance-Related Disorders; Adolescent; Adult; China; Female; Humans; Male; Middle Aged; Psychometrics; Self Report/standards; Young Adult
ABSTRACT
Carver and White developed the Behavioral Inhibition/Behavioral Activation Scales (the BIS/BAS Scales) based on the Reinforcement Sensitivity Theory proposed by Gray. Subsequent studies proposed that substance abuse is closely related to the Behavioral Inhibition System (BIS) and the Behavioral Activation System (BAS). However, research on the psychometric properties of the BIS/BAS scales in clinical samples is scarce. The present study analyzed the applicability of the BIS/BAS scales in a sample of people with a substance use disorder (SUD) undergoing treatment in compulsory detoxification institutions (n = 1117); 822 community residents were recruited for comparison. Confirmatory factor analysis was carried out to examine construct validity, and the results showed that the five-factor model was the best fit for the data from people with a substance use disorder. In addition, Cronbach's alpha coefficient for the total scale was 0.808, indicating satisfactory internal consistency reliability. Analysis of the correlations between the questionnaire and the corresponding personality traits showed that BAS was more strongly associated with the impulsive trait. Surprisingly, participants with a substance use disorder were less sensitive to reward than community residents, and the comparison between the two samples supported the joint subsystems hypothesis. Overall, the BIS/BAS scales showed good reliability and validity. These findings provide more direct evidence on the personality traits of people with a substance use disorder and can form the basis for further research.
ABSTRACT
Multivariate time series (MTS) datasets exist broadly in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic, since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category, and it is used to calculate the local distance between vectors in MTS. We then use DTW to align MTS that are out of synchronization or of different lengths. How to learn an accurate Mahalanobis distance function then becomes the key problem, and this paper establishes a LogDet divergence-based metric learning model with triplet constraints that can learn the Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied to nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.
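A minimal sketch of the core measure: DTW over two multivariate time series whose local distance between frame vectors is a Mahalanobis distance. Here the matrix M is fixed to the identity for illustration, whereas in the proposed method it is learned with LogDet divergence-based metric learning under triplet constraints.

```python
# Sketch of DTW over multivariate time series with a Mahalanobis local
# distance.  M is fixed here; in the paper it is learned from the data.
import numpy as np

def mahalanobis_dtw(X, Y, M):
    """X: (n, d), Y: (m, d) multivariate time series; M: (d, d) SPD matrix."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = X[i - 1] - Y[j - 1]
            cost = np.sqrt(diff @ M @ diff)          # local Mahalanobis distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(2)
a, b = rng.normal(size=(30, 4)), rng.normal(size=(25, 4))
print(mahalanobis_dtw(a, b, np.eye(4)))   # reduces to Euclidean DTW when M = I
```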
ABSTRACT
How to select and weight features has always been a difficult problem in many image processing and pattern recognition applications. A data-dependent distance measure can address this problem to a certain extent, and an accurate and efficient metric learning method therefore becomes necessary. In this paper, we propose a LogDet divergence-based metric learning with triplet constraints (LDMLT) approach, which can learn a Mahalanobis distance metric accurately and efficiently. First, we demonstrate the good properties of triplet constraints and apply them in a LogDet divergence-based metric learning model. Then, to deal with high-dimensional data, we apply a compressed representation method to learn, store, and evaluate the Mahalanobis matrix efficiently. In addition, a dynamic triplet-building strategy is proposed that feeds the obtained Mahalanobis matrix back into the triplet constraints, which further improves the LDMLT algorithm. Finally, the proposed method is applied to various applications, including pattern recognition, facial expression recognition, and image retrieval, and the results demonstrate the improved performance of the proposed approach.
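The sketch below illustrates two ingredients suggested by the description above: the LogDet (Burg) divergence between two SPD matrices, which serves as the regularizer, and a triplet-constraint violation check. The full LDMLT solver (Bregman projections, compressed representation, dynamic triplet building) is not reproduced, and the margin value is an assumption.

```python
# Sketch: the LogDet (Burg) matrix divergence between two SPD matrices and a
# triplet-constraint check; not the full LDMLT optimization.
import numpy as np

def logdet_div(M, M0):
    """D_ld(M, M0) = tr(M M0^{-1}) - log det(M M0^{-1}) - d."""
    d = M.shape[0]
    P = M @ np.linalg.inv(M0)
    _, logdet = np.linalg.slogdet(P)
    return np.trace(P) - logdet - d

def triplet_violated(M, anchor, positive, negative, margin=1.0):
    """True if d_M(anchor, positive) + margin > d_M(anchor, negative)."""
    dp = (anchor - positive) @ M @ (anchor - positive)
    dn = (anchor - negative) @ M @ (anchor - negative)
    return dp + margin > dn

M0 = np.eye(3)
M = np.diag([2.0, 1.0, 0.5])
print(logdet_div(M, M0))   # 0 when M == M0, positive otherwise
```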
Subjects
Algorithms; Face/anatomy & histology; Facial Expression; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Photography/methods; Artificial Intelligence; Biometry/methods; Data Interpretation, Statistical; Humans; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
ABSTRACT
A proper distance metric is fundamental in many computer vision and pattern recognition applications, such as classification, image retrieval, and face recognition. However, it is usually not clear which metric is appropriate for a specific application, so it is more reliable to learn a task-oriented metric. Over the years, many metric learning approaches have been reported in the literature. A typical one is to learn a Mahalanobis distance parameterized by a positive semidefinite (PSD) matrix M. An efficient way to estimate M is to treat it as a linear combination of rank-one matrices that can be learned using a boosting-type approach. However, such approaches have two main drawbacks. First, the weight change across the training samples may be non-smooth. Second, the learned rank-one matrices might be redundant. In this paper, we propose a doubly regularized metric learning algorithm, termed DRMetric, which imposes two regularizations on the conventional metric learning method. First, a regularization is applied to the weights of the training examples, which prevents unstable weight changes and also prevents outlier examples from being weighted too heavily. Second, a regularization is applied to the rank-one matrices to make them independent, which greatly reduces their redundancy. We present experiments depicting the performance of the proposed method on a variety of datasets for various applications.
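The sketch below shows only the parameterization the boosting-style learner operates on: M assembled as a nonnegative combination of rank-one matrices u uᵀ, which keeps M positive semidefinite by construction. The two DRMetric regularizers are not reproduced, and the weights and directions are hypothetical.

```python
# Sketch of the parameterization used by boosting-style metric learners:
# M = sum_t w_t * u_t u_t^T with w_t >= 0, so M is PSD by construction.
# The regularizers on example weights and rank-one redundancy are not shown.
import numpy as np

def assemble_metric(weights, directions):
    """weights: (T,) nonnegative; directions: (T, d) unit vectors."""
    d = directions.shape[1]
    M = np.zeros((d, d))
    for w, u in zip(weights, directions):
        M += w * np.outer(u, u)
    return M

rng = np.random.default_rng(3)
U = rng.normal(size=(5, 4))
U /= np.linalg.norm(U, axis=1, keepdims=True)       # unit-norm directions
w = np.abs(rng.normal(size=5))                      # nonnegative weights
M = assemble_metric(w, U)
print(np.all(np.linalg.eigvalsh(M) >= -1e-12))      # PSD check
```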
ABSTRACT
Fiber tracking from diffusion tensor images is an essential step in numerous clinical applications, and there is a growing demand for an accurate and efficient framework for quantitative analysis of white matter fiber bundles. In this paper, we propose a robust framework for fiber clustering. The framework is composed of two parts: an accessible fiber representation and a statistically robust divergence measure for comparing fibers. Each fiber is represented by a Gaussian mixture model (GMM), a linear combination of Gaussian distributions. The dissimilarity between two fibers is measured using the statistically robust total square loss between their corresponding GMMs. Finally, we apply the hierarchical total Bregman soft clustering algorithm to the GMMs, yielding clustered fiber bundles; our method is also able to determine the number of clusters automatically. We present experimental results depicting the favorable performance of our method on both synthetic and real data.
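For illustration, the sketch below represents each fiber (a sequence of 3D points) by a GMM and compares two fibers through their densities. As an assumption for brevity, the closed-form total square loss of the paper is replaced by a crude Monte Carlo stand-in dissimilarity, and the fibers are synthetic.

```python
# Sketch: represent each fiber (a sequence of 3D points) by a Gaussian mixture
# and compare two fibers through their GMM densities.  The closed-form total
# square loss is replaced here by a crude Monte Carlo stand-in.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_fiber_gmm(points, n_components=3):
    return GaussianMixture(n_components=n_components, random_state=0).fit(points)

def density_dissimilarity(gmm_a, gmm_b, n_samples=5000):
    x, _ = gmm_a.sample(n_samples)
    y, _ = gmm_b.sample(n_samples)
    support = np.vstack([x, y])
    pa = np.exp(gmm_a.score_samples(support))
    pb = np.exp(gmm_b.score_samples(support))
    return np.mean((pa - pb) ** 2)                 # crude stand-in dissimilarity

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)[:, None]
fiber1 = np.hstack([t, np.sin(3 * t), np.zeros_like(t)]) + 0.01 * rng.normal(size=(100, 3))
fiber2 = fiber1 + np.array([0.0, 0.2, 0.0])       # a nearby, shifted fiber
print(density_dissimilarity(fit_fiber_gmm(fiber1), fit_fiber_gmm(fiber2)))
```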
ABSTRACT
In this paper, we consider the family of total Bregman divergences (tBDs) as an efficient and robust "distance" measure for quantifying the dissimilarity between shapes. We use the tBD-based l1-norm center as the representative of a set of shapes and call it the t-center. First, we briefly present and analyze the properties of tBDs and t-centers, following our previous work. Then, we prove that for any tBD there exists a distribution belonging to the lifted exponential family (lEF) of statistical distributions. Further, we show that finding the maximum a posteriori (MAP) estimate of the parameters of the lifted exponential family distribution is equivalent to minimizing the tBD to find the t-centers. This leads to a new clustering technique, namely the total Bregman soft clustering algorithm. We evaluate the tBD, the t-center, and the soft clustering algorithm on shape retrieval applications. Our shape retrieval framework is composed of three steps: 1) extraction of the shape boundary points, 2) affine alignment of the shapes and use of a Gaussian mixture model (GMM) to represent the aligned boundaries, and 3) comparison of the GMMs using tBD to find the best matches for a given query shape. To further speed up shape retrieval, we perform hierarchical clustering of the shapes using our total Bregman soft clustering algorithm, which lets us compare the query with only a small subset of shapes, chosen to be the cluster t-centers. We evaluate our method on various public domain 2D and 3D databases and demonstrate comparable or better results than state-of-the-art retrieval techniques.
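A numeric sketch of the two central objects, under the assumption that the convex generator is f(x) = ||x||²: the total squared loss between two vectors and the t-center of a point set, a weighted mean whose weights shrink for large-norm points, which is what makes it robust to outliers.

```python
# Numeric sketch (assuming f(x) = ||x||^2): the total squared loss between two
# vectors and the corresponding t-center, a weighted mean whose weights shrink
# for large-norm (outlier-like) points.
import numpy as np

def total_square_loss(x, y):
    return np.sum((x - y) ** 2) / np.sqrt(1.0 + 4.0 * np.sum(y ** 2))

def t_center(points):
    w = 1.0 / np.sqrt(1.0 + 4.0 * np.sum(points ** 2, axis=1))
    return (w[:, None] * points).sum(axis=0) / w.sum()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [50.0, 50.0]])  # last point is an outlier
print(t_center(pts))        # pulled far less toward the outlier ...
print(pts.mean(axis=0))     # ... than the plain mean
```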
ABSTRACT
Boosting is a well-known machine learning technique used to improve the performance of weak learners and has been successfully applied to computer vision, medical image analysis, computational biology, and other fields. A critical step in boosting algorithms is the update of the data sample distribution; however, most existing boosting algorithms use updating mechanisms that lead to overfitting and to instabilities during the evolution of the distribution, which in turn result in classification inaccuracies. Regularized boosting has been proposed in the literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a recently proposed, statistically robust divergence, and we prove that tBRLPBoost requires only a constant number of iterations to learn a strong classifier, making it computationally more efficient than other regularized boosting algorithms. Also, unlike boosting methods that are effective only on a handful of datasets, tBRLPBoost works well on a wide variety of datasets. We present results of testing our algorithm on many public domain databases, along with comparisons to several other state-of-the-art methods. Numerical results show that the proposed algorithm offers much improved efficiency and accuracy over competing methods.
ABSTRACT
Boosting is a versatile machine learning technique with numerous applications, including image processing, computer vision, and data mining. It is based on the premise that the classification performance of a set of weak learners can be boosted by a weighted combination of them. A number of boosting methods have been proposed in the literature, such as AdaBoost, LPBoost, SoftBoost, and their variants. However, the learning update strategies used in these methods often lead to overfitting and to instabilities in classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses the Riemannian distance between two square-root densities (available in closed form) - representing the distribution over the training data and the classification error, respectively - to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computation than other regularized boosting algorithms. We present several experimental results comparing our algorithm to recently published methods, LPBoost and CAVIAR, on a variety of datasets, including the publicly available OASIS database, a home-grown epilepsy database, and the well-known UCI repository. The results show that RBoost outperforms the competing methods in terms of accuracy and efficiency.
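The closed-form regularizer can be sketched directly: square-root densities are unit vectors, so the Riemannian (geodesic) distance between two discrete distributions is the arc length arccos⟨√p, √q⟩. The snippet below is illustrative only and uses toy distributions.

```python
# Sketch of the closed-form regularizer: square-root densities lie on the unit
# sphere, so the geodesic distance between discrete distributions p and q is
# arccos(<sqrt(p), sqrt(q)>).
import numpy as np

def sqrt_density_distance(p, q):
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()                                  # normalize to distributions
    cos_angle = np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0)
    return np.arccos(cos_angle)

print(sqrt_density_distance([0.25, 0.25, 0.5], [0.25, 0.25, 0.5]))  # 0.0
print(sqrt_density_distance([0.9, 0.05, 0.05], [0.05, 0.05, 0.9]))  # larger
```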
ABSTRACT
Divergence measures provide a means to quantify the pairwise dissimilarity between "objects," e.g., vectors and probability density functions (pdfs). The Kullback-Leibler (KL) divergence and the square loss (SL) function are two commonly used dissimilarity measures which, along with others, belong to the family of Bregman divergences (BD). In this paper, we present a novel divergence dubbed the total Bregman divergence (TBD), which is intrinsically robust to outliers, a very desirable property in many applications. Further, we derive the TBD center, called the t-center (using the l1-norm), for a population of positive definite matrices in closed form and show that it is invariant to transformations from the special linear group. This t-center, which is also robust to outliers, is then used in tensor interpolation as well as in an active contour based piecewise-constant segmentation of a diffusion tensor magnetic resonance image (DT-MRI). Additionally, we derive the piecewise-smooth active contour model for segmentation of DT-MRI using the TBD and present several comparative results on real data.
Subjects
Algorithms; Diffusion Tensor Imaging/methods; Image Processing, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Animals; Brain/anatomy & histology; Least-Squares Analysis; Rats; Spinal Cord/anatomy & histology
ABSTRACT
Classification is one of the core problems in computer-aided cancer diagnosis (CAD) via medical image interpretation. High detection sensitivity with a reasonably low false positive (FP) rate is essential for any CAD system to be accepted as a valuable, or even indispensable, tool in radiologists' workflow. In this paper, we propose a novel classification framework based on sparse representation. It first builds an overcomplete dictionary of atoms for each class via K-SVD learning; classification is then formulated as sparse coding, which can be solved efficiently. This representation naturally generalizes to both binary and multiclass classification problems and can be used as a standalone classifier or integrated with an existing decision system. Our method is extensively validated in CAD systems for both colorectal polyp and lung nodule detection, using hospital-scale, multi-site clinical datasets. The results show that we achieve classification performance superior to existing state-of-the-art approaches based on support vector machines (SVM) and their variants, boosting, logistic regression, relevance vector machines (RVM), and k-nearest neighbors (KNN).
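A sketch of the classification idea, with scikit-learn's MiniBatchDictionaryLearning standing in for K-SVD (an assumption for brevity): learn one dictionary per class, sparse-code a test sample against each, and assign the class whose dictionary reconstructs the sample with the smallest error. The features and data here are synthetic, not CAD features.

```python
# Sketch of classify-by-reconstruction with per-class dictionaries;
# MiniBatchDictionaryLearning stands in for K-SVD.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def learn_class_dictionaries(X, y, n_atoms=32):
    dicts = {}
    for c in np.unique(y):
        model = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
        dicts[c] = model.fit(X[y == c]).components_
    return dicts

def classify(sample, dicts, n_nonzero=5):
    errors = {}
    for c, D in dicts.items():
        code = sparse_encode(sample[None, :], D, algorithm='omp',
                             n_nonzero_coefs=n_nonzero)
        errors[c] = np.linalg.norm(sample - code @ D)   # reconstruction error
    return min(errors, key=errors.get)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (200, 20)), rng.normal(3, 1, (200, 20))])
y = np.array([0] * 200 + [1] * 200)
dicts = learn_class_dictionaries(X, y)
print(classify(rng.normal(3, 1, 20), dicts))   # expected to pick class 1
```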
Subjects
Colonic Polyps/diagnosis; Diagnosis, Computer-Assisted/instrumentation; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Radiology/education; Radiology/methods; Solitary Pulmonary Nodule/diagnosis; Algorithms; Artificial Intelligence; Cluster Analysis; Colonic Polyps/diagnostic imaging; Humans; Learning; Models, Statistical; Radiographic Image Interpretation, Computer-Assisted/methods; Regression Analysis; Sensitivity and Specificity; Solitary Pulmonary Nodule/diagnostic imaging
ABSTRACT
Computer-aided detection (CAD) systems using 3D CT colonography (CTC) have emerged as noninvasive and effective tools for early detection of colonic polyps. In this paper, we propose a robust and automatic polyp prone-supine view matching method to facilitate the regular CTC workflow, in which radiologists need to manually match CAD findings in prone and supine CT scans for validation. In contrast to previous colon registration approaches based on global geometric information, this paper presents a feature selection and metric distance learning approach that builds a pairwise matching function (in which true pairs of polyp detections have smaller distances than false pairs), learned from local polyp classification features. Because only local features are used, our process can seamlessly handle collapsed colon segments and other severe structural artifacts that often occur in CTC, whereas global-geometry-dependent methods may become invalid for collapsed segmentation cases. Our automatic approach is extensively evaluated on a large multi-site dataset of 195 patient cases for training and 223 cases for testing, with no external examination of the correctness of colon segmentation topology needed. The results show that we achieve significantly better matching accuracy than previous methods, on CTC datasets at least an order of magnitude larger.
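The matching step can be sketched as follows: given feature vectors for polyp candidates in the prone and supine scans and a pairwise distance (the learned metric in the paper; plain Euclidean here as a stand-in), candidates are paired by minimizing the total distance with the Hungarian algorithm. The feature vectors below are hypothetical.

```python
# Sketch of the matching step only: pair prone and supine polyp candidates by
# minimizing total pairwise distance (Euclidean stand-in for the learned metric).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
prone = rng.normal(size=(4, 8))                   # 4 candidates x 8 local features
supine = prone[[2, 0, 3, 1]] + 0.05 * rng.normal(size=(4, 8))  # shuffled + noise

cost = cdist(prone, supine)                       # stand-in for the learned metric
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))                      # each prone index paired with its supine counterpart
```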
Subjects
Colonic Polyps/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Algorithms; Colonic Polyps/diagnosis; Databases, Factual; Humans; Models, Statistical; Pattern Recognition, Automated/methods; Prone Position; Supine Position; Tomography, X-Ray Computed/methods
ABSTRACT
Shape database search is ubiquitous in the world of biometric systems, CAD systems, and related domains. Shape data in these domains is experiencing explosive growth, and a variety of tasks require searching whole shape databases to retrieve the best matches with accuracy and efficiency. In this paper, we present a novel divergence measure between any two given points in [Formula: see text] or between two distribution functions. This divergence measures the orthogonal distance between one of its input arguments and the tangent to the convex function (used in the definition of the divergence) at the other argument, in contrast to the ordinate distance used in the usual definition of the Bregman class of divergences [4]. We use this orthogonal distance to redefine the Bregman class of divergences and develop a new theory for estimating the center of a set of vectors as well as of probability distribution functions. The new class of divergences is dubbed the total Bregman divergence (TBD). We present the l1-norm based TBD center, dubbed the t-center, which is then used as the cluster center of a class of shapes. The t-center is a weighted mean whose weights are small for noise and outliers. We present a shape retrieval scheme using TBD and the t-center to represent the classes of shapes from the MPEG-7 database and compare the results with other state-of-the-art methods in the literature.
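A worked restatement of the construction (our reading of the definitions above, not a reproduction of the paper's derivation): the conventional Bregman divergence measures the ordinate distance to the tangent, the total Bregman divergence rescales it by the tangent's slope so that it measures the orthogonal distance, and the l1-norm t-center then follows as a weighted mean.

```latex
% Ordinate (conventional Bregman) vs. orthogonal (total Bregman) distance for a
% convex generator f, and the resulting t-center (our reading of the text above).
\begin{align}
  d_f(x, y)      &= f(x) - f(y) - \langle x - y,\ \nabla f(y) \rangle, \\
  \delta_f(x, y) &= \frac{d_f(x, y)}{\sqrt{1 + \lVert \nabla f(y) \rVert^2}}, \\
  \bar{x} &= \arg\min_{x} \sum_i \delta_f(x, x_i)
     \;\;\Rightarrow\;\;
     \nabla f(\bar{x}) = \frac{\sum_i w_i\, \nabla f(x_i)}{\sum_i w_i},
     \qquad w_i = \frac{1}{\sqrt{1 + \lVert \nabla f(x_i) \rVert^2}}.
\end{align}
```

For the squared-norm generator f(x) = ||x||², the last relation reduces to the weighted mean of the points themselves, with weights 1/sqrt(1 + 4||x_i||²), which is why noisy or outlying shapes contribute little to the t-center.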