ABSTRACT
Recent advances in spatial transcriptomics (ST) enable measurement of the transcriptome within intact biological tissues while preserving spatial information, offering biologists unprecedented opportunities to comprehensively understand the tissue microenvironment, in which spatial domains are the basic units of tissue. Although great efforts have been devoted to this issue, existing methods still have shortcomings, such as ignoring the local information and relations of spatial domains, so alternatives are required to address these problems. Here, we present ST-SCSR, a novel algorithm for spatial domain identification in spatial transcriptomics data with Structure Correlation and Self-Representation, which integrates local information, global information, and the similarity of spatial domains. Specifically, ST-SCSR utilizes matrix tri-factorization to simultaneously decompose the expression profiles and the spatial network of spots, where expressional and spatial features of spots are fused via a shared factor matrix that is interpreted as the similarity of spatial domains. Furthermore, ST-SCSR learns an affinity graph of spots by manipulating expressional and spatial features, where local-preservation and sparsity constraints are employed, thereby enhancing the quality of the graph. The experimental results demonstrate that ST-SCSR not only outperforms state-of-the-art algorithms in terms of accuracy but also identifies many potentially interesting patterns.
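As a point of reference for the decomposition step described above, the following is a minimal sketch of non-negative matrix tri-factorization with standard multiplicative updates. The variable names and the single-matrix setting are illustrative assumptions; ST-SCSR additionally factorizes the spatial network through the shared factor, which is not reproduced here.

```python
import numpy as np

def tri_factorize(X, k, n_iter=200, eps=1e-9, seed=0):
    """Illustrative non-negative tri-factorization X ~ F @ S @ G.T.

    X : (spots x genes) non-negative expression matrix (hypothetical input).
    k : assumed number of spatial domains.
    Returns F (spot factors), S (shared connection matrix), G (gene factors).
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.random((n, k)) + eps
    S = rng.random((k, k)) + eps
    G = rng.random((m, k)) + eps
    for _ in range(n_iter):
        # standard multiplicative updates that keep all factors non-negative
        F *= (X @ G @ S.T) / (F @ (S @ G.T @ G @ S.T) + eps)
        G *= (X.T @ F @ S) / (G @ (S.T @ F.T @ F @ S) + eps)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ (G.T @ G) + eps)
    return F, S, G
```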
Subjects
Algorithms; Gene Expression Profiling; Transcriptome; Gene Expression Profiling/methods; Computational Biology/methods; Humans
ABSTRACT
The ubiquitous dropout problem in single-cell RNA sequencing technology introduces a large amount of noise into gene expression profiles. For this reason, we propose an evolutionary sparse imputation algorithm for single-cell transcriptomes (scESI), which constructs a sparse representation model based on gene regulation relationships between cells. To solve this model, we design an optimization framework based on a non-dominated sorting genetic algorithm. This framework takes into account the topological relationships between cells and the variety of gene expression to iteratively search for the global optimal solution, thereby learning the Pareto-optimal cell-cell affinity matrix. Finally, we use the learned sparse relationship model between cells to improve data quality and reduce data noise. On simulated datasets, scESI performed significantly better than benchmark methods across various metrics. By applying scESI to real scRNA-seq datasets, we found that scESI can not only further classify cell types and successfully separate cells in visualization but also improve performance in reconstructing differentiation trajectories and identifying differentially expressed genes. In addition, scESI successfully recovered the expression trends of marker genes in stem cell differentiation and can discover new cell types and putative pathways regulating biological processes.
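To illustrate the sparse cell-cell self-representation idea behind this imputation, here is a toy sketch in which a plain Lasso stands in for the evolutionary (non-dominated sorting) search used by scESI; the parameter values and the zero-filling rule are assumptions, not the published method.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_self_representation_impute(X, alpha=0.1):
    """Toy sparse self-representation imputation for a cells x genes matrix X.

    Each cell is regressed on all other cells with an L1 penalty to obtain a
    sparse affinity row W[i, :]; zero entries (candidate dropouts) are then
    filled from the weighted reconstruction W @ X.
    """
    n_cells = X.shape[0]
    W = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        others = np.delete(np.arange(n_cells), i)
        lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
        lasso.fit(X[others].T, X[i])            # genes play the role of samples
        W[i, others] = lasso.coef_
    X_hat = W @ X                               # reconstruction from similar cells
    X_imputed = np.where(X == 0, X_hat, X)      # only replace dropout (zero) entries
    return X_imputed, W
```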
Subjects
Single-Cell Analysis; Transcriptome; Cluster Analysis; Gene Expression Profiling; Sequence Analysis, RNA/methods; Single-Cell Analysis/methods; Exome Sequencing
ABSTRACT
Vibration monitoring is one of the most effective approaches for bearing fault diagnosis. Within this category of techniques, sparsity constraint-based regularization has received considerable attention for its capability to accurately extract repetitive transients from noisy vibration signals. The optimal solution of a sparse regularization problem is determined by the regularization term and the data-fitting term in the cost function according to their weights, so a tradeoff between sparsity and data fidelity inevitably has to be made, which prevents conventional regularization methods from maintaining strong sparsity-promoting capability and high fitting accuracy at the same time. To address this limitation, a stepwise sparse regularization (SSR) method with an adaptive sparse dictionary is proposed. In this method, bearing fault diagnosis is modeled as a multi-parameter optimization problem whose variables include the time indexes of the sparse dictionary and the sparse coefficients. First, sparsity-enhanced optimization is conducted by amplifying the regularization parameter, making the time indexes and the number of atoms adaptively converge to the moments when impulses occur and to the number of impulses, respectively. Then, fidelity-enhanced optimization is carried out by removing the regularization term, thereby obtaining high-precision reconstruction amplitudes. Simulations and experiments verify that the reconstruction accuracy of the SSR method outperforms that of other sparse regularization methods under most noise conditions, and thus the proposed method can provide more accurate results for bearing fault diagnosis.
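The two-stage idea (sparsity-enhanced optimization followed by fidelity-enhanced refitting) can be sketched generically as below. This is a plain ISTA-plus-least-squares stand-in under a fixed dictionary D, not the published SSR solver with its adaptive impulse dictionary.

```python
import numpy as np

def two_stage_sparse_recovery(y, D, lam=0.5, n_iter=300, tol=1e-6):
    """Generic two-stage sparse recovery over a dictionary D (columns = atoms).

    Stage 1 (sparsity-enhanced): ISTA with a strong L1 weight locates the few
    active atoms, i.e. the candidate impulse instants.
    Stage 2 (fidelity-enhanced): the penalty is dropped and the amplitudes on
    the detected support are re-estimated by unpenalized least squares.
    """
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data term
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    support = np.flatnonzero(np.abs(x) > tol)
    x_refit = np.zeros_like(x)
    if support.size:
        x_refit[support] = np.linalg.lstsq(D[:, support], y, rcond=None)[0]
    return x_refit, support
```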
ABSTRACT
Single-cell RNA-sequencing (scRNA-seq) profiles the transcriptome at the level of individual cells, which sheds light on the heterogeneity and dynamics of cell populations. Advances in biotechnologies make it possible to generate scRNA-seq profiles for large-scale collections of cells, requiring effective and efficient clustering algorithms to identify cell types and informative genes. Although great efforts have been devoted to clustering of scRNA-seq data, the accuracy, scalability and interpretability of available algorithms are not desirable. In this study, we address these problems by developing a joint learning algorithm [a.k.a. joint sparse representation and clustering (jSRC)], where dimension reduction (DR) and clustering are integrated. Specifically, DR is employed for scalability and joint learning improves accuracy. To increase the interpretability of patterns, we assume that cells within the same type have similar expression patterns, where sparse representation is imposed on the features. We transform clustering of scRNA-seq data into an optimization problem and then derive the update rules to optimize the objective of jSRC. Fifteen scRNA-seq datasets from various tissues and organisms are adopted to validate the performance of jSRC, where the number of single cells varies from 49 to 110,824. The experimental results demonstrate that jSRC significantly outperforms 12 state-of-the-art methods in terms of various measurements (20.29% improvement on average) with less running time. Furthermore, jSRC is efficient and robust across scRNA-seq datasets from different tissues. Finally, jSRC also accurately identifies dynamic cell types associated with the progression of COVID-19. The proposed model and methods provide an effective strategy to analyze scRNA-seq data (the software is coded in MATLAB and is free for academic purposes; https://github.com/xkmaxidian/jSRC).
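For context, the naive two-step pipeline that jSRC replaces with a single joint optimization looks like the sketch below; it is not the authors' MATLAB implementation or update rules, only the baseline setting being improved upon.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def baseline_cluster(X_counts, n_types, n_components=30, seed=0):
    """Naive scRNA-seq clustering: log-transform, PCA, then k-means.

    X_counts : cells x genes count matrix (hypothetical input). jSRC couples
    the dimension reduction and the clustering into one objective; this
    separate pipeline only illustrates the problem being solved.
    """
    Z = PCA(n_components=n_components, random_state=seed).fit_transform(np.log1p(X_counts))
    labels = KMeans(n_clusters=n_types, n_init=10, random_state=seed).fit_predict(Z)
    return labels, Z
```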
Subjects
Algorithms; Machine Learning; Sequence Analysis, RNA/methods; Single-Cell Analysis/methods; Cluster Analysis
ABSTRACT
In this work, we propose a novel framework to recognize the cognitive and affective processes of the brain during neuromarketing-based stimuli using EEG signals. The most crucial component of our approach is the proposed classification algorithm, which is based on a sparse representation classification scheme. The basic assumption of our approach is that EEG features from a cognitive or affective process lie on a linear subspace. Hence, a test brain signal can be represented as a linear (or weighted) combination of brain signals from all classes in the training set. The class membership of the brain signals is determined by adopting the Sparse Bayesian Framework with graph-based priors over the weights of the linear combination. Furthermore, the classification rule is constructed by using the residuals of the linear combination. Experiments on a publicly available neuromarketing EEG dataset demonstrate the usefulness of our approach. For the two classification tasks offered by the employed dataset, namely affective state recognition and cognitive state recognition, the proposed classification scheme achieves higher classification accuracy than the baseline and state-of-the-art methods (more than 8% improvement in classification accuracy).
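The class-residual decision rule described above can be made concrete with the standard sparse-representation classification recipe below. A plain L1 solver stands in for the Sparse Bayesian weights with graph-based priors; the residual rule itself matches the description in the abstract.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(X_train, y_train, x_test, alpha=0.01):
    """Sparse-representation classification by class-wise reconstruction residuals.

    The test signal is coded as a sparse combination of all training signals,
    and the class whose training columns reconstruct it with the smallest
    residual is returned.
    """
    D = X_train.T                                   # columns are training signals
    w = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x_test).coef_
    best_class, best_res = None, np.inf
    for c in np.unique(y_train):
        w_c = np.where(y_train == c, w, 0.0)        # keep only class-c coefficients
        res = np.linalg.norm(x_test - D @ w_c)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```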
Subjects
Brain-Computer Interfaces; Electroencephalography; Bayes Theorem; Electroencephalography/methods; Brain; Algorithms; Cognition
ABSTRACT
The increasing penetration of renewable energy sources tends to redirect the power systems community's interest from the traditional power grid model towards the smart grid framework. During this transition, load forecasting for various time horizons constitutes an essential electric utility task in network planning, operation, and management. This paper presents a novel mixed power-load forecasting scheme for multiple prediction horizons ranging from 15 min to 24 h ahead. The proposed approach makes use of a pool of models trained by several machine-learning methods with different characteristics, namely neural networks, linear regression, support vector regression, random forests, and sparse regression. The final prediction values are calculated using an online decision mechanism that weights the individual models according to their past performance. The proposed scheme is evaluated on real electrical load data sensed from a high voltage/medium voltage substation and is shown to be highly effective, as it yields R2 coefficient values from 0.99 to 0.79 for prediction horizons from 15 min to 24 h ahead, respectively. The method is compared to several state-of-the-art machine-learning approaches, as well as a different ensemble method, producing highly competitive results in terms of prediction accuracy.
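One simple instance of the online decision mechanism described above is to weight each model by the inverse of its recent error, as sketched below; the bookkeeping (model names, the use of a rolling MAE) is an assumption for illustration, not the paper's exact rule.

```python
def combine_forecasts(predictions, recent_errors, eps=1e-6):
    """Weight individual model forecasts by their recent performance.

    predictions   : dict mapping model name -> forecast for the next horizon.
    recent_errors : dict mapping model name -> recent error (e.g. rolling MAE).
    Weights are proportional to the inverse of the recent error.
    """
    inv = {m: 1.0 / (recent_errors[m] + eps) for m in predictions}
    total = sum(inv.values())
    weights = {m: inv[m] / total for m in predictions}
    combined = sum(weights[m] * predictions[m] for m in predictions)
    return combined, weights
```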
Subjects
Machine Learning; Neural Networks, Computer; Algorithms; Forecasting; Random Forest
ABSTRACT
Mass production of high-quality synthetic SAR training imagery is essential for boosting the performance of deep-learning (DL)-based SAR automatic target recognition (ATR) algorithms in an open-world environment. To address this problem, we exploit both the widely used Moving and Stationary Target Acquisition and Recognition (MSTAR) SAR dataset and the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, which consists of selected samples from the MSTAR dataset and their computer-generated synthetic counterparts. A series of data augmentation experiments is carried out. First, the sparsity of the scattering centers of the targets is exploited for new target pose synthesis. Additionally, training data with various clutter backgrounds are synthesized via clutter transfer, so that the neural networks are better prepared to cope with background changes in the test samples. To effectively augment the synthetic SAR imagery in the SAMPLE dataset, a novel contrast-based data augmentation technique is proposed. To improve the robustness of neural networks against out-of-distribution (OOD) samples, the SAR images of ground military vehicles collected by the self-developed MiniSAR system are used as the training data for the adversarial outlier exposure procedure. Simulation results show that the proposed data augmentation methods are effective in improving both the target classification accuracy and the OOD detection performance. The purpose of this work is to establish the foundation for large-scale, open-field implementation of DL-based SAR-ATR systems, which is not only of great theoretical value but also potentially meaningful for military applications.
Subjects
Deep Learning; Military Personnel; Humans; Algorithms; Computer Simulation; Imagery, Psychotherapy
ABSTRACT
Multi-focus image fusion plays an important role in computer vision applications. In the process of image fusion, blurring and information loss may occur, so our goal is to obtain high-definition, information-rich fused images. In this paper, a novel multi-focus image fusion method based on local energy and sparse representation in the shearlet domain is proposed. The source images are decomposed into low- and high-frequency sub-bands by the shearlet transform. The low-frequency sub-bands are fused by sparse representation, and the high-frequency sub-bands are fused by local energy. The inverse shearlet transform is used to reconstruct the fused image. The Lytro dataset with 20 pairs of images is used to verify the proposed method, and 8 state-of-the-art fusion methods and 8 metrics are used for comparison. According to the experimental results, our method achieves good performance for multi-focus image fusion.
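The local-energy rule for the high-frequency sub-bands can be sketched as below. The shearlet decomposition itself is omitted; the function only shows the selection rule applied to one pair of same-scale coefficient arrays, and the window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highpass_local_energy(band_a, band_b, win=7):
    """Local-energy selection rule for a pair of high-frequency sub-bands.

    band_a, band_b : same-shape coefficient arrays from a multiscale transform
    (the paper uses the shearlet transform). At each position, the coefficient
    with the larger local energy in a win x win neighborhood is kept.
    """
    ea = uniform_filter(band_a ** 2, size=win)
    eb = uniform_filter(band_b ** 2, size=win)
    return np.where(ea >= eb, band_a, band_b)
```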
ABSTRACT
Anomaly detection in hyperspectral remote sensing data has recently become more attractive in hyperspectral image (HSI) processing. The low-rank and sparse matrix decomposition-based anomaly detection algorithm (LRaSMD) exhibits poor detection performance in complex scenes with multiple background edges and noise. Therefore, this study proposes a weighted sparse hyperspectral anomaly detection method. First, using the idea of matrix decomposition, the original hyperspectral data matrix is decomposed into three sub-matrices representing the low-rank background, the sparse component, and the noise, respectively. Second, to suppress noise interference in the complex background, we employ the low-rank background image as a reference, build a local spectral and spatial dictionary through a sliding-window strategy, reconstruct the HSI pixels of the original data, and extract the sparse coefficients. We propose the sparse coefficient divergence evaluation index (SCDI) as a weighting factor for the sparse anomaly map, yielding a salient anomaly map that suppresses background edges, noise, and other residues caused by the decomposition while enhancing the anomalous targets. Finally, anomalous pixels are segmented based on an adaptive threshold. The experimental results demonstrate that, on a real-scene hyperspectral dataset with a complicated background, the proposed method outperforms existing representative algorithms in terms of detection performance.
ABSTRACT
At present, the micro-Doppler effect of underwater targets is a challenging new research problem. This paper studies the micro-Doppler effect of underwater targets, analyzes the motion characteristics of underwater micro-motion components, establishes echo models of harmonic vibration points and of plane and rotating propellers, and reveals the complex modulation laws of the micro-Doppler effect. In addition, since an echo is a multi-component signal superposed from multiple modulated signals, this paper provides a sparse reconstruction method combined with time-frequency distributions to realize signal separation and time-frequency analysis. A MicroDopplerlet time-frequency atomic dictionary, matched to the complex modulation forms of the echoes, is designed, which effectively realizes a concise representation of the echoes and the analysis of the micro-Doppler effect. Meanwhile, the micro-motion parameter information needed for underwater signal detection and recognition is extracted.
ABSTRACT
Multiple sclerosis (MS) is a severely debilitating disease which requires accurate and timely diagnosis. MRI is the primary diagnostic vehicle; however, it is susceptible to noise and artifact, which can limit diagnostic accuracy. A myriad of denoising algorithms have been developed over the years for medical imaging, yet the models continue to become more complex. We developed a lightweight algorithm which exploits the image's inherent noise via dictionary learning to improve image quality without high computational complexity or pretraining, through a process known as orthogonal matching pursuit (OMP). Our algorithm is compared to existing traditional denoising algorithms to evaluate performance on real noise that would commonly be encountered in a clinical setting. Fifty patients with a history of MS who received 1.5 T MRI of the spine between the years 2018 and 2022 were retrospectively identified in accordance with local IRB policies. Native-resolution 5 mm sagittal images were selected from T2-weighted sequences for evaluation using various denoising techniques, including our proposed OMP denoising algorithm. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were measured. While wavelet denoising demonstrated an expectedly higher PSNR than other models, its SSIM was variable and consistently underperformed its comparators (0.94 ± 0.10). Our pilot OMP denoising algorithm provided superior performance with greater consistency in terms of SSIM (0.99 ± 0.01) and PSNR similar to non-local means filtering (NLM), both of which were superior to the other comparators (OMP 37.6 ± 2.2, NLM 38.0 ± 1.8). The superior performance of our OMP denoising algorithm in comparison to traditional models is promising for clinical utility. Given its individualized and lightweight approach, implementation into PACS may be more easily accomplished. It is our hope that this technology will provide improved diagnostic accuracy and workflow optimization for neurologists and radiologists, as well as improved patient outcomes.
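A generic patch-based version of this kind of OMP dictionary-learning denoising can be sketched with scikit-learn, as below. The patch size, number of atoms, and sparsity level are illustrative assumptions rather than the paper's settings; the dictionary is learned from the noisy image itself, in the spirit of the pretraining-free approach described above.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def omp_denoise(img, patch=8, n_atoms=64, n_coefs=2):
    """Patch-based dictionary-learning denoising with an OMP sparse code.

    img : 2D float image (e.g. one sagittal slice). Patches are extracted,
    a dictionary is learned from them, each patch is sparsely coded with OMP,
    and the denoised image is rebuilt by averaging overlapping patches.
    """
    patches = extract_patches_2d(img, (patch, patch))
    shape = patches.shape
    P = patches.reshape(shape[0], -1).astype(float)
    means = P.mean(axis=1, keepdims=True)
    P -= means                                        # remove patch DC component
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_coefs,
                                       random_state=0)
    code = dico.fit(P).transform(P)                   # sparse OMP codes per patch
    recon = code @ dico.components_ + means           # denoised patches
    return reconstruct_from_patches_2d(recon.reshape(shape), img.shape)
```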
Subjects
Multiple Sclerosis; Humans; Multiple Sclerosis/diagnostic imaging; Retrospective Studies; Algorithms; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods
ABSTRACT
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable with optical lenses, existing FFOA imaging techniques can only acquire information about blood flow in the plane within the depth of field, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform and contrast spatial frequency is proposed. First, an imaging system is constructed, and the FFOA images are acquired via the intensity-fluctuation modulation effect. Second, we decompose the source images into low-pass and bandpass images by performing the nonsubsampled contourlet transform. A sparse representation-based rule is introduced to fuse the low-pass images to effectively retain the useful energy information. Meanwhile, a contrast spatial frequency rule, which considers the neighborhood correlation and gradient relationships of pixels, is proposed to fuse the bandpass images. Finally, the fully focused image is produced by reconstruction. The proposed method significantly expands the range of focus of optical angiography and can be effectively extended to public multi-focus datasets. Experimental results confirm that the proposed method outperforms some state-of-the-art methods in both qualitative and quantitative evaluations.
ABSTRACT
BACKGROUND: Self-interacting proteins (SIPs), in which two or more copies of a protein expressed by one gene can interact with each other, play a central role in the regulation of most living cells and cellular functions. Although abundant SIP data can be generated using high-throughput experimental techniques, the experimental identification of SIPs remains time-consuming, costly, inefficient, and inherently prone to high false-positive rates. Therefore, it is increasingly important to develop efficient and accurate automatic approaches that complement experimental methods by assisting and accelerating the prediction of SIPs from protein sequence information. RESULTS: In this paper, we present a novel framework, termed GLCM-WSRC (gray level co-occurrence matrix-weighted sparse representation based classification), for automatically predicting SIPs based on protein evolutionary information derived from protein primary sequences. More specifically, we first convert each protein sequence into a Position Specific Scoring Matrix (PSSM) containing protein sequence evolutionary information, using the Position-Specific Iterated BLAST (PSI-BLAST) tool. Second, using an efficient feature extraction approach, GLCM, we extract abstract, salient, and invariant feature vectors from the PSSM, and then apply a pre-processing operation, the adaptive synthetic (ADASYN) technique, to balance the SIP dataset and generate new feature vectors for classification. Finally, we employ an efficient and reliable WSRC model to identify SIPs according to the known information on self-interacting and non-interacting proteins. CONCLUSIONS: Extensive experimental results show that the proposed approach exhibits high prediction performance, with 98.10% accuracy on the yeast dataset and 91.51% accuracy on the human dataset, which further suggests that the proposed model could be a useful tool for large-scale self-interacting protein prediction and other bioinformatics tasks in the future.
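The GLCM feature-extraction step on a PSSM can be illustrated with scikit-image as below. Treating the PSSM as a gray-level image and the chosen quantization level, distances, and texture properties are assumptions for illustration; the WSRC classifier and ADASYN balancing described above are not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(pssm, levels=16):
    """Extract GLCM texture descriptors from a PSSM treated as an image.

    pssm : 2D array of PSSM scores (rows = residues, columns = amino acids).
    Returns a flat feature vector of GLCM properties over four directions.
    """
    # quantize the PSSM scores into `levels` gray levels
    edges = np.linspace(pssm.min(), pssm.max(), levels)
    q = (np.digitize(pssm, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```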
Subjects
Biological Evolution; Computational Biology; Humans; Amino Acid Sequence; Position-Specific Scoring Matrices; Leukocytes; Saccharomyces cerevisiae/genetics
ABSTRACT
OBJECTIVE: To investigate the ability of a multimodality MRI-based radiomics model to predict the aggressiveness of papillary thyroid carcinoma (PTC). METHODS: This study included consecutive patients who underwent neck magnetic resonance (MR) scans and subsequent thyroidectomy during the study period. The pathological diagnosis of the thyroidectomy specimens was the gold standard for determining aggressiveness. Thyroid nodules were manually segmented on three modal MR images, and radiomics features were then extracted. A machine learning model was established to evaluate the prediction of PTC aggressiveness. RESULTS: The study cohort included 107 patients with PTC confirmed by pathology (cross-validation cohort: n = 71; test cohort: n = 36). A total of 1584 features were extracted from the contrast-enhanced T1-weighted (CE-T1WI), T2-weighted (T2WI) and diffusion-weighted (DWI) images of each patient. A sparse representation method was used for radiomics feature selection and to establish the classification model. On the independent test set, the accuracy obtained using only one modality (CE-T1WI, T2WI or DWI) was not particularly satisfactory; in contrast, combining the three modalities achieved an accuracy of 0.917. CONCLUSION: Our study shows that a radiomics model based on multimodality MR images can accurately distinguish aggressive from non-aggressive PTC before surgery. This method may help inform the treatment strategy and prognosis of patients with aggressive PTC.
Subjects
Magnetic Resonance Imaging; Thyroid Neoplasms; Humans; Magnetic Resonance Imaging/methods; Neck; Retrospective Studies; Thyroid Cancer, Papillary/diagnostic imaging; Thyroid Cancer, Papillary/pathology; Thyroid Neoplasms/diagnostic imaging; Thyroid Neoplasms/pathology; Thyroid Neoplasms/surgery
ABSTRACT
PURPOSE: Compressed Sensing Magnetic Resonance Imaging (CS-MRI) is a promising technique for accelerating dynamic cardiac MR imaging (DCMRI). For DCMRI, CS-MRI usually exploits image signal sparsity and the low-rank property to reconstruct dynamic images from undersampled k-space data. In this paper, a novel CS algorithm is investigated to improve dynamic cardiac MR image reconstruction quality while minimizing the amount of k-space data recorded. METHODS: The sparse representation of 3D cardiac magnetic resonance data is implemented by synergistically integrating the 3D total generalized variation (3D-TGV) algorithm and high-order singular value decomposition (HOSVD)-based tensor decomposition, termed the k-t TGV-TD method. In the proposed method, the low-rank structure of the 3D dynamic cardiac MR data is captured with the HOSVD method, and localized image sparsity is achieved by the 3D-TGV method. Moreover, the Fast Composite Splitting Algorithm (FCSA), which combines variable splitting with operator splitting techniques, is employed to solve the low-rank and sparse problem. Two different cardiac MR datasets (cardiac perfusion and cine MR datasets) are used to evaluate the performance of the proposed method. RESULTS: Compared with state-of-the-art methods, such as k-t SLR, 3D-TGV, HOSVD-based tensor decomposition and the low-rank plus sparse method, the proposed k-t TGV-TD method offers improved reconstruction accuracy in terms of higher peak SNR (PSNR) and structural similarity index (SSIM). The k-t TGV-TD method achieves significantly better and more stable reconstruction results than state-of-the-art methods in terms of both PSNR and SSIM, especially for the cardiac perfusion MR dataset. CONCLUSIONS: This work showed that the k-t TGV-TD method is an effective sparse representation approach for DCMRI, capable of significantly improving reconstruction accuracy at different acceleration factors.
Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Heart/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
ABSTRACT
Wind turbines usually operate in harsh environments. The gearbox, the key component of the transmission chain in wind turbines, can easily be affected by multiple factors during operation and develop compound faults. Different types of faults can occur simultaneously, coupled with each other and causing staggered interference. Thus, a challenge is to extract the fault characteristics from the composite fault signal to improve the reliability and accuracy of compound fault diagnosis. To address the above problems, we propose a compound fault diagnosis method for wind turbine gearboxes based on multipoint optimal minimum entropy deconvolution adjusted (MOMEDA) and parallel-parameter-optimized resonance-based sparse signal decomposition (RSSD). First, MOMEDA is applied as a preprocessing step, setting the deconvolution period according to the different fault frequency types to eliminate the interference of the transmission path and environmental noise while decoupling and separating the different types of single faults. Then, the RSSD method with parallel parameter optimization is applied to decompose the preprocessed signal and obtain the low-resonance components, further suppressing the interference components and enhancing the periodic fault characteristics. Finally, envelope demodulation of the enhanced signal is applied to extract the fault features and identify the different fault types. The effectiveness of the proposed method was verified using actual data from a wind turbine gearbox. In addition, a comparison with some existing methods demonstrates the superiority of this method for decoupling composite fault characteristics.
ABSTRACT
Group-based sparse representation (GSR) uses the image nonlocal self-similarity (NSS) prior to group similar image patches and then performs sparse representation. However, the traditional GSR model restores the image by training on the degraded images themselves, which leads to inevitable over-fitting of the data in the training model and results in poor image restoration. In this paper, we propose a new hybrid sparse representation model (HSR) for image restoration. The proposed HSR model is improved in two respects. On the one hand, it exploits the NSS priors of both the degraded images and external image datasets, making the model complementary in both the feature space and the image plane. On the other hand, we introduce a joint sparse representation model to make better use of the local sparsity and NSS characteristics of the images. This joint model integrates the patch-based sparse representation (PSR) model and the GSR model, retaining the advantages of both within a unified sparse representation model. Extensive experimental results show that the proposed hybrid model outperforms several existing image recovery algorithms in both objective and subjective evaluations.
ABSTRACT
The transient pulses caused by local faults of rolling bearings provide important measurement information for fault diagnosis. However, extracting transient pulses from complex nonstationary vibration signals with a large amount of background noise is challenging, especially in the early fault stage. To improve the anti-noise ability and detect incipient faults, a novel signal de-noising method based on an enhanced time-frequency manifold (ETFM) and a kurtosis-wavelet dictionary is proposed. First, to mine the high-dimensional features, the C-C method and Cao's method are combined to determine the embedding dimension and delay time of the phase space reconstruction. Second, the input parameters of the linear local tangent space alignment (LLTSA) algorithm are determined by a grid search based on Renyi entropy, and the dimension is reduced by manifold learning to obtain the ETFM with the highest time-frequency aggregation. Finally, a kurtosis-wavelet dictionary is constructed to select the best atoms, eliminate the noise, and reconstruct the defective signal. Simulations showed that the proposed method is more effective in noise suppression than traditional algorithms and that it can accurately reproduce the amplitude and phase information of the raw signal.
ABSTRACT
As a detection method, X-ray Computed Tomography (CT) has the advantages of clear imaging, short detection time, and low detection cost, which makes it widely used in clinical disease screening, detection, and tracking. This study exploits the ability of sparse representation to learn sparsifying transforms of image information and combines it with image decomposition theory. The structural information of low-dose CT images is separated from the noise and artifact information, and the sparse representation under the learned transforms is used to improve the imaging quality. In this paper, two different learned sparsifying transforms are used: the first captures more of the tissue information of the scanned object, while the other captures more of the noise and artifacts. Together, the two transforms improve the ability of the learned sparse representation to express the various components of the image. Experimental results show that the algorithm is effective.
Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Artifacts; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods
ABSTRACT
A reconstruction algorithm based on multi-dictionary learning (MDL) is proposed to improve the reconstruction quality of acoustic tomography for complex temperature fields. Its aim is to alleviate the under-determination of the inverse problem through a sparse representation of the sound-slowness signal (i.e., the reciprocal of the sound velocity). In the MDL algorithm, the K-SVD dictionary learning algorithm is used to construct corresponding sparse dictionaries for the sound-slowness signals of different types of temperature fields; a KNN peak-type classifier is employed for the joint use of the multiple dictionaries; the orthogonal matching pursuit (OMP) algorithm is used to obtain the sparse representation of the sound-slowness signal in the sparse domain; and the temperature distribution is then obtained from the relationship between sound slowness and temperature. Simulations and actual temperature-distribution reconstruction experiments show that the MDL algorithm yields smaller reconstruction errors and provides more accurate information about the temperature field than the compressed sensing and improved orthogonal matching pursuit (CS-IMOMP) algorithm (which uses a DFT dictionary), the least squares algorithm (LSA), and the simultaneous iterative reconstruction technique (SIRT).
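The single-dictionary OMP reconstruction step can be sketched as below. The variable names and the ray/pixel model are assumptions for illustration; the MDL algorithm additionally trains several K-SVD dictionaries and selects among them with the KNN peak-type classifier, which is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct_slowness(A, t, Psi, n_nonzero=10):
    """Sparse reconstruction of a sound-slowness field from travel-time data.

    Model: t = A @ s with s = Psi @ alpha, alpha sparse in a learned dictionary.
    A   : path-length matrix of the acoustic rays (rays x pixels).
    t   : measured travel times along the rays.
    Psi : learned sparsifying dictionary (pixels x atoms).
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(A @ Psi, t)                 # sense through the composed operator A @ Psi
    alpha = omp.coef_
    s = Psi @ alpha                     # slowness = reciprocal of sound velocity
    return s                            # temperature follows from the slowness-temperature relation
```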