Results 1 - 20 of 22
1.
Sensors (Basel) ; 22(13)2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35808433

ABSTRACT

One of the most promising research areas in the healthcare industry and the scientific community is the application of AI to real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI techniques that allows rapid learning progress and improves medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that persist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale images. To achieve this goal, two image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Component Analysis (PCA), called LR-PCA, is presented. This process helps select the significant principal components (PCs) for use in classification. The proposed CAD system was examined using two public benchmark datasets, INbreast and mini-MIAS.
The proposed CAD system achieved accuracies of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
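The channel-stacking step described above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function names are hypothetical, global histogram equalization stands in for the paper's CLAHE, and the gamma value for the pixel-wise adjustment is arbitrary.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization (a simple stand-in for CLAHE)."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def intensity_adjust(img, gamma=0.7):
    """Pixel-wise intensity (gamma) adjustment."""
    return (255 * (img / 255.0) ** gamma).astype(np.uint8)

def pseudo_color(gray):
    """Keep the original in channel 0, enhanced versions in channels 1-2."""
    return np.dstack([gray, hist_equalize(gray), intensity_adjust(gray)])

rng = np.random.default_rng(0)
mammo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy grayscale image
rgb = pseudo_color(mammo)
print(rgb.shape)  # (64, 64, 3)
```

The resulting three-channel array can be fed to any backbone CNN that expects RGB input.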


Subjects
Breast Neoplasms ; Neural Networks, Computer ; Breast Neoplasms/diagnostic imaging ; Female ; Humans ; Logistic Models ; Machine Learning ; Mammography/methods
2.
Biomed Eng Online ; 13: 36, 2014 Apr 04.
Article in English | MEDLINE | ID: mdl-24708647

ABSTRACT

BACKGROUND: The signals acquired in brain-computer interface (BCI) experiments usually involve complicated sampling, artifact, and noise conditions. This mandates several preprocessing strategies that allow meaningful components of the measured signals to be extracted and passed on to further processing steps. Despite the success of existing preprocessing methods in improving the reliability of BCI, there is still room for improvement to boost performance further. METHODS: A new preprocessing method for denoising P300-based brain-computer interface data is presented that allows better performance with a smaller number of channels and blocks. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing preprocessing and allowing low channel counts to be used. RESULTS: The new method is verified using experimental data and compared to the classification results of the same data without denoising and with denoising using an existing wavelet-shrinkage-based technique. Enhanced performance in different experiments, quantitatively assessed using classification block accuracy as well as bit-rate estimates, was confirmed. CONCLUSION: The new preprocessing method based on spectral subtraction denoising offers superior performance to existing methods and has potential for practical utility as a new standard preprocessing block in BCI signal processing.
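The core of classical spectral subtraction can be sketched in NumPy. This is a generic single-channel illustration under an idealized known-noise-spectrum assumption, not the paper's modified algorithm; all names and parameters are illustrative.

```python
import numpy as np

def spectral_subtract(x, noise_psd, alpha=1.0, floor=0.01):
    """Subtract a noise power-spectrum estimate from one signal channel,
    keeping the noisy phase and flooring the result to avoid negatives."""
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    clean_power = np.maximum(power - alpha * noise_psd, floor * power)
    X_clean = np.sqrt(clean_power) * np.exp(1j * np.angle(X))
    return np.fft.irfft(X_clean, n=len(x))

rng = np.random.default_rng(1)
t = np.arange(512) / 256.0
signal = np.sin(2 * np.pi * 6 * t)          # stand-in for an evoked component
noise = 0.5 * rng.standard_normal(512)
noisy = signal + noise
# In practice the noise PSD is estimated from signal-free segments;
# here we use the known noise for the sake of the demonstration.
noise_psd = np.abs(np.fft.rfft(noise)) ** 2
denoised = spectral_subtract(noisy, noise_psd)
err_before = np.mean((noisy - signal) ** 2)
err_after = np.mean((denoised - signal) ** 2)
print(err_before, err_after)
```

Because each channel is processed independently, such a block slots in before any existing feature extraction without changing the pipeline structure.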


Subjects
Brain-Computer Interfaces ; Signal-To-Noise Ratio ; Statistics as Topic/methods ; Subtraction Technique ; Electroencephalography ; Signal Processing, Computer-Assisted
3.
Bioengineering (Basel) ; 11(5)2024 May 10.
Article in English | MEDLINE | ID: mdl-38790344

ABSTRACT

The analysis of body motion is a valuable tool in the assessment and diagnosis of gait impairments, particularly those related to neurological disorders. In this study, we propose a novel automated system leveraging artificial intelligence to efficiently analyze gait impairment from video-recorded images. The proposed methodology encompasses three key aspects. First, we generate a novel one-dimensional representation of each silhouette image, termed a silhouette sinogram, by computing the distance and angle between the centroid and each detected boundary point. This enables us to effectively use relative variations in motion at different angles to detect gait patterns. Second, a one-dimensional convolutional neural network (1D CNN) model is developed and trained on consecutive silhouette sinogram signals to capture spatiotemporal information via assisted knowledge learning. This allows the network to capture a broader context and the temporal dependencies within the gait cycle, enabling a more accurate diagnosis of gait abnormalities. Training and evaluation were conducted using the publicly accessible INIT GAIT database. Finally, two evaluation schemes are employed: one leveraging individual silhouette frames and the other operating at the subject level using a majority-voting technique. The proposed method showed superior gait impairment recognition, with overall F1-scores of 100%, 90.62%, and 77.32% when evaluated on sinogram signals, and 100%, 100%, and 83.33% when evaluated at the subject level, for cases involving two, four, and six gait abnormalities, respectively. In conclusion, by comparing the observed locomotor function to the conventional gait pattern typically seen in healthy individuals, the proposed approach allows a quantitative and non-invasive evaluation of locomotion.
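One plausible construction of a centroid-to-boundary signature like the silhouette sinogram is sketched below; the exact definition in the paper may differ, and the function names, binning scheme, and test shape are all assumptions for illustration.

```python
import numpy as np

def silhouette_sinogram(mask, n_angles=360):
    """1-D signature of a binary silhouette: farthest boundary distance
    from the centroid, binned by angle."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # Boundary = foreground pixels with at least one background 4-neighbor.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    angles = np.arctan2(by - cy, bx - cx)
    dists = np.hypot(by - cy, bx - cx)
    # Keep the farthest boundary point per angular bin.
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sino = np.zeros(n_angles)
    np.maximum.at(sino, bins, dists)
    return sino

# Sanity check: for a filled disk of radius 20, every non-empty angular
# bin should report a distance close to 20.
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
sig = silhouette_sinogram(disk)
print(sig.shape)
```

Stacking consecutive per-frame signatures then yields the 2-D input the 1D CNN consumes.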

4.
Theor Biol Med Model ; 9: 34, 2012 Aug 06.
Article in English | MEDLINE | ID: mdl-22867264

ABSTRACT

BACKGROUND: Discovering new biomarkers plays a major role in improving early diagnosis of hepatocellular carcinoma (HCC). Experimental determination of biomarkers requires considerable time and money, which motivates the use of in-silico biomarker prediction to reduce the number of experiments required to detect new ones. This is achieved by extracting the most representative genes from HCC microarrays. RESULTS: In this work, we provide a method for extracting the differentially expressed (up-regulated) genes that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (Pearson's correlation coefficient, cosine coefficient, Euclidean distance, mutual information, and entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes was performed using the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ, and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson's correlation coefficient and the cosine coefficient contained six genes that have been implicated in the genesis of cancer (often multiple cancers) in previous studies. Fewer such genes were obtained by the other methods (four using mutual information, three using Euclidean distance, and only one using entropy). A better result was obtained with a hybrid approach based on intersecting the highly ranked genes in the outputs of all investigated methods. This hybrid combination yielded seven genes (two for HCC and five in different types of cancer) among the top ten genes of the intersected list. CONCLUSIONS: To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner.
This approach surpasses each univariate selection method used individually, according to biological interpretation and examination of gene expression signal profiles.
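The intersect-the-top-ranked-genes idea can be sketched as follows. This is a toy illustration with synthetic data and simplified scoring functions; the paper's exact estimators and the distance-to-informativeness mapping are assumptions here.

```python
import numpy as np

def rank_genes(X, y, metric):
    """Score each gene (column of X) against the class labels y; best first."""
    scores = np.array([metric(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(-scores)

def pearson(g, y):
    return abs(np.corrcoef(g, y)[0, 1])

def cosine(g, y):
    return abs(g @ y) / (np.linalg.norm(g) * np.linalg.norm(y))

def neg_euclidean(g, y):
    # Distance-based: smaller distance after z-scoring = more informative.
    gz = (g - g.mean()) / g.std()
    return -np.linalg.norm(gz - y)

def hybrid_top(X, y, metrics, k=10):
    """Intersection of the top-k genes of every metric."""
    tops = [set(rank_genes(X, y, m)[:k]) for m in metrics]
    return set.intersection(*tops)

rng = np.random.default_rng(2)
y = np.repeat([0.0, 1.0], 20)               # 20 control vs. 20 tumor samples
X = rng.standard_normal((40, 100))          # 100 synthetic genes
X[:, 7] += 3 * y                            # gene 7 is strongly up-regulated
picked = hybrid_top(X, y, [pearson, cosine, neg_euclidean], k=10)
print(picked)
```

Only genes that every metric ranks highly survive the intersection, which is what makes the cascaded combination stricter than any single ranking.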


Subjects
Biomarkers, Tumor/genetics ; Carcinoma, Hepatocellular/genetics ; Liver Neoplasms/genetics ; Oncogenes ; Artificial Intelligence ; Data Mining ; Databases, Genetic/statistics & numerical data ; High-Throughput Nucleotide Sequencing ; Humans ; Models, Genetic ; Oligonucleotide Array Sequence Analysis ; Up-Regulation
5.
Diagnostics (Basel) ; 12(11)2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36428875

ABSTRACT

Blood cells carry important information that can be used to represent a person's current state of health. Identifying different types of blood cells in a timely and precise manner is essential to reducing the infection risks that people face daily. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework built on the capability of transfer learning with a convolutional neural network to rapidly and automatically identify blood cells in an eight-class identification scenario: basophil, eosinophil, erythroblast, immature granulocytes, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted comprehensive experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of the proposed BCNet was directly compared, on the same dataset, with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Among the different optimizers, the BCNet framework demonstrated better classification performance with ADAM and RMSP. The best performance was achieved using the RMSP optimizer, with 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved the prediction accuracy by 1.94%, 3.33%, and 1.65% using the ADAM, RMSP, and SGD optimizers, respectively. The proposed BCNet model outperformed DenseNet, ResNet, Inception, and MobileNet in the testing time of a single blood cell image by 10.98, 4.26, 2.03, and 0.21 msec, respectively. In comparison to the most recent deep learning models, BCNet generates encouraging outcomes.
Such a recognition rate could improve the detection of blood cells and thus advance healthcare facilities.
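The three optimizers compared above differ only in their parameter-update rules. The NumPy sketch below contrasts them on a toy quadratic loss; the learning rates and decay constants are illustrative, not the paper's settings.

```python
import numpy as np

def sgd(g, state, lr=0.1):
    return lr * g, state

def rmsprop(g, state, lr=0.1, beta=0.9, eps=1e-8):
    # Scale the step by a running average of squared gradients.
    v = beta * state.get("v", 0.0) + (1 - beta) * g ** 2
    state["v"] = v
    return lr * g / (np.sqrt(v) + eps), state

def adam(g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # RMSprop-style scaling plus momentum, with bias correction.
    t = state.get("t", 0) + 1
    m = b1 * state.get("m", 0.0) + (1 - b1) * g
    v = b2 * state.get("v", 0.0) + (1 - b2) * g ** 2
    state.update(t=t, m=m, v=v)
    mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return lr * mhat / (np.sqrt(vhat) + eps), state

def minimize(opt, steps=200):
    w, state = np.array([5.0, -3.0]), {}
    for _ in range(steps):
        g = 2 * w                      # gradient of the loss ||w||^2
        step, state = opt(g, state)
        w = w - step
    return float(np.sum(w ** 2))       # final loss

losses = {name: minimize(opt) for name, opt in
          [("SGD", sgd), ("RMSP", rmsprop), ("ADAM", adam)]}
print(losses)
```

In a real training loop the same update functions would be applied per weight tensor of the network.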

6.
Theor Biol Med Model ; 8: 11, 2011 Apr 27.
Article in English | MEDLINE | ID: mdl-21524280

ABSTRACT

BACKGROUND: Bioinformatics can be used to predict protein function, leading to an understanding of cellular activities; equally weighted protein-protein interactions (PPI) are normally used to predict such protein functions. The present study provides a weighting strategy for PPI to improve the prediction of protein functions. The weights depend on the local and global network topologies and the number of experimental verification methods. The proposed methods were applied to the yeast proteome and integrated with the neighbor counting method to predict the functions of unknown proteins. RESULTS: A new technique to weight interactions in the yeast proteome is presented. The weights are related to the network topology (local and global) and the number of identifying methods, and the results revealed improvement in the sensitivity and specificity of prediction in terms of cellular role and cellular location. This method (new weights) was compared with a method that assigns all interactions the same weight and was shown to be superior. CONCLUSIONS: A new method for weighting the interactions in protein-protein interaction networks is presented. Experimental results on yeast proteins demonstrated that weighted interactions integrated with the neighbor counting method improved the sensitivity and specificity of prediction for two functional categories: cellular role and cell location.
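Weighted neighbor counting itself is simple: each annotated neighbor votes for its functions with the interaction's weight. The sketch below is a minimal illustration with made-up proteins and weights; the paper's topology-derived weighting scheme is not reproduced here.

```python
# Weighted neighbor counting: each function of a neighbor votes with the
# interaction's weight; the unknown protein receives the top-scoring functions.
def predict_functions(protein, edges, annotations, top=2):
    """edges: {(p, q): weight}; annotations: {protein: set of functions}."""
    scores = {}
    for (p, q), w in edges.items():
        if protein in (p, q):
            other = q if p == protein else p
            for fn in annotations.get(other, ()):
                scores[fn] = scores.get(fn, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Toy network: protein "u" is unannotated; weights are hypothetical.
edges = {("u", "a"): 0.9, ("u", "b"): 0.8, ("u", "c"): 0.1}
annotations = {"a": {"transport"}, "b": {"transport", "signaling"},
               "c": {"metabolism"}}
print(predict_functions("u", edges, annotations))
```

With equal weights, "metabolism" would count as much per edge as "transport"; the weighting is what lets weakly supported interactions contribute less.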


Subjects
Protein Interaction Mapping/methods ; Saccharomyces cerevisiae Proteins/metabolism ; Saccharomyces cerevisiae/metabolism ; Molecular Sequence Annotation ; Protein Binding ; Saccharomyces cerevisiae/cytology ; Signal Transduction
7.
Theor Biol Med Model ; 8: 39, 2011 Oct 22.
Article in English | MEDLINE | ID: mdl-22018164

ABSTRACT

BACKGROUND: Understanding gene interactions in complex living systems can be seen as the ultimate goal of the systems biology revolution. Hence, to elucidate disease ontology fully and to reduce the cost of drug development, gene regulatory networks (GRNs) have to be constructed. During the last decade, many GRN inference algorithms based on genome-wide data have been developed to unravel the complexity of gene regulation. Time-series transcriptomic data measured by genome-wide DNA microarrays are traditionally used for GRN modelling. One of the major problems with microarrays is that a dataset consists of relatively few time points with respect to the large number of genes. Dimensionality is thus one of the central problems in GRN modelling. RESULTS: In this paper, we develop a biclustering function enrichment analysis toolbox (BicAT-plus) to study the effect of biclustering in reducing data dimensions. The network generated by our system was validated against available interaction databases and compared with previous methods. The results demonstrated the improved performance of the proposed method. CONCLUSIONS: Because of the sparse nature of GRNs, the results of biclustering techniques differ significantly from those of previous methods.


Subjects
Gene Regulatory Networks/genetics ; Saccharomyces cerevisiae/genetics ; Algorithms ; Bayes Theorem ; Cluster Analysis ; Databases, Genetic ; Linear Models ; ROC Curve ; Reproducibility of Results
8.
PLoS One ; 15(3): e0230409, 2020.
Article in English | MEDLINE | ID: mdl-32208428

ABSTRACT

Machine learning algorithms are being implemented at an escalating pace to classify and/or predict the onset of neurodegenerative diseases, including Alzheimer's Disease (AD); this can be attributed to the abundance of data and of powerful computers. The objective of this work was to deliver a robust classification system for AD and Mild Cognitive Impairment (MCI) against healthy controls (HC) using a low-cost network in terms of shallow architecture and processing. The dataset used in this study was downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The classification methodology was a convolutional neural network (CNN), with diffusion maps and gray-matter (GM) volumes as the input images. The numbers of scans included were 185, 106, and 115 for HC, MCI, and AD, respectively. A ten-fold cross-validation scheme was adopted; the stacked mean diffusivity (MD) and GM volume produced an AUC of 0.94 and 0.84, an accuracy of 93.5% and 79.6%, a sensitivity of 92.5% and 62.7%, and a specificity of 93.9% and 89% for AD/HC and MCI/HC classification, respectively. This work elucidates the impact of incorporating data from different imaging modalities, i.e., structural Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), with deep learning employed for classification. To the best of our knowledge, this is the first study to assess the impact of having more than one scan per subject and to propose a proper procedure to confirm the robustness of the system. The results were competitive with the existing literature, paving the way toward treatments that could slow or prevent the progression of AD.


Subjects
Alzheimer Disease/diagnosis ; Cognitive Dysfunction/diagnosis ; Diffusion Tensor Imaging/methods ; Magnetic Resonance Imaging/methods ; Aged ; Algorithms ; Alzheimer Disease/diagnostic imaging ; Alzheimer Disease/pathology ; Cognitive Dysfunction/diagnostic imaging ; Cognitive Dysfunction/pathology ; Deep Learning ; Disease Progression ; Female ; Gray Matter/diagnostic imaging ; Gray Matter/physiology ; Hippocampus/diagnostic imaging ; Hippocampus/pathology ; Humans ; Image Interpretation, Computer-Assisted/methods ; Machine Learning ; Male ; Neural Networks, Computer ; Neuroimaging/methods ; Support Vector Machine
9.
IEEE Trans Biomed Eng ; 52(1): 127-31, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15651573

ABSTRACT

We develop a simple yet effective technique for motion artifact suppression in ultrasound images reconstructed from multiple acquisitions. Assuming a rigid-body motion model, a navigator echo is computed for each acquisition and then registered to estimate the motion between acquisitions. By detecting this motion, it is possible to compensate for it in the reconstruction step to obtain images that are free of lateral motion artifacts. The theory and practical implementation details are described, and the performance is analyzed using computer simulations as well as real data. The results indicate the potential of the new method for real-time implementation in lower-cost ultrasound imaging systems.
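The registration step for one-dimensional navigator signals can be sketched with a circular cross-correlation; this is a generic illustration (integer shifts, synthetic signals), not the paper's estimator.

```python
import numpy as np

def estimate_shift(ref, nav):
    """Return s such that nav is (approximately) ref delayed by s samples,
    found at the peak of the circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(nav) * np.conj(np.fft.fft(ref))).real
    lag = int(np.argmax(corr))
    return lag if lag <= len(ref) // 2 else lag - len(ref)

rng = np.random.default_rng(3)
ref = rng.standard_normal(256)      # navigator echo of the first acquisition
nav = np.roll(ref, 7)               # later acquisition, displaced by motion
shift = estimate_shift(ref, nav)
corrected = np.roll(nav, -shift)    # compensate before reconstruction
print(shift)
```

Once the shift is known, the compensation is applied to the corresponding acquisition before the multi-acquisition image is formed.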


Subjects
Algorithms ; Artifacts ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Motion ; Subtraction Technique ; Ultrasonography/methods ; Pattern Recognition, Automated/methods ; Phantoms, Imaging ; Reproducibility of Results ; Sensitivity and Specificity ; Ultrasonography/instrumentation
10.
Article in English | MEDLINE | ID: mdl-26737462

ABSTRACT

Cardiac arrhythmia is a serious disorder of the heart's electrical activity that may have fatal consequences, especially if not detected early. This has motivated the development of automated arrhythmia detection systems that can detect arrhythmias early and recognize them accurately, significantly improving the chances of patient survival. In this paper, we propose an improved arrhythmia detection system, designed to identify five different types, based on nonlinear dynamical modeling of electrocardiogram signals. The new approach introduces a novel distance-series domain derived from the reconstructed phase space as a transform space for the signals, which is explored using classical features. The performance measures showed that the proposed system outperforms state-of-the-art methods, achieving 98.7% accuracy, 99.54% sensitivity, 99.42% specificity, 98.19% positive predictive value, and 99.85% negative predictive value.
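A reconstructed phase space is built by time-delay embedding, and a distance series can then be derived from it. The sketch below uses one plausible construction (distance of each phase-space point from the attractor centroid); the paper's exact distance definition, embedding dimension, and delay are assumptions here.

```python
import numpy as np

def phase_space(x, dim=3, tau=5):
    """Time-delay embedding (reconstructed phase space) of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def distance_series(x, dim=3, tau=5):
    """Distance of each phase-space point from the attractor centroid --
    one way to reduce the reconstructed space back to a 1-D series."""
    pts = phase_space(x, dim, tau)
    return np.linalg.norm(pts - pts.mean(axis=0), axis=1)

t = np.linspace(0, 8 * np.pi, 1000)
ecg_like = np.sin(t) + 0.3 * np.sin(7 * t)   # toy quasi-periodic signal
d = distance_series(ecg_like)
print(d.shape)
```

Classical features (statistics, spectra) are then computed on `d` instead of on the raw signal.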


Subjects
Algorithms ; Arrhythmias, Cardiac/classification ; Arrhythmias, Cardiac/diagnosis ; Electrocardiography ; Fourier Analysis ; Heart Rate/physiology ; Humans ; Signal Processing, Computer-Assisted
11.
IEEE Trans Biomed Eng ; 51(11): 1944-53, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15536896

ABSTRACT

A new adaptive signal-preserving technique for noise suppression in event-related functional magnetic resonance imaging (fMRI) data is proposed based on spectral subtraction. The proposed technique estimates a parametric model for the power spectrum of random noise from the acquired data based on the characteristics of the Rician statistical model. This model is subsequently used to estimate a noise-suppressed power spectrum for any given pixel time course by simple subtraction of power spectra. The new technique is tested using computer simulations and real data from event-related fMRI experiments. The results show the potential of the new technique in suppressing noise while preserving the other deterministic components in the signal. Moreover, we demonstrate that further analysis using principal component analysis and independent component analysis shows a significant improvement in both convergence and clarity of results when the new technique is used. Given its simple form, the new method does not change the statistical characteristics of the signal or cause correlated noise to be present in the processed signal. This suggests the value of the new technique as a useful preprocessing step for fMRI data analysis.


Subjects
Algorithms ; Brain Mapping/methods ; Brain/physiology ; Evoked Potentials/physiology ; Image Interpretation, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Subtraction Technique ; Artifacts ; Artificial Intelligence ; Brain/anatomy & histology ; Computer Simulation ; Feedback ; Humans ; Image Enhancement/methods ; Information Storage and Retrieval/methods ; Models, Biological ; Models, Statistical ; Numerical Analysis, Computer-Assisted ; Reproducibility of Results ; Sensitivity and Specificity ; Stochastic Processes
12.
IEEE Trans Biomed Eng ; 49(9): 997-1014, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12214889

ABSTRACT

This paper presents a novel approach for speckle reduction and coherence enhancement of ultrasound images based on a nonlinear coherent diffusion (NCD) model. The proposed NCD model combines three different models. According to speckle extent and image anisotropy, the NCD model changes progressively from isotropic diffusion through anisotropic coherent diffusion to, finally, mean curvature motion. This structure maximally low-pass filters those parts of the image that correspond to fully developed speckle, while substantially preserving information associated with resolved-object structures. The proposed implementation algorithm utilizes an efficient discretization scheme that allows real-time implementation on commercial systems. The theory and implementation of the new technique are presented and verified using phantom and clinical ultrasound images. In addition, the results of previous techniques are compared with the new method to demonstrate its performance.
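The edge-preserving smoothing behavior that NCD builds on can be illustrated with a single explicit Perona-Malik diffusion step, a much-simplified stand-in for the paper's three-regime model; the conduction function and constants below are illustrative choices.

```python
import numpy as np

def edge_stop(d, kappa=0.3):
    """Conduction coefficient: near 1 in flat regions, near 0 across edges."""
    return np.exp(-(d / kappa) ** 2)

def diffusion_step(img, dt=0.2):
    """One explicit Perona-Malik anisotropic diffusion step
    (circular borders via np.roll, adequate for this demo)."""
    dN = np.roll(img, 1, 0) - img
    dS = np.roll(img, -1, 0) - img
    dE = np.roll(img, -1, 1) - img
    dW = np.roll(img, 1, 1) - img
    flux = (edge_stop(dN) * dN + edge_stop(dS) * dS +
            edge_stop(dE) * dE + edge_stop(dW) * dW)
    return img + dt * flux

rng = np.random.default_rng(4)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                               # a step edge
noisy = clean + 0.1 * rng.standard_normal((32, 32))
out = noisy.copy()
for _ in range(20):
    out = diffusion_step(out)
print(np.mean((noisy - clean) ** 2), np.mean((out - clean) ** 2))
```

Small intensity differences (noise) are diffused away while the large difference across the edge passes almost no flux, which is the qualitative behavior NCD refines with its coherence-adaptive regimes.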


Subjects
Algorithms ; Image Processing, Computer-Assisted/methods ; Signal Processing, Computer-Assisted ; Ultrasonography, Doppler, Pulsed/instrumentation ; Ultrasonography, Doppler, Pulsed/methods ; Anisotropy ; Computer Simulation ; Echocardiography/methods ; Evaluation Studies as Topic ; Heart ; Humans ; Kidney/diagnostic imaging ; Liver/diagnostic imaging ; Models, Statistical ; Nonlinear Dynamics ; Phantoms, Imaging ; Reproducibility of Results ; Sensitivity and Specificity ; Time Factors
13.
IEEE Trans Biomed Eng ; 49(7): 733-6, 2002 Jul.
Article in English | MEDLINE | ID: mdl-12083309

ABSTRACT

We present a study of the nonlinear dynamics of electrocardiogram (ECG) signals for arrhythmia characterization. The correlation dimension and largest Lyapunov exponent are used to model the chaotic nature of five different classes of ECG signals. The model parameters are evaluated for a large number of real ECG signals within each class and the results are reported. The presented algorithms allow automatic calculation of the features. The statistical analysis of the calculated features indicates that they differ significantly between normal heart rhythm and the different arrhythmia types and, hence, can be rather useful in ECG arrhythmia detection. On the other hand, the results indicate that the discrimination between different arrhythmia types is difficult using such features. The results of this work are supported by statistical analysis that provides a clear outline for the potential uses and limitations of these features.
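The correlation dimension mentioned above is conventionally estimated from the Grassberger-Procaccia correlation sum; the sketch below demonstrates the idea on a synthetic 1-D attractor (a circle in 2-D) rather than on embedded ECG data, and the radii are illustrative.

```python
import numpy as np

def correlation_sum(pts, r):
    """Fraction of distinct point pairs closer than r (C(r))."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return (np.sum(d < r) - n) / (n * (n - 1))  # exclude self-pairs

def correlation_dimension(pts, r1, r2):
    """Slope of log C(r) versus log r between two radii."""
    c1, c2 = correlation_sum(pts, r1), correlation_sum(pts, r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(5)
# Points uniform on a circle: a one-dimensional set embedded in 2-D,
# so the estimated dimension should be close to 1.
theta = rng.uniform(0, 2 * np.pi, 800)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
dim = correlation_dimension(circle, 0.05, 0.2)
print(dim)
```

For ECG analysis the same estimator is applied to the time-delay-embedded signal, and the largest Lyapunov exponent is computed separately from trajectory divergence rates.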


Subjects
Algorithms ; Arrhythmias, Cardiac/diagnosis ; Electrocardiography/methods ; Models, Cardiovascular ; Nonlinear Dynamics ; Arrhythmias, Cardiac/classification ; Databases, Factual ; Electrocardiography/classification ; Electrocardiography/statistics & numerical data ; Humans ; Sensitivity and Specificity ; Signal Processing, Computer-Assisted
14.
IEEE Trans Biomed Eng ; 49(9): 1059-67, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12214880

ABSTRACT

A new system is proposed for tracking sensitive areas in the retina for computer-assisted laser treatment of choroidal neovascularization (CNV). The system consists of a fundus camera using red-free illumination mode interfaced to a computer that allows real-time capturing of video input. The first image acquired is used as the reference image and utilized by the treating physician for treatment planning. A grid of seed contours over the whole image is initiated and allowed to deform by splitting and/or merging according to preset criteria until the whole vessel tree is demarcated. Then, the image is filtered using a one-dimensional Gaussian filter in two perpendicular directions to extract the core areas of the vessels. Faster segmentation is obtained for subsequent images by automatic registration to compensate for eye movement and saccades. An efficient registration technique is developed whereby landmarks are detected in the reference frame and then tracked in the subsequent frames. Using the relation between these two sets of corresponding points, an optimal transformation can be obtained. The implementation details of the proposed strategy are presented, and the obtained results indicate that it is suitable for real-time determination and tracking of treatment positions.
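Computing an optimal rigid transformation from two sets of corresponding landmarks is a classical least-squares problem (the Kabsch algorithm). The sketch below recovers a rotation and translation from synthetic landmark pairs; whether the paper uses exactly this solver is an assumption.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t
    (Kabsch algorithm on centered point sets)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(6)
landmarks = rng.uniform(0, 100, size=(8, 2))      # reference-frame landmarks
angle = np.deg2rad(3.0)                           # small saccade-like rotation
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
moved = landmarks @ R_true.T + np.array([2.0, -1.5])
R, t = rigid_register(landmarks, moved)
print(R, t)
```

In the tracking loop, the recovered transformation maps planned treatment positions from the reference frame into each new frame.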


Subjects
Eye Movements/physiology ; Image Enhancement/methods ; Laser Coagulation/methods ; Retinal Neovascularization/diagnosis ; Retinal Neovascularization/surgery ; Retinal Vessels/anatomy & histology ; Algorithms ; Choroidal Neovascularization/diagnosis ; Choroidal Neovascularization/etiology ; Choroidal Neovascularization/surgery ; Chronic Disease ; Coloring Agents ; Computer Simulation ; Diabetic Retinopathy/complications ; False Positive Reactions ; Humans ; Indocyanine Green ; Laser Coagulation/instrumentation ; Microscopy, Video/methods ; Motion ; Ophthalmoscopy/methods ; Pattern Recognition, Automated ; Retinal Neovascularization/etiology ; Retinal Vessels/physiopathology
15.
Article in English | MEDLINE | ID: mdl-19963770

ABSTRACT

A new method is presented to identify electrocardiogram (ECG) signals of abnormal heartbeats based on Prony's modeling algorithm and a neural network. With Prony's model, an ECG signal can be written as a finite sum of exponentials determined by its poles. A neural network is used to identify the ECG signal from the calculated poles. A classification algorithm based on a multi-layer feed-forward neural network trained with back-propagation is proposed to categorize the beats into one of five types: normal sinus rhythm (NSR), ventricular couplet (VC), ventricular tachycardia (VT), ventricular bigeminy (VB), and ventricular fibrillation (VF).
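The pole-extraction half of this pipeline can be sketched via the linear-prediction formulation of Prony's method: fit a recursion to the samples, then take the roots of the characteristic polynomial. This is a minimal noiseless illustration, not the paper's full fitting procedure.

```python
import numpy as np

def prony_poles(x, p):
    """Estimate the p poles of a signal modeled as a sum of p exponentials:
    fit x[k] = a1*x[k-1] + ... + ap*x[k-p] by least squares, then root
    the characteristic polynomial z^p - a1*z^(p-1) - ... - ap."""
    n = len(x)
    A = np.column_stack([x[p - 1 - i : n - 1 - i] for i in range(p)])
    b = x[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.roots(np.concatenate(([1.0], -a)))

k = np.arange(50)
z1, z2 = 0.9, 0.6                        # true (real, decaying) poles
x = 2.0 * z1 ** k + 1.0 * z2 ** k        # toy two-exponential "beat"
poles = np.sort(prony_poles(x, 2).real)
print(poles)
```

The estimated poles (or features derived from them) then form the input vector of the neural-network classifier.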


Subjects
Arrhythmias, Cardiac/diagnosis ; Electrocardiography/methods ; Algorithms ; Arrhythmia, Sinus/physiopathology ; Arrhythmias, Cardiac/physiopathology ; Computer Simulation ; Heart Conduction System/physiopathology ; Heart Rate/physiology ; Heart Ventricles/physiopathology ; Humans ; Models, Cardiovascular ; Nerve Net ; Neurons/physiology ; Ventricular Fibrillation/diagnosis ; Ventricular Fibrillation/physiopathology
16.
Article in English | MEDLINE | ID: mdl-18002734

ABSTRACT

The model-based approach for detecting fMRI activations involves assumptions about the hemodynamic response function. If such assumptions are incorrect or incomplete, biased estimates of the true response may result, posing a significant obstacle to the practicality of the technique. In this work, a simple yet robust model-free technique is proposed for detecting fMRI activations. The idea is to convert one of the model-based fMRI tools, canonical correlation analysis (CCA), into a model-free method with the help of independent component analysis (ICA). In particular, ICA provides accurate reference functions for CCA in place of the harmonics originally used. This combination eliminates the limitations of both techniques and provides a model-free approach to data analysis. Results from both numerical simulations and real fMRI datasets confirm the practicality and robustness of the proposed method.


Subjects
Algorithms ; Brain Mapping/methods ; Evoked Potentials, Motor/physiology ; Image Interpretation, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods ; Motor Cortex/physiology ; Pattern Recognition, Automated/methods ; Humans ; Image Enhancement/methods ; Models, Neurological ; Principal Component Analysis ; Reproducibility of Results ; Sensitivity and Specificity ; Statistics as Topic
18.
Magn Reson Med ; 56(6): 1182-91, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17089380

ABSTRACT

A simple iterative algorithm, termed deconvolution-interpolation gridding (DING), is presented to address the problem of reconstructing images from arbitrarily-sampled k-space. The new algorithm solves a sparse system of linear equations that is equivalent to a deconvolution of the k-space with a small window. The deconvolution operation results in increased reconstruction accuracy without grid subsampling, at some cost to computational load. By avoiding grid oversampling, the new solution saves memory, which is critical for 3D trajectories. The DING algorithm does not require the calculation of a sampling density compensation function, which is often problematic. DING's sparse linear system is inverted efficiently using the conjugate gradient (CG) method. The reconstruction of the gridding system matrix is simple and fast, and no regularization is needed. This feature renders DING suitable for situations where the k-space trajectory is changed often or is not known a priori, such as when patient motion occurs during the scan. DING was compared with conventional gridding and an iterative reconstruction method in computer simulations and in vivo spiral MRI experiments. The results demonstrate a stable performance and reduced root mean square (RMS) error for DING in different k-space trajectories.
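The conjugate gradient core that DING relies on is standard and easy to sketch. The demo below solves a small dense symmetric positive-definite system; DING's actual system is sparse and may require the normal-equations variant, so treat this as a generic CG illustration.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A by the
    conjugate gradient method."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(7)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)     # symmetric positive definite
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

Because CG touches the matrix only through products `A @ p`, a sparse or operator-based system (as in DING) needs no change to the loop itself.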


Subjects
Algorithms ; Brain/anatomy & histology ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Imaging, Three-Dimensional/methods ; Magnetic Resonance Imaging/methods ; Humans ; Numerical Analysis, Computer-Assisted ; Phantoms, Imaging ; Reproducibility of Results ; Sensitivity and Specificity
19.
Int J Biomed Imaging ; 2006: 49378, 2006.
Article in English | MEDLINE | ID: mdl-23165034

ABSTRACT

Image reconstruction from nonuniformly sampled spatial frequency domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation to the rows of this matrix is applied to convert the matrix into a sparse one up to a certain chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility.

20.
Appl Opt ; 42(31): 6398-411, 2003 Nov 01.
Article in English | MEDLINE | ID: mdl-14649284

ABSTRACT

We have investigated a method for solving the inverse problem of determining the optical properties of a two-layer turbid model. The method is based on deducing the optical properties (OPs) of the top layer from the absolute spatially resolved reflectance that results from photon migration within only the top layer by use of a multivariate calibration model. Then the OPs of the bottom layer are deduced from relative frequency-domain (FD) reflectance measurements by use of the two-layer FD diffusion model. The method was validated with Monte Carlo FD reflectance profiles and experimental measurements of two-layer phantoms. The results showed that the method is useful for two-layer models with interface depths of >5 mm; the OPs were estimated, within a relatively short time (<1 min), with a mean error of <10% for the Monte Carlo reflectance profiles and with errors of <25% for the phantom measurements.


Subjects
Algorithms ; Connective Tissue/anatomy & histology ; Connective Tissue/physiology ; Image Interpretation, Computer-Assisted/methods ; Optics and Photonics/instrumentation ; Tomography, Optical/instrumentation ; Tomography, Optical/methods ; Equipment Design ; Equipment Failure Analysis ; Monte Carlo Method ; Phantoms, Imaging ; Photons ; Reproducibility of Results ; Scattering, Radiation ; Sensitivity and Specificity