Results 1 - 20 of 22
1.
Bioengineering (Basel) ; 11(5)2024 May 10.
Article in English | MEDLINE | ID: mdl-38790344

ABSTRACT

The analysis of body motion is a valuable tool in the assessment and diagnosis of gait impairments, particularly those related to neurological disorders. In this study, we propose a novel automated system that leverages artificial intelligence to efficiently analyze gait impairment from video-recorded images. The proposed methodology encompasses three key aspects. First, we generate a novel one-dimensional representation of each silhouette image, termed a silhouette sinogram, by computing the distance and angle between the centroid and each detected boundary point. This enables us to exploit relative variations in motion at different angles to detect gait patterns. Second, a one-dimensional convolutional neural network (1D CNN) model is developed and trained on consecutive silhouette sinogram signals to capture spatiotemporal information via assisted knowledge learning. This allows the network to capture a broader context and the temporal dependencies within the gait cycle, enabling a more accurate diagnosis of gait abnormalities. Training and evaluation were conducted on the publicly accessible INIT GAIT database. Finally, two evaluation schemes are employed: one operating on individual silhouette frames and the other operating at the subject level using a majority-voting technique. The proposed method showed marked improvements in gait impairment recognition, with overall F1-scores of 100%, 90.62%, and 77.32% when evaluated on sinogram signals, and 100%, 100%, and 83.33% when evaluated at the subject level, for cases involving two, four, and six gait abnormalities, respectively. In conclusion, by comparing the observed locomotor function to the conventional gait pattern typically seen in healthy individuals, the proposed approach allows for a quantitative and non-invasive evaluation of locomotion.
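The silhouette-sinogram construction described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the authors' code: the boundary test, the number of angle bins, and the max-per-bin aggregation are all assumptions.

```python
import numpy as np

def silhouette_sinogram(mask, n_angles=64):
    """Convert a binary silhouette mask into a 1-D 'sinogram': for each
    angle bin, the distance from the centroid to the farthest boundary
    pixel lying in that direction."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # centroid of the silhouette
    # Boundary pixels: foreground pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    dist = np.hypot(by - cy, bx - cx)             # centroid-to-boundary distance
    ang = np.arctan2(by - cy, bx - cx)            # angle of each boundary pixel
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sino = np.zeros(n_angles)
    np.maximum.at(sino, bins, dist)               # keep max distance per angle bin
    return sino

# Sanity check: a filled disc of radius 10 should give distances near 10
yy, xx = np.mgrid[:41, :41]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 10 ** 2
sino = silhouette_sinogram(disc)
```

For a filled disc, every non-zero bin sits near the radius, which is a quick check that the polar encoding behaves as described.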

2.
Diagnostics (Basel) ; 12(11)2022 Nov 16.
Article in English | MEDLINE | ID: mdl-36428875

ABSTRACT

Blood cells carry important information that reflects a person's current state of health. Identifying the different types of blood cells in a timely and precise manner is essential to reducing the infection risks people face on a daily basis. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework built on transfer learning with a convolutional neural network to rapidly and automatically identify blood cells in an eight-class identification scenario: basophil, eosinophil, erythroblast, immature granulocyte, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: ADAM, RMSprop (RMSP), and stochastic gradient descent (SGD). Meanwhile, the performance of BCNet was directly compared on the same dataset with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, the BCNet framework demonstrated better classification performance with ADAM and RMSP. The best performance was achieved with the RMSP optimizer: 98.51% accuracy and 96.24% F1-score. Compared with the baseline model, BCNet improved prediction accuracy by 1.94%, 3.33%, and 1.65% with the ADAM, RMSP, and SGD optimizers, respectively. BCNet also outperformed DenseNet, ResNet, Inception, and MobileNet in the testing time for a single blood-cell image, by 10.98, 4.26, 2.03, and 0.21 ms, respectively. In comparison with the most recent deep learning models, BCNet generated encouraging outcomes. Such a recognition rate, improving the detection performance for blood cells, is essential for the advancement of healthcare facilities.

3.
Sensors (Basel) ; 22(13)2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35808433

ABSTRACT

One of the most promising research areas in the healthcare industry and the scientific community is the application of AI to real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allow rapid learning progress and improved medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain, such as investigating the independence among the extracted high-level deep features. This work tackles two challenges that persist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale images. To achieve this goal, two image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the two processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by the deep learning models. A new hybrid processing technique based on logistic regression (LR) and principal component analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for subsequent use in classification. The proposed CAD system was examined on two public benchmark datasets, INbreast and mini-MIAS, and achieved peak accuracies of 98.60% and 98.80%, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
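The LR-PCA idea — project features onto principal components, then let a logistic-regression criterion decide which components are worth keeping — can be sketched as follows. The paper's exact selection rule is not given here, so the per-component accuracy score, the tiny gradient-descent LR, and the 0.7 threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca(X, k):
    """Project centred data onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def lr_score(z, y, steps=200, lr=0.1):
    """Fit a 1-D logistic regression on component z; return training accuracy."""
    z = (z - z.mean()) / (z.std() + 1e-12)
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w * z + b)))
        w -= lr * np.mean((p - y) * z)           # gradient of the cross-entropy
        b -= lr * np.mean(p - y)
    p = 1 / (1 + np.exp(-(w * z + b)))
    return np.mean((p > 0.5) == y)

# Toy features: 200 samples x 10 dims; the class signal lives along feature 0,
# so only the leading principal component should be discriminative.
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 0] += 4.0 * y
Z = pca(X, 5)
scores = np.array([lr_score(Z[:, j], y) for j in range(5)])
keep = np.where(scores > 0.7)[0]                 # keep only discriminative PCs
```

On this toy data only the first component carries class information, so it alone should survive the selection step.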


Subject(s)
Breast Neoplasms; Neural Networks, Computer; Breast Neoplasms/diagnostic imaging; Female; Humans; Logistic Models; Machine Learning; Mammography/methods
4.
PLoS One ; 15(3): e0230409, 2020.
Article in English | MEDLINE | ID: mdl-32208428

ABSTRACT

Machine learning algorithms are increasingly being applied to classify and/or predict the onset of neurodegenerative diseases, including Alzheimer's disease (AD); this can be attributed to the abundance of data and powerful computers. The objective of this work was to deliver a robust classification system for AD and mild cognitive impairment (MCI) against healthy controls (HC) using a low-cost network with a shallow architecture and modest processing requirements. The dataset used in this study was downloaded from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The classification methodology was a convolutional neural network (CNN), with diffusion maps and gray-matter (GM) volumes as the input images. The numbers of scans included were 185, 106, and 115 for HC, MCI, and AD, respectively. A ten-fold cross-validation scheme was adopted; the stacked mean diffusivity (MD) and GM volume produced an AUC of 0.94 and 0.84, an accuracy of 93.5% and 79.6%, a sensitivity of 92.5% and 62.7%, and a specificity of 93.9% and 89% for AD/HC and MCI/HC classification, respectively. This work elucidates the impact of incorporating data from different imaging modalities, i.e., structural magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI), with deep learning employed for classification. To the best of our knowledge, this is the first study to assess the impact of having more than one scan per subject and to propose the proper maneuver to confirm the robustness of the system. The results were competitive with the existing literature, paving the way for treatments that could slow down the progression of AD or prevent it.


Subject(s)
Alzheimer Disease/diagnosis; Cognitive Dysfunction/diagnosis; Diffusion Tensor Imaging/methods; Magnetic Resonance Imaging/methods; Aged; Algorithms; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Cognitive Dysfunction/diagnostic imaging; Cognitive Dysfunction/pathology; Deep Learning; Disease Progression; Female; Gray Matter/diagnostic imaging; Gray Matter/physiology; Hippocampus/diagnostic imaging; Hippocampus/pathology; Humans; Image Interpretation, Computer-Assisted/methods; Machine Learning; Male; Neural Networks, Computer; Neuroimaging/methods; Support Vector Machine
5.
Article in English | MEDLINE | ID: mdl-26737462

ABSTRACT

Cardiac arrhythmia is a serious disorder of the heart's electrical activity that may have fatal consequences, especially if not detected early. This has motivated the development of automated arrhythmia detection systems that can detect arrhythmias early and recognize them accurately, thus significantly improving the chances of patient survival. In this paper, we propose an improved arrhythmia detection system, designed to identify five different arrhythmia types, based on nonlinear dynamical modeling of electrocardiogram signals. The new approach introduces a novel distance-series domain, derived from the reconstructed phase space, as a transform space for the signals that is explored using classical features. The performance measures showed that the proposed system outperforms state-of-the-art methods, achieving 98.7% accuracy, 99.54% sensitivity, 99.42% specificity, 98.19% positive predictive value, and 99.85% negative predictive value.
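A minimal sketch of the phase-space step: delay-embed the 1-D signal and derive a distance series from the reconstructed trajectory. The paper's exact distance-series definition is not given in this abstract, so distance-to-centroid is used purely as a plausible stand-in, and the embedding dimension and delay are arbitrary.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct the phase space of a 1-D signal by delay embedding:
    each row is (x[n], x[n+tau], ..., x[n+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def distance_series(x, dim=3, tau=5):
    """One possible 'distance series': Euclidean distance of every
    phase-space point from the trajectory's centroid."""
    ps = delay_embed(x, dim, tau)
    return np.linalg.norm(ps - ps.mean(axis=0), axis=1)

# Toy quasi-periodic signal standing in for an ECG trace
t = np.linspace(0, 2 * np.pi, 500)
ecg_like = np.sin(5 * t) + 0.3 * np.sin(15 * t)
d = distance_series(ecg_like)
```

The resulting 1-D distance series is what classical features would then be extracted from.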


Subject(s)
Algorithms; Arrhythmias, Cardiac/classification; Arrhythmias, Cardiac/diagnosis; Electrocardiography; Fourier Analysis; Heart Rate/physiology; Humans; Signal Processing, Computer-Assisted
6.
Biomed Eng Online ; 13: 36, 2014 Apr 04.
Article in English | MEDLINE | ID: mdl-24708647

ABSTRACT

BACKGROUND: The signals acquired in brain-computer interface (BCI) experiments usually involve complicated sampling, artifact, and noise conditions. This has mandated the use of several preprocessing strategies to extract the meaningful components of the measured signals before further processing. Despite the success of present preprocessing methods in improving the reliability of BCI, there is still room to boost performance further. METHODS: A new preprocessing method for denoising P300-based brain-computer interface data is presented that allows better performance with fewer channels and blocks. The new denoising technique is based on a modified version of spectral subtraction denoising and works on each temporal signal channel independently, thus offering seamless integration with existing preprocessing and allowing low channel counts to be used. RESULTS: The new method was verified using experimental data and compared with the classification results of the same data without denoising and with denoising using a current wavelet-shrinkage-based technique. Enhanced performance across different experiments, quantitatively assessed using classification block accuracy as well as bit-rate estimates, was confirmed. CONCLUSION: The new preprocessing method based on spectral subtraction denoising offers superior performance to existing methods and has potential for practical utility as a new standard preprocessing block in BCI signal processing.
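Classical spectral subtraction, the core that the method above modifies, can be sketched per channel as follows: subtract an estimate of the noise power spectrum from the signal's power spectrum and resynthesise with the original phase. The noise-PSD estimate, subtraction factor, and spectral floor here are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def spectral_subtract(x, noise_psd, alpha=1.0, floor=0.01):
    """Spectral subtraction denoising of a 1-D signal channel."""
    X = np.fft.rfft(x)
    power = np.abs(X) ** 2
    # Subtract the noise power spectrum, never going below a small floor
    clean_power = np.maximum(power - alpha * noise_psd, floor * power)
    X_clean = np.sqrt(clean_power) * np.exp(1j * np.angle(X))  # keep noisy phase
    return np.fft.irfft(X_clean, n=len(x))

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 16 * t / n)          # clean on-bin sinusoid
noisy = signal + 0.5 * rng.normal(size=n)
# For white noise of variance 0.25, E|X_k|^2 = n * 0.25 per rfft bin
noise_psd = np.full(n // 2 + 1, 0.25 * n)
denoised = spectral_subtract(noisy, noise_psd)
err_noisy = np.mean((noisy - signal) ** 2)
err_denoised = np.mean((denoised - signal) ** 2)
```

With a flat (white) noise-PSD estimate the reconstruction error drops well below that of the raw noisy channel.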


Subject(s)
Brain-Computer Interfaces; Signal-To-Noise Ratio; Statistics as Topic/methods; Subtraction Technique; Electroencephalography; Signal Processing, Computer-Assisted
7.
Theor Biol Med Model ; 9: 34, 2012 Aug 06.
Article in English | MEDLINE | ID: mdl-22867264

ABSTRACT

BACKGROUND: Discovering new biomarkers plays a major role in improving the early diagnosis of hepatocellular carcinoma (HCC). The experimental determination of biomarkers requires considerable time and money, which motivated the use of in-silico prediction of biomarkers in this work to reduce the number of experiments required for detecting new ones. This is achieved by extracting the most representative genes from HCC microarrays. RESULTS: In this work, we provide a method for extracting the differentially expressed, up-regulated genes that can be considered candidate biomarkers in high-throughput microarrays of HCC. We examine the power of several gene selection methods (Pearson's correlation coefficient, cosine coefficient, Euclidean distance, mutual information, and entropy with different estimators) in selecting informative genes. A biological interpretation of the highly ranked genes was performed using the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways, ENTREZ, and DAVID (Database for Annotation, Visualization, and Integrated Discovery) databases. The top ten genes selected using Pearson's correlation coefficient and the cosine coefficient contained six genes that have been implicated in the genesis of cancer (often multiple cancers) in previous studies. Fewer such genes were obtained by the other methods (four genes using mutual information, three using Euclidean distance, and only one using entropy). A better result was obtained by a hybrid approach based on intersecting the highly ranked genes in the outputs of all investigated methods. This hybrid combination yielded seven genes (two for HCC and five for other types of cancer) among the top ten genes of the intersected list. CONCLUSIONS: To strengthen the effectiveness of the univariate selection methods, we propose a hybrid approach that intersects several of these methods in a cascaded manner. This approach surpasses each univariate selection method used individually, according to the biological interpretation and the examination of gene expression signal profiles.
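The univariate ranking and the intersection step can be sketched with two of the listed measures (Pearson's correlation and the cosine coefficient) on a toy expression matrix; the real pipeline ranks genes on actual HCC microarrays and intersects all five measures.

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_genes(expr, labels, top_k=10):
    """Rank genes by |Pearson correlation| and by |cosine similarity|
    with the class labels; return the intersection of both top-k lists
    (the hybrid, cascaded selection sketched in the text)."""
    y = labels - labels.mean()
    Xc = expr - expr.mean(axis=0)
    pearson = np.abs(Xc.T @ y) / (np.linalg.norm(Xc, axis=0) *
                                  np.linalg.norm(y) + 1e-12)
    cosine = np.abs(expr.T @ labels) / (np.linalg.norm(expr, axis=0) *
                                        np.linalg.norm(labels) + 1e-12)
    top_p = set(np.argsort(pearson)[-top_k:])
    top_c = set(np.argsort(cosine)[-top_k:])
    return top_p & top_c

# Toy microarray: 60 samples x 100 genes; genes 0-4 are up-regulated in tumours
labels = np.array([0] * 30 + [1] * 30, dtype=float)
expr = rng.normal(size=(60, 100))
expr[labels == 1, :5] += 2.0
selected = rank_genes(expr, labels)
```

Genes that both measures rank highly survive the intersection, mimicking the cascaded hybrid on a case where the truly up-regulated genes are known.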


Subject(s)
Biomarkers, Tumor/genetics; Carcinoma, Hepatocellular/genetics; Liver Neoplasms/genetics; Oncogenes; Artificial Intelligence; Data Mining; Databases, Genetic/statistics & numerical data; High-Throughput Nucleotide Sequencing; Humans; Models, Genetic; Oligonucleotide Array Sequence Analysis; Up-Regulation
8.
Theor Biol Med Model ; 8: 39, 2011 Oct 22.
Article in English | MEDLINE | ID: mdl-22018164

ABSTRACT

BACKGROUND: Understanding gene interactions in complex living systems can be seen as the ultimate goal of the systems biology revolution. Hence, to elucidate disease ontology fully and to reduce the cost of drug development, gene regulatory networks (GRNs) have to be constructed. During the last decade, many GRN inference algorithms based on genome-wide data have been developed to unravel the complexity of gene regulation. Time-series transcriptomic data measured by genome-wide DNA microarrays are traditionally used for GRN modelling. One of the major problems with microarrays is that a dataset consists of relatively few time points with respect to the large number of genes; dimensionality is thus one of the central problems in GRN modelling. RESULTS: In this paper, we develop a biclustering function enrichment analysis toolbox (BicAT-plus) to study the effect of biclustering in reducing data dimensions. The network generated by our system was validated against available interaction databases and compared with previous methods, and the results demonstrated the performance of our proposed method. CONCLUSIONS: Because of the sparse nature of GRNs, the results of biclustering techniques differ significantly from those of previous methods.


Subject(s)
Gene Regulatory Networks/genetics; Saccharomyces cerevisiae/genetics; Algorithms; Bayes Theorem; Cluster Analysis; Databases, Genetic; Linear Models; ROC Curve; Reproducibility of Results
9.
Theor Biol Med Model ; 8: 11, 2011 Apr 27.
Article in English | MEDLINE | ID: mdl-21524280

ABSTRACT

BACKGROUND: Bioinformatics can be used to predict protein function, leading to an understanding of cellular activities; equally weighted protein-protein interactions (PPI) are normally used to predict such protein functions. The present study provides a weighting strategy for PPI to improve the prediction of protein functions. The weights depend on the local and global network topologies and on the number of experimental verification methods. The proposed methods were applied to the yeast proteome and integrated with the neighbor counting method to predict the functions of unknown proteins. RESULTS: A new technique to weight interactions in the yeast proteome is presented. The weights are related to the network topology (local and global) and to the number of identifying methods, and the results revealed improvements in the sensitivity and specificity of prediction in terms of cellular role and cellular location. This method (new weights) was compared with a method that uses uniformly weighted interactions and was shown to be superior. CONCLUSIONS: A new method for weighting the interactions in protein-protein interaction networks is presented. Experimental results on yeast proteins demonstrated that these weighted interactions, integrated with the neighbor counting method, improved the sensitivity and specificity of prediction for two functional categories: cellular role and cell location.
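The weighted neighbor-counting step can be sketched on a toy network: each annotated neighbor votes for its functions with the weight of the connecting edge. The weights below are arbitrary placeholders; in the study they are derived from local/global topology and the number of experimental verification methods.

```python
from collections import Counter

def predict_functions(graph, annotations, protein, top_n=2):
    """Weighted neighbour counting: annotated neighbours vote for their
    functions with the weight of the connecting edge; the top-scoring
    functions are predicted for the unannotated protein."""
    votes = Counter()
    for neighbour, weight in graph.get(protein, []):
        for func in annotations.get(neighbour, []):
            votes[func] += weight
    return [f for f, _ in votes.most_common(top_n)]

# Toy weighted interaction network around an unannotated protein P1
graph = {
    "P1": [("P2", 0.9), ("P3", 0.4), ("P4", 0.8)],
}
annotations = {
    "P2": ["transport"],
    "P3": ["signalling"],
    "P4": ["transport", "signalling"],
}
pred = predict_functions(graph, annotations, "P1", top_n=1)
```

Here "transport" accumulates 0.9 + 0.8 = 1.7 against 1.2 for "signalling", so it wins the vote; with uniform weights the two would tie at 2, which is exactly the ambiguity the weighting is meant to break.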


Subject(s)
Protein Interaction Mapping/methods; Saccharomyces cerevisiae Proteins/metabolism; Saccharomyces cerevisiae/metabolism; Molecular Sequence Annotation; Protein Binding; Saccharomyces cerevisiae/cytology; Signal Transduction
11.
Article in English | MEDLINE | ID: mdl-19963770

ABSTRACT

A new method is presented to identify abnormal heartbeats in electrocardiogram (ECG) signals based on Prony's modeling algorithm and a neural network. The ECG signal is written as a finite sum of exponentials determined by their poles, and a neural network is used to identify the ECG signal from the calculated poles. A multi-layer feed-forward neural network trained with back-propagation is proposed as the classification model to categorize the beats into one of five types: normal sinus rhythm (NSR), ventricular couplet (VC), ventricular tachycardia (VT), ventricular bigeminy (VB), and ventricular fibrillation (VF).
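The linear-prediction step of Prony's method, which yields the poles that would be fed to the classifier, can be sketched as follows (the neural-network stage is omitted; the signal and model order are toy choices):

```python
import numpy as np

def prony_poles(x, p):
    """Estimate the p poles of a signal modelled as a sum of p
    exponentials, via the linear-prediction step of Prony's method:
    fit x[n] = -a1*x[n-1] - ... - ap*x[n-p], then take the roots of
    the characteristic polynomial z^p + a1*z^(p-1) + ... + ap."""
    N = len(x)
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    b = x[p:N]
    a, *_ = np.linalg.lstsq(A, -b, rcond=None)   # prediction coefficients
    return np.roots(np.concatenate(([1.0], a)))  # poles of the model

# Two real decaying exponentials: poles at 0.9 and 0.5 exactly
n = np.arange(50)
x = 2.0 * 0.9 ** n + 1.0 * 0.5 ** n
poles = prony_poles(x, 2)
```

For a noiseless sum of two exponentials the recovered poles match the true decay rates to machine precision.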


Subject(s)
Arrhythmias, Cardiac/diagnosis; Electrocardiography/methods; Algorithms; Arrhythmia, Sinus/physiopathology; Arrhythmias, Cardiac/physiopathology; Computer Simulation; Heart Conduction System/physiopathology; Heart Rate/physiology; Heart Ventricles/physiopathology; Humans; Models, Cardiovascular; Nerve Net; Neurons/physiology; Ventricular Fibrillation/diagnosis; Ventricular Fibrillation/physiopathology
12.
Article in English | MEDLINE | ID: mdl-18002734

ABSTRACT

The model-based approach for detecting fMRI activations involves assumptions about the hemodynamic response function. If such assumptions are incorrect or incomplete, the result may be biased estimates of the true response, posing a significant obstacle to the practicality of the technique. In this work, a simple yet robust model-free technique is proposed for detecting fMRI activations. The idea is to convert one of the model-based fMRI tools, canonical correlation analysis (CCA), into a model-free method with the help of independent component analysis (ICA). In particular, ICA provides accurate reference functions for CCA instead of the harmonics originally used. This combination eliminates the limitations of both techniques and provides a model-free approach for data analysis. Results from both numerical simulations and real fMRI data sets confirm the practicality and robustness of the proposed method.
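The CCA core can be sketched with the standard SVD formulation (canonical correlations are the singular values of the product of orthonormal bases of the two centred data spaces). The ICA step that would supply the reference time courses is replaced here by a hand-made reference, purely for illustration.

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    from the SVD of the product of their orthonormalised (centred) bases."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

rng = np.random.default_rng(5)
n = 200
ref = np.sin(2 * np.pi * np.arange(n) / 20)       # reference time course
# A noisy 'voxel' time course correlated with the reference, and a
# reference set containing the true course plus a nuisance regressor
X = (ref + 0.2 * rng.normal(size=n))[:, None]
Y = np.column_stack([ref, rng.normal(size=n)])
r = canonical_corr(X, Y)
```

An activated voxel yields a canonical correlation near 1 against the reference set, which is the detection statistic the combined ICA+CCA scheme thresholds.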


Subject(s)
Algorithms; Brain Mapping/methods; Evoked Potentials, Motor/physiology; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Motor Cortex/physiology; Pattern Recognition, Automated/methods; Humans; Image Enhancement/methods; Models, Neurological; Principal Component Analysis; Reproducibility of Results; Sensitivity and Specificity; Statistics as Topic
13.
Magn Reson Med ; 56(6): 1182-91, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17089380

ABSTRACT

A simple iterative algorithm, termed deconvolution-interpolation gridding (DING), is presented to address the problem of reconstructing images from arbitrarily-sampled k-space. The new algorithm solves a sparse system of linear equations that is equivalent to a deconvolution of the k-space with a small window. The deconvolution operation results in increased reconstruction accuracy without grid subsampling, at some cost to computational load. By avoiding grid oversampling, the new solution saves memory, which is critical for 3D trajectories. The DING algorithm does not require the calculation of a sampling density compensation function, which is often problematic. DING's sparse linear system is inverted efficiently using the conjugate gradient (CG) method. The reconstruction of the gridding system matrix is simple and fast, and no regularization is needed. This feature renders DING suitable for situations where the k-space trajectory is changed often or is not known a priori, such as when patient motion occurs during the scan. DING was compared with conventional gridding and an iterative reconstruction method in computer simulations and in vivo spiral MRI experiments. The results demonstrate a stable performance and reduced root mean square (RMS) error for DING in different k-space trajectories.


Asunto(s)
Algoritmos , Encéfalo/anatomía & histología , Aumento de la Imagen/métodos , Interpretación de Imagen Asistida por Computador/métodos , Imagenología Tridimensional/métodos , Imagen por Resonancia Magnética/métodos , Humanos , Análisis Numérico Asistido por Computador , Fantasmas de Imagen , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
14.
Int J Biomed Imaging ; 2006: 49378, 2006.
Article in English | MEDLINE | ID: mdl-23165034

ABSTRACT

Image reconstruction from nonuniformly sampled spatial frequency domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation to the rows of this matrix is applied to convert the matrix into a sparse one up to a certain chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility.
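The conjugate gradient solver at the heart of this (and the previous) reconstruction method can be sketched as follows, with a small dense symmetric positive-definite matrix standing in for the much larger sparsified imaging system:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b for symmetric positive-definite A with the conjugate
    gradient method -- no explicit inverse of A is ever formed, which is
    what makes the approach viable for huge reconstruction systems."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD test system standing in for the imaging matrix
rng = np.random.default_rng(3)
M = rng.normal(size=(20, 20))
A = M.T @ M + 20 * np.eye(20)     # symmetric positive definite
b = rng.normal(size=20)
x = conjugate_gradient(A, b)
residual = np.linalg.norm(A @ x - b)
```

In the reconstruction setting, the matrix-vector product `A @ p` would be implemented implicitly through the (sparse, transformed) system, which is the only access to `A` that CG requires.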

15.
IEEE Trans Biomed Eng ; 52(1): 127-31, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15651573

ABSTRACT

We develop a simple yet effective technique for motion artifact suppression in ultrasound images reconstructed from multiple acquisitions. Assuming a rigid-body motion model, a navigator echo is computed for each acquisition and then registered to estimate the motion in between acquisitions. By detecting this motion, it is possible to compensate for it in the reconstruction step to obtain images that are free of lateral motion artifacts. The theory and practical implementation details are described and the performance is analyzed using computer simulations as well as real data. The results indicate the potential of the new method for real-time implementation in lower cost ultrasound imaging systems.
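The registration step can be illustrated in 1-D: estimate the shift between a reference navigator and a later one from the peak of their circular cross-correlation, computed via the FFT. This is a common way to implement the 1-D part; the actual system estimates full rigid-body motion from navigator echoes.

```python
import numpy as np

def estimate_shift(nav_ref, nav):
    """Estimate the integer 1-D translation between two navigator echoes
    by locating the peak of their circular cross-correlation."""
    xc = np.fft.ifft(np.fft.fft(nav) * np.conj(np.fft.fft(nav_ref)))
    shift = int(np.argmax(np.abs(xc)))
    if shift > len(nav) // 2:        # wrap large lags to negative shifts
        shift -= len(nav)
    return shift

# Toy navigator profile and a copy shifted by 7 samples
x = np.exp(-0.5 * ((np.arange(256) - 128) / 10.0) ** 2)
moved = np.roll(x, 7)
shift = estimate_shift(x, moved)
```

Once the per-acquisition shifts are known, the reconstruction can compensate for them before combining the acquisitions, which is what removes the lateral motion artifacts.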


Subject(s)
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Movement; Subtraction Technique; Ultrasonography/methods; Pattern Recognition, Automated/methods; Phantoms, Imaging; Reproducibility of Results; Sensitivity and Specificity; Ultrasonography/instrumentation
16.
IEEE Trans Biomed Eng ; 51(11): 1944-53, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15536896

ABSTRACT

A new adaptive signal-preserving technique for noise suppression in event-related functional magnetic resonance imaging (fMRI) data is proposed based on spectral subtraction. The proposed technique estimates a parametric model for the power spectrum of random noise from the acquired data based on the characteristics of the Rician statistical model. This model is subsequently used to estimate a noise-suppressed power spectrum for any given pixel time course by simple subtraction of power spectra. The new technique is tested using computer simulations and real data from event-related fMRI experiments. The results show the potential of the new technique in suppressing noise while preserving the other deterministic components in the signal. Moreover, we demonstrate that further analysis using principal component analysis and independent component analysis shows a significant improvement in both convergence and clarity of results when the new technique is used. Given its simple form, the new method does not change the statistical characteristics of the signal or cause correlated noise to be present in the processed signal. This suggests the value of the new technique as a useful preprocessing step for fMRI data analysis.


Subject(s)
Algorithms; Brain Mapping/methods; Brain/physiology; Evoked Potentials/physiology; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Subtraction Technique; Artifacts; Artificial Intelligence; Brain/anatomy & histology; Computer Simulation; Feedback; Humans; Image Enhancement/methods; Information Storage and Retrieval/methods; Models, Biological; Models, Statistical; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Stochastic Processes
17.
Magn Reson Med ; 51(2): 403-7, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14755668

ABSTRACT

A modification of the classical navigator echo (NAV) technique is presented whereby both 2D translational motion components are computed from a single navigator line. Instead of acquiring the NAV at the center of the k-space, a kx line is acquired off-center in the phase-encoding (ky) direction as a floating NAV (FNAV). It is shown that the translational motion in both the readout and phase-encoding directions can be computed from this line. The algorithm used is described in detail and verified experimentally. The new technique can be readily implemented to replace classic NAV in MRI sequences, with little to no additional cost or complexity. The new method can help suppress 2D translational motion and provide more accurate motion estimates for other motion-suppression techniques, such as the diminishing variance algorithm.


Subject(s)
Algorithms; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Artifacts; Computer Simulation; Motion
18.
Magn Reson Med ; 51(2): 423-7, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14755672

ABSTRACT

In this work, the effect of fluid-attenuated inversion recovery (FLAIR) on measured diffusion anisotropy was investigated in gray matter. DTI data were obtained with and without FLAIR in six normal volunteers. The application of FLAIR was experimentally demonstrated to lead to a consistent increase in fractional anisotropy (FA) in gray-matter regions, which was attributed to suppressed partial-volume effects from CSF. In addition to these experimental results, Monte Carlo simulations were performed to ascertain the effect of noise on the measured FA under the experimental conditions of this study. The experimentally observed effect of noise was corroborated by the simulation, indicating that the increase in measured FA was due not to a noise-related bias but to an actual increase in diffusion anisotropy. This enhanced measurement of diffusion anisotropy can potentially be used to differentiate directionally dependent structures and to track fibers in gray matter.
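Fractional anisotropy, the quantity being compared with and without FLAIR, is computed from the three eigenvalues of the diffusion tensor by the standard formula, which the following snippet implements:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||,
    where MD is the mean diffusivity. FA is 0 for isotropic diffusion
    and approaches 1 when diffusion is confined to one direction."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

fa_iso = fractional_anisotropy([1.0, 1.0, 1.0])    # isotropic voxel
fa_fiber = fractional_anisotropy([1.7, 0.2, 0.2])  # strongly anisotropic voxel
```

CSF partial-volume contamination pulls the eigenvalues toward a large isotropic component, lowering FA; suppressing CSF with FLAIR is what lets the underlying anisotropy show through.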


Subject(s)
Brain Mapping/methods; Diffusion Magnetic Resonance Imaging; Image Enhancement; Image Processing, Computer-Assisted; Anisotropy; Humans; Monte Carlo Method
19.
Appl Opt ; 42(31): 6398-411, 2003 Nov 01.
Article in English | MEDLINE | ID: mdl-14649284

ABSTRACT

We have investigated a method for solving the inverse problem of determining the optical properties of a two-layer turbid model. The method is based on deducing the optical properties (OPs) of the top layer from the absolute spatially resolved reflectance that results from photon migration within only the top layer by use of a multivariate calibration model. Then the OPs of the bottom layer are deduced from relative frequency-domain (FD) reflectance measurements by use of the two-layer FD diffusion model. The method was validated with Monte Carlo FD reflectance profiles and experimental measurements of two-layer phantoms. The results showed that the method is useful for two-layer models with interface depths of >5 mm; the OPs were estimated, within a relatively short time (<1 min), with a mean error of <10% for the Monte Carlo reflectance profiles and with errors of <25% for the phantom measurements.


Subject(s)
Algorithms; Connective Tissue/anatomy & histology; Connective Tissue/physiology; Image Interpretation, Computer-Assisted/methods; Optics and Photonics/instrumentation; Tomography, Optical/instrumentation; Tomography, Optical/methods; Equipment Design; Equipment Failure Analysis; Monte Carlo Method; Phantoms, Imaging; Photons; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity
20.
IEEE Trans Biomed Eng ; 49(9): 1059-67, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12214880

ABSTRACT

A new system is proposed for tracking sensitive areas in the retina for computer-assisted laser treatment of choroidal neovascularization (CNV). The system consists of a fundus camera in red-free illumination mode interfaced to a computer that allows real-time capture of video input. The first image acquired is used as the reference image and is utilized by the treating physician for treatment planning. A grid of seed contours over the whole image is initiated and allowed to deform by splitting and/or merging according to preset criteria until the whole vessel tree is demarcated. The image is then filtered using a one-dimensional Gaussian filter in two perpendicular directions to extract the core areas of the vessels. Faster segmentation is obtained for subsequent images by automatic registration, which compensates for eye movements and saccades. An efficient registration technique is developed whereby landmarks are detected in the reference frame and then tracked in subsequent frames; from the relation between these two sets of corresponding points, an optimal transformation is obtained. The implementation details of the proposed strategy are presented, and the results indicate that it is suitable for real-time determination and tracking of treatment positions.
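The "optimal transformation" between two sets of corresponding landmarks can be computed in closed form. The sketch below assumes a rigid (rotation + translation) motion model and uses the standard SVD-based least-squares solution; the paper's exact transformation model is not specified in this abstract.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid (rotation + translation) alignment of 2-D
    landmark sets via the SVD-based closed-form solution."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

# Landmarks rotated by 10 degrees and shifted, then recovered exactly
rng = np.random.default_rng(4)
src = rng.uniform(0, 100, size=(6, 2))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([5.0, -3.0])
R_est, t_est = rigid_transform(src, dst)
err = np.abs(src @ R_est.T + t_est - dst).max()
```

Applying the recovered transformation to each incoming frame is what keeps the treatment positions registered to the reference image despite eye movement.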


Subject(s)
Eye Movements/physiology; Image Enhancement/methods; Laser Coagulation/methods; Retinal Neovascularization/diagnosis; Retinal Neovascularization/surgery; Retinal Vessels/anatomy & histology; Algorithms; Choroidal Neovascularization/diagnosis; Choroidal Neovascularization/etiology; Choroidal Neovascularization/surgery; Chronic Disease; Coloring Agents; Computer Simulation; Diabetic Retinopathy/complications; False Positive Reactions; Humans; Indocyanine Green; Laser Coagulation/instrumentation; Microscopy, Video/methods; Movement; Ophthalmoscopy/methods; Pattern Recognition, Automated; Retinal Neovascularization/etiology; Retinal Vessels/physiopathology