Results 1 - 8 of 8
1.
Integr Zool ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38509845

ABSTRACT

We found that the area of black round or irregularly shaped spots on a tiger's nose increases with age, indicating a positive relationship between age and nose features. We trained a deep learning model on facial and nose image features to identify the age of Amur tigers, using a combination of classification and prediction methods to achieve age determination with an accuracy of 87.81%.
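As a rough illustration of combining classification with a continuous age estimate (the abstract does not describe the network, so the age-bin setup below is a hypothetical sketch, not the paper's model), the expected value over predicted age-class probabilities can serve as the final age:

```python
import numpy as np

def expected_age(class_probs, bin_ages):
    """Fuse classification and prediction: treat the classifier's output over
    discrete age bins as a distribution and take its expected value."""
    class_probs = np.asarray(class_probs, dtype=float)
    bin_ages = np.asarray(bin_ages, dtype=float)
    return float(np.dot(class_probs, bin_ages))

# Hypothetical softmax output over age bins (in years) for one tiger image.
age = expected_age([0.1, 0.6, 0.2, 0.1], [2.0, 4.0, 6.0, 8.0])
```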

2.
Neuroinformatics ; 16(3-4): 411-423, 2018 10.
Article in English | MEDLINE | ID: mdl-29512026

ABSTRACT

Automatic and accurate segmentation of hippocampal structures in medical images is of great importance in neuroscience studies. In multi-atlas based segmentation methods, patch-based methods have been widely studied to alleviate the misalignment that occurs when registering atlases to the target image and thus improve the performance of label fusion. However, the weights assigned to the fused labels are usually computed from predefined features (e.g., image intensities) and are thus not necessarily optimal. Due to the lack of discriminating features, the original feature space defined by image intensities may limit description accuracy. To solve this problem, we propose a patch-based label fusion method with structured discriminant embedding to automatically segment the hippocampal structure from the target image in a voxel-wise manner. Specifically, multi-scale intensity features and texture features are first extracted from each image patch for feature representation. Margin Fisher analysis (MFA) is then applied to the neighboring samples in the atlases for the target voxel, in order to learn a subspace in which the distance between intra-class samples is minimized while the distance between inter-class samples is simultaneously maximized. Finally, a k-nearest neighbor (kNN) classifier is employed in the learned subspace to determine the final label for the target voxel. In the experiments, we evaluate the proposed method on hippocampus segmentation using the ADNI dataset. Both qualitative and quantitative results show that our method outperforms conventional multi-atlas based segmentation methods.
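A toy sketch of the embed-then-vote idea (using plain two-class Fisher LDA with a 1-D projection as a simplified stand-in for the MFA subspace; the data and dimensions are illustrative only, not the paper's features):

```python
import numpy as np

def fisher_direction(X, y):
    """Learn a 1-D discriminant direction (two-class Fisher LDA), a simplified
    stand-in for the learned MFA subspace."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter, regularized for invertibility.
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

def knn_label(w, X_train, y_train, x, k=3):
    """kNN voting in the learned subspace, as in the label-fusion step."""
    d = np.abs(X_train @ w - x @ w)      # distances along the projection
    votes = y_train[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Toy patch features from two tissue classes (illustrative).
X = np.array([[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1],
              [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])
y = np.array([0, 0, 0, 1, 1, 1])
w = fisher_direction(X, y)
label = knn_label(w, X, y, np.array([2.8, 3.0]))
```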


Subject(s)
Hippocampus/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Staining and Labeling/methods , Databases, Factual , Humans
3.
Med Image Anal ; 43: 10-22, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28961451

ABSTRACT

Hippocampal subfields play important roles in many brain activities. However, due to their small structural size, low signal contrast, and the insufficient image resolution of 3T MR, automatic hippocampal subfield segmentation has been little explored. In this paper, we propose an automatic learning-based hippocampal subfield segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting-state fMRI (rs-fMRI). Appearance features and relationship features are extracted to capture the appearance patterns in structural MR images and the connectivity patterns in rs-fMRI, respectively. In the training stage, these extracted features are used to train a structured random forest classifier, which is further iteratively refined in an auto-context model by adopting context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation for each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To the best of our knowledge, this is the first work to address the challenging automatic segmentation of hippocampal subfields using relationship features from rs-fMRI, which are designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets, and the segmentation results are quantitatively compared with manual labels using a leave-one-out strategy, which shows the effectiveness of our method. From the experiments, we find that (a) multi-modality features can significantly improve subfield segmentation performance compared to using one modality alone, and (b) automatic segmentation results using 3T multi-modality MR images can be partially comparable to those using 7T T1 MRI.
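The auto-context refinement loop can be sketched as follows, with a nearest-centroid classifier standing in for the structured random forest and toy cluster points standing in for voxel features (all names and data here are illustrative assumptions):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Toy base classifier standing in for the structured random forest."""
    return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_prob(model, X):
    """Soft prediction: softmax over negative distances to class centroids."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def auto_context(X, y, n_iter=3):
    """Auto-context loop: append the previous iteration's probability map as
    extra 'context' features and retrain, as in the paper's refinement stage."""
    feats = X
    for _ in range(n_iter):
        model = nearest_centroid_fit(feats, y)
        prob = predict_prob(model, feats)
        feats = np.hstack([X, prob])   # appearance + context features
    return model, prob

# Toy voxel features for two subfield classes (illustrative).
X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.0, 5.2]])
y = np.array([0, 0, 1, 1])
model, prob = auto_context(X, y)
```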


Subject(s)
Hippocampus/anatomy & histology , Magnetic Resonance Imaging/methods , Humans
4.
IEEE Trans Biomed Eng ; 64(3): 569-579, 2017 03.
Article in English | MEDLINE | ID: mdl-27187939

ABSTRACT

OBJECTIVE: To obtain high-quality positron emission tomography (PET) images with a low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and the corresponding magnetic resonance imaging (MRI). METHODS: This was achieved by patch-based sparse representation (SR), using training samples with a complete set of MRI, L-PET, and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples have incomplete modalities (i.e., one or two missing modalities) and thus cannot be used in the prediction process. In light of this, we develop a semisupervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. RESULTS: Validation was performed on a real human brain dataset of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. CONCLUSION: This paper proposes a new S-PET prediction method that can significantly improve PET image quality with a low-dose injection. SIGNIFICANCE: The proposed method is favorable for clinical application, since it can decrease the potential radiation risk for patients.
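The underlying patch-based SR baseline (not the SSTDL method itself) can be sketched with a coupled pair of dictionaries and a minimal orthogonal matching pursuit solver; the atoms, patch sizes, and sparsity level below are illustrative assumptions:

```python
import numpy as np

def omp(D, x, n_nonzero=2):
    """Minimal orthogonal matching pursuit: sparse code of x over dictionary D
    (columns are atoms)."""
    residual, idx = x.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def predict_patch(D_in, D_out, x_in, n_nonzero=2):
    """SR prediction: code the input (MRI + L-PET) patch over D_in, then apply
    the same sparse code to the coupled S-PET dictionary."""
    return D_out @ omp(D_in, x_in, n_nonzero)

# Coupled toy dictionaries: columns are paired input/output patch atoms.
D_in = np.eye(3)             # MRI + L-PET patch atoms (illustrative)
D_out = 2.0 * np.eye(3)      # corresponding S-PET patch atoms
pred = predict_patch(D_in, D_out, np.array([1.0, 0.5, 0.0]))
```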


Subject(s)
Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Pattern Recognition, Automated/methods , Positron-Emission Tomography/methods , Radiation Exposure/prevention & control , Supervised Machine Learning , Algorithms , Humans , Image Enhancement/methods , Radiation Dosage , Radiation Protection/methods , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique
5.
Med Phys ; 43(2): 1003-19, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26843260

ABSTRACT

PURPOSE: Labeling meaningful anatomical regions in MR brain images is important for many quantitative brain studies. However, due to the high complexity of brain structures and the ambiguous boundaries between different anatomical regions, anatomical labeling of MR brain images remains quite a challenging task. Many existing label fusion methods rely heavily on appearance information. However, since local anatomy in the human brain is often complex, appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful in identifying an object within a complex scene. In light of this, the authors propose a novel learning-based label fusion method using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). METHODS: In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration, the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label context features and use them in combination with the original appearance features of the target image to refine the labeling. Moreover, to accommodate high inter-subject variation, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of the results from all atlases. RESULTS: The authors comprehensively evaluated their method on the public LONI_LBPA40 and IXI datasets. To quantitatively evaluate labeling accuracy, the authors use the Dice similarity coefficient to measure the degree of overlap. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, significantly outperforming the baseline method (random forests), which attains average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs. CONCLUSIONS: The proposed method achieves the highest labeling accuracy compared to several state-of-the-art methods in the literature.
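The Dice similarity coefficient used for evaluation is simple to state directly; a minimal sketch over toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary label masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy automatic vs. manual label masks (illustrative).
auto = np.array([[1, 1, 0], [0, 1, 0]])
manual = np.array([[1, 1, 0], [0, 0, 1]])
score = dice(auto, manual)
```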


Subject(s)
Brain/anatomy & histology , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging , Humans , Nonlinear Dynamics
6.
Phys Med Biol ; 61(2): 791-812, 2016 Jan 21.
Article in English | MEDLINE | ID: mdl-26732849

ABSTRACT

Positron emission tomography (PET) is widely used in the clinical diagnosis of diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and the corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between the multimodal MR images (or the low-dose PET image) and the standard-dose PET image can be particularly complex, a single mapping step is often insufficient. To this end, an incremental refinement framework is proposed: the predicted standard-dose PET image is further mapped toward the target standard-dose PET image, and the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over iterations to progressively refine the prediction. In addition, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset, and the experimental results show that it outperforms benchmark methods in both qualitative and quantitative measures.
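The incremental refinement idea can be sketched generically: feed each predicted standard-dose estimate back into the next mapping step. The toy halfway-to-target mapping below merely stands in for one patch-based SR prediction step (an illustrative assumption, not the paper's learned mapping):

```python
import numpy as np

def refine(predict_step, x, n_iter=3):
    """Incremental refinement: feed each predicted standard-dose estimate back
    in as the input to the next mapping step, as in the m-SR framework."""
    est = predict_step(x)
    for _ in range(n_iter - 1):
        est = predict_step(est)
    return est

# Toy mapping standing in for one SR prediction step: move the current
# estimate halfway toward a fixed 'target' image (illustrative).
target = np.array([4.0, 2.0])
step = lambda v: v + 0.5 * (target - v)
est = refine(step, np.array([0.0, 0.0]), n_iter=3)
```

Each pass shrinks the remaining error by half, illustrating why repeated mapping refines the prediction rather than a single step.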


Subject(s)
Brain Mapping/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Multimodal Imaging/methods , Positron-Emission Tomography/methods , Brain/diagnostic imaging , Humans , Radiation Dosage
7.
Med Image Comput Comput Assist Interv ; 9351: 719-726, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26942235

ABSTRACT

Labeling MR brain images into anatomically meaningful regions is important for much quantitative brain research. Many existing label fusion methods rely heavily on appearance information. Meanwhile, recent progress in computer vision suggests that context features are very useful in identifying an object within a complex scene. In light of this, we propose a novel learning-based label fusion method using both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate high inter-subject variation, we further extend our learning-based label fusion to a multi-atlas scenario, i.e., we train a random forest for each atlas and then obtain the final labeling result according to the consensus of all atlases. We comprehensively evaluated our method on the LONI-LBPA40 and IXI datasets and achieved the highest labeling accuracy compared to state-of-the-art methods in the literature.

8.
Mach Learn Med Imaging ; 9352: 17-25, 2015 Oct.
Article in English | MEDLINE | ID: mdl-30506064

ABSTRACT

Random forest (RF) has been widely used in learning-based labeling. In an RF, each sample is directed from the root to a leaf based on decisions made at the interior nodes, also called splitting nodes. A splitting node assigns a testing sample to either its left or right child based on a learned splitting function, and the final prediction is the average of the label probability distributions stored in all reached leaf nodes. For ambiguous testing samples, which often lie near splitting boundaries, the conventional splitting function, also referred to as a hard split function, tends to make wrong assignments and hence wrong predictions. To overcome this limitation, we propose a novel soft-split random forest (SSRF) framework to improve the reliability of node splitting and, ultimately, classification accuracy. Specifically, a soft split function assigns a testing sample to both the left and right child nodes with certain probabilities, which effectively reduces the influence of wrong node assignments on prediction accuracy. As a result, each testing sample can reach multiple leaf nodes, and their respective results are fused to obtain the final prediction according to the weights accumulated along the path from the root node to each leaf node. In addition, considering the importance of context information, we adopt a Haar-feature-based context model to iteratively refine the classification map. We comprehensively evaluated our method on two public datasets, for labeling the hippocampus in MR images and labeling three organs in head & neck CT images, respectively. Compared with the hard-split RF (HSRF), our method achieved a notable improvement in labeling accuracy.
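A minimal sketch of soft-split routing for a single tree (the sigmoid gate, tree layout, and leaf distributions below are illustrative assumptions; the paper's learned split functions are more involved):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def soft_split_predict(node, x, weight=1.0):
    """Route a sample down BOTH children of every split, with probabilities
    from a sigmoid of the signed margin to the threshold, then fuse the leaf
    distributions weighted by the accumulated path probability."""
    if "leaf" in node:
        return weight * np.asarray(node["leaf"], float)
    margin = x[node["feature"]] - node["threshold"]
    p_right = sigmoid(margin / node["softness"])
    return (soft_split_predict(node["left"], x, weight * (1 - p_right))
            + soft_split_predict(node["right"], x, weight * p_right))

# Hypothetical one-split tree over a single feature; leaves store class
# probability distributions.
tree = {"feature": 0, "threshold": 0.0, "softness": 1.0,
        "left": {"leaf": [0.9, 0.1]}, "right": {"leaf": [0.2, 0.8]}}
probs = soft_split_predict(tree, np.array([0.0]))
```

A sample sitting exactly on the threshold is split evenly between both leaves, so its prediction blends both distributions instead of committing to a possibly wrong branch; a sample far from the boundary behaves like a hard split.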
