1.
Sensors (Basel) ; 22(17)2022 Sep 03.
Article in English | MEDLINE | ID: mdl-36081120

ABSTRACT

Color is an essential feature in histogram-based matching and can be extracted as statistical data during the comparison process. Although the usefulness of color features in histogram-based techniques has been proven, position information is lost during the matching process. We present a conceptually simple and effective template matching method called multiple-layered absent color indexing (ABC-ML). Apparent and absent color histograms are obtained from the original color histogram, where the absent colors belong to low-frequency or vacant bins. To determine the color range of the compared images, we propose a total color space (TCS) that fixes the operating range of the histogram bins. Furthermore, we invert the absent colors using a threshold hT to exploit the properties of these colors, and then compute similarity using histogram intersection. A multiple-layered structure, with each layer constructed according to the isotonic principle, is proposed to address the shift problem of histogram-based approaches. Absent color indexing and the multiple-layered structure are thus combined to address the precision problem. Experiments on real-world images and open data show that ABC-ML produces state-of-the-art results while retaining the robustness of histograms under deformation and scaling.


Subject(s)
Image Interpretation, Computer-Assisted; Pattern Recognition, Automated; Algorithms; Color; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
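A minimal, single-layer sketch of the absent-color idea summarized in entry 1 (not the full ABC-ML pipeline, which also uses the total color space and the multiple-layered structure): bins below a threshold hT are inverted and the transformed histograms are compared by histogram intersection. The helper name and all values are illustrative.

```python
import numpy as np

def absent_color_similarity(hist_a, hist_b, threshold):
    """Split each histogram into apparent and absent colors, then compare.

    Bins whose count falls below `threshold` (hT) are treated as "absent";
    their values are inverted so that shared emptiness also contributes to
    the match.  The final score is the classic histogram intersection of
    the transformed, normalized histograms.
    """
    def transform(h):
        h = h.astype(float)
        out = h.copy()
        absent = h < threshold
        out[absent] = threshold - h[absent]   # invert low-frequency / vacant bins
        return out / out.sum()

    ta, tb = transform(hist_a), transform(hist_b)
    return np.minimum(ta, tb).sum()           # histogram intersection similarity

# toy example: two 8-bin color histograms
a = np.array([40, 0, 1, 25, 0, 0, 30, 4])
b = np.array([38, 0, 0, 28, 1, 0, 29, 4])
print(absent_color_similarity(a, b, threshold=3))
```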
2.
Minim Invasive Ther Allied Technol ; 29(4): 210-216, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31187660

ABSTRACT

Background: Accurate registration for surgical navigation of laparoscopic surgery is highly challenging due to vessel deformation. Here, we describe the design of a deformable model with improved matching accuracy by applying the finite element method (FEM). Material and methods: ANSYS software was used to simulate an FEM model of the vessel after pull-up based on laparoscopic gastrectomy requirements. The centerline of the FEM model and the centerline of the ground truth were drawn and compared. Based on the material and parameters determined from the animal experiment, a perigastric vessel FEM model of a gastric cancer patient was created, and its accuracy in a laparoscopic gastrectomy surgical scene was evaluated. Results: In the animal experiment, the FEM model created with Ogden foam material exhibited better results. The average distance between the two centerlines was 6.5 mm, and the average distance between their closest points was 3.8 mm. In the laparoscopic gastrectomy surgical scene, the FEM model and the true artery deformation showed good agreement. Conclusion: In this study, a deformable vessel model based on FEM was constructed using preoperative CT images to improve matching accuracy and to provide a reference for further research on deformation matching for laparoscopic gastrectomy navigation.


Subject(s)
Finite Element Analysis; Gastrectomy/methods; Gastric Artery/anatomy & histology; Laparoscopy/methods; Stomach Neoplasms/surgery; Animals; Gastric Artery/diagnostic imaging; Humans; Male; Swine; Tomography, X-Ray Computed
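The centerline comparison reported in entry 2 can be illustrated with a generic mean closest-point distance between two sampled 3D curves; this is only an assumed evaluation sketch with invented coordinates, not the paper's actual pipeline.

```python
import numpy as np

def mean_closest_point_distance(line_a, line_b):
    """Mean distance from each sample on line_a to its nearest point on line_b.

    line_a, line_b: (N, 3) and (M, 3) arrays of 3-D centerline samples.
    """
    d = np.linalg.norm(line_a[:, None, :] - line_b[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean()

# toy centerlines: a straight segment and a laterally displaced copy (mm)
t = np.linspace(0.0, 1.0, 50)
a = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
b = a + np.array([0.0, 3.8, 0.0])
print(mean_closest_point_distance(a, b))   # ~3.8
```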
3.
Biomed Eng Online ; 17(1): 181, 2018 Dec 04.
Article in English | MEDLINE | ID: mdl-30514298

ABSTRACT

BACKGROUND: Imbalanced data classification is an inevitable problem in intelligent medical diagnosis. Most real-world biomedical datasets have limited samples and high-dimensional features, which seriously degrades the classification performance of a model and leads to erroneous guidance in disease diagnosis. Finding an effective classification method for imbalanced, limited biomedical datasets is therefore a challenging task. METHODS: In this paper, we propose a novel multilayer extreme learning machine (ELM) classification model combined with a dynamic generative adversarial network (GAN) to tackle limited and imbalanced biomedical data. First, principal component analysis is used to remove irrelevant and redundant features while extracting more meaningful pathological features. Then, a dynamic GAN is designed to generate realistic-looking minority-class samples, thereby balancing the class distribution and effectively avoiding overfitting. Finally, a self-adaptive multilayer ELM is proposed to classify the balanced dataset. The analytic expression for the numbers of hidden layers and nodes is determined by quantitatively establishing the relationship between the change in imbalance ratio and the hyper-parameters of the model; reducing interactive parameter adjustment makes the classification model more robust. RESULTS: To evaluate the classification performance of the proposed method, numerical experiments were conducted on four real-world biomedical datasets. The proposed method generates authentic minority-class samples and self-adaptively selects the optimal parameters of the learning model. Compared with the W-ELM, SMOTE-ELM, and H-ELM methods, the quantitative experimental results demonstrate that our method achieves better classification performance and higher computational efficiency in terms of the ROC, AUC, G-mean, and F-measure metrics. CONCLUSIONS: Our study provides an effective solution for imbalanced biomedical data classification under the conditions of limited samples and high-dimensional features. The proposed method could offer a theoretical basis for computer-aided diagnosis and has the potential to be applied in biomedical clinical practice.


Subject(s)
Biomedical Research; Data Analysis; Machine Learning
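A hedged sketch of the overall pipeline in entry 3: PCA-based feature reduction followed by minority-class oversampling, with a simple Gaussian-jitter oversampler standing in for the dynamic GAN and the ELM classifier omitted. Function names, sizes, and parameters are illustrative.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X onto its top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

def oversample_minority(X_min, n_new, noise=0.05, seed=None):
    """Stand-in for the dynamic GAN: jitter existing minority samples."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_min), size=n_new)
    return X_min[idx] + noise * rng.standard_normal((n_new, X_min.shape[1]))

# toy imbalanced data: 200 majority vs 20 minority samples, 50 features
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 50)), rng.normal(1, 1, (20, 50))])
y = np.array([0] * 200 + [1] * 20)

Z = pca_reduce(X, n_components=10)                   # remove redundant features
Z_new = oversample_minority(Z[y == 1], n_new=180, seed=1)
Z_bal = np.vstack([Z, Z_new])                        # balanced feature matrix
y_bal = np.concatenate([y, np.ones(180, dtype=int)])
print(Z_bal.shape, np.bincount(y_bal))
```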
4.
Int J Med Robot ; 20(1): e2619, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38536712

ABSTRACT

BACKGROUND: 2D/3D medical image registration is one of the key technologies that allow surgical navigation systems to estimate pose and achieve accurate positioning, and it remains challenging. The purpose of this study is to introduce a new method for X-ray to CT 2D/3D registration and to conduct a feasibility study. METHODS: A 2D/3D affine registration method based on feature point detection is investigated. It combines the morphological and edge features of spinal images to accurately extract feature points and uses graph neural networks to aggregate the anatomical features of different points, enriching local detail information, while global and positional information is extracted by a Swin Transformer. RESULTS: The results indicate that the proposed method improves both accuracy and success ratio compared with other methods. The mean target registration error reached 0.31 mm, and the runtime overhead was much lower, with an average runtime of about 0.6 s. This improves registration accuracy and efficiency, demonstrating the effectiveness of the proposed method. CONCLUSIONS: The proposed method provides more comprehensive image information and shows good prospects for pose estimation and accurate positioning in surgical navigation systems.


Subject(s)
Algorithms; Imaging, Three-Dimensional; Humans; X-Rays; Radiography; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
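The mean target registration error quoted in entry 4 is the standard landmark-based metric; a small sketch with made-up landmark coordinates:

```python
import numpy as np

def target_registration_error(points_fixed, points_moved):
    """Mean Euclidean distance between corresponding landmark pairs (mTRE)."""
    return np.linalg.norm(points_fixed - points_moved, axis=1).mean()

# toy landmarks: ground-truth positions vs positions after registration (mm)
gt = np.array([[10.0, 20.0, 5.0], [12.5, 18.0, 7.0], [9.0, 22.0, 6.5]])
reg = gt + np.array([[0.2, -0.1, 0.3], [-0.2, 0.3, 0.1], [0.1, 0.2, -0.3]])
print(target_registration_error(gt, reg))
```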
5.
Comput Biol Med ; 176: 108547, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38728994

ABSTRACT

The paradigm of self-supervised pre-training followed by fully supervised fine-tuning has received much attention as a way to address the data annotation problem in deep learning. Compared with traditional pre-training on large natural image datasets, medical self-supervised learning methods learn rich representations from unlabeled data itself, avoiding the distribution shift between image domains. However, current state-of-the-art medical pre-training methods are designed for specific downstream tasks, making them less flexible and difficult to apply to new tasks. In this paper, we propose grid mask image modeling, a flexible and general self-supervised method to pre-train medical vision transformers for 3D medical image segmentation. Our goal is to guide networks to learn the correlations between organs and tissues by reconstructing original images from partial observations; these relationships are consistent within the human body and invariant to disease type or imaging modality. To achieve this, we design a Siamese framework consisting of an online branch and a target branch. An adaptive and hierarchical masking strategy is employed in the online branch to (1) learn the boundaries or small contextual mutation regions within images and (2) learn high-level semantic representations from deeper layers of the multiscale encoder. In addition, the target branch provides representations for contrastive learning to further reduce representation redundancy. We evaluate our method through segmentation performance on two public datasets. The experimental results demonstrate that our method outperforms other self-supervised methods. Code is available at https://github.com/mobiletomb/Gmim.


Subject(s)
Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Deep Learning; Algorithms; Supervised Machine Learning
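A rough sketch of grid-style patch masking for masked image modeling, as in entry 5, assuming equal-sized non-overlapping patches and a volume whose dimensions are divisible by the patch size; the paper's adaptive, hierarchical strategy and its Siamese branches are not reproduced here.

```python
import numpy as np

def grid_mask(volume_shape, patch=8, ratio=0.5, seed=None):
    """Randomly mask a fraction of non-overlapping 3-D patches.

    Returns a boolean mask of `volume_shape` where True marks voxels the
    online branch must reconstruct from the remaining visible context.
    Assumes every dimension of volume_shape is divisible by `patch`.
    """
    rng = np.random.default_rng(seed)
    grid = tuple(s // patch for s in volume_shape)
    masked = rng.random(grid) < ratio          # which patches are hidden
    mask = masked
    for axis in range(3):                      # expand patch grid to voxel grid
        mask = np.repeat(mask, patch, axis=axis)
    return mask

vol = np.random.rand(64, 64, 64).astype(np.float32)   # toy CT/MR volume
m = grid_mask(vol.shape, patch=8, ratio=0.5, seed=0)
masked_input = np.where(m, 0.0, vol)                   # zero out masked patches
print(m.mean())                                        # ≈ masking ratio
```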
6.
Comput Biol Med ; 170: 108057, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38301516

ABSTRACT

Medical image segmentation is a fundamental research problem in medical image processing. Recently, Transformers have achieved highly competitive performance in computer vision, and many methods combining Transformers with convolutional neural networks (CNNs) have emerged for segmenting medical images. However, these methods cannot effectively capture the multi-scale features in medical images, even though the texture and contextual information embedded in multi-scale features is extremely beneficial for segmentation. To alleviate this limitation, we propose MS-TCNet, a novel Transformer-CNN combined network using multi-scale feature learning for three-dimensional (3D) medical image segmentation. The proposed model uses a shunted Transformer and a CNN to construct an encoder and pyramid decoder, allowing feature learning at six different scale levels and capturing multi-scale features with refinement at each level. Additionally, we propose a novel lightweight multi-scale feature fusion (MSFF) module that fully fuses the different-scale semantic features generated by the pyramid decoder for each segmentation class, resulting in more accurate segmentation output. We conducted experiments on three widely used 3D medical image segmentation datasets. The experimental results indicate that our method outperforms state-of-the-art medical image segmentation methods, demonstrating its effectiveness and robustness. Meanwhile, our model has fewer parameters and lower computational complexity than conventional 3D segmentation networks. The results confirm that the model is capable of effective multi-scale feature learning and that the learned multi-scale features improve segmentation performance. Our code is open-sourced at https://github.com/AustinYuAo/MS-TCNet.


Subject(s)
Image Processing, Computer-Assisted; Learning; Neural Networks, Computer
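A simplified stand-in for the multi-scale fusion idea described in entry 6: decoder features at three scales are upsampled (nearest neighbour) to the finest resolution and combined with fixed weights. The real MSFF module is learned; this only illustrates the data flow, and all shapes and weights are invented.

```python
import numpy as np

def fuse_multiscale(features, weights):
    """Fuse decoder features from different scales into one map.

    features: list of (C, D, H, W) arrays at progressively coarser resolution.
    Each map is upsampled (nearest neighbour) to the finest resolution and
    combined as a weighted sum.
    """
    target = features[0].shape[1:]
    fused = np.zeros_like(features[0], dtype=float)
    for f, w in zip(features, weights):
        factors = [t // s for t, s in zip(target, f.shape[1:])]
        up = f
        for axis, r in enumerate(factors, start=1):
            up = np.repeat(up, r, axis=axis)     # nearest-neighbour upsampling
        fused += w * up
    return fused

feats = [np.random.rand(16, 32, 32, 32), np.random.rand(16, 16, 16, 16),
         np.random.rand(16, 8, 8, 8)]
print(fuse_multiscale(feats, weights=[0.5, 0.3, 0.2]).shape)  # (16, 32, 32, 32)
```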
7.
Quant Imaging Med Surg ; 14(3): 2193-2212, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545044

ABSTRACT

Background: Fundus fluorescein angiography (FFA) is an imaging method used to assess retinal vascular structures by injecting exogenous dye. FFA images provide complementary information to that provided by the widely used color fundus (CF) images. However, the injected dye can cause some adverse side effects, and the method is not suitable for all patients. Methods: To meet the demand for high-quality FFA images in the diagnosis of retinopathy without side effects to patients, this study proposed an unsupervised image synthesis framework based on dual contrastive learning that can synthesize FFA images from unpaired CF images by inferring the effective mappings and avoid the shortcoming of generating blurred pathological features caused by cycle-consistency in conventional approaches. By adding class activation mapping (CAM) to the adaptive layer-instance normalization (AdaLIN) function, the generated images are made more realistic. Additionally, the use of CAM improves the discriminative ability of the model. Further, the Coordinate Attention Block was used for better feature extraction, and it was compared with other attention mechanisms to demonstrate its effectiveness. The synthesized images were quantified by the Fréchet inception distance (FID), kernel inception distance (KID), and learned perceptual image patch similarity (LPIPS). Results: The extensive experimental results showed the proposed approach achieved the best results with the lowest overall average FID of 50.490, the lowest overall average KID of 0.01529, and the lowest overall average LPIPS of 0.245 among all the approaches. Conclusions: When compared with several popular image synthesis approaches, our approach not only produced higher-quality FFA images with clearer vascular structures and pathological features, but also achieved the best FID, KID, and LPIPS scores in the quantitative evaluation.
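The FID reported in entry 7 is a Fréchet distance between Inception feature statistics; the sketch below shows the computation on plain feature vectors (SciPy's matrix square root is assumed available), not on actual Inception embeddings, and the data are random.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_real, feat_fake):
    """Fréchet distance between two sets of feature vectors (rows = samples).

    With Inception features this is the FID; here plain vectors are used
    just to illustrate the computation.
    """
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    c1 = np.cov(feat_real, rowvar=False)
    c2 = np.cov(feat_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                 # drop tiny imaginary parts
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 64))
fake = rng.normal(0.1, 1.1, (500, 64))
print(frechet_distance(real, fake))
```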

8.
Math Biosci Eng ; 20(5): 9327-9348, 2023 03 16.
Article in English | MEDLINE | ID: mdl-37161245

ABSTRACT

The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. Using artificial intelligence to assist in diagnosis not only achieves high accuracy but also saves time and effort during a sudden outbreak, when doctors and medical equipment are scarce. This study proposes a weakly supervised COVID-19 classification network (W-COVNet) divided into three main modules: a weakly supervised feature selection module (W-FS), a deep learning bilinear feature fusion module (DBFF), and a Grad-CAM++ based network visualization module (Grad-V). The first module, W-FS, removes redundant background features from computed tomography (CT) images, performs feature selection, and retains core feature regions. The second module, DBFF, uses two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allows visualization of lesions in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that the proposed network performs better.


Subject(s)
COVID-19; Neural Networks, Computer; Supervised Machine Learning; COVID-19/classification; COVID-19/diagnostic imaging; Humans; Datasets as Topic
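Entry 8's Grad-V module builds on class-activation mapping. Below is a plain Grad-CAM sketch computed from a layer's activations and gradients; Grad-CAM++ refines the channel weights with higher-order terms and is not implemented here, and the arrays are random stand-ins.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Class-activation map from a conv layer's activations and gradients.

    activations, gradients: (C, H, W) arrays for the target class.  The
    channel weights are the spatially averaged gradients (plain Grad-CAM).
    """
    weights = gradients.mean(axis=(1, 2))                           # (C,)
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    return cam / (cam.max() + 1e-8)                                 # normalize to [0, 1]

acts = np.random.rand(32, 14, 14).astype(np.float32)    # toy feature maps
grads = np.random.randn(32, 14, 14).astype(np.float32)  # toy gradients
print(grad_cam(acts, grads).shape)                       # (14, 14)
```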
9.
Math Biosci Eng ; 20(12): 21692-21716, 2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38124616

ABSTRACT

Due to its immune evasion capability, the SARS-CoV-2 Omicron variant was declared a variant of concern by the World Health Organization. The spread of Omicron in Changchun (the capital of Jilin province in northeastern China) during the spring of 2022 was successfully curbed under a dynamic Zero-COVID policy. To evaluate the impact of immune evasion on vaccination and other measures, and to understand how the dynamic Zero-COVID measures stopped the epidemic in Changchun, we establish a compartmental model over different stages and parameterize it with actual reported data. First, the model simulation shows a reasonably good fit between the model prediction and the data. Second, we estimate the testing rate in the early stage of the outbreak to reveal the true infection size. Third, numerical simulations show that the vaccine coverage in Changchun and regular nucleic acid testing alone could not stop the epidemic, whereas the non-pharmaceutical intervention measures of the dynamic Zero-COVID policy played a significant role in containing Omicron. Based on the parameterized model, numerical analysis demonstrates that achieving epidemic control by fully exploiting the dynamic Zero-COVID measures requires restricting social activities to a minimum level, at which point economic development may come to a halt. The analysis in this work could provide a reference for infectious disease prevention and control measures in the future.


Subject(s)
COVID-19; Humans; COVID-19/epidemiology; COVID-19/prevention & control; Immune Evasion; SARS-CoV-2; Policy
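The paper in entry 9 uses a multi-stage compartmental model fitted to reported data. As a loose illustration of how contact restrictions enter such models, here is a minimal SEIR integration with made-up parameters; stricter non-pharmaceutical measures appear as a smaller transmission rate beta.

```python
import numpy as np

def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Minimal SEIR model, forward-Euler integration, population fractions."""
    s, e, i, r = s0, e0, i0, r0
    traj = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        de = beta * s * i - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
        traj.append((s, e, i, r))
    return np.array(traj)

relaxed = seir(beta=0.60, sigma=1/3, gamma=1/7,
               s0=0.999, e0=0.0005, i0=0.0005, r0=0.0, days=120)
strict = seir(beta=0.15, sigma=1/3, gamma=1/7,
              s0=0.999, e0=0.0005, i0=0.0005, r0=0.0, days=120)
print(relaxed[:, 2].max(), strict[:, 2].max())   # peak infectious fraction
```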
10.
Comput Math Methods Med ; 2022: 2484435, 2022.
Article in English | MEDLINE | ID: mdl-36092785

ABSTRACT

The worldwide outbreak of the new coronavirus disease (COVID-19) has been declared a pandemic by the World Health Organization (WHO). It has a devastating impact on daily life, public health, and the global economy. Because the disease is highly infectious, suspected cases must be screened early, quickly, and accurately. Chest X-ray images, as a diagnostic basis for COVID-19, have attracted attention in medical engineering. However, because lesion differences are small and training data are scarce, the accuracy of detection models is insufficient. In this work, a transfer learning strategy is introduced into a hierarchical structure to enhance the high-level features of deep convolutional neural networks. The proposed framework, consisting of asymmetric pretrained DCNNs with attention networks, integrates various information into a wider architecture to learn more discriminative and complementary features. Furthermore, a novel cross-entropy loss function with a penalty term reduces misclassification. Extensive experiments were conducted on the COVID-19 dataset. Comparisons with state-of-the-art methods demonstrate the effectiveness and high performance of the proposed method.


Subject(s)
COVID-19; Deep Learning; COVID-19/diagnosis; Humans; Neural Networks, Computer; Radiography
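Entry 10 mentions a cross-entropy loss with a penalty term that weakens misclassification. The exact term is not given in the abstract; the sketch below simply up-weights the loss of misclassified samples, which is only one assumed interpretation with illustrative values.

```python
import numpy as np

def penalized_cross_entropy(probs, labels, penalty=0.5):
    """Cross-entropy with an extra penalty on misclassified samples.

    probs: (N, K) softmax outputs; labels: (N,) integer class indices.
    Samples whose argmax prediction is wrong get an additional weighted
    contribution of `penalty` times their cross-entropy term.
    """
    n = len(labels)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12)
    wrong = probs.argmax(axis=1) != labels
    return (ce + penalty * wrong * ce).mean()

p = np.array([[0.9, 0.05, 0.05], [0.2, 0.7, 0.1], [0.3, 0.3, 0.4]])
y = np.array([0, 1, 0])            # the third sample is misclassified
print(penalized_cross_entropy(p, y))
```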
11.
Biomed Signal Process Control ; 76: 103677, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35432578

ABSTRACT

The spread of the highly infectious disease COVID-19 raises serious public health concerns and poses significant threats to the economy and society. In this study, an efficient deep learning method, the deep feature fusion classification network (DFFCNet), is proposed to improve overall diagnostic accuracy. The method is divided into two modules: a deep feature fusion module (DFFM) and a multi-disease classification module (MDCM). DFFM combines the advantages of different networks for feature fusion, and MDCM uses a support vector machine (SVM) as the classifier to improve classification performance. Meanwhile, a spatial attention (SA) module and a channel attention (CA) module are introduced into the network to improve its feature extraction capability. In addition, multiple-way data augmentation (MDA) is performed on the chest X-ray images (CXRs) to improve sample diversity, and Grad-CAM++ is used to make the features more intuitive and the deep learning model more interpretable. In tests on a collection of publicly available datasets, the proposed method achieves 99.89% accuracy in the three-way classification of COVID-19, pneumonia, and healthy X-ray images, thereby outperforming eight state-of-the-art classification techniques.
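A toy sketch of the two-module idea in entry 11, deep feature fusion followed by an SVM head, with random vectors standing in for the deep features of the two backbones; scikit-learn's SVC is assumed available, and all shapes are invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# stand-ins for deep features extracted by two different backbones
feat_net_a = rng.normal(0, 1, (300, 128))
feat_net_b = rng.normal(0, 1, (300, 256))
labels = rng.integers(0, 3, 300)          # e.g., COVID-19 / pneumonia / healthy

fused = np.concatenate([feat_net_a, feat_net_b], axis=1)   # deep feature fusion
clf = SVC(kernel="rbf").fit(fused[:200], labels[:200])      # SVM classification head
print(clf.score(fused[200:], labels[200:]))                 # chance-level on random data
```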

12.
Comput Math Methods Med ; 2022: 3836498, 2022.
Article in English | MEDLINE | ID: mdl-35983526

ABSTRACT

COVID-19 has become the largest public health event worldwide since its outbreak, and early detection is a prerequisite for effective treatment. Chest X-ray images have become an important basis for screening and monitoring the disease, and deep learning has shown great potential for this task. Many studies have proposed deep learning methods for automated diagnosis of COVID-19. Although these methods have achieved excellent performance in terms of detection, most have been evaluated using limited datasets and typically use a single deep learning network to extract features. To this end, the dual asymmetric feature learning network (DAFLNet) is proposed, which is divided into two modules, DAFFM and WDFM. DAFFM mainly comprises the backbone networks EfficientNetV2 and DenseNet for feature fusion. WDFM is mainly for weighted decision-level fusion and features a new pretrained network selection algorithm (PNSA) for determination of the optimal weights. Experiments on a large dataset were conducted using two schemes, DAFLNet-1 and DAFLNet-2, and both schemes outperformed eight state-of-the-art classification techniques in terms of classification performance. DAFLNet-1 achieved an average accuracy of up to 98.56% for the triple classification of COVID-19, pneumonia, and healthy images.


Subject(s)
COVID-19; Deep Learning; COVID-19/diagnostic imaging; COVID-19 Testing; Humans; Neural Networks, Computer; SARS-CoV-2; X-Rays
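Entry 12's WDFM performs weighted decision-level fusion of the two backbones' predictions. Below is a minimal sketch with a fixed weight and invented probabilities; the paper instead selects the weights with its PNSA algorithm.

```python
import numpy as np

def decision_level_fusion(probs_a, probs_b, w):
    """Weighted decision-level fusion of two classifiers' softmax outputs."""
    fused = w * probs_a + (1.0 - w) * probs_b
    return fused.argmax(axis=1)

# toy predictions from two backbones for 4 chest X-rays, 3 classes
pa = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.1, 0.2, 0.7], [0.5, 0.4, 0.1]])
pb = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.2, 0.1, 0.7], [0.3, 0.6, 0.1]])
print(decision_level_fusion(pa, pb, w=0.4))
```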
13.
Comput Math Methods Med ; 2021: 9974017, 2021.
Article in English | MEDLINE | ID: mdl-34621329

ABSTRACT

Medical image quality is highly relevant to clinical diagnosis and treatment, which makes medical image denoising a popular research topic. Denoising based on deep learning has attracted considerable attention owing to its excellent ability to extract features automatically. Most existing methods for medical image denoising are adapted to certain types of noise and have difficulty handling spatially varying noise; meanwhile, detail loss and structural changes occur in the denoised images. Considering image context perception and structure preservation, this paper introduces a medical image denoising method based on a conditional generative adversarial network (CGAN) for various unknown noises. In the proposed architecture, the noisy image is merged with its corresponding gradient image as conditional information for the network, which enhances the contrast between the original signal and the noise according to structural specificity. A novel generator with residual dense blocks makes full use of the relationships among convolutional layers to explore image context. Furthermore, the reconstruction loss and the WGAN loss are combined as the objective function to ensure consistency between the denoised image and the real image. A series of medical image denoising experiments achieved PSNR = 33.2642 and SSIM = 0.9206 on the JSRT dataset and PSNR = 35.1086 and SSIM = 0.9328 on the LIDC dataset, outperforming state-of-the-art methods.


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Computational Biology; Computer Simulation; Databases, Factual; Deep Learning; Humans; Image Interpretation, Computer-Assisted/statistics & numerical data; Lung/diagnostic imaging; Signal-To-Noise Ratio; Tomography, X-Ray Computed/statistics & numerical data
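Entry 13 reports PSNR and SSIM. The sketch below computes PSNR and a single-window (global) SSIM for illustration with random images; the standard SSIM averages a sliding Gaussian window and gives slightly different numbers.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=1.0):
    """SSIM formula evaluated on the whole image as a single window."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return (((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

clean = np.random.rand(256, 256)
denoised = clean + 0.02 * np.random.randn(256, 256)
print(psnr(clean, denoised), global_ssim(clean, denoised))
```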
14.
Comput Math Methods Med ; 2021: 2973108, 2021.
Article in English | MEDLINE | ID: mdl-34484414

ABSTRACT

The X-ray radiation from computed tomography (CT) carries a potential risk, but simply decreasing the dose makes CT images noisy and compromises diagnostic performance. Here, we develop a novel low-dose CT image denoising method. Our framework is based on an improved generative adversarial network coupled with a hybrid loss function that includes adversarial, perceptual, sharpness, and structural similarity losses. Among these terms, the perceptual and structural similarity losses preserve textural details, the sharpness loss makes the reconstructed images clear, and the adversarial loss sharpens boundary regions. Experimental results show that the proposed method removes noise and artifacts more effectively than state-of-the-art methods in terms of visual quality, quantitative measurements, and texture detail.


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data; Tomography, X-Ray Computed/statistics & numerical data; Algorithms; Computational Biology; Databases, Factual/statistics & numerical data; Humans; Neural Networks, Computer; Radiation Dosage; Radiographic Image Enhancement/methods; Signal-To-Noise Ratio
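The hybrid loss in entry 14 is a weighted combination of four terms. A trivial sketch with illustrative weights (not the paper's) is shown below, assuming the individual loss values have already been computed elsewhere in the training loop.

```python
def hybrid_loss(adv, perceptual, sharpness, ssim,
                w_adv=1e-3, w_per=1.0, w_sharp=0.5, w_ssim=1.0):
    """Weighted sum of the four loss terms; weights are illustrative only.

    SSIM is a similarity in [0, 1], so it enters the loss as (1 - ssim).
    """
    return w_adv * adv + w_per * perceptual + w_sharp * sharpness + w_ssim * (1.0 - ssim)

print(hybrid_loss(adv=0.8, perceptual=0.12, sharpness=0.05, ssim=0.93))
```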
15.
Comput Math Methods Med ; 2021: 5221111, 2021.
Article in English | MEDLINE | ID: mdl-34589137

ABSTRACT

Trigeminal neuralgia is a neurological disease. It is often treated by puncturing the trigeminal nerve through the skin and the foramen ovale of the skull to selectively destroy the pain nerve. The puncture procedure is difficult because the morphology of the foramen ovale in the skull base varies and the surrounding anatomy is complex. Computer-aided puncture guidance is therefore extremely valuable for the treatment of trigeminal neuralgia: it can help doctors determine the puncture target by accurately locating the foramen ovale in the skull base. Foramen ovale segmentation is a prerequisite for localization, but it is a tedious and error-prone task when done manually. In this paper, we present an image segmentation solution based on the multi-atlas method that automatically segments the foramen ovale. We assembled a dataset of 30 CT scans, with 20 serving as foramen ovale atlases and 10 reserved for testing. Our approach can perform foramen ovale segmentation in puncture-operation scenarios based solely on limited data, and we propose it as an enabler for clinical work.


Subject(s)
Foramen Ovale/diagnostic imaging; Foramen Ovale/surgery; Models, Anatomic; Surgery, Computer-Assisted/statistics & numerical data; Trigeminal Neuralgia/diagnostic imaging; Trigeminal Neuralgia/surgery; Algorithms; Atlases as Topic; Computational Biology; Humans; Punctures/methods; Punctures/statistics & numerical data; Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data; Tomography, X-Ray Computed/statistics & numerical data; Trigeminal Nerve/diagnostic imaging; Trigeminal Nerve/surgery
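Entry 15 uses a multi-atlas approach. Once the atlas labels have been warped into the target space (registration is not shown), a simple majority-vote label fusion such as the sketch below is one common final step; the paper's exact fusion rule is not specified in the abstract, and all data here are random.

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse binary segmentations from several registered atlases by voting.

    atlas_labels: (n_atlases, D, H, W) array of 0/1 masks already warped
    into the target image space.
    """
    votes = atlas_labels.sum(axis=0)
    return (votes * 2 > atlas_labels.shape[0]).astype(np.uint8)   # strict majority

rng = np.random.default_rng(0)
atlases = (rng.random((20, 32, 32, 32)) > 0.7).astype(np.uint8)   # toy warped masks
fused = majority_vote(atlases)
print(fused.shape, fused.mean())
```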
16.
Comput Math Methods Med ; 2020: 5487168, 2020.
Article in English | MEDLINE | ID: mdl-32104203

ABSTRACT

Multimodal medical images are useful for clearly observing tissue structure in clinical practice, and multimodal registration is essential for integrating multimodal information. Entropy-based registration replaces the original multimodal images with a set of structural descriptors and computes a similarity measure on these descriptors to express the correlation between images; the accuracy and convergence rate of the registration depend on this descriptor set. We propose a new method, a logarithmic fuzzy entropy function, to compute the descriptor set. The proposed method increases the upper bound of the entropy from log(r) to log(r) + Δ(r), so that a more representative structural descriptor set is formed. Experimental results show that our method has a faster convergence rate and a wider quantified range in multimodal medical image registration.


Subject(s)
Brain/diagnostic imaging; Fuzzy Logic; Image Processing, Computer-Assisted/methods; Multimodal Imaging; Algorithms; Brain Mapping; Entropy; Humans; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging; Models, Statistical; Neuroimaging; Normal Distribution; Reproducibility of Results; Tomography, X-Ray Computed
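Entry 16's logarithmic fuzzy entropy is not fully specified in the abstract. For orientation only, the sketch below computes the classic De Luca-Termini fuzzy entropy on a membership map, which is the standard form such descriptors build on; the paper's actual function modifies the upper bound and may differ in detail.

```python
import numpy as np

def fuzzy_entropy(mu, eps=1e-12):
    """Classic (De Luca-Termini) fuzzy entropy of a membership map in [0, 1]."""
    mu = np.clip(mu, eps, 1.0 - eps)
    return float(-(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)).mean())

membership = np.random.rand(64, 64)     # toy fuzzy membership values of an image patch
print(fuzzy_entropy(membership))
```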
17.
Comput Math Methods Med ; 2018: 6213264, 2018.
Article in English | MEDLINE | ID: mdl-30356395

ABSTRACT

To solve the problem of scoliosis recognition without a labeled dataset, an unsupervised method is proposed that combines a cascade gentle AdaBoost (CGAdaBoost) classifier with distance regularized level set evolution (DRLSE). The main idea is to establish the relationship between individual vertebrae and the whole spine through vertebral centroids, so that scoliosis recognition can be transformed into automatic vertebral detection and segmentation, avoiding manual data labeling. In the CGAdaBoost classifier, diversified vertebra images and multi-feature descriptors are used to generate more discriminative features, improving vertebral detection accuracy. The detected bounding box then provides an appropriate initial contour for DRLSE, which makes vertebral segmentation more accurate, eliminates initialization sensitivity, and speeds convergence to the vertebral boundaries. Meanwhile, vertebral centroids are extracted and connected along the whole spine to describe the spinal curvature, and different parts of the spine are classified as abnormal or normal according to medical prior knowledge. The experimental results demonstrate that the proposed method not only effectively identifies scoliosis from unlabeled spine CT images but also outperforms other state-of-the-art methods.


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Scoliosis/diagnostic imaging; Spine/diagnostic imaging; Tomography, X-Ray Computed/methods; Adolescent; Adult; Algorithms; Female; Humans; Male; Middle Aged; Models, Statistical; Pattern Recognition, Automated; Probability; Reproducibility of Results; Young Adult
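Entry 17 connects vertebral centroids to describe spinal curvature. A hedged geometric sketch: angles between successive centroid segments, thresholded to flag strongly curved regions; the threshold and the centroid values are illustrative, not from the paper.

```python
import numpy as np

def segment_angles(centroids):
    """Angles (degrees) between successive vertebral-centroid segments.

    centroids: (N, 2) array of vertebral centroids ordered along the spine.
    Large angles indicate strong local curvature, which can be compared
    against a clinical threshold to flag a region as abnormal.
    """
    v = np.diff(centroids, axis=0)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cosines = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines))

# toy centroids with a lateral deviation in the middle of the spine
c = np.array([[0, 0], [0.5, 10], [2.0, 20], [4.5, 30], [5.0, 40], [5.0, 50]], float)
angles = segment_angles(c)
print(angles, (angles > 10).any())   # example threshold of 10 degrees
```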
18.
Knee ; 23(5): 777-84, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27329992

ABSTRACT

BACKGROUND: To determine the relationship between the line connecting the midpoints of the tibial attachments of the anterior and posterior cruciate ligaments (ACL and PCL; APCL line) and the transepicondylar axis (TEA) in healthy Chinese subjects, and to compare it with other rotational lines. METHODS: The left knees of 17 male and 15 female healthy Chinese volunteers were scanned by magnetic resonance imaging (MRI) and computed tomography (CT). 3D contours of each knee and the tibial attachments of the ACL, PCL, and the medial and lateral collateral ligaments were reconstructed separately from the CT and MRI data and superimposed individually using an iterative closest point algorithm. The APCL line, the tibial posterior condylar line (PC line), the medial third of the tibial tubercle line (1/3 line), Akagi's line, the midsulcus line of the tibial spine (midsulcus line), and the clinical and surgical TEA (CTEA and STEA) were determined, and the paired intersection angles between them were measured. RESULTS: The mean angles between the CTEA and the APCL line, Akagi's line, midsulcus line, 1/3 line, and PC line were 90.3° ± 2.9°, 95.0° ± 3.0°, 94.0° ± 3.9°, 102.4° ± 2.7°, and 87.1° ± 3.0°, respectively. The APCL-CTEA angle was significantly different from the other angles (p < 0.001). The mean angles between the STEA and the same lines were 94.8° ± 3.1°, 99.4° ± 3.1°, 98.5° ± 4.0°, 106.9° ± 2.9°, and 91.6° ± 3.2°, respectively. The PC line-STEA angle was significantly different from the other angles (p < 0.05). CONCLUSIONS: Compared with the other rotational lines, the APCL line was the closest to perpendicular to the CTEA in normal Chinese subjects.


Subject(s)
Anterior Cruciate Ligament/diagnostic imaging; Femur/diagnostic imaging; Posterior Cruciate Ligament/diagnostic imaging; Tibia/diagnostic imaging; Adult; Algorithms; Arthroplasty, Replacement, Knee; Female; Healthy Volunteers; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Male; Tomography, X-Ray Computed; Young Adult
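The angles in entry 18 are planar angles between anatomical axes; a small sketch with made-up endpoint coordinates:

```python
import numpy as np

def angle_between(line_a, line_b):
    """Angle (degrees) between two axes, each given by two endpoints."""
    va = np.asarray(line_a[1]) - np.asarray(line_a[0])
    vb = np.asarray(line_b[1]) - np.asarray(line_b[0])
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# toy endpoints projected to the axial plane (units arbitrary)
ctea = [(0.0, 0.0), (80.0, 5.0)]       # a transepicondylar-axis stand-in
apcl = [(40.0, -30.0), (40.5, 30.0)]   # a midpoints-line stand-in
print(angle_between(ctea, apcl))        # roughly perpendicular in this toy example
```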
19.
Knee ; 22(6): 585-90, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26277882

ABSTRACT

BACKGROUND: The exact isometric points for medial patellofemoral ligament (MPFL) fixation during MPFL reconstruction remain a matter of debate. PURPOSE: The aim of this study was to characterize the functional length changes of various patellar and femoral fixation sites using in vivo three-dimensional (3D) movement patterns and to determine the ideal fixation sites at which the graft remains largely isometric. METHODS: Twelve right knees of healthy volunteers were examined at early flexion angles (0°, 10°, 20°, 30°, 40°, 50°, and 60°) with a horizontal open magnetic resonance scanner, and 3D models were reconstructed using the marching cubes algorithm. Six points on the femoral condyle and three points on the medial aspect of the patella were simulated, and the matching point pairs represented MPFL fibers crossing the bony obstacle. The MPFL length changes were analyzed at the various flexion angles. RESULTS: Grafts from the dome of Blumensaat's line (G), the point 10 mm inferior to the adductor tubercle (H), and the midpoint between the adductor tubercle and the medial epicondyle (I) were more isometric than those from the other points. The length between the dome of Blumensaat's line and the superior pole of the patella changed significantly between 20° and 60° of flexion (p = 0.040). CONCLUSIONS: The femoral fixation site may be more accurately located during MPFL reconstruction at the G, H, and I points to restore the native biomechanical function of the MPFL. The dome of Blumensaat's line should be avoided during MPFL reconstruction with the superficial quad technique. CLINICAL RELEVANCE: A triangular region formed by the dome of Blumensaat's line, the point 10 mm inferior to the adductor tubercle, and the midpoint between the adductor tubercle and the medial epicondyle is recommended as the femoral fixation site.


Subject(s)
Femur/anatomy & histology; Imaging, Three-Dimensional; Knee Joint/physiology; Magnetic Resonance Imaging/methods; Patellar Ligament/anatomy & histology; Plastic Surgery Procedures; Range of Motion, Articular; Adult; Femur/surgery; Healthy Volunteers; Humans; Knee Joint/anatomy & histology; Male; Patellar Ligament/physiology; Patellar Ligament/surgery; Young Adult
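Entry 19 evaluates isometry as the change in attachment-to-attachment distance across flexion angles; a toy sketch with invented coordinates:

```python
import numpy as np

def length_change(femoral_point, patellar_points_by_angle):
    """Graft length at each flexion angle and its total change (isometry).

    patellar_points_by_angle: dict mapping flexion angle (deg) to the 3-D
    position of the patellar attachment in the femoral coordinate frame.
    A small max-min range means the simulated bundle is nearly isometric.
    """
    lengths = {a: float(np.linalg.norm(np.asarray(p) - np.asarray(femoral_point)))
               for a, p in patellar_points_by_angle.items()}
    return lengths, max(lengths.values()) - min(lengths.values())

# toy coordinates (mm): one femoral candidate point and patellar positions at 0-60 deg
femoral = (0.0, 0.0, 0.0)
patellar = {0: (55, 10, 5), 20: (54, 12, 6), 40: (56, 11, 4), 60: (55, 13, 6)}
print(length_change(femoral, patellar))
```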
20.
Comput Med Imaging Graph ; 37(2): 131-41, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23562139

ABSTRACT

Precise annotation of vascular structure is desirable in computer-assisted systems to help surgeons identify each vessel branch. This paper proposes a method that annotates vessels on volume-rendered images by rendering their names onto them with a two-pass rendering process. In the first pass, vessel surface models are generated from properties such as centerlines, radii, and running directions; the vessel names are drawn on the vessel surfaces; and the vessel-name images and the corresponding depth buffer are generated by a virtual camera at the viewpoint. In the second pass, volume-rendered images are generated by a ray-casting volume rendering algorithm that takes into account the depth buffer produced in the first pass. After the two passes are finished, an annotated image is generated by blending the volume-rendered image with the surface-rendered image. To confirm the effectiveness of the proposed method, we implemented a computer-assisted system for the automated annotation of abdominal arteries. The experimental results show that vessel names can be drawn on the corresponding vessel surfaces in the volume-rendered images at a computing cost nearly the same as that of volume rendering alone. The proposed method has strong potential for annotating vessels in 3D medical images in clinical applications such as image-guided surgery.


Subject(s)
Angiography/methods; Artificial Intelligence; Blood Vessels/anatomy & histology; Documentation/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Humans; Natural Language Processing; Terminology as Topic
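Entry 20's two-pass scheme composites the vessel-name image into the volume rendering using the first-pass depth buffer. A minimal depth-aware blending sketch follows; array names and all values are illustrative, and the two rendering passes themselves are not shown.

```python
import numpy as np

def blend_annotations(volume_rgb, name_rgb, name_alpha, name_depth, volume_depth):
    """Blend the vessel-name image into the volume rendering, depth aware.

    name_depth / volume_depth: per-pixel depths from the first (surface) and
    second (ray-casting) passes.  Labels are only composited where the name
    surface is not hidden behind structures rendered by the volume pass.
    """
    visible = (name_alpha > 0) & (name_depth <= volume_depth)
    a = (name_alpha * visible)[..., None]
    return a * name_rgb + (1.0 - a) * volume_rgb

h, w = 4, 4
vol = np.full((h, w, 3), 0.3)                 # toy volume-rendered image
names = np.full((h, w, 3), 1.0)               # toy vessel-name image (white text)
alpha = np.zeros((h, w)); alpha[1:3, 1:3] = 0.8   # a small label patch
ndepth = np.full((h, w), 10.0); vdepth = np.full((h, w), 12.0)
print(blend_annotations(vol, names, alpha, ndepth, vdepth)[1, 1])
```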