Results 1 - 20 of 26
1.
IEEE Trans Med Imaging; PP, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38781068

ABSTRACT

Multiple Instance Learning (MIL) has demonstrated promise in Whole Slide Image (WSI) classification. However, a major challenge persists due to the high computational cost of processing these gigapixel images. Existing methods generally adopt a two-stage approach comprising a non-learnable feature embedding stage and a classifier training stage. Although using a fixed feature embedder pre-trained on other domains can greatly reduce memory consumption, such a scheme also results in a disparity between the two stages, leading to suboptimal classification accuracy. To address this issue, we propose that a bag-level classifier can be a good instance-level teacher. Based on this idea, we design Iteratively Coupled Multiple Instance Learning (ICMIL) to couple the embedder and the bag classifier at a low cost. ICMIL first fixes the patch embedder to train the bag classifier, and then fixes the bag classifier to fine-tune the patch embedder. The refined embedder can in turn generate better representations, leading to a more accurate classifier in the next iteration. To realize more flexible and effective embedder fine-tuning, we also introduce a teacher-student framework that efficiently distills the category knowledge in the bag classifier to guide instance-level embedder fine-tuning. Extensive experiments on four distinct datasets validate the effectiveness of ICMIL. The experimental results consistently demonstrate that our method significantly improves the performance of existing MIL backbones, achieving state-of-the-art results. The code and the organized datasets are available at: https://github.com/Dootmaan/ICMIL/tree/confidence-based.
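
Editor's note: the alternating scheme at the heart of ICMIL can be pictured in a few lines of code. Below is a minimal sketch, assuming a patch `embedder`, a bag-level classifier `clf`, bags as patch tensors with shape-[1] label tensors, and mean-pooling as a stand-in for the paper's MIL aggregator; the authors' actual implementation is at the repository linked above.

    import torch
    import torch.nn.functional as F

    def train_icmil(embedder, clf, bags, labels, rounds=3, epochs=5, lr=1e-4):
        # Alternate: (1) train the bag classifier on frozen embeddings,
        # (2) fine-tune the embedder through the frozen classifier.
        for _ in range(rounds):
            opt = torch.optim.Adam(clf.parameters(), lr=lr)
            for _ in range(epochs):                        # stage 1: embedder fixed
                for bag, y in zip(bags, labels):
                    with torch.no_grad():
                        feats = embedder(bag)              # [n_patches, d]
                    loss = F.cross_entropy(clf(feats.mean(0, keepdim=True)), y)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            for p in clf.parameters():                     # stage 2: classifier fixed
                p.requires_grad = False
            opt = torch.optim.Adam(embedder.parameters(), lr=lr)
            for _ in range(epochs):
                for bag, y in zip(bags, labels):
                    loss = F.cross_entropy(
                        clf(embedder(bag).mean(0, keepdim=True)), y)
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
            for p in clf.parameters():
                p.requires_grad = True
        return embedder, clf

Note that this sketch supervises the embedder with bag labels through the frozen classifier; the paper's teacher-student distillation is a more flexible variant of the same coupling idea.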

2.
Stud Health Technol Inform; 310: 901-905, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269939

ABSTRACT

Object detection using convolutional neural networks (CNNs) has achieved high performance and state-of-the-art results on natural images. Compared with natural images, medical images present several challenges for lesion detection. First, lesion sizes vary tremendously, from several millimeters to several centimeters, and these scale variations significantly affect detection accuracy, especially for small lesions. Moreover, effectively extracting temporal and spatial features from multi-phase CT images is also an important issue. In this paper, we propose a group-based deep layer aggregation method with multi-phase attention for liver lesion detection in multi-phase CT images. The method, called MSPA-DLA++, is a backbone feature extraction network for anchor-free liver lesion detection that addresses scale variations and extracts hidden features from multi-phase CT images. The effectiveness of the proposed method is demonstrated on a public dataset (LiTS2017) and our private multi-phase dataset. The experimental results show that MSPA-DLA++ improves upon the performance of state-of-the-art networks by approximately 3.7%.


Subject(s)
Liver Neoplasms; Neural Networks, Computer; Humans; Tomography, X-Ray Computed
3.
Article in English | MEDLINE | ID: mdl-38082813

ABSTRACT

MRI is crucial for the diagnosis of HCC, and when combined with CT images for microvascular invasion (MVI) prediction, richer complementary information can be learned. Many studies have shown that imaging examinations such as CT or MR can provide evidence of whether hepatocellular carcinoma is accompanied by vascular invasion, so the two modalities can be combined for multimodal joint prediction to improve MVI prediction accuracy. However, current clinical diagnosis is high-risk, time-consuming, and expensive because it requires injection of a gadolinium-based contrast agent (CA). If MRI could be synthesized without CA injection, it would undoubtedly greatly optimize the diagnosis. To this end, this paper proposes a high-quality image synthesis network, MVI-Wise GAN, that can be used to improve the prediction of microvascular invasion in HCC. Starting from the underlying imaging perspective, it introduces K-space and feature-level constraints and combines three related networks (an attention-aware generator, a convolutional neural network-based discriminator, and a region-based convolutional neural network detector) to achieve precise tumor region detection from synthesized tumor-specific MRI. Accurate MRI synthesis is achieved through backpropagation, the feature representation and context learning of HCC MVI are enhanced, and loss convergence is improved through residual learning. The model was tested on a dataset of 256 subjects from Run Run Shaw Hospital of Zhejiang University. Experimental results and quantitative evaluation show that MVI-Wise GAN achieves high-quality MRI synthesis with a tumor detection accuracy of 92.3%, which is helpful for the clinical diagnosis of liver tumor MVI.


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnostic imaging; Liver Neoplasms/diagnostic imaging; Neoplasm Invasiveness; Magnetic Resonance Imaging/methods; Contrast Media/pharmacology; Radiopharmaceuticals
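
Editor's note: the K-space constraint mentioned in the abstract suggests a frequency-domain fidelity term alongside the usual adversarial and pixel losses. A minimal sketch, assuming 2D MR slices as real-valued tensors and illustrative loss weights (the paper's exact formulation may differ):

    import torch

    def kspace_loss(fake_mri, real_mri):
        # Compare synthesized and reference slices in k-space (2D FFT domain).
        k_fake = torch.fft.fft2(fake_mri)
        k_real = torch.fft.fft2(real_mri)
        return (k_fake - k_real).abs().mean()

    # Illustrative generator objective combining the constraints:
    # total = adv_loss + 10.0 * torch.nn.functional.l1_loss(fake, real) \
    #         + 1.0 * kspace_loss(fake, real)
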
4.
Article in English | MEDLINE | ID: mdl-38083232

ABSTRACT

Hepatocellular carcinoma (HCC), one of the most common malignant tumors worldwide, has high death and recurrence rates, and microvascular invasion (MVI) is considered an independent risk factor for its early recurrence and poor survival. Accurate preoperative prediction of MVI is therefore of great significance for formulating individualized treatment plans and assessing long-term prognosis for HCC patients. However, as the mechanism of MVI is still unclear, existing studies train deep learning methods directly on CT or MR images, with limited predictive performance and a lack of explanation. We map the pathological "7-point" baseline sampling method used to confirm the diagnosis of MVI onto MR images, propose a vision-guided attention-enhanced network to improve MVI prediction performance, and validate the reliability of the prediction results on the corresponding pathological images. Specifically, we design a learnable online class activation map (CAM) to guide the network to focus on high-incidence MVI regions under the guidance of an extended tumor mask. Further, an attention-enhanced module is proposed to force the network to learn image regions that can explain the MVI results. The generated attention maps capture long-distance dependencies and can be used as spatial priors for MVI to promote the learning of the vision-guided module. Experimental results on the constructed multi-center dataset show that the proposed algorithm achieves state-of-the-art performance compared with other models.


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnosis; Liver Neoplasms/diagnosis; Reproducibility of Results; Retrospective Studies; Neoplasm Invasiveness/pathology
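
Editor's note: one way to read the learnable online CAM is as a 1x1-convolution scoring head whose map is supervised by the extended tumor mask; the module and loss below are an assumed simplification, not the paper's code.

    import torch
    import torch.nn.functional as F

    class OnlineCAM(torch.nn.Module):
        def __init__(self, in_ch):
            super().__init__()
            self.score = torch.nn.Conv2d(in_ch, 1, kernel_size=1)

        def forward(self, feats, mask=None):
            cam = torch.sigmoid(self.score(feats))             # [B,1,h,w]
            loss = None
            if mask is not None:                               # extended tumor mask
                mask = F.interpolate(mask, size=cam.shape[-2:], mode="nearest")
                loss = F.binary_cross_entropy(cam, mask)       # guidance term
            return feats * cam, loss                           # reweighted features
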
5.
Article in English | MEDLINE | ID: mdl-38083328

ABSTRACT

A high early recurrence (ER) rate is the main factor leading to poor outcomes in patients with hepatocellular carcinoma (HCC). Accurate preoperative prediction of ER is thus highly desired for HCC treatment. Many radiomics solutions based on machine learning and deep learning have been proposed for preoperative prediction of HCC ER using CT images. Nevertheless, most current radiomics approaches extract features only from segmented tumor regions, neglecting the liver tissue information that is useful for HCC prognosis. In this work, we propose a deep prediction network based on CT images of the full liver combined with a tumor mask, which provides tumor location information, for better feature extraction to predict the ER of HCC. However, due to the complex imaging characteristics of HCC, image-based ER prediction methods suffer from limited capability. Therefore, on the one hand, we propose to employ a supervised contrastive loss, trained jointly with the cross-entropy loss, to alleviate the problem of intra-class variation and inter-class similarity of HCC. On the other hand, we incorporate clinical data to further improve the prediction ability of the model. Extensive experiments verify the effectiveness of our proposed deep prediction model and the contribution of liver tissue to prognosis assessment of HCC.


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/surgery; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/surgery; Tomography, X-Ray Computed/methods; Machine Learning
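
Editor's note: the joint objective pairs cross-entropy with a supervised contrastive term that pulls same-label embeddings together and pushes different-label ones apart. A sketch in the style of the standard SupCon loss, where `z` holds the batch embeddings and `y` the ER labels (the temperature and weighting are assumptions):

    import torch
    import torch.nn.functional as F

    def supcon_loss(z, y, tau=0.1):
        z = F.normalize(z, dim=1)                          # [B, d]
        sim = z @ z.t() / tau                              # pairwise similarities
        logits = sim - sim.max(dim=1, keepdim=True).values.detach()
        self_mask = torch.eye(len(y), dtype=torch.bool, device=z.device)
        pos = ((y[:, None] == y[None, :]) & ~self_mask).float()
        exp = torch.exp(logits).masked_fill(self_mask, 0)
        log_prob = logits - torch.log(exp.sum(1, keepdim=True) + 1e-8)
        n_pos = pos.sum(1).clamp(min=1)
        return -((pos * log_prob).sum(1) / n_pos).mean()

    # joint training: total = ce_loss + lam * supcon_loss(embeddings, labels)
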
6.
Comput Biol Med; 166: 107467, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37725849

ABSTRACT

Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning has motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and required expertise, the availability of multi-organ annotations is usually limited, which poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, removing the dependence on annotated multi-organ data. To this end, we propose a novel dual-stage method consisting of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from the multiple adapted single-organ segmentation models. Extensive experiments on four abdominal datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored multi-organ segmentation model with high accuracy.
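
Editor's note: the end product of the two stages is a set of adapted single-organ models whose outputs must be merged into one multi-organ map. A naive fusion sketch (the paper distills the ensemble into a single tailored network; the 0.5 threshold and model interface are assumptions):

    import torch

    def ensemble_multi_organ(volume, organ_models):
        # Each model outputs a foreground logit map [B,1,D,H,W] for one organ.
        probs = []
        for model in organ_models:
            with torch.no_grad():
                probs.append(torch.sigmoid(model(volume)))
        p = torch.cat(probs, dim=1)             # [B,K,D,H,W]
        best_p, best_k = p.max(dim=1)           # strongest organ per voxel
        label = torch.where(best_p > 0.5, best_k + 1, torch.zeros_like(best_k))
        return label                            # 0 = background, 1..K = organs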

7.
IEEE J Biomed Health Inform; 27(10): 4854-4865, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37585323

ABSTRACT

High-resolution (HR) 3D medical image segmentation is vital for accurate diagnosis. However, in the field of medical imaging, it remains challenging to achieve high segmentation performance with feasible, cost-effective computational resources. Previous methods commonly use patch sampling to reduce the input size, but this inevitably harms the global context and degrades performance. In recent years, a few patch-free strategies have been presented to deal with this issue, but they either have limited performance due to over-simplified model structures or follow a complicated training process. To effectively address these issues, we present Adaptive Decomposition (A-Decomp) and Shared Weight Volumetric Transformer Blocks (SW-VTB). A-Decomp adaptively decomposes features and reduces their spatial size, which greatly lowers GPU memory consumption. SW-VTB captures long-range dependencies at low cost with its lightweight design and cross-scale weight-sharing mechanism. The proposed cross-scale weight sharing enhances the network's ability to capture scale-invariant core semantic information while reducing the number of parameters. Combining these two designs, we present a novel patch-free segmentation framework named VolumeFormer. Experimental results on two datasets show that VolumeFormer outperforms existing patch-based and patch-free methods with a comparatively fast inference speed and a relatively compact design.

8.
IEEE Trans Med Imaging; 42(10): 3091-3103, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37171932

ABSTRACT

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Known multi-modal segmentation methods have deficiencies in two main aspects. First, the adopted multi-modal fusion strategies are built upon well-aligned input images, making them vulnerable to spatial misalignment between modalities (caused by respiratory motion, different scanning parameters, registration errors, etc.). Second, the performance of known methods remains subject to segmentation uncertainty, which is particularly acute in tumor boundary regions. To tackle these issues, we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which integrates feature alignment and feature aggregation in an ensemble, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module to refine uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising tumor segmentation results and outperforms state-of-the-art methods.


Subject(s)
Neoplasms; Humans; Uncertainty; Neoplasms/diagnostic imaging; Motion (Physics); Respiratory Rate
9.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 2097-2100, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086312

ABSTRACT

Contrast-enhanced computed tomography (CE-CT) images are used extensively for the diagnosis of liver cancer in clinical practice. Compared with non-contrast CT (NC-CT) images (CT scans without injection), CE-CT images are obtained after injecting a contrast agent, which increases the physical burden on patients. To address this limitation, we propose an improved conditional generative adversarial network (improved cGAN) to generate CE-CT images from non-contrast CT images. In the improved cGAN, we incorporate a pyramid pooling module and an elaborate feature fusion module into the generator to improve the encoder's capability of capturing multi-scale semantic features and to prevent the dilution of information during decoding. We evaluate the performance of our proposed method on a contrast-enhanced CT dataset including three phases of CT images (i.e., non-contrast images and CE-CT images in the arterial and portal venous phases). Experimental results suggest that the proposed method is superior to existing GAN-based models in both quantitative and qualitative results.


Subject(s)
Arteries; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods
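
Editor's note: the pyramid pooling module mentioned here follows the familiar PSPNet pattern: pool the encoder map at several grid sizes, project, upsample, and concatenate. A minimal 2D sketch (bin sizes and channel split are illustrative):

    import torch
    import torch.nn.functional as F

    class PyramidPooling(torch.nn.Module):
        def __init__(self, in_ch, bins=(1, 2, 3, 6)):
            super().__init__()
            self.bins = bins
            self.convs = torch.nn.ModuleList(
                [torch.nn.Conv2d(in_ch, in_ch // len(bins), 1) for _ in bins])

        def forward(self, x):
            h, w = x.shape[-2:]
            outs = [x]
            for bin_size, conv in zip(self.bins, self.convs):
                y = F.adaptive_avg_pool2d(x, bin_size)   # coarse context
                y = F.interpolate(conv(y), size=(h, w), mode="bilinear",
                                  align_corners=False)
                outs.append(y)
            return torch.cat(outs, dim=1)                # about 2x in_ch channels
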
10.
IEEE J Biomed Health Inform; 26(8): 3988-3998, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35213319

ABSTRACT

Organ segmentation is one of the most important steps for various medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention for its ability to reduce labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABS3Net, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. MTL-ABS3Net consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABS3Net) and a Reconstruction-Assisted Module (RAM). Specifically, ABS3Net improves on existing SSL methods by utilizing an atlas prior to generate credible pseudo labels in a self-training manner, while the RAM further assists the segmentation network by capturing anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using the MS-SSIM loss function, which further improves segmentation accuracy. Experimental results on liver and spleen datasets demonstrate that the performance of our method is significantly improved compared with existing state-of-the-art methods.


Subject(s)
Abdomen; Supervised Machine Learning; Humans; Image Processing, Computer-Assisted/methods; Spleen/diagnostic imaging
11.
Front Radiol; 2: 856460, 2022.
Article in English | MEDLINE | ID: mdl-37492657

ABSTRACT

Hepatocellular carcinoma (HCC) is a primary liver cancer with a high mortality rate. It is one of the most common malignancies worldwide, especially in Asia, Africa, and southern Europe. Although surgical resection is an effective treatment, patients with HCC are at risk of recurrence after surgery. Preoperative prediction of early recurrence can help physicians develop treatment plans and guide patients in postoperative follow-up. However, conventional methods based on clinical data ignore patients' imaging information. Several studies have used radiomic models for early recurrence prediction in HCC patients with good results, and patients' medical images have been shown to be effective in predicting HCC recurrence. In recent years, deep learning models have demonstrated the potential to outperform radiomics-based models. In this paper, we propose a deep learning-based prediction model that contains intra-phase attention and inter-phase attention. Intra-phase attention focuses on important channel and spatial information within the same phase, whereas inter-phase attention focuses on important information across phases. We also propose a fusion model to combine the image features with clinical data. Our experimental results show that the fusion model outperforms models that use clinical data or CT images alone, achieving a prediction accuracy of 81.2% and an area under the curve of 0.869.
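
Editor's note: a compact way to express the two attention levels is squeeze-and-excitation-style channel attention within each phase, plus a learned softmax weighting across phases before fusion. This is a sketch of the idea under assumed shapes, not the paper's architecture:

    import torch

    class PhaseAttention(torch.nn.Module):
        def __init__(self, ch, n_phases):
            super().__init__()
            self.se = torch.nn.ModuleList([torch.nn.Sequential(
                torch.nn.AdaptiveAvgPool3d(1),
                torch.nn.Conv3d(ch, ch // 4, 1), torch.nn.ReLU(),
                torch.nn.Conv3d(ch // 4, ch, 1), torch.nn.Sigmoid())
                for _ in range(n_phases)])                 # intra-phase attention
            self.phase_logits = torch.nn.Parameter(torch.zeros(n_phases))

        def forward(self, phase_feats):                    # list of [B,C,D,H,W]
            attended = [f * se(f) for f, se in zip(phase_feats, self.se)]
            w = torch.softmax(self.phase_logits, dim=0)    # inter-phase weighting
            return sum(wi * f for wi, f in zip(w, attended))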

12.
IEEE J Biomed Health Inform; 26(2): 614-625, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34161249

ABSTRACT

Liver tumor segmentation (LiTS) is of primary importance in the diagnosis and treatment of hepatocellular carcinoma. Known automated LiTS methods do not yield satisfactory results for clinical use because they struggle to model highly variable tumor shapes and locations. In clinical practice, radiologists usually estimate tumor shape and size with a Response Evaluation Criteria in Solid Tumors (RECIST) mark. Inspired by this, we explore a deep learning (DL) based interactive LiTS method that incorporates guidance from user-provided RECIST marks. Our method adopts a three-step framework to predict liver tumor boundaries. Within this architecture, we develop a RECIST mark propagation network (RMP-Net) to estimate RECIST-like marks on off-RECIST slices. We also devise a context-guided boundary-sensitive network (CGBS-Net) to distill tumors' contextual and boundary information from the corresponding RECIST(-like) marks and predict tumor maps. To further refine the segmentation results, we process the tumor maps with a 3D conditional random field (CRF) algorithm and a morphological hole-filling operation. Verified on two clinical contrast-enhanced abdominal computed tomography (CT) image datasets, our proposed approach produces promising segmentation results and outperforms state-of-the-art interactive segmentation methods.


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Carcinoma, Hepatocellular/diagnostic imaging; Carcinoma, Hepatocellular/therapy; Humans; Image Processing, Computer-Assisted/methods; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/therapy; Response Evaluation Criteria in Solid Tumors; Tomography, X-Ray Computed/methods
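
Editor's note: the morphological step of the refinement stage is straightforward with SciPy; the 3D CRF pass that the paper applies first is omitted here for brevity:

    import numpy as np
    from scipy import ndimage

    def refine_tumor_map(prob_map, thresh=0.5):
        # Threshold tumor probabilities, then fill interior holes in 3D.
        mask = prob_map > thresh
        mask = ndimage.binary_fill_holes(mask)             # works on N-D arrays
        mask = ndimage.binary_closing(mask, iterations=1)  # smooth small gaps
        return mask.astype(np.uint8)
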
13.
IEEE Trans Med Imaging; 40(12): 3519-3530, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34129495

ABSTRACT

Organ segmentation from medical images is one of the most important pre-processing steps in computer-aided diagnosis, but it is challenging because of limited annotated data, low contrast, and non-homogeneous textures. Compared with natural images, organs in medical images come with clear anatomical prior knowledge (e.g., organ shape and position), which can be used to improve segmentation accuracy. In this paper, we propose a novel segmentation framework that integrates anatomical priors of medical images into deep learning models through the loss function. The proposed prior loss is based on a probabilistic atlas and is called the deep atlas prior (DAP). It encodes prior location and shape information of organs, which is important for accurate organ segmentation. Further, we combine the DAP loss with conventional likelihood losses, such as the Dice loss and focal loss, into an adaptive Bayesian loss within a Bayesian framework consisting of a prior and a likelihood. The adaptive Bayesian loss dynamically adjusts the ratio of the DAP loss to the likelihood loss across training epochs for better learning. The proposed loss is universal and can be combined with a wide variety of existing deep segmentation models to further enhance their performance. We verify the significance of the proposed framework with several state-of-the-art models, including fully-supervised and semi-supervised segmentation models, on a public dataset (ISBI LiTS 2017 Challenge) for liver segmentation and a private dataset for spleen segmentation.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Bayes Theorem; Liver; Spleen
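
Editor's note: the adaptive Bayesian loss can be pictured as an epoch-dependent blend of the prior (DAP) term and a likelihood term such as the Dice loss. The linear schedule below is an illustrative assumption; the paper derives the ratio within its Bayesian framework:

    def adaptive_bayesian_loss(dap_loss, dice_loss, epoch, total_epochs):
        # The atlas prior dominates early training; the likelihood term
        # takes over as the network's own predictions become reliable.
        w_prior = 1.0 - epoch / float(total_epochs)
        return w_prior * dap_loss + (1.0 - w_prior) * dice_loss
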
14.
IEEE J Biomed Health Inform; 25(7): 2363-2373, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34033549

ABSTRACT

COVID-19 pneumonia is a disease that causes a life-threatening health crisis in many people by directly affecting and damaging lung cells. Segmentation of infected areas from computed tomography (CT) images can assist COVID-19 diagnosis by providing useful information. Although several deep learning-based segmentation methods have been proposed for this task and have achieved state-of-the-art results, segmentation accuracy is still not high enough (approximately 85%) due to the variations of COVID-19 infected areas (such as shape and size variations) and the similarities between COVID-19 infected and non-COVID-infected areas. To improve the segmentation accuracy of COVID-19 infected areas, we propose an interactive attention refinement network (Attention RefNet), which can be connected to any segmentation network and trained with it in an end-to-end fashion. We propose a skip connection attention module to enhance important features in both the segmentation and refinement networks, and a seed point module to enhance important seed positions for interactive refinement. The effectiveness of the proposed method was demonstrated on public datasets (COVID-19CTSeg and MICCAI) and our private multicenter dataset, where segmentation accuracy was improved to more than 90%. We also confirmed the generalizability of the proposed network on our multicenter dataset, where it still achieved high segmentation accuracy.


Subject(s)
COVID-19/diagnostic imaging; Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Databases, Factual; Humans; Lung/diagnostic imaging
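
Editor's note: the skip connection attention module can be sketched as a small gate computed from the concatenated encoder and decoder features; this is an assumed simplification of the paper's design:

    import torch

    class SkipAttention(torch.nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.gate = torch.nn.Sequential(
                torch.nn.Conv2d(2 * ch, ch, 1), torch.nn.ReLU(),
                torch.nn.Conv2d(ch, 1, 1), torch.nn.Sigmoid())

        def forward(self, enc_feat, dec_feat):     # both [B,C,H,W]
            a = self.gate(torch.cat([enc_feat, dec_feat], dim=1))
            return enc_feat * a                    # attention-weighted skip
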
15.
IEEE Trans Image Process; 30: 4840-4854, 2021.
Article in English | MEDLINE | ID: mdl-33945478

ABSTRACT

Deep learning-based super-resolution (SR) techniques have generally achieved excellent performance in the computer vision field. Recently, it has been shown that three-dimensional (3D) SR for medical volumetric data delivers better visual results than conventional two-dimensional (2D) processing. However, deepening and widening 3D networks increases training difficulty significantly due to the large number of parameters and the small number of training samples. Thus, we propose a 3D convolutional neural network (CNN) for SR of magnetic resonance (MR) and computed tomography (CT) volumetric data, called ParallelNet, which uses parallel connections. We construct a parallel connection structure based on group convolution and feature aggregation to build a 3D CNN that is as wide as possible with few parameters, so the model thoroughly learns more feature maps with larger receptive fields. In addition, to further improve accuracy, we present an efficient version of ParallelNet (called VolumeNet) that reduces the number of parameters and deepens ParallelNet using a proposed lightweight building block called the Queue module. Unlike most lightweight CNNs based on depthwise convolutions, the Queue module is primarily constructed from separable 2D cross-channel convolutions. As a result, the number of network parameters and the computational complexity can be reduced significantly while maintaining accuracy thanks to full channel fusion. Experimental results demonstrate that the proposed VolumeNet significantly reduces the number of model parameters and achieves high-precision results compared with state-of-the-art methods on brain MR image SR, abdominal CT image SR, and reconstruction of super-resolution 7T-like images from their 3T counterparts.


Subject(s)
Deep Learning; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Algorithms; Brain/diagnostic imaging; Humans
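
Editor's note: the parameter savings of this style of block come from replacing full 3x3x3 kernels with separable 2D convolutions plus pointwise channel fusion; a (1,3,3) + (3,1,1) + 1x1x1 stack costs roughly 13c^2 weights versus 27c^2 for a single 3x3x3 convolution. A rough residual block in that spirit (the exact layout inside the Queue module is an assumption):

    import torch

    class QueueLikeBlock(torch.nn.Module):
        def __init__(self, ch):
            super().__init__()
            self.body = torch.nn.Sequential(
                torch.nn.Conv3d(ch, ch, (1, 3, 3), padding=(0, 1, 1)),
                torch.nn.ReLU(inplace=True),
                torch.nn.Conv3d(ch, ch, (3, 1, 1), padding=(1, 0, 0)),
                torch.nn.ReLU(inplace=True),
                torch.nn.Conv3d(ch, ch, 1))        # full cross-channel fusion

        def forward(self, x):
            return x + self.body(x)                # residual connection
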
16.
Med Phys; 48(7): 3752-3766, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33950526

ABSTRACT

PURPOSE: Liver tumor segmentation is a crucial prerequisite for computer-aided diagnosis of liver tumors. In clinical diagnosis, radiologists usually examine multiphase CT images, as these provide abundant and complementary tumor information. However, most known automatic segmentation methods extract tumor features from CT images of a single phase only, ignoring valuable multiphase information. A method that effectively incorporates multiphase information for automatic and accurate liver tumor segmentation is therefore highly desired. METHODS: In this paper, we propose a phase attention residual network (PA-ResSeg) to model multiphase features for accurate liver tumor segmentation. A phase attention (PA) block is proposed to additionally exploit images of the arterial (ART) phase to facilitate segmentation of the portal venous (PV) phase. The PA block consists of an intraphase attention (intra-PA) module and an interphase attention (inter-PA) module, which capture channel-wise self-dependencies and cross-phase interdependencies, respectively. The network thereby learns more representative multiphase features by refining the PV features according to the channel dependencies and recalibrating the ART features based on the learned interphase dependencies. We further propose a PA-based multi-scale fusion (MSF) architecture that embeds PA blocks at multiple levels along the encoding path to fuse multi-scale features from multiphase images. Moreover, a 3D boundary-enhanced loss (BE-loss) is proposed to make the network more sensitive to boundaries during training. RESULTS: To evaluate the performance of the proposed PA-ResSeg, we conducted experiments on a multiphase CT dataset of focal liver lesions (MPCT-FLLs). Experimental results show the effectiveness of the proposed method, which achieves a dice per case (DPC) of 0.7787, a dice global (DG) of 0.8682, a volumetric overlap error (VOE) of 0.3328, and a relative volume difference (RVD) of 0.0443 on the MPCT-FLLs. Furthermore, to validate the effectiveness and robustness of PA-ResSeg, we conducted extra experiments on another multiphase liver tumor dataset and obtained a DPC of 0.8290, a DG of 0.9132, a VOE of 0.2637, and an RVD of 0.0163. The proposed method shows robustness and generalization capability across different datasets and backbones. CONCLUSIONS: The study demonstrates that our method can effectively model information from multiphase CT images to segment liver tumors and that it outperforms other state-of-the-art methods. The PA-based MSF method learns more representative multiphase features at multiple scales and thereby improves segmentation performance. In addition, the proposed 3D BE-loss is conducive to tumor boundary segmentation by forcing the network to focus on boundary regions and marginal slices. Experimental results evaluated by quantitative metrics demonstrate the superiority of PA-ResSeg over the best-known methods.


Subject(s)
Image Processing, Computer-Assisted; Liver Neoplasms; Attention; Disease Progression; Humans; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
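
Editor's note: one common way to make a loss boundary-sensitive, shown here as an assumed stand-in for the paper's 3D BE-loss, is to weight voxels by their distance to the ground-truth surface:

    import numpy as np
    import torch
    from scipy import ndimage

    def boundary_weight_map(mask, sigma=3.0):
        # High weight near the tumor surface, decaying with distance.
        dist_in = ndimage.distance_transform_edt(mask)
        dist_out = ndimage.distance_transform_edt(1 - mask)
        dist = np.minimum(dist_in, dist_out)       # distance to the boundary
        w = np.exp(-(dist ** 2) / (2 * sigma ** 2))
        return torch.from_numpy(w).float()

    # weighted voxel-wise loss, e.g.:
    # (w * F.binary_cross_entropy(pred, gt, reduction="none")).mean()
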
17.
BMC Bioinformatics; 22(1): 91, 2021 Feb 26.
Article in English | MEDLINE | ID: mdl-33637042

ABSTRACT

BACKGROUND: To effectively detect and investigate various cell-related diseases, it is essential to understand cell behaviour. The ability to detect mitotic cells is a fundamental step in diagnosing such diseases. Convolutional neural networks (CNNs) have been successfully applied to object detection tasks; however, when applied to mitotic cell detection, most existing methods generate high false-positive rates due to the complex characteristics that differentiate normal cells from mitotic cells. Variations in cell size and orientation at each stage make detecting mitotic cells difficult for 2D approaches, so effective extraction of spatial and temporal features from mitotic data is an important and challenging task. The computational time required for detection is another major concern in 4D microscopic images. RESULTS: In this paper, we propose a backbone feature extraction network named full-scale connected recurrent deep layer aggregation (RDLA++) for anchor-free mitotic detection. We utilize a 2.5D method that incorporates 3D spatial information extracted from several 2D images of neighbouring slices, which form a multi-stream input. CONCLUSIONS: Our proposed technique addresses the scale variation problem and can efficiently extract spatial and temporal features from 4D microscopic images, resulting in improved detection accuracy and reduced computation time compared with other state-of-the-art methods.


Subject(s)
Microscopy; Neural Networks, Computer; Cell Physiological Phenomena
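
Editor's note: the 2.5D multi-stream input amounts to stacking neighbouring z-slices as channels so a 2D backbone still sees local 3D context. A sketch for a (T, Z, H, W) microscopy stack (the window size is an assumption):

    import numpy as np

    def make_25d_input(volume, t, z, half=2):
        # Gather 2*half+1 neighbouring slices around slice z at time t.
        zs = np.clip(np.arange(z - half, z + half + 1), 0, volume.shape[1] - 1)
        return volume[t, zs]        # (2*half+1, H, W) channel-stacked input
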
18.
Med Biol Eng Comput; 58(1): 155-170, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31792782

ABSTRACT

Segmenting joined fragments of fractured bones from CT (computed tomography) images is a time-consuming task that requires extensive interaction. To alleviate radiologists' segmentation burden, we propose a graphics processing unit (GPU)-accelerated 3D segmentation framework that requires fewer interactions and less time than existing methods. We first apply a normal-based erosion to separate joined bone fragments. After labeling the separated fragments with a CCL (connected component labeling) algorithm, a record-based dilation restores each bone's original shape. In addition, we introduce a random walk algorithm to handle the special case in which fragments are strongly joined. For efficiency, the framework runs in parallel using GPU-acceleration technology. Experiments on real CT volumes demonstrate that our framework attains accurate fragment segmentations with Dice scores over 99% and takes 3.47 s on average to segment a fractured bone volume of 512 × 512 × 425 voxels. In summary, the proposed framework, built mainly on normal-based erosion and record-based dilation, automatically segments joined fragments in most cases; for the remaining cases, the random walk algorithm completes the segmentation with a few interactions.


Subject(s)
Computer Graphics; Fractures, Bone/diagnostic imaging; Image Processing, Computer-Assisted; Algorithms; Humans; Tomography, X-Ray Computed
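
Editor's note: a CPU-only sketch of the erode-label-restore pipeline with SciPy, where plain morphological operators stand in for the paper's normal-based erosion, record-based dilation, and GPU acceleration:

    import numpy as np
    from scipy import ndimage

    def separate_fragments(bone_mask, erode_iters=3):
        # Erode until touching fragments disconnect, then label them.
        eroded = ndimage.binary_erosion(bone_mask, iterations=erode_iters)
        labels, n = ndimage.label(eroded)          # connected component labeling
        # Grow labels back: assign every original voxel to its nearest seed.
        _, idx = ndimage.distance_transform_edt(labels == 0, return_indices=True)
        restored = labels[tuple(idx)] * (bone_mask > 0)
        return restored, n
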
19.
Med Phys; 45(5): 2097-2107, 2018 May.
Article in English | MEDLINE | ID: mdl-29500816

ABSTRACT

PURPOSE: The automatic detection of pulmonary nodules in CT scans improves the efficiency of lung cancer diagnosis, and false-positive reduction plays a significant role in this detection. In this paper, we focus on the false-positive reduction task and propose an effective method for it. METHODS: We construct a deep 3D residual CNN (convolutional neural network) to distinguish true nodules from false-positive candidates. The proposed network is much deeper than the traditional 3D CNNs used in medical image processing. Specifically, we design a spatial pooling and cropping (SPC) layer to extract multilevel contextual information from CT data. Moreover, we employ an online hard sample selection strategy during training so that the network better fits hard samples (e.g., nodules with irregular shapes). RESULTS: Our method is evaluated on 888 CT scans from the LUNA16 Challenge dataset. The free-response receiver operating characteristic (FROC) curve shows that the proposed method achieves high detection performance. CONCLUSIONS: Our experiments confirm that the method is robust and that the SPC layer helps increase prediction accuracy. Additionally, the proposed method can easily be extended to other 3D object detection tasks in medical image processing.


Subject(s)
Imaging, Three-Dimensional/methods; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; False Positive Reactions; Humans; Tomography, X-Ray Computed
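
Editor's note: the online hard sample selection strategy reduces to a few lines: score every candidate in the batch, keep only the hardest fraction, and backpropagate through those. The keep ratio is an assumed hyperparameter:

    import torch
    import torch.nn.functional as F

    def hard_sample_loss(logits, targets, keep_ratio=0.5):
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        k = max(1, int(keep_ratio * per_sample.numel()))
        hard, _ = torch.topk(per_sample, k)        # hardest samples in the batch
        return hard.mean()
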
20.
IEEE Trans Vis Comput Graph; 22(11): 2467-2479, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26841401

ABSTRACT

We propose an automatic parametric human body reconstruction algorithm that can efficiently construct a model using a single Kinect sensor. The user stands still in front of the sensor for a couple of seconds while the range data are measured, and the user's body shape and pose are then automatically reconstructed within several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our scheme relies on sparse key points for the reconstruction: it uses regression to find corresponding key points between the scanned range data and annotated training data. We design two kinds of feature descriptors and corresponding regression stages to make the regression robust and accurate. The scheme concludes with a dense refinement stage, in which a pre-factorization method is applied to improve computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.


Subject(s)
Human Body; Algorithms; Computer Graphics; Humans