Results 1 - 20 of 25
1.
Med Image Anal ; 91: 102988, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37924750

ABSTRACT

Pulmonary Embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using Computed Tomography Pulmonary Angiography (CTPA). Deep learning holds great promise for the Computer-aided Diagnosis (CAD) of PE. However, numerous deep learning methods, such as Convolutional Neural Networks (CNN) and Transformer-based models, exist for a given task, causing great confusion regarding the development of CAD systems for PE. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis based on four datasets. First, we use the RSNA PE dataset, which includes (weak) slice-level and exam-level labels, for PE classification and diagnosis, respectively. At the slice level, we compare CNNs with the Vision Transformer (ViT) and the Swin Transformer. We also investigate the impact of self-supervised versus (fully) supervised ImageNet pre-training, and of transfer learning over training models from scratch. Additionally, at the exam level, we compare sequence model learning with our proposed transformer-based architecture, Embedding-based ViT (E-ViT). For the second and third datasets, we utilize the CAD-PE Challenge Dataset and Ferdowsi University of Mashhad's PE Dataset, where we convert (strong) clot-level masks into slice-level annotations to evaluate the optimal CNN model for slice-level PE classification. Finally, we use our in-house PE-CAD dataset, which contains (strong) clot-level masks. Here, we investigate the impact of our vessel-oriented image representations and self-supervised pre-training on PE false positive reduction at the clot level across image dimensions (2D, 2.5D, and 3D).
Our experiments show that (1) transfer learning boosts performance despite differences between photographic images and CTPA scans; (2) self-supervised pre-training can surpass (fully) supervised pre-training; (3) transformer-based models demonstrate comparable performance but slower convergence compared with CNNs for slice-level PE classification; (4) the model trained on the RSNA PE dataset demonstrates promising performance when tested on unseen datasets for slice-level PE classification; (5) our E-ViT framework excels in handling variable numbers of slices and outperforms sequence model learning for exam-level diagnosis; and (6) vessel-oriented image representation and self-supervised pre-training both enhance performance for PE false positive reduction across image dimensions. Our optimal approach surpasses state-of-the-art results on the RSNA PE dataset, enhancing AUC by 0.62% (slice-level) and 2.22% (exam-level). On our in-house PE-CAD dataset, 3D vessel-oriented images improve performance from 80.07% to 91.35%, a gain of more than 11 percentage points. Codes are available at GitHub.com/JLiangLab/CAD_PE.
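The conversion of (strong) clot-level masks into (weak) slice-level annotations mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy volume layout (slices x rows x columns) is an assumption.

```python
# Hedged sketch: derive slice-level PE labels from clot-level masks.
# A slice is labeled positive (1) if it contains any clot voxel.

def masks_to_slice_labels(mask_volume):
    """mask_volume: nested lists [slice][row][col] of 0/1 clot voxels."""
    return [int(any(any(v for v in row) for row in sl)) for sl in mask_volume]

vol = [
    [[0, 0], [0, 0]],  # slice 0: no clot
    [[0, 1], [0, 0]],  # slice 1: clot voxel present
    [[0, 0], [1, 1]],  # slice 2: clot voxels present
]
print(masks_to_slice_labels(vol))  # [0, 1, 1]
```

With such labels, a per-slice classifier can be trained on datasets that ship only segmentation masks.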


Subjects
Computer-Assisted Diagnosis , Pulmonary Embolism , Humans , Computer-Assisted Diagnosis/methods , Neural Networks (Computer) , Three-Dimensional Imaging , Pulmonary Embolism/diagnostic imaging , Computers
2.
iScience ; 26(10): 107243, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37767002

ABSTRACT

Image-based AI has thrived as a potentially revolutionary tool for predicting molecular biomarker statuses, which aids in categorizing patients for appropriate medical treatments. However, many methods using hematoxylin and eosin-stained (H&E) whole-slide images (WSIs) have been found to be inefficient because of the presence of numerous uninformative or irrelevant image patches. In this study, we introduced the region of biomarker relevance (ROB) concept to identify the morphological areas most closely associated with biomarkers for accurate status prediction. We implemented this concept in a framework called saliency ROB search (SRS) to enable efficient and effective predictions. By evaluating various lung adenocarcinoma (LUAD) biomarkers, we showcased the superior performance of SRS compared to current state-of-the-art AI approaches. These findings suggest that AI tools, built on the ROB concept, can achieve enhanced molecular biomarker prediction accuracy from pathological images.
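The patch-ranking idea behind a saliency-driven ROB search can be sketched as below; the scoring function, patch identifiers, and k are hypothetical, since the abstract does not specify them.

```python
# Hedged sketch: rank WSI patches by a saliency score and keep the
# top-k most biomarker-relevant ones for downstream prediction.

def select_rob_patches(patch_scores, k):
    """patch_scores: {patch_id: saliency score}. Returns the k patch
    ids with the highest saliency, most salient first."""
    ranked = sorted(patch_scores, key=patch_scores.get, reverse=True)
    return ranked[:k]

scores = {"p0": 0.12, "p1": 0.91, "p2": 0.05, "p3": 0.77}
print(select_rob_patches(scores, 2))  # ['p1', 'p3']
```

Restricting prediction to such a subset is what makes the search efficient compared with using every patch.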

3.
Article in English | MEDLINE | ID: mdl-37060893

ABSTRACT

BACKGROUND: To explore the clinical efficacy of using tongue-shaped flaps and advancement flaps to reconstruct the fingertips in congenital syndactyly patients with osseous fusion of the distal phalanges. METHODS: From January 2016 to January 2019, 12 patients with congenital syndactyly, involving 30 digits in total, presented to our hospital and were surgically treated with tongue-shaped flaps and advancement flaps to reconstruct the fingertips. The flap infection rate, necrosis rate, and any other early complications were recorded. Fingertip aesthetics were rated according to the modified Bulic scale. A questionnaire was used to assess the satisfaction of the patients' family members. RESULTS: All cases were thoroughly reviewed. The postoperative follow-up period ranged from 36 to 60 months, with an average of 45 months. During this period, no complications such as infection and/or necrosis of any flap were observed. Significant improvements in finger aesthetics and function compared with preoperative values were observed in most cases. Based on the modified Bulic scale, of the 30 fingertips, an excellent result was obtained for 3, a very good result for 13, a good result for 13, and a poor result for just 1. Family members were satisfied with the treatment outcome. CONCLUSIONS: This technique employing tongue-shaped flaps and advancement flaps to reconstruct fingertips is effective and enables favourable aesthetic and functional outcomes in congenital syndactyly patients with osseous fusion of the distal phalanges.


Subjects
Finger Injuries , Plastic Surgery Procedures , Syndactyly , Humans , Skin Transplantation/methods , Surgical Flaps/surgery , Syndactyly/surgery , Treatment Outcome , Tongue/surgery , Finger Injuries/surgery
4.
Med Image Anal ; 71: 101997, 2021 07.
Article in English | MEDLINE | ID: mdl-33853034

ABSTRACT

The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
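The active-selection step of such a framework might look like the following sketch, which uses prediction entropy as an illustrative "worthiness" criterion; the criterion, pool, and budget are assumptions, not the paper's exact method.

```python
# Hedged sketch: from a pre-trained model's predictions on the
# unannotated pool, pick the most uncertain samples for annotation,
# then fine-tune on the newly annotated set (fine-tuning not shown).
import math

def prediction_entropy(p):
    """Binary entropy of a predicted positive probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_for_annotation(pool_probs, budget):
    """pool_probs: {sample_id: predicted prob}. Returns the `budget`
    samples the current model is least certain about."""
    return sorted(pool_probs,
                  key=lambda s: prediction_entropy(pool_probs[s]),
                  reverse=True)[:budget]

pool = {"a": 0.97, "b": 0.52, "c": 0.08, "d": 0.45}
print(select_for_annotation(pool, 2))  # ['b', 'd'] (closest to 0.5)
```

Repeating select, annotate, fine-tune in a loop is what lets annotation effort concentrate on "worthy" samples.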


Subjects
Diagnostic Imaging , Neural Networks (Computer) , Humans , Longitudinal Studies
5.
IEEE Trans Med Imaging ; 40(10): 2857-2868, 2021 10.
Article in English | MEDLINE | ID: mdl-33617450

ABSTRACT

This paper introduces a new concept called "transferable visual words" (TransVW), aiming to achieve annotation efficiency for deep learning in medical image analysis. Medical imaging, which focuses on particular parts of the body for defined clinical purposes, generates images of great similarity in anatomy across patients and yields sophisticated anatomical patterns across images, which are associated with rich semantics about human anatomy and which are natural visual words. We show that these visual words can be automatically harvested according to anatomical consistency via self-discovery, and that the self-discovered visual words can serve as strong yet free supervision signals for deep models to learn semantics-enriched generic image representation via self-supervision (self-classification and self-restoration). Our extensive experiments demonstrate the annotation efficiency of TransVW by offering higher performance and faster convergence with reduced annotation cost in several applications. Our TransVW has several important advantages, including (1) TransVW is a fully autodidactic scheme, which exploits the semantics of visual words for self-supervised learning, requiring no expert annotation; (2) visual word learning is an add-on strategy, which complements existing self-supervised methods, boosting their performance; and (3) the learned image representations are semantics-enriched and have proven to be more robust and generalizable, saving annotation efforts for a variety of applications through transfer learning. Our code, pre-trained models, and curated visual words are available at https://github.com/JLiangLab/TransVW.


Subjects
Diagnostic Imaging , Semantics , Humans , Radiography , Supervised Machine Learning
6.
Mach Learn Med Imaging ; 12966: 692-702, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35695860

ABSTRACT

Pulmonary embolism (PE) represents a thrombus ("blood clot"), usually originating from a lower extremity vein, that travels to the blood vessels in the lung, causing vascular obstruction and, in some patients, death. This disorder is commonly diagnosed using CT pulmonary angiography (CTPA). Deep learning holds great promise for the computer-aided CTPA diagnosis (CAD) of PE. However, numerous competing methods for a given task exist in the deep learning literature, causing great confusion regarding the development of a CAD PE system. To address this confusion, we present a comprehensive analysis of competing deep learning methods applicable to PE diagnosis using CTPA at both the image and exam levels. At the image level, we compare convolutional neural networks (CNNs) with vision transformers, and contrast self-supervised learning (SSL) with supervised learning, followed by an evaluation of transfer learning compared with training from scratch. At the exam level, we focus on comparing conventional classification (CC) with multiple instance learning (MIL). Our extensive experiments consistently show: (1) transfer learning boosts performance despite differences between natural images and CT scans; (2) transfer learning with SSL surpasses its supervised counterparts; (3) CNNs outperform vision transformers, which otherwise show satisfactory performance; and (4) CC is, surprisingly, superior to MIL. Compared with the state of the art, our optimal approach provides an AUC gain of 0.2% and 1.05% at the image and exam levels, respectively.
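The two exam-level schemes being compared can be illustrated with a toy sketch; max-pooling for MIL and a mean-pooled embedding for CC are common choices assumed here, not necessarily the exact variants evaluated.

```python
# Hedged sketch: multiple instance learning (MIL) pools per-slice
# scores, while conventional classification (CC) classifies a single
# exam-level representation built from slice embeddings.

def mil_exam_score(slice_probs):
    """MIL with max pooling: an exam is as suspicious as its most
    suspicious slice."""
    return max(slice_probs)

def cc_exam_feature(slice_embeddings):
    """CC: average slice embeddings into one exam-level feature
    vector, which a conventional classifier would then score."""
    dim = len(slice_embeddings[0])
    n = len(slice_embeddings)
    return [sum(e[i] for e in slice_embeddings) / n for i in range(dim)]

probs = [0.05, 0.10, 0.92, 0.30]
print(mil_exam_score(probs))                      # 0.92
print(cc_exam_feature([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

The paper's finding is that training a classifier on the aggregated representation (CC) beat instance-level pooling (MIL) for exam-level PE diagnosis.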

7.
J Invest Surg ; 34(6): 610-616, 2021 Jun.
Article in English | MEDLINE | ID: mdl-31870195

ABSTRACT

BACKGROUND: Therapeutics used to promote perforator flap survival act by inducing vascular regeneration and inhibiting apoptosis. The present study aimed to explore the potential mechanism of the angiogenic effects of Ginkgolide B (GB) in perforator flaps. METHODS: A total of 72 rats were divided into three groups and treated with saline, GB, or GB + tunicamycin (TM; an ER stress activator) for seven consecutive days, respectively. Apoptosis was assayed by determining the Bax/Bcl-2 ratio and caspase-3 level. Endoplasmic reticulum (ER) stress markers (CHOP, GRP78, and caspase-12) were detected by Western blot analysis. Oxidative stress was assessed by measuring superoxide dismutase (SOD) activity and malondialdehyde (MDA) levels, as well as heme oxygenase-1 (HO-1) and nuclear factor erythroid 2-related factor 2 (Nrf2) mRNA levels in the flaps. The percentage flap survival area and blood flow were assessed on postoperative day (POD) 7. Angiogenesis was visualized by hematoxylin and eosin and CD34 staining on POD 7. RESULTS: GB increased the survival of perforator flaps; the flap survival areas of the GB, GB + TM, and control groups were 90.83 ± 1.93%, 70.93 ± 4.13%, and 62.97 ± 6.50%, respectively. GB decreased the Bax/Bcl-2 ratio and caspase-3 level. ER stress-related proteins were downregulated by GB. GB also decreased the MDA level and increased SOD activity and HO-1 and Nrf2 mRNA levels in the flaps. Further, GB induced regeneration of vascular vessels in comparison with saline or GB + TM. CONCLUSIONS: GB increased angiogenesis and alleviated oxidative stress by inhibiting ER stress, which increased the survival of perforator flaps. In contrast, GB + TM reduced angiogenesis and induced oxidative stress by activating ER stress, decreasing the survival of perforator flaps.


Subjects
Perforator Flap , Animals , Apoptosis , Endoplasmic Reticulum Stress , Ginkgolides , Lactones , Oxidative Stress , Rats , Sprague-Dawley Rats
8.
Med Image Anal ; 67: 101840, 2021 01.
Article in English | MEDLINE | ID: mdl-33188996

ABSTRACT

Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
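One of the published Models Genesis image transformations, local pixel shuffling, can be sketched as follows; the window size, toy image, and seed are illustrative assumptions, and the model trained to undo this deformation is not shown.

```python
# Hedged sketch of the restoration pretext: deform a patch with local
# pixel shuffling, then train a model to restore the original. The
# shuffle destroys local texture while preserving global anatomy.
import random

def local_pixel_shuffle(image, window=2, seed=0):
    """Shuffle pixels inside each non-overlapping window x window
    block of a 2D image given as nested lists."""
    rng = random.Random(seed)
    out = [row[:] for row in image]
    h, w = len(image), len(image[0])
    for r in range(0, h - window + 1, window):
        for c in range(0, w - window + 1, window):
            vals = [out[r + i][c + j]
                    for i in range(window) for j in range(window)]
            rng.shuffle(vals)
            for i in range(window):
                for j in range(window):
                    out[r + i][c + j] = vals[i * window + j]
    return out

original = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
shuffled = local_pixel_shuffle(original)
# A model would be trained to map `shuffled` back to `original`;
# each block keeps the same multiset of pixel values.
```

The learned restoration forces the network to encode common anatomical structure, which is then transferred to target tasks.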


Subjects
Three-Dimensional Imaging , Magnetic Resonance Imaging , Humans
9.
Zhonghua Yi Xue Yi Chuan Xue Za Zhi ; 37(3): 313-317, 2020 Mar 10.
Article in Chinese | MEDLINE | ID: mdl-32128750

ABSTRACT

Brachydactyly type A1 (BDA1) is the first autosomal dominant genetic disease recorded in the literature. The main characteristics of BDA1 include shortening of the middle phalanx and fusion of the middle and distal phalanges. So far, more than 100 pedigrees have been reported around the world. This paper summarizes the clinical manifestations, pathogenesis, diagnostic criteria, and treatment plan for BDA1, with the aim of improving its diagnosis and clinical management.


Subjects
Brachydactyly/diagnosis , Brachydactyly/therapy , Practice Guidelines as Topic , Humans
10.
Med Image Comput Comput Assist Interv ; 12261: 137-147, 2020 Oct.
Article in English | MEDLINE | ID: mdl-35695848

ABSTRACT

Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We compare our Semantic Genesis with all publicly available pre-trained models, whether trained by self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, encouraging deep models to learn compelling semantic representation from abundant anatomical patterns resulting from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis.

11.
Article in English | MEDLINE | ID: mdl-35713588

ABSTRACT

Contrastive representation learning is the state of the art in computer vision, but it requires huge mini-batch sizes, special network designs, or memory banks, making it unappealing for 3D medical imaging. Meanwhile, reconstruction-based self-supervised learning has reached new heights of performance in 3D medical imaging, but it lacks mechanisms to learn contrastive representations. This paper therefore proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, because it exploits the universal and intrinsic part-whole relationship to learn contrastive representations without using a contrastive loss: reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all of its parts, while reconstructing different images (wholes) from their respective parts forces the model to push parts belonging to different wholes farther apart in the latent space; the trained model is thereby capable of distinguishing images. We have evaluated Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing publicly available 3D pre-trained models, showing that Parts2Whole significantly outperforms them on two of the five tasks while achieving competitive performance on the remaining three. This superior performance is attributable to the contrastive representations learned with Parts2Whole. Codes and pretrained models are available at github.com/JLiangLab/Parts2Whole.
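The part-whole pairing that drives this training signal can be sketched as below; crop sizes, the toy image, and the 2D setting are illustrative assumptions (the paper works with 3D volumes).

```python
# Hedged sketch: every random part (crop) of an image shares the same
# reconstruction target, the whole image, so an encoder-decoder pulls
# the latent codes of same-whole parts together without any explicit
# contrastive loss.
import random

def sample_parts(image, part_h, part_w, n_parts, seed=0):
    """Return n_parts random (crop, whole) training pairs."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    pairs = []
    for _ in range(n_parts):
        r = rng.randrange(h - part_h + 1)
        c = rng.randrange(w - part_w + 1)
        crop = [row[c:c + part_w] for row in image[r:r + part_h]]
        pairs.append((crop, image))  # same whole for every part
    return pairs

whole = [[r * 4 + c for c in range(4)] for r in range(4)]
pairs = sample_parts(whole, 2, 2, 3)
assert all(target is whole for _, target in pairs)
```

Parts drawn from different wholes get different targets, which is what pushes their representations apart.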

12.
J Surg Res ; 245: 453-460, 2020 01.
Article in English | MEDLINE | ID: mdl-31445497

ABSTRACT

BACKGROUND: Leonurine (Leo), a natural active compound of Leonurus cardiaca, has been shown to possess various biological activities. However, it is not known whether Leo promotes perforator flap survival. METHODS: In this study, a perforator flap was outlined on the rat dorsum. The rats that survived surgery were randomly divided into control and Leo groups (n = 36 per group). Flap viability, flap perfusion, and levels of proteins linked to oxidative stress, apoptosis, and angiogenesis were evaluated. RESULTS: Relative to the control group, the Leo group showed a significantly higher flap survival percentage (70.5% versus 90.2%, P < 0.05) and blood perfusion (197.1 versus 286.3, P < 0.05). Leo also increased mean vessel density 1.8-fold and upregulated vascular endothelial growth factor protein expression 2.1-fold compared with the control group, both of which indicate increased angiogenesis. Moreover, it significantly inhibited apoptosis by lowering caspase-3 activity. Superoxide dismutase expression was remarkably elevated in the Leo group compared with the control group (56.0 versus 43.2 U/mg/protein, P < 0.01), but malondialdehyde quantities were significantly lower in the Leo group than in the control group (41.9 versus 57.5 nmol/mg/protein, P < 0.05). CONCLUSIONS: Leo may serve as an effective drug for improving perforator flap survival in rats via antioxidant and antiapoptotic mechanisms and promotion of angiogenesis.


Subjects
Gallic Acid/analogs & derivatives , Leonurus , Perforator Flap , Plant Extracts/therapeutic use , Tissue Survival/drug effects , Animals , Apoptosis/drug effects , Preclinical Drug Evaluation , Gallic Acid/pharmacology , Gallic Acid/therapeutic use , Male , Physiological Neovascularization/drug effects , Oxidative Stress/drug effects , Phytotherapy , Plant Extracts/pharmacology , Sprague-Dawley Rats , Vascular Endothelial Growth Factor A/metabolism
13.
IEEE Trans Med Imaging ; 39(6): 1856-1867, 2020 06.
Article in English | MEDLINE | ID: mdl-31841402

ABSTRACT

The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is a priori unknown, requiring extensive architecture search or an inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM). Our experiments demonstrate that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances the segmentation quality of varying-size objects, an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with the UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus.
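The redesigned skip connections can be written down as a wiring rule. The sketch below reproduces only which nodes feed decoder node X(i, j) in the paper's (depth i, column j) indexing; the convolutions, upsampling, and concatenation themselves are omitted.

```python
# Hedged sketch of the UNet++ connectivity: node X(i, j) fuses the
# dense same-level skips X(i, 0..j-1) with the upsampled output of
# the deeper node X(i+1, j-1).

def unetpp_inputs(i, j):
    """Return the list of nodes whose features feed node X(i, j)."""
    if j == 0:  # backbone encoder nodes take no skip inputs here
        return []
    skips = [("X", i, k) for k in range(j)]  # dense skip pathway
    up = [("up", ("X", i + 1, j - 1))]       # upsampled deeper node
    return skips + up

print(unetpp_inputs(0, 2))
# [('X', 0, 0), ('X', 0, 1), ('up', ('X', 1, 1))]
```

Because each column j adds one more same-level skip, intermediate nodes gradually bridge the semantic gap between the encoder and decoder feature maps.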


Subjects
Computer-Assisted Image Processing , Neural Networks (Computer) , Magnetic Resonance Imaging , X-Ray Computed Tomography
14.
J Plast Reconstr Aesthet Surg ; 72(4): 636-641, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30661916

ABSTRACT

PURPOSE: To evaluate nail appearance after nail fusion plasty to treat thumb duplication. METHODS: A modified form of nail fusion plasty was performed on 17 reconstructed thumbs of 16 children with thumb duplications, commencing in January 2010. We assessed nail width and nail, lunular, and nail fold deformities using the Wang-Gao scoring system. All 17 thumbs were evaluated over an average of 32 months (range, 12-48 months) of follow-up. RESULTS: One patient with bilateral thumb deformities was excluded. The width ratios of 15 reconstructed nails (compared with those of the contralateral thumbs) were 82-118% (average, 97%). Nine thumbs exhibited nail ridges or gaps; the average ridge/gap score was 1.23 (maximum, 2). Six thumbs exhibited lunular deformities; the average score was 1.58 (maximum, 2). A further six thumbs showed nail fold deformities; the average score was 1.64 (maximum, 2). Only one thumb exhibited nail dehiscence. Two thumbs had no nail deformity. The final assessments were excellent in 14 cases, good in 2 cases, and fair in 1 case. CONCLUSIONS: We could not significantly reduce the deformity rate of the nail plate, nail fold, or lunula using our new technique, but the deformities were much less marked than with previous techniques. Nail fusion plasty usefully enlarges the nail and pulp in patients with hypoplastically duplicated thumbs.


Subjects
Hand Deformities/surgery , Nails/surgery , Plastic Surgery Procedures/methods , Thumb/abnormalities , Esthetics , Female , Humans , Infant , Male , Nails/pathology , Thumb/surgery
15.
Proc IEEE Int Conf Comput Vis ; 2019: 191-200, 2019 Nov.
Article in English | MEDLINE | ID: mdl-32612486

ABSTRACT

Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raise an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN "virtually heal" anyone by turning a medical image with an unknown health status (diseased or healthy) into a healthy one, so that diseased regions could be revealed by subtracting the two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency losses. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.
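The conditional identity loss that supervises same-domain translation can be sketched as follows; the toy generator, the flat-vector images, and the L1 penalty are illustrative assumptions standing in for the real networks.

```python
# Hedged sketch: when the target domain equals the source domain, the
# generator is penalized (L1) for changing the image at all, which is
# what encourages fixed-point translation.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def conditional_identity_loss(generator, image, source_domain):
    """Translate the image to its *own* domain; penalize any change."""
    return l1(generator(image, source_domain), image)

# Toy generator: brightens images sent to domain 1, leaves domain 0 alone.
def toy_gen(image, domain):
    return [x + 0.5 for x in image] if domain == 1 else list(image)

img = [0.2, 0.4, 0.6]
print(conditional_identity_loss(toy_gen, img, 0))  # 0.0 (fixed point held)
```

For a domain-1 source image the same loss would be nonzero here, exposing a generator that edits pixels it should leave untouched.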

16.
Med Image Comput Comput Assist Interv ; 11767: 384-393, 2019 Oct.
Article in English | MEDLINE | ID: mdl-32766570

ABSTRACT

Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. However, to fit this paradigm, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D approaches, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of our Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated yet recurrent anatomy in medical images can serve as strong supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.

17.
J Digit Imaging ; 32(2): 290-299, 2019 04.
Article in English | MEDLINE | ID: mdl-30402668

ABSTRACT

Cardiovascular disease (CVD) is the number one killer in the USA, yet it is largely preventable (World Health Organization 2011). To prevent CVD, carotid intima-media thickness (CIMT) imaging, a noninvasive ultrasonography method, has proven to be clinically valuable in identifying at-risk persons before adverse events. Researchers are developing systems to automate CIMT video interpretation based on deep learning, but such efforts are impeded by the lack of large annotated CIMT video datasets. CIMT video annotation is not only tedious, laborious, and time consuming, but also demanding of costly, specialty-oriented knowledge and skills, which are not easily accessible. To dramatically reduce the cost of CIMT video annotation, this paper makes three main contributions. Our first contribution is a new concept, called Annotation Unit (AU), which simplifies the entire CIMT video annotation process down to six simple mouse clicks. Our second contribution is a new algorithm, called AFT (active fine-tuning), which naturally integrates active learning and transfer learning (fine-tuning) into a single framework. AFT starts directly with a pre-trained convolutional neural network (CNN), focuses on selecting the most informative and representative AUs from the unannotated pool for annotation, and then fine-tunes the CNN by incorporating newly annotated AUs in each iteration to enhance the CNN's performance gradually. Our third contribution is a systematic evaluation, which shows that, in comparison with the state-of-the-art method (Tajbakhsh et al., IEEE Trans Med Imaging 35(5):1299-1312, 2016), our method can cut the annotation cost by >81% relative to their training from scratch and >50% relative to their random selection. This performance is attributed to the advanced active, continual learning capability of our AFT method.


Subjects
Carotid Arteries/diagnostic imaging , Carotid Intima-Media Thickness/classification , Machine Learning , Ultrasonography/methods , Video Recording , Humans
18.
Zhongguo Xiu Fu Chong Jian Wai Ke Za Zhi ; 32(7): 827-831, 2018 07 15.
Article in Chinese | MEDLINE | ID: mdl-30129303

ABSTRACT

Surgery is still the main treatment for congenital polydactyly, and the aim of surgical reconstruction is to obtain a thumb with excellent function and appearance. A systematic assessment of the polydactyly is required prior to surgery, including bone stress lines, joint deviation, joint activity, joint instability, and the size and development of the finger and nail. Bone shape, joint incongruency, and abnormal tendon insertions must be corrected completely in order to obtain good function and to avoid secondary surgery. The Bilhaut-Cloquet procedure can reconstruct the size of the finger and nail. Careful manipulation can reduce postoperative nail deformity, so that the reconstructed nail reaches a satisfactory aesthetic score.


Subjects
Plastic Surgery Procedures , Polydactyly , Thumb , Esthetics , Humans , Polydactyly/surgery
19.
Article in English | MEDLINE | ID: mdl-32613207

ABSTRACT

In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim at reducing the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ in comparison with U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
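The IoU metric quoted in the results can be computed as below for flattened binary masks; the toy masks and the convention for an empty union are illustrative assumptions.

```python
# Hedged sketch: intersection over union (IoU, Jaccard index) between
# a binary prediction and a ground-truth mask, both flattened to 1-D.

def iou(pred, target):
    """pred, target: equal-length sequences of 0/1 pixels."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # both empty: perfect match

print(iou([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.3333333333333333 (1/3)
```

The reported gains are differences in this quantity, averaged over datasets, expressed in points.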

20.
Biomed Pharmacother ; 97: 45-52, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29080457

ABSTRACT

Aloperine (ALO) is a novel alkaloid extracted from Sophora alopecuroides that exerts anti-inflammatory, anti-allergenic, antitumor, and antiviral effects. In our study, we evaluated the effects and underlying mechanisms of ALO on MG-63 and U2OS osteosarcoma (OS) cells. ALO suppressed the proliferation and clonogenicity of both cell lines in a dose- and time-dependent manner, as observed by CCK-8 and clonogenic survival assays. Morphological observation, DAPI staining, and flow cytometry showed that ALO induced apoptosis of OS cells, and the results of western blotting and qRT-PCR indicated that ALO upregulated Bax and cleaved caspase-3 at the protein and mRNA levels while downregulating Bcl-2. In addition, ALO inhibited the invasion of MG-63 and U2OS cells, as shown by transwell invasion assay. MMP-2 and MMP-9 protein and mRNA levels were decreased by ALO treatment. ALO also downregulated the protein and mRNA expression of PI3K and p-AKT1. In conclusion, ALO induced apoptosis and inhibited invasion in MG-63 and U2OS cells, possibly through suppression of the PI3K/AKT signaling pathway.


Subjects
Alkaloids/pharmacology , Apoptosis/drug effects , Bone Neoplasms/metabolism , Osteosarcoma/metabolism , Piperidines/pharmacology , Apoptosis/physiology , Bone Neoplasms/pathology , Tumor Cell Line , Cell Survival/drug effects , Cell Survival/physiology , Humans , Neoplasm Invasiveness/pathology , Neoplasm Invasiveness/prevention & control , Osteosarcoma/pathology , Quinolizidines , Signal Transduction/drug effects , Signal Transduction/physiology