Results 1 - 11 of 11
1.
Med Image Anal; 95: 103156, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38603844

ABSTRACT

State-of-the-art multi-organ CT segmentation relies on deep learning models, which generalize only when trained on large samples of carefully curated data. However, it is challenging to train a single model that can segment all organs and tumor types, since most large datasets are partially labeled or are acquired across multiple institutes whose acquisition protocols may differ. A possible solution is federated learning, which is often used to train models on multi-institutional datasets where data is not shared across sites. However, federated predictions can become unreliable after the model is locally updated at a site, due to 'catastrophic forgetting'. Here, we address this issue with knowledge distillation (KD), so that local training is regularized by the knowledge of a global model and of pre-trained organ-specific segmentation models. We implement the models in a multi-head U-Net architecture that learns a shared embedding space for the different organ segmentation tasks, thereby obtaining multi-organ predictions without repeated processing. We evaluate the proposed method on 8 publicly available abdominal CT datasets covering 7 organs. Of these, 889 CT volumes were used for training, 233 for internal testing, and 30 for external testing. Experimental results verify that the proposed method substantially outperforms other state-of-the-art methods in terms of accuracy, inference time, and number of parameters.
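
The local-training regularization described above can be sketched numerically. The following is a minimal NumPy illustration of knowledge distillation, not the paper's implementation: the hyper-parameters `temperature` and `alpha` are illustrative, and the teacher logits stand in for the frozen global model's predictions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_regularized_loss(student_logits, teacher_logits, labels,
                        temperature=2.0, alpha=0.5):
    """Local-site loss = supervised cross-entropy + KL toward the global
    (teacher) model. The KL term regularizes local updates so a site's
    model does not 'catastrophically forget' what the global model knows."""
    # Supervised cross-entropy against the local one-hot labels.
    p = softmax(student_logits)
    ce = -np.mean(np.sum(labels * np.log(p + 1e-12), axis=-1))
    # Distillation term: KL(teacher || student) at temperature T.
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    kl = np.mean(np.sum(t * (np.log(t + 1e-12) - np.log(s + 1e-12)), axis=-1))
    return alpha * ce + (1.0 - alpha) * temperature ** 2 * kl
```

When the local model matches the global teacher, the KL term vanishes and only the supervised term remains, so local updates are pulled back toward the global behavior.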


Subjects
Deep Learning; Tomography, X-Ray Computed; Humans; Datasets as Topic; Databases, Factual
2.
Med Image Anal; 82: 102626, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36208573

ABSTRACT

Semantic instance segmentation is crucial for many medical image analysis applications, including computational pathology and automated radiation therapy. Existing methods for this task can be roughly classified into two categories: (1) proposal-based methods and (2) proposal-free methods. However, in medical images, irregular shape variations and crowded instances (e.g., nuclei and cells) make it hard for proposal-based methods to achieve robust instance localization. On the other hand, ambiguous boundaries caused by the low-contrast nature of medical images (e.g., CT images) challenge the accuracy of proposal-free methods. To tackle these issues, we propose a proposal-free segmentation network with discriminative deep supervision (DDS), which at the same time gains the localization power of proposal-based methods. The DDS module is interleaved with a carefully designed proposal-free segmentation backbone in our network. Consequently, the features learned by the backbone network become more sensitive to instance localization. Also, with the proposed DDS module, robust pixel-wise instance-level cues (especially structural information) are introduced for semantic segmentation. Extensive experiments on three datasets, i.e., a nuclei dataset, a pelvic CT image dataset, and a synthetic dataset, demonstrate the superior performance of the proposed algorithm compared to previous works.
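
As context for the proposal-free setting discussed above: the step that turns a semantic foreground map into instances is typically connected-component labeling. A minimal 4-connectivity sketch in NumPy (not code from the paper):

```python
import numpy as np
from collections import deque

def label_instances(mask):
    """Assign each 4-connected foreground region of a binary mask a
    distinct integer id, turning a semantic map into instances."""
    labels = np.zeros_like(mask, dtype=int)
    next_id = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                next_id += 1
                labels[i, j] = next_id
                q = deque([(i, j)])  # breadth-first flood fill
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_id
                            q.append((ny, nx))
    return labels
```

This post-processing is exactly where ambiguous boundaries hurt proposal-free methods: if two touching nuclei share foreground pixels, they collapse into one component.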


Assuntos
Algoritmos , Semântica , Humanos , Pelve
3.
JBJS Rev; 9(12), 2021 Dec 22.
Article in English | MEDLINE | ID: mdl-34936580

ABSTRACT

BACKGROUND: There is increasing evidence supporting the association between frailty and adverse outcomes after surgery. There is, however, no consensus on how frailty should be assessed and used to inform treatment. In this review, we aimed to synthesize the current literature on the use of frailty as a predictor of adverse outcomes following orthopaedic surgery by (1) identifying the frailty instruments used and (2) evaluating the strength of the association between frailty and adverse outcomes after orthopaedic surgery. METHODS: A systematic review was performed using PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. PubMed, Scopus, and the Cochrane Central Register of Controlled Trials were searched to identify articles that reported on outcomes after orthopaedic surgery within frail populations. Only studies that defined frail patients using a frailty instrument were included. The methodological quality of studies was assessed using the Newcastle-Ottawa Scale (NOS). Study demographic information, frailty instrument information (e.g., number of items, domains included), and clinical outcome measures (including mortality, readmissions, and length of stay) were collected and reported. RESULTS: The initial search yielded 630 articles. Of these, 177 articles underwent full-text review; 82 articles were ultimately included and analyzed. The modified frailty index (mFI) was the most commonly used frailty instrument (38% of the studies used the mFI-11 [11-item mFI], and 24% of the studies used the mFI-5 [5-item mFI]), although a large variety of instruments were used (24 different instruments identified). Total joint arthroplasty (22%), hip fracture management (17%), and adult spinal deformity management (15%) were the most frequently studied procedures. Complications (71%) and mortality (51%) were the most frequently reported outcomes; 17% of studies reported on a functional outcome. 
CONCLUSIONS: There is no consensus on the best approach to defining frailty among orthopaedic surgery patients, although instruments based on the accumulation-of-deficits model (such as the mFI) were the most common. Frailty was highly associated with adverse outcomes, but the majority of the studies were retrospective and did not identify frailty prospectively in a prediction model. Although many outcomes were described (complications and mortality being the most common), there was a considerable amount of heterogeneity in measurement strategy and subsequent strength of association. Future investigations evaluating the association between frailty and orthopaedic surgical outcomes should focus on prospective study designs, long-term outcomes, and assessments of patient-reported outcomes and/or functional recovery scores. CLINICAL RELEVANCE: Preoperatively identifying high-risk orthopaedic surgery patients through frailty instruments has the potential to improve patient outcomes. Frailty screenings can create opportunities for targeted intervention efforts and guide patient-provider decision-making.
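
The accumulation-of-deficits model highlighted in the conclusions reduces to a simple count. A minimal sketch, with five mFI-5-style items shown for illustration (item wording varies across studies):

```python
def modified_frailty_index(deficits):
    """Accumulation-of-deficits model: the frailty index is the fraction
    of assessed items that are present (the mFI-5 assesses five items)."""
    return sum(bool(v) for v in deficits.values()) / len(deficits)

# Illustrative mFI-5 items for one patient (wording varies across studies).
patient = {
    "diabetes": True,
    "hypertension_on_medication": True,
    "copd_or_recent_pneumonia": False,
    "congestive_heart_failure": False,
    "dependent_functional_status": False,
}
```

Two of five deficits yields an index of 0.4; studies commonly dichotomize the index (e.g., treating two or more deficits as frail) to stratify perioperative risk.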


Assuntos
Fragilidade , Procedimentos Ortopédicos , Ortopedia , Adulto , Fragilidade/complicações , Fragilidade/diagnóstico , Humanos , Procedimentos Ortopédicos/efeitos adversos , Estudos Prospectivos , Estudos Retrospectivos
4.
Med Image Anal; 71: 102039, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33831595

ABSTRACT

Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, conventional FCNs are typically trained with the cross-entropy or Dice loss, which only measures the error between predictions and ground-truth labels for each pixel individually. This often results in non-smooth neighborhoods in the predicted segmentation, a problem that becomes more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method learns more representative voxel-level features than conventional training with the cross-entropy or Dice loss, and the comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
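
The voxel-wise tuple sampling and metric loss described above can be sketched as follows. This is an illustrative NumPy version, not the paper's code: `sample_voxel_tuples` assumes flattened binary labels, and the margin value is arbitrary.

```python
import numpy as np

def voxel_triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on voxel-level feature vectors: pull same-class
    voxel embeddings together, push different-class embeddings apart."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

def sample_voxel_tuples(features, labels, rng):
    """Online sampling: draw one (anchor, positive, negative) voxel triplet
    from an intermediate feature map, guided by the ground-truth labels."""
    fg = np.flatnonzero(labels == 1)   # prostate voxels
    bg = np.flatnonzero(labels == 0)   # background voxels
    a, p = rng.choice(fg, 2, replace=False)
    n = rng.choice(bg)
    return features[a], features[p], features[n]
```

Because the triplets are drawn from feature maps during the forward pass, the metric loss shapes the embedding space jointly with the segmentation loss rather than from pre-built image-level pairs.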


Assuntos
Próstata , Tomografia Computadorizada por Raios X , Humanos , Processamento de Imagem Assistida por Computador , Masculino , Próstata/diagnóstico por imagem
5.
IEEE Trans Cybern; 51(4): 2153-2165, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31869812

ABSTRACT

Automatic pancreas segmentation is crucial to the diagnostic assessment of diabetes and pancreatic cancer. However, the relatively small size of the pancreas in the upper body, as well as large variations in its location and shape within the retroperitoneum, make the segmentation task challenging. To alleviate these challenges, in this article we propose a cascaded multitask 3-D fully convolutional network (FCN) to automatically segment the pancreas. Our cascaded network is composed of two parts: the first part quickly locates the pancreas region, and the second part uses a multitask FCN with dense connections to refine the segmentation map for fine voxel-wise segmentation. In particular, the multitask FCN with dense connections simultaneously performs voxel-wise segmentation and skeleton extraction of the pancreas. These two tasks are complementary: the extracted skeleton provides rich information about the shape and size of the pancreas in the retroperitoneum, which can boost the segmentation of the pancreas. The multitask FCN is also designed to share low- and mid-level features across the tasks, and a feature consistency module is further introduced to enhance the connection and fusion of different levels of feature maps. Evaluations on two pancreas datasets demonstrate the robustness of our proposed method in correctly segmenting the pancreas under various settings. Our experimental results outperform both baseline and state-of-the-art methods. Moreover, the ablation study shows that the proposed modules are critical for effective multitask learning.
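
The first, localization part of such a cascade often amounts to cropping a padded bounding box around a coarse prediction, so the refinement network sees a small region instead of the whole volume. A minimal sketch (the `margin` padding is an illustrative choice, not from the paper):

```python
import numpy as np

def roi_from_coarse_mask(coarse_mask, margin=2):
    """Stage 1 of a cascaded pipeline: derive a padded bounding box around
    the coarse pancreas prediction, so stage 2 can segment the cropped ROI.
    Works for 2-D or 3-D masks (one slice per axis)."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, coarse_mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))
```

Usage: `roi = roi_from_coarse_mask(stage1_output); fine_seg = stage2(volume[roi])`, where `stage1_output` and `stage2` are placeholders for the two networks.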


Assuntos
Interpretação de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Redes Neurais de Computação , Pâncreas/diagnóstico por imagem , Humanos , Neoplasias Pancreáticas/diagnóstico por imagem
6.
Inf Process Med Imaging; 12729: 321-333, 2021 Jun.
Article in English | MEDLINE | ID: mdl-35173402

ABSTRACT

Multi-modal MRIs are widely used in neuroimaging applications since different MR sequences provide complementary information about brain structures. Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) information into separate image representations. In this work, we challenge mainstream strategies by showing that they do not naturally lead to representation disentanglement, either in theory or in practice. To address this issue, we propose a margin loss that regularizes the similarity relationships of the representations across subjects and modalities. To enable robust training, we further use a conditional convolution to design a single model for encoding images of all modalities. Lastly, we propose a fusion function to combine the disentangled anatomical representations into a set of modality-invariant features for downstream tasks. We evaluate the proposed method on three multi-modal neuroimaging datasets. Experiments show that our proposed method achieves superior disentangled representations compared to existing disentanglement strategies. Results also indicate that the fused anatomical representation has potential in the downstream tasks of zero-dose PET reconstruction and brain tumor segmentation.
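
A margin loss over representation similarities can be sketched in a simplified pairwise form. This is an illustrative reduction, not the paper's formulation (which regularizes similarity relationships across subjects and modalities more generally); the margin value is arbitrary.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def representation_margin_loss(same_subject_sim, cross_subject_sim, margin=0.2):
    """Hinge on similarity relationships: the anatomical codes of one
    subject seen in two modalities should be more similar than codes of
    two different subjects, by at least `margin`."""
    return max(0.0, cross_subject_sim - same_subject_sim + margin)
```

The hinge is zero once the desired ordering holds with the required gap, so the regularizer only pushes on representations that violate the cross-subject/cross-modality relationship.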

7.
Article in English | MEDLINE | ID: mdl-31226074

ABSTRACT

Automatic image segmentation is an essential step for many medical image analysis applications, including computer-aided radiation therapy, disease diagnosis, and treatment effect evaluation. One of the major challenges for this task is the blurry nature of medical images (e.g., CT, MR, and microscopic images), which often results in low contrast and vanishing boundaries. With the recent advances in convolutional neural networks, vast improvements have been made in image segmentation, mainly based on skip-connection-linked encoder-decoder architectures. However, in many applications (with adjacent targets in blurry images), these models often fail to accurately locate complex boundaries and to properly segment tiny isolated parts. In this paper, we present a method for blurry medical image segmentation and argue that skip connections alone are not enough to accurately locate indistinct boundaries. Accordingly, we propose a novel high-resolution multi-scale encoder-decoder network (HMEDN), in which multi-scale dense connections are introduced into the encoder-decoder structure to finely exploit comprehensive semantic information. Besides skip connections, extra deeply supervised high-resolution pathways (composed of densely connected dilated convolutions) are integrated to collect high-resolution semantic information for accurate boundary localization. These pathways are paired with a difficulty-guided cross-entropy loss function and a contour regression task to enhance the quality of boundary detection. Extensive experiments on a pelvic CT image dataset, a multi-modal brain tumor dataset, and a cell segmentation dataset show the effectiveness of our method for 2D/3D semantic segmentation and 2D instance segmentation, respectively. Our experimental results also show that, besides network complexity, the resolution of the semantic feature maps can largely affect overall model performance; for a given task, finding a balance between these two factors can further improve the network's performance.
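
A difficulty-guided cross-entropy can be approximated by up-weighting poorly predicted pixels. The sketch below uses a focal-style weight `(1 - p_true)**gamma` as an illustrative stand-in for the paper's difficulty guidance; `gamma` is an assumed hyper-parameter.

```python
import numpy as np

def difficulty_guided_ce(probs, labels, gamma=2.0):
    """Cross-entropy where hard (poorly predicted) pixels get higher weight.
    `probs` holds foreground probabilities; `labels` holds binary targets.
    Easy pixels (p_true near 1) are down-weighted, focusing the loss on
    difficult regions such as indistinct boundaries."""
    p_true = np.where(labels == 1, probs, 1.0 - probs)
    w = (1.0 - p_true) ** gamma
    return float(np.mean(-w * np.log(p_true + 1e-12)))
```

Boundary pixels in low-contrast images tend to have low `p_true`, so they dominate the loss and receive most of the gradient signal.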

8.
Sci Rep; 9(1): 1103, 2019 Jan 31.
Article in English | MEDLINE | ID: mdl-30705340

ABSTRACT

High-grade gliomas are the most aggressive malignant brain tumors. Accurate pre-operative prognosis for this cohort can lead to better treatment planning. Conventional survival prediction based on clinical information is subjective and can be inaccurate. Recent radiomics studies have shown better prognosis by using carefully engineered image features from magnetic resonance images (MRI). However, feature engineering is usually time-consuming, laborious, and subjective, and, most importantly, engineered features cannot effectively encode other predictive but implicit information provided by multi-modal neuroimages. We propose a two-stage learning-based method to predict the overall survival (OS) time of high-grade glioma patients. In the first stage, we adopt deep learning to automatically extract implicit, high-level features from multi-modal, multi-channel preoperative MRI such that the features are predictive of survival time. Specifically, we utilize not only contrast-enhanced T1 MRI but also diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) to compute multiple metric maps (including various diffusivity metric maps derived from DTI, as well as frequency-specific brain fluctuation amplitude maps and local functional connectivity anisotropy-related metric maps derived from rs-fMRI) from 68 high-grade glioma patients with different survival times. We propose a multi-channel architecture of 3D convolutional neural networks (CNNs) for deep learning on these metric maps, from which high-level predictive features are extracted for each individual patch of the maps. In the second stage, these deeply learned features, along with pivotal demographic and tumor-related features (such as age, tumor size, and histological type), are fed into a support vector machine (SVM) to generate the final prediction (i.e., long or short overall survival time). The experimental results demonstrate that this multi-modal, multi-channel deep survival prediction framework achieves an accuracy of 90.66%, outperforming all competing methods. This study indicates the effectiveness of deep learning for prognosis in neuro-oncological applications, supporting better individualized treatment planning toward precision medicine.
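
The second-stage classification step, where per-patient feature vectors (deep features plus clinical variables) are mapped to a survival group, can be sketched with a nearest-centroid classifier standing in for the SVM. The stand-in is an assumption made to keep the sketch dependency-free; the paper itself uses an SVM.

```python
import numpy as np

def fit_centroids(features, labels):
    """Fit a nearest-centroid classifier on per-patient feature vectors
    (e.g., deep features concatenated with clinical variables)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_os_group(centroids, x):
    """Predict the survival group (e.g., 0 = short OS, 1 = long OS) for
    one patient by distance to the class centroids."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

An SVM would instead learn a maximum-margin boundary between the two groups, but the overall pipeline shape (features in, binary survival group out) is the same.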


Assuntos
Algoritmos , Neoplasias Encefálicas , Bases de Dados Factuais , Aprendizado Profundo , Imagem de Tensor de Difusão , Glioma , Adolescente , Adulto , Idoso , Neoplasias Encefálicas/diagnóstico por imagem , Neoplasias Encefálicas/mortalidade , Intervalo Livre de Doença , Feminino , Glioma/diagnóstico por imagem , Glioma/mortalidade , Humanos , Masculino , Pessoa de Meia-Idade , Gradação de Tumores , Taxa de Sobrevida
9.
IEEE Trans Image Process; 25(7): 3303-3315, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27187957

ABSTRACT

Positron emission tomography (PET) images are widely used in many clinical applications, such as tumor detection and brain disorder diagnosis. To obtain PET images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which will inevitably increase the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting images would be significantly degraded. It is of great interest to estimate a standard-dose PET (S-PET) image from a low-dose one in order to reduce the risk of radiation exposure and preserve image quality. This may be achieved through mapping both S-PET and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the estimation accuracy. In this paper, we propose a data-driven multi-level canonical correlation analysis scheme to solve this problem. In particular, a subset of training data that is most useful in estimating a target S-PET patch is identified in each level, and then used in the next level to update common space and improve estimation. In addition, we also use multi-modal magnetic resonance images to help improve the estimation with complementary information. Validations on phantom and real human brain data sets show that our method effectively estimates S-PET images and well preserves critical clinical quantification measures, such as standard uptake value.
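
The common-space mapping described above rests on canonical correlation analysis. Below is a minimal single-level NumPy sketch; the paper's scheme is multi-level and data-driven, and the `eps` regularization here is an added numerical safeguard.

```python
import numpy as np

def cca(X, Y, k=1, eps=1e-8):
    """Canonical correlation analysis: find projections of X and Y (rows =
    samples, e.g., low-dose and standard-dose PET patches) whose images
    are maximally correlated in a common space."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0] - 1
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    # Whiten each view, then the SVD of the whitened cross-covariance
    # yields the canonical directions and correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy.T)
    A = Wx.T @ U[:, :k]    # projection for X
    B = Wy.T @ Vt[:k].T    # projection for Y
    return A, B, s[:k]     # s holds the canonical correlations
```

In the paper's multi-level scheme, the training subset most useful for a target patch is re-selected at each level and the common space is re-estimated, rather than being fit once on all patches as here.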

10.
Med Image Comput Comput Assist Interv; 9901: 212-220, 2016 Oct.
Article in English | MEDLINE | ID: mdl-28149967

ABSTRACT

High-grade glioma is the most aggressive and severe brain tumor, leading to death in almost 50% of patients within 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time-consuming, laborious, and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI, and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and propose a new network architecture for using multi-channel data and learning supervised features. Along with pivotal clinical features, we finally train a support vector machine to predict whether the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting OS time, which provides valuable insights into functional neuro-oncological applications.


Assuntos
Algoritmos , Neoplasias Encefálicas/diagnóstico por imagem , Aprendizado Profundo , Glioma/diagnóstico por imagem , Expectativa de Vida , Imageamento por Ressonância Magnética/métodos , Imagem Multimodal/métodos , Encéfalo/diagnóstico por imagem , Neoplasias Encefálicas/mortalidade , Neoplasias Encefálicas/patologia , Glioma/mortalidade , Glioma/patologia , Humanos , Gradação de Tumores , Redes Neurais de Computação , Prognóstico , Reprodutibilidade dos Testes , Sensibilidade e Especificidade
11.
Braz J Pharm Sci; 52(1): 1-13, Jan-Mar 2016.
Article in English | LILACS | ID: lil-789083

ABSTRACT

Azithromycin is a water-insoluble drug with very low bioavailability. Various techniques can be applied to increase the solubility and dissolution rate, and consequently the bioavailability, of poorly soluble drugs such as azithromycin. One such technique is solid dispersion, which is frequently used to improve the dissolution rate of poorly water-soluble compounds. Owing to its low solubility and dissolution rate, azithromycin does not have suitable bioavailability. Therefore, the main purpose of this investigation was to increase the solubility and dissolution rate of azithromycin by preparing solid dispersions with different polyethylene glycols (PEG). Solid dispersions and physical mixtures of azithromycin were prepared with PEG 4000, 6000, 8000, 12000, and 20000 in various ratios, using the solvent evaporation method. The drug release profiles showed that the dissolution rates of the physical mixtures, as well as of the solid dispersions, were higher than that of the drug alone. The infrared (IR) spectra showed no chemical incompatibility between the drug and the polymers. Drug-polymer interactions were also investigated using differential scanning calorimetry (DSC), powder X-ray diffraction (PXRD), and scanning electron microscopy (SEM). In conclusion, the dissolution rate and solubility of azithromycin improved significantly with hydrophilic carriers, especially PEG 6000.



Assuntos
Azitromicina/análise , Azitromicina/farmacologia , Dissolução/métodos , Polietilenoglicóis/análise , Solubilidade/efeitos dos fármacos