Results 1 - 20 of 46
1.
Comput Biol Med ; 178: 108676, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38878395

ABSTRACT

Novel portable diffuse optical tomography (DOT) devices for breast cancer lesions hold great promise for non-invasive, non-ionizing breast cancer screening. Critical to this capability is not just the identification of lesions but rather the complex problem of discriminating between malignant and benign lesions. To accurately reconstruct the highly heterogeneous tissue of a cancer lesion in healthy breast tissue using DOT, multiple wavelengths can be leveraged to maximize signal penetration while minimizing sensitivity to noise. However, these wavelength responses can overlap, capture common information, and correlate, potentially confounding reconstruction and downstream end tasks. We show that an orthogonal fusion loss regularizes multi-wavelength DOT leading to improved reconstruction and accuracy of end-to-end discrimination of malignant versus benign lesions. We further show that our raw-to-task model significantly reduces computational complexity without sacrificing accuracy, making it ideal for real-time throughput, desired in medical settings where handheld devices have severely restricted power budgets. Furthermore, our results indicate that image reconstruction is not necessary for unbiased classification of lesions with a balanced accuracy of 77% and 66% on the synthetic dataset and clinical dataset, respectively, using the raw-to-task model. Code is available at https://github.com/sfu-mial/FuseNet.
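The orthogonal fusion idea can be sketched with a toy penalty that discourages per-wavelength feature branches from encoding the same information. This is purely illustrative (the authors' actual loss is in the linked FuseNet repository; `orthogonal_fusion_loss` is a hypothetical name):

```python
import numpy as np

def orthogonal_fusion_loss(features):
    """Illustrative orthogonality penalty for multi-wavelength features.

    features: array of shape (n_wavelengths, batch, dim); each row is the
    embedding produced by one wavelength branch. The penalty is the mean
    squared off-diagonal entry of the Gram matrix of L2-normalized,
    batch-averaged embeddings, so correlated branches are penalized.
    """
    w = features.mean(axis=1)                      # (n_wavelengths, dim)
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    gram = w @ w.T                                 # cosine similarities
    off_diag = gram - np.eye(len(w))
    return float((off_diag ** 2).mean())

# Orthogonal branches incur zero penalty; identical branches do not.
orth = np.stack([[[1.0, 0.0]], [[0.0, 1.0]]])      # two 1-sample branches
same = np.stack([[[1.0, 0.0]], [[1.0, 0.0]]])
assert orthogonal_fusion_loss(orth) < orthogonal_fusion_loss(same)
```

Such a term would be added, with some weight, to the reconstruction and classification losses during training.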

2.
Artif Intell Med ; 148: 102751, 2024 02.
Article in English | MEDLINE | ID: mdl-38325929

ABSTRACT

Clinical evaluation evidence and model explainability are key gatekeepers to ensure the safe, accountable, and effective use of artificial intelligence (AI) in clinical settings. We conducted a clinical user-centered evaluation with 35 neurosurgeons to assess the utility of AI assistance and its explanation on the glioma grading task. Each participant read 25 brain MRI scans of patients with gliomas and judged the glioma grade first without and then with the assistance of AI prediction and explanation. The AI model was trained on the BraTS dataset with 88.0% accuracy. The AI explanation was generated using the explainable AI algorithm SmoothGrad, selected from 16 algorithms on the criterion of being truthful to the AI decision process. Results showed that, compared to the average accuracy of 82.5±8.7% when physicians performed the task alone, physicians' task performance increased significantly to 87.7±7.3% (p-value = 0.002) when assisted by AI prediction, and remained at almost the same level of 88.5±7.0% (p-value = 0.35) with the additional assistance of AI explanation. Based on quantitative and qualitative results, the observed improvement in physicians' task performance with AI prediction arose mainly because physicians' decision patterns converged toward the AI's: physicians switched their decisions only when disagreeing with the AI. The insignificant change in physicians' performance with the additional assistance of AI explanation occurred because the AI explanations did not provide explicit reasons, contexts, or descriptions of clinical features to help doctors discern potentially incorrect AI predictions. The evaluation showed the clinical utility of AI in assisting physicians on the glioma grading task, and identified the limitations and clinical usage gaps of existing explainable AI techniques for future improvement.


Subjects
Artificial Intelligence; Glioma; Humans; Algorithms; Brain; Glioma/diagnostic imaging; Neurosurgeons
3.
Med Image Anal ; 88: 102863, 2023 08.
Article in English | MEDLINE | ID: mdl-37343323

ABSTRACT

Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.


Subjects
Deep Learning; Skin Diseases; Skin Neoplasms; Humans; Neural Networks, Computer; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
4.
Cell Mol Life Sci ; 79(11): 565, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284011

ABSTRACT

Mitochondria are major sources of cytotoxic reactive oxygen species (ROS), such as superoxide and hydrogen peroxide, that when uncontrolled contribute to cancer progression. Maintaining a finely tuned, healthy mitochondrial population is essential for cellular homeostasis and survival. Mitophagy, the selective elimination of mitochondria by autophagy, monitors and maintains mitochondrial health and integrity, eliminating damaged ROS-producing mitochondria. However, mechanisms underlying mitophagic control of mitochondrial homeostasis under basal conditions remain poorly understood. The E3 ubiquitin ligase Gp78 is an endoplasmic reticulum membrane protein that induces mitochondrial fission and mitophagy of depolarized mitochondria. Here, we report that CRISPR/Cas9 knockout of Gp78 in HT-1080 fibrosarcoma cells increased mitochondrial volume, elevated ROS production, and rendered cells resistant to carbonyl cyanide m-chlorophenyl hydrazone (CCCP)-induced mitophagy. These effects were phenocopied by knockdown of the essential autophagy protein ATG5 in wild-type HT-1080 cells. Use of the mito-Keima mitophagy probe confirmed that Gp78 promoted both basal and damage-induced mitophagy. Application of a spot detection algorithm (SPECHT) to GFP-mRFP tandem fluorescent-tagged LC3 (tfLC3)-positive autophagosomes reported elevated autophagosomal maturation in wild-type HT-1080 cells relative to Gp78 knockout cells, predominantly in proximity to mitochondria. Mitophagy inhibition by either Gp78 knockout or ATG5 knockdown reduced mitochondrial potential and increased mitochondrial ROS. Live-cell analysis of tfLC3 in HT-1080 cells showed the preferential association of autophagosomes with mitochondria of reduced potential. Xenograft tumors of HT-1080 knockout cells showed increased labeling for mitochondria and the cell proliferation marker Ki67 and reduced labeling for the TUNEL cell death reporter. Basal Gp78-dependent mitophagic flux is, therefore, selectively associated with reduced-potential mitochondria, promoting maintenance of a healthy mitochondrial population and limiting ROS production and tumor cell proliferation.


Subjects
Mitophagy; Superoxides; Humans; Carbonyl Cyanide m-Chlorophenyl Hydrazone/pharmacology; Reactive Oxygen Species/metabolism; Ki-67 Antigen/metabolism; Superoxides/metabolism; Hydrogen Peroxide/pharmacology; Mitochondria/metabolism; Ubiquitin-Protein Ligases/genetics; Ubiquitin-Protein Ligases/metabolism; Autophagy/genetics
5.
Comput Methods Programs Biomed ; 219: 106750, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35381490

ABSTRACT

BACKGROUND AND OBJECTIVE: Radiomics and deep learning have emerged as two distinct approaches to medical image analysis. However, their relative expressive power remains largely unknown. Theoretically, hand-crafted radiomic features represent a mere subset of features that neural networks can approximate, thus making deep learning a more powerful approach. On the other hand, automated learning of hand-crafted features may require a prohibitively large number of training samples. Here we directly test the ability of convolutional neural networks (CNNs) to learn and predict the intensity, shape, and texture properties of tumors as defined by standardized radiomic features. METHODS: Conventional 2D and 3D CNN architectures with an increasing number of convolutional layers were trained to predict the values of 16 standardized radiomic features from real and synthetic PET images of tumors, and tested. In addition, several ImageNet-pretrained advanced networks were tested. A total of 4000 images were used for training, 500 for validation, and 500 for testing. RESULTS: Features quantifying size and intensity were predicted with high accuracy, while shape irregularity and heterogeneity features had very high prediction errors and generalized poorly. For example, mean normalized prediction error of tumor diameter with a 5-layer CNN was 4.23 ± 0.25, while the error for tumor sphericity was 15.64 ± 0.93. We additionally found that learning shape features required an order of magnitude more samples compared to intensity and size features. CONCLUSIONS: Our findings imply that CNNs trained to perform various image-based clinical tasks may generally under-utilize the shape and texture information that is more easily captured by radiomics. We speculate that to improve the CNN performance, shape and texture features can be computed explicitly and added as auxiliary variables to the networks, or supplied as synthetic inputs.
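For reference, the kind of standardized shape feature the networks were asked to regress can be computed directly from a mask. Below is a minimal sketch of sphericity for a binary voxel mask, assuming unit voxel spacing and a simple face-counting surface-area estimate (the standardized definition the paper follows may estimate surface area differently):

```python
import numpy as np

def sphericity(mask):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A for a binary voxel mask,
    one of the standardized shape features the abstract reports CNNs
    struggle to regress. Surface area A is counted as the number of
    exposed voxel faces; V is the voxel count."""
    v = mask.sum()
    padded = np.pad(mask, 1)
    a = 0
    for axis in range(3):                   # exposed faces along each axis
        a += np.abs(np.diff(padded, axis=axis)).sum()
    return np.pi ** (1 / 3) * (6 * v) ** (2 / 3) / a

cube = np.ones((8, 8, 8), dtype=int)
print(round(sphericity(cube), 3))           # → 0.806, i.e. (pi/6)^(1/3)
```

A perfect sphere has sphericity 1; the cube's value of (π/6)^(1/3) ≈ 0.806 is independent of its size under this face-counting convention.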


Subjects
Deep Learning; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Neural Networks, Computer
6.
IEEE Trans Med Imaging ; 41(3): 515-530, 2022 03.
Article in English | MEDLINE | ID: mdl-34606449

ABSTRACT

Diffuse optical tomography (DOT) leverages near-infrared light propagation through tissue to assess its optical properties and identify abnormalities. DOT image reconstruction is an ill-posed problem due to the highly scattered photons in the medium and the smaller number of measurements compared to the number of unknowns. Limited-angle DOT reduces probe complexity at the cost of increased reconstruction complexity. Reconstructions are thus commonly marred by artifacts and, as a result, it is difficult to obtain an accurate reconstruction of target objects, e.g., malignant lesions. Reconstruction does not always ensure good localization of small lesions. Furthermore, conventional optimization-based reconstruction methods are computationally expensive, rendering them too slow for real-time imaging applications. Our goal is to develop a fast and accurate image reconstruction method using deep learning, where multitask learning ensures accurate lesion localization in addition to improved reconstruction. We apply spatial-wise attention and a distance transform based loss function in a novel multitask learning formulation to improve localization and reconstruction compared to single-task optimized methods. Given the scarcity of real-world sensor-image pairs required for training supervised deep learning models, we leverage physics-based simulation to generate synthetic datasets and use a transfer learning module to align the sensor domain distribution between in silico and real-world data, while taking advantage of cross-domain learning. Applying our method, we find that we can reconstruct and localize lesions faithfully while allowing real-time reconstruction. We also demonstrate that the present algorithm can reconstruct multiple cancer lesions. The results demonstrate that multitask learning provides sharper and more accurate reconstruction.
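A distance-transform-based loss of the general kind described might look as follows. This is an illustrative sketch, not the paper's exact formulation; `distance_weighted_loss` is a hypothetical name, and the brute-force transform stands in for `scipy.ndimage.distance_transform_edt`:

```python
import numpy as np

def distance_to_lesion(target):
    """Brute-force Euclidean distance from every pixel to the nearest
    lesion pixel (scipy's distance_transform_edt is the fast equivalent)."""
    ys, xs = np.nonzero(target)
    lesion = np.stack([ys, xs], axis=1)              # (n_lesion, 2)
    grid = np.indices(target.shape).reshape(2, -1).T
    d = np.sqrt(((grid[:, None, :] - lesion[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).reshape(target.shape)

def distance_weighted_loss(pred, target):
    """Squared error weighted by (1 + distance to the lesion), so signal
    reconstructed far from the true lesion is penalized more heavily
    than signal that is only slightly mislocalized."""
    w = 1.0 + distance_to_lesion(target)
    return float(((pred - target) ** 2 * w).mean())

target = np.zeros((32, 32)); target[14:18, 14:18] = 1.0
near = np.zeros_like(target); near[15:19, 15:19] = 1.0   # 1-pixel shift
far = np.zeros_like(target);  far[2:6, 2:6] = 1.0        # distant blob
assert distance_weighted_loss(near, target) < distance_weighted_loss(far, target)
```

The asymmetry is the point: a plain squared error would score `near` and `far` similarly, while the distance weighting makes mislocalization itself costly.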


Subjects
Deep Learning; Tomography, Optical; Algorithms; Artifacts; Image Processing, Computer-Assisted/methods; Tomography, Optical/methods
7.
Comput Med Imaging Graph ; 90: 101924, 2021 06.
Article in English | MEDLINE | ID: mdl-33895621

ABSTRACT

Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which cannot be performed by a conventional convolutional neural network (CNN). The linear basis function of our learnable image histogram is piecewise differentiable, enabling back-propagation of errors to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC into different intensity spectra, which enables efficient Fuhrman low (I/II) versus high (III/IV) grading as well as RCC low (I/II) versus high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
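The triangular, piecewise-linear basis described can be sketched as a forward pass. This is a minimal illustration of the idea, not the paper's exact layer; the triangular basis max(0, 1 − |x − c_b| / w_b) is one standard choice whose subgradients with respect to the centers and widths exist almost everywhere:

```python
import numpy as np

def learnable_histogram(x, centers, widths):
    """Forward pass of a learnable-histogram layer: each bin b responds
    with the triangular basis max(0, 1 - |x - c_b| / w_b), and responses
    are averaged over the input voxels, yielding soft bin counts.
    Because the basis is piecewise linear, gradients w.r.t. centers and
    widths exist almost everywhere, so both can be updated by
    back-propagation during training."""
    x = x.reshape(-1, 1)                           # (n_voxels, 1)
    act = np.maximum(0.0, 1.0 - np.abs(x - centers) / widths)
    return act.mean(axis=0)                        # soft histogram

voxels = np.array([0.1, 0.1, 0.5, 0.9])
centers = np.array([0.0, 0.5, 1.0])
widths = np.array([0.5, 0.5, 0.5])
print(learnable_histogram(voxels, centers, widths))   # → [0.4 0.4 0.2]
```

Unlike a fixed histogram, the bin placement here is part of the model: gradient descent moves `centers` and `widths` to whatever intensity spectra best separate the grading/staging classes.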


Subjects
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Female; Humans; Kidney; Kidney Neoplasms/diagnostic imaging; Male; Neoplasm Grading; Tomography, X-Ray Computed
8.
Sci Rep ; 11(1): 7810, 2021 04 08.
Article in English | MEDLINE | ID: mdl-33833286

ABSTRACT

Caveolin-1 (CAV1), the caveolae coat protein, also associates with non-caveolar scaffold domains. Single molecule localization microscopy (SMLM) network analysis distinguishes caveolae and three scaffold domains: hemispherical S2 scaffolds and smaller S1B and S1A scaffolds. The caveolin scaffolding domain (CSD) is a highly conserved hydrophobic region that mediates interaction of CAV1 with multiple effector molecules. The F92A/V94A mutation disrupts CSD function; however, the structural impact of CSD mutation on caveolae or scaffolds remains unknown. Here, SMLM network analysis quantitatively shows that expression of the CAV1 CSD F92A/V94A mutant in CRISPR/Cas CAV1 knockout MDA-MB-231 breast cancer cells reduces the size and volume and enhances the elongation of caveolae and scaffold domains, with more pronounced effects on S2 and S1B scaffolds. Convex hull analysis of the outer surface of the CAV1 point clouds confirms the size reduction of CSD mutant CAV1 blobs and shows that CSD mutation reduces volume variation amongst S2 and S1B CAV1 blobs at increasing shrink values, which may reflect retraction of the CAV1 N-terminus towards the membrane, potentially preventing accessibility of the CSD. Detection of point mutation-induced changes to CAV1 domains highlights the utility of SMLM network analysis for mesoscale structural analysis of oligomers in their native environment.


Subjects
Caveolin 1/chemistry; Protein Domains/genetics; Cell Line; Humans; Mutation; Protein Conformation
9.
IEEE Trans Med Imaging ; 40(6): 1555-1567, 2021 06.
Article in English | MEDLINE | ID: mdl-33606626

ABSTRACT

Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. On the other hand, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects the organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network to estimate the kidney volume that skips the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen-Dice coefficient.' We accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database to validate our method. Our method produces a kidney boundary wall localization error of ~2.4 mm and a mean volume estimation error of ~5%.
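The paper's exact volume-error approximation is not given in the abstract, but a generic set-identity bound links the two metrics and can be checked numerically. This is only an illustrative relation, not necessarily the authors' expression:

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice coefficient of two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_volume_error(a, b):
    """Absolute volume difference normalized by the mean volume."""
    return abs(int(a.sum()) - int(b.sum())) / ((a.sum() + b.sum()) / 2.0)

# Generic identity: ||A| - |B|| <= |A| + |B| - 2|A ∩ B|, hence the
# relative volume error is bounded by 2 * (1 - DSC). Illustrative only;
# the paper's approximation may be tighter or differently normalized.
rng = np.random.default_rng(0)
a = rng.random((20, 20)) > 0.5
b = rng.random((20, 20)) > 0.4
assert relative_volume_error(a, b) <= 2.0 * (1.0 - dice(a, b)) + 1e-12
```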


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Cross-Sectional Studies; Humans; Kidney/diagnostic imaging; Tomography, X-Ray Computed
10.
Sci Rep ; 10(1): 20937, 2020 12 01.
Article in English | MEDLINE | ID: mdl-33262363

ABSTRACT

The endoplasmic reticulum (ER) is a complex subcellular organelle composed of diverse structures such as tubules, sheets and tubular matrices. Flaviviruses such as Zika virus (ZIKV) induce reorganization of ER membranes to facilitate viral replication. Here, using 3D super resolution microscopy, ZIKV infection is shown to induce the formation of dense tubular matrices associated with viral replication in the central ER. Viral non-structural proteins NS4B and NS2B associate with replication complexes within the ZIKV-induced tubular matrix and exhibit distinct ER distributions outside this central ER region. Deep neural networks trained to distinguish ZIKV-infected versus mock-infected cells successfully identified ZIKV-induced central ER tubular matrices as a determinant of viral infection. Super resolution microscopy and deep learning are therefore able to identify and localize morphological features of the ER and allow for better understanding of how ER morphology changes due to viral infection.


Subjects
Deep Learning; Endoplasmic Reticulum/metabolism; Microscopy/methods; Zika Virus/physiology; Brain/pathology; Brain/virology; Cell Line, Tumor; Endoplasmic Reticulum/ultrastructure; Extracellular Matrix/metabolism; Humans; Organoids/metabolism; Organoids/ultrastructure; Organoids/virology; RNA, Double-Stranded/metabolism; Viral Nonstructural Proteins/metabolism; Zika Virus/ultrastructure; Zika Virus Infection/virology
11.
Patterns (N Y) ; 1(3): 100038, 2020 Jun 12.
Article in English | MEDLINE | ID: mdl-33205106

ABSTRACT

Single-molecule localization microscopy (SMLM) is a relatively new imaging modality, recognized by the 2014 Nobel Prize in Chemistry and considered one of the key super-resolution techniques. SMLM resolution goes beyond the diffraction limit of light microscopy, achieving resolution on the order of 10-20 nm. SMLM thus enables imaging single molecules and studying low-level molecular interactions at the subcellular level. In contrast to standard microscopy imaging that produces 2D pixel or 3D voxel grid data, SMLM generates big data of 2D or 3D point clouds with millions of localizations and associated uncertainties. This unprecedented breakthrough in imaging helps researchers employ SMLM in many fields within biology and medicine, such as studying cancerous cells and cell-mediated immunity and accelerating drug discovery. However, SMLM data quantification and interpretation methods have yet to keep pace with the rapid advancement of SMLM imaging. Researchers have been actively exploring new computational methods for SMLM data analysis to extract biosignatures of various biological structures and functions. In this survey, we describe the state-of-the-art clustering methods adopted to analyze and quantify SMLM data and examine the capabilities and shortcomings of the surveyed methods. We classify the methods according to (1) the biological application (i.e., the imaged molecules/structures), (2) the data acquisition (such as imaging modality, dimension, resolution, and number of localizations), and (3) the analysis details (2D versus 3D, field of view versus region of interest, use of machine learning and multi-scale analysis, biosignature extraction, etc.). We observe that the majority of methods that are based on second-order statistics are sensitive to noise and imaging artifacts, have not been applied to 3D data, do not leverage machine-learning formulations, and are not scalable for big-data analysis. Finally, we summarize state-of-the-art methodology, discuss some key open challenges, and identify future opportunities for better modeling and design of an integrated computational pipeline to address the key challenges.
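As an example of the second-order statistics this survey examines, a naive Ripley's K estimate for a 2D point pattern can be sketched as follows (no edge correction, so values near borders are underestimated; purely illustrative):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K for a 2D point pattern: the mean number of
    neighbours within radius r of a point, divided by the intensity.
    Under complete spatial randomness (CSR) K(r) ≈ pi * r^2; clustered
    patterns exceed that value. No edge correction is applied here."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = (d < r).sum() - n              # drop self-pairs
    intensity = n / area
    return neighbours / (n * intensity)

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))                     # CSR in the unit square
ratio = ripley_k(pts, 0.05, 1.0) / (np.pi * 0.05 ** 2)
print(round(ratio, 2))                          # close to 1 under CSR
```

Because the statistic is a global average over all pairwise distances, a handful of spurious localizations shifts it directly, which illustrates the noise sensitivity the survey reports.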

12.
Tomography ; 6(2): 65-76, 2020 06.
Article in English | MEDLINE | ID: mdl-32548282

ABSTRACT

Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.


Subjects
Fluorodeoxyglucose F18; Head and Neck Neoplasms; Positron-Emission Tomography; Bayes Theorem; Biomarkers, Tumor; Head and Neck Neoplasms/diagnostic imaging; Humans; Reproducibility of Results; Tomography, X-Ray Computed
13.
J Neural Eng ; 17(2): 021002, 2020 04 30.
Article in English | MEDLINE | ID: mdl-32191935

ABSTRACT

Primary brain tumors including gliomas continue to pose significant management challenges to clinicians. While the presentation, the pathology, and the clinical course of these lesions are variable, the initial investigations are usually similar. Patients who are suspected to have a brain tumor will be assessed with computed tomography (CT) and magnetic resonance imaging (MRI). The imaging findings are used by neurosurgeons to determine the feasibility of surgical resection and plan such an undertaking. Imaging studies are also an indispensable tool in tracking tumor progression or its response to treatment. As these imaging studies are non-invasive, relatively cheap and accessible to patients, there have been many efforts over the past two decades to increase the amount of clinically-relevant information that can be extracted from brain imaging. Most recently, artificial intelligence (AI) techniques have been employed to segment and characterize brain tumors, as well as to detect progression or treatment-response. However, the clinical utility of such endeavours remains limited due to challenges in data collection and annotation, model training, and the reliability of AI-generated information. We provide a review of recent advances in addressing the above challenges. First, to overcome the challenge of data paucity, different image imputation and synthesis techniques along with annotation collection efforts are summarized. Next, various training strategies are presented to meet multiple desiderata, such as model performance, generalization ability, data privacy protection, and learning with sparse annotations. Finally, standardized performance evaluation and model interpretability methods have been reviewed. We believe that these technical approaches will facilitate the development of a fully-functional AI tool in the clinical care of patients with gliomas.


Subjects
Artificial Intelligence; Glioma; Glioma/diagnostic imaging; Glioma/pathology; Glioma/surgery; Humans; Magnetic Resonance Imaging; Reproducibility of Results; Tomography, X-Ray Computed
14.
IEEE Trans Med Imaging ; 39(6): 1942-1956, 2020 06.
Article in English | MEDLINE | ID: mdl-31880546

ABSTRACT

Single molecule localization microscopy (SMLM) allows unprecedented insight into the three-dimensional organization of proteins at the nanometer scale. The combination of minimally invasive cell imaging with high resolution positions SMLM at the forefront of scientific discovery in cancer, infectious, and degenerative diseases. By stochastic temporal and spatial separation of light emissions from fluorescently labelled proteins, SMLM is capable of nanometer-scale reconstruction of cellular structures. Precise localization of proteins in 3D astigmatic SMLM depends on parameter-sensitive preprocessing steps to select regions of interest. With SMLM acquisition highly variable over time, it is non-trivial to find an optimal static parameter configuration. The high emitter density required for reconstruction of complex protein structures can compromise accuracy and introduce artifacts. To address these problems, we introduce two modular auto-tuning pre-processing methods: adaptive signal detection and learned recurrent signal density estimation, which leverage the information stored in the sequence of frames that compose the SMLM acquisition process. We show empirically that our contributions improve accuracy, precision, and recall with respect to the state of the art. Both modules auto-tune their hyper-parameters to reduce the parameter space for practitioners, improve robustness and reproducibility, and are validated on a reference in silico dataset. Adaptive signal detection and density prediction can offer a practitioner, in addition to informed localization, a tool to tune acquisition parameters, ensuring improved reconstruction of the underlying protein complex. We illustrate the challenges practitioners face in applying SMLM algorithms to real-world data markedly different from the data used in development, and show how ERGO can be run on new datasets without retraining, motivating the need for robust transfer learning in SMLM.
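A toy version of per-frame adaptive signal detection can illustrate the "adaptive" idea — the paper's modules are learned and far more sophisticated; everything below (function name, threshold rule) is an assumption for illustration:

```python
import numpy as np

def adaptive_spot_detect(frame, k=6.0):
    """Toy adaptive signal detection for a single SMLM frame: the
    detection threshold tracks each frame's own background statistics
    (median + k robust sigmas via the MAD), so it adapts as background
    and emitter density drift over the acquisition, instead of relying
    on one static, hand-tuned threshold."""
    med = np.median(frame)
    sigma = 1.4826 * np.median(np.abs(frame - med))   # MAD -> sigma
    ys, xs = np.nonzero(frame > med + k * sigma)
    return list(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(2)
frame = rng.normal(100.0, 2.0, size=(64, 64))         # camera background
frame[10, 20] += 50.0                                 # one bright emitter
print(adaptive_spot_detect(frame))                    # → [(10, 20)]
```

Because the threshold is recomputed per frame, the same code keeps working when the background level or gain drifts mid-acquisition — the failure mode of a static configuration noted in the abstract.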


Subjects
Microscopy; Single Molecule Imaging; Algorithms; Artifacts; Reproducibility of Results
15.
PLoS One ; 14(8): e0211659, 2019.
Article in English | MEDLINE | ID: mdl-31449531

ABSTRACT

Caveolae are plasma membrane invaginations whose formation requires caveolin-1 (Cav1) and the adaptor protein polymerase I and transcript release factor (PTRF or CAVIN1). Caveolae have an important role in cell functioning, signaling, and disease. In the absence of CAVIN1/PTRF, Cav1 forms non-caveolar membrane domains called scaffolds. In this work, we train machine learning models to automatically distinguish between caveolae and scaffolds from single molecule localization microscopy (SMLM) data. To our knowledge, this is the first work to leverage machine learning approaches (including deep learning models) to automatically identify biological structures from SMLM data. In particular, we develop and compare three binary classification methods to identify whether or not a given 3D cluster of Cav1 proteins is a caveola. The first uses a random forest classifier applied to 28 hand-crafted/designed features, the second uses a convolutional neural net (CNN) applied to a projection of the point clouds onto three planes, and the third uses a PointNet model, a recent development that can directly take point clouds as its input. We validate our methods on a dataset of super-resolution microscopy images of PC3 prostate cancer cells labeled for Cav1. Specifically, we have images from two cell populations: 10 PC3 cells and 10 CAVIN1/PTRF-transfected PC3 cells (PC3-PTRF cells) that form caveolae. We obtained a balanced set of 1714 different cellular structures. Our results show that both the random forest on hand-designed features and the deep learning approach achieve high accuracy in distinguishing the intrinsic features of caveolae and non-caveolae biological structures. More specifically, both the random forest and deep CNN classifiers achieve classification accuracy reaching 94% on our test set, while the PointNet model only reached 83% accuracy. We also discuss the pros and cons of the different approaches.
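A few descriptors of the hand-crafted kind can be sketched as follows — three illustrative features, not the paper's 28, and `cluster_features` is a hypothetical name. Features like these would then be fed to a random forest classifier:

```python
import numpy as np

def cluster_features(points):
    """Three illustrative hand-crafted descriptors of a 3D localization
    cluster: localization count, radius of gyration, and a planarity
    score from the sorted eigenvalues of the covariance matrix.
    Hollow, roughly spherical caveolae and flatter scaffolds separate
    along axes like these."""
    centered = points - points.mean(axis=0)
    rg = float(np.sqrt((centered ** 2).sum(axis=1).mean()))
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    planarity = float((evals[1] - evals[2]) / evals[0])
    return {"n_points": len(points), "radius_gyration": rg,
            "planarity": planarity}

rng = np.random.default_rng(3)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)   # hollow shell
flat = rng.normal(size=(500, 3)) * [1.0, 1.0, 0.05]       # flat disc
assert cluster_features(flat)["planarity"] > cluster_features(sphere)["planarity"]
```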


Subjects
Caveolae/metabolism; Deep Learning; Image Processing, Computer-Assisted/methods; Single Molecule Imaging; Humans; PC-3 Cells
16.
Sci Rep ; 9(1): 9888, 2019 07 08.
Article in English | MEDLINE | ID: mdl-31285524

ABSTRACT

Caveolin-1 (Cav1), the coat protein for caveolae, also forms non-caveolar Cav1 scaffolds. Single molecule Cav1 super-resolution microscopy analysis previously identified caveolae and three distinct scaffold domains: smaller S1A and S1B scaffolds and larger hemispherical S2 scaffolds. Application here of network modularity analysis of SMLM data for endogenous Cav1 labeling in HeLa cells shows that small scaffolds combine to form larger scaffolds and caveolae. We find modules within Cav1 blobs by maximizing the intra-connectivity between Cav1 molecules within a module and minimizing the inter-connectivity between Cav1 molecules across modules, which is achieved via spectral decomposition of the localizations' adjacency matrix. Features of modules are then matched with intact blobs to find the similarity between the module-blob pairs of group centers. Our results show that smaller S1A and S1B scaffolds are made up of small polygons, that S1B scaffolds correspond to S1A scaffold dimers, and that caveolae and hemispherical S2 scaffolds are complex, modular structures formed from S1B and S1A scaffolds, respectively. Polyhedral interactions of Cav1 oligomers therefore lead progressively to the formation of larger and more complex scaffold domains and the biogenesis of caveolae.
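The spectral decomposition step described corresponds to Newman's spectral modularity method. A minimal sketch on a toy graph, assuming the standard modularity matrix B = A − k kᵀ / (2m) (the paper's pipeline operates on localization adjacency matrices and is more elaborate):

```python
import numpy as np

def spectral_bisection(adj):
    """One step of Newman's spectral modularity method: split nodes by
    the sign of the leading eigenvector of the modularity matrix
    B = A - k k^T / (2m), maximizing intra-module and minimizing
    inter-module connectivity. Which side gets True is arbitrary
    (eigenvector sign ambiguity)."""
    k = adj.sum(axis=1)
    two_m = k.sum()
    b = adj - np.outer(k, k) / two_m
    evals, evecs = np.linalg.eigh(b)
    leading = evecs[:, np.argmax(evals)]
    return leading >= 0

# Two 4-node cliques joined by a single edge split cleanly in two.
a = np.zeros((8, 8))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                a[i, j] = 1
a[3, 4] = a[4, 3] = 1
labels = spectral_bisection(a)
print(labels[:4], labels[4:])     # one clique True, the other False
```

Applied recursively to each half, this yields the module hierarchy that the blob-matching step then compares against intact blobs.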


Subjects
Caveolae/metabolism , Caveolin 1/metabolism , Cell Line, Tumor , Cell Membrane/metabolism , HeLa Cells , Humans , Microscopy/methods , Single Molecule Imaging/methods
17.
Comput Med Imaging Graph ; 75: 24-33, 2019 07.
Article in English | MEDLINE | ID: mdl-31129477

ABSTRACT

Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task, as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to the recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum-learning-based loss function. Specifically, we leverage the Dice similarity coefficient to deter model parameters from being held at bad local minima, and at the same time gradually learn better model parameters by penalizing false positives/negatives using a cross-entropy term. We evaluated the proposed loss function on three datasets: whole-body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods, and that results of the competing methods can be improved when our proposed loss is used.
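A loss in the spirit described above can be written as a weighted sum of a soft Dice term and a cross-entropy term. The sketch below is a minimal NumPy illustration under assumed conventions (binary labels, a linear epoch-based schedule); the actual formulation and schedule in the paper may differ:

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss over flattened foreground probabilities."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def cross_entropy(probs, target, eps=1e-6):
    """Binary cross-entropy; penalizes false positives and false negatives."""
    p = np.clip(probs, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def curriculum_loss(probs, target, epoch, total_epochs):
    """Hypothetical schedule: start Dice-dominated (robust to class
    imbalance, steers away from bad local minima), then gradually
    shift weight to the cross-entropy term."""
    alpha = epoch / total_epochs      # 0 -> 1 over training
    return (1 - alpha) * dice_loss(probs, target) + alpha * cross_entropy(probs, target)
```

With a perfect prediction the loss is near zero at any point in the schedule, while early in training a heavily imbalanced but slightly wrong prediction is penalized mostly through the overlap (Dice) term rather than per-voxel cross-entropy.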


Subjects
Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Curriculum , Deep Learning , Education, Medical , Electrocardiography , Humans , Neural Networks, Computer , Positron-Emission Tomography , Tomography, X-Ray Computed , Ultrasonography
18.
Bioinformatics ; 35(18): 3468-3475, 2019 09 15.
Article in English | MEDLINE | ID: mdl-30759191

ABSTRACT

MOTIVATION: Network analysis and unsupervised machine learning processing of single-molecule localization microscopy of caveolin-1 (Cav1) antibody labeling of prostate cancer cells identified biosignatures and structures for caveolae and three distinct non-caveolar scaffolds (S1A, S1B and S2). To obtain further insight into low-level molecular interactions within these different structural domains, we now introduce graphlet decomposition over a range of proximity thresholds and show that the frequency of different subgraph (k = 4 nodes) patterns effectively distinguishes caveolae and scaffold blobs for machine learning approaches (classification, identification, automatic labeling, etc.). RESULTS: Caveolae formation requires both Cav1 and the adaptor protein CAVIN1 (also called PTRF). As a supervised learning approach, we applied a wide-field CAVIN1/PTRF mask to CAVIN1/PTRF-transfected PC3 prostate cancer cells and used the random forest classifier to classify blobs based on their graphlet frequency distribution (GFD). GFD of CAVIN1/PTRF-positive (PTRF+) and -negative Cav1 clusters showed poor classification accuracy that was significantly improved by stratifying the PTRF+ clusters by either number of localizations or volume. Low classification accuracy (<50%) between large PTRF+ clusters and caveolae blobs identified by unsupervised learning suggests that the GFD of large PTRF+ clusters is specific to caveolae. High classification accuracy for small PTRF+ clusters and caveolae blobs argues that CAVIN1/PTRF associates not only with caveolae but also with non-caveolar scaffolds. At low proximity thresholds (50-100 nm), the caveolae groups showed reduced frequency of highly connected graphlets and increased frequency of completely disconnected graphlets. GFD analysis of single-molecule localization microscopy Cav1 clusters defines changes in structural organization in caveolae and scaffolds independent of association with CAVIN1/PTRF.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
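The idea of a 4-node subgraph frequency profile can be illustrated with a deliberately simplified sketch: enumerate every 4-node subset of a blob's proximity graph and histogram its internal edge count. A full GFD would further distinguish isomorphism classes (paths, stars, cycles, cliques); the edge-count histogram below is an assumed stand-in for exposition only:

```python
import numpy as np
from itertools import combinations

def four_node_profile(A):
    """Coarse graphlet-style profile of an adjacency matrix A: for every
    4-node subset, count its internal edges (0..6) and return the
    normalized frequency histogram. Highly connected blobs pile up
    mass near bin 6; sparse blobs near bin 0."""
    n = len(A)
    hist = np.zeros(7)
    for quad in combinations(range(n), 4):
        sub = A[np.ix_(quad, quad)]
        hist[int(sub.sum() // 2)] += 1   # undirected: each edge counted twice
    return hist / hist.sum()

# A 5-node clique is maximally connected: every 4-subset has all 6 edges.
K5 = np.ones((5, 5)) - np.eye(5)
profile = four_node_profile(K5)
```

Feeding such per-blob profiles (one histogram per proximity threshold) into a random forest is then a direct analogue of the classification setup the abstract describes.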


Subjects
Machine Learning , Caveolae , Caveolin 1 , Humans , Male , Prostatic Neoplasms , RNA-Binding Proteins
19.
IEEE J Biomed Health Inform ; 23(2): 578-585, 2019 03.
Article in English | MEDLINE | ID: mdl-29994053

ABSTRACT

The presence of certain clinical dermoscopic features within a skin lesion may indicate melanoma, and automatically detecting these features may lead to more quantitative and reproducible diagnoses. We reformulate the task of classifying clinical dermoscopic features within superpixels as a segmentation problem, and propose a fully convolutional neural network to detect clinical dermoscopic features from dermoscopy skin lesion images. Our neural network architecture uses interpolated feature maps from several intermediate network layers, and addresses imbalanced labels by minimizing a negative multilabel Dice-F1 score, where the score is computed across the minibatch for each label. Our approach ranked first place in the 2017 ISIC-ISBI Part 2 (Dermoscopic Feature Classification Task) challenge over both the provided validation and test datasets, achieving an area under the receiver operating characteristic curve of 0.895. We show how simple baseline models can outrank state-of-the-art approaches when using the official metrics of the challenge, and propose to use a fuzzy Jaccard index that ignores the empty set (i.e., masks devoid of positive pixels) when ranking models. Our results suggest that the classification of clinical dermoscopic features can be effectively approached as a segmentation problem, and that the current metrics used to rank models may not capture the efficacy of the models well. We plan to make our trained model and code publicly available.
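The key trick of the loss above is that the Dice-F1 score is pooled per label across the whole minibatch rather than per image, so labels that are empty in some images still receive a stable signal. A minimal NumPy sketch of that idea, under assumed tensor conventions (this is not the authors' implementation):

```python
import numpy as np

def batch_dice_f1_loss(probs, targets, eps=1e-6):
    """Negative soft Dice-F1, computed per label ACROSS the minibatch.
    probs, targets: (batch, labels, pixels) arrays in [0, 1].
    Summing over batch and pixels before dividing means a label that is
    absent in one image does not yield a degenerate 0/0 for that image."""
    inter = (probs * targets).sum(axis=(0, 2))                 # per label
    denom = probs.sum(axis=(0, 2)) + targets.sum(axis=(0, 2))  # per label
    dice = (2.0 * inter + eps) / (denom + eps)
    return -dice.mean()   # minimized; perfect overlap -> -1
```

In a framework with automatic differentiation the same expression, written over network outputs, is differentiable and can be minimized directly.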


Subjects
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Skin/diagnostic imaging , Databases, Factual , Humans , Melanoma/diagnostic imaging , ROC Curve , Skin Neoplasms/diagnostic imaging
20.
Comput Med Imaging Graph ; 70: 111-118, 2018 12.
Article in English | MEDLINE | ID: mdl-30340095

ABSTRACT

PET imaging captures the metabolic activity of tissues and is commonly interpreted visually by clinicians for detecting cancer, assessing tumor progression, and evaluating response to treatment. To automate these tasks, it is important to distinguish between normal active organs and activity due to abnormal tumor growth. In this paper, we propose a deep learning method to localize and detect normal active organs visible in a 3D PET scan field-of-view. Our method adapts the deep network architecture of YOLO to detect multiple organs in 2D slices and aggregates the results to produce semantically labeled 3D bounding boxes. We evaluate our method on 479 18F-FDG PET scans of 156 patients, achieving an average organ detection precision of 75-98%, a recall of 94-100%, an average bounding box centroid localization error of less than 14 mm, a wall localization error of less than 24 mm, and a mean IoU of up to 72%.
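The slice-aggregation step described above can be sketched naively: given per-slice 2D detections of one organ, the 3D box is the extent of the 2D boxes in x and y plus the spanned slice range in z. YOLO itself and the matching of detections across slices are outside this sketch, and the box format below is an assumption:

```python
def aggregate_2d_boxes(slice_boxes):
    """Naive aggregation of per-slice 2D detections of ONE organ into a
    3D bounding box. slice_boxes: dict mapping slice index z ->
    (x_min, y_min, x_max, y_max). Returns
    (x_min, y_min, z_min, x_max, y_max, z_max)."""
    zs = sorted(slice_boxes)
    xmins, ymins, xmaxs, ymaxs = zip(*(slice_boxes[z] for z in zs))
    return (min(xmins), min(ymins), zs[0], max(xmaxs), max(ymaxs), zs[-1])

# Hypothetical detections of one organ on three consecutive axial slices.
boxes = {10: (5, 6, 20, 22), 11: (4, 6, 21, 23), 12: (5, 7, 20, 22)}
box3d = aggregate_2d_boxes(boxes)
```

A production version would also handle gaps in z, merge competing detections per slice, and carry the per-slice confidence scores into the 3D box.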


Subjects
Imaging, Three-Dimensional/methods , Positron-Emission Tomography/methods , Humans , Neoplasms/diagnosis