Results 1 - 20 of 119
1.
Cell Mol Life Sci ; 79(11): 565, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284011

ABSTRACT

Mitochondria are major sources of cytotoxic reactive oxygen species (ROS), such as superoxide and hydrogen peroxide, that when uncontrolled contribute to cancer progression. Maintaining a finely tuned, healthy mitochondrial population is essential for cellular homeostasis and survival. Mitophagy, the selective elimination of mitochondria by autophagy, monitors and maintains mitochondrial health and integrity, eliminating damaged ROS-producing mitochondria. However, mechanisms underlying mitophagic control of mitochondrial homeostasis under basal conditions remain poorly understood. The E3 ubiquitin ligase Gp78 is an endoplasmic reticulum membrane protein that induces mitochondrial fission and mitophagy of depolarized mitochondria. Here, we report that CRISPR/Cas9 knockout of Gp78 in HT-1080 fibrosarcoma cells increased mitochondrial volume, elevated ROS production and rendered cells resistant to carbonyl cyanide m-chlorophenyl hydrazone (CCCP)-induced mitophagy. These effects were phenocopied by knockdown of the essential autophagy protein ATG5 in wild-type HT-1080 cells. Use of the mito-Keima mitophagy probe confirmed that Gp78 promoted both basal and damage-induced mitophagy. Application of a spot detection algorithm (SPECHT) to GFP-mRFP tandem fluorescent-tagged LC3 (tfLC3)-positive autophagosomes reported elevated autophagosomal maturation in wild-type HT-1080 cells relative to Gp78 knockout cells, predominantly in proximity to mitochondria. Mitophagy inhibition by either Gp78 knockout or ATG5 knockdown reduced mitochondrial potential and increased mitochondrial ROS. Live cell analysis of tfLC3 in HT-1080 cells showed the preferential association of autophagosomes with mitochondria of reduced potential. Xenograft tumors of HT-1080 knockout cells showed increased labeling for mitochondria and the cell proliferation marker Ki67 and reduced labeling for the TUNEL cell death reporter. Basal Gp78-dependent mitophagic flux is, therefore, selectively associated with mitochondria of reduced potential, promoting maintenance of a healthy mitochondrial population, limiting ROS production and tumor cell proliferation.


Subject(s)
Mitophagy , Superoxides , Humans , Carbonyl Cyanide m-Chlorophenyl Hydrazone/pharmacology , Reactive Oxygen Species/metabolism , Ki-67 Antigen/metabolism , Superoxides/metabolism , Hydrogen Peroxide/pharmacology , Mitochondria/metabolism , Ubiquitin-Protein Ligases/genetics , Ubiquitin-Protein Ligases/metabolism , Autophagy/genetics
2.
Skin Res Technol ; 27(6): 1128-1134, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34251055

ABSTRACT

BACKGROUND: Although many hair disorders can be readily diagnosed based on their clinical appearance, their progression and response to treatment are often difficult to monitor, particularly in quantitative terms. We introduce an innovative technique utilizing a smartphone and computerized image analysis to expeditiously and automatically measure and compute hair density and diameter in patients in real time. METHODS: A smartphone equipped with a dermatoscope lens wirelessly transmits trichoscopy images to a computer for image processing. A black-and-white binary mask image representing hair and skin is produced, and the hairs are thinned into single-pixel-thick fiber skeletons. Further analysis based on these fibers allows morphometric characteristics such as hair shaft number and diameters to be computed rapidly. The hair-bearing scalps of fifty participants were imaged to assess the precision of our automated smartphone-based device in comparison with a specialized trichometry device for hair shaft density and diameter measurement. The precision and operation time of our technique relative to manual trichometry, which is commonly used by hair disorder specialists, is determined. RESULTS: An equivalence test, based on two 1-sided t tests, demonstrates statistical equivalence in hair density and diameter values between this automated technique and manual trichometry within a 20% margin. On average, this technique actively required 24 seconds of the clinician's time whereas manual trichometry necessitated 9.2 minutes. CONCLUSION: Automated smartphone-based trichometry is a rapid, precise, and clinically feasible technique which can significantly facilitate the assessment and monitoring of hair loss. Its use could be easily integrated into clinical practice to improve standard trichoscopy.


Subject(s)
Hair Diseases , Smartphone , Alopecia , Hair , Humans , Scalp
3.
Bioinformatics ; 35(18): 3468-3475, 2019 09 15.
Article in English | MEDLINE | ID: mdl-30759191

ABSTRACT

MOTIVATION: Network analysis and unsupervised machine learning processing of single-molecule localization microscopy of caveolin-1 (Cav1) antibody labeling of prostate cancer cells identified biosignatures and structures for caveolae and three distinct non-caveolar scaffolds (S1A, S1B and S2). To obtain further insight into low-level molecular interactions within these different structural domains, we now introduce graphlet decomposition over a range of proximity thresholds and show that frequency of different subgraph (k = 4 nodes) patterns for machine learning approaches (classification, identification, automatic labeling, etc.) effectively distinguishes caveolae and scaffold blobs. RESULTS: Caveolae formation requires both Cav1 and the adaptor protein CAVIN1 (also called PTRF). As a supervised learning approach, we applied a wide-field CAVIN1/PTRF mask to CAVIN1/PTRF-transfected PC3 prostate cancer cells and used the random forest classifier to classify blobs based on graphlet frequency distribution (GFD). GFD of CAVIN1/PTRF-positive (PTRF+) and -negative Cav1 clusters showed poor classification accuracy that was significantly improved by stratifying the PTRF+ clusters by either number of localizations or volume. Low classification accuracy (<50%) of large PTRF+ clusters and caveolae blobs identified by unsupervised learning suggests that their GFD is specific to caveolae. High classification accuracy for small PTRF+ clusters and caveolae blobs argues that CAVIN1/PTRF associates not only with caveolae but also non-caveolar scaffolds. At low proximity thresholds (50-100 nm), the caveolae groups showed reduced frequency of highly connected graphlets and increased frequency of completely disconnected graphlets. GFD analysis of single-molecule localization microscopy Cav1 clusters defines changes in structural organization in caveolae and scaffolds independent of association with CAVIN1/PTRF. 
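To illustrate the graphlet idea, the sketch below builds a proximity graph over 2D points and tallies every 4-node subset by its edge count, which already separates the "completely disconnected" from the "highly connected" graphlets mentioned above. A full graphlet decomposition would further split by isomorphism type; the point set and threshold are invented for demonstration.

```python
from itertools import combinations
from math import dist

def graphlet_edge_histogram(points, threshold):
    """Histogram of edge counts over all 4-node subsets of a proximity graph.

    Two localizations are linked when their distance is below `threshold`.
    Bin 0 holds completely disconnected graphlets; bin 6 the fully
    connected K4.
    """
    hist = [0] * 7
    for quad in combinations(points, 4):
        edges = sum(1 for a, b in combinations(quad, 2) if dist(a, b) < threshold)
        hist[edges] += 1
    return hist

# Toy blob: a tight cluster of 4 points plus one distant outlier
pts = [(0, 0), (10, 0), (0, 10), (10, 10), (500, 500)]
print(graphlet_edge_histogram(pts, threshold=50))  # → [0, 0, 0, 4, 0, 0, 1]
```

The single fully connected graphlet comes from the tight cluster; the four 3-edge graphlets each include the outlier, mirroring how graphlet frequencies encode local connectivity.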
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Machine Learning , Caveolae , Caveolin 1 , Humans , Male , Prostatic Neoplasms , RNA-Binding Proteins
4.
Neuroimage ; 146: 1038-1049, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27693612

ABSTRACT

We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural network with the same number of model parameters on phantoms with both focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.
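The edge-to-edge filter can be sketched as a cross-shaped operation on the adjacency matrix: the response at edge (i, j) pools all edges sharing node i and all edges sharing node j. This is a minimal single-filter sketch with illustrative weights; the actual BrainNetCNN layers stack many such filters with learned weights and nonlinearities.

```python
import numpy as np

def edge_to_edge(A, w_row, w_col):
    """One edge-to-edge filter (sketch).

    For each edge (i, j) of adjacency matrix A, the response combines all
    edges in row i weighted by w_row and all edges in column j weighted by
    w_col -- a cross-shaped filter respecting network topology rather than
    spatial pixel neighborhoods.
    """
    row_part = (A * w_row[np.newaxis, :]).sum(axis=1, keepdims=True)  # (n, 1)
    col_part = (A * w_col[:, np.newaxis]).sum(axis=0, keepdims=True)  # (1, n)
    return row_part + col_part  # broadcasts to (n, n)

n = 4
A = np.arange(n * n, dtype=float).reshape(n, n)
out = edge_to_edge(A, w_row=np.ones(n), w_col=np.ones(n))
# With unit weights, entry (i, j) is (sum of row i) + (sum of column j)
assert out[1, 2] == A[1].sum() + A[:, 2].sum()
```

The same cross-shaped pooling with a final sum over rows (or columns) yields the edge-to-node filter, and a further sum the node-to-graph filter.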


Subject(s)
Brain Mapping/methods , Brain/diagnostic imaging , Neural Networks, Computer , Neurodevelopmental Disorders/diagnostic imaging , Brain/pathology , Diffusion Tensor Imaging , Female , Humans , Infant, Newborn , Infant, Premature , Infant, Premature, Diseases , Male , Neural Pathways/diagnostic imaging , Neural Pathways/pathology , Neurodevelopmental Disorders/pathology
5.
Neuroimage ; 125: 705-723, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26515903

ABSTRACT

We introduce the STEAM DTI analysis engine: a whole brain voxel-based analysis technique for the examination of diffusion tensor images (DTIs). Our STEAM analysis technique consists of two parts. First, we introduce a collection of statistical templates that represent the distribution of DTIs for a normative population. These templates include various diffusion measures from the full tensor, to fractional anisotropy, to 12 other tensor features. Second, we propose a voxel-based analysis (VBA) pipeline that is reliable enough to identify areas in individual DTI scans that differ significantly from the normative group represented in the STEAM statistical templates. We identify and justify choices in the VBA pipeline relating to multiple comparison correction, image smoothing, and dealing with non-normally distributed data. Finally, we provide a proof of concept for the utility of STEAM on a cohort of 134 very preterm infants. We generated templates from scans of 55 very preterm infants whose T1 MRI scans show no abnormalities and who have normal neurodevelopmental outcome. The remaining 79 infants were then compared to the templates using our VBA technique. We show: (a) that our statistical templates display the white matter development expected over the modeled time period, and (b) that our VBA results detect abnormalities in the diffusion measurements that correlate significantly with both the presence of white matter lesions and with neurodevelopmental outcomes at 18 months. Most notably, we show that STEAM produces personalized results while also being able to highlight abnormalities across the whole brain and at the scale of individual voxels. While we show the value of STEAM on DTI scans from a preterm infant cohort, STEAM can be applied to other cohorts as well. To facilitate this whole-brain personalized DTI analysis, we made STEAM publicly available at http://www.sfu.ca/bgb2/steam.
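The core VBA step, comparing an individual scan against a normative template with multiple-comparison correction, can be sketched as below. The Bonferroni correction and synthetic data are illustrative assumptions; STEAM's actual pipeline also addresses smoothing and non-normally distributed data.

```python
import numpy as np
from statistics import NormalDist

def flag_abnormal_voxels(scan, template_mean, template_std, alpha=0.05):
    """Flag voxels of one DTI-derived scalar map that deviate from a
    normative template.  Each voxel becomes a z-score against the template
    distribution; Bonferroni correction over the voxel count guards against
    whole-brain multiple comparisons."""
    z = (scan - template_mean) / template_std
    n_voxels = scan.size
    # two-sided critical z after Bonferroni correction
    z_crit = NormalDist().inv_cdf(1.0 - (alpha / n_voxels) / 2.0)
    return np.abs(z) > z_crit

rng = np.random.default_rng(0)
mean_map = np.zeros((8, 8, 8))   # toy normative template
std_map = np.ones((8, 8, 8))
scan = rng.normal(0.0, 1.0, size=(8, 8, 8))
scan[4, 4, 4] = 8.0              # simulated focal abnormality
flags = flag_abnormal_voxels(scan, mean_map, std_map)
print(flags.sum(), flags[4, 4, 4])
```

The implanted abnormality survives the corrected threshold while typical normative variation does not, which is the personalized, voxel-scale detection the abstract describes.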


Subject(s)
Brain Mapping/methods , Brain/abnormalities , Infant, Premature , Neonatal Screening/methods , White Matter/abnormalities , Diffusion Tensor Imaging/methods , Female , Humans , Image Interpretation, Computer-Assisted/methods , Infant, Newborn , Male
6.
Neuroimage ; 101: 667-80, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-25076107

ABSTRACT

Preterm infants develop differently than those born at term and are at higher risk of brain pathology. Thus, an understanding of their development is of particular importance. Diffusion tensor imaging (DTI) of preterm infants offers a window into brain development at a very early age, an age at which that development is not yet fully understood. Recent works have used network analysis to study the structural connectome constructed from DTI brain scans. These studies have shown that, even from infancy, the brain exhibits small-world properties. Here we examine a cohort of 47 normal preterm neonates (i.e., without brain injury and with normal neurodevelopment at 18 months of age) scanned between 27 and 45 weeks post-menstrual age to further the understanding of how the structural connectome develops. We use full-brain tractography to find white matter tracts between the 90 cortical and sub-cortical regions defined in the University of North Carolina Chapel Hill neonatal atlas. We then analyze the resulting connectomes and explore the differences between weighting edges by tract count versus fractional anisotropy. We observe that the brain networks in preterm infants, much like infants born at term, show high efficiency and clustering measures across a range of network scales. Further, the development of many individual region-pair connections, particularly in the frontal and occipital lobes, is significantly correlated with age. Finally, we observe that the preterm infant connectome remains highly efficient yet becomes more clustered across this age range, leading to a significant increase in its small-world structure.
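The two graph measures underpinning the small-world characterization above, clustering coefficient and global efficiency, can be computed directly on an unweighted connectome. This is a minimal sketch on a toy adjacency structure; real analyses run on the 90-region atlas networks with edge weights.

```python
from itertools import combinations

def clustering_coefficient(adj):
    """Mean local clustering coefficient of an undirected graph
    (adjacency as {node: set of neighbors})."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def global_efficiency(adj):
    """Average inverse shortest-path length over all node pairs (BFS)."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        for t in nodes:
            if t != s:
                total += 1.0 / dist[t] if t in dist else 0.0
                pairs += 1
    return total / pairs

# A 4-cycle with one chord: moderately clustered and efficient
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(round(clustering_coefficient(adj), 3), round(global_efficiency(adj), 3))
# → 0.833 0.917
```

Small-world networks combine high clustering (like lattices) with high efficiency (like random graphs), which is exactly the trend reported across this age range.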


Subject(s)
Brain/anatomy & histology , Diffusion Tensor Imaging/methods , Nerve Net/anatomy & histology , Brain/growth & development , Connectome , Female , Gestational Age , Humans , Infant, Newborn , Infant, Premature , Male , Nerve Net/growth & development
7.
Sensors (Basel) ; 14(6): 9429-50, 2014 May 27.
Article in English | MEDLINE | ID: mdl-24871987

ABSTRACT

Arterial motion estimation in ultrasound (US) sequences is a hard task due to noise and discontinuities in the signal derived from US artifacts. Characterizing the mechanical properties of the artery is a promising novel imaging technique to diagnose various cardiovascular pathologies and a new way of obtaining relevant clinical information, such as determining the absence of the dicrotic peak, estimating the Augmentation Index (AIx), the arterial pressure or the arterial stiffness. Advantages of US imaging include its non-invasive nature, unlike invasive techniques such as intravascular ultrasound (IVUS) or angiography, and the relatively low cost of US units. In this paper, we propose a semi-rigid deformable method based on soft-body dynamics, realized by a hybrid motion approach combining cross-correlation and optical flow methods, to quantify the elasticity of the artery. We evaluate and compare different techniques (for instance, optical flow methods) on which our approach is based. The goal of this comparative study is to identify the best model to be used and the impact of the accuracy of these different stages in the proposed method. To this end, an exhaustive assessment has been conducted in order to decide which model is the most appropriate for registering the variation of the arterial diameter over time. Our experiments involved a total of 1620 evaluations within nine simulated sequences of 84 frames each and the estimation of four error metrics. We conclude that our proposed approach obtains approximately 2.5 times higher accuracy than conventional state-of-the-art techniques.
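The cross-correlation half of such a hybrid motion approach can be sketched as an integer-shift search maximizing normalized cross-correlation between two frames' intensity profiles. The signal and shift are synthetic; a real pipeline adds sub-pixel interpolation and optical-flow regularization.

```python
import numpy as np

def ncc_displacement(ref, cur, max_shift):
    """Estimate the 1-D shift of `cur` relative to `ref` (e.g., an arterial
    wall edge profile across frames) by maximizing normalized
    cross-correlation over integer shifts."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        rolled = np.roll(cur, -s)
        a = ref - ref.mean()
        b = rolled - rolled.mean()
        score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

x = np.linspace(0, 2 * np.pi, 200)
profile = np.exp(-((x - 3.0) ** 2) / 0.1)  # a sharp wall-like feature
shifted = np.roll(profile, 7)              # simulated wall motion between frames
print(ncc_displacement(profile, shifted, max_shift=15))  # → 7
```

Tracking this displacement frame by frame registers the arterial diameter variation over the cardiac cycle, from which elasticity measures are derived.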


Subject(s)
Arteries/diagnostic imaging , Arteries/physiology , Movement/physiology , Signal Processing, Computer-Assisted , Vascular Stiffness/physiology , Algorithms , Computer Simulation , Elasticity Imaging Techniques , Humans
8.
IEEE Trans Med Imaging ; PP, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717881

ABSTRACT

Deep learning models have achieved remarkable success in medical image classification. These models are typically trained once on the available annotated images and thus lack the ability of continually learning new tasks (i.e., new classes or data distributions) due to the problem of catastrophic forgetting. Recently, there has been more interest in designing continual learning methods to learn different tasks presented sequentially over time while preserving previously acquired knowledge. However, these methods focus mainly on preventing catastrophic forgetting and are tested under a closed-world assumption; i.e., assuming the test data is drawn from the same distribution as the training data. In this work, we advance the state-of-the-art in continual learning by proposing GC2 for medical image classification, which learns a sequence of tasks while simultaneously enhancing its out-of-distribution robustness. To alleviate forgetting, GC2 employs a gradual culpability-based network pruning to identify an optimal subnetwork for each task. To improve generalization, GC2 incorporates adversarial image augmentation and knowledge distillation approaches for learning generalized and robust representations for each subnetwork. Our extensive experiments on multiple benchmarks in a task-agnostic inference setting demonstrate that GC2 significantly outperforms baselines and other continual learning methods in reducing forgetting and enhancing generalization. Our code is publicly available at the following link: https://github.com/nourhanb/TMI2024-GC2.
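The subnetwork-selection mechanics can be sketched with a pruning mask. Note the swap: GC2's actual criterion is gradual and culpability-based (ranking weights by their contribution to the task loss); plain magnitude ranking is used here only to show how a per-task mask is formed and could later be frozen.

```python
import numpy as np

def prune_subnetwork(weights, keep_fraction):
    """Select a task-specific subnetwork via a binary mask (sketch).

    Keeps the `keep_fraction` largest-magnitude weights; the mask can be
    frozen for the current task while later tasks train the remaining
    capacity, alleviating catastrophic forgetting."""
    flat = np.abs(weights).ravel()
    k = int(round(keep_fraction * flat.size))
    threshold = np.sort(flat)[-k]  # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

w = np.array([[0.9, -0.1], [0.05, -0.8]])
mask = prune_subnetwork(w, keep_fraction=0.5)
print(mask)  # keeps only the two largest-magnitude weights
```

Applying `w * mask` at inference recovers the task's subnetwork; the masked-out weights stay available for subsequent tasks.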

9.
Artif Intell Med ; 148: 102751, 2024 02.
Article in English | MEDLINE | ID: mdl-38325929

ABSTRACT

Clinical evaluation evidence and model explainability are key gatekeepers to ensure the safe, accountable, and effective use of artificial intelligence (AI) in clinical settings. We conducted a clinical user-centered evaluation with 35 neurosurgeons to assess the utility of AI assistance and its explanation on the glioma grading task. Each participant read 25 brain MRI scans of patients with gliomas, and gave their judgment on the glioma grading without and with the assistance of AI prediction and explanation. The AI model was trained on the BraTS dataset with 88.0% accuracy. The AI explanation was generated using the explainable AI algorithm of SmoothGrad, which was selected from 16 algorithms based on the criterion of being truthful to the AI decision process. Results showed that compared to the average accuracy of 82.5±8.7% when physicians performed the task alone, physicians' task performance increased to 87.7±7.3% with statistical significance (p-value = 0.002) when assisted by AI prediction, and remained at almost the same level of 88.5±7.0% (p-value = 0.35) with the additional assistance of AI explanation. Based on quantitative and qualitative results, the observed improvement in physicians' task performance assisted by AI prediction was mainly because physicians' decision patterns converged to be similar to AI, as physicians only switched their decisions when disagreeing with AI. The insignificant change in physicians' performance with the additional assistance of AI explanation was because the AI explanations did not provide explicit reasons, contexts, or descriptions of clinical features to help doctors discern potentially incorrect AI predictions. The evaluation showed the clinical utility of AI to assist physicians on the glioma grading task, and identified the limitations and clinical usage gaps of existing explainable AI techniques for future improvement.
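The SmoothGrad explanation used in this study averages input gradients over Gaussian-perturbed copies of the input. The sketch below replaces network autodiff with central finite differences on a toy scoring function; the function, noise level, and sample count are illustrative assumptions.

```python
import numpy as np

def smoothgrad(f, x, n_samples=50, noise_std=0.1, seed=0):
    """SmoothGrad saliency (sketch): average the input gradient of score
    function `f` over Gaussian-perturbed copies of `x`, suppressing
    gradient noise.  Gradients here are central finite differences; a real
    pipeline differentiates the network logit of the predicted class."""
    rng = np.random.default_rng(seed)
    eps = 1e-4
    grads = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        g = np.zeros_like(x, dtype=float)
        for i in range(x.size):
            e = np.zeros_like(x, dtype=float)
            e.flat[i] = eps
            g.flat[i] = (f(noisy + e) - f(noisy - e)) / (2 * eps)
        grads += g
    return grads / n_samples

# Toy "model": the score depends strongly on feature 0, weakly on feature 1
f = lambda v: 3.0 * v.flat[0] + 0.2 * v.flat[1] ** 2
saliency = smoothgrad(f, np.array([1.0, 1.0]))
print(abs(saliency[0]) > abs(saliency[1]))  # → True
```

The averaged map highlights the dominant input, which is the kind of heatmap the neurosurgeons saw alongside the AI's grading prediction.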


Subject(s)
Artificial Intelligence , Glioma , Humans , Algorithms , Brain , Glioma/diagnostic imaging , Neurosurgeons
10.
J Big Data ; 11(1): 43, 2024.
Article in English | MEDLINE | ID: mdl-38528850

ABSTRACT

Modern deep learning training procedures rely on model regularization techniques such as data augmentation methods, which generate training samples that increase the diversity of data and richness of label information. A popular recent method, mixup, uses convex combinations of pairs of original samples to generate new samples. However, as we show in our experiments, mixup can produce undesirable synthetic samples, where the data is sampled off the manifold and can contain incorrect labels. We propose ζ-mixup, a generalization of mixup with provably and demonstrably desirable properties that allows convex combinations of T≥2 samples, leading to more realistic and diverse outputs that incorporate information from T original samples by using a p-series interpolant. We show that, compared to mixup, ζ-mixup better preserves the intrinsic dimensionality of the original datasets, which is a desirable property for training generalizable models. Furthermore, we show that our implementation of ζ-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 26 diverse real-world natural and medical image classification datasets shows that ζ-mixup outperforms mixup, CutMix, and traditional data augmentation techniques. The code will be released at https://github.com/kakumarabhishek/zeta-mixup.
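The p-series interpolant can be sketched as follows: weights for a T-way convex combination decay as a power of rank, so one sample dominates and the synthetic point stays near the data manifold. The exponent value and the random-permutation step are this sketch's assumptions about the formulation, not the paper's exact recipe.

```python
import numpy as np

def zeta_mixup_weights(T, gamma=2.4, seed=0):
    """Per-sample weights for a T-way convex combination: weight of the
    i-th sample is proportional to i**(-gamma) under a random ordering,
    normalized to sum to 1 (a truncated p-series, hence the zeta name)."""
    rng = np.random.default_rng(seed)
    ranks = rng.permutation(T) + 1       # random ordering 1..T
    w = ranks.astype(float) ** (-gamma)
    return w / w.sum()                   # normalize to a convex combination

w = zeta_mixup_weights(T=4)
print(w.sum().round(6), (w > 0).all())  # → 1.0 True

# Mixing T samples (and, analogously, their one-hot labels):
X = np.arange(12, dtype=float).reshape(4, 3)  # 4 samples, 3 features
x_new = (w[:, None] * X).sum(axis=0)
```

Because the largest weight dominates, the synthesized sample resembles one real sample with small contributions from the others, unlike pairwise mixup's potentially off-manifold midpoints.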

11.
Comput Biol Med ; 178: 108676, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38878395

ABSTRACT

Novel portable diffuse optical tomography (DOT) devices for breast cancer lesions hold great promise for non-invasive, non-ionizing breast cancer screening. Critical to this capability is not just the identification of lesions but rather the complex problem of discriminating between malignant and benign lesions. To accurately reconstruct the highly heterogeneous tissue of a cancer lesion in healthy breast tissue using DOT, multiple wavelengths can be leveraged to maximize signal penetration while minimizing sensitivity to noise. However, these wavelength responses can overlap, capture common information, and correlate, potentially confounding reconstruction and downstream end tasks. We show that an orthogonal fusion loss regularizes multi-wavelength DOT leading to improved reconstruction and accuracy of end-to-end discrimination of malignant versus benign lesions. We further show that our raw-to-task model significantly reduces computational complexity without sacrificing accuracy, making it ideal for real-time throughput, desired in medical settings where handheld devices have severely restricted power budgets. Furthermore, our results indicate that image reconstruction is not necessary for unbiased classification of lesions with a balanced accuracy of 77% and 66% on the synthetic dataset and clinical dataset, respectively, using the raw-to-task model. Code is available at https://github.com/sfu-mial/FuseNet.

12.
Z Med Phys ; 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38302292

ABSTRACT

In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners to improve performance in the presence of CT-related artifacts. A known characteristic of PET imaging is to have varying tracer uptakes for various patients and/or anatomical regions. However, existing deep learning-based algorithms utilize a fixed model across different subjects and/or anatomical regions during inference, which could result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation and scatter-corrected PET from non-attenuation-corrected images in the absence of structural information in the inference. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model to perform subject- and region-specific filtering through modulating the convolution kernels in accordance with the contextual coherency within the neighboring slices. This way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation purposes, which is more than one order of magnitude larger than previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground truth images corrected with the aid of CT scans. For quantitative assessments, we evaluated our proposed method over 112 held-out subjects and achieved an absolute relative error of 14.30±3.88% and a relative error of -2.11%±2.73% over the whole body.

13.
J Cell Biol ; 223(8), 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-38865088

ABSTRACT

Super-resolution microscopy, or nanoscopy, enables the use of fluorescent-based molecular localization tools to study molecular structure at the nanoscale level in the intact cell, bridging the mesoscale gap to classical structural biology methodologies. Analysis of super-resolution data by artificial intelligence (AI), such as machine learning, offers tremendous potential for the discovery of new biology, that, by definition, is not known and lacks ground truth. Herein, we describe the application of weakly supervised paradigms to super-resolution microscopy and its potential to enable the accelerated exploration of the nanoscale architecture of subcellular macromolecules and organelles.


Subject(s)
Artificial Intelligence , Microscopy , Animals , Humans , Image Processing, Computer-Assisted/methods , Machine Learning , Microscopy/methods , Microscopy, Fluorescence/methods
14.
Med Image Anal ; 95: 103145, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38615432

ABSTRACT

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.


Subject(s)
Skin Diseases , Humans , Skin Diseases/diagnostic imaging , Imaging, Three-Dimensional/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods
15.
Med Image Anal ; 84: 102684, 2023 02.
Article in English | MEDLINE | ID: mdl-36516555

ABSTRACT

Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.


Subject(s)
Artificial Intelligence , Benchmarking , Humans , Clinical Relevance , Evidence Gaps
16.
MethodsX ; 10: 102009, 2023.
Article in English | MEDLINE | ID: mdl-36793676

ABSTRACT

Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly-used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, including two categories of gradient- and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLift, utilize the gradient signal to estimate the feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, utilize input-output sampling pairs to estimate the feature importance.
• We describe the implementation details on how to make the methods work for multi-modal image input, and make the implementation code available.
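Of the two categories above, the perturbation-based family is the simplest to sketch: occlusion slides a baseline patch over the input and records the drop in the model's score. The toy model and patch size below are illustrative assumptions.

```python
import numpy as np

def occlusion_map(model, image, patch=2, baseline=0.0):
    """Perturbation-based attribution by occlusion: replace each patch of
    the input with a baseline value and record the drop in the model's
    score.  Large drops mark regions the prediction depends on."""
    h, w = image.shape
    ref_score = model(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i:i + patch, j:j + patch] = ref_score - model(occluded)
    return heat

# Toy "model": scores the mean intensity of the top-left quadrant only
model = lambda img: float(img[:4, :4].mean())
image = np.ones((8, 8))
heat = occlusion_map(model, image)
print(heat[:4, :4].max() > 0, abs(heat[4:, 4:]).max() == 0)  # → True True
```

For multi-modal input, the same loop runs per modality channel, yielding one attribution map per modality of the same underlying region of interest.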

17.
Bioinform Adv ; 3(1): vbad068, 2023.
Article in English | MEDLINE | ID: mdl-37359728

ABSTRACT

Large-scale processing of heterogeneous datasets in interdisciplinary research often requires time-consuming manual data curation. Ambiguity in the data layout and preprocessing conventions can easily compromise reproducibility and scientific discovery, and even when detected, it requires time and effort to be corrected by domain experts. Poor data curation can also interrupt processing jobs on large computing clusters, causing frustration and delays. We introduce DataCurator, a portable software package that verifies arbitrarily complex datasets of mixed formats, working equally well on clusters as on local systems. Human-readable TOML recipes are converted into executable, machine-verifiable templates, enabling users to easily verify datasets using custom rules without writing code. Recipes can be used to transform and validate data, for pre- or post-processing, selection of data subsets, sampling and aggregation, such as summary statistics. Processing pipelines no longer need to be burdened by laborious data validation, with data curation and validation replaced by human and machine-verifiable recipes specifying rules and actions. Multithreaded execution ensures scalability on clusters, and existing Julia, R and Python libraries can be reused. DataCurator enables efficient remote workflows, offering integration with Slack and the ability to transfer curated data to clusters using OwnCloud and SCP. Code available at: https://github.com/bencardoen/DataCurator.jl.

18.
Int J Speech Technol ; 26(1): 163-184, 2023.
Article in English | MEDLINE | ID: mdl-37008883

ABSTRACT

Clearly articulated speech, relative to plain-style speech, has been shown to improve intelligibility. We examine if visible speech cues in video only can be systematically modified to enhance clear-speech visual features and improve intelligibility. We extract clear-speech visual features of English words varying in vowels produced by multiple male and female talkers. Via a frame-by-frame image-warping based video generation method with a controllable parameter (displacement factor), we apply the extracted clear-speech visual features to videos of plain speech to synthesize clear speech videos. We evaluate the generated videos using a robust, state-of-the-art AI Lip Reader as well as human intelligibility testing. The contributions of this study are: (1) we successfully extract relevant visual cues for video modifications across speech styles, and have achieved enhanced intelligibility for AI; (2) this work suggests that universal talker-independent clear-speech features may be utilized to modify any talker's visual speech style; (3) we introduce "displacement factor" as a way of systematically scaling the magnitude of displacement modifications between speech styles; and (4) the generated high-definition videos are ideal candidates for human-centric intelligibility and perceptual training studies.
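The displacement factor can be sketched as a simple scaling of landmark displacements between speech styles: 0 keeps the plain-speech configuration, 1 reproduces the extracted clear-speech configuration, and values above 1 exaggerate it. The landmark values and function name below are invented for illustration; the actual method warps whole video frames.

```python
import numpy as np

def apply_displacement_factor(plain_landmarks, clear_landmarks, factor):
    """Move plain-style facial landmarks toward (or beyond) clear-style
    positions by a controllable displacement factor, per the frame-by-frame
    warping scheme described above."""
    delta = clear_landmarks - plain_landmarks
    return plain_landmarks + factor * delta

plain = np.array([[0.0, 0.0], [1.0, 0.0]])   # e.g., mouth-corner positions
clear = np.array([[0.0, -0.2], [1.2, 0.0]])  # wider, more open configuration
warped = apply_displacement_factor(plain, clear, 0.5)
print(np.round(warped, 3).tolist())  # → [[0.0, -0.1], [1.1, 0.0]]
```

Because the factor is continuous, the same machinery supports systematically scaled stimuli for perceptual training studies.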

19.
Med Image Anal ; 88: 102863, 2023 08.
Article in English | MEDLINE | ID: mdl-37343323

ABSTRACT

Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works and from a systematic viewpoint, examining how those choices have influenced current trends and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.


Subject(s)
Deep Learning , Skin Diseases , Skin Neoplasms , Humans , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Diagnosis, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
20.
Med Image Anal ; 77: 102329, 2022 04.
Article in English | MEDLINE | ID: mdl-35144199

ABSTRACT

We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans. The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained object detection model, Faster R-CNN, localizes the lesions within the 2D domain. These detected skin lesions are mapped back to the 3D surface of the subject and, for subjects imaged multiple times, we construct a graph-based matching procedure to longitudinally track lesions, which considers the anatomical correspondences between pairs of meshes, the geodesic proximity of corresponding lesions, and the inter-lesion geodesic distances. We evaluated the proposed approach using 3DBodyTex, a publicly available dataset composed of 3D scans imaging the coloured skin (textured meshes) of 200 human subjects. We manually annotated locations that appeared to the human eye to contain a pigmented skin lesion, and tracked a subset of lesions occurring on the same subject imaged in different poses. Our results, when compared to three human annotators, suggest that the trained Faster R-CNN detects lesions at a similar performance level as the human annotators. Our lesion tracking algorithm achieves an average matching accuracy of 88% on a set of detected corresponding pairs of prominent lesions of subjects imaged in different poses, and an average longitudinal accuracy of 71% when encompassing additional errors due to lesion detection. As there currently is no other large-scale publicly available dataset of 3D total-body skin lesions, we publicly release over 25,000 3DBodyTex manual annotations, which we hope will further research on total-body skin lesion analysis.
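The matching step above can be illustrated with a much-simplified stand-in: greedy one-to-one assignment over a pairwise distance matrix. The greedy strategy, the threshold, and the function name are illustrative assumptions; the paper's actual procedure is graph-based and also uses inter-lesion geodesic distances.

```python
import numpy as np

def match_lesions(dist, max_dist=0.1):
    """Greedily match lesions between two scans by ascending distance.

    dist: (n, m) matrix of distances between lesions of scan A (rows)
    and scan B (columns), e.g. geodesic distances after mapping both
    scans onto a common template mesh. Pairs farther apart than
    max_dist (same units as dist) are left unmatched.
    Returns a list of (i, j) matched index pairs.
    """
    pairs = []
    used_i, used_j = set(), set()
    # Consider candidate pairs from closest to farthest.
    for flat in np.argsort(dist, axis=None):
        i, j = divmod(int(flat), dist.shape[1])
        if dist[i, j] > max_dist:
            break  # all remaining candidates are even farther
        if i not in used_i and j not in used_j:
            pairs.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return pairs
```

An optimal assignment (e.g., the Hungarian algorithm) would replace the greedy loop in practice; the sketch only conveys how a distance threshold and one-to-one constraints yield longitudinal lesion correspondences.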


Subject(s)
Algorithms , Whole Body Imaging , Humans , Whole Body Imaging/methods