Results 1-20 of 117
1.
IEEE Trans Med Imaging; PP, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717881

ABSTRACT

Deep learning models have achieved remarkable success in medical image classification. These models are typically trained once on the available annotated images and thus lack the ability to continually learn new tasks (i.e., new classes or data distributions) due to the problem of catastrophic forgetting. Recently, there has been growing interest in designing continual learning methods that learn different tasks presented sequentially over time while preserving previously acquired knowledge. However, these methods focus mainly on preventing catastrophic forgetting and are tested under a closed-world assumption, i.e., assuming the test data are drawn from the same distribution as the training data. In this work, we advance the state of the art in continual learning by proposing GC2 for medical image classification, which learns a sequence of tasks while simultaneously enhancing its out-of-distribution robustness. To alleviate forgetting, GC2 employs gradual culpability-based network pruning to identify an optimal subnetwork for each task. To improve generalization, GC2 incorporates adversarial image augmentation and knowledge distillation for learning generalized and robust representations for each subnetwork. Our extensive experiments on multiple benchmarks in a task-agnostic inference setting demonstrate that GC2 significantly outperforms baselines and other continual learning methods in reducing forgetting and enhancing generalization. Our code is publicly available at https://github.com/nourhanb/TMI2024-GC2.
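The per-task subnetwork idea can be pictured with a much simpler criterion than the paper's. The sketch below substitutes plain magnitude pruning for GC2's gradual culpability-based scoring; keep_ratio and all names are assumptions for illustration.

```python
import torch

def prune_task_subnetwork(model, keep_ratio=0.5):
    """Identify a sparse subnetwork by keeping the largest-magnitude
    weights of each 2D+ parameter tensor. Plain magnitude pruning is a
    stand-in for GC2's gradual culpability-based criterion; keep_ratio
    (0 < keep_ratio < 1) is an assumed hyperparameter."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                          # skip biases / norm parameters
            continue
        k = int(p.numel() * keep_ratio)          # number of weights to keep
        threshold = p.abs().flatten().kthvalue(p.numel() - k).values
        masks[name] = (p.abs() > threshold).float()
        p.data.mul_(masks[name])                 # zero out the pruned weights
    return masks                                 # binary masks = the task's subnetwork
```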

2.
Med Image Anal; 95: 103145, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38615432

ABSTRACT

In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
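At its core, the 2D blending step composites a lesion pattern into the unwrapped skin texture under a soft mask. A minimal NumPy sketch of such alpha compositing follows; DermSynth3D performs this differentiably inside the renderer and under its top-down constraints, and all names here are illustrative.

```python
import numpy as np

def blend_lesion(texture, lesion, mask, alpha=0.9):
    """Alpha-composite a lesion patch into an unwrapped skin texture.
    texture, lesion: (H, W, 3) float arrays; mask: (H, W) soft mask in
    [0, 1]; alpha caps the lesion opacity (an assumed parameter)."""
    m = (alpha * mask)[..., None]            # broadcast the mask over RGB
    return (1.0 - m) * texture + m * lesion
```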

3.
J Big Data; 11(1): 43, 2024.
Article in English | MEDLINE | ID: mdl-38528850

ABSTRACT

Modern deep learning training procedures rely on model regularization techniques such as data augmentation methods, which generate training samples that increase the diversity of data and richness of label information. A popular recent method, mixup, uses convex combinations of pairs of original samples to generate new samples. However, as we show in our experiments, mixup can produce undesirable synthetic samples, where the data is sampled off the manifold and can contain incorrect labels. We propose ζ-mixup, a generalization of mixup with provably and demonstrably desirable properties that allows convex combinations of T ≥ 2 samples, leading to more realistic and diverse outputs that incorporate information from T original samples by using a p-series interpolant. We show that, compared to mixup, ζ-mixup better preserves the intrinsic dimensionality of the original datasets, which is a desirable property for training generalizable models. Furthermore, we show that our implementation of ζ-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 26 diverse real-world natural and medical image classification datasets shows that ζ-mixup outperforms mixup, CutMix, and traditional data augmentation techniques. The code will be released at https://github.com/kakumarabhishek/zeta-mixup.
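To make the p-series weighting concrete, here is a minimal NumPy sketch of the idea, assuming one-hot float labels and a γ exponent as the hyperparameter (the value 2.8 is an assumption); consult the released code for the authors' exact formulation.

```python
import numpy as np

def zeta_mixup(X, Y, gamma=2.8, rng=None):
    """Mix all T samples of a batch with p-series weights w_i ∝ i^(-gamma).
    X: (T, ...) images; Y: (T, n_classes) one-hot float labels."""
    if rng is None:
        rng = np.random.default_rng()
    T = X.shape[0]
    w = np.arange(1, T + 1, dtype=np.float64) ** -gamma
    w /= w.sum()                                  # normalize: convex combination
    Xm, Ym = np.empty_like(X), np.empty_like(Y)
    for i in range(T):
        perm = rng.permutation(T)                 # random ordering so the dominant
        Xm[i] = np.tensordot(w, X[perm], axes=1)  # weight lands on a random sample
        Ym[i] = np.tensordot(w, Y[perm], axes=1)
    return Xm, Ym
```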

4.
Artif Intell Med; 148: 102751, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38325929

ABSTRACT

Clinical evaluation evidence and model explainability are key gatekeepers to ensure the safe, accountable, and effective use of artificial intelligence (AI) in clinical settings. We conducted a clinical user-centered evaluation with 35 neurosurgeons to assess the utility of AI assistance and its explanation on the glioma grading task. Each participant read 25 brain MRI scans of patients with gliomas, and gave their judgment on the glioma grading without and with the assistance of AI prediction and explanation. The AI model was trained on the BraTS dataset with 88.0% accuracy. The AI explanation was generated using the explainable AI algorithm of SmoothGrad, which was selected from 16 algorithms based on the criterion of being truthful to the AI decision process. Results showed that compared to the average accuracy of 82.5±8.7% when physicians performed the task alone, physicians' task performance increased to 87.7±7.3% with statistical significance (p-value = 0.002) when assisted by AI prediction, and remained at almost the same level of 88.5±7.0% (p-value = 0.35) with the additional assistance of AI explanation. Based on quantitative and qualitative results, the observed improvement in physicians' task performance assisted by AI prediction was mainly because physicians' decision patterns converged to be similar to AI, as physicians only switched their decisions when disagreeing with AI. The insignificant change in physicians' performance with the additional assistance of AI explanation was because the AI explanations did not provide explicit reasons, contexts, or descriptions of clinical features to help doctors discern potentially incorrect AI predictions. The evaluation showed the clinical utility of AI to assist physicians on the glioma grading task, and identified the limitations and clinical usage gaps of existing explainable AI techniques for future improvement.


Subjects
Artificial Intelligence; Glioma; Humans; Algorithms; Brain; Glioma/diagnostic imaging; Neurosurgeons
5.
Z Med Phys; 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38302292

ABSTRACT

In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling application to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is that tracer uptake varies across patients and anatomical regions. However, existing deep learning-based algorithms apply a fixed model across different subjects and/or anatomical regions during inference, which can result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, without structural information at inference time. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency within the neighboring slices. This way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation, more than an order of magnitude larger than previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method produces corrected PET images that closely resemble ground-truth images corrected with the aid of CT scans. For quantitative assessment, we evaluated our method on 112 held-out subjects and achieved a whole-body absolute relative error of 14.30±3.88% and a relative error of -2.11±2.73%.
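One way to picture the subject- and region-specific kernel modulation is a gating head that rescales a shared convolution per channel using pooled context from neighboring slices. The PyTorch sketch below is a simplified stand-in for the paper's context-aware convolution; the module design and names are assumptions.

```python
import torch
import torch.nn as nn

class ContextModulatedConv(nn.Module):
    """Schematic stand-in for a context-aware convolution: a shared
    kernel whose output channels are rescaled by a gate computed from
    pooled statistics of the neighboring slices."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Sequential(nn.Linear(in_ch, out_ch), nn.Sigmoid())

    def forward(self, x, neighbors):
        # x: (B, in_ch, H, W) current slice; neighbors: mean of adjacent slices
        g = self.gate(neighbors.mean(dim=(2, 3)))   # (B, out_ch) context gate
        return self.conv(x) * g[:, :, None, None]   # channel-wise modulation
```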

6.
Med Image Anal; 88: 102863, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37343323

ABSTRACT

Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.


Subjects
Deep Learning; Skin Diseases; Skin Neoplasms; Humans; Neural Networks, Computer; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
7.
Bioinform Adv; 3(1): vbad068, 2023.
Article in English | MEDLINE | ID: mdl-37359728

ABSTRACT

Large-scale processing of heterogeneous datasets in interdisciplinary research often requires time-consuming manual data curation. Ambiguity in data layout and preprocessing conventions can easily compromise reproducibility and scientific discovery, and even when detected, correcting it demands time and effort from domain experts. Poor data curation can also interrupt processing jobs on large computing clusters, causing frustration and delays. We introduce DataCurator, a portable software package that verifies arbitrarily complex datasets of mixed formats, working equally well on clusters and on local systems. Human-readable TOML recipes are converted into executable, machine-verifiable templates, enabling users to verify datasets using custom rules without writing code. Recipes can transform and validate data for pre- or post-processing, select data subsets, and perform sampling and aggregation, such as summary statistics. Processing pipelines no longer need to be burdened by laborious data validation: data curation and validation are replaced by human- and machine-verifiable recipes specifying rules and actions. Multithreaded execution ensures scalability on clusters, and existing Julia, R, and Python libraries can be reused. DataCurator enables efficient remote workflows, offering integration with Slack and the ability to transfer curated data to clusters using OwnCloud and SCP. Code available at: https://github.com/bencardoen/DataCurator.jl.

8.
Int J Speech Technol; 26(1): 163-184, 2023.
Article in English | MEDLINE | ID: mdl-37008883

ABSTRACT

Clearly articulated speech, relative to plain-style speech, has been shown to improve intelligibility. We examine whether visible speech cues in video alone can be systematically modified to enhance clear-speech visual features and improve intelligibility. We extract clear-speech visual features of English words varying in vowels, produced by multiple male and female talkers. Via a frame-by-frame image-warping based video generation method with a controllable parameter (the displacement factor), we apply the extracted clear-speech visual features to videos of plain speech to synthesize clear-speech videos. We evaluate the generated videos using a robust, state-of-the-art AI lip reader as well as human intelligibility testing. The contributions of this study are: (1) we successfully extract relevant visual cues for video modifications across speech styles, and achieve enhanced intelligibility for AI; (2) this work suggests that universal, talker-independent clear-speech features may be utilized to modify any talker's visual speech style; (3) we introduce the "displacement factor" as a way of systematically scaling the magnitude of displacement modifications between speech styles; and (4) the generated high-definition videos make ideal candidates for human-centric intelligibility and perceptual training studies.
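The displacement factor amounts to linearly scaling per-frame feature displacements between styles. A minimal sketch, with illustrative names and shapes:

```python
import numpy as np

def warp_landmarks(plain_pts, clear_pts, displacement_factor=1.0):
    """Scale per-frame landmark displacements from plain-style toward
    clear-style speech: 0 keeps plain speech, 1 reproduces the extracted
    clear-speech targets, and values > 1 exaggerate the articulation.
    Array shapes (n_frames, n_landmarks, 2) are an assumption."""
    return plain_pts + displacement_factor * (clear_pts - plain_pts)
```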

9.
MethodsX; 10: 102009, 2023.
Article in English | MEDLINE | ID: mdl-36793676

ABSTRACT

Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN) based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories of gradient- and perturbation-based methods (a minimal occlusion sketch follows below).
• Gradient-based explanation methods, such as Guided Backprop and DeepLIFT, utilize the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, utilize input-output sampling pairs to estimate feature importance.
• We describe the implementation details for making these methods work with multi-modal image input, and make the implementation code available.
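As a flavor of the perturbation-based family, here is a minimal single-modality occlusion sketch in NumPy; it is not the released implementation, and `predict` is an assumed callable returning a scalar class score.

```python
import numpy as np

def occlusion_map(predict, image, patch=16, baseline=0.0):
    """Slide a patch over one modality, replace it with a baseline value,
    and record the drop in the class score."""
    H, W = image.shape[:2]
    reference = predict(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = reference - predict(occluded)
    return heat  # larger drop = more important region; repeat per modality
```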

10.
Med Image Anal; 84: 102684, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36516555

ABSTRACT

Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.


Subjects
Artificial Intelligence; Benchmarking; Humans; Clinical Relevance; Evidence Gaps
11.
PLoS One; 17(12): e0276726, 2022.
Article in English | MEDLINE | ID: mdl-36580473

ABSTRACT

Identification of small objects in fluorescence microscopy is a non-trivial task burdened by parameter-sensitive algorithms, for which there is a clear need for an approach that adapts dynamically to changing imaging conditions. Here, we introduce an adaptive object detection method that, given a microscopy image and an image-level label, uses kurtosis-based matching of the distribution of the image differential to express operator intent in terms of recall or precision. We show how a theoretical upper bound of the statistical distance in feature space enables application of belief theory to obtain statistical support for each detected object, capturing those aspects of the image that support the label, and to what extent. We validate our method on two datasets: distinguishing sub-diffraction-limit caveolae and scaffold by stimulated emission depletion (STED) super-resolution microscopy; and detecting amyloid-β deposits in confocal microscopy retinal cross-sections of neuropathologically confirmed Alzheimer's disease donor tissue. Our results are consistent with biological ground truth and with previous subcellular object classification results, and add insight into more nuanced class transition dynamics. We illustrate the novel application of belief theory to object detection in heterogeneous microscopy datasets and the quantification of conflict of evidence in a joint belief function. By applying our method successfully to diffraction-limited confocal imaging of tissue sections and super-resolution microscopy of subcellular structures, we demonstrate multi-scale applicability.
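The central statistic, the kurtosis of the image differential's distribution, can be computed in a few lines. The sketch below uses the gradient magnitude as the differential, which is only a simplified reading of the method.

```python
import numpy as np
from scipy.stats import kurtosis

def differential_kurtosis(image):
    """Kurtosis of the image differential's distribution, here taken as
    the per-pixel gradient magnitude of the image."""
    gy, gx = np.gradient(image.astype(np.float64))
    return kurtosis(np.hypot(gx, gy), axis=None)
```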


Subjects
Algorithms; Alzheimer Disease; Humans; Microscopy, Fluorescence/methods; Microscopy, Confocal/methods; Alzheimer Disease/diagnostic imaging; Amyloid beta-Peptides
12.
Cell Mol Life Sci; 79(11): 565, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284011

ABSTRACT

Mitochondria are major sources of cytotoxic reactive oxygen species (ROS), such as superoxide and hydrogen peroxide, that when uncontrolled contribute to cancer progression. Maintaining a finely tuned, healthy mitochondrial population is essential for cellular homeostasis and survival. Mitophagy, the selective elimination of mitochondria by autophagy, monitors and maintains mitochondrial health and integrity, eliminating damaged ROS-producing mitochondria. However, mechanisms underlying mitophagic control of mitochondrial homeostasis under basal conditions remain poorly understood. E3 ubiquitin ligase Gp78 is an endoplasmic reticulum membrane protein that induces mitochondrial fission and mitophagy of depolarized mitochondria. Here, we report that CRISPR/Cas9 knockout of Gp78 in HT-1080 fibrosarcoma cells increased mitochondrial volume, elevated ROS production and rendered cells resistant to carbonyl cyanide m-chlorophenyl hydrazone (CCCP)-induced mitophagy. These effects were phenocopied by knockdown of the essential autophagy protein ATG5 in wild-type HT-1080 cells. Use of the mito-Keima mitophagy probe confirmed that Gp78 promoted both basal and damage-induced mitophagy. Application of a spot detection algorithm (SPECHT) to GFP-mRFP tandem fluorescent-tagged LC3 (tfLC3)-positive autophagosomes reported elevated autophagosomal maturation in wild-type HT-1080 cells relative to Gp78 knockout cells, predominantly in proximity to mitochondria. Mitophagy inhibition by either Gp78 knockout or ATG5 knockdown reduced mitochondrial potential and increased mitochondrial ROS. Live cell analysis of tfLC3 in HT-1080 cells showed the preferential association of autophagosomes with mitochondria of reduced potential. Xenograft tumors of Gp78 knockout HT-1080 cells showed increased labeling for mitochondria and the cell proliferation marker Ki67, and reduced labeling for the TUNEL cell death reporter. Basal Gp78-dependent mitophagic flux is, therefore, selectively associated with reduced-potential mitochondria, promoting maintenance of a healthy mitochondrial population and limiting ROS production and tumor cell proliferation.


Subjects
Mitophagy; Superoxides; Humans; Carbonyl Cyanide m-Chlorophenyl Hydrazone/pharmacology; Reactive Oxygen Species/metabolism; Ki-67 Antigen/metabolism; Superoxides/metabolism; Hydrogen Peroxide/pharmacology; Mitochondria/metabolism; Ubiquitin-Protein Ligases/genetics; Ubiquitin-Protein Ligases/metabolism; Autophagy/genetics
13.
Comput Med Imaging Graph; 102: 102127, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36257092

ABSTRACT

Supervised deep learning has become a standard approach to solving medical image segmentation tasks. However, serious difficulties in attaining pixel-level annotations for sufficiently large volumetric datasets in real-life applications have highlighted the critical need for alternative approaches, such as semi-supervised learning, where model training can leverage small expert-annotated datasets to enable learning from much larger datasets without laborious annotation. Most of the semi-supervised approaches combine expert annotations and machine-generated annotations with equal weights within deep model training, despite the latter annotations being relatively unreliable and likely to affect model optimization negatively. To overcome this, we propose an active learning approach that uses an example re-weighting strategy, where machine-annotated samples are weighted (i) based on the similarity of their gradient directions of descent to those of expert-annotated data, and (ii) based on the gradient magnitude of the last layer of the deep model. Specifically, we present an active learning strategy with a query function that enables the selection of reliable and more informative samples from machine-annotated batch data generated by a noisy teacher. When validated on clinical COVID-19 CT benchmark data, our method improved the performance of pneumonia infection segmentation compared to the state of the art.
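Criterion (i), similarity of gradient directions, can be sketched as a cosine similarity between the last-layer loss gradients of an expert batch and a machine-annotated batch. The snippet below is a schematic reading of that idea, not the authors' implementation; using the penultimate parameter as "the last layer" and clamping negative alignment to zero are assumptions.

```python
import torch
import torch.nn.functional as F

def machine_sample_weight(model, loss_fn, expert_batch, machine_batch):
    """Weight a machine-annotated batch by how well its last-layer loss
    gradient aligns with that of an expert-annotated batch."""
    last_weight = list(model.parameters())[-2]   # assumed last-layer weight

    def last_layer_grad(batch):
        x, y = batch
        loss = loss_fn(model(x), y)
        (g,) = torch.autograd.grad(loss, last_weight)
        return g.flatten()

    g_expert = last_layer_grad(expert_batch)
    g_machine = last_layer_grad(machine_batch)
    # Negative alignment (conflicting descent directions) maps to weight 0.
    return F.cosine_similarity(g_expert, g_machine, dim=0).clamp(min=0.0)
```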


Subjects
COVID-19; Deep Learning; Humans; Imaging, Three-Dimensional/methods; Supervised Machine Learning; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods
14.
IEEE Trans Med Imaging; 41(11): 3128-3145, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35622798

ABSTRACT

Drug repurposing can accelerate the identification of effective compounds for clinical use against SARS-CoV-2, with the advantage of pre-existing clinical safety data and an established supply chain. RNA viruses such as SARS-CoV-2 manipulate cellular pathways and induce reorganization of subcellular structures to support their life cycle. These morphological changes can be quantified using bioimaging techniques. In this work, we developed DEEMD, a computational pipeline using deep neural network models within a multiple instance learning framework to identify putative treatments effective against SARS-CoV-2 based on morphological analysis of the publicly available RxRx19a dataset. This dataset consists of fluorescence microscopy images of cells infected and not infected with SARS-CoV-2, with and without drug treatment. DEEMD first extracts discriminative morphological features to generate cell morphological profiles from the non-infected and infected cells. These morphological profiles are then used in a statistical model to estimate the efficacy of an applied treatment on infected cells, based on similarities to non-infected cells. DEEMD is capable of localizing infected cells via weak supervision, without any expensive pixel-level annotations. DEEMD identifies known SARS-CoV-2 inhibitors, such as Remdesivir and Aloxistatin, supporting the validity of our approach. DEEMD can be explored for use on other emerging viruses and datasets to rapidly identify candidate antiviral treatments in the future. Our implementation is available online at https://www.github.com/Sadegh-Saberian/DEEMD.
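The efficacy estimate rests on comparing morphological profiles of treated infected cells against those of non-infected cells. A toy version of that comparison, using cosine similarity in place of DEEMD's statistical model:

```python
import numpy as np

def efficacy_score(treated_profiles, noninfected_profile):
    """Score a treatment by how close the mean morphological profile of
    treated infected cells moves toward the non-infected profile.
    Cosine similarity here is an illustrative simplification."""
    t = treated_profiles.mean(axis=0)       # (n_features,) mean treated profile
    h = noninfected_profile                 # (n_features,) reference profile
    return float(t @ h / (np.linalg.norm(t) * np.linalg.norm(h)))
```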


Subjects
COVID-19; SARS-CoV-2; Humans; Antiviral Agents/pharmacology; Antiviral Agents/chemistry; Antiviral Agents/metabolism
15.
Comput Methods Programs Biomed; 219: 106750, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35381490

ABSTRACT

BACKGROUND AND OBJECTIVE: Radiomics and deep learning have emerged as two distinct approaches to medical image analysis. However, their relative expressive power remains largely unknown. Theoretically, hand-crafted radiomic features represent a mere subset of the features that neural networks can approximate, which would make deep learning the more powerful approach. On the other hand, automated learning of hand-crafted features may require a prohibitively large number of training samples. Here we directly test the ability of convolutional neural networks (CNNs) to learn and predict the intensity, shape, and texture properties of tumors as defined by standardized radiomic features. METHODS: Conventional 2D and 3D CNN architectures with an increasing number of convolutional layers were trained to predict the values of 16 standardized radiomic features from real and synthetic PET images of tumors, and were then tested. In addition, several ImageNet-pretrained advanced networks were tested. A total of 4000 images were used for training, 500 for validation, and 500 for testing. RESULTS: Features quantifying size and intensity were predicted with high accuracy, while shape irregularity and heterogeneity features had very high prediction errors and generalized poorly. For example, the mean normalized prediction error of tumor diameter with a 5-layer CNN was 4.23 ± 0.25, while the error for tumor sphericity was 15.64 ± 0.93. We additionally found that learning shape features required an order of magnitude more samples than intensity and size features. CONCLUSIONS: Our findings imply that CNNs trained to perform various image-based clinical tasks may generally under-utilize the shape and texture information that is more easily captured by radiomics. We speculate that to improve CNN performance, shape and texture features can be computed explicitly and added as auxiliary variables to the networks, or supplied as synthetic inputs.
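The reported errors are consistent with an absolute error normalized by the true feature value; the exact normalization is not spelled out above, so the following form is an assumption for illustration.

```python
import numpy as np

def mean_normalized_error(pred, true):
    """Mean absolute error normalized by the true feature value, in
    percent; an assumed form of the paper's reported metric."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return 100.0 * np.mean(np.abs(pred - true) / np.abs(true))
```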


Subjects
Deep Learning; Neoplasms; Humans; Image Processing, Computer-Assisted/methods; Neoplasms/diagnostic imaging; Neural Networks, Computer
16.
Med Image Anal; 77: 102329, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35144199

ABSTRACT

We present an automated approach to detect and longitudinally track skin lesions on 3D total-body skin surface scans. The acquired 3D mesh of the subject is unwrapped to a 2D texture image, where a trained object detection model, Faster R-CNN, localizes the lesions within the 2D domain. These detected skin lesions are mapped back to the 3D surface of the subject and, for subjects imaged multiple times, we construct a graph-based matching procedure to longitudinally track lesions, considering the anatomical correspondences between pairs of meshes, the geodesic proximity of corresponding lesions, and the inter-lesion geodesic distances. We evaluated the proposed approach using 3DBodyTex, a publicly available dataset composed of 3D scans imaging the coloured skin (textured meshes) of 200 human subjects. We manually annotated locations that appeared to the human eye to contain a pigmented skin lesion, and tracked a subset of lesions occurring on the same subject imaged in different poses. Our results, when compared to three human annotators, suggest that the trained Faster R-CNN detects lesions at a performance level similar to that of the human annotators. Our lesion tracking algorithm achieves an average matching accuracy of 88% on a set of detected corresponding pairs of prominent lesions of subjects imaged in different poses, and an average longitudinal accuracy of 71% when encompassing additional errors due to lesion detection. As there is currently no other large-scale publicly available dataset of 3D total-body skin lesions, we publicly release over 25,000 3DBodyTex manual annotations, which we hope will further research on total-body skin lesion analysis.
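The pairwise matching step can be approximated as a linear assignment over a cost built from distances between anatomically corresponding lesion locations. The sketch below uses Euclidean distance as a stand-in for geodesic distance, omits the inter-lesion distance terms, and treats max_dist as an assumed gating threshold.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_lesions(locs_a, locs_b, max_dist=0.05):
    """One-to-one matching of lesions across two scans via linear
    assignment. locs_a: (Na, 3), locs_b: (Nb, 3) lesion coordinates in
    a shared (anatomically corresponding) space."""
    cost = np.linalg.norm(locs_a[:, None, :] - locs_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments that are plausibly the same physical lesion.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```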


Subjects
Algorithms; Whole Body Imaging; Humans; Whole Body Imaging/methods
17.
IEEE Trans Med Imaging; 41(3): 515-530, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34606449

ABSTRACT

Diffuse optical tomography (DOT) leverages near-infrared light propagation through tissue to assess its optical properties and identify abnormalities. DOT image reconstruction is an ill-posed problem due to the highly scattered photons in the medium and the smaller number of measurements compared to the number of unknowns. Limited-angle DOT reduces probe complexity at the cost of increased reconstruction complexity. Reconstructions are thus commonly marred by artifacts and, as a result, it is difficult to obtain an accurate reconstruction of target objects, e.g., malignant lesions. Reconstruction does not always ensure good localization of small lesions. Furthermore, conventional optimization-based reconstruction methods are computationally expensive, rendering them too slow for real-time imaging applications. Our goal is to develop a fast and accurate image reconstruction method using deep learning, where multitask learning ensures accurate lesion localization in addition to improved reconstruction. We apply spatial-wise attention and a distance transform based loss function in a novel multitask learning formulation to improve localization and reconstruction compared to single-task optimized methods. Given the scarcity of real-world sensor-image pairs required for training supervised deep learning models, we leverage physics-based simulation to generate synthetic datasets and use a transfer learning module to align the sensor domain distribution between in silico and real-world data, while taking advantage of cross-domain learning. Applying our method, we find that we can reconstruct and localize lesions faithfully while allowing real-time reconstruction. We also demonstrate that the present algorithm can reconstruct multiple cancer lesions. The results demonstrate that multitask learning provides sharper and more accurate reconstruction.
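A distance transform based loss typically weights per-pixel errors by how far a pixel lies from the target structure, discouraging spurious reconstructions away from the lesion. One plausible formulation, not necessarily the paper's, in NumPy:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_weighted_bce(pred, target, eps=1e-7):
    """Binary cross-entropy weighted by each background pixel's distance
    to the lesion. pred: predicted probabilities in (0, 1); target:
    binary lesion mask."""
    dist = distance_transform_edt(target == 0)   # 0 on lesion pixels
    weight = 1.0 + dist / (dist.max() + eps)     # grows with distance
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((weight * bce).mean())
```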


Subjects
Deep Learning; Tomography, Optical; Algorithms; Artifacts; Image Processing, Computer-Assisted/methods; Tomography, Optical/methods
18.
Comput Biol Med; 136: 104704, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34352454

ABSTRACT

Chest X-ray images are used in deep convolutional neural networks (CNNs) for the detection of COVID-19, one of the major global health challenges of the 21st century. Robustness to noise and improved generalization are major challenges in designing these networks. In this paper, we introduce a data augmentation strategy that determines the type and density of noise to improve the robustness and generalization of deep CNNs for COVID-19 detection. Firstly, we present a learning-to-augment approach that generates new noisy variants of the original image data with optimized noise density, applying a Bayesian optimization technique to control and choose the optimal noise type and its parameters. Secondly, we propose a novel data augmentation strategy, based on denoised X-ray images, that uses the distance between denoised and original pixels to generate new data; we develop an autoencoder model to create new data from denoised images corrupted by Gaussian and impulse noise. A database of chest X-ray images, containing COVID-19 positive, healthy, and non-COVID pneumonia cases, is used to fine-tune pre-trained networks (AlexNet, ShuffleNet, ResNet18, and GoogleNet). The proposed method achieves better results than state-of-the-art learning-to-augment strategies in terms of sensitivity (0.808), specificity (0.915), and F-measure (0.737). The source code of the proposed method is available at https://github.com/mohamadmomeny/Learning-to-augment-strategy.
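The first component, choosing a noise density that helps downstream validation performance, can be mimicked with a simple grid search. The paper uses Bayesian optimization over both the noise type and its parameters, so treat the snippet below, and every name in it, as illustrative.

```python
import numpy as np

def add_impulse_noise(image, density, rng):
    """Salt-and-pepper noise at the given density; assumes image in [0, 1]."""
    noisy = image.copy()
    hit = rng.random(image.shape) < density
    noisy[hit] = rng.integers(0, 2, hit.sum()).astype(image.dtype)
    return noisy

def best_noise_density(val_score, images, densities=np.linspace(0.01, 0.2, 8)):
    """Pick the density whose augmented copies maximize a validation
    score. val_score is an assumed callable over a list of images."""
    rng = np.random.default_rng(0)
    scores = [val_score([add_impulse_noise(x, d, rng) for x in images])
              for d in densities]
    return densities[int(np.argmax(scores))]
```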


Subjects
COVID-19; Deep Learning; Bayes Theorem; Humans; Radiography, Thoracic; SARS-CoV-2; X-Rays
19.
Skin Res Technol; 27(6): 1128-1134, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34251055

ABSTRACT

BACKGROUND: Although many hair disorders can be readily diagnosed based on their clinical appearance, their progression and response to treatment are often difficult to monitor, particularly in quantitative terms. We introduce a technique that uses a smartphone and computerized image analysis to automatically measure hair density and diameter in patients in real time. METHODS: A smartphone equipped with a dermatoscope lens wirelessly transmits trichoscopy images to a computer for image processing. A black-and-white binary mask image representing hair and skin is produced, and the hairs are thinned into single-pixel-thick fiber skeletons. Further analysis based on these fibers allows morphometric characteristics such as hair shaft number and diameter to be computed rapidly. The hair-bearing scalps of fifty participants were imaged to assess the precision of our automated smartphone-based device against a specialized trichometry device for hair shaft density and diameter measurement. The precision and operation time of our technique relative to manual trichometry, which is commonly used by hair disorder specialists, were determined. RESULTS: An equivalence test, based on two one-sided t-tests, demonstrates statistical equivalence in hair density and diameter values between the automated technique and manual trichometry within a 20% margin. On average, the technique actively required 24 seconds of the clinician's time, whereas manual trichometry required 9.2 minutes. CONCLUSION: Automated smartphone-based trichometry is a rapid, precise, and clinically feasible technique that can significantly facilitate the assessment and monitoring of hair loss. Its use could easily be integrated into clinical practice to improve standard trichoscopy.
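The mask-to-morphometry step described above, thinning hairs to single-pixel skeletons and deriving lengths and widths, can be sketched with scikit-image; the diameter estimate here (mask area divided by skeleton length) is a simplification of the device's full analysis.

```python
import numpy as np
from skimage.morphology import skeletonize

def hair_metrics(mask, mm_per_px):
    """From a binary hair/skin mask, estimate total hair length and mean
    shaft diameter. Skeleton pixel count approximates fiber length;
    area / length approximates average shaft width."""
    skeleton = skeletonize(mask.astype(bool))      # single-pixel-thick fibers
    length_px = int(skeleton.sum())
    mean_diameter_px = mask.sum() / max(length_px, 1)
    return length_px * mm_per_px, mean_diameter_px * mm_per_px
```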


Subjects
Hair Diseases; Smartphone; Alopecia; Hair; Humans; Scalp
20.
Comput Med Imaging Graph; 90: 101924, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33895621

ABSTRACT

The Fuhrman cancer grading and tumor-node-metastasis (TNM) cancer staging systems are typically used by clinicians in the treatment planning of renal cell carcinoma (RCC), a common cancer in men and women worldwide. Pathologists typically use percutaneous renal biopsy for RCC grading, while staging is performed by volumetric medical image analysis before renal surgery. Recent studies suggest that clinicians can effectively perform these classification tasks non-invasively by analyzing image texture features of RCC from computed tomography (CT) data. However, image feature identification for RCC grading and staging often relies on laborious manual processes, which are error-prone and time-intensive. To address this challenge, this paper proposes a learnable image histogram in the deep neural network framework that can learn task-specific image histograms with variable bin centers and widths. The proposed approach enables learning statistical context features from raw medical data, which a conventional convolutional neural network (CNN) cannot do. The linear basis function of our learnable image histogram is piecewise differentiable, enabling errors to be back-propagated to update the variable bin centers and widths during training. This novel approach can segregate the CT textures of an RCC into different intensity spectra, enabling efficient Fuhrman low (I/II) versus high (III/IV) grading as well as RCC low (I/II) versus high (III/IV) staging. The proposed method is validated on a clinical CT dataset of 159 patients from The Cancer Imaging Archive (TCIA) database, and it demonstrates 80% and 83% accuracy in RCC grading and staging, respectively.
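The learnable histogram idea, trainable bin centers and widths under a piecewise-linear basis, translates naturally to a small PyTorch module. The following is a schematic re-implementation from the description above, not the authors' code; the triangular basis and the [0, 1] intensity range are assumptions.

```python
import torch
import torch.nn as nn

class LearnableHistogram(nn.Module):
    """Learnable histogram layer: each bin is a piecewise-linear
    (triangular) basis with a trainable center and width, so gradients
    update both during training."""
    def __init__(self, n_bins=16):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, n_bins))
        self.widths = nn.Parameter(torch.full((n_bins,), 1.0 / n_bins))

    def forward(self, x):                # x: (B, 1, H, W), intensities in [0, 1]
        d = (x.flatten(1).unsqueeze(-1) - self.centers).abs()  # (B, HW, bins)
        votes = torch.relu(1.0 - d / self.widths.abs().clamp_min(1e-4))
        return votes.mean(dim=1)         # (B, bins): soft, normalized histogram
```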


Subjects
Carcinoma, Renal Cell; Kidney Neoplasms; Carcinoma, Renal Cell/diagnostic imaging; Female; Humans; Kidney; Kidney Neoplasms/diagnostic imaging; Male; Neoplasm Grading; Tomography, X-Ray Computed