Results 1-20 of 22

1.
Nat Methods; 21(2): 195-212, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347141

ABSTRACT

Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
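
A minimal sketch of the idea of mapping a problem fingerprint to candidate metrics for the four task types named above. The fingerprint fields and the metric mapping are illustrative assumptions, not the actual Metrics Reloaded decision trees.

# Hypothetical sketch: a simplified "problem fingerprint" drives metric selection.
def suggest_metrics(fingerprint: dict) -> list:
    task = fingerprint["task"]
    if task == "image-level classification":
        metrics = ["AUROC", "balanced accuracy"]
        if fingerprint.get("class_imbalance", False):
            metrics.append("average precision")
    elif task == "object detection":
        metrics = ["average precision @ IoU 0.5", "free-response ROC"]
    elif task == "semantic segmentation":
        metrics = ["Dice similarity coefficient", "normalized surface distance"]
    elif task == "instance segmentation":
        metrics = ["panoptic quality", "per-instance Dice"]
    else:
        raise ValueError(f"unknown task type: {task}")
    return metrics

print(suggest_metrics({"task": "semantic segmentation", "class_imbalance": True}))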


Subjects
Algorithms, Computer-Assisted Image Processing, Machine Learning, Semantics
2.
IEEE Trans Med Imaging; 43(1): 529-541, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37672368

ABSTRACT

Deep neural networks are often applied to medical images to automate the problem of medical diagnosis. However, a more clinically relevant question that practitioners usually face is how to predict the future trajectory of a disease. Current methods for prognosis or disease trajectory forecasting often require domain knowledge and are complicated to apply. In this paper, we formulate the prognosis prediction problem as a one-to-many prediction problem. Inspired by a clinical decision-making process with two agents-a radiologist and a general practitioner - we predict prognosis with two transformer-based components that share information with each other. The first transformer in this framework aims to analyze the imaging data, and the second one leverages its internal states as inputs, also fusing them with auxiliary clinical data. The temporal nature of the problem is modeled within the transformer states, allowing us to treat the forecasting problem as a multi-task classification, for which we propose a novel loss. We show the effectiveness of our approach in predicting the development of structural knee osteoarthritis changes and forecasting Alzheimer's disease clinical status directly from raw multi-modal data. The proposed method outperforms multiple state-of-the-art baselines with respect to performance and calibration, both of which are needed for real-world applications. An open-source implementation of our method is made publicly available at https://github.com/Oulu-IMEDS/CLIMATv2.
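
A rough sketch of the multi-task, per-time-point classification view of prognosis forecasting described above, in PyTorch. The tensor shapes, class counts and the plain per-horizon cross-entropy are assumptions for illustration, not the paper's actual loss or architecture (see the linked repository for that).

import torch
import torch.nn.functional as F

def multi_horizon_ce(logits, targets, ignore_index=-1):
    """Average cross-entropy over T forecast horizons.

    logits:  (batch, T, num_classes), one classification head per future time point
    targets: (batch, T) integer grades; missing follow-ups marked with ignore_index
    """
    b, t, c = logits.shape
    return F.cross_entropy(logits.reshape(b * t, c),
                           targets.reshape(b * t),
                           ignore_index=ignore_index)

logits = torch.randn(4, 3, 5)                 # 4 patients, 3 horizons, 5 severity grades
targets = torch.tensor([[0, 1, -1], [2, 2, 3], [0, 0, 0], [4, -1, -1]])
print(multi_horizon_ce(logits, targets))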


Subjects
Alzheimer Disease, Knee Osteoarthritis, Humans, Alzheimer Disease/diagnostic imaging, Calibration, Neural Networks (Computer), Radiologists
3.
IEEE Trans Cybern; 53(4): 2261-2274, 2023 Apr.
Article in English | MEDLINE | ID: mdl-34613931

ABSTRACT

Engineering design is traditionally performed by hand: an expert makes design proposals based on past experience, and these proposals are then tested for compliance with certain target specifications. Testing for compliance is performed first by computer simulation using what is called a discipline model. Such a model can be implemented by finite element analysis, multibody systems approach, etc. Designs passing this simulation are then considered for physical prototyping. The overall process may take months and is a significant cost in practice. We have developed a Bayesian optimization (BO) system for partially automating this process by directly optimizing compliance with the target specification with respect to the design parameters. The proposed method is a general framework for computing the generalized inverse of a high-dimensional nonlinear function that does not require, for example, gradient information, which is often unavailable from discipline models. We furthermore develop a three-tier convergence criterion based on: 1) convergence to a solution optimally satisfying all specified design criteria; 2) detection that a design satisfying all criteria is infeasible; or 3) convergence to a probably approximately correct (PAC) solution. We demonstrate the proposed approach on benchmark functions and a vehicle chassis design problem motivated by an industry setting using a state-of-the-art commercial discipline model. We show that the proposed approach is general, scalable, and efficient and that the novel convergence criteria can be implemented straightforwardly based on the existing concepts and subroutines in popular BO software packages.
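
A toy sketch of the design-inversion idea above: Bayesian optimization searches the design parameters so that a simulated response matches a target specification. It uses scikit-optimize's gp_minimize as a stand-in optimizer; the discipline model, target and bounds are made up, and the paper's own BO system and three-tier convergence criterion are not reproduced.

import numpy as np
from skopt import gp_minimize

TARGET = np.array([1.2, 0.8])          # made-up target specification

def discipline_model(x):
    # Stand-in for an expensive simulation mapping design parameters to responses.
    x = np.asarray(x)
    return np.array([x[0] ** 2 + 0.5 * x[1], np.sin(x[0]) + x[1]])

def mismatch(x):
    # BO minimizes the deviation of the simulated response from the target spec.
    return float(np.sum((discipline_model(x) - TARGET) ** 2))

res = gp_minimize(mismatch, dimensions=[(-2.0, 2.0), (-2.0, 2.0)],
                  n_calls=30, random_state=0)
print("best design:", res.x, "residual:", res.fun)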

4.
NPJ Digit Med; 6(1): 112, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37311940

ABSTRACT

A plethora of classification models for the detection of glaucoma from fundus images have been proposed in recent years. Often trained with data from a single glaucoma clinic, they report impressive performance on internal test sets, but tend to struggle in generalizing to external sets. This performance drop can be attributed to data shifts in glaucoma prevalence, fundus camera, and the definition of glaucoma ground truth. In this study, we confirm that a previously described regression network for glaucoma referral (G-RISK) obtains excellent results in a variety of challenging settings. Thirteen different data sources of labeled fundus images were utilized. The data sources include two large population cohorts (Australian Blue Mountains Eye Study, BMES and German Gutenberg Health Study, GHS) and 11 publicly available datasets (AIROGS, ORIGA, REFUGE1, LAG, ODIR, REFUGE2, GAMMA, RIM-ONEr3, RIM-ONE DL, ACRIMA, PAPILA). To minimize data shifts in input data, a standardized image processing strategy was developed to obtain 30° disc-centered images from the original data. A total of 149,455 images were included for model testing. Areas under the receiver operating characteristic curve (AUC) for the BMES and GHS population cohorts were 0.976 [95% CI: 0.967-0.986] and 0.984 [95% CI: 0.980-0.991] at participant level, respectively. At a fixed specificity of 95%, sensitivities were 87.3% and 90.3%, respectively, surpassing the minimum criterion of 85% sensitivity recommended by Prevent Blindness America. AUC values on the eleven publicly available datasets ranged from 0.854 to 0.988. These results confirm the excellent generalizability of a glaucoma risk regression model trained with homogeneous data from a single tertiary referral center. Further validation using prospective cohort studies is warranted.
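
As a sketch of the operating-point analysis reported above (AUC, plus sensitivity at a fixed 95% specificity), assuming per-participant binary labels and continuous risk scores are available; the data below are random placeholders, not study data.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                    # placeholder glaucoma labels
y_score = y_true * 0.8 + rng.normal(0, 0.5, size=1000)    # placeholder risk scores

auc = roc_auc_score(y_true, y_score)
fpr, tpr, thr = roc_curve(y_true, y_score)

spec = 1 - fpr
idx = np.where(spec >= 0.95)[0][-1]      # last operating point still giving >= 95% specificity
print(f"AUC {auc:.3f}, sensitivity {tpr[idx]:.1%} at specificity {spec[idx]:.1%}")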

5.
Transl Vis Sci Technol; 11(8): 22, 2022 Aug 1.
Article in English | MEDLINE | ID: mdl-35998059

ABSTRACT

Purpose: Standard automated perimetry is the gold standard to monitor visual field (VF) loss in glaucoma management, but it is prone to intrasubject variability. We trained and validated a customized deep learning (DL) regression model with an Xception backbone that estimates pointwise and overall VF sensitivity from unsegmented optical coherence tomography (OCT) scans. Methods: DL regression models were trained with four imaging modalities (circumpapillary OCT at 3.5 mm, 4.1 mm and 4.7 mm diameter, and scanning laser ophthalmoscopy en face images) to estimate mean deviation (MD) and 52 threshold values. This retrospective study used data from patients who underwent a complete glaucoma examination, including a reliable Humphrey Field Analyzer (HFA) 24-2 SITA Standard (SS) VF exam and a SPECTRALIS OCT. Results: For MD estimation, weighted prediction averaging of all four individual models yielded a mean absolute error (MAE) of 2.89 dB (2.50-3.30) on 186 test images, reducing the baseline by 54% (MAEdecr%). For the estimation of the 52 VF threshold values, the weighted ensemble model resulted in an MAE of 4.82 dB (4.45-5.22), representing an MAEdecr% of 38% from baseline when predicting the pointwise mean value. DL managed to explain 75% and 58% of the variance (R2) in MD and pointwise sensitivity estimation, respectively. Conclusions: Deep learning can estimate global and pointwise VF sensitivities that fall almost entirely within the 90% test-retest confidence intervals of the 24-2 SS test. Translational Relevance: Fast and consistent VF prediction from unsegmented OCT scans could become a solution for visual function estimation in patients unable to perform reliable VF exams.
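
A small sketch of the weighted ensemble averaging and the MAE/R² reporting used above, with scikit-learn; the weights and predictions are placeholders, not the study's fitted values.

import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Placeholder MD predictions (dB) from four single-modality models on the same test eyes.
preds = np.array([[ -3.1, -2.8, -3.4, -2.9],
                  [-10.2, -9.5, -11.0, -9.8],
                  [ -0.5, -1.0, -0.2, -0.8]])
y_true = np.array([-3.0, -10.0, -0.7])

weights = np.array([0.3, 0.3, 0.2, 0.2])          # assumed ensemble weights (sum to 1)
ensemble = preds @ weights

print("MAE:", mean_absolute_error(y_true, ensemble))
print("R2 :", r2_score(y_true, ensemble))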


Subjects
Deep Learning, Glaucoma, Glaucoma/diagnostic imaging, Humans, Retrospective Studies, Optical Coherence Tomography, Vision Disorders/diagnosis, Visual Fields
6.
Acta Neuropathol Commun; 10(1): 128, 2022 Sep 3.
Article in English | MEDLINE | ID: mdl-36057624

ABSTRACT

It has become evident that Alzheimer's Disease (AD) is not only linked to its hallmark lesions, amyloid plaques and neurofibrillary tangles (NFTs), but also to other co-occurring pathologies. This may lead to synergistic effects of the respective cellular and molecular players, resulting in neuronal death. One of these co-pathologies is the accumulation of phosphorylated transactive-response DNA binding protein 43 (pTDP-43) as neuronal cytoplasmic inclusions, currently considered to represent limbic-predominant age-related TDP-43 encephalopathy neuropathological changes (LATE-NC), in up to 70% of symptomatic AD cases. Granulovacuolar degeneration (GVD) is another AD co-pathology, which also contains TDP-43 and other AD-related proteins. Recently, we found that all proteins required for necroptosis execution, a previously defined programmed form of neuronal cell death, are present in GVD, such as the phosphorylated necroptosis executioner mixed-lineage kinase domain-like protein (pMLKL). Accordingly, this protein is a reliable marker for GVD lesions, similar to other known GVD proteins. Importantly, it is not yet known whether the presence of LATE-NC in symptomatic AD cases is associated with necroptosis pathway activation, presumably contributing to neuron loss by cell death execution. In this study, we investigated the impact of LATE-NC on the severity of necroptosis-associated GVD lesions, phosphorylated tau (pTau) pathology and neuronal density. First, we used 230 human post-mortem cases, including 82 controls without AD neuropathological changes (non-ADNC), 81 non-demented cases with ADNC (i.e., pathologically defined preclinical AD, p-preAD), and 67 demented cases with ADNC. We found that Braak NFT stage and LATE-NC stage were good predictors for GVD expansion and neuronal loss in the hippocampal CA1 region. Further, we compared the impact of TDP-43 accumulation on hippocampal expression of pMLKL-positive GVD and pTau, as well as on neuronal density, in a subset of nine non-ADNC controls, ten symptomatic AD cases with LATE-NC (ADTDP+) and eight without LATE-NC (ADTDP-). Here, we observed increased levels of pMLKL-positive, GVD-exhibiting neurons in ADTDP+ cases compared to ADTDP- cases and controls, which was accompanied by augmented pTau pathology. Neuronal loss in the CA1 region was increased in ADTDP+ compared to ADTDP- cases. These data suggest that co-morbid LATE-NC in AD impacts not only pTau pathology but also GVD-mediated necroptosis pathway activation, which results in an accelerated neuronal demise. This further highlights the cumulative and synergistic effects of comorbid pathologies leading to neuronal loss in AD. Accordingly, protection against necroptotic neuronal death appears to be a promising therapeutic option for AD and LATE.


Subjects
Alzheimer Disease, Alzheimer Disease/pathology, DNA-Binding Proteins/metabolism, Humans, Necroptosis, Nerve Degeneration/pathology, Neurofibrillary Tangles/pathology
7.
IEEE Trans Pattern Anal Mach Intell; 43(9): 3024-3036, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32960762

ABSTRACT

Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate. The existing literature on model-based optimization in conditional parameter spaces is usually built on trees. In this work, we generalize the additive assumption to tree-structured functions and propose an additive tree-structured covariance function, showing improved sample-efficiency, wider applicability and greater flexibility. Furthermore, by incorporating the structure information of parameter spaces and the additive assumption in the BO loop, we develop a parallel algorithm to optimize the acquisition function, and this optimization can be performed in a low-dimensional space. We demonstrate our method on an optimization benchmark function, on a neural network compression problem and on pruning pre-trained VGG16 and ResNet50 models. Experimental results show our approach significantly outperforms the current state of the art for conditional parameter optimization, including SMAC, TPE and Jenatton et al. (2017).
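
A stripped-down sketch of an additive covariance of the general kind discussed above: the kernel is a sum of RBF kernels, each acting only on the subset of dimensions active in one branch of a conditional parameter space. The grouping and length scales are invented, and the paper's tree-structured construction and acquisition optimization are not reproduced.

import numpy as np

def rbf(a, b, ls):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def additive_kernel(X1, X2, groups, length_scales):
    """Sum of RBF kernels, each restricted to one group of dimensions."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for dims, ls in zip(groups, length_scales):
        K += rbf(X1[:, dims], X2[:, dims], ls)
    return K

X = np.random.rand(5, 4)
# e.g. dims 0-1 shared, dims 2 and 3 each belonging to one conditional branch
K = additive_kernel(X, X, groups=[[0, 1], [2], [3]], length_scales=[1.0, 0.5, 0.5])
print(K.shape)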

8.
Sci Rep; 11(1): 20313, 2021 Oct 13.
Article in English | MEDLINE | ID: mdl-34645908

ABSTRACT

Although unprecedented sensitivity and specificity values are reported, recent glaucoma detection deep learning models lack decision transparency. Here, we propose a methodology that advances explainable deep learning in the field of glaucoma detection and vertical cup-disc ratio (VCDR), an important risk factor. We trained and evaluated deep learning models using fundus images that underwent a certain cropping policy. We defined the crop radius as a percentage of image size, centered on the optic nerve head (ONH), with an equidistantly spaced range from 10% to 60% (ONH crop policy). The inverse of the cropping mask was also applied (periphery crop policy). Trained models using original images resulted in an area under the curve (AUC) of 0.94 [95% CI 0.92-0.96] for glaucoma detection, and a coefficient of determination (R2) equal to 77% [95% CI 0.77-0.79] for VCDR estimation. Models trained on images in which the ONH was absent were still able to obtain significant performance (0.88 [95% CI 0.85-0.90] AUC for glaucoma detection and 37% [95% CI 0.35-0.40] R2 score for VCDR estimation in the most extreme setup of 60% ONH crop). Our findings provide the first irrefutable evidence that deep learning can detect glaucoma from fundus image regions outside the ONH.
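
A sketch of the two cropping policies described above (keep only an ONH-centred disc, or its inverse), using NumPy; the ONH centre coordinates and the radius-as-percentage-of-image-size convention are assumptions for illustration.

import numpy as np

def onh_crop(img, center, radius_pct, keep_inside=True):
    """Mask a fundus image outside (or inside) a circle centred on the ONH.

    radius_pct: crop radius as a percentage of the image's larger side.
    keep_inside=True -> ONH crop policy; False -> periphery crop policy.
    """
    h, w = img.shape[:2]
    radius = radius_pct / 100.0 * max(h, w)
    yy, xx = np.ogrid[:h, :w]
    inside = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    mask = inside if keep_inside else ~inside
    return img * mask[..., None] if img.ndim == 3 else img * mask

fundus = np.random.rand(512, 512, 3)
periphery_only = onh_crop(fundus, center=(250, 260), radius_pct=30, keep_inside=False)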


Subjects
Deep Learning, Fundus Oculi, Glaucoma/diagnostic imaging, Optic Disk/diagnostic imaging, Optic Nerve Diseases/diagnostic imaging, Aged, Area Under Curve, Computer-Assisted Diagnosis/methods, Female, Humans, Male, Middle Aged, Regression Analysis, Retina/diagnostic imaging, Sensitivity and Specificity
9.
Comput Methods Programs Biomed; 199: 105920, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33412285

ABSTRACT

BACKGROUND AND OBJECTIVES: Pathological myopia (PM) is the seventh leading cause of blindness, with a reported global prevalence up to 3%. Early and automated PM detection from fundus images could help prevent blindness in a world population that is characterized by a rising myopia prevalence. We aim to assess the use of convolutional neural networks (CNNs) for the detection of PM and semantic segmentation of myopia-induced lesions from fundus images on a recently introduced reference data set. METHODS: This investigation reports on the results of CNNs developed for the recently introduced Pathological Myopia (PALM) dataset, which consists of 1200 images. Our CNN bundles lesion segmentation and PM classification, as the two tasks are heavily intertwined. Domain knowledge is also incorporated through the introduction of a new Optic Nerve Head (ONH)-based prediction enhancement for the segmentation of atrophy and fovea localization. Finally, we are the first to approach fovea localization using segmentation instead of detection or regression models. Evaluation metrics include area under the receiver operating characteristic curve (AUC) for PM detection, Euclidean distance for fovea localization, and Dice and F1 metrics for the semantic segmentation tasks (optic disc, retinal atrophy and retinal detachment). RESULTS: Models trained with the 400 available training images achieved an AUC of 0.9867 for PM detection, and a Euclidean distance of 58.27 pixels on the fovea localization task, evaluated on a test set of 400 images. Dice and F1 metrics for semantic segmentation of lesions scored 0.9303 and 0.9869 on optic disc, 0.8001 and 0.9135 on retinal atrophy, and 0.8073 and 0.7059 on retinal detachment, respectively. CONCLUSIONS: We report a successful approach for the simultaneous classification of pathological myopia and segmentation of associated lesions. Our work was acknowledged with an award in the context of the "Pathological Myopia detection from retinal images" challenge held during the IEEE International Symposium on Biomedical Imaging (April 2019). Considering that (pathological) myopia cases are often identified as false positives and false negatives in glaucoma deep learning models, we envisage that the current work could aid future research in discriminating between glaucomatous and highly myopic eyes, complemented by the localization and segmentation of landmarks such as the fovea, optic disc and atrophy.
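
A short sketch of the segmentation-based fovea localization and its Euclidean-distance evaluation mentioned above: the predicted fovea position is taken as the centroid of the predicted fovea mask. The mask would normally come from the CNN; here it is a placeholder, and the thresholding choice is an assumption.

import numpy as np

def fovea_from_mask(prob_mask, threshold=0.5):
    """Return the (row, col) centroid of the thresholded fovea segmentation."""
    ys, xs = np.nonzero(prob_mask >= threshold)
    if len(ys) == 0:
        return None                      # no fovea predicted
    return float(ys.mean()), float(xs.mean())

def euclidean_error(pred_rc, true_rc):
    return float(np.hypot(pred_rc[0] - true_rc[0], pred_rc[1] - true_rc[1]))

mask = np.zeros((400, 400)); mask[200:210, 180:190] = 1.0   # placeholder prediction
pred = fovea_from_mask(mask)
print(pred, euclidean_error(pred, (204.0, 186.0)))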


Subjects
Deep Learning, Glaucoma, Degenerative Myopia, Optic Disk, Fundus Oculi, Humans, Degenerative Myopia/diagnostic imaging
10.
IEEE Trans Pattern Anal Mach Intell; 42(3): 735-748, 2020 Mar.
Article in English | MEDLINE | ID: mdl-30489261

ABSTRACT

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates if and only if the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. We prove that the Lovász hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including on the PASCAL VOC and Microsoft COCO datasets.
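
A compact sketch of the Lovász-extension machinery that the Lovász hinge builds on, specialized to the Jaccard (IoU) set loss: hinge errors are sorted in decreasing order and weighted by the discrete gradient of the set loss along that ordering. This follows the same family of constructions as the related Lovász-Softmax loss and is only an illustration, not the full structured-prediction surrogate from the paper.

import numpy as np

def jaccard_grad(gt_sorted):
    """Discrete gradient of the Jaccard set loss along a fixed error ordering."""
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge_jaccard(scores, labels):
    """Lovász extension of the Jaccard loss applied to hinge errors (labels in {0,1})."""
    signs = 2.0 * labels - 1.0
    errors = 1.0 - scores * signs                 # hinge errors
    order = np.argsort(-errors)                   # decreasing errors
    return float(np.dot(np.maximum(errors[order], 0), jaccard_grad(labels[order])))

print(lovasz_hinge_jaccard(np.array([2.0, -1.5, 0.3]), np.array([1.0, 0.0, 1.0])))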

11.
IEEE Trans Med Imaging; 39(12): 4346-4356, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32804644

ABSTRACT

Knee osteoarthritis (OA) is one of the leading causes of disability in the world. This musculoskeletal disorder is assessed from clinical symptoms, and typically confirmed via radiographic assessment. This visual assessment done by a radiologist requires experience, and suffers from moderate to high inter-observer variability. The recent literature has shown that deep learning methods can reliably perform the OA severity assessment according to the gold-standard Kellgren-Lawrence (KL) grading system. However, these methods require large amounts of labeled data, which are costly to obtain. In this study, we propose the Semixup algorithm, a semi-supervised learning (SSL) approach to leverage unlabeled data. Semixup relies on consistency regularization using in- and out-of-manifold samples, together with interpolated consistency. On an independent test set, our method significantly outperformed other state-of-the-art SSL methods in most cases. Finally, when compared to a well-tuned fully supervised baseline that yielded a balanced accuracy (BA) of 70.9 ± 0.8% on the test set, Semixup showed comparable performance (BA of 71 ± 0.8%, p = 0.368) while requiring six times less labeled data. These results show that our proposed SSL method allows building fully automatic OA severity assessment tools with datasets that are available outside research settings.
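
A simplified sketch of the consistency-regularization ingredient of semi-supervised training as described above: supervised cross-entropy on labeled images plus a consistency penalty between predictions on two augmented views of the same unlabeled image, in PyTorch. This is not the full Semixup algorithm; the in/out-of-manifold and interpolation consistency terms, the augmentations and the weighting are simplified assumptions.

import torch
import torch.nn.functional as F

def ssl_loss(model, x_lab, y_lab, x_unlab, augment, consistency_weight=1.0):
    """Supervised CE + consistency between two augmented views of unlabeled images."""
    sup = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        p_ref = F.softmax(model(augment(x_unlab)), dim=1)   # "target" view
    p_aug = F.softmax(model(augment(x_unlab)), dim=1)       # second view
    consistency = F.mse_loss(p_aug, p_ref)
    return sup + consistency_weight * consistency

# usage sketch (model, data batches and augment() are assumed to exist):
# loss = ssl_loss(model, x_l, y_l, x_u, augment, consistency_weight=2.0)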


Subjects
Knee Osteoarthritis, Algorithms, Humans, Observer Variation, Knee Osteoarthritis/diagnostic imaging, Radiography, Supervised Machine Learning
12.
IEEE Trans Med Imaging; 39(11): 3679-3690, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32746113

ABSTRACT

In many medical imaging and classical computer vision tasks, the Dice score and Jaccard index are used to evaluate the segmentation performance. Despite the existence and great empirical success of metric-sensitive losses, i.e., relaxations of these metrics such as soft Dice, soft Jaccard and Lovász-Softmax, many researchers still use per-pixel losses, such as (weighted) cross-entropy, to train CNNs for segmentation. Therefore, the target metric is in many cases not directly optimized. We investigate, from a theoretical perspective, the relation within the group of metric-sensitive loss functions and question the existence of an optimal weighting scheme for weighted cross-entropy to optimize the Dice score and Jaccard index at test time. We find that the Dice score and Jaccard index approximate each other relatively and absolutely, but we find no such approximation for a weighted Hamming similarity. For the Tversky loss, the approximation gets monotonically worse when deviating from the trivial weight setting where soft Tversky equals soft Dice. We verify these results empirically in an extensive validation on six medical segmentation tasks and can confirm that metric-sensitive losses are superior to cross-entropy based loss functions in case of evaluation with the Dice score or Jaccard index. This further holds in a multi-class setting, and across different object sizes and foreground/background ratios. These results encourage a wider adoption of metric-sensitive loss functions for medical segmentation tasks where the performance measure of interest is the Dice score or Jaccard index.
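
For concreteness, a minimal soft Dice loss of the kind discussed above (a differentiable relaxation of the Dice score used directly as a training objective), in PyTorch; the smoothing constant and the binary, single-channel setup are illustrative choices, not the paper's exact formulation.

import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """1 - soft Dice for binary segmentation.

    probs:  (batch, H, W) predicted foreground probabilities in [0, 1]
    target: (batch, H, W) binary ground-truth masks
    """
    dims = (1, 2)
    intersection = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()

probs = torch.rand(2, 64, 64)
target = (torch.rand(2, 64, 64) > 0.5).float()
print(soft_dice_loss(probs, target))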


Subjects
Diagnostic Imaging, Entropy
13.
Acta Ophthalmol; 98(1): e94-e100, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31344328

ABSTRACT

PURPOSE: To assess the use of deep learning (DL) for computer-assisted glaucoma identification, and the impact of training using images selected by an active learning strategy, which minimizes labelling cost. Additionally, this study focuses on the explainability of the glaucoma classifier. METHODS: This original investigation pooled 8433 retrospectively collected and anonymized colour optic disc-centred fundus images, in order to develop a deep learning-based classifier for glaucoma diagnosis. The labels of the various deep learning models were compared with the clinical assessment by glaucoma experts. Data were analysed between March and October 2018. The main outcome measures were sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and the amount of data used for discriminating between glaucomatous and non-glaucomatous fundus images, at both image and patient level. RESULTS: Trained using 2072 colour fundus images, representing 42% of the original training data, the DL model achieved an AUC of 0.995, with a sensitivity of 98.0% (CI 95.5%-99.4%) and a specificity of 91% (CI 84.0%-96.0%) for glaucoma versus non-glaucoma patient referral. CONCLUSIONS: These results demonstrate the benefits of deep learning for automated glaucoma detection based on optic disc-centred fundus images. The combined use of transfer and active learning in the medical community can optimize performance of DL models, while minimizing the labelling cost for domain experts. Glaucoma experts are able to make use of heat maps generated by the deep learning classifier to assess its decisions, which seem to be related to the inferior and superior neuroretinal rim (within the ONH), and the RNFL in the superotemporal and inferotemporal zones (outside the ONH).
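
A bare-bones sketch of the uncertainty-based active learning loop implied above: at each round, the unlabeled images whose current predictions are least confident are sent for expert labelling. The sampling criterion, batch size and probabilities are assumptions; the study's exact selection strategy may differ.

import numpy as np

def select_for_labelling(probs, batch_size=100):
    """Pick the unlabeled images with predictions closest to 0.5 (least confident)."""
    uncertainty = 1.0 - np.abs(probs - 0.5) * 2.0      # 1 = maximally uncertain
    return np.argsort(-uncertainty)[:batch_size]

# probs: the current model's glaucoma probabilities on the unlabeled pool
probs = np.random.rand(8433)
to_label = select_for_labelling(probs, batch_size=500)
print(to_label[:10])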


Subjects
Deep Learning, Computer-Assisted Diagnosis/methods, Glaucoma/diagnosis, Optic Disk/pathology, Follow-Up Studies, Fundus Oculi, Humans, ROC Curve, Retrospective Studies
14.
Comput Med Imaging Graph; 76: 101636, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31288217

ABSTRACT

Epidemiological studies demonstrate that dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess their potential in the context of retinal artery-vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, a novel application of the U-Net semantic segmentation architecture (based on FCNs) to the discrimination of arteries and veins in fundus images is presented. By utilizing DL, we obtain results that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a decrease in error of 25% over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. The HRF A/V ground truth validated by an ophthalmologist, the predicted A/V annotations and the evaluation code are available at https://github.com/rubenhx/av-segmentation.
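
A sketch of the centerline-restricted evaluation mentioned above: artery/vein labels are compared only on skeleton (centerline) pixels of the reference vessel map, using scikit-image. The arrays are placeholders, and the paper's exact protocol (e.g. the two-pixel width filter) is not reproduced.

import numpy as np
from skimage.morphology import skeletonize

def centerline_accuracy(pred_av, true_av, vessel_mask):
    """Accuracy of A/V labels (0 = artery, 1 = vein) on centerline pixels only."""
    centerline = skeletonize(vessel_mask.astype(bool))
    return float((pred_av[centerline] == true_av[centerline]).mean())

vessels = np.zeros((128, 128), dtype=bool); vessels[60:68, 10:120] = True
true_av = np.zeros((128, 128), dtype=int)          # placeholder: all artery
pred_av = true_av.copy(); pred_av[64, 50:60] = 1   # a few mislabeled pixels
print(centerline_accuracy(pred_av, true_av, vessels))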


Subjects
Deep Learning, Fundus Oculi, Retinal Vessels/diagnostic imaging, Benchmarking, Datasets as Topic, Humans, Photography, Retinal Artery/diagnostic imaging, Retinal Vein/diagnostic imaging
15.
Comput Methods Programs Biomed; 153: 115-127, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29157445

ABSTRACT

BACKGROUND AND OBJECTIVES: Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time consuming, and requires an intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored due to its potential to improve both clinicians' consistency and accuracy. Moreover, it provides comprehensive feedback that is easy to assess by the physicians. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand-crafted features, and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high expense of annotating the lesions manually. METHODS: In this paper we propose a novel method for red lesion detection based on combining both deep learned and domain knowledge. Features learned by a convolutional neural network (CNN) are augmented by incorporating hand-crafted features. Such an ensemble vector of descriptors is used afterwards to identify true lesion candidates using a Random Forest classifier. RESULTS: We empirically observed that combining both sources of information significantly improves results with respect to using each approach separately. Furthermore, our method reported the highest performance on a per-lesion basis on DIARETDB1 and e-ophtha, and for screening and need for referral on MESSIDOR compared to a second human expert. CONCLUSIONS: Results highlight the fact that integrating manually engineered approaches with deep learned features is relevant to improve results when the networks are trained from lesion-level annotated data. An open source implementation of our system is publicly available at https://github.com/ignaciorlando/red-lesion-detection.
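
A compact sketch of the hybrid descriptor idea described above: CNN-derived and hand-crafted features for each lesion candidate are concatenated and fed to a Random Forest. Feature extraction itself is stubbed out with random arrays; see the linked repository for the actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_candidates = 500
cnn_feats = rng.normal(size=(n_candidates, 128))   # placeholder CNN descriptors
hand_feats = rng.normal(size=(n_candidates, 20))   # placeholder hand-crafted descriptors
labels = rng.integers(0, 2, size=n_candidates)     # true lesion vs. false positive

X = np.hstack([cnn_feats, hand_feats])             # ensemble vector of descriptors
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
lesion_probability = clf.predict_proba(X)[:, 1]
print(lesion_probability[:5])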


Subjects
Diabetic Retinopathy/pathology, Fundus Oculi, Machine Learning, Diabetic Retinopathy/diagnostic imaging, Humans, Computer-Assisted Image Interpretation, Microaneurysm/diagnostic imaging, Neural Networks (Computer)
16.
Comput Med Imaging Graph; 69: 21-32, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30172090

ABSTRACT

Assessing the surgical margin during breast lumpectomy operations can avoid the need for additional surgery. Optical coherence tomography (OCT) is an imaging technique that has been proven to be efficient for this purpose. However, to avoid overloading the surgeon during the operation, automatic cancer detection at the surface of the removed tissue is needed. This work explores automated margin assessment on a sample of patient data collected at the Pathology Department, Severance Hospital (Seoul, South Korea). Some methods based on the spatial statistics of the images have been developed, but the obtained results are still far from human performance. In this work, we investigate the possibility of using deep neural networks (DNNs) for real-time margin assessment, demonstrating performance significantly better than the reported literature and close to the level of a human expert. Since the goal is to detect the presence of cancer, a patch-based classification method is proposed, as it is sufficient for detection and requires training data that is easier and cheaper to collect than for other approaches such as segmentation. For that purpose, we train a DNN architecture that has proven effective for small images on patches extracted from images containing only cancer or only normal tissue, as determined by pathologists in a university hospital. As the number of available images in all such studies is by necessity small relative to other deep network applications such as ImageNet, a good regularization method is needed. In this work, we propose to use a recently introduced function norm regularization that attempts to directly control the function complexity, in contrast to classical approaches such as weight decay and DropOut. As neither the code nor the data of previous results are publicly available, the obtained results are compared with reported results in the literature for a conservative comparison. Moreover, our method is applied to locally collected data in several data configurations. The reported results are the average over the different trials. The experimental results show that the use of DNNs yields significantly better results than other techniques when evaluated in terms of sensitivity, specificity, F1 score, G-mean and Matthews correlation coefficient. Function norm regularization yielded higher and more robust results than competing regularization methods. We have demonstrated a system that shows high promise for (partially) automated margin assessment of human breast tissue: the equal error rate (EER) is reduced from approximately 12% (the lowest reported in the literature) to 5%, a 58% reduction. The method is computationally feasible for intraoperative application (less than 2 s per image), at the only cost of a longer offline training time.
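
One crude way to approximate a function-norm style penalty of the kind referred to above is to add the mean squared network output over auxiliary probe inputs to the classification loss, discouraging large function values rather than large weights. This sketch is an interpretation for illustration only and is not the regularizer defined in the paper.

import torch
import torch.nn.functional as F

def loss_with_function_norm(model, x, y, x_aux, lam=1e-3):
    """Cross-entropy plus a Monte Carlo estimate of the squared function norm.

    x_aux: auxiliary patches (e.g. unlabeled or perturbed data) used to probe f.
    """
    ce = F.cross_entropy(model(x), y)
    fnorm_sq = model(x_aux).pow(2).mean()       # rough stand-in for E[ ||f(x)||^2 ]
    return ce + lam * fnorm_sq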


Subjects
Breast Neoplasms/surgery, Margins of Excision, Radiographic Image Enhancement/methods, Algorithms, Female, Humans, Nerve Net, X-Ray Computed Tomography
18.
IEEE Trans Biomed Eng; 64(1): 16-27, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26930672

ABSTRACT

GOAL: In this work, we present an extensive description and evaluation of our method for blood vessel segmentation in fundus images based on a discriminatively trained fully connected conditional random field model. METHODS: Standard segmentation priors such as a Potts model or total variation usually fail when dealing with thin and elongated structures. We overcome this difficulty by using a conditional random field model with more expressive potentials, taking advantage of recent results enabling inference of fully connected models almost in real time. Parameters of the method are learned automatically using a structured output support vector machine, a supervised technique widely used for structured prediction in a number of machine learning applications. RESULTS: Our method, trained with state-of-the-art features, is evaluated both quantitatively and qualitatively on four publicly available datasets: DRIVE, STARE, CHASEDB1, and HRF. Additionally, a quantitative comparison with respect to other strategies is included. CONCLUSION: The experimental results show that this approach outperforms other techniques when evaluated in terms of sensitivity, F1-score, G-mean, and Matthews correlation coefficient. Additionally, it was observed that the fully connected model is able to better distinguish the desired structures than the local neighborhood-based approach. SIGNIFICANCE: Results suggest that this method is suitable for the task of segmenting elongated structures, a feature that can be exploited to contribute to other medical and biological applications.


Subjects
Fluorescein Angiography/methods, Computer-Assisted Image Interpretation/methods, Machine Learning, Ophthalmoscopy/methods, Retinal Artery/diagnostic imaging, Retinal Diseases/diagnostic imaging, Computer Simulation, Humans, Cardiovascular Models, Statistical Models, Automated Pattern Recognition/methods, Photography/methods, Reproducibility of Results, Retinal Artery/pathology, Retinal Diseases/pathology, Sensitivity and Specificity
19.
Med Phys; 44(12): 6425-6434, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29044550

ABSTRACT

PURPOSE: Diabetic retinopathy (DR) is one of the most widespread causes of preventable blindness in the world. The most dangerous stage of this condition is proliferative DR (PDR), in which the risk of vision loss is high and treatments are less effective. Fractal features of the retinal vasculature have been previously explored as potential biomarkers of DR, yet the current literature is inconclusive with respect to their correlation with PDR. In this study, we experimentally assess their discrimination ability to recognize PDR cases. METHODS: A statistical analysis of the viability of using three reference fractal characterization schemes (namely the box, information, and correlation dimensions) to identify patients with PDR is presented. These descriptors are also evaluated as input features for training ℓ1 and ℓ2 regularized logistic regression classifiers, to estimate their performance. RESULTS: Our results on MESSIDOR, a public dataset of 1200 fundus photographs, indicate that patients with PDR are more likely to exhibit a higher fractal dimension than healthy subjects or patients with mild levels of DR (P ≤ 1.3 × 10^-2). Moreover, a supervised classifier trained with both fractal measurements and red lesion-based features reports an area under the ROC curve of 0.93 for PDR screening and 0.96 for detecting patients with optic disc neovascularizations. CONCLUSIONS: The fractal dimension of the vasculature increases with the level of DR. Furthermore, PDR screening using multiscale fractal measurements is more feasible than using their derived fractal dimensions. Code and further resources are provided at https://github.com/ignaciorlando/fundus-fractal-analysis.
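
A minimal box-counting sketch for the fractal dimension of a binary vessel map, one of the three characterizations named above; the grid sizes and the vessel mask are placeholders, and the information and correlation dimensions would need their own estimators.

import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a binary image."""
    counts = []
    for s in box_sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())      # boxes containing vessel pixels
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

vessels = np.zeros((512, 512), dtype=bool)
vessels[256, :] = True                # a straight line should give a dimension near 1
print(box_counting_dimension(vessels))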


Subjects
Factual Databases, Diabetic Retinopathy/diagnostic imaging, Diabetic Retinopathy/pathology, Diagnostic Imaging, Fractals, Computer-Assisted Image Processing/methods, Ophthalmological Diagnostic Techniques, Humans, Retina/diagnostic imaging, Retina/pathology
20.
Comput Med Imaging Graph; 46 Pt 1: 40-46, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25861834

ABSTRACT

We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss; the former leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and, on the clinical side, the generalizability of the I-RISA model of cocaine addiction.
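
A short scikit-learn sketch contrasting two of the sparse regularizers mentioned above (ℓ1/LASSO and elastic net) on synthetic regression data; the k-support norm is not available in scikit-learn and would need a dedicated solver, and the data here are random stand-ins for fMRI features.

import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5000))            # few subjects, many voxel features
w_true = np.zeros(5000); w_true[:25] = 1.0  # sparse ground-truth weights
y = X @ w_true + rng.normal(scale=0.5, size=120)

for model in (Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X, y)
    print(type(model).__name__, "non-zero coefficients:", np.sum(model.coef_ != 0))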


Subjects
Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Magnetic Resonance Imaging/methods, Female, Humans, Male