Results 1 - 20 of 26
1.
Clin Oral Investig; 28(5): 266, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652317

ABSTRACT

OBJECTIVES: Confocal laser endomicroscopy (CLE) is an optical method that enables microscopic visualization of the oral mucosa. Previous studies have shown that it is possible to differentiate between physiological and malignant oral mucosa. However, differences in mucosal architecture were not taken into account. The objective was to map the different oral mucosal morphologies and to establish a "CLE map" of physiological mucosa as a baseline for further application of this powerful technology. MATERIALS AND METHODS: The CLE database consisted of 27 patients. The following spots were examined: (1) upper lip (intraoral), (2) alveolar ridge, (3) lateral tongue, (4) floor of the mouth, (5) hard palate, (6) intercalary line. All sequences were examined by two CLE experts for morphological differences and video quality. RESULTS: Analysis revealed clear differences in image quality and in the ability to depict tissue morphology across the various localizations of the oral mucosa: imaging of the alveolar ridge and hard palate showed the most visually discriminative tissue morphology. Labial mucosa was also visualized well using CLE. Here, typical morphological features such as uniform cells with regular intercellular gaps and vessels could be clearly depicted. Image generation and evaluation were particularly difficult in the area of the buccal mucosa, the lateral tongue, and the floor of the mouth. CONCLUSION: A physiological "CLE map" for the entire oral cavity could be created for the first time. CLINICAL RELEVANCE: This will make it possible to take the existing physiological morphological features into account when differentiating between normal mucosa and oral squamous cell carcinoma in future work.


Subject(s)
Microscopy, Confocal; Mouth Mucosa; Humans; Microscopy, Confocal/methods; Mouth Mucosa/diagnostic imaging; Mouth Mucosa/cytology; Male; Female; Middle Aged; Mouth Neoplasms/pathology; Mouth Neoplasms/diagnostic imaging
2.
Med Image Anal; 94: 103155, 2024 May.
Article in English | MEDLINE | ID: mdl-38537415

ABSTRACT

Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, with deterioration of algorithmic performance under shifts in image representations. Considerable covariate shifts occur when assessment is performed on different tumor types, when images are acquired using different digitization devices, or when specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection submitted by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work presents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we conclude that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as a new species, spindle cell shape as a new morphology, and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods showed reduced recall scores, with only minor changes in the order of participants in the ranking.
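
A minimal Python sketch of the two evaluation ingredients described above, a three-expert majority vote and a detection F1 score (not the challenge's actual evaluation code; all labels are hypothetical, and predicted and reference candidates are assumed to be already matched):

```python
import numpy as np

# Three experts label each candidate object (1 = mitotic figure, 0 = imposter);
# the reference standard is the per-candidate majority vote.
expert_labels = np.array([
    [1, 1, 0],   # candidate 1
    [1, 0, 0],   # candidate 2
    [1, 1, 1],   # candidate 3
    [0, 0, 1],   # candidate 4
])
reference = (expert_labels.sum(axis=1) >= 2).astype(int)

# Detector decisions for the same candidates.
predicted = np.array([1, 1, 1, 0])

tp = int(np.sum((predicted == 1) & (reference == 1)))
fp = int(np.sum((predicted == 1) & (reference == 0)))
fn = int(np.sum((predicted == 0) & (reference == 1)))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```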


Subject(s)
Laboratories; Mitosis; Humans; Animals; Cats; Algorithms; Image Processing, Computer-Assisted/methods; Reference Standards
3.
Eur Arch Otorhinolaryngol; 281(4): 2115-2122, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38329525

ABSTRACT

PURPOSE: Confocal laser endomicroscopy (CLE) is an imaging tool that has demonstrated potential for intraoperative, real-time, non-invasive, microscopical assessment of surgical margins of oropharyngeal squamous cell carcinoma (OPSCC). However, interpreting CLE images remains challenging. This study investigates the application of OpenAI's Generative Pretrained Transformer (GPT) 4.0 with Vision capabilities for automated classification of CLE images in OPSCC. METHODS: CLE images of histologically confirmed SCC or healthy mucosa were retrieved and anonymized from a database of 12,809 CLE images from 5 patients with OPSCC. Using a training set of 16 images, a validation set of 139 images, comprising SCC (83 images, 59.7%) and healthy mucosa (56 images, 40.3%), was classified using the application programming interface (API) of GPT-4.0. The same set of images was also classified by CLE experts (two surgeons and one pathologist), who were blinded to the histology. Diagnostic metrics, the reliability of GPT, and inter-rater reliability were assessed. RESULTS: Overall accuracy of the GPT model was 71.2%; the intra-rater agreement was κ = 0.837, indicating almost perfect agreement across the three runs of GPT-generated results. Human experts achieved an accuracy of 88.5% with a substantial level of agreement (κ = 0.773). CONCLUSIONS: Though limited to a specific clinical framework, patient cohort, and image set, this study sheds light on some previously unexplored diagnostic capabilities of large language models using few-shot prompting. It suggests the model's ability to extrapolate information and classify CLE images with minimal example data. Whether future versions of the model can achieve clinically relevant diagnostic accuracy, especially on uncurated data sets, remains to be investigated.
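
The intra-rater agreement across the three GPT runs can be illustrated with Cohen's kappa; since the abstract does not state the exact statistic used for three runs, averaging pairwise kappas is one hedged stand-in (the labels below are hypothetical, not the study's data):

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels (1 = SCC, 0 = healthy mucosa) from three runs of the
# model on the same images; real inputs would be the 139 validation images.
run_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
run_b = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
run_c = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

pairwise = [cohen_kappa_score(x, y)
            for x, y in combinations([run_a, run_b, run_c], 2)]
print(f"mean pairwise kappa = {sum(pairwise) / len(pairwise):.3f}")
```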


Subject(s)
Head and Neck Neoplasms; Humans; Reproducibility of Results; Microscopy, Confocal/methods; Squamous Cell Carcinoma of Head and Neck; Lasers
4.
Ophthalmol Glaucoma; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38296108

ABSTRACT

PURPOSE: To develop and evaluate the performance of a deep learning model (DLM) that forecasts eyes with low future visual field (VF) variability, and to study the impact of using this DLM on sample size requirements for neuroprotective trials. DESIGN: Retrospective cohort and simulation study. METHODS: We included 1 eye per patient with baseline reliable VFs, OCT, clinical measures (demographics, intraocular pressure, and visual acuity), and 5 subsequent reliable VFs to forecast VF variability using DLMs and perform sample size estimates. We estimated sample size for 3 groups of eyes: all eyes (AE), low variability eyes (LVE: the subset of AE with a standard deviation of mean deviation [MD] slope residuals in the bottom 25th percentile), and DLM-predicted low variability eyes (DLPE: the subset of AE predicted to be low variability by the DLM). Deep learning models using only baseline VF/OCT/clinical data as input (DLM1), or also using a second VF (DLM2), were constructed to predict low VF variability (DLPE1 and DLPE2, respectively). Data were split 60/10/30 into training/validation/test sets. Clinical trial simulations were performed only on the test set. We estimated the sample size necessary to detect treatment effects of 20% to 50% in MD slope with 80% power. Power was defined as the percentage of simulated clinical trials in which the MD slope differed significantly from that of the control arm. Clinical trials were simulated with visits every 3 months and a total of 10 visits. RESULTS: A total of 2817 eyes were included in the analysis. Deep learning models 1 and 2 achieved an area under the receiver operating characteristic curve of 0.73 (95% confidence interval [CI]: 0.68, 0.76) and 0.82 (95% CI: 0.78, 0.85), respectively, in forecasting low VF variability. When compared with including AE, using DLPE1 and DLPE2 reduced the sample size needed to achieve 80% power by 30% and 38% for a 30% treatment effect, and by 31% and 38% for a 50% treatment effect. CONCLUSIONS: Deep learning models can forecast eyes with low VF variability using data from a single baseline clinical visit. This can reduce sample size requirements and potentially reduce the burden of future glaucoma clinical trials. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
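
A simplified simulation of the power estimation described above, with per-eye MD slopes fitted over 10 visits at 3-month intervals and compared between arms; all effect sizes, noise levels, and sample sizes are assumptions for illustration, not the study's values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
visits = np.arange(10) * 0.25          # 10 visits, every 3 months (in years)

def simulated_power(n_per_arm, base_slope=-1.0, effect=0.3,
                    residual_sd=1.5, n_trials=500, alpha=0.05):
    """Fraction of simulated trials in which the treated arm's mean MD slope
    differs significantly from the control arm's (all parameters assumed)."""
    hits = 0
    for _ in range(n_trials):
        def arm_slopes(n, slope):
            # Per-eye MD series: linear trend plus visit-level noise; the
            # slope is re-estimated from the noisy series by least squares.
            noise = rng.normal(0.0, residual_sd, size=(n, visits.size))
            series = slope * visits + noise
            return np.polyfit(visits, series.T, 1)[0]
        control = arm_slopes(n_per_arm, base_slope)
        treated = arm_slopes(n_per_arm, base_slope * (1 - effect))
        if stats.ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / n_trials

for n in (100, 200, 400):
    print(n, simulated_power(n))
```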

5.
Sci Rep; 14(1): 599, 2024 Jan 5.
Article in English | MEDLINE | ID: mdl-38182701

ABSTRACT

To develop and evaluate the performance of a deep learning model (DLM) that predicts eyes at high risk of surgical intervention for uncontrolled glaucoma based on multimodal data from an initial ophthalmology visit. Longitudinal, observational, retrospective study. 4898 unique eyes from 4038 adult glaucoma or glaucoma-suspect patients who underwent surgery for uncontrolled glaucoma (trabeculectomy, tube shunt, Xen, or diode surgery) between 2013 and 2021, or who did not undergo glaucoma surgery but had 3 or more ophthalmology visits. We constructed a DLM to predict the occurrence of glaucoma surgery within various time horizons from a baseline visit. Model inputs included spatially oriented visual field (VF) and optical coherence tomography (OCT) data as well as clinical and demographic features. Separate DLMs with the same architecture were trained to predict the occurrence of surgery within 3 months, 3-6 months, 6 months-1 year, 1-2 years, 2-3 years, 3-4 years, and 4-5 years from the baseline visit. Included eyes were randomly split 60%, 20%, and 20% into training, validation, and test sets. DLM performance was measured using the areas under the receiver operating characteristic curve (AUC) and the precision-recall curve (PRC). Shapley additive explanations (SHAP) were utilized to assess the importance of different features. Prediction of surgery for uncontrolled glaucoma within 3 months had the best AUC, at 0.92 (95% CI 0.88, 0.96). DLMs achieved clinically useful AUC values (> 0.8) for all models that predicted the occurrence of surgery within 3 years. According to the SHAP analysis, all 7 models placed intraocular pressure (IOP) among the five most important features in predicting the occurrence of glaucoma surgery. Mean deviation (MD) and average retinal nerve fiber layer (RNFL) thickness were listed among the top 5 most important features by 6 of the 7 models. DLMs can successfully identify eyes requiring surgery for uncontrolled glaucoma within specific time horizons. Predictive performance decreases as the time horizon for forecasting surgery increases. Implementing prediction models in a clinical setting may help identify patients who should be referred to a glaucoma specialist for surgical evaluation.
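
As a hedged stand-in for the SHAP analysis (which requires the trained multimodal model), a permutation-importance sketch on synthetic tabular features shows how feature rankings of the kind reported (IOP, MD, RNFL thickness) are derived; all feature names, distributions, and the label model below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
# Hypothetical tabular features standing in for the multimodal inputs.
iop = rng.normal(18, 5, n)
md = rng.normal(-4, 3, n)
rnfl = rng.normal(85, 12, n)
age = rng.normal(65, 10, n)
X = np.column_stack([iop, md, rnfl, age])
# Synthetic label: surgery risk driven mainly by IOP and MD (assumption).
logit = 0.25 * (iop - 18) - 0.3 * (md + 4) + rng.normal(0, 1, n)
y = (logit > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
ranking = sorted(zip(["IOP", "MD", "RNFL", "age"], imp.importances_mean),
                 key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```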


Subject(s)
Deep Learning; Glaucoma; Ophthalmology; Trabeculectomy; Adult; Humans; Retrospective Studies; Glaucoma/surgery; Retina
6.
Vet Pathol; 60(6): 865-875, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37515411

ABSTRACT

Microscopic evaluation of hematoxylin and eosin-stained slides is still the diagnostic gold standard for a variety of diseases, including neoplasms. Nevertheless, intra- and interrater variability are well documented among pathologists. So far, computer assistance via automated image analysis has shown potential to support pathologists in improving the accuracy and reproducibility of quantitative tasks. In this proof-of-principle study, we describe a machine-learning-based algorithm for the automated diagnosis of 7 of the most common canine skin tumors: trichoblastoma, squamous cell carcinoma, peripheral nerve sheath tumor, melanoma, histiocytoma, mast cell tumor, and plasmacytoma. We selected, digitized, and annotated 350 hematoxylin and eosin-stained slides (50 per tumor type) to create a database divided into training (n = 245 whole-slide images [WSIs]), validation (n = 35 WSIs), and test (n = 70 WSIs) sets. Full annotations included the 7 tumor classes and 6 normal skin structures. The data set was used to train a convolutional neural network (CNN) for the automatic segmentation of tumor and nontumor classes. Subsequently, the detected tumor regions were classified patch-wise into 1 of the 7 tumor classes. A majority-of-patches approach led to a slide-level tumor classification accuracy of 95% (133/140 WSIs), with a patch-level precision of 85%. The same 140 WSIs were provided to 6 experienced pathologists for diagnosis, who achieved a similar slide-level accuracy of 98% (137/140 correct majority votes). Our results highlight the feasibility of artificial intelligence-based methods as a support tool in diagnostic oncologic pathology, with future applications in other species and tumor types.
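
A minimal sketch of the majority-of-patches rule for slide-level classification (class names are placeholders):

```python
from collections import Counter

def slide_label(patch_predictions):
    """Slide-level diagnosis as the most frequent patch-level class
    (a simple majority-of-patches rule; ties resolved arbitrarily)."""
    return Counter(patch_predictions).most_common(1)[0][0]

patches = ["melanoma", "melanoma", "histiocytoma", "melanoma", "plasmacytoma"]
print(slide_label(patches))  # -> melanoma
```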


Subject(s)
Deep Learning; Dog Diseases; Skin Neoplasms; Animals; Dogs; Artificial Intelligence; Eosine Yellowish-(YS); Hematoxylin; Reproducibility of Results; Skin Neoplasms/diagnosis; Skin Neoplasms/veterinary; Machine Learning; Dog Diseases/diagnosis
7.
Sci Data; 10(1): 484, 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37491536

ABSTRACT

The prognostic value of mitotic figures in tumor tissue is well established for many tumor types, and automating this task is of high research interest. However, deep learning-based methods in particular face performance deterioration in the presence of domain shifts, which may arise from different tumor types, slide preparation, and digitization devices. We introduce the MIDOG++ dataset, an extension of the MIDOG 2021 and 2022 challenge datasets. We provide region-of-interest images from 503 histological specimens of seven different tumor types with variable morphology, with a total of 11,937 labeled mitotic figures: breast carcinoma, lung carcinoma, lymphosarcoma, neuroendocrine tumor, cutaneous mast cell tumor, cutaneous melanoma, and (sub)cutaneous soft tissue sarcoma. The specimens were processed in several laboratories utilizing diverse scanners. We evaluated the extent of the domain shift using state-of-the-art approaches, observing notable differences in single-domain training. In a leave-one-domain-out setting, generalizability improved considerably. This mitotic figure dataset is the first to incorporate a wide domain shift based on different tumor types, laboratories, whole slide image scanners, and species.
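
A minimal sketch of a leave-one-domain-out evaluation of the kind described, with a stand-in classifier and synthetic features in place of a full detection pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 16))          # placeholder features per image patch
y = rng.integers(0, 2, size=700)        # 1 = mitotic figure, 0 = not
domains = rng.integers(0, 7, size=700)  # one id per tumor type / domain

# Train on six domains, test on the held-out seventh, for every domain.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=domains):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    held_out = domains[test_idx][0]
    score = f1_score(y[test_idx], clf.predict(X[test_idx]))
    print(f"held-out domain {held_out}: F1 = {score:.3f}")
```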


Subject(s)
Mitosis; Neoplasms; Humans; Algorithms; Prognosis; Neoplasms/pathology
8.
J Pathol Inform; 14: 100301, 2023.
Article in English | MEDLINE | ID: mdl-36994311

ABSTRACT

The success of immuno-oncology treatments promises long-term cancer remission for an increasing number of patients. The response to checkpoint inhibitor drugs has shown a correlation with the presence of immune cells in the tumor and tumor microenvironment. An in-depth understanding of the spatial localization of immune cells is therefore critical for understanding the tumor's immune landscape and predicting drug response. Computer-aided systems are well suited for efficiently quantifying immune cells in their spatial context. Conventional image analysis approaches are often based on color features and therefore require a high level of manual interaction. More robust image analysis methods based on deep learning are expected to decrease this reliance on human interaction and improve the reproducibility of immune cell scoring. However, these methods require sufficient training data, and previous work has reported low robustness of these algorithms when they are tested on out-of-distribution data from different pathology labs or samples from different organs. In this work, we used a new image analysis pipeline to explicitly evaluate the robustness of marker-labeled lymphocyte quantification algorithms as a function of the number of training samples, before and after transfer to a new tumor indication. For these experiments, we adapted the RetinaNet architecture for the task of T-lymphocyte detection and employed transfer learning to bridge the domain gap between tumor indications and reduce the annotation costs for unseen domains. On our test set, we achieved human-level performance for almost all tumor indications, with an average precision of 0.74 in-domain and 0.72-0.74 cross-domain. From our results, we derive recommendations for model development regarding annotation extent, training sample selection, and label extraction for the development of robust algorithms for immune cell scoring. By extending the task of marker-labeled lymphocyte quantification to a multi-class detection task, the prerequisite for subsequent analyses, e.g., distinguishing lymphocytes in the tumor stroma from tumor-infiltrating lymphocytes, is met.
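
A hedged sketch of the transfer-learning setup named above, using torchvision's RetinaNet implementation rather than the authors' code: the backbone is frozen and the detection heads are fine-tuned on target-domain annotations (image size, box, and hyperparameter values are placeholders):

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn

# Two-class detector (e.g., T-lymphocyte vs. other cells); weights are left
# uninitialized here to keep the sketch self-contained and download-free.
model = retinanet_resnet50_fpn(weights=None, weights_backbone=None,
                               num_classes=2)

# Transfer to a new tumor indication: freeze the backbone, fine-tune the
# detection heads on the (smaller) target-domain annotation set.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

# Dummy training step with one synthetic image and one box annotation.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[30.0, 30.0, 60.0, 60.0]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)   # dict of classification/regression losses
total = sum(losses.values())
total.backward()
optimizer.step()
```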

9.
Sci Rep; 13(1): 2563, 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36781953

ABSTRACT

Recently, algorithms capable of assessing the severity of Coronary Artery Disease (CAD) in the form of a Coronary Artery Disease-Reporting and Data System (CAD-RADS) grade from Coronary Computed Tomography Angiography (CCTA) scans using Deep Learning (DL) were proposed. Before considering applying these algorithms in clinical practice, their robustness regarding different commonly used Computed Tomography (CT)-specific image formation parameters, including denoising strength, slab combination, and reconstruction kernel, needs to be evaluated. For this study, we reconstructed a data set of 500 patient CCTA scans under seven image formation parameter configurations. We selected one default configuration and evaluated how varying individual parameters impacts the performance and stability of a typical algorithm for automated CAD assessment from CCTA. This algorithm consists of multiple preprocessing steps and a DL prediction step. We evaluated the influence of the parameter changes on the entire pipeline and additionally on only the DL step by propagating the centerline extraction results of the default configuration to all others. We used the standard deviation of the CAD severity prediction grade difference between the default and variation configurations to assess the stability with respect to parameter changes. For the full pipeline we observed slight instability (± 0.226 CAD-RADS) for all variations. Predictions were more stable with centerlines propagated from the default to the variation configurations (± 0.122 CAD-RADS), especially for differing denoising strengths (± 0.046 CAD-RADS). However, stacking slabs with sharp boundaries instead of mixing slabs in overlapping regions (called true stack; ± 0.313 CAD-RADS) and increasing the sharpness of the reconstruction kernel (± 0.150 CAD-RADS) led to unstable predictions. Regarding the clinically relevant tasks of excluding CAD (called rule-out; AUC default 0.957, min 0.937) and excluding obstructive CAD (called hold-out; AUC default 0.971, min 0.964), performance remained at a high level for all variations. In conclusion, an influence of reconstruction parameters on the predictions was observed. In particular, scans reconstructed with the true stack parameter need to be treated with caution when using a DL-based method. Also, reconstruction kernels that are underrepresented in the training data increase the prediction uncertainty.
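
A minimal sketch of the stability measure used above: the standard deviation of the per-scan CAD-RADS grade difference between the default and a variation configuration (the grades below are synthetic, not study data):

```python
import numpy as np

rng = np.random.default_rng(0)
# CAD-RADS grades (0-5) predicted for the same 500 scans under the default
# reconstruction and under one parameter variation (values hypothetical).
default_grades = rng.integers(0, 6, size=500)
variation_grades = np.clip(
    default_grades + rng.choice([-1, 0, 0, 0, 1], size=500), 0, 5)

# Stability: SD of the per-scan grade difference between configurations.
diff = variation_grades - default_grades
print(f"stability: +/- {diff.std(ddof=1):.3f} CAD-RADS")
```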


Subject(s)
Coronary Artery Disease; Deep Learning; Humans; Coronary Artery Disease/diagnostic imaging; Coronary Artery Disease/therapy; Coronary Angiography/methods; Tomography, X-Ray Computed; Heart; Predictive Value of Tests
11.
Med Image Anal; 84: 102699, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36463832

ABSTRACT

The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.


Subject(s)
Algorithms; Mitosis; Humans; Neoplasm Grading; Prognosis
12.
Vet Pathol; 60(1): 75-85, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36384369

ABSTRACT

Exercise-induced pulmonary hemorrhage (EIPH) is a relevant respiratory disease in sport horses, which can be diagnosed by examination of bronchoalveolar lavage fluid (BALF) cells using the total hemosiderin score (THS). The aim of this study was to evaluate the diagnostic accuracy and reproducibility of annotators and to validate a deep learning-based algorithm for the THS. Digitized cytological specimens stained for iron were prepared from 52 equine BALF samples. Ten annotators produced a THS for each slide according to published methods. The reference methods for comparing annotator and algorithmic performance included a ground truth dataset, the mean of the annotators' THSs, and chemical iron measurements. The study showed marked interobserver variability in the THS, which was mostly due to a systematic error between annotators in grading the intracytoplasmic hemosiderin content of individual macrophages. Regarding the overall measurement error between the annotators, 87.7% of the variance could be reduced by using standardized grades based on the ground truth. The algorithm was highly consistent with the ground truth in assigning hemosiderin grades. Compared with the ground truth THS, annotators had an accuracy of 75.7% in diagnosing EIPH (THS < 75 or ≥ 75), whereas the algorithm had an accuracy of 92.3%, with no relevant differences in correlation with chemical iron measurements. The results show that deep learning-based algorithms are useful for improving the reproducibility and routine applicability of the THS. For THSs produced by experts, a diagnostic uncertainty interval of 40 to 110 is proposed. THSs within this interval have insufficient reproducibility regarding the EIPH diagnosis.
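
A sketch of one common formulation of the THS (an assumption; the abstract does not restate the published method): 100 alveolar macrophages are graded 0-4 for hemosiderin content and the grades are summed, giving a 0-400 scale to which the thresholds above apply:

```python
def total_hemosiderin_score(grades):
    """THS under a common formulation (assumed here): the sum of hemosiderin
    grades (0-4) over 100 rated macrophages, yielding a 0-400 scale."""
    assert len(grades) == 100 and all(0 <= g <= 4 for g in grades)
    return sum(grades)

# Hypothetical grading of one BALF specimen.
grades = [0] * 55 + [1] * 25 + [2] * 12 + [3] * 6 + [4] * 2
ths = total_hemosiderin_score(grades)
# Decision rule from the abstract: EIPH if THS >= 75; scores inside the
# proposed uncertainty interval (40-110) warrant cautious interpretation.
verdict = "EIPH" if ths >= 75 else "no EIPH"
caution = " (within uncertainty interval)" if 40 <= ths <= 110 else ""
print(f"THS = {ths}: {verdict}{caution}")
```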


Subject(s)
Deep Learning; Horse Diseases; Lung Diseases; Animals; Bronchoalveolar Lavage Fluid; Hemorrhage/diagnosis; Hemorrhage/veterinary; Hemosiderin; Horse Diseases/diagnosis; Horses; Iron; Lung Diseases/diagnosis; Lung Diseases/veterinary; Reproducibility of Results
13.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 945-949, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086450

ABSTRACT

Automated electrocardiogram (ECG) classification using deep neural networks requires large datasets annotated by medical professionals, which is time-consuming and expensive. This work examines ECG augmentation as a method for enriching existing datasets at low cost. First, we introduce three novel augmentations: Limb Electrode Move and Chest Electrode Move both simulate a minor electrode mislocation during signal measurement, while Heart Vector Transform generates an ECG by modeling a rotated main heart axis. These techniques are then combined with nine time-series signal augmentations from the literature. Evaluation was performed on the ICBEB, PTB-XL Diagnostic, PTB-XL Rhythm, and PTB-XL Form datasets. Compared to models trained without data augmentation, the area under the receiver operating characteristic curve (AUC) was increased by 3.5%, 1.7%, 1.4%, and 3.5%, respectively. Our experiments demonstrate that data augmentation can improve deep learning performance in ECG classification. Analyses of the individual augmentation effects established the efficacy of the three proposed augmentations.
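
The three novel augmentations above require electrode-geometry modeling; as a hedged illustration of the general augmentation idea, here are two of the standard time-series augmentations from the literature (Gaussian jitter and per-lead amplitude scaling) applied to a synthetic 12-lead ECG array:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(ecg, sigma=0.01):
    """Add Gaussian noise to every sample (standard time-series augmentation)."""
    return ecg + rng.normal(0.0, sigma, ecg.shape)

def scale(ecg, sigma=0.1):
    """Scale each lead by a random factor around 1 (standard augmentation)."""
    factors = rng.normal(1.0, sigma, size=(ecg.shape[0], 1))
    return ecg * factors

ecg = rng.normal(size=(12, 5000))  # 12 leads, 10 s at 500 Hz (synthetic)
augmented = scale(jitter(ecg))
print(augmented.shape)
```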


Subject(s)
Deep Learning; Electrocardiography/methods; Neural Networks, Computer; ROC Curve
14.
Sci Data; 9(1): 588, 2022 Sep 27.
Article in English | MEDLINE | ID: mdl-36167846

ABSTRACT

Due to morphological similarities, the differentiation of histologic sections of cutaneous tumors into individual subtypes can be challenging. Recently, deep learning-based approaches have proven their potential for supporting pathologists in this regard. However, many of these supervised algorithms require a large amount of annotated data for robust development. We present a publicly available dataset of 350 whole slide images of seven different canine cutaneous tumors, complemented by 12,424 polygon annotations for 13 histologic classes, including seven cutaneous tumor subtypes. In inter-rater experiments, we show a high consistency of the provided labels, especially for tumor annotations. We further validate the dataset by training a deep neural network for the task of tissue segmentation and tumor subtype classification. We achieve a class-averaged Jaccard coefficient of 0.7047 overall and 0.9044 for tumor in particular. For classification, we achieve a slide-level accuracy of 0.9857. Since canine cutaneous tumors possess various histologic homologies to human tumors, the added value of this dataset is not limited to veterinary pathology but extends to more general fields of application.
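
A minimal sketch of the class-averaged Jaccard coefficient on flattened segmentation maps (toy labels covering three of the 13 classes):

```python
import numpy as np
from sklearn.metrics import jaccard_score

# Flattened segmentation maps (toy example):
# 0 = background, 1 = tumor, 2 = epidermis.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 1, 1, 2, 0, 0, 1, 2])

per_class = jaccard_score(y_true, y_pred, average=None)
print("per-class Jaccard:", per_class)
print("class-averaged Jaccard:", jaccard_score(y_true, y_pred, average="macro"))
```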


Subject(s)
Dog Diseases; Neural Networks, Computer; Skin Neoplasms; Algorithms; Animals; Dog Diseases/pathology; Dogs; Skin Neoplasms/pathology; Skin Neoplasms/veterinary
15.
Sci Rep; 12(1): 14292, 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35995933

ABSTRACT

Glottis segmentation is a crucial step in quantifying endoscopic footage in laryngeal high-speed videoendoscopy. Recent advances in deep neural networks for glottis segmentation allow for a fully automatic workflow. However, how the integral parts of these deep segmentation networks contribute remains largely unknown, and understanding their inner workings is crucial for acceptance in clinical practice. Here, we show through systematic ablations that a single latent channel as a bottleneck layer is sufficient for glottal area segmentation. We further demonstrate that the latent space is an abstraction of the glottal area segmentation, relying on three spatially defined pixel subtypes that allow for a transparent interpretation. We further provide evidence that the latent space is highly correlated with the glottal area waveform, can be encoded with four bits, and can be decoded using lean decoders while maintaining a high reconstruction accuracy. Our findings suggest that glottis segmentation is a task that can be highly optimized to yield very efficient and explainable deep neural networks, which is important for application in the clinic. In the future, we believe that online deep learning-assisted monitoring will be a game-changer in laryngeal examinations.
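
A minimal sketch of the four-bit encoding claim: uniformly quantizing a (here synthetic) latent channel to 16 levels and reconstructing it:

```python
import numpy as np

def quantize(latent, bits=4):
    """Uniformly quantize a latent map to 2**bits levels and reconstruct it
    (a sketch of the 4-bit claim; real latents would come from the network)."""
    levels = 2 ** bits
    lo, hi = latent.min(), latent.max()
    codes = np.round((latent - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return lo + codes / (levels - 1) * (hi - lo)

latent = np.random.default_rng(0).normal(size=(256, 256))  # synthetic latent
rec = quantize(latent)
print("max abs reconstruction error:", np.abs(rec - latent).max())
```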


Subject(s)
Glottis; Larynx; Endoscopy; Glottis/diagnostic imaging; Image Processing, Computer-Assisted; Neural Networks, Computer; Video Recording
16.
Sci Data; 9(1): 269, 2022 Jun 3.
Article in English | MEDLINE | ID: mdl-35660753

ABSTRACT

Pulmonary hemorrhage (P-Hem) occurs in multiple species and can have various causes. Cytology of bronchoalveolar lavage fluid (BALF) using a 5-tier scoring system of alveolar macrophages based on their hemosiderin content is considered the most sensitive diagnostic method. We introduce a novel, fully annotated multi-species P-Hem dataset, which consists of 74 cytology whole slide images (WSIs) with equine, feline, and human samples. To create this high-quality and high-quantity dataset, we developed an annotation pipeline combining human expertise with deep learning and data visualisation techniques. We applied a deep learning-based object detection approach trained on 17 expertly annotated equine WSIs to the remaining 39 equine, 12 human, and 7 feline WSIs. The resulting annotations were semi-automatically screened for errors on multiple types of specialised annotation maps and finally reviewed by a trained pathologist. Our dataset contains a total of 297,383 hemosiderophages classified into five grades. It is one of the largest publicly available WSI datasets with respect to the number of annotations, the scanned area, and the number of species covered.


Subject(s)
Bronchoalveolar Lavage Fluid; Macrophages, Alveolar; Animals; Bronchoalveolar Lavage Fluid/cytology; Cats; Hemosiderin; Horses; Humans; Species Specificity
17.
Rheumatology (Oxford); 61(12): 4945-4951, 2022 Nov 28.
Article in English | MEDLINE | ID: mdl-35333316

ABSTRACT

OBJECTIVES: To evaluate whether neural networks can distinguish between seropositive RA, seronegative RA, and PsA based on inflammatory patterns from hand MRIs, and to test how psoriasis patients with subclinical inflammation fit into such patterns. METHODS: ResNet neural networks were utilized to compare seropositive RA vs PsA, seronegative RA vs PsA, and seropositive vs seronegative RA with respect to hand MRI data. Results from T1 coronal, T2 coronal, T1 coronal and axial fat-suppressed contrast-enhanced (CE), and T2 fat-suppressed axial sequences were used. The performance of the trained networks was analysed using the area under the receiver operating characteristics curve (AUROC), with and without presentation of demographic and clinical parameters. Additionally, the trained networks were applied to psoriasis patients without clinical arthritis. RESULTS: MRI scans from 649 patients (135 seronegative RA, 190 seropositive RA, 177 PsA, 147 psoriasis) were fed into the ResNet neural networks. The AUROC was 75% for seropositive RA vs PsA, 74% for seronegative RA vs PsA, and 67% for seropositive vs seronegative RA. All MRI sequences were relevant for classification; however, when contrast agent-based sequences were removed, the loss of performance was only marginal. The addition of demographic and clinical data to the networks did not provide significant improvements for classification. Psoriasis patients were mostly assigned to PsA by the neural networks, suggesting that a PsA-like MRI pattern may be present early in the course of psoriatic disease. CONCLUSION: Neural networks can be successfully trained to distinguish MRI inflammation related to seropositive RA, seronegative RA, and PsA.
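
A hedged sketch of one plausible way to feed multiple MRI sequences into a ResNet, stacking them as input channels (the abstract does not describe the exact fusion strategy; sequence count, input size, and class pairing below are placeholders):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Treat the MRI sequences as input channels of a single ResNet.
n_sequences = 5     # e.g., T1/T2 coronal, T1 coronal/axial fs CE, T2 fs axial
model = resnet18(num_classes=2)          # e.g., seropositive RA vs PsA
model.conv1 = nn.Conv2d(n_sequences, 64, kernel_size=7,
                        stride=2, padding=3, bias=False)

x = torch.rand(4, n_sequences, 224, 224)  # batch of stacked sequence slices
print(model(x).shape)                     # -> torch.Size([4, 2])
```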


Subject(s)
Arthritis, Psoriatic; Arthritis, Rheumatoid; Psoriasis; Humans; Arthritis, Psoriatic/diagnostic imaging; Arthritis, Rheumatoid/diagnostic imaging; Psoriasis/diagnostic imaging; Inflammation; Magnetic Resonance Imaging; Neural Networks, Computer
18.
Vet Pathol; 59(2): 211-226, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34965805

ABSTRACT

The mitotic count (MC) is an important histological parameter for prognostication of malignant neoplasms. However, it suffers from inter- and intraobserver discrepancies due to difficulties in selecting the region of interest (MC-ROI) and in identifying or classifying mitotic figures (MFs). Recent progress in the field of artificial intelligence has allowed the development of high-performance algorithms that may improve standardization of the MC. As algorithmic predictions are not flawless, computer-assisted review by pathologists may ensure reliability. In the present study, we compared partial (MC-ROI preselection) and full (additional visualization of MF candidates and display of algorithmic confidence values) computer-assisted MC analysis with routine (unaided) MC analysis by 23 pathologists for whole-slide images of 50 canine cutaneous mast cell tumors (ccMCTs). Algorithmic predictions aimed to assist pathologists in detecting mitotic hotspot locations, reducing omission of MFs, and improving classification against imposters. The interobserver consistency for the MC increased significantly with computer assistance (interobserver correlation coefficient, ICC = 0.92) compared to the unaided approach (ICC = 0.70). Classification into prognostic stratifications had higher accuracy with computer assistance. The algorithmically preselected hotspot MC-ROIs had consistently higher MCs than the manually selected MC-ROIs. Compared to a ground truth (developed with immunohistochemistry for phosphohistone H3), pathologist performance in detecting individual MFs was augmented by computer assistance (F1 score increased from 0.68 to 0.79), with a 38% reduction in false negatives. The results of this study demonstrate that computer assistance may lead to more reproducible and accurate MCs in ccMCTs.
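
A minimal sketch of a two-way random-effects, absolute-agreement, single-rater ICC, i.e. ICC(2,1), one common variant of the interobserver correlation coefficient reported above (the exact ICC form used is not stated in the abstract; the counts below are hypothetical):

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    y: array of shape (n_targets, k_raters), e.g. MCs per slide per rater."""
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    mse = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2) \
          / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical mitotic counts: 6 slides rated by 4 pathologists.
counts = np.array([[ 3,  4,  3,  5],
                   [12, 10, 11, 14],
                   [ 0,  1,  0,  0],
                   [25, 22, 27, 24],
                   [ 7,  8,  6,  9],
                   [15, 13, 16, 15]])
print(f"ICC(2,1) = {icc_2_1(counts):.3f}")
```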


Subject(s)
Deep Learning; Algorithms; Animals; Artificial Intelligence; Dogs; Humans; Pathologists; Reproducibility of Results
19.
Int J Comput Assist Radiol Surg; 16(6): 967-978, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33929676

ABSTRACT

PURPOSE: With the recent development of deep learning technologies, various neural networks have been proposed for fundus retinal vessel segmentation. Among them, the U-Net is regarded as one of the most successful architectures. In this work, we start with a simplification of the U-Net and explore the performance of few-parameter networks on this task. METHODS: We first modify the model with popular functional blocks and additional resolution levels; then we switch to exploring the limits for compression of the network architecture. Experiments are designed to simplify the network structure, decrease the number of trainable parameters, and reduce the amount of training data. Performance evaluation is carried out on four public databases, namely DRIVE, STARE, HRF, and CHASE_DB1. In addition, the generalization ability of the few-parameter networks is compared against a state-of-the-art segmentation network. RESULTS: We demonstrate that the additive variants do not significantly improve the segmentation performance. The performance of the models is not severely harmed unless they are harshly degenerated: to one level, or one filter in the input convolutional layer, or trained with one image. We also demonstrate that few-parameter networks have strong generalization ability. CONCLUSION: It is counter-intuitive that the U-Net produces reasonably good segmentation predictions until reaching these limits. Our work has two main contributions. On the one hand, the importance of different elements of the U-Net is evaluated, and the minimal U-Net capable of the task is presented. On the other hand, our work demonstrates that retinal vessel segmentation can be tackled by surprisingly simple configurations of the U-Net, reaching almost state-of-the-art performance. We also show that the simple configurations have better generalization ability than state-of-the-art models with high model complexity. These observations seem to contradict the current trend of continued increases in model complexity and capacity for the task under consideration.
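
A sketch of a maximally degenerated one-level U-Net-style network with a single filter in the input convolutional layer, illustrating the kind of configuration probed above (channel counts and layer choices are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class MinimalUNet(nn.Module):
    """One-level U-Net-style network: encode, pool, process, upsample,
    concatenate the skip connection, decode (illustrative sketch)."""
    def __init__(self, ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1),
                                 nn.ReLU())
        self.out = nn.Conv2d(ch, 1, 1)        # vessel probability map

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return torch.sigmoid(self.out(self.dec(torch.cat([e, m], dim=1))))

net = MinimalUNet(ch=1)   # a single filter in the input convolutional layer
print(sum(p.numel() for p in net.parameters()), "trainable parameters")
print(net(torch.rand(1, 1, 64, 64)).shape)
```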


Subject(s)
Deep Learning; Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Retinal Diseases/diagnosis; Retinal Vessels/diagnostic imaging; Databases, Factual; Fundus Oculi; Humans
20.
Sci Rep; 11(1): 4343, 2021 Feb 23.
Article in English | MEDLINE | ID: mdl-33623058

ABSTRACT

In many research areas, scientific progress is accelerated by multidisciplinary access to image data and their interdisciplinary annotation. However, keeping track of these annotations to ensure a high-quality multi-purpose data set is a challenging and labour-intensive task. We developed the open-source online platform EXACT (EXpert Algorithm Collaboration Tool), which enables the collaborative interdisciplinary analysis of images from different domains, online and offline. EXACT supports multi-gigapixel medical whole slide images as well as image series with thousands of images. The software utilises a flexible plugin system that can be adapted to diverse applications, such as counting mitotic figures with a screening mode, finding false annotations on a novel validation view, or using the latest deep learning image analysis technologies. This is combined with a version control system which makes it possible to keep track of changes in the data sets and, for example, to link the results of deep learning experiments to specific data set versions. EXACT is freely available and has already been successfully applied to a broad range of annotation tasks, including highly diverse applications like deep learning-supported cytology scoring, interdisciplinary multi-centre whole slide image tumour annotation, and highly specialised whale sound spectroscopy clustering.
