ABSTRACT
Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos with time-lapse imaging for long periods, and together with the reduced cost of data storage this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still largely performed manually, and images are mostly used as a qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. One additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, pooling data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey-scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division, and the stages of cleavage, compaction, and blastocoel formation. Prior to analysis, the focal plane and embryo location were detected automatically, limiting precomputational user interaction to a calibration step and making the approach usable for automatic detection of the region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry.
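The localized-variance measure described above can be sketched as follows. This is an illustrative reconstruction only: the sliding-window formulation, the `window` size, and the valid-region output are assumptions, not details taken from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def localized_variance(image, window=15):
    """Grey-level variance in each window x window neighbourhood
    (valid region only) -- a simple local-activity measure whose
    profile over time can flag events such as cleavage."""
    img = np.asarray(image, dtype=np.float64)
    patches = sliding_window_view(img, (window, window))
    return patches.var(axis=(-1, -2))
```

A per-frame profile would then be obtained by summarising this map (for example, its mean within the detected embryo ROI) for every time-lapse frame.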
Subject(s)
Blastocyst/cytology, Embryo Culture Techniques/methods, Embryonic Development, Fertilization in Vitro/methods, Fetoscopy/methods, Cohort Studies, Diagnosis, Computer-Assisted, Fetoscopes, Humans, Image Processing, Computer-Assisted, Retrospective Studies, Time-Lapse Imaging
ABSTRACT
Mild Cognitive Impairment (MCI) is a condition characterized by a decline in cognitive abilities, specifically in memory, language, and attention, that is beyond what is expected due to normal aging. Detection of MCI is crucial for providing appropriate interventions and slowing down the progression of dementia. Several automated predictive algorithms exist for prediction with time-to-event data, but it is not clear which is best at predicting the time to conversion to MCI. It is also unclear whether algorithms with fewer training weights are less accurate. We compared three algorithms, from smaller to larger numbers of training weights: a statistical predictive model (Cox proportional hazards model, CoxPH), a machine learning model (Random Survival Forest, RSF), and a deep learning model (DeepSurv). To compare the algorithms under different scenarios, we created a simulated dataset based on the Alzheimer NACC dataset. We found that the CoxPH model was among the best-performing models in all simulated scenarios. At a larger sample size (n = 6,000), the deep learning algorithm (DeepSurv) exhibited comparable accuracy (73.1%) to the CoxPH model (73%). In the past, ignoring heterogeneity in the CoxPH model led to the conclusion that deep learning methods are superior. We found that when the CoxPH model accounts for heterogeneity, its accuracy is comparable to that of DeepSurv and RSF. Furthermore, when unobserved heterogeneity is present, such as features missing from the training data, all three models showed a similar drop in accuracy. This simulation study suggests that in some applications an algorithm with a smaller number of training weights is not disadvantaged in terms of accuracy. Since algorithms with fewer weights are inherently easier to explain, this study can help artificial intelligence research develop a principled approach to comparing statistical, machine learning, and deep learning algorithms for time-to-event predictions.
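Predictive accuracy for time-to-event models of this kind is commonly scored with Harrell's concordance index, which handles censored follow-up. The sketch below is an illustrative implementation of that standard metric, not the authors' own evaluation code, and the exact accuracy measure used in the study is an assumption here.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index: among comparable pairs (one subject's event is
    observed before the other's follow-up time), the fraction where the
    model assigns the higher risk to the subject who fails first.
    Ties in risk count as half-concordant."""
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        if not events[i]:          # censored subjects cannot anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 1.0 means perfect risk ordering, 0.5 is chance level, so a reported accuracy near 73% corresponds to a moderately discriminative model.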
Subject(s)
Cognitive Dysfunction, Deep Learning, Humans, Artificial Intelligence, Algorithms, Cognitive Dysfunction/diagnosis, Machine Learning
ABSTRACT
Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (producing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
Subject(s)
Glaucoma, Optic Disk, Humans, Artificial Intelligence, Fundus Oculi, Glaucoma/diagnosis, Optic Disk/diagnostic imaging, Machine Learning
ABSTRACT
Current research in automated disease detection focuses on making algorithms "slimmer": reducing the need for large training datasets and accelerating recalibration for new data while achieving high accuracy. The development of slimmer models has become a hot research topic in medical imaging. In this work, we develop a two-phase model for glaucoma detection, identifying and exploiting a redundancy in fundus image data relating particularly to the geometry. We propose a novel algorithm for cup and disc segmentation, "EffUnet", with an efficient convolution block, and combine this with an extended spatial generative approach for geometry modelling and classification, termed "SpaGen". We demonstrate the high accuracy achievable by EffUnet in detecting the optic disc and cup boundaries and show how our algorithm can be quickly trained with new data by recalibrating the EffUnet layer only. Our resulting glaucoma detection algorithm, "EffUnet-SpaGen", is optimized to significantly reduce the computational burden while at the same time surpassing the current state of the art in glaucoma detection algorithms, with AUROC 0.997 and 0.969 in the benchmark online datasets ORIGA and DRISHTI, respectively. Our algorithm also allows deformed areas of the optic rim to be displayed and investigated, providing explainability, which is crucial to successful adoption and implementation in clinical settings.
ABSTRACT
Automated MRI-derived measurements of in-vivo human brain volumes provide novel insights into normal and abnormal neuroanatomy, but little is known about measurement reliability. Here we assess the impact of image acquisition variables (scan session, MRI sequence, scanner upgrade, vendor, and field strength), FreeSurfer segmentation pre-processing variables (image averaging, B1 field inhomogeneity correction), and segmentation analysis variables (probabilistic atlas) on resultant image segmentation volumes from older (n=15, mean age 69.5) and younger (both n=5, mean ages 34 and 36.5) healthy subjects. The variability between hippocampal, thalamic, caudate, putamen, lateral ventricular, and total intracranial volume measures across sessions on the same scanner on different days is less than 4.3% for the older group and less than 2.3% for the younger group. Within-scanner measurements are remarkably reliable across scan sessions, being minimally affected by averaging of multiple acquisitions, B1 correction, acquisition sequence (MPRAGE vs. multi-echo FLASH), major scanner upgrades (Sonata-Avanto, Trio-TrioTIM), and segmentation atlas (MPRAGE or multi-echo FLASH). Volume measurements across platforms (Siemens Sonata vs. GE Signa) and field strengths (1.5 T vs. 3 T) show a bias in volume difference but a variance comparable to that measured within-scanner, implying that multi-site studies may not necessarily require a much larger sample to detect a specific effect. These results suggest that volumes derived from automated segmentation of T1-weighted structural images are reliable measures within the same scanner platform, even after upgrades; however, combining data across platforms and across field strengths introduces a bias that should be considered in the design of multi-site studies, such as clinical drug trials.
The results derived from the younger groups (scanner upgrade effects and B1 inhomogeneity correction effects) should be considered preliminary and in need of further validation with a larger dataset.
Subject(s)
Brain/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/instrumentation, Magnetic Resonance Imaging/methods, Adult, Aged, Aged, 80 and over, Brain Mapping/instrumentation, Brain Mapping/methods, Humans, Multicenter Studies as Topic, Reproducibility of Results
ABSTRACT
Schizophrenia can be a devastating lifelong psychotic disorder with a poor prognosis. National guidelines in the UK recommend the provision of cognitive behavioral therapy (CBT) to all those suffering from psychotic disorders, but there is a lack of trained therapists in the UK able to provide such treatment. Developing high-quality automated technologies that can serve as an adjunct to conventional CBT should enhance the provision of this therapy and increase the efficiency of therapists in practice. The latter will occur by enabling alternate professionals to aid in the delivery of therapy, enabling behavioral experiments to be conducted in the clinic, and allowing sessions to be recorded and re-played so that patients can deliver therapy to themselves. As such, the system will enable patients to become experts in, and providers of, their own treatment and decrease the number of sessions that need to be led by a trained CBT therapist. A key feature of any such system is the level of realism required to ensure a compelling session in which the user is not adversely affected by the system itself. This paper presents a high-fidelity virtual environment to help better understand the environmental triggers for psychosis.
Subject(s)
Psychotic Disorders, Schizophrenia, Cognitive Behavioral Therapy, Humans, Psychotic Disorders/psychology, Schizophrenia/therapy
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pone.0209409.].
ABSTRACT
BACKGROUND: Glaucoma is the leading cause of irreversible blindness worldwide. It is a heterogeneous group of conditions with a common optic neuropathy and associated loss of peripheral vision. Both over- and under-diagnosis carry high costs in terms of healthcare spending and preventable blindness. The characteristic clinical feature of glaucoma is asymmetrical optic nerve rim narrowing, which is difficult for humans to quantify reliably. Strategies to improve and automate optic disc assessment are therefore needed to prevent sight loss. METHODS: We developed a novel glaucoma detection algorithm that segments and analyses colour photographs to quantify optic nerve rim consistency around the whole disc at 15-degree intervals. This provides a profile of the cup/disc ratio, in contrast to the vertical cup/disc ratio in common use. We introduce a spatial probabilistic model to account for the optic nerve shape, and we then use this model to derive a disc deformation index and a decision rule for glaucoma. We tested our algorithm on two separate image datasets (ORIGA and RIM-ONE). RESULTS: The spatial algorithm accurately distinguished glaucomatous and healthy discs on internal and external validation (AUROC 99.6% and 91.0%, respectively). It achieves this using a dataset 100 times smaller than that required for deep learning algorithms, is flexible to the type of cup and disc segmentation (automated or semi-automated), utilises images with missing data, and is correlated with disc size (p = 0.02) and with the rim-to-disc ratio at the narrowest rim (p<0.001, in external validation). DISCUSSION: The spatial probabilistic algorithm is highly accurate and highly data-efficient, and it extends to any imaging hardware in which the boundaries of cup and disc can be segmented, making it particularly applicable to research into disease mechanisms and to glaucoma screening in low-resource settings.
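The whole-disc cup/disc profile at 15-degree intervals can be pictured with the following sketch. The representation of the segmented boundaries as radial distances per degree is a hypothetical input format chosen for illustration, not the paper's data structure.

```python
import numpy as np

def cdr_profile(cup_radii, disc_radii, step_deg=15):
    """Cup/disc ratio sampled every `step_deg` degrees around the disc
    centre. `cup_radii` and `disc_radii` are hypothetical inputs: radial
    distances from the centre to each boundary, one value per degree
    (arrays of length 360). Returns the sampled angles and ratios."""
    angles = np.arange(0, 360, step_deg)
    ratios = np.asarray(cup_radii, float)[angles] / np.asarray(disc_radii, float)[angles]
    return angles, ratios
```

Unlike a single vertical cup/disc ratio, this 24-value profile preserves where around the rim any narrowing occurs, which is what allows a deformation index to be derived.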
Subject(s)
Algorithms, Diagnosis, Computer-Assisted/methods, Diagnostic Techniques, Ophthalmological/statistics & numerical data, Glaucoma/diagnostic imaging, Diagnosis, Computer-Assisted/statistics & numerical data, Glaucoma/diagnosis, Humans, Models, Statistical, Optic Disk/diagnostic imaging, Optic Nerve/diagnostic imaging, Spatial Analysis, Support Vector Machine
ABSTRACT
Single-cell studies using noninvasive imaging are a challenging, yet appealing, way to study cellular characteristics over extended periods of time, for instance to follow cell interactions and the behavior of different cell types within the same sample. In some cases, e.g., transplantation culturing, real-time cellular monitoring, stem cell studies, in vivo studies, and embryo growth studies, it is also crucial to keep the sample intact, and invasive imaging using fluorophores or dyes is not an option. Computerized methods are needed to improve the throughput of image-based analysis, yet such methods are poorly developed for use with noninvasive microscopy. By combining a set of well-documented image analysis and classification tools with noninvasive microscopy, we demonstrate the ability to perform long-term image-based analysis of morphological changes in single cells as induced by a toxin, and show how these changes can be used to indicate changes in biological function. In this study, adherent cell cultures of DU-145 treated with low-concentration (LC) etoposide were imaged over 3 days. Single cells were identified by image segmentation and subsequently classified on image features extracted for each cell. In parallel with image analysis, an MTS assay was performed to allow comparison between metabolic activity and morphological changes after long-term low-level drug response. Results show a decrease in proliferation rate for LC etoposide, accompanied by changes in cell morphology, primarily leading to an increase in cell area and textural changes. Changes detected by image analysis are already visible on day 1 for [Formula: see text] etoposide, whereas effects on MTS and viability are detected only on day 3 for [Formula: see text] etoposide concentration, leading to the conclusion that the observed morphological changes occur earlier and at lower concentrations than the reduction in cell metabolic activity or viability.
Three classifiers are compared, and we report a best-case sensitivity of 88% and specificity of 94% for classification of cells as treated/untreated.
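The reported sensitivity and specificity follow the standard confusion-matrix definitions, sketched below. The example counts (100 treated and 100 untreated cells) are an illustrative assumption chosen so the numbers reproduce the 88%/94% figures, not the study's actual cell counts.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of treated cells
    correctly flagged. Specificity = TN / (TN + FP): fraction of
    untreated cells correctly left unflagged."""
    return tp / (tp + fn), tn / (tn + fp)
```

For instance, 88 true positives out of 100 treated cells and 94 true negatives out of 100 untreated cells give sensitivity 0.88 and specificity 0.94.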
ABSTRACT
Manual grading of lesions in retinal images is relevant to clinical management and clinical trials, but it is time-consuming and expensive. Furthermore, it collects only limited information, such as lesion size or frequency. The spatial distribution of lesions is ignored, even though it may contribute to the overall clinical assessment of disease severity and correspond to microvascular and physiological topography. Capillary non-perfusion (CNP) lesions are central to the pathogenesis of major causes of vision loss. Here we propose a novel method to analyse CNP using spatial statistical modelling. This quantifies the percentage of CNP pixels in each of 48 sectors and then characterises the spatial distribution with goniometric functions. We applied our spatial approach to a set of images from patients with malarial retinopathy and found that it compares favourably with the raw percentage of CNP pixels and also with manual grading. Furthermore, we were able to quantify a biological characteristic of macular CNP in malaria that had previously only been described subjectively: clustering at the temporal raphe. Microvascular location is likely to be biologically relevant in many diseases, and so our spatial approach may be applicable to a diverse range of pathological features in the retina and other organs.
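The first step of the method, the per-sector CNP percentage, can be sketched as below. The binary mask input, the choice of centre, and the sector indexing convention are illustrative assumptions; the goniometric-function fit applied afterwards is not shown.

```python
import numpy as np

def sector_cnp_percentages(cnp_mask, centre, n_sectors=48):
    """Percentage of CNP-labelled pixels in each of `n_sectors` angular
    sectors around `centre` (row, col). `cnp_mask` is a hypothetical
    binary image (1 = CNP pixel); sector 0 starts at angle -pi."""
    rows, cols = np.indices(cnp_mask.shape)
    theta = np.arctan2(rows - centre[0], cols - centre[1])       # (-pi, pi]
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    pct = np.zeros(n_sectors)
    for s in range(n_sectors):
        in_sector = sector == s
        if in_sector.any():
            pct[s] = 100.0 * cnp_mask[in_sector].mean()
    return pct
```

The resulting 48-value profile is what makes angular clustering, such as at the temporal raphe, quantifiable rather than merely visible.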
Subject(s)
Capillaries/diagnostic imaging, Malaria/complications, Retinal Diseases/diagnostic imaging, Capillaries/pathology, Humans, Image Interpretation, Computer-Assisted, Malaria/diagnostic imaging, Malaria/pathology, Models, Statistical, Retina/diagnostic imaging, Retina/pathology, Retinal Diseases/parasitology, Retinal Diseases/pathology
ABSTRACT
Psychotic disorders carry social and economic costs for sufferers and society. Recent evidence highlights the risk posed by urban upbringing and social deprivation in the genesis of paranoia and psychosis. Evidence-based psychological interventions are often not offered because of a lack of therapists. Virtual reality (VR) environments have been used to treat mental health problems, and VR may be a way of understanding the aetiological processes in psychosis and of increasing psychotherapeutic resources for its treatment. We developed a high-fidelity virtual reality scenario of an urban street scene to test the hypothesis that virtual urban exposure can generate paranoia to a comparable or greater extent than scenarios using indoor scenes. Participants (n = 32) entered the VR scenario for four minutes, after which their degree of paranoid ideation was assessed. We demonstrated that the virtual reality scenario was able to elicit paranoia in a nonclinical, healthy group and that an urban scene was more likely to lead to higher levels of paranoia than a virtual indoor environment. We suggest that this study offers evidence to support the role of exposure to factors in the urban environment in the genesis and maintenance of psychotic experiences and symptoms. The realistic high-fidelity street scene scenario may offer a useful tool for therapists.
ABSTRACT
Longitudinal and multi-site clinical studies create the imperative to characterize and correct technological sources of variance that limit image reproducibility in high-resolution structural MRI studies, thus facilitating precise, quantitative, platform-independent, multi-site evaluation. In this work, we investigated the effects that imaging gradient non-linearities have on the reproducibility of multi-site human MRI. We applied an image distortion correction method based on a spherical harmonic description of the gradients and verified the accuracy of the method using phantom data. The correction method was then applied to brain image data from a group of subjects scanned twice at multiple sites on different 1.5 T platforms. Within-site and across-site variability of the image data was assessed by evaluating voxel-based image intensity reproducibility. The image intensity reproducibility of the human brain data was significantly improved by distortion correction, suggesting that this method may offer improved reproducibility in morphometry studies. We provide the source code for the gradient distortion algorithm together with the phantom data.
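One simple way voxel-based intensity reproducibility between repeat scans can be scored is sketched below. This is an illustrative metric under the assumption of co-registered volumes, not the paper's exact formulation.

```python
import numpy as np

def intensity_reproducibility(scan1, scan2):
    """Mean absolute relative intensity difference between two
    co-registered scan volumes over non-background voxels; smaller
    values indicate better voxel-based reproducibility."""
    a = np.asarray(scan1, dtype=float)
    b = np.asarray(scan2, dtype=float)
    mean_img = 0.5 * (a + b)
    valid = mean_img > 0          # ignore empty background voxels
    return float(np.mean(np.abs(a[valid] - b[valid]) / mean_img[valid]))
```

A distortion correction that works would shrink this score for scan pairs acquired at different sites, since geometric misalignment inflates the voxelwise differences.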
Subject(s)
Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/statistics & numerical data, Calibration, Computer Simulation, Humans, Multicenter Studies as Topic/methods, Nonlinear Dynamics, Reproducibility of Results
ABSTRACT
In vivo MRI-derived measurements of human cerebral cortex thickness are providing novel insights into normal and abnormal neuroanatomy, but little is known about their reliability. We investigated how the reliability of cortical thickness measurements is affected by MRI instrument-related factors, including scanner field strength, manufacturer, upgrade, and pulse sequence. Several data processing factors were also studied. Two test-retest data sets were analyzed: 1) 15 healthy older subjects scanned four times at 2-week intervals on three scanners; 2) 5 subjects scanned before and after a major scanner upgrade. Within-scanner variability of global cortical thickness measurements was <0.03 mm, and the point-wise standard deviation of measurement error was approximately 0.12 mm. Variability was 0.15 mm and 0.17 mm on average, respectively, for cross-scanner (Siemens/GE) and cross-field-strength (1.5 T/3 T) comparisons. Scanner upgrade did not increase variability nor introduce bias. Measurements across field strengths, however, were slightly biased (thicker at 3 T). The number of (single vs. multiple averaged) acquisitions had a negligible effect on reliability, but the use of a different pulse sequence had a larger impact, as did different parameters employed in data processing. Sample size estimates indicate that a regional cortical thickness difference of 0.2 mm between two groups could be identified with as few as 7 subjects per group, and a difference of 0.1 mm could be detected with 26 subjects per group. These results demonstrate that MRI-derived cortical thickness measures are highly reliable when MRI instrument and data processing factors are controlled, but that it is important to consider these factors in the design of multi-site or longitudinal studies, such as clinical drug trials.
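Sample-size figures like those above can be reproduced with a standard two-sample normal-approximation calculation, sketched below. The between-subject SD of roughly 0.13 mm used in the example is an assumption chosen for illustration (it lies between the reported 0.12 mm pointwise error and the 0.15-0.17 mm cross-platform variability), and alpha = 0.05 with 80% power is also assumed, not stated in the abstract.

```python
import math

def per_group_n(delta, sigma, z_alpha=1.959964, z_power=0.841621):
    """Normal-approximation sample size per group for detecting a mean
    difference `delta` between two groups with common SD `sigma`:
    n = 2 * ((z_alpha + z_power) * sigma / delta)^2, rounded up.
    Defaults correspond to two-sided alpha = 0.05 and 80% power."""
    return math.ceil(2 * ((z_alpha + z_power) * sigma / delta) ** 2)
```

Under these assumptions the formula yields 7 subjects per group for a 0.2 mm difference, matching the abstract, while halving the detectable difference roughly quadruples the required sample.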