Results 1 - 6 of 6
1.
Eye (Lond) ; 33(11): 1791-1797, 2019 11.
Article in English | MEDLINE | ID: mdl-31267086

ABSTRACT

OBJECTIVES: To evaluate the performance of a deep learning-based Artificial Intelligence (AI) software for the detection of glaucoma from stereoscopic optic disc photographs, and to compare this performance with that of a large cohort of ophthalmologists and optometrists. METHODS: A retrospective study evaluating the diagnostic performance of an AI software (Pegasus v1.0, Visulytix Ltd., London, UK) and comparing it with that of 243 European ophthalmologists and 208 British optometrists, as determined in previous studies, for the detection of glaucomatous optic neuropathy from 94 stereoscopic photographic slides scanned into digital format. RESULTS: Pegasus detected glaucomatous optic neuropathy with an accuracy of 83.4% (95% CI: 77.5-89.2). This is comparable to an average ophthalmologist accuracy of 80.5% (95% CI: 67.2-93.8) and an average optometrist accuracy of 80% (95% CI: 67-88) on the same images. In addition, the AI system had an intra-observer agreement (Cohen's Kappa, κ) of 0.74 (95% CI: 0.63-0.85), compared with 0.70 (range: -0.13-1.00; 95% CI: 0.67-0.73) and 0.71 (range: 0.08-1.00) for ophthalmologists and optometrists, respectively. There was no statistically significant difference between the performance of the deep learning system and that of the ophthalmologists or optometrists. CONCLUSION: The AI system achieved diagnostic performance and repeatability comparable to those of the ophthalmologists and optometrists. We conclude that deep learning-based AI systems, such as Pegasus, demonstrate significant promise in the assisted detection of glaucomatous optic neuropathy.


Subject(s)
Artificial Intelligence , Deep Learning , Diagnosis, Computer-Assisted , Glaucoma, Open-Angle/diagnosis , Optic Disk/pathology , Optic Nerve Diseases/diagnosis , Photography , Clinical Competence , Europe , False Positive Reactions , Humans , Observer Variation , Ophthalmologists , Optic Disk/diagnostic imaging , Optometrists , Predictive Value of Tests , ROC Curve , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity
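Editor's note: for readers unfamiliar with the agreement statistics quoted in the abstract above, the following is a minimal sketch of computing accuracy, a bootstrap 95% confidence interval, and Cohen's kappa for intra-observer agreement with scikit-learn. The grades are randomly generated stand-ins, not the study data, and the bootstrap interval is only one way such intervals can be obtained; the paper may have used a different method.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Randomly generated stand-ins for 94 slides: a reference diagnosis and two
# independent reads by the same grader (the study data are not reproduced here).
reference = rng.integers(0, 2, size=94)
read_1 = np.where(rng.random(94) < 0.85, reference, 1 - reference)
read_2 = np.where(rng.random(94) < 0.85, read_1, 1 - read_1)

print("accuracy vs reference:", accuracy_score(reference, read_1))

# Intra-observer agreement (repeatability) between the two reads of one grader.
print("Cohen's kappa, read 1 vs read 2:", cohen_kappa_score(read_1, read_2))

# Simple bootstrap 95% confidence interval for accuracy.
boot = []
for _ in range(2000):
    idx = rng.integers(0, 94, size=94)
    boot.append(accuracy_score(reference[idx], read_1[idx]))
print("bootstrap 95% CI for accuracy:", np.percentile(boot, [2.5, 97.5]))
```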
2.
J Glaucoma ; 28(12): 1029-1034, 2019 12.
Article in English | MEDLINE | ID: mdl-31233461

ABSTRACT

PRECIS: Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the "best case" consensus among the ophthalmologists. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Furthermore, the high sensitivity of Pegasus makes it a valuable tool for screening patients with glaucomatous optic neuropathy. PURPOSE: The purpose of this study was to evaluate the performance of a deep learning system for the identification of glaucomatous optic neuropathy. MATERIALS AND METHODS: Six ophthalmologists and the deep learning system, Pegasus, graded 110 color fundus photographs in this retrospective single-center study. Patient images were randomly sampled from the Singapore Malay Eye Study. Ophthalmologists and Pegasus were compared with each other and with the original clinical diagnosis given by the Singapore Malay Eye Study, which was defined as the gold standard. Pegasus' performance was also compared with the "best case" consensus scenario, that is, the combination of ophthalmologists whose consensus opinion most closely matched the gold standard. The performance of the ophthalmologists and Pegasus at the binary classification of nonglaucoma versus glaucoma from fundus photographs was assessed in terms of sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC), and the intraobserver and interobserver agreements were determined. RESULTS: Pegasus achieved an AUROC of 92.6%, compared with ophthalmologist AUROCs ranging from 69.6% to 84.9% and a "best case" consensus scenario AUROC of 89.1%. Pegasus had a sensitivity of 83.7% and a specificity of 88.2%, whereas the ophthalmologists' sensitivity ranged from 61.3% to 81.6% and their specificity ranged from 80.0% to 94.1%. The agreement between Pegasus and the gold standard was 0.715, whereas the highest ophthalmologist agreement with the gold standard was 0.613. Intraobserver agreement ranged from 0.62 to 0.97 for the ophthalmologists and was perfect (1.00) for Pegasus. The deep learning system took ∼10% of the time of the ophthalmologists to determine a classification. CONCLUSIONS: Pegasus outperformed 5 of the 6 ophthalmologists in terms of diagnostic performance, and there was no statistically significant difference between the deep learning system and the "best case" consensus among the ophthalmologists. The high sensitivity of Pegasus makes it a valuable tool for screening patients with glaucomatous optic neuropathy. Future work will extend this study to a larger sample of patients.


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Glaucoma, Open-Angle/diagnosis , Optic Nerve Diseases/diagnosis , Photography/methods , Adult , Aged , Area Under Curve , Diagnostic Techniques, Ophthalmological , Female , Humans , Intraocular Pressure , Male , Middle Aged , Observer Variation , Ophthalmologists , Optic Disk/pathology , ROC Curve , Retrospective Studies , Sensitivity and Specificity
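Editor's note: as a companion to the metrics reported in the abstract above, here is a minimal sketch of computing sensitivity, specificity, and AUROC with scikit-learn. The gold-standard labels and glaucoma scores are synthetic stand-ins, not the Singapore Malay Eye Study data, and the 0.5 decision threshold is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: a binary gold standard for 110 fundus photographs and a
# continuous glaucoma score from a grader or model.
gold = rng.integers(0, 2, size=110)
score = np.clip(0.3 + 0.4 * gold + rng.normal(scale=0.2, size=110), 0.0, 1.0)
pred = (score >= 0.5).astype(int)  # binary call at an illustrative threshold

tn, fp, fn, tp = confusion_matrix(gold, pred).ravel()
print("sensitivity:", tp / (tp + fn))        # true positive rate
print("specificity:", tn / (tn + fp))        # true negative rate
print("AUROC:", roc_auc_score(gold, score))  # threshold-free ranking quality
```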
3.
J Chem Theory Comput ; 15(3): 1728-1742, 2019 Mar 12.
Article in English | MEDLINE | ID: mdl-30681844

ABSTRACT

Building on the success of Quantum Monte Carlo techniques such as diffusion Monte Carlo, alternative stochastic approaches to solve electronic structure problems have emerged over the past decade. The full configuration interaction quantum Monte Carlo (FCIQMC) method allows one to systematically approach the exact solution of such problems in cases where very high accuracy is desired. The introduction of FCIQMC has subsequently led to the development of coupled cluster Monte Carlo (CCMC) and density matrix quantum Monte Carlo (DMQMC), allowing stochastic sampling of the coupled cluster wave function and the exact thermal density matrix, respectively. In this Article, we describe the HANDE-QMC code, an open-source implementation of FCIQMC, CCMC and DMQMC, including initiator and semistochastic adaptations. We describe our code and demonstrate its use on three example systems: a molecule (nitric oxide), a model solid (the uniform electron gas), and a real solid (diamond). An illustrative tutorial is also included.
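Editor's note: the following is a deliberately simplified, self-contained sketch of the FCIQMC walker dynamics (spawning, diagonal death/cloning, implicit annihilation, and shift-based population control) on a small model matrix. It is not HANDE-QMC code or its API; the toy Hamiltonian, time step, and population-control parameters are arbitrary choices for illustration, and the estimate is stochastic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Hamiltonian": a small real symmetric matrix standing in for <D_i|H|D_j>.
n = 6
H = rng.normal(scale=0.1, size=(n, n))
H = 0.5 * (H + H.T)
np.fill_diagonal(H, [0.0, 0.8, 1.0, 1.2, 1.5, 2.0])  # reference determinant lowest
exact = np.linalg.eigvalsh(H)[0]

dt, shift, target, damping = 0.01, 0.5, 1000, 0.05
walkers = np.zeros(n)
walkers[0] = 100.0  # start all walkers on the reference determinant
vary_shift, n_prev = False, abs(walkers).sum()
estimates = []

def stoch_round(x):
    """Stochastically round a non-negative real to an integer."""
    return int(x) + (rng.random() < x - int(x))

for it in range(1, 4001):
    new = np.zeros(n)
    for i in np.nonzero(walkers)[0]:
        n_i, sign_i = abs(walkers[i]), np.sign(walkers[i])
        # Spawning onto each connected determinant j: the expected number of
        # children is dt * |H_ij| * n_i, carrying sign -sign(H_ij) * sign_i.
        for j in range(n):
            if j != i:
                new[j] -= np.sign(H[i, j]) * sign_i * stoch_round(dt * abs(H[i, j]) * n_i)
        # Diagonal death/cloning step governed by H_ii - shift.
        change = dt * (H[i, i] - shift) * n_i
        new[i] += walkers[i] - np.sign(change) * sign_i * stoch_round(abs(change))
    walkers = new  # annihilation is implicit: signed counts cancel per determinant
    # Population control: once the population exceeds the target, adjust the
    # shift every 10 steps so the total walker number stays roughly constant.
    if it % 10 == 0:
        n_now = abs(walkers).sum()
        vary_shift = vary_shift or n_now > target
        if vary_shift:
            shift -= damping / (10 * dt) * np.log(n_now / n_prev)
        n_prev = n_now
    # Projected-energy estimator against the reference determinant.
    if it > 3000 and walkers[0] != 0:
        estimates.append(H[0, 0] + H[0, 1:] @ walkers[1:] / walkers[0])

print(f"toy FCIQMC estimate: {np.mean(estimates):.4f}   exact: {exact:.4f}")
```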

4.
J Xray Sci Technol ; 25(3): 323-339, 2017.
Article in English | MEDLINE | ID: mdl-28157116

ABSTRACT

BACKGROUND: Non-intrusive inspection systems based on X-ray radiography techniques are routinely used at transport hubs to ensure the conformity of cargo content with the supplied shipping manifest. As trade volumes increase and regulations become more stringent, manual inspection by trained operators is less and less viable due to low throughput. Machine vision techniques can assist operators in their task by automating parts of the inspection workflow. Since cars are routinely involved in trafficking, export fraud, and tax evasion schemes, they represent an attractive target for automated detection and flagging for subsequent inspection by operators. OBJECTIVE: Development and evaluation of a novel method for the automated detection of cars in complex X-ray cargo imagery. METHODS: X-ray cargo images from a stream-of-commerce dataset were classified using a window-based scheme. The limited number of car images was addressed by using an oversampling scheme. Different Convolutional Neural Network (CNN) architectures were compared with well-established bag-of-words approaches. In addition, robustness to concealment was evaluated by projection of objects into car images. RESULTS: CNN approaches outperformed all other methods evaluated, achieving a 100% car image classification rate at a false positive rate of 1-in-454. Cars that were partially or completely obscured by other goods, a modus operandi frequently adopted by criminals, were correctly detected. CONCLUSIONS: We believe that this level of performance suggests that the method is suitable for deployment in the field. It is expected that the generic object detection workflow described can be extended to other object classes given the availability of suitable training data.


Subject(s)
Automobiles , Machine Learning , Radiographic Image Enhancement/methods , Radiography/methods , Security Measures , Humans
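Editor's note: as an illustration of the window-based classification scheme described in the abstract above, the following is a minimal PyTorch sketch. The tiny architecture, window size, stride, decision threshold, and random placeholder radiograph are all invented for the example; this is not the network or preprocessing used in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny binary (car / non-car) window classifier, purely illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def windows(image, size=128, stride=64):
    """Slide a fixed-size window over a single-channel radiograph."""
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

# Image-level decision: flag the radiograph if any window scores above a threshold.
model = SmallCNN().eval()
radiograph = np.random.rand(512, 1024).astype(np.float32)  # placeholder image
with torch.no_grad():
    batch = torch.stack([torch.from_numpy(w)[None] for w in windows(radiograph)])
    car_prob = torch.softmax(model(batch), dim=1)[:, 1]
print("flagged as containing a car:", bool((car_prob > 0.9).any()))
```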
5.
J Xray Sci Technol ; 25(1): 33-56, 2017.
Article in English | MEDLINE | ID: mdl-27802247

ABSTRACT

We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.


Subject(s)
Image Processing, Computer-Assisted/methods , Security Measures , Terrorism/prevention & control , Transportation/standards , X-Rays
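Editor's note: to make the Threat Image Projection (TIP) step mentioned in the review above concrete, here is a minimal sketch under the usual simplifying assumption that X-ray transmission images combine multiplicatively (ignoring scatter and beam hardening). The function and array names are illustrative and not drawn from any of the reviewed systems.

```python
import numpy as np

def project_threat(cargo_tx, threat_tx, top_left):
    """Composite an isolated threat signature into a cargo transmission image.

    Both inputs are transmission images with values in (0, 1], where 1 means no
    attenuation. Stacking objects along the beam multiplies their transmissions,
    so TIP reduces to a pixel-wise product over the insertion region.
    """
    out = cargo_tx.astype(float).copy()
    y, x = top_left
    h, w = threat_tx.shape
    out[y:y + h, x:x + w] *= threat_tx
    return out

# Toy usage with synthetic images.
cargo = np.random.default_rng(0).uniform(0.6, 1.0, size=(300, 600))
threat = np.full((40, 80), 0.5)  # a uniform absorber as a stand-in threat
tipped = project_threat(cargo, threat, top_left=(120, 250))
```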
6.
J Xray Sci Technol ; 25(1): 57-77, 2017.
Article in English | MEDLINE | ID: mdl-27802248

ABSTRACT

BACKGROUND: Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. OBJECTIVE: We consider the measurement of wobble and amelioration of the consequent loss of image precision. METHODS: Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest-based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. RESULTS: We show that these methods are able to correct for 87% of the image error due to wobble, and that, when the corrections are applied to difficult images, a significant visible improvement in intensity-windowed image quality is observed. CONCLUSIONS: The method improves the precision of wobble-affected images, which should help improve detection of threats and the identification of different materials in the image.


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Security Measures , Technology, Radiologic/methods , Terrorism/prevention & control , Artifacts , Transportation/standards , X-Rays
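Editor's note: as a rough illustration of the regression step described in the abstract above, the following sketch trains a random forest regressor to recover a wobble offset from simulated beam-profile measurements. The Gaussian beam model, detector count, offset range, and noise level are invented for the example and do not reflect the paper's sensors or calibration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def synth_profiles(n_scans, n_bpd=8):
    """Simulated Beam Position Detector responses for a Gaussian beam profile
    whose centre is displaced by an unknown wobble offset (in detector pixels)."""
    offsets = rng.uniform(-2.0, 2.0, size=n_scans)
    positions = np.linspace(-4.0, 4.0, n_bpd)
    profiles = np.exp(-0.5 * (positions[None, :] - offsets[:, None]) ** 2)
    profiles += rng.normal(scale=0.02, size=profiles.shape)  # sensor noise
    return profiles, offsets

X_train, y_train = synth_profiles(5000)
X_test, y_test = synth_profiles(500)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
pred = forest.predict(X_test)
print("mean absolute wobble error (pixels):", np.abs(pred - y_test).mean())
```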