1.
Comput Methods Programs Biomed ; 248: 108111, 2024 May.
Article En | MEDLINE | ID: mdl-38479147

BACKGROUND AND OBJECTIVE: Training deep learning models for medical image segmentation requires large annotated datasets, which are expensive and time-consuming to create. Active learning is a promising approach to reduce this burden by strategically selecting the most informative samples for annotation. This study investigates the use of active learning for efficient left ventricle segmentation in echocardiography with sparse expert annotations. METHODS: We adapt and evaluate various sampling techniques, demonstrating their effectiveness in judiciously selecting samples for annotation. Additionally, we introduce a novel strategy, Optimised Representativeness Sampling, which combines feature-based outliers with the most representative samples to enhance annotation efficiency. RESULTS: Our findings demonstrate a substantial reduction in annotation costs: 99% of the upper-bound performance was achieved while using only 20% of the labelled data, equating to 1680 fewer images requiring annotation in our dataset. When applied to a publicly available dataset, our approach yielded a 70% reduction in required annotation effort, compared with only a 50% reduction for baseline active learning strategies. Our experiments also highlight the nuanced performance of different sampling strategies across datasets within the same domain. CONCLUSIONS: The study provides a cost-effective approach to the challenge of limited expert annotations in echocardiography. By introducing a distinct dataset, made publicly available for research purposes, our work contributes to the field's understanding of efficient annotation strategies in medical image segmentation.
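As a generic illustration of the idea described (combining feature-based outliers with representative samples), the minimal sketch below scores an unlabelled pool and picks the top candidates for annotation. It is not the authors' implementation: the distance-based outlier score, the cosine-similarity representativeness measure, and the alpha weighting are all assumptions.

```python
import numpy as np

def select_for_annotation(features: np.ndarray, k: int, alpha: float = 0.5) -> np.ndarray:
    """Rank unlabelled samples by a combined outlier + representativeness score.

    features : (n_samples, n_features) image embeddings from any encoder.
    k        : number of samples to send for expert annotation.
    alpha    : assumed trade-off between the two criteria.
    """
    # Outlier score: distance from the centre of the feature distribution.
    centre = features.mean(axis=0)
    outlier = np.linalg.norm(features - centre, axis=1)

    # Representativeness score: mean cosine similarity to the rest of the pool.
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    representativeness = (unit @ unit.T).mean(axis=1)

    # Normalise both scores to [0, 1] before mixing them.
    def scale(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    score = alpha * scale(outlier) + (1 - alpha) * scale(representativeness)
    return np.argsort(score)[::-1][:k]        # indices of the k highest-scoring samples

# Example: pick 20% of a pool of 100 embeddings for annotation.
rng = np.random.default_rng(0)
chosen = select_for_annotation(rng.normal(size=(100, 64)), k=20)
```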


Echocardiography , Heart Ventricles , Heart Ventricles/diagnostic imaging , Image Processing, Computer-Assisted
2.
Comput Biol Med ; 171: 108192, 2024 Mar.
Article En | MEDLINE | ID: mdl-38417384

Doppler echocardiography is a widely utilised non-invasive imaging modality for assessing the function of heart valves, including the mitral valve. Manual assessment of Doppler traces by clinicians introduces variability, prompting the need for automated solutions. This study introduces a deep learning model for the automated detection of peak velocity measurements from mitral inflow Doppler images, independent of electrocardiogram information. A dataset of Doppler images annotated by multiple expert cardiologists was established, serving as a robust benchmark. The model leverages heatmap regression networks and achieves 96% detection accuracy. The model's discrepancy with the expert consensus falls comfortably within the range of inter- and intra-observer variability in measuring Doppler peak velocities. The dataset and models are open-source, fostering further research and clinical application.
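A heatmap-regression landmark detector typically outputs one heatmap per keypoint, and the peak-velocity measurement is read off the heatmap maximum. The sketch below shows only that post-processing step under assumed calibration values (the function name, `velocity_per_pixel`, and `baseline_row` are illustrative, not from the paper).

```python
import numpy as np

def heatmap_to_velocity(heatmap: np.ndarray,
                        velocity_per_pixel: float,
                        baseline_row: int) -> tuple:
    """Convert a predicted keypoint heatmap into a (column, velocity) pair.

    heatmap            : (H, W) network output for one landmark.
    velocity_per_pixel : cm/s represented by one pixel row (from image calibration).
    baseline_row       : row index of the zero-velocity baseline.
    """
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    velocity = (baseline_row - row) * velocity_per_pixel   # rows above baseline = positive flow
    return col, velocity

# Toy example: a Gaussian blob 40 rows above the baseline, 0.5 cm/s per pixel.
H, W = 256, 512
yy, xx = np.mgrid[0:H, 0:W]
toy = np.exp(-(((yy - 160) ** 2) + ((xx - 300) ** 2)) / (2 * 5.0 ** 2))
print(heatmap_to_velocity(toy, velocity_per_pixel=0.5, baseline_row=200))  # (300, 20.0)
```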


Deep Learning , Blood Flow Velocity , Echocardiography, Doppler/methods , Mitral Valve/diagnostic imaging , Ultrasonography, Doppler
3.
Med Biol Eng Comput ; 61(5): 911-926, 2023 May.
Article En | MEDLINE | ID: mdl-36631666

Tissue Doppler imaging is an essential echocardiographic technique for the non-invasive assessment of myocardial velocities. Image acquisition and interpretation are performed by trained operators who visually localise landmarks representing Doppler peak velocities. Current clinical guidelines recommend averaging measurements over several heartbeats. However, this manual process is both time-consuming and disruptive to workflow, so an automated system for accurate beat isolation and landmark identification would be highly desirable. A dataset of tissue Doppler images was annotated by three expert cardiologists, providing a gold standard and allowing for observer variability comparisons. Deep neural networks were trained for fully automated predictions on multiple heartbeats and tested on tissue Doppler strips of arbitrary length. Automated measurements of peak Doppler velocities show good Bland-Altman agreement (average standard deviation of 0.40 cm/s) with consensus expert values, which is less than the inter-observer variability (0.65 cm/s), and performance comparable to that of individual experts (standard deviations of 0.40 to 0.75 cm/s). Our approach allows more than 26 times as many heartbeats to be analysed compared with a manual approach. The proposed automated models can accurately and reliably make measurements on tissue Doppler images spanning several heartbeats, with performance indistinguishable from that of human experts but with significantly shorter processing time. HIGHLIGHTS: • A novel approach successfully identifies heartbeats from tissue Doppler images • Accurately measures peak velocities on several heartbeats • The framework is fast and can make predictions on images of arbitrary length • The patient dataset and models are made public for future benchmark studies.
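The Bland-Altman figures quoted above (bias and standard deviation of differences against consensus values) can be computed as in this minimal sketch; the velocity values shown are made up for illustration only.

```python
import numpy as np

def bland_altman(auto: np.ndarray, consensus: np.ndarray):
    """Bland-Altman statistics between automated and expert-consensus measurements."""
    diff = auto - consensus
    bias = diff.mean()                                   # systematic offset
    sd = diff.std(ddof=1)                                # spread of the differences
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement
    return bias, sd, limits

# Hypothetical peak tissue velocities in cm/s (illustrative values only).
auto = np.array([8.1, 10.4, 7.9, 9.6, 11.2])
consensus = np.array([8.0, 10.0, 8.2, 9.5, 11.0])
print(bland_altman(auto, consensus))
```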


Algorithms , Echocardiography, Doppler , Humans , Echocardiography, Doppler/methods , Neural Networks, Computer , Echocardiography , Myocardium
4.
Comput Biol Med ; 143: 105249, 2022 Apr.
Article En | MEDLINE | ID: mdl-35091363

Continuous ambulatory cardiac monitoring plays a critical role in the early detection of abnormalities in at-risk patients, thereby increasing the chance of early intervention. In this study, we present an automated ECG classification approach for distinguishing between healthy heartbeats and pathological rhythms. The proposed lightweight solution uses quantized one-dimensional deep convolutional neural networks and is well suited to real-time continuous monitoring of cardiac rhythm, providing one output prediction per second. Raw ECG data are used as the input to the classifier, eliminating the need for complex data preprocessing on low-powered wearable devices. In contrast to many compute-intensive approaches, the data analysis can be carried out locally on edge devices, providing privacy and portability. The proposed lightweight solution is accurate (sensitivity of 98.5% and specificity of 99.8%) and, implemented on a smartphone, it is energy-efficient and fast, requiring 5.85 mJ and 7.65 ms per prediction, respectively.
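A minimal PyTorch sketch of the general approach (a small 1D CNN over a raw ECG window, followed by post-training quantization) is shown below. The layer sizes, input window, and quantization scheme are assumptions and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Small 1D CNN over a raw single-lead ECG window (assumed ~1 s at 360 Hz)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_samples) raw ECG
        return self.classifier(self.features(x).squeeze(-1))

model = ECGNet().eval()
# Post-training dynamic quantization of the fully connected layer (int8 weights);
# the convolutional layers would need static quantization in a real deployment.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
logits = quantized(torch.randn(1, 1, 360))     # one prediction per 1 s window
```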

5.
J Med Imaging (Bellingham) ; 8(3): 034002, 2021 May.
Article En | MEDLINE | ID: mdl-34179218

Purpose: Echocardiography is the most commonly used modality for assessing the heart in clinical practice. In an echocardiographic exam, an ultrasound probe samples the heart from different orientations and positions, creating different viewpoints for assessing cardiac function. Determining the probe viewpoint is an essential step in automatic echocardiographic image analysis. Approach: In this study, convolutional neural networks are used for the automated identification of 14 different anatomical echocardiographic views (more than in any previous study) in a dataset of 8732 videos acquired from 374 patients. A differentiable architecture search approach was used to design small neural network architectures for rapid inference while maintaining high accuracy. The impact of image quality and resolution, training dataset size, and the number of echocardiographic view classes on the efficacy of the models was also investigated. Results: In contrast to deeper classification architectures, the proposed models had a significantly lower number of trainable parameters (up to 99.9% reduction), achieved comparable classification performance (accuracy 88.4% to 96%, precision 87.8% to 95.2%, recall 87.1% to 95.1%), and delivered real-time performance with an inference time of 3.6 to 12.6 ms per image. Conclusion: Compared with standard classification neural network architectures, the proposed models are faster, achieve comparable classification performance, and require less training data. Such models can be used for real-time detection of the standard views.

6.
Circ Cardiovasc Imaging ; 14(5): e011951, 2021 05.
Article En | MEDLINE | ID: mdl-33998247

BACKGROUND: Artificial intelligence (AI) for echocardiography requires training and validation to standards expected of humans. We developed an online platform and established the Unity Collaborative to build a dataset of expertise from 17 hospitals for the training, validation, and standardization of such techniques. METHODS: The training dataset consisted of 2056 individual frames drawn at random from 1265 parasternal long-axis video-loops of patients undergoing clinical echocardiography in 2015 to 2016. Nine experts labeled these images using our online platform. From this, we trained a convolutional neural network to identify keypoints. Subsequently, 13 experts labeled a validation dataset of the end-systolic and end-diastolic frames from 100 new video-loops, twice each. The 26-opinion consensus was used as the reference standard. The primary outcome was precision SD, the SD of the differences between the AI measurement and the expert consensus. RESULTS: In the validation dataset, the AI's precision SD for left ventricular internal dimension was 3.5 mm. For context, the precision SD of individual expert measurements against the expert consensus was 4.4 mm. The intraclass correlation coefficient between the AI and the expert consensus was 0.926 (95% CI, 0.904-0.944), compared with 0.817 (0.778-0.954) between individual experts and the expert consensus. For interventricular septum thickness, precision SD was 1.8 mm for the AI (intraclass correlation coefficient, 0.809; 0.729-0.967), versus 2.0 mm for individuals (intraclass correlation coefficient, 0.641; 0.568-0.716). For posterior wall thickness, precision SD was 1.4 mm for the AI (intraclass correlation coefficient, 0.535 [95% CI, 0.379-0.661]), versus 2.2 mm for individuals (0.366 [0.288-0.462]). We present all images and annotations; these highlight challenging cases, including poor image quality and tapered ventricles. CONCLUSIONS: Experts at multiple institutions successfully cooperated to build a collaborative AI that performed as well as individual experts. Future echocardiographic AI research should use a consensus of experts as the reference. Our collaborative welcomes new partners who share our commitment to publish all methods, code, annotations, and results openly.
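The primary outcome, precision SD, is defined in the abstract as the SD of the differences between a measurement and the expert consensus. A minimal sketch of that computation follows; the LV dimension values are hypothetical, and the ICCs reported above would come from a separate reliability analysis not shown here.

```python
import numpy as np

def precision_sd(measurement: np.ndarray, consensus: np.ndarray) -> float:
    """Precision SD: standard deviation of the differences between a rater's
    (or the AI's) measurements and the expert-consensus values."""
    return float(np.std(measurement - consensus, ddof=1))

# Hypothetical LV internal dimension measurements in mm (illustrative values only).
consensus = np.array([46.0, 51.2, 39.8, 55.0, 48.3])
ai        = np.array([44.9, 53.0, 41.0, 52.8, 49.1])
print(f"precision SD = {precision_sd(ai, consensus):.2f} mm")
```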


Artificial Intelligence , Echocardiography/methods , Heart Ventricles/diagnostic imaging , Machine Learning , Humans , Reproducibility of Results , United Kingdom
7.
Comput Biol Med ; 133: 104373, 2021 06.
Article En | MEDLINE | ID: mdl-33857775

BACKGROUND: Accurate identification of end-diastolic and end-systolic frames in echocardiographic cine loops is important, yet challenging, even for human experts. Manual frame selection is subject to uncertainty, affecting crucial clinical measurements such as myocardial strain. Therefore, the ability to automatically detect frames of interest is highly desirable. METHODS: We have developed deep neural networks, trained and tested on multi-centre patient data, for the accurate identification of end-diastolic and end-systolic frames in apical four-chamber 2D multibeat cine loop recordings of arbitrary length. Seven experienced cardiologists independently labelled the frames of interest, providing reference annotations and allowing observer variability to be measured. RESULTS: When compared with this ground truth, our model shows an average frame difference of -0.09 ± 1.10 and 0.11 ± 1.29 frames for end-diastolic and end-systolic frames, respectively. When applied to patient datasets from a different clinical site, to which the model was blind during its development, average frame differences of -1.34 ± 3.27 and -0.31 ± 3.37 frames were obtained for the two frames of interest. All detection errors fall within the range of inter-observer variability: [-0.87, -5.51] ± [2.29, 4.26] and [-0.97, -3.46] ± [3.67, 4.68] frames for ED and ES events, respectively. CONCLUSIONS: The proposed automated model can identify multiple end-systolic and end-diastolic frames in echocardiographic videos of arbitrary length with performance indistinguishable from that of human experts, but with significantly shorter processing time.


Echocardiography , Neural Networks, Computer , Diastole , Humans , Observer Variation
8.
IEEE J Biomed Health Inform ; 25(1): 131-142, 2021 01.
Article En | MEDLINE | ID: mdl-32750901

Esophageal cancer has a high mortality rate, and early detection of esophageal abnormalities (i.e., precancerous and early cancerous lesions) can improve patient survival. Deep learning-based methods have recently been proposed for detecting selected types of esophageal abnormality from endoscopic images. However, no method in the literature covers detection from endoscopic videos, detection from challenging frames, or detection of more than one type of esophageal abnormality. In this paper, we present an efficient method to automatically detect different types of esophageal abnormalities from endoscopic videos. We propose a novel 3D Sequential DenseConvLstm network that extracts spatiotemporal features from the input video. Our network incorporates a 3D Convolutional Neural Network (3DCNN) and a Convolutional LSTM (ConvLSTM) to efficiently learn short- and long-term spatiotemporal features. The generated feature map is utilized by a region proposal network and an ROI pooling layer to produce bounding boxes that detect abnormal regions in each frame throughout the video. Finally, we investigate a post-processing method named Frame Search Conditional Random Field (FS-CRF), which improves the overall performance of the model by recovering missing regions in neighboring frames within the same clip. We extensively validate our model on an endoscopic video dataset that includes a variety of esophageal abnormalities. Our model achieved high performance across different evaluation metrics: 93.7% recall, 92.7% precision, and a 93.2% F-measure. Moreover, as no results have been reported in the literature for esophageal abnormality detection from endoscopic videos, we tested the model on a publicly available colonoscopy video dataset to validate its robustness, achieving polyp detection performance of 81.18% recall, 96.45% precision, and an 88.16% F-measure, compared with state-of-the-art results of 78.84% recall, 90.51% precision, and an 84.27% F-measure on the same dataset. This demonstrates that the proposed method can be adapted to different gastrointestinal endoscopic video applications with promising performance.


Early Detection of Cancer , Neural Networks, Computer , Colonoscopy , Humans , Surgical Instruments
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 2019-2022, 2020 07.
Article En | MEDLINE | ID: mdl-33018400

Echocardiography is the modality of choice for the assessment of left ventricular function. The left ventricle is responsible for pumping oxygen-rich blood to the rest of the body. Segmentation of this chamber from echocardiographic images is a challenging task due to its ambiguous boundary and inhomogeneous intensity distribution. In this paper, we propose a novel deep learning model named ResDUnet. The model is based on U-Net combined with dilated convolutions, where residual blocks are employed instead of the basic U-Net units to ease the training process. Each block is enriched with a squeeze-and-excitation unit for channel-wise attention and adaptive feature re-calibration. To tackle the variability in left ventricle shape and size, we enrich the feature concatenation in U-Net by integrating feature maps generated by cascaded dilation. Cascaded dilation broadens the receptive field in comparison with traditional convolution, allowing the generation of multi-scale information and, in turn, a more robust segmentation. Performance was evaluated on a publicly available dataset of 500 patients with large variability in image quality and patient pathology. The proposed model shows a Dice similarity increase of 8.4% compared with DeepLabv3 and of 1.2% compared with the basic U-Net architecture. Experimental results demonstrate the potential for use in the clinical domain.
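The two building blocks named in the abstract (a residual block with squeeze-and-excitation, and a cascaded-dilation branch) can be sketched in PyTorch as below. This is a generic illustration under assumed channel counts and dilation rates, not the authors' ResDUnet.

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual block with channel-wise squeeze-and-excitation re-calibration."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        self.se = nn.Sequential(                       # squeeze-and-excitation gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.body(x)
        y = y * self.se(y)                             # channel-wise attention
        return self.act(x + y)                         # residual connection

class CascadedDilation(nn.Module):
    """Cascade of dilated convolutions (assumed rates 1, 2, 4) producing multi-scale features."""
    def __init__(self, channels: int):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )

    def forward(self, x):
        outputs = []
        for conv in self.convs:
            x = torch.relu(conv(x))                    # cascaded: each stage feeds the next
            outputs.append(x)
        return torch.cat(outputs, dim=1)               # concatenated multi-scale feature map

x = torch.randn(1, 32, 64, 64)
print(SEResidualBlock(32)(x).shape, CascadedDilation(32)(x).shape)
```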


Echocardiography , Heart Ventricles , Heart Ventricles/diagnostic imaging , Humans , Specimen Handling
10.
J Med Artif Intell ; 3, 2020 Mar 25.
Article En | MEDLINE | ID: mdl-32226937

Echocardiography is the commonest medical ultrasound examination, but automated interpretation is challenging and hinges on correct recognition of the 'view' (imaging plane and orientation). Current state-of-the-art methods for identifying the view computationally involve 2-dimensional convolutional neural networks (CNNs), but these classify individual frames of a video in isolation and ignore information describing the movement of structures throughout the cardiac cycle. Here we explore the efficacy of novel CNN architectures, including time-distributed networks and two-stream networks, inspired by advances in human action recognition. We demonstrate that these new architectures more than halve the error rate of traditional CNNs, from 8.1% to 3.9%. These gains in accuracy may be due to the networks' ability to track the movement of specific structures, such as heart valves, throughout the cardiac cycle. Finally, we show that the accuracies of these new state-of-the-art networks approach expert agreement (3.6% discordance), with a similar pattern of discordance between views.
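The "time-distributed" idea (apply a 2D frame encoder to every frame, then pool over time) can be sketched as below. This is a minimal stand-in with an assumed toy encoder and temporal average pooling, not the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class TimeDistributedViewClassifier(nn.Module):
    """Per-frame 2D CNN features pooled over time for view classification."""
    def __init__(self, n_views: int = 14):
        super().__init__()
        self.frame_encoder = nn.Sequential(             # toy encoder, not the paper's
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # -> (N, 32) per frame
        )
        self.head = nn.Linear(32, n_views)

    def forward(self, video):                            # video: (batch, frames, 1, H, W)
        b, t, c, h, w = video.shape
        feats = self.frame_encoder(video.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, -1).mean(dim=1)      # temporal average pooling
        return self.head(feats)

clip = torch.randn(2, 8, 1, 112, 112)                    # 2 clips, 8 frames each
print(TimeDistributedViewClassifier()(clip).shape)       # torch.Size([2, 14])
```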

11.
Med Biol Eng Comput ; 58(6): 1309-1323, 2020 Jun.
Article En | MEDLINE | ID: mdl-32253607

Speckle tracking is the most prominent technique used to estimate the regional movement of the heart from echocardiograms. In this study, we propose an optimisation-based block matching algorithm that performs speckle tracking iteratively. The proposed technique was evaluated using a publicly available synthetic echocardiographic dataset with known ground truth, covering several major vendors and healthy/ischaemic cases. The results were compared with those from classic (standard) two-dimensional block matching. The proposed method produced an average displacement error of 0.57 pixels, while classic block matching produced an average error of 1.15 pixels. When estimating segmental/regional longitudinal strain in healthy cases, the proposed method, with an average error of 0.32 ± 0.53, outperformed the classic counterpart, with an average of 3.43 ± 2.84. A similar superior performance was observed in ischaemic cases. The method does not require any additional ad hoc filtering, so it can potentially help reduce the variability in strain measurements caused by the various post-processing techniques applied by different implementations of speckle tracking. Graphical Abstract: Standard block matching versus the proposed iterative block matching approach.
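For reference, the classic (standard) block matching baseline mentioned above amounts to an exhaustive sum-of-absolute-differences search, as in this minimal sketch; the block and search-window sizes are arbitrary, and the iterative optimisation proposed in the paper is not shown.

```python
import numpy as np

def block_match(frame_a: np.ndarray, frame_b: np.ndarray,
                centre: tuple, block: int = 8, search: int = 4) -> tuple:
    """Find the displacement of a (2*block+1)^2 patch from frame_a in frame_b
    by minimising the sum of absolute differences (SAD) over a search window."""
    r, c = centre
    ref = frame_a[r - block:r + block + 1, c - block:c + block + 1].astype(float)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[r + dy - block:r + dy + block + 1,
                           c + dx - block:c + dx + block + 1].astype(float)
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx

# Synthetic test: shift a random texture by (2, -1) pixels and recover the motion.
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(2, -1), axis=(0, 1))
print(block_match(a, b, centre=(32, 32)))   # (2, -1)
```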


Diagnosis, Computer-Assisted/methods , Echocardiography/methods , Myocardial Ischemia/diagnosis , Algorithms , Databases, Factual , Humans , Image Processing, Computer-Assisted/methods
12.
Int J Comput Assist Radiol Surg ; 14(4): 611-621, 2019 Apr.
Article En | MEDLINE | ID: mdl-30666547

PURPOSE: This study aims to adapt and evaluate the performance of different state-of-the-art deep learning object detection methods for automatically identifying esophageal adenocarcinoma (EAC) regions in high-definition white light endoscopy (HD-WLE) images. METHOD: Several state-of-the-art object detection methods based on Convolutional Neural Networks (CNNs) were adapted to automatically detect abnormal regions in esophageal HD-WLE images, using VGG16 as the backbone architecture for feature extraction. These methods are the Region-based Convolutional Neural Network (R-CNN), Fast R-CNN, Faster R-CNN, and the Single-Shot Multibox Detector (SSD). For evaluation, 100 images from 39 patients, manually annotated by five experienced clinicians to provide the ground truth, were used for testing. RESULTS: Experimental results show that the SSD and Faster R-CNN networks perform well, with the SSD outperforming the other methods and achieving a sensitivity of 0.96, specificity of 0.92, and F-measure of 0.94. Additionally, the average recall rate of the Faster R-CNN in accurately locating the EAC region is 0.83. CONCLUSION: In this paper, recent deep learning object detection methods are adapted to detect esophageal abnormalities automatically. The evaluation demonstrated their ability to locate abnormal regions in the esophagus from endoscopic images. Automatic detection is a crucial step that may support the early detection and treatment of EAC, and it can also improve automatic tumor segmentation for monitoring growth and treatment outcome.
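For clarity, the reported metrics relate to the detection confusion counts as in this small sketch; the counts used here are illustrative only and are not the study's results.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity (recall), specificity, precision, and F-measure from detection counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f_measure": f_measure}

# Illustrative counts only (not the study's confusion matrix).
print(detection_metrics(tp=48, fp=4, tn=46, fn=2))
```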


Adenocarcinoma/diagnosis , Deep Learning , Early Diagnosis , Esophageal Neoplasms/diagnosis , Neural Networks, Computer , Humans , Reproducibility of Results
13.
Eur Heart J Cardiovasc Imaging ; 19(12): 1380-1389, 2018 12 01.
Article En | MEDLINE | ID: mdl-29346531

Aims: Measurements with superior reproducibility are useful for both clinical and research purposes. Previous reproducibility studies of Doppler assessment of aortic stenosis (AS) have compared only a pair of observers and have not explored the mechanism by which disagreement between operators occurs. Using custom-designed software which stored operators' traces, we investigated the reproducibility of peak velocity and velocity time integral (VTI) measurements across a much larger group of operators and explored the mechanisms by which disagreement arose. Methods and results: Twenty-five observers reviewed continuous wave (CW) aortic valve (AV) and pulsed wave (PW) left ventricular outflow tract (LVOT) Doppler traces from 20 sequential cases of AS in random order. Each operator unknowingly measured each peak velocity and VTI twice. VTI tracings were stored for comparison. Measuring the peak is much more reproducible than measuring the VTI, for both PW (coefficient of variation 10.1 vs. 18.0%; P < 0.001) and CW traces (coefficient of variation 4.0 vs. 10.2%; P < 0.001). VTI is inferior because the steep early and late parts of the envelope are difficult to trace reproducibly. The dimensionless index improves reproducibility because operators tended to consistently over-read or under-read on LVOT and AV traces from the same patient (coefficient of variation 9.3 vs. 17.1%; P < 0.001). Conclusion: It is far more reproducible to measure the peak of a Doppler trace than the VTI, a strategy that reduces measurement variance approximately six-fold. Peak measurements are superior to VTI because tracing the steep slopes in the early and late parts of the VTI envelope is difficult to do reproducibly.
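A coefficient of variation from blinded repeat measurements, as reported above, is commonly computed from the within-subject SD of the paired repeats; the sketch below uses that convention (the paper's exact formula may differ) with made-up VTI values.

```python
import numpy as np

def repeatability_cv(first: np.ndarray, second: np.ndarray) -> float:
    """Within-operator coefficient of variation from blinded repeat measurements.

    Uses the within-subject SD for paired repeats, sw = sqrt(mean(d^2) / 2),
    expressed as a percentage of the overall mean (one common convention)."""
    d = first - second
    sw = np.sqrt(np.mean(d ** 2) / 2)
    return 100 * sw / np.mean(np.concatenate([first, second]))

# Illustrative repeat VTI measurements (cm) for five cases, traced twice each.
first  = np.array([62.0, 81.5, 55.3, 90.2, 70.1])
second = np.array([58.4, 86.0, 60.1, 84.7, 73.5])
print(f"CV = {repeatability_cv(first, second):.1f}%")
```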


Aortic Valve Stenosis/diagnostic imaging , Blood Flow Velocity/physiology , Echocardiography, Doppler, Pulsed/methods , Echocardiography/methods , Stroke Volume/physiology , Age Factors , Aged , Aged, 80 and over , Analysis of Variance , Aortic Valve Stenosis/physiopathology , Cohort Studies , Databases, Factual , Female , Humans , Male , Observer Variation , Risk Assessment , Severity of Illness Index , Sex Factors , Time Factors
14.
Echocardiography ; 34(7): 956-967, 2017 Jul.
Article En | MEDLINE | ID: mdl-28573718

BACKGROUND: Correctly selecting the end-diastolic and end-systolic frames on a 2D echocardiogram is important and challenging, for both human experts and automated algorithms. Manual selection is time-consuming, subject to uncertainty, and may affect the results obtained, especially for advanced measurements such as myocardial strain. METHODS AND RESULTS: We developed and evaluated algorithms which can automatically extract global and regional cardiac velocity and identify end-diastolic and end-systolic frames. We acquired apical four-chamber 2D echocardiographic video recordings, each at least 10 heartbeats long, twice from each of 19 patients at frame rates of 52 and 79 frames/s, yielding 38 recordings. Five experienced echocardiographers independently marked end-systolic and end-diastolic frames for the first 10 heartbeats of each recording, as did the automated algorithm. Using the average of the time points identified by the five human operators as the reference gold standard, the individual operators had a root mean square difference from that gold standard of 46.5 ms, whereas the algorithm's root mean square difference was 40.5 ms (P<.0001). Put another way, the algorithm-identified time point was an outlier in 122/564 heartbeats (21.6%), whereas the average human operator was an outlier in 254/564 heartbeats (45%). CONCLUSION: An automated algorithm can identify the end-systolic and end-diastolic frames with performance indistinguishable from that of human experts. This saves staff time, which could be invested in assessing more beats, and reduces uncertainty about the reliability of the choice of frame.


Echocardiography/methods , Heart/diagnostic imaging , Heart/physiology , Adult , Aged , Aged, 80 and over , Algorithms , Diastole , Female , Humans , Male , Middle Aged , Observer Variation , Reproducibility of Results , Systole
15.
Int J Cardiovasc Imaging ; 33(8): 1135-1148, 2017 Aug.
Article En | MEDLINE | ID: mdl-28220275

Current guidelines for measuring cardiac function by tissue Doppler recommend using multiple beats, but this has a time cost for human operators. We present open-source, vendor-independent, drag-and-drop software capable of automating the measurement process. A database of ~8000 tissue Doppler beats (48 patients) from the septal and lateral annuli was analyzed by three expert echocardiographers. We developed an intensity- and gradient-based automated algorithm to measure tissue Doppler velocities and tested its performance against the manual measurements of the expert human operators. The algorithm showed strong agreement with the experts, with performance indistinguishable from a human operator: for the algorithm, the mean difference and standard deviation of differences (SDD) from the mean of the human operators' estimates was 0.48 ± 1.12 cm/s (R2 = 0.82); for the humans individually this was 0.43 ± 1.11 cm/s (R2 = 0.84), -0.88 ± 1.12 cm/s (R2 = 0.84), and 0.41 ± 1.30 cm/s (R2 = 0.78). Agreement between the operators and the automated algorithm was preserved when measuring at either the edge or the middle of the trace. The algorithm was 10-fold quicker than manual measurement (p < 0.001). This open-source, vendor-independent, drag-and-drop software can make peak velocity measurements from pulsed wave tissue Doppler traces as accurately as human experts, permitting rapid, bias-resistant multi-beat analysis of spectral tissue Doppler images.


Algorithms , Cardiac-Gated Imaging Techniques , Echocardiography, Doppler, Pulsed/methods , Heart Diseases/diagnostic imaging , Heart Rate , Image Interpretation, Computer-Assisted/methods , Software , Aged , Automation , Female , Heart Diseases/physiopathology , Humans , Male , Middle Aged , Myocardial Contraction , Observer Variation , Predictive Value of Tests , Reproducibility of Results , Ventricular Function, Left
16.
Int J Cardiol ; 218: 31-36, 2016 Sep 01.
Article En | MEDLINE | ID: mdl-27232908

OBJECTIVES: To determine the optimal frame rate at which heart wall velocities can be reliably assessed by speckle tracking. BACKGROUND: Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, identifying the temporal resolution at which reliable results can be obtained. MATERIAL AND METHODS: 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. RESULTS: Above 40 frames/s the peak velocity was reliably measured. At lower frame rates, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity over that interval underestimated the clinically desired instantaneous maximum velocity. CONCLUSIONS: The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking, up to approximately 40 frames/s, beyond which there is little further increase in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment, to help others working in this field.
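The underestimation mechanism described in the results (frame-to-frame tracking reports the average velocity over each inter-frame interval) can be illustrated with a purely synthetic sketch; the logistic wall-motion model and the 8 cm/s peak are arbitrary assumptions and are not the study's data.

```python
import numpy as np

def measured_peak_velocity(fps: float, duration: float = 1.0) -> float:
    """Peak velocity as seen by frame-to-frame tracking at a given frame rate.

    The true motion is a smooth logistic displacement whose instantaneous peak
    velocity is 8 cm/s; tracking can only report the average velocity over each
    inter-frame interval, which underestimates a sharp peak at low frame rates."""
    t = np.arange(0, duration, 1 / fps)
    displacement = 0.8 / (1 + np.exp(-(t - 0.3) / 0.025))   # cm, logistic wall motion
    frame_velocity = np.diff(displacement) * fps             # average cm/s per interval
    return frame_velocity.max()

for fps in (80, 40, 20, 10):                                 # emulate frame dropping
    print(f"{fps:>3} frames/s -> measured peak {measured_peak_velocity(fps):.2f} cm/s "
          f"(true instantaneous peak 8.00)")
```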


Echocardiography/standards , Image Interpretation, Computer-Assisted/standards , Software/standards , Adult , Aged , Aged, 80 and over , Echocardiography/methods , Female , Heart Rate/physiology , Humans , Image Interpretation, Computer-Assisted/methods , Male , Middle Aged , Time Factors
17.
Int J Cardiovasc Imaging ; 31(7): 1303-14, 2015 Oct.
Article En | MEDLINE | ID: mdl-26141526

Left ventricular function can be evaluated by qualitative grading and by eyeball estimation of ejection fraction (EF). We sought to define the reproducibility of these techniques and how they are affected by image quality, experience, and accreditation. Twenty apical four-chamber echocardiographic cine loops (Online Resource 1-20) of varying image quality and left ventricular function were anonymized and presented to 35 operators. Operators were asked to provide (1) a one-phrase grading of global systolic function, (2) an "eyeball" EF estimate, and (3) an image quality rating on a 0-100 visual analogue scale. Each observer unknowingly viewed every loop twice, a total of 1400 viewings. When grading LV function into five categories, an operator's chance of agreement with another operator was 50%, and with themselves on blinded re-presentation 68%. Blinded eyeball LVEF re-estimates by the same operator had a standard deviation (SD) of difference of 7.6 EF units, with the SD across operators averaging 8.3 EF units. Image quality, defined as the average of all operators' assessments, correlated with EF estimate variability (r = -0.616, p < 0.01) and with visual grading agreement (r = 0.58, p < 0.01). However, operators' own single quality assessments were not a useful forewarning of their estimate being an outlier, partly because individual quality assessments had poor within-operator reproducibility (SD of difference 17.8). The reproducibility of visual grading of LV function and of LVEF estimation therefore depends on image quality, but individuals cannot themselves identify when poor image quality is disrupting their LV function estimate. Clinicians should not assume that patients who change in grade or in visually estimated EF have had a genuine clinical change.


Accreditation/standards , Clinical Competence/standards , Echocardiography/standards , Stroke Volume , Ventricular Dysfunction, Left/diagnostic imaging , Ventricular Function, Left , Visual Perception , Adult , Aged , Female , Humans , Male , Middle Aged , Observer Variation , Predictive Value of Tests , Reproducibility of Results , Severity of Illness Index , Ventricular Dysfunction, Left/physiopathology
18.
IEEE Trans Med Imaging ; 33(5): 1071-82, 2014 May.
Article En | MEDLINE | ID: mdl-24770912

In clinical practice, echocardiographers are often reluctant to make the significant time investment required to take multiple additional measurements of Doppler velocity. The main hurdle to obtaining multiple measurements is the time required to manually trace a series of Doppler envelopes. To make it easier to analyze more beats, we describe an application for automated aortic Doppler envelope quantification, compatible with a range of hardware platforms. It analyses long Doppler strips spanning many heartbeats and does not require an electrocardiogram to separate individual beats. We tested its measurement of velocity-time integral and peak velocity against the reference standard, defined as the average of three experts who each made three separate measurements. The automated measurements of velocity-time integral showed strong correspondence (R2 = 0.94) and good Bland-Altman agreement (SD = 1.39 cm) with the reference consensus expert values, and indeed performed as well as the individual experts (R2 = 0.90 to 0.96, SD = 1.05 to 1.53 cm). The same performance was observed for peak velocities (automated: R2 = 0.98, SD = 3.07 cm/s; individual experts: R2 = 0.93 to 0.98, SD = 2.96 to 5.18 cm/s). This automated technology allows more than 10 times as many beats to be analyzed compared with the conventional manual approach, which would make clinical and research protocols more precise for the same operator effort.
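Once the Doppler envelope has been traced (manually or automatically), the velocity-time integral is simply the area under the velocity envelope for one ejection period. The sketch below shows that final integration step with an illustrative half-sine envelope; it is not the paper's envelope-detection algorithm.

```python
import numpy as np

def velocity_time_integral(velocity_cm_s, frame_interval_s: float) -> float:
    """VTI (cm): area under the traced velocity envelope, trapezoidal rule."""
    v = np.asarray(velocity_cm_s, dtype=float)
    return float((v[:-1] + v[1:]).sum() * frame_interval_s / 2)

# Illustrative aortic envelope: 300 ms ejection sampled every 10 ms, peaking at 120 cm/s.
t = np.linspace(0, 0.30, 31)
envelope = 120 * np.sin(np.pi * t / 0.30)            # half-sine velocity profile
vti = velocity_time_integral(envelope, frame_interval_s=0.01)
print(f"peak = {envelope.max():.0f} cm/s, VTI = {vti:.1f} cm")   # VTI of a half-sine ~ 23 cm
```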


Echocardiography, Doppler/methods , Image Processing, Computer-Assisted/methods , Algorithms , Electrocardiography , Female , Heart/physiology , Humans , Male , Middle Aged
19.
Eur Heart J Cardiovasc Imaging ; 15(7): 817-27, 2014 Jul.
Article En | MEDLINE | ID: mdl-24699322

BACKGROUND: Variability has been described between different echo machines and different modalities when measuring tissue velocities. We assessed the consistency of tissue velocity measurements across different modalities and different manufacturers in an in vitro model and in patients. Furthermore, we present freely available software tools to repeat these evaluations. METHODS AND RESULTS: We constructed a simple setup to generate reproducible motion and used it to compare velocities measured using three echocardiographic modalities (M-mode, speckle tracking, and tissue Doppler) with a straightforward, non-ultrasound, optical gold standard. In the clinical phase, 25 patients underwent M-mode, speckle tracking, and tissue Doppler measurements of s', e', and a' velocities. In vitro, the M-mode and speckle tracking velocities agreed with the optical assessment. Of the three possible tissue Doppler measurement conventions (outer, middle, and inner edge), only the middle agreed with the optical assessment (middle: discrepancy -0.20 (95% CI -0.44 to 0.03) cm/s, P = 0.11; outer: +5.19 (4.65 to 5.73) cm/s, P < 0.0001; inner: -6.26 (-6.87 to -5.65) cm/s, P < 0.0001). A similar pattern occurred across all four studied manufacturers. M-mode was therefore chosen as the in vivo gold standard. Clinical measurements of s' velocities by speckle tracking and by the middle line of the tissue Doppler trace showed concordance with M-mode, while the outer line overestimated significantly (+1.27 (0.96 to 1.59) cm/s, P < 0.0001) and the inner line underestimated (-1.82 (-2.11 to -1.52) cm/s, P < 0.0001). CONCLUSIONS: Echocardiographic velocity measurements can be more consistent than previously suspected. The statistically modal velocity, found at the centre of the spectral pulsed wave tissue Doppler envelope, most closely represents true tissue velocity. This article includes downloadable, vendor-independent software enabling calibration of echocardiographic machines using a simple, inexpensive in vitro setup.


Echocardiography, Doppler, Pulsed/methods , Echocardiography, Doppler, Pulsed/standards , Guidelines as Topic , Image Processing, Computer-Assisted , Laser-Doppler Flowmetry/standards , Phantoms, Imaging , Aged , Analysis of Variance , Blood Flow Velocity , Calibration , Female , Humans , Laser-Doppler Flowmetry/methods , Male , Middle Aged , Reproducibility of Results , Sampling Studies , Sensitivity and Specificity
20.
J Med Imaging (Bellingham) ; 1(3): 037001, 2014 Oct.
Article En | MEDLINE | ID: mdl-26158075

Obtaining a "correct" view in echocardiography is a subjective process in which an operator attempts to obtain images conforming to consensus standard views. Real-time objective quantification of image alignment may assist less experienced operators, but no reliable index yet exists. We present a fully automated algorithm for detecting incorrect medial/lateral translation of an ultrasound probe by image analysis. The ability of the algorithm to distinguish optimal from sub-optimal four-chamber images was compared to that of specialists-the current "gold-standard." The orientation assessments produced by the automated algorithm correlated well with consensus visual assessments of the specialists ([Formula: see text]) and compared favourably with the correlation between individual specialists and the consensus, [Formula: see text]. Each individual specialist's assessments were within the consensus of other specialists, [Formula: see text] of the time, and the algorithm's assessments were within the consensus of specialists 85% of the time. The mean discrepancy in probe translation values between individual specialists and their consensus was [Formula: see text], and between the automated algorithm and specialists' consensus was [Formula: see text]. This technology could be incorporated into hardware to provide real-time guidance for image optimisation-a potentially valuable tool both for training and quality control.

...