Results 1 - 20 of 22
1.
IEEE J Biomed Health Inform ; 27(6): 2739-2750, 2023 06.
Article in English | MEDLINE | ID: mdl-36223359

ABSTRACT

Early detection of retinal diseases is one of the most important means of preventing partial or permanent blindness in patients. In this research, a novel multi-label classification system is proposed for the detection of multiple retinal diseases, using fundus images collected from a variety of sources. First, a new multi-label retinal disease dataset, the MuReD dataset, is constructed, using a number of publicly available datasets for fundus disease classification. Next, a sequence of post-processing steps is applied to ensure the quality of the image data and the range of diseases present in the dataset. For the first time in fundus multi-label disease classification, a transformer-based model optimized through extensive experimentation is used for image analysis and decision making. Numerous experiments are performed to optimize the configuration of the proposed system. It is shown that the approach outperforms state-of-the-art works on the same task by 7.9% and 8.1% in terms of AUC score for disease detection and disease classification, respectively. The obtained results further support the potential applications of transformer-based architectures in the medical imaging field.
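The AUC scores reported above are per-disease ROC AUCs, macro-averaged over the label set. A minimal sketch of that metric, using made-up scores and labels rather than the paper's data:

```python
# Macro-averaged AUC for a multi-label classifier, the headline metric of the
# abstract. The scores and labels below are illustrative toy values.

def auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(score_matrix, label_matrix):
    """Average the per-disease AUC over all label columns."""
    n_labels = len(score_matrix[0])
    per_label = [auc([row[j] for row in score_matrix],
                     [row[j] for row in label_matrix]) for j in range(n_labels)]
    return sum(per_label) / n_labels, per_label

# Toy example: 4 fundus images, 2 disease labels.
scores = [[0.9, 0.2], [0.8, 0.7], [0.3, 0.6], [0.1, 0.8]]
labels = [[1, 0], [1, 1], [0, 1], [0, 0]]
overall, per_label = macro_auc(scores, labels)
print(overall, per_label)  # 0.75 [1.0, 0.5]
```

The rank-sum form avoids building an explicit ROC curve and handles tied scores gracefully.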


Subject(s)
Algorithms , Retinal Diseases , Humans , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Fundus Oculi , Image Processing, Computer-Assisted
2.
PLoS One ; 17(12): e0278885, 2022.
Article in English | MEDLINE | ID: mdl-36508426

ABSTRACT

The effect of spatial nonuniformity of the temperature distribution on the capability of machine-learning algorithms to provide accurate temperature predictions based on Laser Absorption Spectroscopy was examined. First, sixteen machine learning models were trained as surrogate models of conventional physical methods to measure temperature from uniform temperature distributions (uniform-profile spectra). The best three of them, Gaussian Process Regression (GPR), VGG13, and Boosted Random Forest (BRF), were shown to work excellently on uniform profiles, but their performance degraded tremendously on nonuniform-profile spectra. This indicated that directly applying uniform-profile-targeted methods to nonuniform profiles was improper. However, after retraining on nonuniform-profile data, the GPR and VGG13 models, which utilized all features of the spectra, not only showed good accuracy and sensitivity to spectral twins, but also showed excellent generalization performance on spectra of increased nonuniformity, demonstrating that the negative effects of nonuniformity on temperature measurement could be overcome. In contrast, BRF, which utilized partial features, did not generalize well, implying that the nonuniformity level had an impact on regional features of the spectra. By reducing the data dimensionality through t-SNE and LDA, visualizations of the data in two-dimensional feature spaces demonstrated that two datasets of substantially different levels of nonuniformity shared closely similar distributions in terms of both spectral appearance and spectrum-temperature mapping. Notably, datasets from uniform and nonuniform temperature distributions clustered in two different areas of the 2D spaces of the t-SNE and LDA features, with very few samples overlapping.
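The best-performing surrogate family named above, Gaussian Process Regression, can be sketched in a few lines. This is a generic RBF-kernel GPR on synthetic 1-D data, with illustrative hyperparameters, not the paper's spectral pipeline:

```python
# Minimal Gaussian Process Regression with an RBF kernel. The 1-D inputs
# stand in for a spectral feature; all values here are illustrative.
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gpr_predict(x_train, y_train, x_test, noise=1e-3):
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)           # K^{-1} y
    mean = K_s @ alpha                            # posterior mean
    v = np.linalg.solve(K, K_s.T)
    cov = rbf_kernel(x_test, x_test) - K_s @ v    # posterior covariance
    return mean, np.diag(cov)

x = np.linspace(0, 4, 9)
y = np.sin(x)                     # stand-in for a temperature response
xt = np.array([1.0, 2.5])
mean, var = gpr_predict(x, y, xt)
print(mean, var)
```

At (or near) training points the posterior mean tracks the data closely and the posterior variance collapses toward the noise level.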


Subject(s)
Algorithms , Machine Learning , Lasers
3.
Animals (Basel) ; 12(21)2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36359044

ABSTRACT

Fencing in livestock management is essential for location and movement control, yet conventional methods require close labour supervision, leading to increased costs and reduced flexibility. Consequently, virtual fencing (VF) systems have recently gained noticeable attention as an effective method for the maintenance and control of restricted areas for animals. Existing systems to control animal movement use audio followed by controversial electric shocks, which are prohibited in various countries. Accordingly, the present work investigated the sole application of audio signals in training and managing animal behaviour. Audio cues in the range of 125 Hz-17 kHz were used to prevent seven Hebridean ewes from entering a restricted area with a feed bowl. Two video-recorded trials were performed over the period of a year. Sound signals were activated when an animal approached a feed bowl or a restricted area with no feed bowl present. Results from both trials demonstrated that white noise and sounds in the frequency ranges of 125-440 Hz and 10-17 kHz successfully discouraged animals from entering a specific area, with an overall success rate of 89.88% (white noise: 92.28%, 10-14 kHz: 89.13%, 15-17 kHz: 88.48%, 125-440 Hz: 88.44%). The study demonstrated that unaided audio stimuli were effective at managing virtual fencing for sheep.

4.
J Imaging ; 8(10)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36286385

ABSTRACT

Retinal vessel analysis is a procedure that can be used as an assessment of risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against the baseline thresholded Frangi filter response, a naive connectivity filter, the naive connectivity filter combined with morphological closing, and current approaches in the literature. The proposal achieved competitive results on a variety of multimodal datasets. It was robust enough to outperform all state-of-the-art approaches in the literature on the OSIRIX angiographic dataset in terms of accuracy, and 4 out of 5 works on the IOSTAR dataset, while also outperforming several works on the DRIVE and STARE datasets and 6 out of 10 on the CHASE-DB dataset. For CHASE-DB, it also outperformed all state-of-the-art unsupervised methods.
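The core idea of a connectivity filter with a local tolerance can be illustrated on a binary mask: pixels are treated as connected if they lie within a small Chebyshev distance of each other, which bridges short breaks in a thresholded vesselness response. This is a sketch of the concept only, not the authors' LS-CF:

```python
# Connected-component labelling with a gap tolerance `tol`: foreground pixels
# within Chebyshev distance `tol` of each other join the same component,
# bridging small vessel discontinuities. Illustrative sketch.
from collections import deque

def connected_components(mask, tol=1):
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1
                labels[i][j] = current
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy in range(-tol, tol + 1):
                        for dx in range(-tol, tol + 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = current
                                q.append((ny, nx))
    return labels, current

# A "vessel" with a 1-pixel break: tol=1 sees two pieces, tol=2 bridges the gap.
mask = [[1, 1, 0, 1, 1]]
_, n_strict = connected_components(mask, tol=1)
_, n_tolerant = connected_components(mask, tol=2)
print(n_strict, n_tolerant)  # 2 1
```

Raising the tolerance merges fragments the thresholded filter response left disconnected, at the risk of merging genuinely separate structures.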

5.
R Soc Open Sci ; 8(1): 201823, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33614100

ABSTRACT

Since the coronavirus disease (COVID-19) outbreak in December 2019, studies have addressed diverse aspects of COVID-19 and Variant of Concern 202012/01 (VOC 202012/01), such as potential symptoms and predictive tools. However, limited work has been performed towards modelling the complex associations between combined demographic attributes and the varying nature of COVID-19 infections across the globe. This study presents an intelligent approach to investigate the multi-dimensional associations between demographic attributes and COVID-19 global variations. We gather multiple demographic attributes and COVID-19 infection data (by 8 January 2021) from reliable sources, which are then processed by intelligent algorithms to identify the significant associations and patterns within the data. Statistical results and experts' reports indicate strong associations between COVID-19 severity levels across the globe and certain demographic attributes, e.g. female smokers, when combined with other attributes. The outcomes will aid the understanding of the dynamics of disease spread and its progression, which in turn may support policy makers, medical specialists and society in better understanding and effective management of the disease.

6.
IEEE J Biomed Health Inform ; 24(12): 3507-3519, 2020 12.
Article in English | MEDLINE | ID: mdl-32750920

ABSTRACT

Vascular structures in the retina contain important information for the detection and analysis of ocular diseases, including age-related macular degeneration, diabetic retinopathy and glaucoma. Commonly used modalities in diagnosis of these diseases are fundus photography, scanning laser ophthalmoscope (SLO) and fluorescein angiography (FA). Typically, retinal vessel segmentation is carried out either manually or interactively, which makes it time consuming and prone to human errors. In this research, we propose a new multi-modal framework for vessel segmentation called ELEMENT (vEsseL sEgmentation using Machine lEarning and coNnecTivity). This framework consists of feature extraction and pixel-based classification using region growing and machine learning. The proposed features capture complementary evidence based on grey level and vessel connectivity properties. The latter information is seamlessly propagated through the pixels at the classification phase. ELEMENT reduces inconsistencies and speeds up the segmentation throughput. We analyze and compare the performance of the proposed approach against state-of-the-art vessel segmentation algorithms in three major groups of experiments, for each of the ocular modalities. Our method produced higher overall performance, with an overall accuracy of 97.40%, compared to 25 of the 26 state-of-the-art approaches, including six works based on deep learning, evaluated on the widely known DRIVE fundus image dataset. In the case of the STARE, CHASE-DB, VAMPIRE FA, IOSTAR SLO and RC-SLO datasets, the proposed framework outperformed all of the state-of-the-art methods with accuracies of 98.27%, 97.78%, 98.34%, 98.04% and 98.35%, respectively.
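The region-growing step named in the framework can be sketched generically: grow from a seed pixel, accepting 4-neighbours whose grey level stays within a tolerance of the running region mean, so grey-level evidence and connectivity propagate together. An illustrative sketch, not the ELEMENT implementation:

```python
# Seeded region growing on a grey-level image: accept a neighbour if its
# intensity is within `tol` of the current region mean. Illustrative only.
from collections import deque

def region_grow(img, seed, tol=10.0):
    h, w = len(img), len(img[0])
    region = {seed}
    total = float(img[seed[0]][seed[1]])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                mean = total / len(region)
                if abs(img[ny][nx] - mean) <= tol:  # grey-level + connectivity test
                    region.add((ny, nx))
                    total += img[ny][nx]
                    q.append((ny, nx))
    return region

# Bright vertical "vessel" (200) on a dark background (50).
img = [[200 if x == 2 else 50 for x in range(5)] for _ in range(5)]
vessel = region_grow(img, seed=(0, 2), tol=20)
print(sorted(vessel))
```

In a full pipeline the acceptance test would be a trained classifier over richer features rather than a fixed intensity tolerance.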


Subject(s)
Image Interpretation, Computer-Assisted/methods , Machine Learning , Retinal Vessels/diagnostic imaging , Adult , Aged , Aged, 80 and over , Algorithms , Diabetic Retinopathy/diagnostic imaging , Diagnostic Techniques, Ophthalmological , Humans , Middle Aged , Multimodal Imaging
7.
Sensors (Basel) ; 19(6)2019 Mar 13.
Article in English | MEDLINE | ID: mdl-30871162

ABSTRACT

Detection of abnormalities in wireless capsule endoscopy (WCE) images is a challenging task. Typically, these images suffer from low contrast, complex background, variations in lesion shape and color, which affect the accuracy of their segmentation and subsequent classification. This research proposes an automated system for detection and classification of ulcers in WCE images, based on state-of-the-art deep learning networks. Deep learning techniques, and in particular, convolutional neural networks (CNNs), have recently become popular in the analysis and recognition of medical images. The medical image datasets used in this study were obtained from WCE video frames. In this work, two milestone CNN architectures, namely the AlexNet and the GoogLeNet are extensively evaluated in object classification into ulcer or non-ulcer. Furthermore, we examine and analyze the images identified as containing ulcer objects to evaluate the efficiency of the utilized CNNs. Extensive experiments show that CNNs deliver superior performance, surpassing traditional machine learning methods by large margins, which supports their effectiveness as automated diagnosis tools.


Subject(s)
Capsule Endoscopy/methods , Neural Networks, Computer , Ulcer/diagnostic imaging , Deep Learning , Humans , Image Interpretation, Computer-Assisted , Image Processing, Computer-Assisted , Machine Learning
8.
Comput Biol Med ; 107: 97-108, 2019 04.
Article in English | MEDLINE | ID: mdl-30798220

ABSTRACT

One of the most promising clinical applications of Electrical Impedance Tomography (EIT) is real-time monitoring of lung function in ambulatory or ICU settings, due to the rapid, non-invasive and non-ionizing nature of the measurements. However, moving this modality into routine clinical use will, as a minimum, require the development of realistic and computationally efficient forward and inverse meshes of the thorax and the lungs, alongside mechanisms to extract quantitative information from the resulting reconstructed images. The latter will allow for reduction of artefacts and better localization of conductivity changes within the image domain. This research aims to contribute towards this goal by introducing a pipeline for the generation of anatomically accurate meshes for EIT forward and inverse models. We achieve this by the segmentation of realistic volumetric data from thoracic CT volumes, and subsequent tessellation. Mesh quality is assessed in terms of aspect ratio, dihedral and face angles. Moreover, the generated meshes are fused with currently available EIT software, with a novel electrode placement method, to show the practical application of the generated meshes. Results show that anatomically constrained unstructured meshes can be generated, conforming to realistic anatomical geometry and performing well in EIT numerical computations. Such realistic computational models will further enhance the performance of EIT reconstruction algorithms, thus offering significant benefits to clinical EIT lung imaging.
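Two of the mesh-quality measures mentioned above can be computed per element. The sketch below uses one common definition of aspect ratio (longest edge over shortest edge; other definitions exist) and the dihedral angle along one edge of a single tetrahedron:

```python
# Mesh-quality measures for one tetrahedron: edge-ratio aspect ratio and a
# dihedral angle. Definitions are illustrative choices, not the paper's exact ones.
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def aspect_ratio(verts):
    edges = [norm(sub(verts[i], verts[j]))
             for i in range(4) for j in range(i + 1, 4)]
    return max(edges) / min(edges)

def dihedral_angle(p, q, a, b):
    """Angle (degrees) between faces (p,q,a) and (p,q,b) along shared edge p-q."""
    n1 = cross(sub(q, p), sub(a, p))
    n2 = cross(sub(q, p), sub(b, p))
    c = dot(n1, n2) / (norm(n1) * norm(n2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

# Regular tetrahedron: aspect ratio 1, dihedral angle arccos(1/3) ~ 70.53 deg.
v = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
ar = aspect_ratio(v)
da = dihedral_angle(v[0], v[1], v[2], v[3])
print(ar, da)
```

Well-shaped elements sit near the regular-tetrahedron values; slivers show extreme aspect ratios and near-0 or near-180 degree dihedrals.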


Subject(s)
Electric Impedance , Imaging, Three-Dimensional/methods , Thorax/diagnostic imaging , Tomography/methods , Finite Element Analysis , Humans , Lung/diagnostic imaging , Tomography, X-Ray Computed
9.
Comput Methods Programs Biomed ; 144: 189-202, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28495002

ABSTRACT

BACKGROUND AND OBJECTIVE: State-of-the-art medical imaging techniques have enabled non-invasive imaging of the internal organs. However, high volumes of imaging data make manual interpretation and delineation of abnormalities cumbersome for clinicians. These challenges have driven intensive research into efficient medical image segmentation. In this work, we propose a hybrid region-based energy formulation for effective segmentation in computed tomography angiography (CTA) imagery. METHODS: The proposed hybrid energy couples an intensity-based local term with an efficient discontinuity-based global model of the image for optimal segmentation. The segmentation is achieved using a level set formulation due to its computational robustness. After validating the statistical significance of the hybrid energy, we applied the proposed model to solve an important clinical problem of 3D coronary segmentation. An improved seed detection method is used to initialize the level set evolution. Moreover, we employed an auto-correction feature that captures the emerging peripheries during the curve evolution for completeness of the coronary tree. RESULTS: We evaluated the segmentation accuracy of the proposed energy model against existing techniques in two stages. Qualitative and quantitative results demonstrate the effectiveness of the proposed framework, with consistent mean sensitivity and specificity measures of 80% across the CTA data. Moreover, a high degree of agreement with respect to the inter-observer differences justifies the generalization of the proposed method. CONCLUSIONS: The proposed method is effective in segmenting the coronary tree from the CTA volume based on a hybrid image-based energy, which can improve the clinicians' ability to detect arterial abnormalities.
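The sensitivity and specificity figures reported above follow directly from voxel-level confusion counts. A minimal sketch with made-up counts, not the paper's data:

```python
# Sensitivity and specificity from a segmentation confusion matrix.
# The counts below are illustrative, chosen to land at the ~80% reported.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of true vessel voxels detected
    specificity = tn / (tn + fp)   # fraction of background voxels kept out
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=800, fn=200, tn=8000, fp=2000)
print(sens, spec)  # 0.8 0.8
```

Because background voxels vastly outnumber vessel voxels in CTA, specificity alone is easy to inflate, which is why both measures are reported together.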


Subject(s)
Computed Tomography Angiography , Coronary Angiography , Image Interpretation, Computer-Assisted , Algorithms , Coronary Vessels/diagnostic imaging , Humans , Sensitivity and Specificity
10.
Comput Math Methods Med ; 2016: 2695962, 2016.
Article in English | MEDLINE | ID: mdl-27403203

ABSTRACT

Coronary artery disease (CAD) is the most common type of heart disease in western countries. Early detection and diagnosis of CAD is essential to preventing mortality and subsequent complications. We believe hemodynamic data derived from patient-specific computational models could facilitate more accurate prediction of the risk of atherosclerosis. We introduce a semiautomated method to build 3D patient-specific coronary vessel models from 2D monoplane angiogram images. The main contribution of the method is a robust segmentation approach using dynamic programming combined with iterative 3D reconstruction to build 3D mesh models of the coronary vessels. Results indicate the accuracy and robustness of the proposed pipeline. In conclusion, patient-specific modelling of coronary vessels is of vital importance for developing accurate computational flow models and studying the hemodynamic effects of the presence of plaques on the arterial walls, resulting in lumen stenoses, as well as variations in the angulations of the coronary arteries.
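The dynamic-programming segmentation idea can be sketched with the classic minimum-cost-path recurrence: each pixel in a row connects to the three pixels above it, so a globally optimal border (for example through an inverted edge map) is found in one pass plus a backtrack. Illustrative only, not the authors' formulation:

```python
# Minimum-cost top-to-bottom path through a cost image via dynamic programming.
# A low-cost "valley" in the cost image plays the role of a vessel border.

def min_cost_path(cost):
    h, w = len(cost), len(cost[0])
    acc = [row[:] for row in cost]            # accumulated cost table
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(0, j - 1), min(w - 1, j + 1)
            acc[i][j] += min(acc[i - 1][lo:hi + 1])
    j = min(range(w), key=lambda k: acc[h - 1][k])
    total = acc[h - 1][j]
    path = [j]                                # backtrack the optimal column indices
    for i in range(h - 1, 0, -1):
        lo, hi = max(0, j - 1), min(w - 1, j + 1)
        j = min(range(lo, hi + 1), key=lambda k: acc[i - 1][k])
        path.append(j)
    return path[::-1], total

cost = [
    [9, 1, 9],
    [9, 1, 9],
    [9, 9, 1],
]
path, total = min_cost_path(cost)
print(path, total)  # [1, 1, 2] 3
```

Unlike greedy tracking, the DP pass guarantees the globally cheapest path under the connectivity constraint.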


Subject(s)
Angiography/methods , Coronary Vessels/diagnostic imaging , Imaging, Three-Dimensional , Blood Vessels/physiopathology , Computer Simulation , Coronary Vessels/pathology , Coronary Vessels/physiopathology , Hemodynamics , Humans , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated , ROC Curve , Radiographic Image Interpretation, Computer-Assisted , Software , X-Rays
11.
J Opt Soc Am A Opt Image Sci Vis ; 33(1): 84-94, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26831589

ABSTRACT

Automated analysis of retinal images plays a vital role in the examination, diagnosis, and prognosis of healthy and pathological retinas. Retinal disorders and the associated visual loss can be interpreted via quantitative correlations, based on measurements of photoreceptor loss. Therefore, it is important to develop reliable tools for identification of photoreceptor cells. In this paper, an automated algorithm is proposed, based on the use of the Hessian-Laplacian of Gaussian filter, which allows enhancement and detection of photoreceptor cells. The performance of the proposed technique is evaluated on both synthetic and high-resolution retinal images, in terms of packing density. The results on the synthetic data were compared against ground truth as well as cone counts obtained by the Li and Roorda algorithm. For the synthetic datasets, our method showed an average detection accuracy of 98.8%, compared to 93.9% for the Li and Roorda approach. The packing density estimates calculated on the retinal datasets were validated against manual counts and the results obtained by a proprietary software from Imagine Eyes and the Li and Roorda algorithm. Among the tested methods, the proposed approach showed the closest agreement with manual counting.
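The blob-enhancing behaviour behind the filter described above can be illustrated with a plain Laplacian-of-Gaussian: a bright, roughly Gaussian photoreceptor produces a strong negative LoG response at its centre. Kernel size and sigma below are illustrative, and this is not the authors' full Hessian-LoG pipeline:

```python
# Laplacian-of-Gaussian blob detection sketch: build a sampled LoG kernel,
# convolve a synthetic image containing one bright "cone", and locate the
# most negative response. Parameters are illustrative.
import math

def log_kernel(size, sigma):
    """Sampled (unnormalised) Laplacian-of-Gaussian kernel."""
    c = size // 2
    k = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            g = math.exp(-r2 / (2 * sigma ** 2))
            row.append((r2 - 2 * sigma ** 2) / sigma ** 4 * g)
        k.append(row)
    return k

def convolve(img, k):
    """Direct 2-D correlation with zero padding at the borders."""
    kh, kw = len(k), len(k[0])
    oy, ox = kh // 2, kw // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - oy, x + j - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * k[i][j]
            out[y][x] = s
    return out

# Single bright "cone" at (3, 3): the LoG response is most negative there.
img = [[100 if (y, x) == (3, 3) else 0 for x in range(7)] for y in range(7)]
resp = convolve(img, log_kernel(5, 1.0))
peak = min((resp[y][x], (y, x)) for y in range(7) for x in range(7))
print(peak[1])  # (3, 3)
```

Counting such minima over a retinal patch, then dividing by the patch area, gives a packing-density estimate of the kind validated in the abstract.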


Subject(s)
Image Processing, Computer-Assisted/methods , Molecular Imaging/instrumentation , Optical Devices , Retina/cytology , Retinal Cone Photoreceptor Cells/cytology , Algorithms , Normal Distribution
12.
EuroIntervention ; 9(2): 277-84, 2013 Jun 22.
Article in English | MEDLINE | ID: mdl-23793012

ABSTRACT

Studies evaluating the diagnostic performance of coronary computed tomography angiography (CTA) are consistent in demonstrating a high negative predictive accuracy, but only a modest positive predictive accuracy, for the detection of significant coronary artery disease. Consequently, there has been a considerable effort made to enhance the diagnostic capability of coronary CTA by developing scanner technology as well as post-processing algorithms for coronary stenosis evaluation. Of these new developments, the proposition of being able to measure non-invasive fractional flow reserve by coronary computed tomography angiography (FFRct) has generated much recent interest. Initial reports indicate that the application of FFRct not only correlates well with invasive fractional flow reserve but also has the potential to substantially enhance the positive predictive accuracy and overall accuracy of coronary CTA. Although it is theoretically possible to measure FFRct using complex computational fluid dynamics adapted from the aeronautical industry, this approach is likely to face a number of challenges before it is accepted into the mainstream as an adjunct to coronary CTA. The aim of the current review is to provide an overview of: 1) the fundamental engineering principles behind computational fluid dynamic modelling of coronary arterial blood flow; 2) the difficulties faced from an engineering perspective in developing a truly representative model; and 3) the challenges this technology is likely to face as it attempts to enter the clinical domain.
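What FFR expresses is simply distal coronary pressure over aortic pressure. As a back-of-envelope illustration only, and emphatically not the 3-D CFD the review discusses, a Poiseuille (laminar straight-tube) pressure drop gives the flavour; every number below is an illustrative assumption:

```python
# Back-of-envelope FFR illustration: pressure drop across a vessel segment
# from the Poiseuille law, then FFR = (Pa - dP) / Pa. All values illustrative.
import math

def poiseuille_drop(flow_ml_s, radius_mm, length_mm, viscosity_pa_s=0.0035):
    """Pressure drop (mmHg) of laminar flow through a straight rigid tube."""
    q = flow_ml_s * 1e-6          # m^3/s
    r = radius_mm * 1e-3          # m
    l = length_mm * 1e-3          # m
    dp_pa = 8 * viscosity_pa_s * l * q / (math.pi * r ** 4)
    return dp_pa / 133.322        # Pa -> mmHg

pa = 93.0                                   # assumed mean aortic pressure, mmHg
drop = poiseuille_drop(flow_ml_s=3.0, radius_mm=0.9, length_mm=20)
ffr = (pa - drop) / pa
print(round(drop, 2), round(ffr, 3))
```

The r^4 term is the key intuition: halving the lumen radius multiplies the pressure drop sixteen-fold, which is why modest stenoses can be hemodynamically significant.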


Subject(s)
Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Fractional Flow Reserve, Myocardial , Tomography, X-Ray Computed , Computer Simulation , Coronary Artery Disease/physiopathology , Coronary Vessels/physiopathology , Humans , Models, Cardiovascular , Numerical Analysis, Computer-Assisted , Predictive Value of Tests , Radiographic Image Interpretation, Computer-Assisted
13.
IEEE Trans Inf Technol Biomed ; 16(4): 782-8, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22481830

ABSTRACT

In this paper, we present a novel two-step algorithm for segmentation of coronary arteries in computed tomography images based on the framework of active contours. In the proposed method, both global and local intensity information is utilized in the energy calculation. The global term is defined as a normalized cumulative distribution function, which contributes to the overall active contour energy in an adaptive fashion based on image histograms, to deform the active contour away from local stationary points. Possible outliers, such as kissing vessel artifacts, are removed in the postprocessing stage by a slice-by-slice correction scheme based on multiregion competition, where both arteries and kissing vessels are identified and tracked through the slices. The efficiency and the accuracy of the proposed technique are demonstrated on both synthetic and real datasets. The results on clinical datasets show that the method is able to extract the major branches of arteries with an average distance of 0.73 voxels to the manually delineated ground truth data. In the presence of kissing vessel artifacts, the outer surface of the entire coronary tree, extracted by the proposed algorithm, is smooth and contains fewer erroneous regions, originating in kissing vessel artifacts, as compared to the initial segmentation.
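The global term described above, a normalised cumulative distribution function built from the image histogram, is straightforward to sketch; bin count and intensity range here are illustrative:

```python
# Normalised CDF from an image histogram: monotone non-decreasing, ending at 1.0.
# This sketches the form of the global term, not the paper's full energy.

def normalized_cdf(pixels, n_bins=8, lo=0, hi=255):
    hist = [0] * n_bins
    width = (hi - lo + 1) / n_bins
    for p in pixels:
        hist[min(n_bins - 1, int((p - lo) / width))] += 1
    total = sum(hist)
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running / total)
    return cdf

pixels = [0, 10, 40, 80, 120, 200, 250, 255]
cdf = normalized_cdf(pixels)
print(cdf)
```

Because the CDF adapts to the histogram of each image, an energy term built on it can weight intensities in an image-dependent rather than fixed way.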


Subject(s)
Coronary Angiography/methods , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Artifacts , Computer Simulation , Databases, Factual , Humans
14.
IEEE Trans Biomed Eng ; 59(7): 1850-60, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22180504

ABSTRACT

Reliable and reproducible estimation of vessel centerlines and reference surfaces is an important step for the assessment of luminal lesions. Conventional methods are commonly developed for quantitative analysis of the "straight" vessel segments and have limitations in defining the precise location of the centerline and the reference lumen surface for both the main vessel and the side branches in the vicinity of bifurcations. To address this, we propose the estimation of the centerline and the reference surface through the registration of an elliptical cross-sectional tube to the desired constituent vessel in each major bifurcation of the arterial tree. The proposed method works directly on the mesh domain, thus alleviating the need for image upsampling, usually required in conventional volume domain approaches. We demonstrate the efficiency and accuracy of the method on both synthetic images and coronary CT angiograms. Experimental results show that the new method is capable of estimating vessel centerlines and reference surfaces with a high degree of agreement to those obtained through manual delineation. The centerline errors are reduced by an average of 62.3% in the regions of the bifurcations, when compared to the results of the initial solution obtained through the use of mesh contraction method.


Subject(s)
Coronary Angiography/methods , Imaging, Three-Dimensional/methods , Models, Cardiovascular , Tomography, X-Ray Computed/methods , Coronary Vessels/anatomy & histology , Humans
15.
Article in English | MEDLINE | ID: mdl-22255190

ABSTRACT

In the forward EIT problem, numerical solutions of an elliptic partial differential equation are required. Given the arbitrary geometries encountered, the Finite Element Method (FEM) is, naturally, the method of choice. Nowadays, in EIT applications, there is an increasing demand for finer Finite Element mesh models. This in turn results in a soaring number of degrees of freedom and an excessive number of unknowns. As such, only piece-wise linear basis functions can practically be employed to maintain inexpensive computations. In addition, domain reduction and/or compression schemes are often sought to further counteract the growing number of unknowns. In this paper, we replace the piece-wise linear with wavelet basis functions (coupled with the domain embedding method) to enable sparse approximations of the forward computations. Given that the forward solutions are repeatedly, if not extensively, utilised during the image reconstruction process, considerable computational savings can be recorded whilst maintaining O(N) forward problem complexity. We verify with numerical results that, in practice, less than 5% of the involved coefficients are actually required for computations and, hence, need to be stored. We conclude this work by addressing the impact on the inverse problem. It is worth underlining that the proposed scheme is independent of the actual family of compactly supported wavelet basis functions.
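The sparsity claim above can be illustrated with the simplest compactly supported wavelet, the Haar basis: a piecewise-smooth signal transforms into mostly negligible coefficients, and keeping only the few significant ones still reconstructs it. A 1-D toy sketch, not the paper's FEM setting:

```python
# 1-D Haar wavelet transform, hard thresholding, and exact reconstruction
# of a piecewise-constant signal from very few retained coefficients.

def haar_forward(x):
    out, n = list(x), len(x)
    while n > 1:
        half = n // 2
        avg = [(out[2*i] + out[2*i + 1]) / 2 for i in range(half)]
        dif = [(out[2*i] - out[2*i + 1]) / 2 for i in range(half)]
        out[:n] = avg + dif
        n = half
    return out

def haar_inverse(c):
    out, n = list(c), 1
    while n < len(c):
        avg, dif = out[:n], out[n:2*n]
        merged = []
        for a, d in zip(avg, dif):
            merged += [a + d, a - d]
        out[:2*n] = merged
        n *= 2
    return out

signal = [4.0] * 8 + [1.0] * 8        # piecewise constant, one jump
coeffs = haar_forward(signal)
kept = [c if abs(c) > 1e-9 else 0.0 for c in coeffs]
nonzero = sum(1 for c in kept if c)
recon = haar_inverse(kept)
print(nonzero, recon == signal)  # 2 True
```

Here only 2 of 16 coefficients (12.5%) carry information; smoother signals in higher dimensions motivate the sub-5% figure reported in the abstract.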


Subject(s)
Finite Element Analysis , Models, Theoretical
16.
Article in English | MEDLINE | ID: mdl-21096680

ABSTRACT

We investigate the use of the Domain Embedding Method (DEM) for forward modelling in EIT. This approach is suitably configured to overcome the model meshing bottleneck, since it does not require the mesh on the domain to be adapted to the boundary surface. This is of crucial importance for, e.g., clinical applications of EIT, as it avoids tedious and time-consuming (re-)meshing procedures. The suggested DEM approach can accommodate arbitrary yet Lipschitz-smooth boundary surfaces and is not limited to polygonal domains. For discretisation purposes, we employ B-splines, as they allow for arbitrary accuracy by raising the polynomial degree and are easy to implement due to their inherent piecewise polynomial structure. Numerical experiments confirm that a B-spline discretisation yields, similarly to conventional Finite Difference discretisations, increasing condition numbers of the system matrix with respect to the discretisation levels. Fortunately, multiresolution ideas based on B-splines allow for optimal wavelet preconditioning.
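B-spline basis functions of any degree can be evaluated with the Cox-de Boor recursion; inside the interior knot span they form a partition of unity, which is what makes them well behaved as a discretisation basis. A generic sketch with an illustrative uniform knot vector:

```python
# Cox-de Boor recursion for B-spline basis functions, plus a partition-of-unity
# check for degree 2 on a uniform knot vector. Knots are illustrative.

def bspline_basis(i, p, t, knots):
    """Value of the i-th degree-p B-spline basis function at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0:
        left = (t - knots[i]) / d1 * bspline_basis(i, p - 1, t, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0:
        right = (knots[i + p + 1] - t) / d2 * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

knots = list(range(8))                # uniform knots 0..7
t = 3.5                               # inside the interior span [2, 5]
values = [bspline_basis(i, 2, t, knots) for i in range(5)]
print(values, sum(values))
```

Raising the degree p increases smoothness and approximation order without changing the evaluation scheme, which matches the "arbitrary accuracy by raising the polynomial degree" point above.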


Subject(s)
Dielectric Spectroscopy/methods , Image Interpretation, Computer-Assisted/methods , Models, Biological , Tomography/methods , Animals , Computer Simulation , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
17.
Article in English | MEDLINE | ID: mdl-21096682

ABSTRACT

In this paper, we employ the concept of the Fisher information matrix (FIM) to reformulate and improve on the "Newton's One-Step Error Reconstructor" (NOSER) algorithm. The FIM is a systematic approach for incorporating statistical properties of noise, modeling errors and multi-frequency data. The method is discussed in a maximum likelihood estimator (MLE) setting. The ill-posedness of the inverse problem is mitigated by means of a nonlinear regularization strategy. It is shown that the overall approach reduces to the maximum a posteriori (MAP) estimator with the prior (conductivity vector) described by a multivariate normal distribution. The covariance matrix of the prior is a diagonal matrix and is computed directly from the Fisher information matrix. An eigenvalue analysis is presented, revealing the advantages of using this prior over a Gaussian smoothness prior (Laplace). Reconstructions are shown using measured data obtained from shallow breathing of an adult human subject. The reconstructions show that the FIM approach clearly improves on the original NOSER algorithm.
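The MAP estimator with a diagonal Gaussian prior that the abstract reduces to has a closed form for a linear model. A toy 2-unknown sketch of that form; the sensitivity matrix, variances and data below are all illustrative, not an EIT model:

```python
# MAP estimate for y = J x + noise with prior x ~ N(0, C), C diagonal:
#   x_map = (J^T J / s^2 + C^{-1})^{-1} J^T y / s^2
# solved by hand for 2 unknowns. All numbers are illustrative.

def map_estimate_2d(J, y, noise_var, prior_var):
    # A = J^T J / noise_var + diag(1 / prior_var)
    a = sum(r[0] * r[0] for r in J) / noise_var + 1 / prior_var[0]
    b = sum(r[0] * r[1] for r in J) / noise_var
    d = sum(r[1] * r[1] for r in J) / noise_var + 1 / prior_var[1]
    # g = J^T y / noise_var
    g0 = sum(r[0] * yi for r, yi in zip(J, y)) / noise_var
    g1 = sum(r[1] * yi for r, yi in zip(J, y)) / noise_var
    det = a * d - b * b                      # invert the 2x2 system
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

J = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]     # toy sensitivity matrix
y = [2.0, -1.0, 1.0]                         # noiseless J @ (2, -1)
x_map = map_estimate_2d(J, y, noise_var=0.1, prior_var=(10.0, 10.0))
print(x_map)
```

With a weak prior the estimate stays close to the least-squares solution; shrinking the prior variances pulls it toward zero, which is exactly the regularizing role the diagonal FIM-derived covariance plays.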


Subject(s)
Dielectric Spectroscopy/methods , Image Interpretation, Computer-Assisted/methods , Models, Biological , Tomography/methods , Animals , Computer Simulation , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
18.
IEEE Trans Neural Netw ; 19(9): 1574-82, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18779089

ABSTRACT

Wearable human movement measurement systems are increasingly popular as a means of capturing human movement data in real-world situations. Previous work has attempted to estimate segment kinematics during walking from foot acceleration and angular velocity data. In this paper, we propose a novel neural network [GRNN with Auxiliary Similarity Information (GASI)] that estimates joint kinematics by taking account of proximity and gait trajectory slope information through adaptive weighting. Furthermore, multiple kernel bandwidth parameters are used that can adapt to the local data density. To demonstrate the value of the GASI algorithm, hip, knee, and ankle joint motions are estimated from acceleration and angular velocity data for the foot and shank, collected using commercially available wearable sensors. Reference hip, knee, and ankle kinematic data were obtained using externally mounted reflective markers and infrared cameras for subjects while they walked at different speeds. The results provide further evidence that a neural net approach to the estimation of joint kinematics is feasible and shows promise, but other practical issues must be addressed before this approach is mature enough for clinical implementation. Furthermore, they demonstrate the utility of the new GASI algorithm for making estimates from continuous periodic data that include noise and a significant level of variability.


Subject(s)
Biomechanical Phenomena/methods , Gait/physiology , Models, Biological , Models, Theoretical , Monitoring, Ambulatory/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Algorithms , Artificial Intelligence , Biomechanical Phenomena/instrumentation , Computer Simulation , Humans , Monitoring, Ambulatory/instrumentation
19.
Catheter Cardiovasc Interv ; 71(1): 28-43, 2008 Jan 01.
Article in English | MEDLINE | ID: mdl-18098180

ABSTRACT

OBJECTIVE: To develop and implement a method for three-dimensional (3D) reconstruction of coronary arteries from conventional monoplane angiograms. BACKGROUND: 3D reconstruction of conventional coronary angiograms is a promising imaging modality for both diagnostic and interventional purposes. METHODS: Our method combines image enhancement, automatic edge detection, an iterative method to reconstruct the centerline of the artery, and reconstruction of the diameter of the vessel that takes foreshortening effects into consideration. The X-ray-based 3D coronary trees were compared against phantom data from a virtual arterial tree projected into two planes, as well as computed tomography (CT)-based coronary artery reconstructions in patients subjected to coronary angiography. RESULTS: The developed algorithm demonstrated perfect agreement with the phantom arterial tree. Visual comparison against the CT-based reconstruction was performed in the 3D space, in terms of the direction angle along the centerline length of the left anterior descending and circumflex arteries relative to the main stem, and the location and take-off angle of sample bifurcation branches from the main coronary arteries. Only minimal differences were detected between the two methods. Inter- and intraobserver variability of our method was low (intra-class correlation coefficients > 0.8). CONCLUSION: The developed method for coronary artery reconstruction from conventional angiography images provides the geometry of coronary arteries in the 3D space.


Subject(s)
Coronary Angiography/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Tomography, X-Ray Computed/methods , Algorithms , Humans , Observer Variation , Phantoms, Imaging
20.
IEEE Trans Neural Netw ; 18(6): 1683-96, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18051185

ABSTRACT

This paper proposes a new nonparametric regression method, based on the combination of generalized regression neural networks (GRNNs), density-dependent multiple kernel bandwidths, and regularization. The presented model is generic and substitutes the very large number of bandwidths with a much smaller number of trainable weights that control the regression model. It depends on sets of extracted data density features which reflect the density properties and distribution irregularities of the training data sets. We provide an efficient initialization scheme and a second-order algorithm to train the model, as well as an overfitting control mechanism based on Bayesian regularization. Numerical results show that the proposed network manages to reduce significantly the computational demands of having individual bandwidths, while at the same time, provides competitive function approximation accuracy in relation to existing methods.
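The GRNN family underlying this work is Nadaraya-Watson kernel regression; a simple way to make the bandwidths density-dependent is to scale each training sample's bandwidth by the distance to its k-th nearest neighbour. This sketch illustrates that generic idea, not the paper's trainable-weight formulation:

```python
# GRNN (Nadaraya-Watson) regression with per-sample bandwidths set from local
# data density (k-th nearest-neighbour distance). Parameters are illustrative.
import math

def knn_bandwidths(xs, k=2, scale=1.0):
    """Bandwidth per sample: scale times distance to its k-th nearest neighbour."""
    bw = []
    for i, xi in enumerate(xs):
        d = sorted(abs(xi - xj) for j, xj in enumerate(xs) if j != i)
        bw.append(max(scale * d[k - 1], 1e-6))
    return bw

def grnn_predict(xs, ys, bw, x):
    """Gaussian-kernel weighted average of the training targets."""
    ws = [math.exp(-((x - xi) ** 2) / (2 * b ** 2)) for xi, b in zip(xs, bw)]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]      # samples of y = x^2
bw = knn_bandwidths(xs, k=2, scale=0.5)
pred = grnn_predict(xs, ys, bw, 2.0)
print(bw, pred)
```

Edge samples, whose neighbours are farther away, automatically receive wider kernels, while dense interior regions get narrower ones; the paper's contribution is to replace the one-bandwidth-per-sample scheme with a much smaller set of trainable weights.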


Subject(s)
Algorithms , Artificial Intelligence , Electronic Data Processing , Neural Networks, Computer , Pattern Recognition, Automated , Regression Analysis , Bayes Theorem , Computer Simulation , Computing Methodologies , Image Processing, Computer-Assisted , Information Storage and Retrieval , Nonlinear Dynamics , Numerical Analysis, Computer-Assisted , Signal Processing, Computer-Assisted , Software