Results 1 - 20 of 82
1.
J Biomech Eng; 142(6), 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-31633169

ABSTRACT

In this work, we provide a quantitative assessment of the biomechanical and geometric features that characterize abdominal aortic aneurysm (AAA) models generated from 19 Asian and 19 Caucasian diameter-matched AAA patients. Three-dimensional patient-specific finite element models were generated and used to compute peak wall stress (PWS), 99th percentile wall stress (99th WS), and spatially averaged wall stress (AWS) for each AAA. In addition, 51 global geometric indices were calculated, which quantify the wall thickness, shape, and curvature of each AAA. The indices were correlated with 99th WS (the only biomechanical metric that exhibited significant association with geometric indices) using Spearman's correlation and subsequently with multivariate linear regression using backward elimination. For the Asian AAA group, 99th WS was highly correlated (R2 = 0.77) with three geometric indices, namely tortuosity, intraluminal thrombus volume, and area-averaged Gaussian curvature. Similarly, 99th WS in the Caucasian AAA group was highly correlated (R2 = 0.87) with six geometric indices, namely maximum AAA diameter, distal neck diameter, diameter-height ratio, minimum wall thickness variance, mode of the wall thickness variance, and area-averaged Gaussian curvature. Significant differences were found between the two groups for ten geometric indices; however, no differences were found for any of their respective biomechanical attributes. Taking maximum AAA diameter as the most predictive metric for wall stress proved imprecise: 24% and 28% accuracy for the Asian and Caucasian groups, respectively. This investigation reveals that geometric indices other than maximum AAA diameter can serve as predictors of wall stress, and potentially for assessment of aneurysm rupture risk, in the Asian and Caucasian AAA populations.
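The two-step statistical analysis described above (Spearman screening, then multivariate linear regression) can be sketched as follows; the data are synthetic stand-ins for three of the Asian-group indices, not the study's patient measurements, and backward elimination is reduced to a single fit:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-ins for three geometric indices and 99th-percentile wall stress
tortuosity = rng.uniform(1.0, 1.5, 19)
ilt_volume = rng.uniform(10, 120, 19)       # intraluminal thrombus volume, cm^3
gauss_curv = rng.uniform(-0.01, 0.01, 19)   # area-averaged Gaussian curvature
stress_99 = 2.0 * tortuosity + 0.01 * ilt_volume + rng.normal(0, 0.1, 19)

# Step 1: screen each index against 99th-percentile wall stress with Spearman's rho
for name, x in [("tortuosity", tortuosity), ("ILT volume", ilt_volume),
                ("Gaussian curvature", gauss_curv)]:
    rho, p = spearmanr(x, stress_99)
    print(f"{name}: rho={rho:.2f}, p={p:.3f}")

# Step 2: multivariate linear fit on the retained indices (backward elimination
# would iteratively drop the least significant predictor; shown here as one fit)
X = np.column_stack([np.ones(19), tortuosity, ilt_volume, gauss_curv])
beta, *_ = np.linalg.lstsq(X, stress_99, rcond=None)
pred = X @ beta
ss_res = np.sum((stress_99 - pred) ** 2)
ss_tot = np.sum((stress_99 - stress_99.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
```

On real data the retained predictors would be those surviving both the Spearman screen and the elimination loop.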


Subject(s)
Aortic Aneurysm, Abdominal , Finite Element Analysis , Biomechanical Phenomena , Humans , Male , Middle Aged , Models, Cardiovascular
2.
Biomed Eng Online ; 16(1): 36, 2017 Mar 23.
Article in English | MEDLINE | ID: mdl-28335790

ABSTRACT

Current clinically accepted technologies for cancer treatment still have limitations, which has led to the exploration of new therapeutic methods. Over the past few decades, hyperthermia has attracted the attention of investigators owing to the strong biological rationale for applying it as a cancer treatment modality. Advances in nanotechnology offer a potential new heating method for hyperthermia using nanoparticles, termed magnetic fluid hyperthermia (MFH). In MFH, superparamagnetic nanoparticles dissipate heat through Néelian and Brownian relaxation in the presence of an alternating magnetic field. The heating power of these particles depends on particle properties and treatment settings. A number of pre-clinical and clinical trials have tested the feasibility of this novel treatment modality. There are still issues to be solved for the successful transition of this technology from bench to bedside, including the planning, execution, monitoring, and optimization of treatment. Modeling and simulation play crucial roles in solving some of these issues. Thus, this review provides a basic understanding of the fundamentals and rationale of hyperthermia and recent developments in the modeling and simulation used to depict heat generation and transfer in MFH.
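The Néelian/Brownian heat-dissipation mechanism can be illustrated with the standard linear-response (Rosensweig) estimate; this is the commonly used textbook expression, not necessarily the exact model of this review, and all parameter values below are illustrative:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def effective_relaxation(tau_neel, tau_brown):
    """Néelian and Brownian relaxation act in parallel: 1/tau = 1/tau_N + 1/tau_B."""
    return 1.0 / (1.0 / tau_neel + 1.0 / tau_brown)

def volumetric_power(chi0, H0, f, tau):
    """Linear-response estimate of heat dissipated per unit volume (W/m^3).

    chi0: equilibrium susceptibility; H0: field amplitude (A/m); f: frequency (Hz).
    """
    w_tau = 2.0 * math.pi * f * tau
    return math.pi * MU0 * chi0 * H0**2 * f * w_tau / (1.0 + w_tau**2)

# Illustrative (not experimentally fitted) parameters
tau = effective_relaxation(tau_neel=1e-8, tau_brown=1e-6)
p = volumetric_power(chi0=5.0, H0=10e3, f=300e3, tau=tau)
print(f"effective tau = {tau:.2e} s, power = {p:.3e} W/m^3")
```

The faster of the two relaxation mechanisms dominates the effective relaxation time, which is why small particles (fast Néel relaxation) heat differently from large or immobilized ones.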


Subject(s)
Hyperthermia, Induced/methods , Magnetic Fields , Models, Biological , Physical Phenomena , Animals , Hot Temperature , Humans , Nanoparticles/chemistry
3.
Opt Express ; 23(4): 4927-34, 2015 Feb 23.
Article in English | MEDLINE | ID: mdl-25836527

ABSTRACT

Maxwell's wave equation was solved for femtosecond (fs) laser drilling of silicon, and the influence of the pre-formed hole wall on the propagation of subsequent laser pulses was investigated. The laser intensity at the hole bottom shows a distinct profile compared with that at the hole entrance: multiple peaks and a ring structure were found at the hole bottom. The position of the maximum laser intensity (MLI) was studied in relation to the wall taper angle; it was found that the MLI point moves closer to the hole entrance as the taper angle increases. This observation provides valuable information for predicting the position of the plasma plume, a key factor influencing the laser drilling process. The elliptical entrance hole shape and the zonal structure at the hole bottom reported in the literature are reasonably explained by the laser intensity distribution obtained with the present model.

4.
J Therm Biol ; 51: 23-32, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25965014

ABSTRACT

Radiofrequency ablation (RFA) is increasingly used to treat cancer in a multitude of situations and tissue types. To perform the therapy safely and reliably, the effects of critical parameters need to be known beforehand. Temperature plays an important role in the outcome of the therapy, and any uncertainty in temperature assessment can be lethal. This study presents the RFA case of a fixed tip temperature, in which we analysed the effects of the electrical conductivity, thermal conductivity, and blood perfusion rate of the tumour and of the surrounding normal tissue on radiofrequency ablation. Ablation volume was chosen as the characteristic to be optimised, and temperature control was achieved via a PID controller. The effects of all six parameters, each at three levels, were quantified with a minimum number of experiments by harnessing the fractional factorial character of Taguchi's orthogonal arrays. It was observed that the ablation volume decreases as blood perfusion increases. Increasing the electrical conductivity of the tumour increases the ablation volume, whereas increasing the electrical conductivity of the normal tissue tends to decrease it, and vice versa. Likewise, increasing the thermal conductivity of the tumour enhances the ablation volume, whereas increasing the thermal conductivity of the surrounding normal tissue reduces it, and vice versa. With an increase in tumour size (from 2 to 3 cm), the effect of each parameter is not linear: the parameter effects vary with tumour size, as manifested by the different gradients observed in ablation volume. Most important is the relative insensitivity of the ablation volume to the blood perfusion rate for the smaller tumour size (2 cm), which accords with previous results in the literature. These findings provide initial insight towards safe, reliable, and improved treatment planning.
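A fixed-tip-temperature PID loop of the kind mentioned above can be sketched as follows; the gains and the first-order thermal plant are invented for illustration and are not the study's finite element model:

```python
# Minimal PID loop holding a fixed tip temperature, with a crude first-order
# thermal model standing in for the perfused tissue (all coefficients are
# illustrative, not the parameters used in the study).
def simulate_pid(setpoint=90.0, t_ambient=37.0, steps=2000, dt=0.01,
                 kp=5.0, ki=2.0, kd=0.1):
    temp = t_ambient
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        power = max(0.0, kp * err + ki * integral + kd * deriv)  # heating only
        # first-order plant: RF heating versus heat loss to perfused tissue
        temp += dt * (0.5 * power - 0.2 * (temp - t_ambient))
        prev_err = err
    return temp

final = simulate_pid()
print(f"tip temperature after 20 s: {final:.1f} C")
```

The integral term removes the steady-state offset caused by continuous heat loss to perfusion, which is why PID rather than a simple proportional controller is the usual choice for constant-temperature RFA.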


Subject(s)
Catheter Ablation/methods , Models, Biological , Neoplasms/surgery , Catheter Ablation/adverse effects , Computer Simulation , Electric Conductivity , Electricity , Humans , Temperature , Thermal Conductivity
5.
Infrared Phys Technol ; 66: 160-175, 2014 Sep.
Article in English | MEDLINE | ID: mdl-32288546

ABSTRACT

The invention of thermography in the 1950s posed a formidable problem to the research community: what is the relationship between disease and the heat radiation captured with infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship, aided by advances in processing techniques and by the improved sensitivity and spatial resolution of thermal sensors. Despite this progress, fundamental issues with this imaging modality remain. The main problem is that the link between disease and heat radiation is complex and in many cases non-linear. Furthermore, the changes in heat radiation and in radiation pattern that indicate disease are minute. On a technical level, this places high demands on image capture and processing. On a more abstract level, these problems lead to inter-observer variability and, further still, to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles transparently and to assess performance results. As a consequence, we aim to promote thermography-based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems is that diagnostic accuracy is linked to the capability of the computing machinery, and computers, in general, grow ever more potent. We predict that a pervasive application of computing and networking technology in medicine will help to overcome the shortcomings of any single imaging modality and will pave the way for integrated health care systems that maximize the quality of patient care.

6.
Proc Inst Mech Eng H ; 227(1): 37-49, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23516954

ABSTRACT

The human eye is one of the most sophisticated organs, with a perfectly interrelated retina, pupil, iris, cornea, lens, and optic nerve. Automatic retinal image analysis is emerging as an important screening tool for the early detection of eye diseases: uncontrolled diabetic retinopathy (DR) and glaucoma may lead to blindness. The identification of retinal anatomical regions is a prerequisite for the computer-aided diagnosis of several retinal diseases, and manual examination of the optic disk (OD) is a standard procedure for detecting different stages of DR and glaucoma. In this article, a novel automated, reliable, and efficient OD localization and segmentation method using digital fundus images is proposed. General-purpose edge detection algorithms often fail to segment the OD owing to fuzzy boundaries, inconsistent image contrast, or missing edge features. This article proposes a novel, and probably the first, method using Atanassov intuitionistic fuzzy histon (A-IFSH)-based segmentation to detect the OD in retinal fundus images. OD pixel intensity and a column-wise neighborhood operation are employed to locate and isolate the OD. The method has been evaluated on 100 images comprising 30 normal, 39 glaucomatous, and 31 DR images, yielding a precision of 0.93, recall of 0.91, F-score of 0.92, and mean segmentation accuracy of 93.4%. We have also compared the performance of the proposed method with the Otsu and gradient vector flow (GVF) snake methods; overall, the results show the superiority of the proposed fuzzy segmentation technique over the other two methods.
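The reported F-score can be checked directly from the stated precision and recall:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# Values reported for the proposed OD segmentation method
print(round(f_score(0.93, 0.91), 2))  # → 0.92
```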


Subject(s)
Diabetic Retinopathy/pathology , Fluorescein Angiography/methods , Fuzzy Logic , Glaucoma/pathology , Optic Disk/pathology , Pattern Recognition, Automated/methods , Retinoscopy/methods , Adult , Aged , Aged, 80 and over , Artificial Intelligence , Female , Humans , Image Interpretation, Computer-Assisted/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity , Young Adult
7.
Comput Methods Programs Biomed ; 229: 107308, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36535127

ABSTRACT

BACKGROUND AND OBJECTIVE: Myocardial infarction (MI) is a life-threatening condition diagnosed acutely on the electrocardiogram (ECG). Errors such as noise can impair automated ECG diagnosis; therefore, quantification and communication of model uncertainty are essential for reliable MI diagnosis. METHODS: A Dirichlet DenseNet model that can analyze out-of-distribution data and detect misclassification of MI and normal ECG signals was developed. The DenseNet model was first trained with pre-processed MI ECG signals (from the best lead, V6) acquired from the Physikalisch-Technische Bundesanstalt (PTB) database, using the reverse Kullback-Leibler (KL) divergence loss. The model was then tested with newly synthesized ECG signals with added em and ma noise samples. Predictive entropy was used as the uncertainty measure to determine misclassification of normal and MI signals. Model performance was evaluated using four uncertainty metrics: uncertainty sensitivity (UNSE), uncertainty specificity (UNSP), uncertainty accuracy (UNAC), and uncertainty precision (UNPR); the classification threshold was set at 0.3. RESULTS: The UNSE of the DenseNet model was low but increased over the studied noise range (SNR from -6 to 24 dB), indicating that the model grew more confident in classifying the signals as they became less noisy. The model became more certain in its predictions from SNR values of 12 dB and 18 dB onwards, yielding UNAC values of 80% and 82.4% for em and ma noise signals, respectively. UNSP and UNPR values were close to 100% for both em and ma noise signals, indicating that the model was self-aware of what it did and did not know. CONCLUSION: This work establishes that the model is reliable, as it was able to convey when it was not confident in the diagnostic information it presented. Thus, the model is trustworthy and can be used in healthcare applications such as the emergency diagnosis of MI on ECGs.
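Predictive-entropy rejection of the kind used here can be sketched in a few lines; the probability vectors below are made up, and only the 0.3 threshold comes from the abstract:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of the predictive distribution; higher means less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_uncertain(probs, threshold=0.3):
    """Reject a prediction when its entropy exceeds the threshold (0.3, as in
    the study); a two-class setup caps entropy at ln 2, about 0.693."""
    return predictive_entropy(probs) > threshold

print(flag_uncertain([0.98, 0.02]))  # confident MI/normal call → False
print(flag_uncertain([0.55, 0.45]))  # ambiguous, possibly noisy beat → True
```

Flagged beats would be routed to a clinician rather than reported automatically, which is what makes the pipeline self-aware of its misclassifications.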


Subject(s)
Electrocardiography , Myocardial Infarction , Humans , Uncertainty , Myocardial Infarction/diagnosis , Databases, Factual , Entropy
8.
Comput Biol Med ; 146: 105550, 2022 07.
Article in English | MEDLINE | ID: mdl-35533457

ABSTRACT

Myocardial infarction (MI) accounts for a high number of deaths globally. In acute MI, accurate electrocardiography (ECG) is important for timely diagnosis and intervention in the emergency setting, and machine learning is increasingly being explored for automated computer-aided ECG diagnosis of cardiovascular diseases. In this study, we developed DenseNet and CNN models for the classification of healthy subjects and patients with ten classes of MI based on the location of myocardial involvement. ECG signals from the Physikalisch-Technische Bundesanstalt database were pre-processed, and the ECG beats were extracted using an R-peak detection algorithm. The beats were then fed to the two models separately. While both models attained high classification accuracies (more than 95%), DenseNet is the preferred model for the classification task owing to its lower computational complexity and the higher classification accuracy afforded by feature reusability. An enhanced class activation mapping (CAM) technique, Grad-CAM, was then applied to the outputs of both models to visualize the specific ECG leads and portions of ECG waves that most influenced the models' predictions for the 11 classes. Lead V4 was the most activated lead in both the DenseNet and CNN models, and the study also established the different leads and parts of the signal that are activated for each class. This is the first study to report the features that influenced the classification decisions of deep models for multiclass classification of MI and healthy ECGs. With some visible explainability of their inner workings, the developed DenseNet and CNN models may garner the needed clinical acceptance and have the potential to be implemented for ECG triage of MI diagnosis in hospitals and remote out-of-hospital settings.


Subject(s)
Deep Learning , Myocardial Infarction , Algorithms , Diagnosis, Computer-Assisted , Electrocardiography/methods , Humans , Myocardial Infarction/diagnosis
9.
Comput Biol Med ; 134: 104457, 2021 07.
Article in English | MEDLINE | ID: mdl-33991857

ABSTRACT

Cardiovascular diseases (CVDs) are among the main causes of death globally, with coronary artery disease (CAD) being the most important. Timely diagnosis and treatment of CAD are crucial to reduce the incidence of complications such as myocardial infarction (MI) and ischemia-induced congestive heart failure (CHF). Electrocardiogram (ECG) signals are the most commonly employed diagnostic screening tool for detecting CAD. In this study, an automated system was developed for the categorization of ECG signals into normal, CAD, MI, and CHF classes using a convolutional neural network (CNN) and a unique GaborCNN model. Weight balancing was used to compensate for the imbalanced dataset. Classification accuracies of more than 98.5% were obtained by both the CNN and GaborCNN models for the four-class classification. GaborCNN is the preferred model owing to its good performance and reduced computational complexity compared with the CNN model. To the best of our knowledge, this is the first study to propose a GaborCNN model for the automated categorization of normal, CAD, MI, and CHF classes using ECG signals. The proposed system is ready to be validated on a bigger database and has the potential to aid clinicians in screening for CVDs using ECG signals.
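A GaborCNN typically builds or initializes its first convolutional layer from Gabor kernels; a 1-D variant suited to ECG signals might be generated as below (all parameter values are illustrative, not the study's):

```python
import numpy as np

def gabor_kernel_1d(length=31, sigma=4.0, freq=0.1, phase=0.0):
    """1-D Gabor: a sinusoid windowed by a Gaussian, suited to the
    quasi-periodic morphology of ECG beats."""
    t = np.arange(length) - length // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t + phase)

# A small bank at several centre frequencies, as might seed the first layer
bank = np.stack([gabor_kernel_1d(freq=f) for f in (0.05, 0.1, 0.2, 0.4)])
print(bank.shape)  # → (4, 31)
```

Fixing the first layer to such a bank is one way the GaborCNN can cut trainable parameters, and hence computational complexity, relative to a plain CNN.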


Subject(s)
Coronary Artery Disease , Heart Failure , Myocardial Infarction , Coronary Artery Disease/diagnosis , Electrocardiography , Heart Failure/diagnosis , Humans , Myocardial Infarction/diagnosis , Signal Processing, Computer-Assisted
10.
Article in English | MEDLINE | ID: mdl-32078556

ABSTRACT

Coronary heart disease has recently attracted increasing attention, and segmentation and analysis of the vascular lumen contour are helpful for treatment. Intravascular optical coherence tomography (IVOCT) images are used to display lumen shapes in the clinic, so an automatic segmentation method for the IVOCT lumen contour is needed to reduce doctors' workload while ensuring diagnostic accuracy. In this paper, we propose a deep residual segmentation network with multi-scale feature fusion based on an attention mechanism (RSM-Network, Residual Squeezed Multi-Scale Network) to segment the lumen contour in IVOCT images. First, three data augmentation methods (mirror turnover, rotation, and vertical flip) are used to expand the training set. The proposed RSM-Network takes U-Net as its main body, given its ability to accept input images of any size. The combination of a residual network and an attention mechanism improves global feature extraction and mitigates the vanishing gradient problem, and a pyramid feature extraction structure enhances the learning of multi-scale features. Finally, the cross-entropy loss function is used to increase the agreement between the actual and expected outputs. A series of metrics is presented to evaluate the performance of the proposed network, and the experimental results demonstrate that the RSM-Network learns contour details well, contributing to robust and accurate IVOCT lumen contour segmentation.
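The three augmentations named above map to one-line NumPy operations; the 90-degree rotation angle is an assumption, since the abstract does not state the angles used:

```python
import numpy as np

def augment(image):
    """The three augmentations named for the IVOCT training set:
    horizontal mirror, rotation (90 degrees here, for illustration), vertical flip."""
    return [np.fliplr(image),   # mirror (level turnover)
            np.rot90(image),    # rotation
            np.flipud(image)]   # vertical flip

img = np.arange(16).reshape(4, 4)  # stand-in for an IVOCT frame
variants = augment(img)
print(len(variants), variants[0].shape)  # → 3 (4, 4)
```

Each original frame thus yields three extra training samples, quadrupling the effective training set before any other preprocessing.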


Subject(s)
Deep Learning , Endovascular Procedures/methods , Image Processing, Computer-Assisted/methods , Tomography, Optical Coherence/methods , Blood Vessels/diagnostic imaging , Databases, Factual , Humans , Neural Networks, Computer
11.
Med Phys ; 37(11): 6022-34, 2010 Nov.
Article in English | MEDLINE | ID: mdl-21158314

ABSTRACT

PURPOSE: A novel technique was developed to measure tear evaporation and monitor its variation over time, for the study of ocular physiology, based on dynamic functional infrared thermography, the first law of thermodynamics, and measured ocular surface temperatures (OSTs). This is a noninvasive, noncontact temperature measurement method widely applied in biomedicine. METHODS: A simple method based on ocular thermal data was proposed to measure the rate of tear evaporation. The OSTs of 60 normal subjects were recorded as sequential thermal images. For each thermal sequence, the ocular region was selected and warped to a standard form, and the thermal data within the region were processed on the basis of the first law of thermodynamics to derive the evaporation rate. RESULTS: For older subjects (aged above 35), the rate was determined to be 55.82 Wm(-2), and for younger subjects, 58.9 Wm(-2). The corneal rate of evaporation in older subjects was found to be statistically (p < 0.11) larger than in their younger counterparts. The rate of blinking was observed to be related to the variation of the evaporation rate. CONCLUSIONS: The authors measured the evaporation rate from a sequence of thermographic images: a region of interest was selected, the same region in all images was warped into a standard form, and calculations were performed on the thermal data in those regions. The tear evaporation rate across all age groups was 57.36 +/- 12.73 Wm(-2), and corneal tear evaporation was higher in older subjects. The corneal rate of evaporation fluctuated with a larger magnitude in subjects who blinked more than average.


Subject(s)
Eye/metabolism , Spectrophotometry, Infrared/methods , Tears , Thermography/methods , Adolescent , Adult , Age Factors , Aged , Blinking , Equipment Design , Humans , Image Processing, Computer-Assisted , Middle Aged , Models, Anatomic , Ocular Physiological Phenomena , Temperature
12.
Comput Biol Med ; 120: 103718, 2020 05.
Article in English | MEDLINE | ID: mdl-32250851

ABSTRACT

Unlike passive infrared (IR) thermal imaging (thermography), in which no external stimulation is applied, active dynamic thermography (ADT) yields a high-contrast thermal image. In ADT, transient thermal images of the skin surface are captured with an IR thermal camera while the skin is stimulated externally, followed by a recovery phase. Upon external stimulation, the presence of stenosis in the carotid artery is expected to change the recovery rate of the external neck skin surface relative to the case with no stenosis. In this prospective study, using external cooling stimulation, the ADT procedure was performed on a total of 54 samples (C: N = 19, 0% stenosis; D1: N = 17, 10%-29% stenosis; D2: N = 18, ≥30% stenosis, by Duplex Ultrasound). Analyzing the ADT sequence with a parameter called the tissue activity ratio (TAR), the samples were classified using a cut-off value: C versus (D1 + D2), and (C + D1) versus D2. As the degree of stenosis increases, the TAR decreases, with significant differences among the sample groups (C: 0.97 ± 0.05, D1: 0.80 ± 0.04, D2: 0.75 ± 0.02; p < 0.05). Under the two classification scenarios, classification accuracies of 90% and 85%, respectively, were achieved. This study suggests the potential of the proposed ADT procedure for screening carotid artery stenosis (CAS).
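A cut-off classification on the TAR can be sketched as below; the cutoff of 0.78 is a hypothetical value placed between the reported D1 and D2 group means, not the study's actual threshold:

```python
def classify_tar(tar, cutoff=0.78):
    """Binary screen on the tissue activity ratio: lower TAR suggests more
    stenosis. The cutoff is a hypothetical value between the reported group
    means (D1: 0.80, D2: 0.75), corresponding to the (C + D1) vs D2 scenario."""
    return "suspected stenosis" if tar < cutoff else "likely clear"

# Group-mean TARs reported in the abstract; C and D1 fall above this cutoff
for label, tar in [("C", 0.97), ("D1", 0.80), ("D2", 0.75)]:
    print(label, classify_tar(tar))
```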


Subject(s)
Carotid Stenosis , Thermography , Carotid Artery, Common , Carotid Stenosis/diagnostic imaging , Constriction, Pathologic , Humans , Mass Screening , Prospective Studies
13.
Comput Biol Med ; 118: 103630, 2020 03.
Article in English | MEDLINE | ID: mdl-32174317

ABSTRACT

Hypertension (HPT), also known as high blood pressure, is a precursor to heart, brain, and kidney diseases; its symptoms include headaches, dizziness, and fainting. The potential diagnosis of masked hypertension (MHPT) is of specific interest in this study. In MHPT, the instantaneous blood pressure appears normal, but the 24-h ambulatory blood pressure is abnormal; hence patients with MHPT are difficult to identify and remain untreated or insufficiently treated. A computational intelligence tool (CIT) using electrocardiogram (ECG) signals for HPT and possible MHPT detection is therefore proposed in this work. Empirical mode decomposition (EMD) is employed to decompose the pre-processed signals into five levels, and nonlinear features are extracted from the five intrinsic mode functions (IMFs). Student's t-test is then applied to select a set of highly discriminatory features. This feature set is input to various classifiers, of which the k-nearest neighbor (k-NN) classifier yields the best accuracy, 97.70%. The developed tool is evaluated with the 10-fold cross-validation technique. Our findings suggest that the developed system is useful as a diagnostic computational intelligence tool in hospital settings, enabling the automatic classification of HPT versus normal ECG signals.
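The t-test feature-screening step can be sketched with synthetic features; the data and the injected group difference below are invented, and only the screening logic mirrors the abstract:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical nonlinear IMF features for two groups (rows = subjects)
normal = rng.normal(0.0, 1.0, size=(40, 5))
hpt = rng.normal(0.0, 1.0, size=(40, 5))
hpt[:, 2] += 2.0  # make feature 2 discriminatory by construction

# Student's t-test screens each feature; keep those with p < 0.05
_, pvals = ttest_ind(normal, hpt, axis=0)
selected = np.where(pvals < 0.05)[0]
print("selected features:", selected)
```

The surviving feature subset would then be passed to the candidate classifiers (k-NN and others) for cross-validated evaluation.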


Subject(s)
Blood Pressure Monitoring, Ambulatory , Hypertension , Algorithms , Artificial Intelligence , Electrocardiography , Humans , Hypertension/diagnosis , Signal Processing, Computer-Assisted
14.
Comput Biol Med ; 126: 103999, 2020 11.
Article in English | MEDLINE | ID: mdl-32992139

ABSTRACT

BACKGROUND: Hypertension (HPT) occurs when blood pressure (BP) within the arteries increases, causing the heart to pump against a higher afterload to deliver oxygenated blood to the rest of the body. PURPOSE: Because BP fluctuates, 24-h ambulatory blood pressure monitoring has emerged as a useful tool for diagnosing HPT, but it is limited by its inconvenience. An automatic diagnostic tool using electrocardiogram (ECG) signals is therefore used in this study to detect HPT. METHOD: The pre-processed signals are fed to a convolutional neural network model, which learns and identifies unique ECG signatures for classifying normal and hypertensive ECG signals. The proposed model is evaluated with 10-fold and leave-one-patient-out validation techniques. RESULTS: A high classification accuracy of 99.99% is achieved for both validation techniques. This is one of the first studies to employ a deep learning algorithm coupled with ECG signals for the detection of HPT. Our results imply that the developed tool is useful in a hospital setting as an automated diagnostic tool, enabling the effortless detection of HPT using ECG signals.


Subject(s)
Blood Pressure Monitoring, Ambulatory , Hypertension , Algorithms , Electrocardiography , Humans , Hypertension/diagnosis , Neural Networks, Computer
15.
Proc Inst Mech Eng H ; 223(5): 545-53, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19623908

ABSTRACT

Diabetes mellitus is a heterogeneous clinical syndrome characterized by hyperglycaemia; its long-term complications are retinopathy, neuropathy, nephropathy, and cardiomyopathy, and it is a leading cause of blindness. Diabetic retinopathy comprises progressive pathological alterations in the retinal microvasculature, leading to areas of retinal non-perfusion, increased vascular permeability, and pathological proliferation of retinal vessels. Regular, cost-effective eye screening is therefore beneficial for subjects with diabetes. At present, the different stages of diabetic retinopathy are detected by retinal examination using indirect biomicroscopy by senior ophthalmologists. In this work, morphological image processing and support vector machine (SVM) techniques were used for the automatic diagnosis of eye health, with 331 fundus images analysed. Five groups were identified: normal retina, mild non-proliferative diabetic retinopathy, moderate non-proliferative diabetic retinopathy, severe non-proliferative diabetic retinopathy, and proliferative diabetic retinopathy. Four salient features (blood vessels, microaneurysms, exudates, and haemorrhages) were extracted from the raw images using image-processing techniques and fed to the SVM for classification. The system developed demonstrated a sensitivity of more than 82 per cent and a specificity of 86 per cent.


Subject(s)
Algorithms , Artificial Intelligence , Diabetic Retinopathy/pathology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Retinoscopy/methods , Signal Processing, Computer-Assisted , Adult , Aged , Female , Humans , Image Enhancement/methods , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
16.
Comput Biol Med ; 113: 103419, 2019 10.
Article in English | MEDLINE | ID: mdl-31493579

ABSTRACT

In the present study, an infrared (IR) thermal camera was used to map the temperature of the target skin surface, and the resulting thermal image was evaluated for the presence of carotid artery stenosis (CAS). In the presence of stenosis in the carotid artery, abnormal temperature maps are expected on the external skin surface, which can be captured and quantified using IR thermography. A Duplex Ultrasound (DUS) examination was used to establish the ground truth. In each patient, the background-subtracted thermal image, referred to as the full thermal image, was used to extract novel parametric cold thermal feature images. From these images, statistical features, viz., correlation, energy, homogeneity, contrast, entropy, mean, standard deviation (SD), skewness, and kurtosis, were calculated, and the two groups of patients (control and diseased; a total of 80 carotid artery samples) were classified. Both cut-off-value- and support vector machine (SVM)-based binary classification models were tested. While the cut-off-value model gave moderate performance (70% accuracy), the SVM classified the patients with high accuracy (92% or higher). This preliminary study suggests the potential of IR thermography as a screening tool for CAS patients.
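The cut-off-value branch of the comparison can be sketched with synthetic texture features; the data, the group shift, and the composite score below are all invented, and per the abstract the SVM outperformed this style of model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 9-feature texture vectors (correlation, energy, homogeneity,
# contrast, entropy, mean, SD, skewness, kurtosis) for 80 carotid samples;
# diseased samples are shifted to mimic a cold-feature difference
X_control = rng.normal(0.0, 1.0, size=(40, 9))
X_diseased = rng.normal(0.8, 1.0, size=(40, 9))
y = np.array([0] * 40 + [1] * 40)

# Cut-off-value model: threshold a single composite score at the midpoint
scores = np.vstack([X_control, X_diseased]).mean(axis=1)
cutoff = 0.5 * (scores[:40].mean() + scores[40:].mean())
pred = (scores > cutoff).astype(int)
accuracy = (pred == y).mean()
print(f"cut-off model accuracy: {accuracy:.2f}")
```

Collapsing nine features into one score and thresholding it is what limits such models; the SVM instead separates the groups in the full feature space.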


Subject(s)
Carotid Stenosis , Image Processing, Computer-Assisted , Infrared Rays , Support Vector Machine , Thermography , Aged , Carotid Stenosis/diagnosis , Carotid Stenosis/diagnostic imaging , Female , Humans , Male , Mass Screening , Middle Aged , Prospective Studies
17.
Med Biol Eng Comput ; 57(2): 379-388, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30123948

ABSTRACT

Early detection of breast tumors, diagnosis of foot pre-ulcers in diabetic patients, and identification of the location of pain are essential to physicians. Hot or cold regions in medical thermographic images are potentially suspicious, so extracting the hottest or coldest regions in body thermograms is an important task. Lazy snapping is an interactive image cutout algorithm that can extract the hottest or coldest regions in body thermographic images quickly, with easy detailed adjustment. Its most important advantage is that it can provide results for physicians in real time. In other words, it is a good interactive segmentation algorithm because it has two basic characteristics: (1) given user input, it produces intuitive segmentation that reflects the user's intent, and (2) it is efficient enough to provide instant visual feedback. Compared with other methods used by the authors for segmentation of breast thermograms, such as K-means, fuzzy c-means, level set, and mean shift algorithms, lazy snapping was more user-friendly and could provide instant visual feedback. In this study, twelve test cases were presented, and by applying the lazy snapping algorithm, the hottest or coldest regions were extracted from the corresponding body thermographic images. The time taken to see the results varied from 7 to 30 s across these cases. It was concluded that lazy snapping was much faster than the other segmentation methods applied by the authors.


Subject(s)
Breast Neoplasms/diagnosis , Algorithms , Breast/pathology , Female , Fuzzy Logic , Hot Temperature , Humans , Image Interpretation, Computer-Assisted/methods , Middle Aged , Pattern Recognition, Automated/methods , Thermography/methods
18.
Comput Biol Med ; 112: 103371, 2019 09.
Article in English | MEDLINE | ID: mdl-31404720

ABSTRACT

OBJECTIVE: The aim of this study was to develop and assess the feasibility of using two basic statistical parameters derived from the renogram, the mean count value (MeanCV) and the median count value (MedianCV), as novel indices for diagnosing renal obstruction through diuresis renography. SUBJECTS AND METHODS: First, we re-digitalized and normalized 132 renograms from 74 patients to derive the MeanCV and MedianCV. To improve the performance of the parameters, we extrapolated the renograms using a two-compartment model. Cutoff points for diagnosis with each modified parameter were then set, and sensitivity and specificity were calculated to determine the best variants of MeanCV and MedianCV for differentiating renal obstruction status into three distinct classes: (i) unobstructed, (ii) slightly obstructed, and (iii) heavily obstructed. RESULTS: The modified MeanCV and MedianCV derived from the extended renograms predicted the severity of renal obstruction. The most appropriate variants were found to be MeanCV50 and MedianCV60. The cutoff points of MeanCV50 for separating the unobstructed from the obstructed classes, and the slightly from the heavily obstructed classes, were 0.50 and 0.72, respectively; the corresponding cutoff points of MedianCV60 were 0.35 and 0.69. Notably, MeanCV50 and MedianCV60 were not significantly influenced by either age or gender. CONCLUSIONS: MeanCV50 and MedianCV60 derived from a renogram could be combined with other quantifiable parameters to form a system providing highly accurate diagnosis of renal obstruction.
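The decision logic above is easy to sketch: normalize a renogram to its peak, take the mean or median of the normalized curve, and compare it against the reported cutoffs (0.50/0.72 for MeanCV50, 0.35/0.69 for MedianCV60). The sample count curve below is invented, and the exact time window implied by the "50"/"60" suffixes is not reproduced here; this only illustrates how the cutoffs partition the index into three classes.

```python
from statistics import mean, median

def normalized(counts):
    """Normalize a renogram count curve to its peak value."""
    peak = max(counts)
    return [c / peak for c in counts]

def classify(value, low_cut, high_cut):
    """Map an index value onto the three obstruction classes."""
    if value < low_cut:
        return "unobstructed"
    if value < high_cut:
        return "slightly obstructed"
    return "heavily obstructed"

# illustrative renogram: rapid uptake followed by good drainage
counts = [10, 60, 100, 80, 55, 35, 20, 12, 8, 5]
curve = normalized(counts)

mean_cv = mean(curve)      # analogous to MeanCV
median_cv = median(curve)  # analogous to MedianCV

status_mean = classify(mean_cv, 0.50, 0.72)    # MeanCV50 cutoffs
status_median = classify(median_cv, 0.35, 0.69)  # MedianCV60 cutoffs
```

A poorly draining (obstructed) kidney retains tracer, so its normalized curve stays high and both indices rise toward the upper cutoffs.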


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted , Kidney Diseases/diagnostic imaging , Radioisotope Renography , Radiopharmaceuticals/administration & dosage , Technetium Tc 99m Mertiatide/administration & dosage , Adult , Female , Humans , Male , Middle Aged , Sensitivity and Specificity
19.
Comput Biol Med ; 113: 103392, 2019 10.
Article in English | MEDLINE | ID: mdl-31446317

ABSTRACT

In this paper, a continuous non-occlusive blood pressure (BP) prediction method is proposed that uses multiple photoplethysmogram (PPG) signals. In the new method, BP is predicted by a committee machine (ensemble learning) framework comprising multiple support vector regression (SVR) machines. Existing methods for continuous BP prediction rely on a single calibration model obtained from a single arterial segment; our ensemble framework is the first BP estimation method to use multiple SVR models calibrated from multiple arterial segments. This reduces both the mean prediction error and the risk of overfitting associated with a single model. Each SVR in the ensemble is trained on a comprehensive feature set constructed from a distinct PPG segment. The feature set includes pulse morphological parameters such as systolic pulse amplitude and area under the curve, heart rate variability (HRV) frequency- and time-domain parameters, and the pulse wave velocity (PWV). Empirical evaluation on 40 volunteers with no serious health conditions shows that the proposed method estimates both systolic and diastolic BP more reliably than similar methods employing a single calibration model under identical settings. Moreover, the combined output is more stable than the output of any constituent model in the ensemble for both the systolic and diastolic cases.
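The committee-machine idea can be sketched in a few lines: fit one calibrated regressor per feature source and average the member predictions. Here, closed-form least-squares lines stand in for the paper's SVR models, and the feature values (a pulse wave velocity and a systolic amplitude) and BP targets are invented for illustration.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def ensemble_predict(models, features):
    """Committee output: mean of the individual model predictions."""
    preds = [a * x + b for (a, b), x in zip(models, features)]
    return sum(preds) / len(preds)

# two hypothetical feature series (e.g. PWV in m/s, systolic pulse
# amplitude in a.u.) with matched systolic BP readings in mmHg
pwv = [4.0, 5.0, 6.0, 7.0]
amp = [1.0, 1.5, 2.0, 2.5]
sbp = [110.0, 120.0, 130.0, 140.0]

models = [fit_line(pwv, sbp), fit_line(amp, sbp)]
estimate = ensemble_predict(models, [5.5, 1.75])
```

Averaging over members trained on distinct segments is what dampens the error of any single miscalibrated model, which is the stability property the abstract reports.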


Subject(s)
Blood Pressure Determination , Blood Pressure , Photoplethysmography , Pulse Wave Analysis , Signal Processing, Computer-Assisted , Support Vector Machine , Humans , Predictive Value of Tests
20.
Front Neurosci ; 13: 210, 2019.
Article in English | MEDLINE | ID: mdl-30949018

ABSTRACT

Recent research has reported the application of image fusion technologies to medical images across a wide range of tasks, such as the diagnosis of brain diseases, the detection of glioma, and the diagnosis of Alzheimer's disease. In our study, a new fusion method based on the combination of the shuffled frog leaping algorithm (SFLA) and the pulse coupled neural network (PCNN) is proposed for fusing SPECT and CT images to improve the quality of fused brain images. First, the intensity-hue-saturation (IHS) components of the SPECT image and the CT image are each decomposed with a non-subsampled contourlet transform (NSCT), yielding both low-frequency and high-frequency sub-band images. The combined SFLA and PCNN is then used to fuse the high-frequency sub-band images and the low-frequency images, with the SFLA optimizing the PCNN network parameters. Finally, the fused image is produced by the inverse NSCT and inverse IHS transforms. We evaluated our algorithm using standard deviation (SD), mean gradient (G), spatial frequency (SF), and information entropy (E) on three different sets of brain images. The experimental results demonstrated the superior performance of the proposed fusion method in significantly enhancing both precision and spatial resolution.
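The decompose-fuse-reconstruct pipeline above can be illustrated with a drastically simplified stand-in: each image is split into a low-frequency part (3x3 box-filter mean) and a high-frequency residual, the lows are averaged, and the highs are fused by a max-absolute rule. This replaces the paper's NSCT decomposition and SFLA-optimized PCNN firing maps with the simplest possible substitutes, purely to show the subband-fusion structure.

```python
def box_blur(img):
    """3x3 box-filter mean: a toy low-frequency decomposition."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[y][x]
                    for y in range(max(0, r - 1), min(rows, r + 2))
                    for x in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out

def fuse(img_a, img_b):
    """Fuse two same-sized images: average lows, max-absolute highs."""
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    fused = []
    for r in range(len(img_a)):
        row = []
        for c in range(len(img_a[0])):
            high_a = img_a[r][c] - low_a[r][c]
            high_b = img_b[r][c] - low_b[r][c]
            low = (low_a[r][c] + low_b[r][c]) / 2                     # average lows
            high = high_a if abs(high_a) >= abs(high_b) else high_b  # max-abs highs
            row.append(low + high)
        fused.append(row)
    return fused
```

In the actual method, the max-absolute rule is replaced by PCNN firing activity per subband, and the SFLA searches for PCNN parameters that maximize the fusion quality metrics (SD, G, SF, E).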
