Results 1 - 19 of 19
1.
Cancers (Basel) ; 14(21), 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36358809

ABSTRACT

Infantile hemangiomas occur in 3 to 10% of infants. To predict the clinical course and counsel on treatment, it is crucial to accurately determine a hemangioma's extension, volume, and location. However, this can be challenging because hemangiomas may present irregular patterns or be covered by hair, and their depth may be difficult to estimate. Diagnosis is commonly made by clinical inspection and palpation, with physicians basing their assessment on visual characteristics such as area, texture, and color. Doppler ultrasonography or magnetic resonance imaging is normally used to estimate depth or to confirm difficult assessments. This paper presents an alternative diagnostic tool, thermography, as a useful, immediate means of carrying out accurate hemangioma examinations. We conducted a study analyzing infantile hemangiomas with a custom thermographic system. In the first phase of the study, 55 hemangiomas of previously diagnosed patients were analyzed with a thermal camera over several sessions; an average temperature variation of -0.19 °C between pre- and post-treatment measurements was obtained. In the second phase, we selected nine patients and assessed their evolution over nine months by analyzing their thermographic images with dedicated image processing algorithms. In all cases, the thermal image analysis concurred with the independent diagnoses of two dermatologists. We concluded that a higher temperature inside the tumor at follow-up was indicative of an undesirable evolution.
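The temperature-variation measurement described above reduces, at its core, to comparing mean lesion temperatures across sessions. The sketch below is not taken from the paper; it assumes calibrated thermal images are available as 2D arrays of temperatures in °C together with a boolean lesion mask, and all function names are illustrative.

```python
import numpy as np

def mean_roi_temperature(thermal_img, lesion_mask):
    """Mean temperature (deg C) inside a lesion region of a calibrated thermal image."""
    return float(thermal_img[lesion_mask].mean())

def treatment_temperature_change(img_before, mask_before, img_after, mask_after):
    """Temperature variation of the lesion between two sessions (after - before).

    A negative value (the paper reports -0.19 deg C on average) would indicate
    cooling of the hemangioma after treatment.
    """
    return (mean_roi_temperature(img_after, mask_after)
            - mean_roi_temperature(img_before, mask_before))

# Example with synthetic data: a warm 10x10 lesion embedded in 32 deg C skin.
skin_before = np.full((64, 64), 32.0)
skin_after = skin_before.copy()
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
skin_before[mask] = 33.5   # lesion warmer than surrounding skin
skin_after[mask] = 33.3    # slightly cooler after treatment
print(treatment_temperature_change(skin_before, mask, skin_after, mask))  # approx. -0.2
```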

2.
J Imaging ; 8(7), 2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35877641

ABSTRACT

Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. Many benign lesions can be confused with these types of cancer, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced into a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. The results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color co-occurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, a sensitivity of 0.99, a specificity of 0.94 and an accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion has been proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis.
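The fusion of deep and handcrafted descriptors can be illustrated with a minimal sketch. The paper's specific contribution, a color co-occurrence matrix, is not reproduced here; a standard gray-level co-occurrence matrix plus CIELAB statistics stand in for the handcrafted part, and the deep features are assumed to come from any pretrained CNN. All names below are illustrative.

```python
import numpy as np
from skimage import color
from skimage.feature import graycomatrix, graycoprops

def handcrafted_features(rgb_img):
    """Simple color + texture descriptor for a dermoscopic image.

    Color: mean and std of the CIELAB channels (a uniform color space).
    Texture: contrast/homogeneity/energy from a gray-level co-occurrence
    matrix; the paper proposes a *color* co-occurrence matrix instead,
    which is not reproduced here.
    """
    lab = color.rgb2lab(rgb_img)
    color_feats = np.concatenate([lab.reshape(-1, 3).mean(0),
                                  lab.reshape(-1, 3).std(0)])
    gray = (color.rgb2gray(rgb_img) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture_feats = np.concatenate([graycoprops(glcm, p).ravel()
                                    for p in ("contrast", "homogeneity", "energy")])
    return np.concatenate([color_feats, texture_feats])

def fused_descriptor(rgb_img, deep_features):
    """Concatenate CNN-derived features with handcrafted ones before the final classifier."""
    return np.concatenate([deep_features, handcrafted_features(rgb_img)])
```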

3.
IEEE J Biomed Health Inform ; 23(2): 560-569, 2019 Mar.
Article in English | MEDLINE | ID: mdl-29993674

ABSTRACT

Color has great diagnostic significance in dermatoscopy. Several diagnosis methods are based on the colors detected within a lesion. Malignant lesions frequently show more than three colors, whereas in benign lesions three or fewer colors are usually observed. Black, red, white, and blue-gray are found more frequently in melanomas than in benign nevi. In this paper, a method to automatically identify the colors of a lesion is presented. A color label identification problem is proposed and solved by maximizing the posterior probability that a pixel belongs to a label, given its color value and the neighborhood color values. The main contribution of this paper is the estimation of the different terms involved in the computation of this probability. Two evaluations are performed on a database of 200 dermoscopic images. The first evaluates whether all the colors detected in a lesion are indeed present in it. The second analyzes whether each pixel within a lesion is assigned the correct color label. The results show that the proposed method performs correctly and outperforms other methods, with an average F-measure of 0.89, an accuracy of 0.90, and a Spearman correlation of 0.831.
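A greatly simplified version of such color-label identification can be sketched as follows: each pixel is assigned the nearest of a fixed set of CIELAB prototypes, and a local majority vote stands in for the neighborhood term of the posterior. The prototype values and window size below are illustrative assumptions, not the terms estimated in the paper.

```python
import numpy as np
from scipy import ndimage
from skimage import color

# Illustrative CIELAB prototypes for the clinically relevant colors; the paper
# estimates the corresponding probability terms from data rather than fixing them.
PROTOTYPES = {
    "black":       (20, 0, 0),
    "dark brown":  (35, 15, 25),
    "light brown": (60, 15, 30),
    "red":         (50, 55, 35),
    "white":       (90, 0, 5),
    "blue-gray":   (55, -5, -15),
}

def label_colors(rgb_img, window=5):
    """Assign a color label to every pixel: nearest prototype in CIELAB, then a
    majority vote in a local window as a crude surrogate for the neighborhood
    term of the posterior used in the paper."""
    lab = color.rgb2lab(rgb_img)
    protos = np.array(list(PROTOTYPES.values()), dtype=float)
    dists = np.linalg.norm(lab[..., None, :] - protos, axis=-1)   # H x W x K
    labels = dists.argmin(-1)
    smoothed = ndimage.generic_filter(
        labels, lambda v: np.bincount(v.astype(int)).argmax(), size=window)
    return smoothed.astype(int), list(PROTOTYPES)
```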


Subject(s)
Dermoscopy/methods; Image Interpretation, Computer-Assisted/methods; Skin Neoplasms/diagnostic imaging; Skin Pigmentation/physiology; Skin/diagnostic imaging; Algorithms; Color; Databases, Factual; Humans; Models, Statistical; Skin/pathology; Skin Neoplasms/pathology
4.
Comput Methods Programs Biomed ; 156: 85-95, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29428079

ABSTRACT

BACKGROUND AND OBJECTIVES: The segmentation of muscle and bone structures in CT is of interest to physicians and surgeons for surgical planning, disease diagnosis and/or the analysis of fractures or bone/muscle densities. Recently, the issue has been addressed in many research works. However, most studies have focused on only one of the two tissues and on the segmentation of one particular bone or muscle. This work addresses the segmentation of muscle and bone structures in 3D CT volumes. METHODS: The proposed bone and muscle segmentation algorithm is based on a three-label convex relaxation approach. The main novelty is that the proposed energy function to be minimized includes distances to histogram models of bone and muscle structures combined with gray-level information. RESULTS: 27 CT volumes corresponding to different sections from 20 different patients were manually segmented and used as ground truth for training and evaluation purposes. Different metrics (Dice index, Jaccard index, sensitivity, specificity, positive predictive value, accuracy and computational cost) were computed and compared with those of some state-of-the-art algorithms. The proposed algorithm outperformed the other methods, obtaining a Dice coefficient of 0.88 ± 0.14, a Jaccard index of 0.80 ± 0.19, a sensitivity of 0.94 ± 0.15 and a specificity of 0.95 ± 0.04 for bone segmentation, and 0.78 ± 0.12, 0.65 ± 0.16, 0.94 ± 0.04 and 0.95 ± 0.04 for muscle tissue. CONCLUSIONS: A fast, generalized method has been presented for segmenting muscle and bone structures in 3D CT volumes using a multilabel continuous convex relaxation approach. The results obtained show that the proposed algorithm outperforms some state-of-the-art methods. The algorithm will help physicians and surgeons in surgical planning, disease diagnosis and/or the analysis of fractures or bone/muscle densities.
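The overlap metrics reported in this abstract (Dice, Jaccard, sensitivity, specificity) can be computed directly from a predicted mask and a manually segmented ground truth. The following is a minimal sketch of the evaluation step only, independent of the convex relaxation itself.

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice, Jaccard, sensitivity and specificity for two binary (2D or 3D) masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```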


Subject(s)
Bone and Bones/diagnostic imaging; Fractures, Bone/diagnostic imaging; Muscles/diagnostic imaging; Adolescent; Adult; Aged; Aged, 80 and over; Algorithms; Female; Humans; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Male; Middle Aged; Pattern Recognition, Automated; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed; Treatment Outcome; Young Adult
5.
Int J Comput Assist Radiol Surg ; 12(12): 2055-2067, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28188486

ABSTRACT

PURPOSE: In 2005, an application for surgical planning called AYRA was designed and validated by different surgeons and engineers at the Virgen del Rocío University Hospital, Seville (Spain). However, the segmentation methods included in AYRA and in other surgical planning applications are not able to accurately segment tumors that appear in soft tissue. The aims of this paper are to offer an exhaustive validation of an accurate semiautomatic segmentation tool to delineate retroperitoneal tumors in CT images and to aid physicians in planning both radiotherapy doses and surgery. METHODS: A panel of 6 experts manually segmented 11 tumor cases, and the segmentation results were compared exhaustively with: the results provided by a surgical planning tool (AYRA), the segmentations obtained using a radiotherapy treatment planning system (Pinnacle), the segmentation results obtained by a group of experts in the delineation of retroperitoneal tumors, and the segmentation results of the algorithm under validation. RESULTS: 11 cases of retroperitoneal tumors were tested. The proposed algorithm provided accurate segmentations of the tumors. Moreover, the algorithm requires minimal computational time, on average 90.5% less than that required to manually contour the same tumor. CONCLUSION: A method for the semiautomatic segmentation of retroperitoneal tumors has been validated in depth. AYRA, as well as other surgical and radiotherapy planning tools, could be greatly improved by including this algorithm.


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Radiotherapy Planning, Computer-Assisted/methods; Retroperitoneal Neoplasms/diagnosis; Adolescent; Adult; Humans; Male; Retroperitoneal Neoplasms/therapy; Tomography, X-Ray Computed; Young Adult
6.
Med Biol Eng Comput ; 55(1): 1-15, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27099157

ABSTRACT

An innovative algorithm has been developed for the segmentation of retroperitoneal tumors in 3D radiological images. This algorithm makes it possible for radiation oncologists and surgeons to semiautomatically select tumors for possible future radiation treatment and surgery. It is based on continuous convex relaxation methodology, the main novelty being the introduction of the accumulated gradient distance, which incorporates both intensity and gradient information into the segmentation process. The algorithm was used to segment 26 CT image volumes, and the results were compared with manual contouring of the same tumors. The proposed algorithm achieved 90% sensitivity, 100% specificity and 84% positive predictive value, obtaining a mean distance to the closest point of 3.20 pixels. The algorithm's dependence on the initial manual contour was also analyzed, with results showing that the algorithm substantially reduced the variability of the manual segmentations carried out by different specialists. The algorithm was also compared with four benchmark algorithms (thresholding, edge-based level set, region-based level set and continuous max-flow with two labels). To the best of our knowledge, this is the first time the segmentation of retroperitoneal tumors for radiotherapy planning has been addressed.
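The "mean distance to the closest point" used here to compare automatic and manual contours can be computed with a k-d tree. A small sketch, assuming contours are given as arrays of pixel (or voxel) coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_distance_to_closest_point(contour_a, contour_b):
    """Mean distance (in pixels) from each point of contour A to its closest
    point on contour B; contours are (N, 2) or (N, 3) coordinate arrays."""
    tree = cKDTree(contour_b)
    dists, _ = tree.query(contour_a)
    return float(dists.mean())
```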


Subject(s)
Imaging, Three-Dimensional; Radiotherapy Planning, Computer-Assisted; Retroperitoneal Neoplasms/diagnostic imaging; Retroperitoneal Neoplasms/radiotherapy; Adolescent; Adult; Algorithms; Female; Humans; Linear Models; Male; Observer Variation; Young Adult
7.
Burns ; 41(8): 1883-1890, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26188898

ABSTRACT

PURPOSE: In this paper, an automatic system to diagnose burn depth from colour digital photographs is presented. JUSTIFICATION: The success rate in determining burn depth is low for inexperienced surgeons (around 50%) and rises only to 64-76% for experienced surgeons. Determining burn depth is one of the main steps in establishing the first treatment, which is crucial for the patient's evolution. As the cost of maintaining a Burn Unit is very high, it would be desirable to have an automatic system that gives a first assessment in local medical centres or emergency departments, where specialists are lacking. METHOD: To this aim, a psychophysical experiment to determine the physical characteristics that physicians employ to diagnose burn depth is described. A Multidimensional Scaling Analysis (MDS) is then applied to the data obtained from the experiment in order to identify these physical features. Subsequently, these characteristics are translated into mathematical features. Finally, via a classifier (Support Vector Machine) and a feature selection method, the discriminant power of these mathematical features to distinguish among burn depths is analysed, and the subset of features that best estimates the burn depth is selected. RESULTS: A success rate of 79.73% was obtained when burns were classified into those which needed grafts and those which did not. CONCLUSIONS: The results validate the ability of the features extracted from the psychophysical experiment to classify burns by depth.
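The classification stage described here, an SVM combined with sequential feature selection, can be sketched with scikit-learn. The feature matrix below is a random placeholder; the actual psychophysically derived features, the number of features retained and the kernel choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))     # placeholder for the psychophysically derived features
y = rng.integers(0, 2, size=120)   # 1 = needs graft, 0 = does not (synthetic labels)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
selector = SequentialFeatureSelector(svm, n_features_to_select=6,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_selected = selector.transform(X)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
print("CV accuracy:", cross_val_score(svm, X_selected, y, cv=5).mean())
```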


Subject(s)
Burns/pathology; Diagnosis, Computer-Assisted/methods; Skin/pathology; Support Vector Machine; Algorithms; Automation; Burn Units; Burns/classification; Databases, Factual; Humans; Photography; Reproducibility of Results; Sensitivity and Specificity; Trauma Severity Indices
8.
Stud Health Technol Inform ; 210: 399-403, 2015.
Article in English | MEDLINE | ID: mdl-25991174

ABSTRACT

This paper presents a fully automatic landmark detection method for breast reconstruction aesthetic assessment. The landmarks detected are the suprasternal notch (SSN), the armpits, the nipples, and the inframammary fold (IMF); these landmarks are commonly used to perform anthropometric measurements for aesthetic assessment. The methodological approach is based on both illumination and morphological analysis. The proposed method has been tested on 21 images. Good overall performance is observed, although several improvements are still needed to refine the detection of the nipples and the SSN.


Subject(s)
Anatomic Landmarks/anatomy & histology; Breast/anatomy & histology; Breast/surgery; Image Interpretation, Computer-Assisted/methods; Photography/methods; Plastic Surgery Procedures/methods; Female; Humans; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Treatment Outcome
9.
IEEE Trans Med Imaging ; 33(5): 1137-47, 2014 May.
Article in English | MEDLINE | ID: mdl-24770918

ABSTRACT

In this paper, different model-based methods for the classification of global patterns in dermoscopic images are proposed. Global pattern identification is part of the pattern analysis framework, the melanoma diagnosis method most widely used among dermatologists. The modeling is performed in two senses: first, a dermoscopic image is modeled by a finite symmetric conditional Markov model applied to the L*a*b* color space, and the estimated parameters of this model are treated as features. In turn, the distributions of these features are assumed to follow different models across a lesion: a Gaussian model, a Gaussian mixture model, and a bag-of-features histogram model. For each case, the classification is carried out by an image retrieval approach with different distance metrics. The main objective is to classify a whole pigmented lesion into one of three possible patterns: globular, homogeneous, or reticular. An extensive evaluation of the performance of each method has been carried out on an image database extracted from a public Atlas of Dermoscopy. The best classification success rate, 78.44% on average, is achieved by the Gaussian mixture model-based method. In a further evaluation, the multicomponent pattern is analyzed, obtaining a 72.91% success rate.
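A simplified variant of the best-performing approach, one Gaussian mixture per global pattern fitted to Markov-model features, might look as follows. The paper classifies via an image-retrieval scheme with distance metrics; here a plain maximum-likelihood decision stands in for that step, and the feature layout is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

PATTERNS = ("globular", "homogeneous", "reticular")

def fit_pattern_models(train_features, n_components=3, seed=0):
    """Fit one Gaussian mixture per global pattern.

    train_features: dict mapping pattern name -> (N, d) array of per-block
    Markov-model features pooled from training lesions of that pattern.
    """
    return {p: GaussianMixture(n_components=n_components, random_state=seed)
               .fit(train_features[p]) for p in PATTERNS}

def classify_lesion(models, lesion_features):
    """Assign the pattern whose mixture gives the highest average log-likelihood
    over the lesion's feature vectors (a simplification of the retrieval
    scheme used in the paper)."""
    scores = {p: m.score(lesion_features) for p, m in models.items()}
    return max(scores, key=scores.get)
```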


Subject(s)
Dermoscopy/methods; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Melanoma/diagnosis; Melanoma/pathology; Normal Distribution; Skin/pathology
10.
IEEE Trans Pattern Anal Mach Intell ; 35(11): 2706-19, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24051730

ABSTRACT

Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
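The "temporal contrast" behaviour of a DVS pixel can be approximated from an ordinary frame sequence. The sketch below quantizes events to frame times, so it is only a coarse stand-in for real microsecond-latency hardware and is not the frame-to-event mapping methodology of the paper; the threshold value is an assumption.

```python
import numpy as np

def frames_to_events(frames, threshold=0.15, eps=1e-6):
    """Emit DVS-like ON/OFF events from a frame sequence (T, H, W), values >= 0.

    A pixel fires when its log-intensity has changed by more than `threshold`
    since the last event at that pixel. Returns (frame_index, row, col, polarity)
    tuples; a real DVS would report these with microsecond resolution.
    """
    log_ref = np.log(frames[0] + eps)
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + eps)
        diff = log_now - log_ref
        for pol, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
            rows, cols = np.nonzero(mask)
            events.extend((t, r, c, pol) for r, c in zip(rows, cols))
            log_ref[mask] = log_now[mask]
    return events
```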


Subject(s)
Algorithms; Data Compression/methods; Decision Support Techniques; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Pattern Recognition, Automated/methods
11.
IEEE Trans Image Process ; 22(12): 5322-35, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23996560

ABSTRACT

This paper presents the first framework capable of performing active contour segmentation using Earth Mover's Distance (EMD) to measure dissimilarity between multidimensional feature distributions. EMD is the best known and understood cross-bin histogram distance measure, and as such it allows for meaningful comparisons between distributions, unlike bin-to-bin measures that only account for discrepancies on a bin-to-bin basis. Because EMD is obtained with linear programming techniques, its differential structure with respect to variations in bin weights as the active contour evolves is expressed through sensitivity analysis. Euler-Lagrange equations are then derived from the computed sensitivity at every iteration to produce gradient descent flows. We validate our approach with color image segmentation, in comparison with state-of-the-art Bhattacharyya (bin-to-bin) and 1D EMD (cross-bin) active contours. Some unique advantages of cross-bin comparison are highlighted in our segmentation results: better perceptual value and increased robustness with respect to the initialization.
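The difference between cross-bin and bin-to-bin histogram comparison can be seen already with one-dimensional distributions. The example below uses SciPy's 1D Wasserstein distance (the 1D EMD mentioned as a baseline) against a Bhattacharyya distance; it is not the multidimensional EMD active contour proposed in the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_1d(hist_p, hist_q, bin_centers):
    """Earth Mover's Distance between two 1D histograms (cross-bin measure)."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return wasserstein_distance(bin_centers, bin_centers, u_weights=p, v_weights=q)

def bhattacharyya(hist_p, hist_q):
    """Bin-to-bin Bhattacharyya distance, for comparison with the cross-bin EMD
    (small constant added to avoid log(0) for disjoint histograms)."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

# A one-bin shift: the EMD grows smoothly with the shift, while the
# bin-to-bin measure saturates as soon as the supports stop overlapping.
centers = np.arange(8, dtype=float)
a = np.array([0, 10, 0, 0, 0, 0, 0, 0], dtype=float)
b = np.array([0, 0, 10, 0, 0, 0, 0, 0], dtype=float)
print(emd_1d(a, b, centers), bhattacharyya(a, b))
```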

12.
J Biomed Opt ; 18(6): 066017, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23804164

ABSTRACT

The diagnosis of neuromuscular diseases is based on the subjective visual assessment of patient biopsies by a specialist pathologist. A system for the objective analysis and classification of muscular dystrophies and neurogenic atrophies from fluorescence microscopy images of muscle biopsies is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. Feature extraction is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, computed by treating the biopsy as a graph in which each fiber is a node and two nodes are connected if the corresponding fibers are adjacent. Feature selection using sequential forward and sequential backward selection, classification using a Fuzzy ARTMAP neural network, and a study grading disease severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for training and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.
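A stripped-down version of the fiber-graph construction, a watershed segmentation followed by an adjacency graph, might look like the following sketch. The threshold, peak distance and adjacency rule are illustrative assumptions, and the 24 + 58 features of the paper are not reproduced.

```python
import numpy as np
import networkx as nx
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def fibers_to_graph(fluorescence_img):
    """Segment fibers with a watershed and build the fiber-adjacency graph.

    Each fiber (watershed region) becomes a node; an edge links two fibers
    whose regions touch. Structural features (degree statistics, clustering,
    etc.) can then be read off the graph.
    """
    binary = fluorescence_img > threshold_otsu(fluorescence_img)
    distance = ndimage.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary, min_distance=10)
    markers = np.zeros_like(fluorescence_img, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=binary)

    graph = nx.Graph()
    graph.add_nodes_from(np.unique(labels[labels > 0]))
    # Two fibers are adjacent if their labels occur side by side in the image.
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        pairs = np.stack([a.ravel(), b.ravel()], axis=1)
        pairs = pairs[(pairs[:, 0] > 0) & (pairs[:, 1] > 0) & (pairs[:, 0] != pairs[:, 1])]
        graph.add_edges_from(map(tuple, np.unique(pairs, axis=0)))
    return labels, graph
```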


Subject(s)
Diagnosis, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Neuromuscular Diseases/classification; Neuromuscular Diseases/diagnosis; Algorithms; Atrophy; Biopsy/methods; Databases, Factual; Diagnosis, Computer-Assisted/instrumentation; Humans; Models, Statistical; Muscle, Skeletal/pathology; Muscles/pathology; Neural Networks, Computer; Neuromuscular Diseases/pathology; Reproducibility of Results
13.
IEEE Trans Med Imaging ; 32(6): 1111-20, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23542950

ABSTRACT

In this paper, a psychophysical experiment and a multidimensional scaling (MDS) analysis are carried out to determine the physical characteristics that physicians employ to diagnose burn depth. Subsequently, these characteristics are translated into mathematical features correlated with them. Finally, a study verifying the ability of these mathematical features to classify burns is performed. In this study, a space whose axes correlate with the MDS axes has been developed. 74 images have been represented in this space and classified with a k-nearest neighbor classifier. A success rate of 66.2% was obtained when classifying burns into three burn depths, and a success rate of 83.8% was obtained when burns were classified into those which needed grafts and those which did not. Additional studies comparing our system with principal component analysis and a support vector machine classifier have been performed. The results validate the ability of the mathematical features extracted from the psychophysical experiment to classify burns by depth. In addition, the method has been compared with another state-of-the-art method on the same database.
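The classification stage, a k-nearest-neighbor classifier on features correlated with the MDS axes, can be sketched with scikit-learn. The feature matrix below is a random placeholder standing in for the 74 burn images described, and the number of neighbors is an assumption.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Placeholder feature matrix: one row per burn image, columns are the
# mathematical features derived from the psychophysical experiment.
X = rng.normal(size=(74, 6))
y = rng.integers(0, 3, size=74)     # three burn-depth classes (synthetic labels)

# Low-dimensional embedding of the dissimilarities, for visual inspection only.
embedding = MDS(n_components=2, dissimilarity="euclidean", random_state=0).fit_transform(X)
print("embedding shape:", embedding.shape)

# k-NN classification in the feature space.
knn = KNeighborsClassifier(n_neighbors=5)
print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```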


Subject(s)
Burns/diagnosis; Burns/pathology; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Psychophysics/methods; Color; Humans; Principal Component Analysis; Skin/anatomy & histology; Skin/pathology; Support Vector Machine
14.
BMC Med ; 11: 77, 2013 Mar 20.
Article in English | MEDLINE | ID: mdl-23514382

ABSTRACT

BACKGROUND: The diagnosis of neuromuscular diseases is strongly based on the histological characterization of muscle biopsies. However, this morphological analysis is mostly a subjective process and difficult to quantify. We have tested if network science can provide a novel framework to extract useful information from muscle biopsies, developing a novel method that analyzes muscle samples in an objective, automated, fast and precise manner. METHODS: Our database consisted of 102 muscle biopsy images from 70 individuals (including controls, patients with neurogenic atrophies and patients with muscular dystrophies). We used this to develop a new method, Neuromuscular DIseases Computerized Image Analysis (NDICIA), that uses network science analysis to capture the defining signature of muscle biopsy images. NDICIA characterizes muscle tissues by representing each image as a network, with fibers serving as nodes and fiber contacts as links. RESULTS: After a 'training' phase with control and pathological biopsies, NDICIA was able to quantify the degree of pathology of each sample. We validated our method by comparing NDICIA quantification of the severity of muscular dystrophies with a pathologist's evaluation of the degree of pathology, resulting in a strong correlation (R = 0.900, P <0.00001). Importantly, our approach can be used to quantify new images without the need for prior 'training'. Therefore, we show that network science analysis captures the useful information contained in muscle biopsies, helping the diagnosis of muscular dystrophies and neurogenic atrophies. CONCLUSIONS: Our novel network analysis approach will serve as a valuable tool for assessing the etiology of muscular dystrophies or neurogenic atrophies, and has the potential to quantify treatment outcomes in preclinical and clinical trials.


Subject(s)
Image Processing, Computer-Assisted/methods; Muscles/pathology; Muscular Atrophy/diagnosis; Muscular Dystrophies/diagnosis; Neural Networks, Computer; Pathology/methods; Automation/methods; Biopsy; Humans
15.
Burns ; 37(7): 1233-40, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21703768

ABSTRACT

In this paper, a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of the patient, adapted to age, weight, gender and constitution, is created. On this 3D model, physicians mark both the burns and their depths, allowing the burnt surface area to be calculated automatically by the system. Each patient's model, together with photographs and the burn area estimation, can be stored, so these data can be included in the patient's clinical records for further review. The system was validated in two experiments. In the first, paper patches of known size were attached to different parts of the body of 37 volunteers. A panel of 5 experts estimated the extent of the patches using the Rule of Nines, while our system estimated the area of each "artificial burn". To test the null hypothesis, Student's t-test was applied to the collected data. In addition, the intraclass correlation coefficient (ICC) was calculated, and a value of 0.9918 was obtained, showing that the reliability of the program in calculating the area is 99%. In the second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines, and the two measuring methods were compared via Student's t-test and the ICC. The hypothesis of no difference between the two measures holds only for deep dermal burns, and the ICC differs significantly, indicating that area estimation with classical techniques can result in a wrong diagnosis of the burnt surface.
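The intraclass correlation coefficient used to validate the BAI system can be computed from a subjects-by-raters matrix. Below is a minimal implementation of ICC(2,1), one common formulation; the paper does not state which ICC variant was used, and the example values are invented for illustration.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, n_raters) array, e.g. one column per area
    estimation method (BAI system vs. Rule of Nines) and one row per patch
    or patient.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    ss_rows = k * ((subject_means - grand_mean) ** 2).sum()
    ss_cols = n * ((rater_means - grand_mean) ** 2).sum()
    ss_total = ((ratings - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Illustrative data: % body surface area estimated by two methods for three patches.
methods = np.array([[9.0, 9.2], [18.0, 17.5], [4.5, 4.4]])
print(icc_2_1(methods))
```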


Subject(s)
Body Surface Area; Burns/pathology; Computer Graphics; Imaging, Three-Dimensional/methods; Adolescent; Adult; Aged; Aged, 80 and over; Child; Child, Preschool; Female; Humans; Infant; Male; Middle Aged; Young Adult
16.
IEEE Trans Neural Netw ; 21(4): 609-20, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20181543

ABSTRACT

Address-event representation (AER) is an emergent hardware technology with high potential for providing, in the near future, a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous to vision sensing, since there is no need to wait for full frames to be sensed. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolution chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.


Subject(s)
Form Perception/physiology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Signal Processing, Computer-Assisted; Vision, Ocular/physiology; Humans; Signal Processing, Computer-Assisted/instrumentation; Time Factors
17.
J Biomed Opt ; 10(3): 034014, 2005.
Article in English | MEDLINE | ID: mdl-16229658

ABSTRACT

In this paper, a burn color image segmentation and classification system is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish among the different types of burns (burn depths). Digital color photographs are used as inputs to the system. The system is based on color and texture information, since these are the characteristics observed by physicians in order to form a diagnosis. A perceptually uniform color space (L*u*v*) was used, since Euclidean distances calculated in this space correspond to perceptual color differences. After the burn is segmented, a set of color and texture features is calculated that serves as the input to a Fuzzy-ARTMAP neural network. The neural network classifies burns into three types of burn depths: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images, yielding an average classification success rate of 82%.
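The use of the perceptually uniform L*u*v* space can be illustrated briefly: Euclidean distances in this space approximate perceived color differences, which is why the color descriptors are computed there. A small sketch with illustrative function names, not the paper's full feature set:

```python
import numpy as np
from skimage import color

def luv_color_features(rgb_img, burn_mask):
    """Color descriptors of a segmented burn in the perceptually uniform
    CIE L*u*v* space (mean and std of each channel over the burn region)."""
    luv = color.rgb2luv(rgb_img)
    pixels = luv[burn_mask]
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def perceptual_difference(rgb_a, rgb_b):
    """Euclidean distance in L*u*v* between two RGB colors (components in [0, 1]),
    which approximates the perceived color difference."""
    def to_luv(c):
        return color.rgb2luv(np.asarray(c, dtype=float).reshape(1, 1, 3))[0, 0]
    return float(np.linalg.norm(to_luv(rgb_a) - to_luv(rgb_b)))
```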


Subject(s)
Artificial Intelligence; Burns/classification; Burns/pathology; Color; Colorimetry/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Feasibility Studies; Fuzzy Logic; Humans; Image Enhancement/methods; Information Storage and Retrieval/methods; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity; Severity of Illness Index
18.
Burns ; 31(3): 275-81, 2005 May.
Article in English | MEDLINE | ID: mdl-15774281

ABSTRACT

In this paper, a computer-assisted diagnosis (CAD) tool for the classification of burns by depth is proposed. The aim of the system is to separate burn wounds from healthy skin and to distinguish among the different types of burns (burn depths) by means of digital photographs. It is intended to be used as an aid to diagnosis in local medical centres, where there is a lack of specialists, and as an educational tool. The system is based on the analysis of digital photographs, from which it extracts colour and texture information, as these are the characteristics observed by physicians in order to form a diagnosis. Clinical effectiveness of the method was demonstrated on 35 clinical burn wound images, yielding an average classification success rate of 88% compared to expert-classified images.


Subject(s)
Burns/diagnosis; Diagnosis, Computer-Assisted/methods; Trauma Severity Indices; Algorithms; Burns/classification; Burns/pathology; Color; Humans; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated; Photography/methods
19.
Inf Process Med Imaging ; 18: 294-305, 2003 Jul.
Article in English | MEDLINE | ID: mdl-15344466

ABSTRACT

In this paper, a new system for burn diagnosis is proposed. The aim of the system is to separate burn wounds from healthy skin, and to distinguish the different types of burns (burn depths) from each other, identifying each one. The system is based on colour and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. We use a perceptually uniform colour space (L*u*v*), since Euclidean distances calculated in this space correspond to perceptual colour differences. After the burn is segmented, colour and texture descriptors are calculated and used as inputs to a Fuzzy-ARTMAP neural network. The neural network classifies them into three types of burns: superficial dermal, deep dermal, and full thickness. Clinical effectiveness of the method was demonstrated on 62 clinical burn wound images obtained from digital colour photographs, yielding an average classification success rate of 82% compared to expert-classified images.


Subject(s)
Algorithms; Burns/diagnosis; Colorimetry/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated; Photography/methods; Subtraction Technique; Burns/classification; Color; Computer Simulation; Diagnosis, Computer-Assisted/methods; Fuzzy Logic; Humans; Image Enhancement/methods; Models, Biological; Models, Statistical; Neural Networks, Computer; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted