Results 1 - 7 of 7
1.
Eur Heart J Digit Health ; 5(3): 260-269, 2024 May.
Article in English | MEDLINE | ID: mdl-38774376

ABSTRACT

Aims: Augmenting echocardiography with artificial intelligence would allow for automated assessment of routine parameters and identification of disease patterns not easily recognized otherwise. View classification is an essential first step before deep learning can be applied to the echocardiogram.

Methods and results: We trained two- and three-dimensional convolutional neural networks (CNNs) using transthoracic echocardiographic (TTE) studies obtained from 909 patients to classify nine view categories (10 269 videos). Transthoracic echocardiographic studies from 229 patients were used in internal validation (2582 videos). Convolutional neural networks were tested on 100 patients with comprehensive TTE studies (where the two examples chosen by CNNs as most likely to represent a view were evaluated) and 408 patients with five view categories obtained via point-of-care ultrasound (POCUS). The overall accuracy of the two-dimensional CNN was 96.8%, and the averaged area under the curve (AUC) was 0.997 on the comprehensive TTE testing set; these numbers were 98.4% and 0.998, respectively, on the POCUS set. For the three-dimensional CNN, the accuracy and AUC were 96.3% and 0.998 for full TTE studies and 95.0% and 0.996 on POCUS videos, respectively. The positive predictive value, defined as the proportion of predicted views that were correctly identified, was higher with two-dimensional than with three-dimensional networks, exceeding 93% in apical, short-axis aortic valve, and parasternal long-axis left ventricle views.

Conclusion: An automated view classifier utilizing CNNs was able to classify cardiac views obtained using TTE and POCUS with high accuracy. The view classifier will facilitate the application of deep learning to echocardiography.
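The abstract reports top-1 accuracy and an averaged one-vs-rest AUC per view category. The paper's CNN architectures are not reproduced here; as a minimal pure-Python sketch of just the evaluation protocol (class indices and scores are illustrative):

```python
# Sketch of the view-classifier metrics: top-1 accuracy over per-video
# softmax scores, and a rank-based (Mann-Whitney) one-vs-rest AUC averaged
# across view categories. Pure Python; inputs are illustrative.

def accuracy(scores, labels):
    """Fraction of videos whose highest-scoring view matches the label."""
    correct = sum(
        1 for s, y in zip(scores, labels)
        if max(range(len(s)), key=s.__getitem__) == y
    )
    return correct / len(labels)

def auc_one_vs_rest(scores, labels, cls):
    """Rank-based AUC for one view category versus all others."""
    pos = [s[cls] for s, y in zip(scores, labels) if y == cls]
    neg = [s[cls] for s, y in zip(scores, labels) if y != cls]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def averaged_auc(scores, labels, n_classes):
    """Macro-average of one-vs-rest AUCs over classes present in the labels."""
    present = [c for c in range(n_classes) if c in labels]
    return sum(auc_one_vs_rest(scores, labels, c) for c in present) / len(present)
```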

2.
Sci Rep ; 12(1): 20057, 2022 11 21.
Article in English | MEDLINE | ID: mdl-36414660

ABSTRACT

Wound classification is an essential step of wound diagnosis. An efficient classifier can assist wound specialists in classifying wound types with less financial and time costs and help them decide on an optimal treatment procedure. This study developed a deep neural network-based multi-modal classifier that uses wound images and their corresponding locations to categorize wounds into multiple classes, including diabetic, pressure, surgical, and venous ulcers. A body map was also developed to prepare the location data, which can help wound specialists tag wound locations more efficiently. Three datasets containing images and their corresponding location information were designed with the help of wound specialists. The multi-modal network was developed by concatenating the image-based and location-based classifier outputs, along with other modifications. The maximum accuracy on mixed-class classifications (containing background and normal skin) varies from 82.48% to 100% across experiments, and the maximum accuracy on wound-class classifications (containing only diabetic, pressure, surgical, and venous) varies from 72.95% to 97.12%. The proposed multi-modal network also showed a significant improvement over results reported in previous literature.
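The key step described is concatenating image-based classifier outputs with body-map location information before final classification. A minimal sketch of that fusion, assuming a one-hot body-map encoding; the class list and region names are illustrative, not the paper's exact ones:

```python
# Hedged sketch of the multi-modal fusion step: image-classifier scores
# are concatenated with a one-hot encoding of the body-map wound location
# to form the input of the final classifier. Names below are assumptions.

WOUND_CLASSES = ["diabetic", "pressure", "surgical", "venous"]
BODY_REGIONS = ["foot", "sacrum", "abdomen", "lower_leg"]  # assumed regions

def one_hot(region):
    """Encode a body-map region as a one-hot vector."""
    return [1.0 if r == region else 0.0 for r in BODY_REGIONS]

def fuse(image_scores, region):
    """Concatenate image-classifier scores with the location encoding."""
    assert len(image_scores) == len(WOUND_CLASSES)
    return list(image_scores) + one_hot(region)
```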


Subject(s)
Neural Networks, Computer
3.
Adv Wound Care (New Rochelle) ; 11(12): 687-709, 2022 12.
Article in English | MEDLINE | ID: mdl-34544270

ABSTRACT

Significance: Accurately predicting wound healing trajectories is difficult for wound care clinicians due to the complex and dynamic processes involved in wound healing. Wound care teams capture images of wounds during clinical visits, generating big datasets over time. Developing novel artificial intelligence (AI) systems can help clinicians diagnose wounds, assess the effectiveness of therapy, and predict healing outcomes.

Recent Advances: Rapid developments in computer processing have enabled AI-based systems that improve diagnosis and the effectiveness of therapy in various clinical specializations. In the past decade, AI has revolutionized all types of medical imaging, including X-ray, ultrasound, computed tomography, and magnetic resonance imaging, but clinically and computationally mature AI-based systems for high-quality wound care that could improve patient outcomes have yet to be developed.

Critical Issues: In the current standard of care, collecting wound images at every clinical visit and interpreting and archiving the data are cumbersome and time-consuming. Commercial platforms have been developed to capture images, perform wound measurements, and provide clinicians with a workflow for diagnosis, but AI-based systems are still in their infancy. This systematic review summarizes the breadth and depth of the most recent and relevant work in intelligent image-based data analysis and system development for wound assessment.

Future Directions: With the increasing availability of massive data (wound images, wound-specific electronic health records, etc.) and powerful computing resources, AI-based digital platforms will play a significant role in delivering data-driven care to people suffering from debilitating chronic wounds.


Subject(s)
Artificial Intelligence , Image Processing, Computer-Assisted , Electronic Health Records , Humans , Image Processing, Computer-Assisted/methods , Workflow
4.
J Biomed Inform ; 125: 103972, 2022 01.
Article in English | MEDLINE | ID: mdl-34920125

ABSTRACT

Wound prognostic models not only provide an estimate of wound healing time to motivate patients to follow their treatments but can also help clinicians decide whether to use standard care or adjuvant therapies and assist them in designing clinical trials. However, collecting prognosis factors from the Electronic Medical Records (EMR) of patients is challenging due to privacy, sensitivity, and confidentiality. In this study, we developed time-series medical generative adversarial networks (GANs) to generate synthetic wound prognosis factors using very limited information collected during routine care in a specialized wound care facility. The generated prognosis variables are used in developing a predictive model for chronic wound healing trajectory. Our novel medical GAN can produce both continuous and categorical features from EMR. Moreover, we applied temporal information to our model by considering data collected from the weekly follow-ups of patients. Conditional training strategies were utilized to enhance training and generate classified data in terms of healing or non-healing. The ability of the proposed model to generate realistic EMR data was evaluated by TSTR (test on the synthetic, train on the real), discriminative accuracy, and visualization. We utilized samples generated by our proposed GAN in training a prognosis model to demonstrate its real-life application. Using the generated samples in training predictive models improved classification accuracy by 6.66-10.01% compared to the previous EMR-GAN. Additionally, the suggested prognosis classifier achieved areas under the curve (AUC) of 0.875, 0.810, and 0.647 when training the network using data from the first three visits, first two visits, and first visit, respectively. These results indicate a significant improvement in wound healing prediction compared to previous prognosis models.
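The TSTR evaluation mentioned above fits a classifier on synthetic records and measures its accuracy on real ones. A minimal sketch of that protocol, using a nearest-centroid classifier and toy healing/non-healing feature vectors as stand-ins for the paper's prognosis model and EMR data:

```python
# Sketch of TSTR ("train on the synthetic, test on the real"): if a model
# fit only on generated records classifies real records well, the synthetic
# data has preserved useful structure. Classifier and data are toy stand-ins.

def centroid(rows):
    """Mean feature vector of a list of records."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train_centroids(X, y):
    """Per-class centroid, fit on synthetic records."""
    return {c: centroid([x for x, t in zip(X, y) if t == c]) for c in set(y)}

def predict(model, x):
    """Assign the class whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, model[c]))
    return min(model, key=dist)

def tstr_accuracy(X_syn, y_syn, X_real, y_real):
    model = train_centroids(X_syn, y_syn)    # train on synthetic
    hits = sum(predict(model, x) == t for x, t in zip(X_real, y_real))
    return hits / len(y_real)                # test on real
```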


Subject(s)
Confidentiality , Electronic Health Records , Humans , Privacy , Prognosis , Time Factors
5.
Accid Anal Prev ; 159: 106211, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34126276

ABSTRACT

Work zone safety management and research rely heavily on the quality of work zone crash data. However, a police officer may misclassify a crash in the structured data due to restrictive options in the crash report, a lack of understanding of their importance, lack of time due to workload, or failure to recognize the work zone as a contributing factor to the crash. Consequently, work zone crashes are underrepresented in crash statistics. Crash narratives contain valuable information that is not included in the structured data. The objective of this study is to develop a classifier that applies text mining techniques to quickly find missed work zone (WZ) crashes in the unstructured text saved in the crash narratives. The study used three years of crash data from 2017 to 2019: the 2017-2018 data was used for training, and the 2019 data for testing. A unigram + bigram noisy-OR classifier was developed and proven to be an efficient and effective means of classifying work zone crashes based on key information in the crash narrative. An ad hoc analysis of misclassified work zone crashes sheds light on when, where, and why work zone crashes are most likely to be missed.
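A noisy-OR classifier treats each matched narrative feature as an independent cause: the narrative is flagged if at least one indicative n-gram "fires", so the score is one minus the product of the features' failure probabilities. A minimal sketch, with invented feature probabilities; in the study these would be estimated from the 2017-2018 training narratives:

```python
# Sketch of a unigram + bigram noisy-OR classifier over crash narratives.
# P(work zone) = 1 - prod(1 - p_i) over matched features. The feature set
# and probabilities below are illustrative assumptions, not the paper's.

def ngrams(text, n):
    """Set of lowercase n-grams of the narrative."""
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

# P(work-zone crash | feature present), assumed values for illustration
FEATURE_PROBS = {
    "cone": 0.6, "flagger": 0.8,
    "work zone": 0.95, "lane closure": 0.7,
}

def noisy_or_score(narrative):
    """Combine matched uni/bigram evidence with the noisy-OR rule."""
    feats = ngrams(narrative, 1) | ngrams(narrative, 2)
    p_not = 1.0
    for feat, p in FEATURE_PROBS.items():
        if feat in feats:
            p_not *= 1.0 - p
    return 1.0 - p_not

def is_work_zone(narrative, threshold=0.5):
    return noisy_or_score(narrative) >= threshold
```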


Subject(s)
Accidents, Traffic , Police , Data Mining , Humans , Narration , Safety Management
6.
Comput Biol Med ; 134: 104536, 2021 07.
Article in English | MEDLINE | ID: mdl-34126281

ABSTRACT

Acute and chronic wounds are a challenge to healthcare systems around the world and affect many people's lives annually. Wound classification is a key step in wound diagnosis that helps clinicians identify an optimal treatment procedure; hence, a high-performance classifier assists wound specialists in classifying wound types with less financial and time costs. Different wound classification methods based on machine learning and deep learning have been proposed in the literature. In this study, we developed an ensemble Deep Convolutional Neural Network-based classifier to categorize wound images into multiple classes, including surgical, diabetic, and venous ulcers. The output classification scores of two classifiers (namely, patch-wise and image-wise) are fed into a Multilayer Perceptron to provide superior classification performance. A 5-fold cross-validation approach is used to evaluate the proposed method. We obtained maximum and average classification accuracy values of 96.4% and 94.28% for binary and 91.9% and 87.7% for 3-class classification problems. The proposed classifier was compared with several common deep classifiers and showed significantly higher accuracy. We also tested the proposed method on the Medetec wound image dataset, obtaining accuracy values of 91.2% and 82.9% for binary and 3-class classification. The results show that our proposed method can be used effectively as a decision support system in the classification of wound images and other related clinical applications.
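The ensemble step described above concatenates the patch-wise and image-wise score vectors and passes them through a small Multilayer Perceptron. A pure-Python forward-pass sketch; the layer sizes and weights here are placeholders (in practice they are learned), not the paper's trained network:

```python
# Sketch of the score-fusion MLP: class-probability vectors from two base
# classifiers are concatenated and mapped through one hidden (ReLU) layer
# and a softmax output. Weights are illustrative placeholders.

import math

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer with ReLU, then a softmax over output logits."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    m = max(z)                          # shift for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def fuse_and_classify(patch_scores, image_scores, params):
    """Concatenate the two classifiers' scores and run the fusion MLP."""
    x = list(patch_scores) + list(image_scores)
    return mlp_forward(x, *params)
```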


Subject(s)
Machine Learning , Neural Networks, Computer , Humans
7.
Sci Rep ; 10(1): 21897, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33318503

ABSTRACT

Acute and chronic wounds have varying etiologies and are an economic burden to healthcare systems around the world. The advanced wound care market is expected to exceed $22 billion by 2024. Wound care professionals rely heavily on images and image documentation for proper diagnosis and treatment. Unfortunately, a lack of expertise can lead to improper diagnosis of wound etiology and inaccurate wound management and documentation. Fully automatic segmentation of wound areas in natural images is an important part of the diagnosis and care protocol, since measuring the wound area and deriving quantitative parameters are crucial to treatment. Various deep learning models have achieved success in image analysis, including semantic segmentation. This manuscript proposes a novel convolutional framework based on MobileNetV2 and connected component labelling to segment wound regions from natural images. The advantage of this model is its lightweight, less compute-intensive architecture; performance is not compromised and is comparable to that of deeper neural networks. We built an annotated wound image dataset consisting of 1109 foot ulcer images from 889 patients to train and test the deep learning models. We demonstrate the effectiveness and mobility of our method by conducting comprehensive experiments and analyses on various segmentation neural networks. The full implementation is available at https://github.com/uwm-bigdata/wound-segmentation .
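Connected component labelling, the post-processing step named above, groups adjacent foreground pixels of the network's binary mask so that small spurious regions can be discarded. A minimal pure-Python sketch on a nested-list mask; the 4-connectivity and size threshold are illustrative choices, not necessarily those of the released implementation:

```python
# Sketch of connected-component post-processing for a binary wound mask:
# label 4-connected components of 1-pixels, then zero out components
# smaller than a size threshold. Threshold and connectivity are assumed.

from collections import deque

def connected_components(mask):
    """4-connected components of 1-pixels; each is a list of (row, col)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                q, comp = deque([(r, c)]), []
                seen[r][c] = True
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                comps.append(comp)
    return comps

def remove_small_regions(mask, min_size=4):
    """Return a copy of the mask with components under min_size removed."""
    out = [row[:] for row in mask]
    for comp in connected_components(mask):
        if len(comp) < min_size:
            for y, x in comp:
                out[y][x] = 0
    return out
```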


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Neural Networks, Computer , Wound Healing , Wounds and Injuries/diagnostic imaging , Humans