Results 1 - 12 of 12
1.
PeerJ Comput Sci ; 7: e452, 2021.
Article in English | MEDLINE | ID: mdl-33987454

ABSTRACT

CONTEXT: The interpretation of cardiotocography (CTG) tracings is vital to monitoring fetal well-being during both pregnancy and childbirth. Currently, many studies focus on feature extraction and CTG classification using computer vision approaches to determine the most accurate diagnosis as well as to monitor fetal well-being during pregnancy. Additionally, a fetal monitoring system would be able to perform detection and precise quantification of fetal heart rate patterns. OBJECTIVE: This study aimed to perform a systematic review describing the achievements made by researchers, summarizing the findings of previous work on feature extraction and CTG classification, determining criteria and evaluation methods for the taxonomies of the proposed literature in the CTG field, and distinguishing aspects of relevant research in the field of CTG. METHODS: Articles were searched systematically using three databases, the IEEE Xplore digital library, Science Direct, and Web of Science, over a period of 5 years. Literature from both the medical sciences and engineering was included in the selection to provide a broader understanding for researchers. RESULTS: After screening 372 articles against our protocol of inclusion and exclusion criteria, a final set of 50 articles was obtained. The research literature taxonomy was divided into four stages. The first stage covered proposed methods presenting steps and algorithms for pre-processing, feature extraction, and classification, as well as their use in CTG (20/50 papers). The second stage included the development of systems specifically for automatic feature extraction and CTG classification (7/50 papers). The third stage consisted of reviews and survey articles on automatic feature extraction and CTG classification (3/50 papers). The last stage covered evaluation and comparative studies that determine the best methods for extracting and classifying features, with comparisons based on a set of criteria (20/50 articles). DISCUSSION: This study focused more on the literature than on techniques or methods. It also identified the various types of datasets used in the surveyed works, including publicly available, private, and commercial datasets. To analyze the results, researchers evaluated independent datasets using different techniques. CONCLUSIONS: This systematic review contributes to understanding and gaining insight into the relevant research in the field of CTG by surveying and classifying pertinent research efforts. The review helps to address the current research opportunities, problems and challenges, motivations, and recommendations related to feature extraction and CTG classification, as well as the performance measures and datasets used by other researchers.

2.
PeerJ Comput Sci ; 7: e405, 2021.
Article in English | MEDLINE | ID: mdl-33817048

ABSTRACT

BACKGROUND: Otitis media (OM) is the infection and inflammation of the mucous membrane covering the Eustachian tube and the air-filled cavities of the middle ear and temporal bone. OM is also one of the most common ailments. In clinical practice, OM is diagnosed by visual inspection of otoscope images, a process that is subjective and error-prone. METHODS: In this study, a novel computer-aided decision support model based on a convolutional neural network (CNN) was developed. To improve the generalization ability of the proposed model, a combined channel and spatial attention module (CBAM), residual blocks, and the hypercolumn technique are embedded into the model. All experiments were performed on an open-access tympanic membrane dataset consisting of 956 otoscope images grouped into five classes. RESULTS: The proposed model yielded satisfactory classification performance, with an overall accuracy of 98.26%, sensitivity of 97.68%, and specificity of 99.30%. It produced superior results compared to pre-trained CNNs such as AlexNet, VGG-Nets, GoogLeNet, and ResNets. Consequently, this study indicates that a CNN model equipped with advanced image processing techniques is useful for OM diagnosis. The proposed model may help field specialists achieve objective and repeatable results, decrease the misdiagnosis rate, and support decision-making processes.
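
As an illustration of the channel and spatial attention idea (CBAM) mentioned in the abstract, a minimal PyTorch sketch of such a block is given below; the layer sizes and reduction ratio are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Pool over spatial dims, pass both pooled vectors through a shared MLP.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Concatenate channel-wise average and max maps, then learn a spatial mask.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class CBAMBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: refine a feature map from a backbone CNN.
features = torch.randn(4, 64, 56, 56)   # dummy otoscope feature maps
refined = CBAMBlock(64)(features)       # same shape, attention-weighted
```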

3.
J Digit Imaging ; 34(2): 263-272, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33674979

ABSTRACT

Coronavirus disease (COVID-19) is a pandemic that caused sudden, unexplained pneumonia cases and has had a devastating effect on global public health. Computed tomography (CT) is one of the most effective tools for COVID-19 screening. Specific patterns such as bilateral, peripheral, and basal predominant ground-glass opacity, multifocal patchy consolidation, and a crazy-paving pattern with peripheral distribution can be observed in CT images, and these patterns have been declared findings of COVID-19 infection. For patient monitoring and diagnosis, expeditious and accurate segmentation of COVID-19 regions, which spread into the lung, from CT images provides vital information about the stage of the disease. In this work, we propose a SegNet-based network using the attention gate (AG) mechanism for the automatic segmentation of COVID-19 regions in CT images. AGs can be easily integrated into standard convolutional neural network (CNN) architectures with minimal computing load while increasing model precision and predictive accuracy. In addition, the proposed network was evaluated with Dice, Tversky, and focal Tversky loss functions to deal with the low sensitivity arising from small lesions. The experiments were carried out using a fivefold cross-validation technique on a COVID-19 CT segmentation database containing 473 CT images. The obtained sensitivity, specificity, and Dice scores were 92.73%, 99.51%, and 89.61%, respectively. The superiority of the proposed method is highlighted by comparison with results reported in previous studies, and it is expected to serve as an auxiliary tool that automatically and accurately detects COVID-19 regions in CT images.
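
The Tversky and focal Tversky losses referenced above can be written compactly; the sketch below is a minimal PyTorch version under a common parameterization, with alpha, beta, and gamma set to illustrative defaults rather than the paper's tuned values.

```python
import torch

def tversky_index(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """pred: sigmoid probabilities, target: binary mask, both (N, 1, H, W)."""
    pred, target = pred.flatten(1), target.flatten(1)
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    return (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def focal_tversky_loss(pred, target, gamma=0.75, **kw):
    # The focal exponent re-weights each example by how well it already overlaps,
    # which is intended to help with small lesions, as noted in the abstract.
    return ((1 - tversky_index(pred, target, **kw)) ** gamma).mean()

# Example with random tensors standing in for network output and ground truth.
logits = torch.randn(2, 1, 128, 128)
mask = (torch.rand(2, 1, 128, 128) > 0.9).float()
loss = focal_tversky_loss(torch.sigmoid(logits), mask)
```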


Subjects
COVID-19; Humans; Neural Networks, Computer; SARS-CoV-2; Semantics; Tomography, X-Ray Computed
4.
Med Biol Eng Comput ; 59(1): 57-70, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33222016

ABSTRACT

Brain cancer is a disease caused by the growth of abnormal, aggressive cells in the brain, distinct from normal cells. The diagnosis of brain cancer is producing more accurate results day by day in parallel with technological developments. In this study, a deep learning model called BrainMRNet, developed for mass detection in open-source brain magnetic resonance images, was used. The BrainMRNet model includes three processing components: attention modules, the hypercolumn technique, and residual blocks. To demonstrate the accuracy of the proposed model, three types of tumor leading to brain cancer were examined: glioma, meningioma, and pituitary. In addition, a segmentation method was proposed that determines in which lobe area of the brain the two classes of tumors that cause brain cancer are more concentrated. The classification accuracy rates achieved in the study were 98.18% for glioma, 96.73% for meningioma, and 98.18% for pituitary tumors. At the end of the experiment, using a subset of the glioma and meningioma tumor images, the brain lobe in which the tumor region was seen was determined, and 100% success was achieved in this analysis. In this study, a hybrid deep learning model is presented for brain tumor detection. In addition, open-source software was proposed that statistically determines in which lobe region of the human brain the tumor occurred. The methods applied and tested in the experiments showed promising results with a high level of accuracy, precision, and specificity. These results demonstrate the applicability of the proposed approach in clinical settings to support medical decisions regarding brain tumor detection.
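
The hypercolumn component referred to in the abstract amounts to upsampling feature maps from several depths to a common resolution and concatenating them so the final layers can draw on every level. A minimal PyTorch sketch of that idea follows; the tiny backbone is a placeholder for illustration, not BrainMRNet itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypercolumnNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16 + 32 + 64, num_classes))

    def forward(self, x):
        f1 = self.block1(x)      # shallow, fine detail
        f2 = self.block2(f1)     # mid-level
        f3 = self.block3(f2)     # deep, semantic
        size = f1.shape[2:]
        # Hypercolumn: upsample every level to the shallowest resolution and stack.
        hyper = torch.cat(
            [f1,
             F.interpolate(f2, size=size, mode='bilinear', align_corners=False),
             F.interpolate(f3, size=size, mode='bilinear', align_corners=False)], dim=1)
        return self.head(hyper)

# Example forward pass on a dummy MRI-sized batch.
logits = HypercolumnNet()(torch.randn(2, 3, 224, 224))
```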


Subjects
Delayed Emergence from Anesthesia; Image Processing, Computer-Assisted; Attention; Brain/diagnostic imaging; Humans; Neural Networks, Computer
5.
Appl Soft Comput ; 97: 106580, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32837453

ABSTRACT

A pneumonia of unknown cause, detected in Wuhan, China, and spreading rapidly throughout the world, was declared Coronavirus disease 2019 (COVID-19). Thousands of people have lost their lives to this disease, and its negative effects on public health are ongoing. In this study, an intelligent computer-aided model that can automatically detect positive COVID-19 cases is proposed to support daily clinical applications. The proposed model is based on the convolutional neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through convolution with rich filter families, abstraction, and weight-sharing. Contrary to the generally used transfer learning approach, the proposed deep CNN model was trained from scratch. Instead of pre-trained CNNs, a novel serial network consisting of five convolution layers was designed and used as a deep feature extractor. The extracted deep discriminative features were fed to machine learning algorithms: k-nearest neighbor, support vector machine (SVM), and decision tree. The hyperparameters of the machine learning models were optimized using the Bayesian optimization algorithm. The experiments were conducted on a public COVID-19 radiology database, divided into training and test sets at a ratio of 70% to 30%. The most efficient results were achieved by the SVM classifier, with an accuracy of 98.97%, a sensitivity of 89.39%, a specificity of 99.75%, and an F-score of 96.72%. Consequently, a cheap, fast, and reliable intelligent tool has been provided for COVID-19 infection detection. The developed model can be used to assist field specialists, physicians, and radiologists in the decision-making process. Thanks to the proposed tool, misdiagnosis rates can be reduced, and the model can be used as a retrospective evaluation tool to validate positive COVID-19 infection cases.
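
The overall pipeline of deep features feeding a classical classifier can be sketched as below; random vectors stand in for the CNN features, and scikit-learn's grid search stands in for the paper's Bayesian hyperparameter optimization, both purely as assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))      # stand-in for deep CNN features per X-ray
y = rng.integers(0, 2, size=300)     # stand-in for COVID-19 / normal labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Tune the SVM hyperparameters on the training split only.
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
    cv=5,
)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_, "test accuracy:", search.score(X_te, y_te))
```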

6.
Comput Biol Med ; 121: 103805, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32568679

ABSTRACT

Coronavirus is an RNA virus that causes a wide variety of respiratory infections and can infect both humans and animal species; it often causes pneumonia in humans. Artificial intelligence models have been helpful for successful analyses in the biomedical field. In this study, coronavirus was detected using a deep learning model, a sub-branch of artificial intelligence. Our dataset consists of three classes of chest X-ray imagery: coronavirus, pneumonia, and normal. As a preprocessing step, the images were restructured using the Fuzzy Color technique, and the restructured images were stacked with the original images. In the next step, the stacked dataset was trained with deep learning models (MobileNetV2, SqueezeNet), and the feature sets obtained by the models were processed using the Social Mimic optimization method. Thereafter, the efficient features were combined and classified using Support Vector Machines (SVM). The overall classification rate obtained with the proposed approach was 99.27%. These results show that the proposed model can efficiently contribute to the detection of COVID-19 disease.
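
The shape of this pipeline (a restructured copy of each image stacked with the original, two CNN backbones producing features that are combined and passed to an SVM) can be sketched as follows. The contrast-stretch preprocessing and the simple pooling/concatenation below are stand-ins for the Fuzzy Color technique and the Social Mimic optimization, which are not reproduced here.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

def contrast_stretch(img):
    # Stand-in for the paper's fuzzy-color restructuring step.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def stacked_features(img_rgb, backbones):
    # Stack the original and restructured images, then pool each backbone's
    # feature map into one vector for the pair.
    pair = torch.stack([torch.as_tensor(img_rgb, dtype=torch.float32),
                        torch.as_tensor(contrast_stretch(img_rgb), dtype=torch.float32)])
    feats = []
    with torch.no_grad():
        for net in backbones:
            fmap = net.features(pair)               # (2, C, h, w)
            feats.append(fmap.mean(dim=(0, 2, 3)))  # pooled to (C,)
    return torch.cat(feats).numpy()

backbones = [models.mobilenet_v2().eval(), models.squeezenet1_1().eval()]

rng = np.random.default_rng(0)
images = rng.random((8, 3, 224, 224)).astype(np.float32)   # dummy X-ray batch
labels = np.array([0, 1, 2, 0, 1, 2, 0, 1])                # dummy COVID/pneumonia/normal labels
X = np.stack([stacked_features(im, backbones) for im in images])
clf = SVC().fit(X, labels)
```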


Subjects
Betacoronavirus; Coronavirus Infections/diagnostic imaging; Coronavirus Infections/diagnosis; Deep Learning; Pneumonia, Viral/diagnostic imaging; Pneumonia, Viral/diagnosis; Artificial Intelligence; COVID-19; Color; Computational Biology; Databases, Factual; Fuzzy Logic; Humans; Lung/diagnostic imaging; Pandemics; Pneumonia/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; SARS-CoV-2; Support Vector Machine
7.
Med Hypotheses ; 134: 109426, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31622926

ABSTRACT

Recent studies have shown that convolutional neural networks (CNNs) can be more accurate, more efficient, and even deeper in their training if they include direct connections from layers close to the input to those close to the output in order to transfer activation maps. Building on this observation, this study introduces a new CNN model, the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network, for retinal vessel extraction from fundus images. DCCMED contains concatenated multi encoder-decoder CNNs and connects certain layers to the corresponding input of the subsequent encoder-decoder block in a feed-forward fashion. The DCCMED model has advantageous properties such as reducing pixel vanishing and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), Dice, and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method. The obtained results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores. The results demonstrate that the proposed DCCMED model yields the best performance compared with state-of-the-art methods in terms of accuracy and AUC scores.
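
The patch-based augmentation strategy mentioned above can be illustrated with a short NumPy sketch that crops random image/mask patches and applies a simple flip; the patch size and counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sample_patches(image, mask, patch_size=48, n_patches=200, rng=None):
    """image: (H, W, 3) fundus image, mask: (H, W) binary vessel map."""
    rng = rng or np.random.default_rng()
    h, w = mask.shape
    patches, targets = [], []
    for _ in range(n_patches):
        top = rng.integers(0, h - patch_size + 1)
        left = rng.integers(0, w - patch_size + 1)
        img_p = image[top:top + patch_size, left:left + patch_size]
        msk_p = mask[top:top + patch_size, left:left + patch_size]
        if rng.random() < 0.5:                   # simple horizontal-flip augmentation
            img_p, msk_p = img_p[:, ::-1], msk_p[:, ::-1]
        patches.append(img_p)
        targets.append(msk_p)
    return np.stack(patches), np.stack(targets)

# Example on a dummy DRIVE-sized image (584 x 565).
img = np.random.rand(584, 565, 3)
msk = (np.random.rand(584, 565) > 0.9).astype(np.uint8)
X, Y = sample_patches(img, msk)
print(X.shape, Y.shape)    # (200, 48, 48, 3) (200, 48, 48)
```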


Subjects
Deep Learning; Diagnosis, Computer-Assisted/methods; Fundus Oculi; Image Processing, Computer-Assisted; Retinal Vessels/diagnostic imaging; Algorithms; Area Under Curve; Fluorescein Angiography; Humans; ROC Curve; Sensitivity and Specificity
8.
Med Hypotheses ; 135: 109503, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31760247

ABSTRACT

Invasive ductal carcinoma, which invades the breast tissues by destroying the milk ducts, is the most common type of breast cancer in women. Approximately 80% of breast cancer patients have invasive ductal carcinoma, and roughly 66.6% of these patients are older than 55 years. This points to a strong relationship between this type of breast cancer and advanced age. In this study, the classification of invasive ductal carcinoma breast cancer is performed using deep learning models, a sub-branch of artificial intelligence. In this scope, convolutional neural network models and an autoencoder network model are combined. In the experiments, the dataset was reconstructed by processing it with the autoencoder model, and the discriminative features obtained from the convolutional neural network models were utilized. The most efficient features were then determined using the ridge regression method, and classification was performed using linear discriminant analysis. The best classification success rate achieved was 98.59%. Consequently, the proposed approach can be regarded as a successful classification model.
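
A minimal scikit-learn sketch of the final stage (ridge-regression coefficients ranking the features, then a linear discriminant classifier on the top-ranked subset) is given below; random vectors stand in for the deep features, and the subset size is an illustrative choice.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1000))     # stand-in for deep features per histology patch
y = rng.integers(0, 2, size=400)     # dummy IDC vs. non-IDC labels

# Rank features by the magnitude of their ridge-regression coefficients.
ridge = Ridge(alpha=1.0).fit(X, y)
top = np.argsort(np.abs(ridge.coef_))[::-1][:100]

# Classify with linear discriminant analysis on the selected subset.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X[:, top], y, cv=5)
print("mean CV accuracy on selected features:", scores.mean())
```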


Subjects
Breast Neoplasms/diagnosis; Carcinoma, Ductal, Breast/diagnosis; Diagnosis, Computer-Assisted/methods; Algorithms; Artificial Intelligence; Discriminant Analysis; Female; Humans; Image Processing, Computer-Assisted/methods; Linear Models; Machine Learning; Neoplasm Invasiveness; Neural Networks, Computer; Programming Languages; Reproducibility of Results; Sensitivity and Specificity; Software
9.
Med Hypotheses ; 134: 109531, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31877442

ABSTRACT

A brain tumor is a mass that grows unevenly in the brain and directly affects human life. This mass arises spontaneously from the tissues surrounding the brain or the skull. Surgical methods are generally preferred for the treatment of brain tumors. Recently, deep learning models for the diagnosis and treatment of diseases in the biomedical field have attracted intense interest. In this study, we propose a new convolutional neural network model named BrainMRNet. This architecture is built on attention modules and the hypercolumn technique, and it incorporates residual connections. First, each image is preprocessed in BrainMRNet and then passed, after image augmentation, to the attention modules. The attention modules select important areas of the image, and the image is then transferred to the convolutional layers. One of the most important techniques that the BrainMRNet model uses in the convolutional layers is the hypercolumn. With the help of this technique, the features extracted from each layer of the BrainMRNet model are retained in an array structure in the last layer, with the aim of selecting the best and most efficient features among those maintained in the array. Publicly accessible magnetic resonance images were used to detect brain tumors with the BrainMRNet model. The BrainMRNet model was more successful than the pre-trained convolutional neural network models (AlexNet, GoogleNet, VGG-16) used in this study, achieving a classification success of 96.05%.
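
Of the building blocks named above, the residual connection is the simplest to show in isolation; the PyTorch sketch below adds a learned correction back onto the block's input, with illustrative channel counts that are not BrainMRNet's.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection: output = input + learned residual.
        return self.act(x + self.body(x))

# Example: pass a dummy MRI feature map through two stacked residual blocks.
x = torch.randn(1, 32, 64, 64)
out = nn.Sequential(ResidualBlock(32), ResidualBlock(32))(x)
```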


Subjects
Brain Neoplasms/diagnostic imaging; Deep Learning; Diagnosis, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Signal Processing, Computer-Assisted; Algorithms; Brain Neoplasms/classification; Datasets as Topic; Early Detection of Cancer
10.
Health Inf Sci Syst ; 7(1): 17, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31435480

ABSTRACT

INTRODUCTION: Cardiotocography (CTG) consists of two biophysical signals: fetal heart rate (FHR) and uterine contraction (UC). In this research area, computerized systems are usually utilized to provide more objective and repeatable results. MATERIALS AND METHODS: Feature selection algorithms are of great importance for such computerized systems, not only to reduce the dimension of the feature set but also to reveal the most relevant features without losing too much information. In this paper, three filter and two wrapper feature selection methods and four machine learning models, namely artificial neural network (ANN), k-nearest neighbor (kNN), decision tree (DT), and support vector machine (SVM), are evaluated on a high-dimensional feature set obtained from the open-access CTU-UHB intrapartum CTG database. The signals are divided into two classes, normal and hypoxic, according to the umbilical artery pH value (pH < 7.20) measured after delivery. A comprehensive diagnostic feature set comprising features from the morphological, linear, nonlinear, time-frequency, and image-based time-frequency domains is generated first. Then, combinations of the feature selection algorithms and machine learning models are evaluated to find the most effective features as well as high classification performance. RESULTS: The experimental results show that better classification performance can be achieved using a lower-dimensional feature set comprising the most relevant features, instead of the full high-dimensional feature set. The most informative feature subset was generated by considering how frequently each feature was selected by the feature selection algorithms. As a result, the most efficient results were produced by the SVM model using only 12 relevant features instead of the full feature set of 30 diagnostic indices. Sensitivity and specificity were 77.40% and 93.86%, respectively. CONCLUSION: Consequently, the evaluation of multiple feature selection algorithms resulted in achieving the best results.
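
A compact scikit-learn sketch of the filter/wrapper selection idea is given below, with an ANOVA filter and a recursive-feature-elimination wrapper each keeping 12 of 30 features for an SVM; the random matrix stands in for the actual CTU-UHB feature set, and the specific selectors are illustrative choices, not the paper's five algorithms.

```python
import numpy as np
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))       # stand-in: 30 diagnostic indices per recording
y = rng.integers(0, 2, size=500)     # dummy normal vs. hypoxic (pH < 7.20) labels

# Filter method: rank features by ANOVA F-score, keep the best 12.
filt_idx = SelectKBest(f_classif, k=12).fit(X, y).get_support()

# Wrapper-style method: recursively eliminate features using a linear SVM's weights.
wrap_idx = RFE(SVC(kernel="linear"), n_features_to_select=12).fit(X, y).get_support()

for name, idx in [("filter", filt_idx), ("wrapper", wrap_idx)]:
    acc = cross_val_score(SVC(), X[:, idx], y, cv=5).mean()
    print(f"{name}: selected {idx.sum()} features, CV accuracy {acc:.3f}")
```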

11.
Front Physiol ; 10: 255, 2019.
Article in English | MEDLINE | ID: mdl-30914973

ABSTRACT

Background: Electronic fetal monitoring (EFM) is widely applied as a routine diagnostic tool by clinicians, using fetal heart rate (FHR) signals to prevent fetal hypoxia. However, visual interpretation of the FHR usually leads to significant inter-observer and intra-observer variability, and false positives have become the main cause of unnecessary cesarean sections. Goal: The main aim of this study was to provide a novel, consistent, robust, and effective model for fetal hypoxia detection. Methods: We propose a novel computer-aided diagnosis (CAD) system integrated with an advanced deep learning (DL) algorithm. Each 1-dimensional preprocessed FHR signal was transformed into a 2-dimensional image using the recurrence plot (RP), which is considered to capture the nonlinear characteristics well. The final image dataset was enriched by varying several parameters of the RP and was then used to train the convolutional neural network (CNN). Compared to conventional machine learning (ML) methods, a CNN can self-learn useful features from the input data and does not require complex manual feature engineering (i.e., feature extraction and selection). Results: According to the optimization experiment, the CNN model with the optimal configuration obtained the following average performance across 10-fold cross-validation: accuracy = 98.69%, sensitivity = 99.29%, specificity = 98.10%, and area under the curve = 98.70%. Conclusion: To the best of our knowledge, this approach achieved better classification performance in predicting fetal hypoxia from FHR signals compared with other state-of-the-art works. Significance: In summary, the satisfactory results proved the effectiveness of our proposed CAD system, based on the RP and a powerful CNN algorithm, for assisting obstetricians in making objective and accurate medical decisions.
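
The recurrence-plot transformation at the heart of this approach can be sketched in a few lines of NumPy using the standard thresholded-distance definition; the embedding dimension, delay, and threshold below are illustrative, not the paper's tuned RP parameters.

```python
import numpy as np

def recurrence_plot(signal, dim=3, tau=1, eps=0.1):
    """Binary recurrence matrix from time-delay embedded states."""
    n = len(signal) - (dim - 1) * tau
    # Time-delay embedding: each row is one reconstructed state vector.
    states = np.stack([signal[i:i + n] for i in range(0, dim * tau, tau)], axis=1)
    # Pairwise distances between states, thresholded to a binary image.
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps * dists.max()).astype(np.uint8)

# Example: a noisy sine wave standing in for a preprocessed FHR segment.
t = np.linspace(0, 10, 600)
fhr_like = np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)
rp = recurrence_plot(fhr_like)
print(rp.shape)     # (598, 598) binary image that could feed a CNN
```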

12.
Comput Biol Med ; 99: 85-97, 2018 Aug 01.
Article in English | MEDLINE | ID: mdl-29894897

ABSTRACT

Cardiotocography (CTG) is applied routinely for fetal monitoring during the perinatal period to decrease the rates of neonatal mortality and morbidity as well as unnecessary interventions. The analysis of CTG traces has become an indispensable part of present clinical practice; however, it also has serious drawbacks, such as poor specificity and variability in its interpretation. Automated CTG analysis is seen as the most promising way to overcome these disadvantages. In this study, a novel prognostic model is proposed for predicting fetal hypoxia from CTG traces based on an innovative approach called image-based time-frequency (IBTF) analysis, comprising a combination of the short-time Fourier transform (STFT) and the gray-level co-occurrence matrix (GLCM). More specifically, a spectrogram is obtained from a graphical representation of the fetal heart rate (FHR) signal by using the STFT. The spectrogram images are converted into 8-bit grayscale images, and IBTF features such as contrast, correlation, energy, and homogeneity are utilized to characterize the FHR signals. At the final stage of the analysis, different subsets of the feature space are applied as input to a least squares support vector machine (LS-SVM) classifier to determine the most informative subset; for this purpose, a genetic algorithm is employed. The prognostic model was evaluated on the open-access intrapartum CTU-UHB CTG database. The sensitivity and specificity obtained using only conventional features were 57.33% and 67.24%, respectively, whereas the most effective results were achieved using a combination of conventional and IBTF features, with a sensitivity of 63.45% and a specificity of 65.88%. In conclusion, this study provides a new promising approach for feature extraction from FHR signals. In addition, the experimental outcomes showed that IBTF features provided an increase in classification accuracy.
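
A minimal sketch of the IBTF feature extraction described above, assuming SciPy and scikit-image (0.19+ spelling of graycomatrix): the STFT spectrogram is rescaled to an 8-bit grayscale image and GLCM statistics are read from it. The window length, GLCM distances, and angles are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import graycomatrix, graycoprops

def ibtf_features(signal, fs=4.0, nperseg=128):
    # STFT magnitude spectrogram of the (FHR-like) signal.
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    sxx = 10 * np.log10(sxx + 1e-12)                        # log scale for dynamic range
    # Rescale to an 8-bit grayscale image.
    img = np.uint8(255 * (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-12))
    # Gray-level co-occurrence matrix and its texture descriptors.
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}

# Example on a synthetic signal standing in for a preprocessed FHR trace.
t = np.arange(0, 20 * 60, 1 / 4.0)                          # 20 minutes at 4 Hz
fhr_like = 140 + 10 * np.sin(2 * np.pi * 0.01 * t) + np.random.randn(t.size)
print(ibtf_features(fhr_like))
```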


Subjects
Cardiotocography; Fetal Hypoxia; Heart Rate, Fetal; Image Processing, Computer-Assisted; Support Vector Machine; Adult; Female; Fetal Hypoxia/diagnosis; Fetal Hypoxia/diagnostic imaging; Fetal Hypoxia/physiopathology; Humans; Pregnancy; Prognosis