Results 1 - 8 of 8
1.
J Digit Imaging ; 36(4): 1675-1686, 2023 08.
Article in English | MEDLINE | ID: mdl-37131063

ABSTRACT

Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features each from the raw input image and its six corresponding mixed images, which were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
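The ACM patch-shuffling step of layer (1) can be sketched as below. Only the 224 × 224 input size and 16 × 16 patch size come from the abstract; the classical cat-map matrix [[1, 1], [1, 2]] and the single-image interface are assumptions, not the authors' exact implementation.

```python
import numpy as np

def arnold_cat_map_mix(image, patch=16, iterations=1):
    """Shuffle fixed-size patches of a square image with the Arnold Cat Map.

    The ACM sends grid position (x, y) to ((x + y) mod n, (x + 2y) mod n),
    a bijection on the n x n patch grid, so every patch lands in a cell and
    no pixels are lost.
    """
    h, w = image.shape[:2]
    assert h == w and h % patch == 0, "expects a square image divisible by patch"
    n = h // patch  # patch-grid side length, e.g. 224 // 16 = 14
    mixed = image.copy()
    for _ in range(iterations):
        out = np.empty_like(mixed)
        for x in range(n):
            for y in range(n):
                nx, ny = (x + y) % n, (x + 2 * y) % n
                out[nx * patch:(nx + 1) * patch, ny * patch:(ny + 1) * patch] = \
                    mixed[x * patch:(x + 1) * patch, y * patch:(y + 1) * patch]
        mixed = out
    return mixed
```

Running the map for 1..6 iterations on the same input would yield the six mixed images the abstract feeds to DenseNet201 alongside the raw image.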


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Microscopy
2.
J Digit Imaging ; 36(3): 973-987, 2023 06.
Article in English | MEDLINE | ID: mdl-36797543

ABSTRACT

Modern computer vision algorithms are based on convolutional neural networks (CNNs), and both end-to-end learning and transfer learning modes have been used with CNNs for image classification. Accordingly, automated brain tumor classification models deploying CNNs have been proposed to help medical professionals. Our primary objective is to increase classification performance using CNNs; therefore, a patch-based deep feature engineering model is proposed in this work. Patch division techniques have recently been used to attain high classification performance, with variable-sized patches achieving good results. In this work, we have used three patch sizes (32 × 32, 56 × 56, 112 × 112). Six feature vectors have been obtained using these patches and two layers of the pretrained ResNet50 (the global average pooling and fully connected layers). In the feature selection phase, three selectors (neighborhood component analysis (NCA), Chi2, and ReliefF) have been used, yielding 18 final feature vectors. By deploying k-nearest neighbors (kNN), 18 prediction results have been calculated. Iterative hard majority voting (IHMV) has been applied to compute the general classification accuracy of this framework. Because the model combines different patches, feature extractors (the two ResNet50 layers), and selectors, we have named the framework PatchResNet. A public brain image dataset containing four classes (glioblastoma multiforme (GBM), meningioma, pituitary tumor, healthy) has been used to develop the proposed PatchResNet model, which attained 98.10% classification accuracy. The developed PatchResNet model obtained high classification accuracy and has the advantage of being a self-organized framework: the proposed method can automatically choose the best prediction vectors and thereby achieve high image classification performance.
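The IHMV step described above can be sketched as follows. This is a minimal reading of the technique, not the authors' code: prediction vectors are sorted by accuracy, the top k are majority-voted for increasing k, and the best accuracy found is kept.

```python
import numpy as np

def iterative_hard_majority_voting(pred_vectors, labels):
    """IHMV sketch: sort prediction vectors by accuracy, then majority-vote
    over the top k for k = 2..n, and return the best accuracy observed
    (either a single vector's or a voted one's)."""
    accs = [np.mean(p == labels) for p in pred_vectors]
    order = np.argsort(accs)[::-1]          # best vectors first
    best_acc = max(accs)
    for k in range(2, len(pred_vectors) + 1):
        stacked = np.stack([pred_vectors[i] for i in order[:k]])
        # column-wise hard majority vote over the k best prediction vectors
        voted = np.array([np.bincount(col).argmax() for col in stacked.T])
        best_acc = max(best_acc, np.mean(voted == labels))
    return best_acc
```

With 18 kNN prediction vectors as input, this loop would produce the voted results the framework compares against the individual ones.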


Subject(s)
Brain Neoplasms; Neural Networks, Computer; Humans; Algorithms; Brain Neoplasms/diagnostic imaging; Magnetic Resonance Imaging; Brain
3.
Inform Med Unlocked ; 36: 101158, 2023.
Article in English | MEDLINE | ID: mdl-36618887

ABSTRACT

Background: Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We proposed a new image classification model, swin-textural, that combined swin-based patch division with textual feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering. Material and method: We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from individual patches as well as the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors and also used to select the 11th vector from among the top selected feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors. From these, iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm. Results: Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset. The model has linear time complexity. Conclusions: Our handcrafted computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates. The model can be implemented in hospitals for efficient automated screening of COVID-19 on chest CT images. 
Moreover, our findings demonstrate that swin-textural is a self-organized, highly accurate, and lightweight image classification model that outperforms the compared deep learning models on this dataset.
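The patch-division and textural feature extraction described above can be sketched as below. Assumptions beyond the abstract: only local binary pattern is shown (local phase quantization is omitted), the ten layers are taken to be uniform grids of 1 × 1 up to 10 × 10 patches over the 420 × 420 image, and remainder pixels are dropped when a grid size does not divide 420 evenly.

```python
import numpy as np

def local_binary_pattern_hist(gray):
    """Minimal 3x3 LBP histogram (256 bins) for a 2-D uint8 image:
    each pixel's 8 neighbors are thresholded against the center pixel
    and packed into an 8-bit code."""
    c = gray[1:-1, 1:-1]
    neighbors = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                 gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                 gray[2:, :-2], gray[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, nb in enumerate(neighbors):
        codes += (nb >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

def swin_style_features(image, grid_sizes=range(1, 11)):
    """Concatenate LBP histograms from uniform square patch grids of
    incremental granularity (1x1 ... 10x10), one layer per grid size;
    the 1x1 grid covers the undivided input image."""
    h, w = image.shape
    feats = []
    for g in grid_sizes:
        ph, pw = h // g, w // g   # remainder rows/cols are dropped
        for i in range(g):
            for j in range(g):
                patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
                feats.append(local_binary_pattern_hist(patch))
    return np.concatenate(feats)
```

The ten per-layer feature sets would then each pass through INCA selection and kNN classification as the abstract describes.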

4.
Physiol Meas ; 44(3)2023 03 14.
Article in English | MEDLINE | ID: mdl-36599170

ABSTRACT

Objective. Schizophrenia (SZ) is a severe, chronic psychiatric-cognitive disorder. The primary objective of this work is to present a handcrafted model using state-of-the-art techniques to detect SZ accurately from EEG signals. Approach. In our proposed work, the features are generated using a histogram-based generator and an iterative decomposition model. The graph-based molecular structure of the carbon chain is employed to generate low-level features; hence, the developed feature generation model is called the carbon chain pattern (CCP). An iterative tunable q-factor wavelet transform (ITQWT) technique is implemented in the feature extraction phase to generate various sub-bands of the EEG signal. The CCP was applied to the generated sub-bands to obtain several feature vectors. The clinically significant features were selected using iterative neighborhood component analysis (INCA). The selected features were then classified using the k-nearest neighbor (kNN) classifier with a 10-fold cross-validation strategy. Finally, the iterative weighted majority voting method was used to obtain the results across multiple channels. Main results. The presented CCP-ITQWT and INCA-based automated model achieved accuracies of 95.84% and 99.20% using a single channel and the majority voting method, respectively, with the kNN classifier. Significance. Our results highlight the success of the proposed CCP-ITQWT and INCA-based model in the automated detection of SZ using EEG signals.
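INCA-style feature selection recurs throughout these studies (items 1, 3, 4, and 6). A toy sketch under stated assumptions: the ranking below uses a simple two-class separation score as a stand-in for true NCA feature weights, leave-one-out 1-NN accuracy serves as the loss calculator, and the sweep over candidate lengths keeps the best-scoring subset.

```python
import numpy as np

def knn_accuracy(X, y, k=1):
    """Leave-one-out 1-NN accuracy with squared Euclidean distance."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)       # a sample may not be its own neighbor
    return np.mean(y[d.argmin(1)] == y)

def inca_select(X, y, min_len=2):
    """INCA-style sketch (binary labels 0/1 assumed for the proxy score):
    rank features, sweep candidate lengths, keep the subset whose kNN
    loss (1 - accuracy) is lowest."""
    # proxy ranking: absolute class-mean difference over pooled std,
    # standing in for learned NCA weights
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    score = np.abs(mu0 - mu1) / (X.std(0) + 1e-9)
    order = np.argsort(score)[::-1]
    best_len, best_acc = min_len, -1.0
    for L in range(min_len, X.shape[1] + 1):
        acc = knn_accuracy(X[:, order[:L]], y)
        if acc > best_acc:
            best_acc, best_len = acc, L
    return order[:best_len], best_acc
```

The key idea is that the selected length (342 in item 1, for instance) is not fixed in advance but chosen by the loss sweep.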


Subject(s)
Cognitive Dysfunction; Schizophrenia; Humans; Electroencephalography/methods; Schizophrenia/diagnosis; Wavelet Analysis; Carbon; Algorithms
5.
Diagnostics (Basel) ; 12(12)2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36553188

ABSTRACT

SARS-CoV-2 and Influenza-A can present similar symptoms. Computer-aided diagnosis can help facilitate screening for the two conditions, and may be especially relevant and useful in the current COVID-19 pandemic because seasonal Influenza-A infection can still occur. We have developed a novel text-based classification model for discriminating between the two conditions using protein sequences of varying lengths. We downloaded viral protein sequences of SARS-CoV-2 and Influenza-A with varying lengths (all 100 or greater) from the NCBI database and randomly selected 16,901 SARS-CoV-2 and 19,523 Influenza-A sequences to form a two-class study dataset. We used a new feature extraction function based on a unique pattern, HamletPat, generated from the text of Shakespeare's Hamlet, and a signum function to extract local binary pattern-like bits from overlapping fixed-length (27) blocks of the protein sequences. The bits were converted to decimal map signals from which histograms were extracted and concatenated to form a final feature vector of length 1280. The iterative Chi-square function selected the 340 most discriminative features to feed to an SVM with a Gaussian kernel for classification. The model attained 99.92% and 99.87% classification accuracy rates using hold-out (75:25 split ratio) and five-fold cross-validations, respectively. The excellent performance of the lightweight, handcrafted HamletPat-based classification model suggests that it can be a valuable tool for screening protein sequences to discriminate between SARS-CoV-2 and Influenza-A infections.
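The signum-based bit extraction described above can be sketched as below. The block length (27) and 8-bit packing are read from the abstract's description; the pattern text shown is merely illustrative (the authors' HamletPat pattern is not reproduced here), as is the exact comparison rule.

```python
import numpy as np

def signum_bits(block, pattern):
    """Compare each block character against the pattern character at the
    same position: 1 if its character code is >=, else 0."""
    return [1 if ord(b) >= ord(p) else 0 for b, p in zip(block, pattern)]

def text_pattern_histogram(sequence, pattern, block_len=27, word=8):
    """HamletPat-style sketch: slide a fixed-length window over the
    sequence, extract signum bits against the pattern, pack them into
    8-bit decimal values, and accumulate a 256-bin histogram."""
    assert len(pattern) >= block_len
    hist = np.zeros(256, dtype=np.int64)
    for start in range(len(sequence) - block_len + 1):
        bits = signum_bits(sequence[start:start + block_len], pattern)
        for i in range(0, block_len - word + 1, word):
            value = int("".join(map(str, bits[i:i + word])), 2)
            hist[value] += 1
    return hist
```

Concatenating several such histograms (e.g. against different pattern segments) would build up the length-1280 feature vector the abstract describes before Chi-square selection.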

6.
Diagnostics (Basel) ; 12(10)2022 Oct 16.
Article in English | MEDLINE | ID: mdl-36292199

ABSTRACT

BACKGROUND: Sleep stage classification is a crucial process for the diagnosis of sleep or sleep-related diseases. Currently, this process is based on manual electroencephalogram (EEG) analysis, which is resource-intensive and error-prone. Various machine learning models have been recommended to standardize and automate the analysis process to address these problems. MATERIALS AND METHODS: The well-known cyclic alternating pattern (CAP) sleep dataset is used to train and test an L-tetrolet pattern-based sleep stage classification model in this research. From this dataset, three cases are created: Insomnia, Normal, and Fused. For each case, the machine learning model is tasked with identifying six sleep stages. The model is structured in terms of feature generation, feature selection, and classification. Feature generation is established with a new L-tetrolet (Tetris letter) function, with multiple pooling decomposition used for level creation. ReliefF and iterative neighborhood component analysis (INCA) feature selection are fused using a threshold value; the hybrid, iterative feature selector is named threshold selection-based ReliefF and INCA (TSRFINCA). The selected features are classified using a cubic support vector machine. RESULTS: The presented L-tetrolet pattern and TSRFINCA-based sleep stage classification model yields 95.43%, 91.05%, and 92.31% accuracies for the Insomnia, Normal, and Fused cases, respectively. CONCLUSION: The recommended L-tetrolet pattern and TSRFINCA-based model pushes the envelope of current knowledge engineering by accurately classifying sleep stages even in the presence of sleep disorders.
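One plausible reading of the threshold-based fusion in TSRFINCA is sketched below; the normalization scheme, the union rule, and the threshold value are all assumptions for illustration, since the abstract specifies only that ReliefF and INCA are fused via a threshold.

```python
import numpy as np

def threshold_fuse_selectors(w_relieff, w_inca, threshold=0.5):
    """TSRFINCA-style fusion sketch: normalize each selector's feature
    weights to [0, 1] and keep every feature that either selector
    scores above the threshold (threshold value is illustrative)."""
    def norm(w):
        w = np.asarray(w, dtype=float)
        return (w - w.min()) / (w.max() - w.min() + 1e-12)
    keep = (norm(w_relieff) > threshold) | (norm(w_inca) > threshold)
    return np.flatnonzero(keep)
```

The returned indices would then feed the cubic support vector machine classifier the abstract names.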

7.
Pediatr Emerg Care ; 37(3): e100-e104, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-30702650

ABSTRACT

OBJECTIVES: The aim of this study was to compare the role of computed tomography (CT) in the diagnosis of open-globe trauma and intraocular foreign body (IOFB) in pediatric and adult age groups. METHODS: Medical records of cases with open-globe trauma at Inonu University Hospital's Ophthalmology Emergency Service were retrospectively evaluated. Preoperative orbital CT images of the cases obtained at emergency services and their clinical and/or surgical findings were compared between the pediatric and adult groups. RESULTS: We included 47 eyes of 47 cases aged 18 years and below (pediatric group) and 85 eyes of 82 cases over 18 years (adult group). The mean ± SD age was 10.80 ± 5.11 years (range, 2-18 years) in the pediatric group and 46.34 ± 19.01 years (range, 19-82 years) in the adult group. Computed tomography detected corneal lacerations in 21.7% of cases, scleral lacerations in 55.5%, and corneoscleral lacerations in 91.6% in the pediatric group, whereas the respective rates were 48.4%, 66.6%, and 61.9% in the adult group. The detection rates of corneal penetrations and vitreous hemorrhage with CT were significantly lower in the pediatric group than in the adult group (P < 0.05). The CT scans diagnosed 66.6% of the pediatric cases and 90% of the adult cases with an IOFB. CONCLUSIONS: Corneal lacerations and IOFBs can be missed on CT, especially in the pediatric group, because the pediatric eye is smaller than the adult eye. Pediatric patients with a history of ocular trauma should undergo an examination under general anesthesia, followed by surgical exploration if necessary.


Subject(s)
Eye Foreign Bodies; Eye Injuries, Penetrating; Adolescent; Adult; Child; Child, Preschool; Eye Foreign Bodies/diagnostic imaging; Eye Foreign Bodies/surgery; Eye Injuries, Penetrating/diagnostic imaging; Eye Injuries, Penetrating/surgery; Humans; Retrospective Studies; Tomography, X-Ray Computed; Visual Acuity
8.
Cutan Ocul Toxicol ; 37(1): 19-23, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28427301

ABSTRACT

PURPOSE: The aim of this study was to evaluate the effects of long-term cannabis use on corneal endothelial cells with specular microscopy. METHODS: The study enrolled 28 eyes of 28 patients diagnosed with cannabinoid use disorder. The cannabinoid group was selected from among patients who had been using the substance for three days or more per week over the past year. Thirty-two eyes of 32 age- and sex-matched healthy individuals were enrolled as the control group. Corneal endothelial cell density (CD), coefficient of variation (CV), and hexagonal cell ratio (HEX) values were analyzed by specular microscopy. RESULTS: The mean CD was 2900 ± 211 cells/mm2 in the cannabinoid group and 3097 ± 214 cells/mm2 in the control group, a significant decrease in the cannabinoid group (p < 0.01). The mean CV was 29 ± 7 and 27 ± 4 in the cannabinoid and control groups, respectively (p > 0.05); no significant difference in mean CV was present between the groups. The mean HEX was 52 ± 5% in the cannabinoid group and 53 ± 10% in the control group (p > 0.05); likewise, there was no significant difference in mean HEX between the groups. CONCLUSION: A significant decrease in CD was found in cannabinoid users compared with the control group.


Subject(s)
Cannabinoids/toxicity; Endothelium, Corneal/drug effects; Marijuana Abuse/pathology; Adult; Endothelium, Corneal/pathology; Female; Humans; Male; Young Adult