Results 1 - 19 of 19
1.
Electromagn Biol Med ; : 1-15, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39081005

ABSTRACT

Efficient and accurate classification of brain tumor categories remains a critical challenge in medical imaging. While existing techniques have made strides, their reliance on generic features often leads to suboptimal results. To overcome these issues, a Multimodal Contrastive Domain Sharing Generative Adversarial Network for Improved Brain Tumor Classification Based on Efficient Invariant Feature Centric Growth Analysis (MCDS-GNN-IBTC-CGA) is proposed in this manuscript. Here, the input images are collected from a brain tumor dataset. The input images are then preprocessed using a Range-Doppler Matched Filter (RDMF) to improve image quality. The Ternary Pattern and Discrete Wavelet Transform (TPDWT) is then employed for feature extraction, focusing on white and gray mass, edge correlation, and depth features. The proposed method leverages the Multimodal Contrastive Domain Sharing Generative Adversarial Network (MCDS-GNN) to categorize brain tumor images into glioma, meningioma, and pituitary tumors. Finally, the Coati Optimization Algorithm (COA) optimizes the weight parameters of MCDS-GNN. The proposed MCDS-GNN-IBTC-CGA is empirically evaluated using accuracy, specificity, sensitivity, precision, F1-score, and Mean Square Error (MSE). MCDS-GNN-IBTC-CGA attains 12.75%, 11.39%, 13.35%, 11.42%, and 12.98% greater accuracy compared with existing state-of-the-art techniques, namely MRI brain tumor categorization using parallel deep convolutional neural networks (PDCNN-BTC), attention-guided convolutional neural network for brain tumor categorization (AGCNN-BTC), intelligent driven deep residual learning for brain tumor categorization (DCRN-BTC), fully convolutional neural networks for brain tumor classification (FCNN-BTC), and Convolutional Neural Network and Multi-Layer Perceptron based brain tumor classification (CNN-MLP-BTC), respectively.


The proposed MCDS-GNN-IBTC-CGA method starts by cleaning brain tumor images with RDMF and extracting features using TPDWT, focusing on color and texture. The MCDS-GNN model then categorizes the tumors into types such as glioma and meningioma, and COA fine-tunes the MCDS-GNN parameters to enhance accuracy. Ultimately, this approach aids in more effective diagnosis and treatment of brain tumors.
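As a rough illustration of the general "ternary pattern + discrete wavelet transform" feature idea mentioned in this abstract, the sketch below computes ternary-pattern histograms on one-level Haar subbands. The one-level Haar decomposition, the 3x3 neighborhood, and the threshold t are illustrative assumptions, not the authors' TPDWT definition.

```python
# Minimal sketch: ternary-pattern histograms on Haar wavelet subbands.
import numpy as np

def haar_subbands(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH (even-sized input)."""
    a = img[0::2, 0::2].astype(float); b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float); d = img[1::2, 1::2].astype(float)
    return ((a + b + c + d) / 4.0, (a + b - c - d) / 4.0,
            (a - b + c - d) / 4.0, (a - b - c + d) / 4.0)

def ternary_histogram(band, t=5.0):
    """Fraction of -1/0/+1 states per 3x3 neighbor vs. center +- t."""
    h, w = band.shape
    center = band[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(3 * len(offsets))
    for k, (dy, dx) in enumerate(offsets):
        nb = band[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code = np.where(nb > center + t, 1, np.where(nb < center - t, -1, 0))
        for s, state in enumerate((-1, 0, 1)):
            hist[3 * k + s] = np.mean(code == state)
    return hist

img = np.random.randint(0, 256, (128, 128))                    # placeholder image
feature = np.concatenate([ternary_histogram(b) for b in haar_subbands(img)])
print(feature.shape)   # (96,) = 4 subbands x 8 neighbors x 3 states
```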

2.
Sensors (Basel) ; 23(11)2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37300085

ABSTRACT

Understanding roads and lanes involves identifying the road level, the position and number of lanes, and the ending, splitting, and merging of roads and lanes in highway, rural, and urban scenarios. Even though a large amount of progress has been made recently, this kind of understanding remains beyond the capabilities of present perceptual methods. Nowadays, 3D lane detection has become a trending research topic in autonomous vehicles, as it provides an exact estimation of the 3D position of the drivable lanes. This work proposes a new technique with Phase I (road or non-road classification) and Phase II (lane or non-lane classification) on 3D images. Phase I: Initially, features such as the proposed local texton XOR pattern (LTXOR), the local Gabor binary pattern histogram sequence (LGBPHS), and the median ternary pattern (MTP) are derived. These features are fed to a bidirectional gated recurrent unit (BI-GRU) that detects whether the object is road or non-road. Phase II: The same features from Phase I are further classified using an optimized BI-GRU, whose weights are chosen optimally via self-improved honey badger optimization (SI-HBO). As a result, the system identifies whether the detected object is lane-related or not. In particular, the proposed BI-GRU + SI-HBO obtained a higher precision of 0.946 for db 1. Furthermore, the best-case accuracy of BI-GRU + SI-HBO was 0.928, which was better than that obtained with standard honey badger optimization. Finally, SI-HBO was shown to outperform the other optimization schemes.
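The median ternary pattern mentioned above can be pictured as a ternary code taken against the local median rather than the center pixel. The sketch below is a minimal interpretation of that idea; the 3x3 window, the tolerance t, and the upper/lower code packing are illustrative assumptions, not the paper's exact MTP definition.

```python
# Minimal sketch of a median-ternary-pattern-style code.
import numpy as np

def median_ternary_codes(img, t=2.0):
    img = img.astype(float)
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    # median over the full 3x3 neighborhood of each pixel
    stack = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)
    med = np.median(stack, axis=0)
    upper = np.zeros((h - 2, w - 2), dtype=np.int32)   # "+1" states packed LBP-style
    lower = np.zeros((h - 2, w - 2), dtype=np.int32)   # "-1" states packed separately
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (nb > med + t).astype(np.int32) << bit
        lower |= (nb < med - t).astype(np.int32) << bit
    return upper, lower   # two LBP-like maps, as in the usual LTP split

up, lo = median_ternary_codes(np.random.randint(0, 256, (64, 64)))
```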


Subjects
Accidents, Traffic; Rural Population; Humans
3.
Sensors (Basel) ; 19(7)2019 Apr 02.
Article in English | MEDLINE | ID: mdl-30987018

ABSTRACT

Human action recognition plays a significant part in the research community due to its emerging applications. A variety of approaches have been proposed to resolve this problem; however, several issues still need to be addressed. In action recognition, effectively extracting and aggregating spatial-temporal information plays a vital role in describing a video. In this research, we propose a novel approach to recognize human actions by considering both deep spatial features and handcrafted spatiotemporal features. Firstly, we extract the deep spatial features by employing a state-of-the-art deep convolutional network, namely Inception-ResNet-v2. Secondly, we introduce a novel handcrafted feature descriptor, namely Weber's law based Volume Local Gradient Ternary Pattern (WVLGTP), which brings out the spatiotemporal features. It also considers shape information by using a gradient operation. Furthermore, a Weber's law based threshold value and a ternary pattern based on an adaptive local threshold are presented to effectively handle noisy center pixel values. Besides, a multi-resolution approach for WVLGTP based on an averaging scheme is also presented. Afterward, both extracted features are concatenated and fed to a Support Vector Machine to perform the classification. Lastly, extensive experimental analysis shows that our proposed method outperforms state-of-the-art approaches in terms of accuracy.
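The Weber's law idea here is that the ternary threshold should scale with the local stimulus rather than being fixed. The 2D sketch below applies such a relative threshold to a Sobel gradient image; it is only a simplified illustration (the real WVLGTP is volumetric and multi-resolution), and the Weber fraction k and neighborhood are assumptions.

```python
# Minimal 2D sketch: ternary codes with a Weber-style (relative) threshold
# computed on a gradient-magnitude image.
import numpy as np
from scipy import ndimage

def weber_ternary(img, k=0.1):
    img = img.astype(float)
    g = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    h, w = g.shape
    center = g[1:-1, 1:-1]
    t = k * np.maximum(center, 1.0)          # threshold proportional to the center value
    codes = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]:
        nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes.append(np.where(nb > center + t, 1, np.where(nb < center - t, -1, 0)))
    return np.stack(codes, axis=-1)          # (H-2, W-2, 8) ternary states

codes = weber_ternary(np.random.rand(64, 64) * 255)
```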


Subjects
Biosensing Techniques; Human Activities; Image Processing, Computer-Assisted/methods; Algorithms; Humans; Monitoring, Physiologic; Neural Networks, Computer; Pattern Recognition, Automated/methods; Support Vector Machine
4.
Sensors (Basel) ; 17(12)2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29186923

ABSTRACT

A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, few studies have explored local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, taking into account a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features; since these feature types have different advantages, this enhances the descriptive power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants (14 males and 6 females). For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with and without glasses), and we compared the performance of cascade classifiers trained with different feature sets. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images and discuss the results.
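One way to read the "Multi-Block LBP with a margin" idea is that block-mean comparisons that fall within a small tolerance are treated as ties instead of forced 0/1 decisions. The sketch below encodes that reading; the 3x3 block grid, the margin value, and the way ties are handled are assumptions for illustration only, not the authors' feature definition.

```python
# Minimal sketch: Multi-Block-LBP-style code with a tolerance margin.
import numpy as np

def mb_lbp_with_margin(patch, margin=3.0):
    """patch: square grayscale patch whose side is divisible by 3."""
    s = patch.shape[0] // 3
    means = np.array([[patch[i * s:(i + 1) * s, j * s:(j + 1) * s].mean()
                       for j in range(3)] for i in range(3)])
    center = means[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        d = means[i, j] - center
        if d > margin:                 # clearly brighter than the reference block
            code |= 1 << bit
        # differences within +-margin are treated as "equal" and contribute no bit
    return code

print(mb_lbp_with_margin(np.random.randint(0, 256, (24, 24)).astype(float)))
```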


Subjects
Face; Algorithms; Humans; Lighting; Pattern Recognition, Automated
5.
J Med Syst ; 40(12): 272, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27757715

ABSTRACT

Smart, interactive healthcare is necessary in the modern age. Several issues, such as accurate diagnosis, low-cost modeling, low-complexity design, seamless transmission, and sufficient storage, should be addressed while developing a complete healthcare framework. In this paper, we propose a patient state recognition system for such a healthcare framework. The system is designed to provide good recognition accuracy and low-cost modeling, and to be scalable. It takes two main types of input, video and audio, captured in a multi-sensory environment. Speech and video inputs are processed separately during feature extraction and modeling; the two modalities are then merged at score level, where the scores are obtained from the models of the different patient states. For the experiments, 100 people were recruited to mimic three patient states: normal, pain, and tensed. The experimental results show that the proposed system achieves an average recognition accuracy of 98.2%.
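Score-level fusion, as described above, combines per-state scores from the two modalities before the final decision. The sketch below shows one common way to do this (min-max normalization plus a weighted sum); the weights, normalization, and state names are illustrative assumptions, since the abstract does not specify the exact fusion rule.

```python
# Minimal sketch: score-level fusion of speech and video model scores.
import numpy as np

STATES = ["normal", "pain", "tensed"]

def fuse_scores(speech_scores, video_scores, w_speech=0.5):
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    fused = w_speech * norm(speech_scores) + (1 - w_speech) * norm(video_scores)
    return STATES[int(np.argmax(fused))], fused

state, fused = fuse_scores([0.2, 0.7, 0.1], [0.3, 0.5, 0.2])
print(state, fused)        # the state with the highest fused score wins
```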


Subjects
Facial Expression; Needs Assessment; Patients; Pattern Recognition, Automated/methods; Speech; Video Recording/methods; Humans; Information Systems/organization & administration; Time Factors
6.
Cogn Neurodyn ; 18(1): 95-108, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38406197

ABSTRACT

Neuropsychiatric disorders are one of the leading causes of disability. Mental health problems can occur due to various biological and environmental factors. The absence of definitive confirmatory diagnostic tests for psychiatric disorders complicates the diagnosis. It is critical to distinguish between bipolar disorder, depression, and schizophrenia, since their symptoms and treatments differ. Because of brain-heart autonomic connections, electrocardiography (ECG) signals can be altered in behavioral disorders. In this research, we automatically classify bipolar disorder, depression, and schizophrenia from ECG signals. A new hand-crafted feature engineering model is proposed to detect psychiatric disorders automatically; its main objective is to accurately detect psychiatric disorders from ECG beats with linear time complexity. Therefore, we collected a new ECG signal dataset containing 3,570 ECG beats across four categories: bipolar disorder, depression, schizophrenia, and control. Furthermore, a new ternary pattern-based signal classification model is proposed to classify these four categories. Our proposal contains four essential phases: (i) multileveled feature extraction using the multilevel discrete wavelet transform and the ternary pattern, (ii) selection of the best features using an iterative Chi2 selector, (iii) classification with an artificial neural network (ANN) to calculate lead-wise results, and (iv) calculation of the voted/general classification accuracy using the iterative majority voting (IMV) algorithm. Tenfold cross-validation is one of the most widely used validation techniques in the literature and gives robust classification results. Using an ANN with tenfold cross-validation, lead-by-lead and voted results have been calculated. The lead-by-lead accuracy range of the proposed model using the ANN classifier is from 73.67% to 89.19%. By deploying the IMV method, the general classification performance of our ternary pattern-based ECG classification model increases from 89.19% to 96.25%. The findings and the calculated classification accuracies (single lead and voted) clearly demonstrate the success of the proposed ternary pattern-based advanced signal processing model. Based on this model, a new wearable device could be proposed.
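The iterative majority voting step can be pictured as taking majority votes over progressively larger groups of the best-performing leads and keeping the best vote. The sketch below follows that general idea as stated in the abstract; the sorting criterion, the starting group size of 3, and the toy data are assumptions for illustration.

```python
# Minimal sketch: iterative majority voting over lead-wise predictions.
import numpy as np

def iterative_majority_vote(lead_preds, y_true):
    """lead_preds: (n_leads, n_samples) integer class predictions."""
    lead_preds = np.asarray(lead_preds)
    accs = (lead_preds == y_true).mean(axis=1)
    order = np.argsort(accs)[::-1]                     # best leads first
    best_acc, best_vote = -1.0, None
    for k in range(3, len(order) + 1):
        top = lead_preds[order[:k]]
        # per-sample majority vote among the k best leads
        vote = np.array([np.bincount(col).argmax() for col in top.T])
        acc = (vote == y_true).mean()
        if acc > best_acc:
            best_acc, best_vote = acc, vote
    return best_vote, best_acc

preds = np.random.randint(0, 4, (12, 100))             # 12 leads, 4 classes (toy data)
y = np.random.randint(0, 4, 100)
vote, acc = iterative_majority_vote(preds, y)
```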

7.
Comput Methods Programs Biomed ; 240: 107692, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37459773

ABSTRACT

BACKGROUND AND OBJECTIVE: Lung cancer is an important cause of death and morbidity around the world. Two of the primary computed tomography (CT) imaging markers that can be used to differentiate malignant and benign lung nodules are the inhomogeneity of the nodules' texture and nodular morphology. The objective of this paper is to present a new model that can capture the inhomogeneity of the detected lung nodules as well as their morphology. METHODS: We modified the local ternary pattern to use three different levels (instead of two) and a new pattern identification algorithm to capture the nodule's inhomogeneity and morphology in a more accurate and flexible way. This modification aims to address the wide Hounsfield unit value range of the detected nodules which decreases the ability of the traditional local binary/ternary pattern to accurately classify nodules' inhomogeneity. The cut-off values defining these three levels of the novel technique are estimated empirically from the training data. Subsequently, the extracted imaging markers are fed to a hyper-tuned stacked generalization-based classification architecture to classify the nodules as malignant or benign. The proposed system was evaluated on in vivo datasets of 679 CT scans (364 malignant nodules and 315 benign nodules) from the benchmark Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and an external dataset of 100 CT scans (50 malignant and 50 benign). The performance of the classifier was quantitatively assessed using a Leave-one-out cross-validation approach and externally validated using the unseen external dataset based on sensitivity, specificity, and accuracy. RESULTS: The overall accuracy of the system is 96.17% with 97.14% sensitivity and 95.33% specificity. The area under the receiver-operating characteristic curve was 0.98, which highlights the robustness of the system. Using the unseen external dataset for validating the system led to consistent results showing the generalization abilities of the proposed approach. Moreover, applying the original local binary/ternary pattern or using other classification structures achieved inferior performance when compared against the proposed approach. CONCLUSIONS: These experimental results demonstrate the feasibility of the proposed model as a novel tool to assist physicians and radiologists for lung nodules' early assessment based on the new comprehensive imaging markers.
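The modification described above replaces the single LTP threshold with data-driven cut-offs that split neighbor-center differences into three levels. The sketch below uses simple training percentiles as those cut-offs; the percentile choice, the single representative neighbor used for estimation, and the HU-like toy data are assumptions, not the authors' empirical procedure.

```python
# Minimal sketch: three-level local pattern with cut-offs estimated from training data.
import numpy as np

def estimate_cutoffs(train_images, q=(33, 66)):
    diffs = []
    for img in train_images:
        img = img.astype(float)
        c = img[1:-1, 1:-1]
        diffs.append((img[1:-1, 2:] - c).ravel())     # one representative neighbor
    return np.percentile(np.concatenate(diffs), q)    # two cut-off values

def three_level_codes(img, cutoffs):
    img = img.astype(float)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    levels = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]:
        d = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] - c
        levels.append(np.digitize(d, cutoffs))        # level 0, 1 or 2 per neighbor
    return np.stack(levels, axis=-1)

train = [np.random.randint(-1000, 400, (64, 64)) for _ in range(5)]  # HU-like toy data
cut = estimate_cutoffs(train)
codes = three_level_codes(train[0], cut)
```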


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung Neoplasms/diagnosis; Lung/pathology; Tomography, X-Ray Computed/methods; Algorithms; ROC Curve; Solitary Pulmonary Nodule/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted
8.
Big Data ; 11(6): 452-465, 2023 12.
Article in English | MEDLINE | ID: mdl-37702608

ABSTRACT

Tongue analysis plays a major role in disease type prediction and classification according to Indian ayurvedic medicine. Traditionally, the tongue image is manually inspected by an expert ayurvedic doctor to identify or predict the disease; however, this is time-consuming and often imprecise. Owing to recent advancements in machine learning models, several researchers have addressed disease prediction from tongue image analysis, but these approaches have failed to provide sufficient accuracy. In addition, multiclass disease classification with enhanced accuracy is still a challenging task. Therefore, this article focuses on the development of an optimized deep Q-neural network (DQNN) for disease identification and classification from tongue images, hereafter referred to as ODQN-Net. Initially, the multiscale retinex approach is introduced to enhance the quality of tongue images, which also acts as a noise removal technique. In addition, a local ternary pattern is used to extract disease-specific and disease-dependent features based on color analysis. Then, the best features are selected from the available feature set using the nature-inspired Remora optimization algorithm with reduced computational time. Finally, the DQNN model is used to classify the type of disease from these pretrained features. The simulation performance obtained on the tongue imaging dataset proved that the proposed ODQN-Net achieves superior performance compared with state-of-the-art approaches, with an accuracy of 99.17% and an F1-score and Matthews correlation coefficient of 99.75% and 99.84%, respectively.


Subjects
Algorithms; Neural Networks, Computer; Tongue/diagnostic imaging; Machine Learning
9.
Front Neurosci ; 17: 1200630, 2023.
Article in English | MEDLINE | ID: mdl-37469843

ABSTRACT

Introduction: Intracranial hemorrhage detection in 3D Computed Tomography (CT) brain images has gained increasing attention in the research community. The major issue with 3D CT brain images is that labelled data are scarce and hard to obtain, which limits recognition performance. Methods: To overcome the aforementioned problem, a new model has been implemented in this research manuscript. After acquiring the images from the Radiological Society of North America (RSNA) 2019 database, the region of interest (RoI) was segmented by employing Otsu's thresholding method. Then, feature extraction was performed utilizing Tamura features (directionality, contrast, and coarseness) and Gradient Local Ternary Pattern (GLTP) descriptors to extract vectors from the segmented RoI regions. The extracted vectors were dimensionally reduced by a proposed modified genetic algorithm, in which the infinite feature selection technique was incorporated into the conventional genetic algorithm to further reduce the redundancy within the regularized vectors. The selected optimal vectors were finally fed to a Bi-directional Long Short-Term Memory (Bi-LSTM) network to classify intracranial hemorrhage sub-types: subdural, intraparenchymal, subarachnoid, epidural, and intraventricular. Results: The experimental investigation demonstrated that the Bi-LSTM based modified genetic algorithm obtained 99.40% sensitivity, 99.80% accuracy, and 99.48% specificity, which are higher than those of existing machine learning models: Naïve Bayes, Random Forest, Support Vector Machine (SVM), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) network.
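A Gradient Local Ternary Pattern, as named above, is usually understood as an LTP-style code computed on a gradient-magnitude image. The sketch below follows that common reading; the Sobel operator, the fixed threshold t, and the histogram layout are illustrative assumptions rather than the paper's exact GLTP descriptor.

```python
# Minimal sketch: LTP upper/lower histograms computed on a Sobel gradient image.
import numpy as np
from scipy import ndimage

def gltp_histogram(img, t=10.0):
    img = img.astype(float)
    g = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    h, w = g.shape
    c = g[1:-1, 1:-1]
    upper = np.zeros_like(c, dtype=np.int32)
    lower = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate([(-1, -1), (-1, 0), (-1, 1), (0, 1),
                                    (1, 1), (1, 0), (1, -1), (0, -1)]):
        nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (nb > c + t).astype(np.int32) << bit
        lower |= (nb < c - t).astype(np.int32) << bit
    hist_u, _ = np.histogram(upper, bins=256, range=(0, 256))
    hist_l, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hist_u, hist_l]).astype(float)

feat = gltp_histogram(np.random.rand(128, 128) * 255)
```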

10.
Diagnostics (Basel) ; 13(3)2023 Feb 02.
Article in English | MEDLINE | ID: mdl-36766652

ABSTRACT

Cervical cancer is a leading cause of mortality in women all over the world every year. This cancer can be cured if it is detected early and patients are treated promptly. This study proposes a new strategy for the detection of cervical cancer using cervigram images. The associated histogram equalization (AHE) technique is used to enhance the edges of the cervical image, and the finite ridgelet transform is then used to generate a multi-resolution image. From this transformed multi-resolution cervical image, features such as ridgelet coefficients, gray-level run-length matrices, moment invariants, and the enhanced local ternary pattern are extracted. A feed-forward back-propagation neural network is used to train and test these extracted features in order to classify the cervical images as normal or abnormal. To detect and segment cancer regions, morphological operations are applied to the abnormal cervical images. The performance metrics of the cervical cancer detection system are 98.11% sensitivity, 98.97% specificity, 99.19% accuracy, a PPV of 98.88%, an NPV of 91.91%, an LPR of 141.02%, an LNR of 0.0836, 98.13% precision, 97.15% FPs, and 90.89% FNs. The simulation outcomes show that the proposed method detects and segments cervical cancer better than traditional methods.

11.
J Imaging ; 8(4)2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35448237

ABSTRACT

The effortless detection of salient objects by humans has been the subject of research in several fields, including computer vision, because it has many applications. However, salient object detection remains a challenge for many computational models dealing with color and textured images. Most of them process color and texture separately and therefore implicitly treat them as independent features, which is not the case in reality. Herein, we propose a novel and efficient strategy, through a simple model almost without internal parameters, that generates a robust saliency map for a natural image. This strategy consists of integrating color information into local textural patterns to characterize a color micro-texture. It is the simple yet powerful LTP (Local Ternary Patterns) texture descriptor, applied to opposing color pairs of a color space, that allows us to achieve this end. Each color micro-texture is represented by a vector whose components come from a superpixel obtained by the SLICO (Simple Linear Iterative Clustering with zero parameter) algorithm, which is simple, fast, and exhibits state-of-the-art boundary adherence. The degree of dissimilarity between each pair of color micro-textures is computed by the FastMap method, a fast version of MDS (Multi-dimensional Scaling) that accounts for the color micro-textures' non-linearity while preserving their distances. These degrees of dissimilarity give us an intermediate saliency map for each of the RGB (Red-Green-Blue), HSL (Hue-Saturation-Luminance), LUV (L for luminance, U and V for chromaticity), and CMY (Cyan-Magenta-Yellow) color spaces. The final saliency map is their combination, taking advantage of the strengths of each of them. The MAE (Mean Absolute Error), MSE (Mean Squared Error), and Fβ measures of our saliency maps on the five most-used datasets show that our model outperforms several state-of-the-art models. Being simple and efficient, our model could be combined with classic models using color contrast for better performance.

12.
Front Oncol ; 12: 822827, 2022.
Article in English | MEDLINE | ID: mdl-35371983

ABSTRACT

Purpose: The purpose of this study was to realize automatic segmentation of lung parenchyma based on a random walk algorithm to ensure the accuracy of lung parenchyma segmentation, and to add explicable features of pulmonary nodules to a VGG16 neural network to improve the classification accuracy of pulmonary nodules. Materials and Methods: LIDC-IDRI, a public dataset containing lung Computed Tomography images and pulmonary nodules, was used as the experimental data. In lung parenchyma segmentation, the maximum between-class variance (Otsu) method and morphological erosion and dilation were used to automatically obtain the foreground and background seed points of the random walk algorithm in the lung parenchyma region. The shortest distance between point sets was added as one of the criteria of foreground probability in the calculation of the random walk weight function to achieve accurate segmentation of the lung parenchyma. The nodules were extracted according to the locations marked by the doctor. Texture features and grayscale features were extracted by the Volume Local Direction Ternary Pattern (VLDTP) method and the gray histogram. The explicable features were input into the VGG16 network in series and fused with the deep features to achieve accurate classification of nodules. Intersection over Union (IoU) and the false positive rate (FPR) were used to measure the segmentation results. Accuracy, sensitivity, specificity, and F1 score were used to evaluate the nodule classification results. Results: The automatic random walk algorithm is effective for lung parenchyma segmentation, and its segmentation efficiency is clearly improved. In the VGG16 network, the accuracy of nodule classification is 0.045 higher than that of classification with deep features alone. Conclusion: The method proposed in this paper can effectively and accurately achieve automatic segmentation of lung parenchyma. In addition, the multi-feature fusion VGG16 network is effective for the classification of pulmonary nodules and can improve the accuracy of nodule classification.

13.
Cognit Comput ; : 1-14, 2022 May 26.
Article in English | MEDLINE | ID: mdl-35637880

ABSTRACT

Flipped learning is a blended learning method based on students' academic engagement online (outside class) and offline (inside class). In this pedagogy, students receive lessons at any time from lecture videos pre-loaded on a digital platform, at a place of their convenience, followed by in-classroom activities such as doubt clearing and problem solving. However, students are constantly exposed to high levels of distraction in this age of the Internet, so it is hard for an instructor to know whether a student has paid attention while watching a pre-loaded lecture video. To analyze the attention level of individual students, the captured brain signal, or electroencephalogram (EEG), can be utilized. In this study, we take a popular feature extraction technique called the Local Binary Pattern (LBP) and improve it to develop an enhanced feature extraction method. The adapted method, termed the 1D Multi-Point Local Ternary Pattern (1D MP-LTP), is used to extract unique features from the collected EEG signals. Standard classification techniques are exploited to classify the attention level of students. Experiments are conducted with data captured at the Intelligent Data Analysis Lab, NIT Rourkela, to show the effectiveness of the proposed feature extraction technique. The proposed 1D MP-LTP-based classification techniques outperform traditional and state-of-the-art classification techniques using LBP. This research can help instructors identify students who need special care to improve their learning ability. Researchers in educational technology can extend this work by adopting this methodology in other online teaching pedagogies such as Massive Open Online Courses (MOOCs).
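A 1-D multi-point local ternary pattern can be pictured as comparing each sample of a signal against several neighbors at different offsets with a tolerance, then histogramming the resulting codes. The sketch below illustrates this general idea; the offsets, tolerance t, and code packing are assumptions, not the paper's exact 1D MP-LTP definition.

```python
# Minimal sketch: 1-D multi-point local ternary pattern features for a signal window.
import numpy as np

def mp_ltp_1d(signal, offsets=(-4, -2, -1, 1, 2, 4), t=0.1):
    signal = np.asarray(signal, dtype=float)
    m = max(abs(o) for o in offsets)
    center = signal[m:len(signal) - m]
    upper = np.zeros_like(center, dtype=np.int32)
    lower = np.zeros_like(center, dtype=np.int32)
    for bit, o in enumerate(offsets):
        nb = signal[m + o:len(signal) - m + o]
        upper |= (nb > center + t).astype(np.int32) << bit
        lower |= (nb < center - t).astype(np.int32) << bit
    n_codes = 2 ** len(offsets)
    hu = np.bincount(upper, minlength=n_codes) / len(center)
    hl = np.bincount(lower, minlength=n_codes) / len(center)
    return np.concatenate([hu, hl])             # 2 x 64 = 128-dimensional feature

feat = mp_ltp_1d(np.random.randn(1024))         # placeholder EEG window
```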

14.
Sensors (Basel) ; 11(8): 8028-44, 2011.
Article in English | MEDLINE | ID: mdl-22164060

ABSTRACT

Texture-based analysis of images is a very common and much-discussed topic in the fields of computer vision and image processing. Several methods have already been proposed to codify texture micro-patterns (texlets) in images. Most of these methods perform well when a given image is noise-free, but real-world images contain different types of signal-independent as well as signal-dependent noise originating from different sources, even from the camera sensor itself. Hence, it is necessary to distinguish false textures appearing due to noise and thus achieve a reliable representation of texlets. In this proposal, we define an adaptive noise band (ANB) to approximate, up to a certain extent, the amount of noise contamination around a pixel. Based on this ANB, we generate reliable codes named the noise tolerant ternary pattern (NTTP) to represent the texlets in an image. Extensive experiments on several datasets from renowned texture databases, such as the Outex and the Brodatz databases, show that NTTP performs much better than state-of-the-art methods.
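The adaptive-noise-band idea can be sketched as a per-pixel tolerance: neighbor differences that stay inside the band are coded as 0 (likely noise), and only differences outside the band produce +1/-1 states. Estimating the band from the local standard deviation and scaling it by k are simplifying assumptions here, not the paper's exact ANB formulation.

```python
# Minimal sketch: ternary codes with a per-pixel adaptive noise band.
import numpy as np
from scipy import ndimage

def nttp_codes(img, k=1.0):
    img = img.astype(float)
    local_mean = ndimage.uniform_filter(img, size=3)
    local_sq = ndimage.uniform_filter(img ** 2, size=3)
    band = k * np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))  # ~local std
    h, w = img.shape
    c, bc = img[1:-1, 1:-1], band[1:-1, 1:-1]
    codes = []
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]:
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes.append(np.where(nb > c + bc, 1, np.where(nb < c - bc, -1, 0)))
    return np.stack(codes, axis=-1)              # 0 wherever the difference is within the band

codes = nttp_codes(np.random.rand(64, 64) * 255)
```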


Subjects
Pattern Recognition, Automated; Algorithms; Databases, Factual; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Models, Statistical; Noise; Normal Distribution; Pattern Recognition, Automated/methods; Photons; Reproducibility of Results; Surface Properties
15.
Front Biosci (Landmark Ed) ; 26(7): 222-234, 2021 07 30.
Article in English | MEDLINE | ID: mdl-34340269

ABSTRACT

Introduction: The prediction of interacting drug-target pairs plays an essential role in drug repurposing and drug discovery. Although biotechnology and chemical technology have made extraordinary progress, the process of dose-response experiments and clinical trials is still extremely complex, laborious, and costly. As a result, a robust computer-aided model is urgently needed to predict drug-target interactions (DTIs). Methods: In this paper, we report a novel computational approach combining the fuzzy local ternary pattern (FLTP), the Position-Specific Scoring Matrix (PSSM), and rotation forest (RF) to identify DTIs. More specifically, the target primary sequence is first numerically characterized as a PSSM, which records biological evolution information. Afterward, the FLTP method is applied to extract highly representative descriptors of the PSSM, and the combinations of FLTP descriptors and drug molecular fingerprints are regarded as the complete features of drug-target pairs. Results: Finally, the entire feature set is fed into rotation forests to infer potential DTIs. Five-fold cross-validation (CV) experiments achieve mean accuracies of 89.08%, 86.14%, 82.41%, and 78.40% on the Enzyme, Ion Channel, GPCR, and Nuclear Receptor datasets, respectively. Discussion: To further validate the model's performance, we performed experiments with the state-of-the-art support vector machine (SVM) and light gradient boosting machine (LGBM). The experimental results indicate the superiority of the proposed model in effectively and reliably detecting potential DTIs. It is anticipated that the proposed model can serve as a feasible and convenient tool for high-throughput identification of DTIs.


Subjects
Pharmaceutical Preparations; Support Vector Machine; Computational Biology; Databases, Protein; Drug Interactions; Position-Specific Scoring Matrices
16.
Bioengineered ; 11(1): 904-920, 2020 12.
Article in English | MEDLINE | ID: mdl-32815466

ABSTRACT

In recent years, the incidence of lung cancer has been increasing. Lung cancer detection is based on computed tomography (CT) imaging of the lung area to determine whether pulmonary nodules are present and then to judge whether they are benign or malignant. However, traditional manual reading, combined with limited experience and other problems, leads to visual fatigue, misdiagnosis, and missed diagnosis. In order to detect pulmonary nodules early and accurately, a new assistant diagnosis method for pulmonary nodules is proposed. Firstly, the image is preprocessed and denoised with a median filter, the lung parenchyma is segmented by a random walk algorithm, and the region of interest is extracted. Then, exploiting the continuity of the CT slices, features are extracted with a pulmonary-nodule texture feature extraction method based on the volume local direction ternary pattern. Finally, the pulmonary nodules are identified and classified by an assistant diagnosis model based on the Stacking algorithm. To illustrate the validity of the diagnosis model, experiments are carried out with ten-fold cross-validation. Experiments using data from the LIDC database show that the accuracy, sensitivity, and specificity of the proposed method are 82.2%, 85.7%, and 78.8%, respectively. Texture recognition based on the volume local direction ternary pattern is feasible for the identification of pulmonary nodules and provides a reference value for doctor-assisted diagnosis.
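A Stacking-based classifier, as used in the final step above, combines several base learners through a meta-learner. The sketch below shows the general pattern with scikit-learn's StackingClassifier under ten-fold cross-validation; the base learners, the meta-learner, and the placeholder feature vectors are assumptions, not the paper's exact configuration.

```python
# Minimal sketch: stacking ensemble evaluated with ten-fold cross-validation.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 64)                  # placeholder VLDTP-style feature vectors
y = np.random.randint(0, 2, 200)             # 0 = benign, 1 = malignant (placeholder labels)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(),
    cv=5)
scores = cross_val_score(stack, X, y, cv=10)  # ten-fold CV, as in the abstract
print(scores.mean())
```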


Subjects
Lung Neoplasms/diagnosis; Algorithms; Databases, Factual; Humans; Sensitivity and Specificity; Tomography, X-Ray Computed
17.
Biomed Eng Lett ; 10(3): 345-357, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32850176

ABSTRACT

In this letter, a new feature descriptor called the three-dimensional local oriented zigzag ternary co-occurrence fused pattern (3D-LOZTCoFP) is proposed for computed tomography (CT) image retrieval. Unlike conventional local pattern based approaches, in which the relationship between the reference and its neighbors in a circular neighborhood is captured in a 2D plane, the proposed descriptor encodes the relationship between the reference and its neighbors within a local 3D block drawn from multiscale Gaussian filtered images, employing a new 3D zigzag sampling structure. The proposed 3D zigzag scan around a reference not only provides an effective texture representation by capturing non-uniform and uniform local texture patterns, but also captures fine-to-coarse details via the multiscale Gaussian filtered images. We introduce three unique 3D zigzag patterns in four diverse directions. In 3D-LOZTCoFP, we first calculate the 3D local ternary pattern within a local 3D block around a reference using the proposed 3D zigzag sampling structure at radii 1 and 2. Then the co-occurrence of similar ternary edges within the local 3D cube is computed to further enhance the discriminative power of the descriptor. A quantization and fusion based scheme is introduced to reduce the feature dimension of the proposed descriptor. Experiments conducted on the popular NEMA and TCIA-CT image databases demonstrate the superior retrieval efficiency of the proposed 3D-LOZTCoFP descriptor over many local pattern based approaches in terms of average retrieval precision and average retrieval recall in CT image retrieval.

18.
Int J Comput Assist Radiol Surg ; 15(4): 601-615, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32152831

ABSTRACT

PURPOSE: The left ventricle (LV) myocardium undergoes deterioration with the reduction in ejection fraction (EF), and the analysis of its texture pattern plays a major role in diagnosing the severity of heart muscle disease. Hence, a classification framework combining the co-occurrence of local ternary pattern (COALTP) features and the whale optimization algorithm is proposed to improve the prediction accuracy of the disease severity level. METHODS: The analysis is carried out on 600 slices of 76 participants from the Kaggle challenge, including subjects with normal and reduced EF. The myocardium of the LV is segmented using an optimized edge-based local Gaussian distribution energy (LGE) level set, and end-diastolic and end-systolic volumes are calculated. COALTP is extracted for two distance levels (d = 1 and 2). A t-test is performed between the features of the individual binary classes, and the features are ranked using feature ranking methods. Experiments analyze the performance of various percentages of features in each bin combination under fivefold cross-validation. An integrated whale-optimized feature selection and multi-classification framework is developed to classify normal and pathological subjects using CMR images, and the DeLong test is performed to compare the ROCs. RESULTS: The optimized edge-embedded level set produced a better-segmented myocardium whose volume correlates with the gold standard (R = 0.98). The t-test shows that texture features extracted from severe subjects at distance level 1 are more statistically significant (p < 0.00004) than those of other pathologies. This approach produced an overall multi-class accuracy of 75% [confidence interval (CI) 63.74-84.23%] and an effective subclass specificity of 70% (CI 55.90-81.22%). CONCLUSION: The results show that the multi-objective whale-optimized multi-class support vector machine framework can effectively discriminate healthy subjects from patients with reduced ejection fraction and potentially support the treatment process.
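The co-occurrence step behind a COALTP-style feature can be pictured as a GLCM computed on an LTP-coded image: the image is first LTP-coded, then pairs of codes at a displacement d are accumulated into a matrix. The sketch below shows only the upper LTP code and a single displacement; the threshold, displacement, and this simplification are illustrative assumptions, not the paper's exact descriptor.

```python
# Minimal sketch: co-occurrence matrix of LTP codes at a fixed displacement.
import numpy as np

def ltp_upper_map(img, t=5.0):
    img = img.astype(float)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate([(-1, -1), (-1, 0), (-1, 1), (0, 1),
                                    (1, 1), (1, 0), (1, -1), (0, -1)]):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb > c + t).astype(np.int32) << bit
    return code                                   # values in 0..255

def code_cooccurrence(code_map, d=(0, 1), levels=256):
    dy, dx = d
    a = code_map[:code_map.shape[0] - dy, :code_map.shape[1] - dx]
    b = code_map[dy:, dx:]
    mat = np.zeros((levels, levels))
    np.add.at(mat, (a.ravel(), b.ravel()), 1)     # count code pairs (a, b)
    return mat / mat.sum()

cm = code_cooccurrence(ltp_upper_map(np.random.rand(64, 64) * 255))
```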


Subjects
Cardiovascular Diseases/diagnostic imaging; Heart Ventricles/diagnostic imaging; Ventricular Dysfunction, Left/diagnostic imaging; Aged; Aged, 80 and over; Algorithms; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Sensitivity and Specificity; Severity of Illness Index; Support Vector Machine
19.
Article in Chinese | WPRIM | ID: wpr-861381

ABSTRACT

Objective: To explore the value of fusing three-dimensional threshold-optimized local ternary pattern (LTP) texture features, conventional texture features, and grayscale statistical features for the diagnosis of prostate cancer. Methods: The peripheral zone of the prostate was segmented from multi-sequence MR images. The threshold-optimized LTP texture features, conventional texture features, and grayscale statistical features were extracted. The fused features were classified with the AdaBoost algorithm, and the diagnostic efficacy was analyzed. Results: The AUC of the three-dimensional threshold-optimized LTP texture features for predicting prostate cancer was 0.79±0.04, with sensitivity, specificity, and accuracy of 78.31% (65/83), 80.81% (80/99), and 79.67% (145/182), respectively. The AUC of the conventional texture features was 0.71±0.04, with sensitivity, specificity, and accuracy of 72.29% (60/83), 81.82% (81/99), and 77.47% (141/182), respectively. The AUC of the grayscale statistical features was 0.80±0.04, with sensitivity, specificity, and accuracy of 78.31% (65/83), 82.83% (82/99), and 80.77% (147/182), respectively. The AUC of the fused features was 0.87±0.04, with sensitivity, specificity, and accuracy of 86.75% (72/83), 88.89% (88/99), and 87.91% (160/182), respectively. Conclusion: The diagnostic efficacy for prostate cancer can be effectively improved by fusing local ternary pattern features, conventional texture features, and grayscale statistical texture features.
