Results 1 - 20 of 116
1.
BMC Med Inform Decis Mak ; 23(1): 232, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37858107

ABSTRACT

BACKGROUND: Cardiac arrhythmia is a cardiovascular disorder characterized by disturbances in the heartbeat caused by electrical conduction anomalies in cardiac muscle. Clinically, ECG machines are used to diagnose and monitor cardiac arrhythmia noninvasively. Since ECG signals are dynamic in nature and carry complex information, visual assessment and analysis are time-consuming and difficult. Therefore, an automated system that can assist physicians in the easy detection of arrhythmia is needed. METHOD: The main objective of this study was to create an automated deep learning model capable of accurately classifying ECG signals into three categories: cardiac arrhythmia (ARR), congestive heart failure (CHF), and normal sinus rhythm (NSR). To achieve this, ECG data from the MIT-BIH and BIDMC databases available on PhysioNet were preprocessed and segmented before being used for deep learning model training. Pretrained models, ResNet-50 and AlexNet, were fine-tuned and configured to achieve optimal classification results. The main outcome measures for evaluating the performance of the model were F-measure, recall, precision, sensitivity, specificity, and accuracy, obtained from a multi-class confusion matrix. RESULT: The proposed deep learning model showed an overall classification accuracy of 99.2%, average sensitivity of 99.2%, average specificity of 99.6%, and average recall, precision, and F-measure of 99.2% on the test data. CONCLUSION: The proposed work introduces a robust approach to arrhythmia classification compared with the most recent state of the art and will reduce the diagnosis time and errors that occur in the visual inspection of ECG signals.
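
As a rough illustration of the fine-tuning step described in the abstract (this is not the authors' code; the image directory, batch size, learning rate, and epoch count are assumptions), replacing AlexNet's 1000-way ImageNet head with a 3-class head in PyTorch could look like this:

```python
# Hypothetical sketch: fine-tuning a pretrained AlexNet for three ECG classes
# (ARR, CHF, NSR), assuming the segmented ECG recordings have already been
# converted to images and arranged in ImageFolder-style directories.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # AlexNet expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("ecg_images/train", transform=transform)  # assumed path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 3)    # replace the 1000-way head with 3 classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                       # assumed number of epochs
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The per-class sensitivity, specificity, and F-measure reported above would then be computed from the multi-class confusion matrix of the held-out test predictions.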


Subjects
Cardiovascular Diseases, Deep Learning, Humans, Computer-Assisted Signal Processing, Cardiac Arrhythmias/diagnosis, Electrocardiography/methods, Algorithms
2.
Sensors (Basel) ; 23(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37447919

ABSTRACT

With the increase in urban rail transit construction, instances of tunnel defects are on the rise, and cracks have become the focus of tunnel maintenance and management. It is therefore essential to carry out crack detection in a timely and efficient manner, both to prolong the service life of the tunnel and to reduce the incidence of accidents. In this paper, the design and structure of a tunnel crack detection system are analyzed. On this basis, a new method for crack identification and feature detection using image processing technology is proposed. The method takes full account of the characteristics of tunnel images and combines them with deep learning, and a deep convolutional network (Single-Shot MultiBox Detector, SSD) is used for object detection in complex images. The experimental results show that the test set accuracy and training set accuracy of the support vector machine (SVM) in the classification comparison test reach 88% and 87.8%, respectively, while the AlexNet-based deep convolutional neural network classification reaches a test accuracy of 96.7% and a training set accuracy of 97.5%. This recognition approach based on deep learning and image processing is therefore better suited to the detection of cracks in subway tunnels.


Subjects
Railroads, Neural Networks (Computer), Algorithms, Computer-Assisted Image Processing/methods, Support Vector Machine
3.
Sensors (Basel) ; 23(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37571620

ABSTRACT

With a view to the post-COVID-19 world and probable future pandemics, this paper presents an Internet of Things (IoT)-based automated healthcare diagnosis model that employs a mixed approach using data augmentation, transfer learning, and deep learning techniques and does not require physical interaction between the patient and physician. Through a user-friendly graphical user interface and the availability of suitable computing power on smart devices, the embedded artificial intelligence allows the proposed model to be used effectively by a layperson without the need for a dental expert, indicating any issues with the teeth and subsequent treatment options. The proposed method involves multiple processes, including data acquisition using IoT devices, data preprocessing, deep learning-based feature extraction, and classification through an unsupervised neural network. The dataset contains multiple periapical X-rays of five different types of lesions obtained through an IoT device mounted within the mouth guard. A pretrained AlexNet, a fast GPU implementation of a convolutional neural network (CNN), is fine-tuned using data augmentation and transfer learning and employed to extract a suitable feature set. The data augmentation avoids overtraining, whereas accuracy is improved by transfer learning. Later, support vector machine (SVM) and K-nearest neighbors (KNN) classifiers are trained for lesion classification. The proposed automated model based on the AlexNet extraction mechanism followed by the SVM classifier achieved an accuracy of 98%, showing the effectiveness of the presented approach.
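
A minimal sketch of the extract-then-classify pattern the abstract describes, with a pretrained AlexNet used as a fixed feature extractor feeding scikit-learn SVM/KNN classifiers, is shown below; the dataset handling and labels are assumed, not taken from the paper:

```python
# Illustrative only: pretrained AlexNet as a fixed 4096-d feature extractor,
# followed by SVM / KNN classifiers on the extracted vectors.
import torch
import torch.nn as nn
import numpy as np
from torchvision import models
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = nn.Sequential(*list(alexnet.classifier.children())[:-1])  # drop final FC
alexnet.eval()

def extract_features(batch):                 # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return alexnet(batch).numpy()        # (N, 4096) feature vectors

# X_batches: preprocessed periapical X-ray batches, y: lesion labels (assumed to exist)
# features = np.vstack([extract_features(b) for b in X_batches])
# svm = SVC(kernel="linear").fit(features, y)
# knn = KNeighborsClassifier(n_neighbors=5).fit(features, y)
```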


Subjects
COVID-19, Deep Learning, Internet of Things, Humans, Artificial Intelligence, Cluster Analysis
4.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679381

ABSTRACT

This article is devoted to the development of a classification method based on an artificial neural network architecture to solve the problem of recognizing the sources of acoustic influences recorded by a phase-sensitive OTDR. At the initial stage of signal processing, we propose the use of a band-pass filter to collect data sets with an increased signal-to-noise ratio. When solving the classification problem, we study three widely used convolutional neural network architectures: AlexNet, ResNet50, and DenseNet169. As a result of computational experiments, it is shown that the AlexNet and DenseNet169 architectures can obtain accuracies above 90%. In addition, we propose a novel CNN architecture based on AlexNet, which obtains the best results; in particular, its accuracy is above 98%. The advantages of the proposed model include low power consumption (400 mW) and high speed (0.032 s per net evaluation). In further studies, in order to increase the accuracy, reliability, and data invariance, the use of new algorithms for the filtering and extraction of acoustic signals recorded by a phase-sensitive reflectometer will be considered.
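
For illustration, the band-pass preprocessing stage mentioned above could be realized as a zero-phase Butterworth filter; the sampling rate and pass band here are placeholders rather than values from the paper:

```python
# A minimal sketch of band-pass filtering a 1-D acoustic trace to raise its SNR.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low_hz, high_hz, order=4):
    """Zero-phase band-pass filtering of a 1-D acoustic trace."""
    nyq = 0.5 * fs
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, signal)

fs = 10_000                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
raw = np.sin(2 * np.pi * 500 * t) + 0.5 * np.random.randn(t.size)  # toy trace
filtered = bandpass(raw, fs, low_hz=100, high_hz=1_000)
```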


Subjects
Algorithms, Neural Networks (Computer), Reproducibility of Results, Signal-to-Noise Ratio, Acoustics
5.
Sensors (Basel) ; 23(7)2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37050521

ABSTRACT

Speaker Recognition (SR) is a common task in AI-based sound analysis, involving structurally different methodologies such as Deep Learning or "traditional" Machine Learning (ML). In this paper, we compared and explored the two methodologies on the DEMoS dataset, consisting of 8869 audio files of 58 speakers in different emotional states. A custom CNN is compared to several pre-trained nets using image inputs of spectrograms and Cepstral-temporal (MFCC) graphs. An ML approach based on acoustic feature extraction, selection, and multi-class classification by means of a Naïve Bayes model is also considered. Results show how a custom, less deep CNN trained on grayscale spectrogram images obtains the most accurate results, 90.15% on grayscale spectrograms and 83.17% on colored MFCC. AlexNet provides comparable results, reaching 89.28% on spectrograms and 83.43% on MFCC. The Naïve Bayes classifier provides 87.09% accuracy and a 0.985 average AUC while being faster to train and more interpretable. Feature selection shows how F0, MFCC, and voicing-related features are the most characterizing for this SR task. The large number of training samples and the emotional content of the DEMoS dataset better reflect a real-world scenario for speaker recognition and account for the generalization power of the models.
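
For context, the grayscale spectrogram and MFCC inputs described above could be generated along these lines (the file name, sampling rate, and feature parameters are assumptions):

```python
# Sketch of turning an audio file into the spectrogram and MFCC "images"
# that are used as CNN inputs.
import numpy as np
import librosa

y, sr = librosa.load("speaker_sample.wav", sr=16_000)       # assumed file / rate

# Log-mel spectrogram (can be saved as a grayscale image for the CNN)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)

# Cepstral-temporal (MFCC) representation
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print(log_mel.shape, mfcc.shape)
```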


Subjects
Machine Learning, Sound, Bayes Theorem, Acoustics
6.
Sensors (Basel) ; 23(18)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37766026

ABSTRACT

Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life of hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research, AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer, were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while VisionTransformer yielded the lowest accuracy of 88.59%.


Subjects
Deep Learning, Sign Language, Humans, United States, Quality of Life, Gestures, Technology
7.
J Sci Food Agric ; 103(8): 3970-3983, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36397181

ABSTRACT

BACKGROUND: The purity of sorghum varieties is an important indicator of the quality of raw materials used in the distillation of liquors. Different varieties of sorghum may be mixed during the acquisition process, which will affect the flavor and quality of the liquor. To facilitate the rapid identification of sorghum varieties, this study proposes a sorghum variety identification model using hyperspectral imaging (HSI) technology combined with a convolutional neural network (AlexNet). RESULTS: First, the watershed algorithm, modified with the extended-maxima transform, was used to segment the hyperspectral images of single sorghum grains. The isolation forest algorithm was used to eliminate abnormal spectral data from the complete spectral data. Secondly, the AlexNet model for sorghum variety identification was established based on the two-dimensional gray image data of sorghum grains in group 1. The effects of different preprocessing methods and different convolution kernel sizes on the performance of the AlexNet model were discussed. The eigenvalues of the last layer of the AlexNet model were visualized using t-distributed stochastic neighbor embedding, which was used to evaluate the separability of the features extracted by the AlexNet model. The performance differences between the optimal AlexNet model and traditional machine learning models for sorghum variety identification were compared. Finally, the varieties of sorghum grains in groups 2 and 3 were identified based on the optimal AlexNet model, and the average accuracy values of the test set reached 95.62% and 95.91%, respectively. CONCLUSION: The results of this study demonstrate that HSI combined with the AlexNet model can provide a feasible technical approach for the detection of sorghum varieties. © 2022 Society of Chemical Industry.
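
As a small illustration of the outlier-removal step, the isolation forest can be applied to a spectral matrix as sketched below; the toy data and contamination rate are placeholders, not the paper's settings:

```python
# Rough illustration of discarding abnormal spectra before model training.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 200))         # 500 grains x 200 spectral bands (toy data)

iso = IsolationForest(contamination=0.05, random_state=0)
labels = iso.fit_predict(spectra)             # +1 = inlier, -1 = outlier
clean_spectra = spectra[labels == 1]
print(clean_spectra.shape)
```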


Subjects
Sorghum, Hyperspectral Imaging, Neural Networks (Computer), Algorithms, Edible Grain
8.
J Xray Sci Technol ; 31(1): 211-221, 2023.
Article in English | MEDLINE | ID: mdl-36463485

ABSTRACT

Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists use various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing them from benign lung tissue requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelet features, and support vector machines. In this study, we feed the integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images per class in three categories of benign (normal cells), adenocarcinoma, and squamous carcinoma cells (both cancerous), is used in this study to train and test the SVM classifiers. Using 10-fold cross-validation, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
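
A loose sketch of the feature-fusion idea, with 2-D DWT coefficients concatenated to AlexNet deep features and fed to a linear SVM, is given below; the wavelet choice, feature arrays, and labels are assumptions:

```python
# Illustrative fusion of wavelet coefficients with deep features for a linear SVM.
import numpy as np
import pywt
from sklearn.svm import LinearSVC

def dwt_features(gray_image):
    """Flattened single-level 2-D Haar DWT coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image, "haar")
    return np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])

demo = np.random.rand(64, 64)
print(dwt_features(demo).shape)               # 4 sub-bands of 32x32 -> 4096 values

# deep_feats: (N, 4096) AlexNet features, wavelet_feats: (N, D) DWT features,
# y: labels -- all assumed to be computed elsewhere.
# fused = np.hstack([deep_feats, wavelet_feats])
# clf = LinearSVC().fit(fused, y)
```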


Subjects
Adenocarcinoma, Squamous Cell Carcinoma, Lung Neoplasms, Humans, X-Ray Computed Tomography/methods, Lung Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis/methods, Adenocarcinoma/diagnostic imaging, Squamous Cell Carcinoma/diagnostic imaging, Lung/diagnostic imaging, Support Vector Machine
9.
Multimed Syst ; 29(2): 739-751, 2023.
Article in English | MEDLINE | ID: mdl-36310764

ABSTRACT

The pandemic caused by SARS-CoV-2, which originated in 2019, is continuing to cause serious havoc on the global population's health, economy, and livelihood. A critical way to suppress and restrain this pandemic is the early detection of COVID-19, which will help to control the virus. Chest X-rays are one of the more straightforward ways to detect COVID-19 compared to standard methods like CT scans and RT-PCR diagnosis, which are complex, expensive, and time-consuming. Our survey of the literature shows that researchers are currently working actively toward an efficient deep learning model that produces unbiased detection of COVID-19 from chest X-ray images. In this work, we propose a novel convolutional neural network model based on supervised classification that simultaneously computes identification and verification loss. We adopt a transfer learning approach using models pretrained on the ImageNet dataset, such as AlexNet and VGG16, as backbone models, and use data augmentation techniques to address class imbalance and boost the classifier's performance. Finally, our proposed classifier architecture ensures unbiased, high-accuracy results, outperforming existing deep learning models for COVID-19 detection from chest X-ray images and producing state-of-the-art performance. It shows strong and robust performance and proves to be easily deployable and scalable, thereby increasing the efficiency of analyzing chest X-ray images for the detection of coronavirus with high accuracy.
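
A hedged sketch of the kind of augmentation pipeline mentioned above is given below; the specific transforms and parameters are assumptions, not the authors' settings:

```python
# Illustrative chest X-ray augmentation pipeline of the kind used to enlarge
# the minority class and reduce class imbalance.
from torchvision import transforms

train_augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# The minority (COVID-19) class can additionally be oversampled, e.g. with
# torch.utils.data.WeightedRandomSampler, so each batch is roughly balanced.
```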

10.
Dev Sci ; 25(1): e13155, 2022 01.
Article in English | MEDLINE | ID: mdl-34240787

ABSTRACT

Little is known about the development of higher-level areas of visual cortex during infancy, and even less is known about how the development of visually guided behavior is related to the different levels of the cortical processing hierarchy. As a first step toward filling these gaps, we used representational similarity analysis (RSA) to assess links between gaze patterns and a neural network model that captures key properties of the ventral visual processing stream. We recorded the eye movements of 4- to 12-month-old infants (N = 54) as they viewed photographs of scenes. For each infant, we calculated the similarity of the gaze patterns for each pair of photographs. We also analyzed the images using a convolutional neural network model in which the successive layers correspond approximately to the sequence of areas along the ventral stream. For each layer of the network, we calculated the similarity of the activation patterns for each pair of photographs, which was then compared with the infant gaze data. We found that the network layers corresponding to lower-level areas of visual cortex accounted for gaze patterns better in younger infants than in older infants, whereas the network layers corresponding to higher-level areas of visual cortex accounted for gaze patterns better in older infants than in younger infants. Thus, between 4 and 12 months, gaze becomes increasingly controlled by more abstract, higher-level representations. These results also demonstrate the feasibility of using RSA to link infant gaze behavior to neural network models. A video abstract of this article can be viewed at https://youtu.be/K5mF2Rw98Is.
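
A toy sketch of the RSA computation described above, with synthetic arrays standing in for the gaze data and network activations, might look like this:

```python
# Minimal RSA logic: correlate the pairwise-similarity structure of gaze
# patterns with the pairwise-similarity structure of a CNN layer's activations.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 40
gaze = rng.normal(size=(n_images, 100))       # gaze pattern vector per photograph
layer_act = rng.normal(size=(n_images, 256))  # CNN layer activation per photograph

def similarity_matrix(X):
    return np.corrcoef(X)                     # image-by-image similarity

iu = np.triu_indices(n_images, k=1)           # upper triangle, excluding diagonal
rho, _ = spearmanr(similarity_matrix(gaze)[iu], similarity_matrix(layer_act)[iu])
print(f"gaze-layer RSA correlation: {rho:.3f}")
```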


Subjects
Eye Movements, Visual Cortex, Aged, Humans, Infant, Neural Networks (Computer), Visual Cortex/physiology, Visual Perception/physiology
11.
Chemometr Intell Lab Syst ; 231: 104695, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36311473

ABSTRACT

This paper aims to diagnose COVID-19 from Chest X-Ray (CXR) scan images in a deep learning-based system. First, the COVID-19 Chest X-Ray Dataset is used to semantically segment the lung regions in CXR images. The DeepLabV3+ architecture is trained using the masks of the lung regions in this dataset. The trained architecture is then fed with images from the COVID-19 Radiography Database. In order to improve the output images, several image preprocessing steps are applied. As a result, lung regions are successfully segmented from CXR images. The next step is feature extraction and classification. Features are extracted with a modified AlexNet (mAlexNet), and a Support Vector Machine (SVM) is used for classification. As a result, three-class data consisting of Normal, Viral Pneumonia, and COVID-19 classes are classified with 99.8% accuracy. The classification results show that the proposed method is superior to previous state-of-the-art methods.

12.
Sensors (Basel) ; 22(22)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36433545

ABSTRACT

In this paper, we propose a spectrum sensing algorithm based on the Jones vector covariance matrix (JCM) and the AlexNet model, i.e., the JCM-AlexNet algorithm, which exploits the different polarization-domain characteristics of signal and noise. We use the AlexNet model, which is good at extracting matrix features, as the classification model, and we use the Jones vector, which characterizes the polarization state, to calculate its covariance matrix and convert it into an image that serves as the input to the AlexNet model. We then calculate a likelihood ratio test statistic (AlexNet-LRT) based on the output of the model to classify signal and noise. Simulation analysis shows that the JCM-AlexNet algorithm performs better than the conventional polarization detection (PSD) algorithm and three other strong deep-learning-based spectrum sensing algorithms (LeNet5, long short-term memory (LSTM), and multilayer perceptron (MLP)) across different signal-to-noise ratios and false alarm probabilities.


Subjects
Algorithms, Neural Networks (Computer), Ocular Refraction
13.
Sensors (Basel) ; 22(14)2022 Jul 09.
Article in English | MEDLINE | ID: mdl-35890840

ABSTRACT

Nowadays, the demand for soft-biometric-based devices is increasing rapidly because of the widespread use of electronic items such as mobile phones, laptops, and other gadgets in daily life. Recently, the healthcare sector has also adopted soft-biometric technology, i.e., face biometrics, because the data (gender, age, facial expression, and spoofing status) of patients, doctors, and other hospital staff are managed and forwarded through digital systems to reduce paperwork. This makes the relationship between patients and doctors friendlier and makes access to medical reports and treatments easier, anywhere and at any moment of life. In this paper, we propose a new soft-biometric-based methodology for a secure biometric system, because medical information plays an essential role in our lives. In the proposed model, a 5-layer U-Net-based architecture is used for face detection, and an AlexNet-based architecture is used for classification of facial information, i.e., age, gender, facial expression, and face spoofing. The proposed model outperforms other state-of-the-art methodologies. The proposed methodology is evaluated and verified on six benchmark datasets: the NUAA Photograph Imposter Database, CASIA, Adience, the Images of Groups Dataset (IOG), the Extended Cohn-Kanade Dataset (CK+), and the Japanese Female Facial Expression (JAFFE) Dataset. The proposed model achieved an accuracy of 94.17% for spoofing, 83.26% for age, 95.31% for gender, and 96.9% for facial expression. Overall, the modifications made in the proposed model have given better results, and it will go a long way toward supporting soft-biometric-based applications in the future.


Subjects
Biometric Identification, Facial Recognition, Aged 80 and over, Biometric Identification/methods, Biometry, Face/anatomy & histology, Facial Expression, Female, Humans, Neural Networks (Computer)
14.
Sensors (Basel) ; 22(5)2022 Feb 26.
Article in English | MEDLINE | ID: mdl-35271011

ABSTRACT

Traditional methods for detecting the behavior of distracted drivers are not capable of capturing driver behavior features related to complex temporal features. With the goal of improving transportation safety and reducing fatal accidents on roads, this research article presents a Hybrid Scheme for the Detection of Distracted Driving called HSDDD. This scheme is based on a strategy of aggregating handcrafted and deep CNN features. HSDDD uses a three-tiered architecture: the Coordination tier, the Concatenation tier, and the Classification tier. We first obtain HOG features using handcrafted algorithms, and then at the Coordination tier we leverage four deep CNN models, AlexNet, Inception V3, ResNet50, and VGG-16, for extracting DCNN features. The DCNN features are fused with the HOG features at the Concatenation tier. PCA is then used as a feature selection technique; it takes the fused features, removes redundant and irrelevant information, and improves classification performance. After feature fusion and feature selection, two classifiers, KNN and SVM, at the Classification tier take the selected features and classify the ten classes of distracted driving behaviors. We evaluate the proposed scheme and observe its performance using accuracy metrics.
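
As an illustration of the tiers described above, HOG features can be fused with deep CNN features, reduced with PCA, and passed to an SVM roughly as follows; the array shapes and PCA size are placeholders, not the authors' values:

```python
# Illustrative fusion of handcrafted HOG features with deep CNN features,
# followed by PCA and an SVM classifier.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def hog_features(gray_image):
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

toy = np.random.rand(128, 128)                # toy grayscale driver image
print(hog_features(toy).shape)

# hog_feats: (N, Dh) stacked HOG vectors, cnn_feats: (N, Dc) concatenated
# AlexNet/Inception/ResNet/VGG features, y: the ten distraction classes.
# fused = np.hstack([hog_feats, cnn_feats])
# reduced = PCA(n_components=200).fit_transform(fused)
# clf = SVC(kernel="rbf").fit(reduced, y)
```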


Subjects
Deep Learning, Distracted Driving, Algorithms, Support Vector Machine
15.
Sensors (Basel) ; 22(12)2022 Jun 11.
Article in English | MEDLINE | ID: mdl-35746208

ABSTRACT

The convolutional neural network (CNN) has become a powerful tool in machine learning (ML) that is used to solve complex problems such as image recognition, natural language processing, and video analysis. Notably, the idea of exploring convolutional neural network architectures has gained substantial attention and popularity. This study focuses on several CNN architectures, LeNet, AlexNet, VGG16, ResNet-50, and Inception-V1, which were scrutinized and compared with each other for the detection of lung cancer using the publicly available LUNA16 dataset. Furthermore, multiple performance optimizers, root mean square propagation (RMSProp), adaptive moment estimation (Adam), and stochastic gradient descent (SGD), were applied in this comparative study. The performance of the CNN architectures was measured in terms of accuracy, specificity, sensitivity, positive predictive value, false omission rate, negative predictive value, and F1 score. The experimental results showed that the AlexNet architecture with the SGD optimizer achieved the highest validation accuracy for CT lung cancer, with an accuracy of 97.42%, a misclassification rate of 2.58%, 97.58% sensitivity, 97.25% specificity, 97.58% positive predictive value, 97.25% negative predictive value, a false omission rate of 2.75%, and an F1 score of 97.58%. AlexNet with the SGD optimizer performed best, outperforming the other state-of-the-art CNN architectures.
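
A brief sketch of the comparison dimension used in the study, swapping SGD, Adam, and RMSprop for the same network, is shown below; the learning rates and two-class setup are assumptions:

```python
# Illustrative optimizer sweep for the same architecture.
import torch
from torchvision import models

def make_optimizer(name, params):
    if name == "sgd":
        return torch.optim.SGD(params, lr=0.01, momentum=0.9)
    if name == "adam":
        return torch.optim.Adam(params, lr=1e-4)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=1e-4)
    raise ValueError(name)

model = models.alexnet(weights=None, num_classes=2)   # nodule vs. non-nodule (assumed)
for opt_name in ("sgd", "adam", "rmsprop"):
    optimizer = make_optimizer(opt_name, model.parameters())
    # ... train and record accuracy/sensitivity/specificity for each run ...
```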


Subjects
Lung Neoplasms, Neural Networks (Computer), Humans, Lung Neoplasms/diagnosis, Machine Learning, X-Ray Computed Tomography
16.
Sensors (Basel) ; 22(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35632242

ABSTRACT

Oral cancer is a dangerous and widespread cancer with a high death rate; it is among the most common cancers in the world, with more than 300,335 deaths every year. The cancerous tumor appears in the neck, oral glands, face, and mouth. There are many ways to detect this cancer, such as biopsy, in which small pieces of tissue are taken from the mouth and examined under a microscope. However, microscopic examination of tissue is not always sufficient for detecting oral cancer, because cancerous and normal cells can be difficult to distinguish. Detection of cancerous cells from microscopic biopsy images can give good results when biological approaches are applied accurately, but manual examination of such images leaves considerable room for human error. With the development of technology, deep learning algorithms now play a major role in medical image diagnosis, and they have been developed to detect breast cancer, oral cancer, lung cancer, and other diseases from medical images. In this study, a transfer learning model using the AlexNet convolutional neural network is proposed to extract features from oral squamous cell carcinoma (OSCC) biopsy images and train the model. Simulation results show that the proposed model achieved classification accuracies of 97.66% and 90.06% on training and testing, respectively.


Subjects
Squamous Cell Carcinoma, Head and Neck Neoplasms, Mouth Neoplasms, Biopsy, Squamous Cell Carcinoma/diagnosis, Humans, Computer-Assisted Image Processing/methods, Machine Learning, Mouth Neoplasms/diagnosis, Squamous Cell Carcinoma of the Head and Neck
17.
Radiol Med ; 127(4): 398-406, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35262842

ABSTRACT

PURPOSE: We developed a tool for locating and grading knee osteoarthritis (OA) in digital X-ray images and illustrate the potential of deep learning techniques to predict knee OA according to the Kellgren-Lawrence (KL) grading system. The purpose of the project is to see how effectively an artificial intelligence (AI)-based deep learning approach can locate and diagnose the severity of knee OA in digital X-ray images. METHODS: Selection criteria: patients above 50 years old with OA symptoms (knee joint pain, stiffness, crepitus, and functional limitations) were included in the study. Medical experts excluded patients with post-surgical evaluation, trauma, and infection from the study. We used 3172 anterior-posterior view knee joint digital X-ray images. We trained a Faster R-CNN architecture to locate the knee joint space width (JSW) region in digital X-ray images, incorporating ResNet-50 with transfer learning to extract the features. We used another pretrained network (AlexNet with transfer learning) for the classification of knee OA severity. We trained the region proposal network (RPN) using manually extracted knee areas as the ground truth, and medical experts graded the knee joint digital X-ray images based on the Kellgren-Lawrence score. An X-ray image is the input for the final model, and the output is a Kellgren-Lawrence grading value. RESULTS: The proposed model identified the minimal knee JSW area with a maximum accuracy of 98.516%, and the overall knee OA severity classification accuracy was 98.90%. CONCLUSIONS: Numerous diagnostic methods are available today, but the tools are not transparent, and automated analysis of OA remains a problem. The performance of the proposed model improves with fine-tuning of the network and is higher than that of existing works. We will extend this work to grade OA in MRI data in the future.
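
As a rough, hypothetical sketch of the localization stage (not the authors' pipeline), a torchvision Faster R-CNN can be adapted to detect the JSW region as follows; the number of classes and training details are assumed:

```python
# Illustrative adaptation of Faster R-CNN for knee joint-space-width localization.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # background + JSW

# Training expects a list of image tensors and a list of target dicts with
# "boxes" (manually annotated JSW regions) and "labels":
# losses = model(images, targets)   # returns a dict of losses in train mode
# The cropped JSW region can then be passed to an AlexNet classifier for KL grading.
```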


Subjects
Deep Learning, Knee Osteoarthritis, Artificial Intelligence, Humans, Knee Joint, Middle Aged, Knee Osteoarthritis/diagnostic imaging, Pain
18.
J Digit Imaging ; 35(2): 200-212, 2022 04.
Article in English | MEDLINE | ID: mdl-35048231

ABSTRACT

Magnetic resonance (MR) is one of the special imaging techniques used in the diagnosis of orthopedics and traumatology. In this study, a new method is proposed for the highly accurate, automatic detection of meniscal tears and anterior cruciate ligament (ACL) injuries. Images were collected in three different slice orientations: sagittal, coronal, and axial. Images taken from each slice were categorized into three databases: a sagittal database (sDB), a coronal database (cDB), and an axial database (aDB). The proposed model uses deep feature extraction: deep features are obtained from the fully-connected layers of the AlexNet architecture. In the second stage of the study, the most significant features are selected using the iterative ReliefF (IRF) algorithm. In the last step, the features are classified using the k-nearest neighbor (kNN) method. Three datasets were used in the study: sDB and cDB have four classes and consist of 442 and 457 images, respectively, while aDB has two class labels and consists of 190 images. The proposed model was applied to all three datasets, and accuracy values of 98.42%, 100%, and 100% were obtained for the sDB, cDB, and aDB datasets, respectively. The results showed that the proposed method detects meniscal tears and anterior cruciate ligament (ACL) injuries with high accuracy.


Subjects
Anterior Cruciate Ligament Injuries, Knee Injuries, Orthopedics, Anterior Cruciate Ligament/diagnostic imaging, Anterior Cruciate Ligament Injuries/diagnostic imaging, Humans, Knee Injuries/diagnostic imaging, Knee Injuries/pathology, Magnetic Resonance Imaging/methods, Retrospective Studies
19.
J Comput Sci Technol ; 37(2): 330-343, 2022.
Article in English | MEDLINE | ID: mdl-35496726

ABSTRACT

COVID-19 is a contagious infection that has severe effects on the global economy and our daily life. Accurate diagnosis of COVID-19 is of importance for consultants, patients, and radiologists. In this study, we use the deep learning network AlexNet as the backbone and enhance it in two ways: 1) adding batch normalization to help accelerate the training and reduce internal covariate shift; 2) replacing the fully connected layer in AlexNet with three classifiers: SNN, ELM, and RVFL. We therefore obtain three novel models from the deep COVID network (DC-Net) framework, named DC-Net-S, DC-Net-E, and DC-Net-R, respectively. After comparison, we find that the proposed DC-Net-R achieves an average accuracy of 90.91% on a private dataset (available upon email request) comprising 296 images, while its specificity reaches 96.13%, the best performance among the three proposed classifiers. In addition, we show that our DC-Net-R also performs much better than other existing algorithms in the literature. Supplementary Information: The online version contains supplementary material available at 10.1007/s11390-020-0679-8.
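
A hedged sketch of the first enhancement, rebuilding AlexNet's convolutional stack with a BatchNorm2d layer after every convolution, is shown below; the SNN/ELM/RVFL replacement classifiers from the paper are not reproduced here:

```python
# Illustrative insertion of batch normalization after each convolution in AlexNet.
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(weights=None)
layers = []
for layer in alexnet.features:
    layers.append(layer)
    if isinstance(layer, nn.Conv2d):
        layers.append(nn.BatchNorm2d(layer.out_channels))   # normalize after each conv
alexnet.features = nn.Sequential(*layers)
print(alexnet.features)
```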

20.
Zhongguo Yi Liao Qi Xie Za Zhi ; 46(3): 242-247, 2022 May 30.
Article in Chinese | MEDLINE | ID: mdl-35678429

ABSTRACT

Premature delivery is one of the direct factors that affect the early development and safety of infants. Its direct clinical manifestation is a change in the intensity and frequency of uterine contractions. The uterine electrohysterography (EHG) signal collected from the abdomen of pregnant women can accurately and effectively reflect uterine contractions and has higher clinical application value than invasive monitoring technology such as the intrauterine pressure catheter. Therefore, research on EHG-based preterm birth recognition algorithms is particularly important for perinatal fetal monitoring. We propose an EHG-based preterm birth recognition algorithm using a convolutional neural network (CNN), and a deep CNN model was constructed by combining the Gramian angular difference field (GADF) with transfer learning. The structure of the model was optimized using a clinically measured term-preterm EHG database. A classification accuracy of 94.38% and an F1 value of 97.11% were achieved. The experimental results show that the model constructed in this paper has a certain auxiliary diagnostic value for the clinical prediction of premature delivery.
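
For illustration, the GADF transform of a 1-D EHG segment can be computed directly from its angular encoding; the signal below is synthetic:

```python
# Self-contained sketch of the Gramian angular difference field (GADF) transform.
import numpy as np

def gadf(x):
    """Map a 1-D series to its Gramian angular difference field image."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1      # rescale to [-1, 1]
    phi = np.arccos(x)                                    # angular encoding
    return np.sin(phi[:, None] - phi[None, :])            # GADF[i, j] = sin(phi_i - phi_j)

segment = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * np.random.randn(128)
image = gadf(segment)                                     # 128 x 128 image for the CNN
print(image.shape)
```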


Subjects
Premature Birth, Algorithms, Electromyography, Female, Humans, Newborn Infant, Neural Networks (Computer), Pregnancy, Premature Birth/diagnosis, Uterine Contraction