Results 1 - 20 of 326
1.
Pest Manag Sci ; 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39377441

ABSTRACT

BACKGROUND: The use of computer vision and deep learning models to automatically classify insect species on sticky traps has proven to be a cost- and time-efficient approach to pest monitoring. As different species are attracted to different colours, the variety of sticky trap colours poses a challenge to the performance of the models. However, the effectiveness of deep learning in classifying pests on differently coloured sticky traps has not yet been sufficiently explored. In this study, we investigate the influence of sticky trap colour and imaging devices on the performance of deep learning models in classifying pests on sticky traps. RESULTS: Our results show that, using the MobileNetV2 architecture with transparent sticky traps as training data, the model predicted the pest species on transparent sticky traps with an accuracy of at least 0.95 and on other sticky trap colours with an F1 score of at least 0.85. Using a generalised linear model (GLM) and the Boruta feature selection algorithm, we also showed that sticky trap colour and model architecture significantly influenced the performance of the model. CONCLUSION: Our results support the development of automatic classification of pests on sticky traps, which should focus on colour and deep learning architecture to achieve good results. Future studies could aim to incorporate the trap system into pest monitoring, providing more accurate and cost-effective results in a pest management programme. © 2024 The Author(s). Pest Management Science published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.
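The per-class F1 scores reported for cross-colour evaluation can be computed directly from confusion counts. A minimal sketch (the species names and predictions below are invented for illustration; the paper's own pipeline is not shown):

```python
def f1_scores(y_true, y_pred):
    """Per-class F1 = 2*TP / (2*TP + FP + FN)."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        scores[c] = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return scores

# Hypothetical predictions for pests imaged on a differently coloured trap
truth = ["whitefly", "thrips", "whitefly", "aphid", "thrips"]
pred  = ["whitefly", "thrips", "aphid",    "aphid", "thrips"]
per_class = f1_scores(truth, pred)
macro_f1 = sum(per_class.values()) / len(per_class)
```

Averaging the per-class scores gives the macro F1, one common way to summarize multi-class performance.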

2.
BMC Med Imaging ; 24(1): 227, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198741

ABSTRACT

Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME) are vision-related complications commonly found in diabetic patients. The early identification of DR/DME grades facilitates the devising of an appropriate treatment plan, which ultimately prevents visual impairment in more than 90% of diabetic patients. Therefore, an automatic DR/DME grade detection approach based on image processing is proposed in this work. The retinal fundus image provided as input is pre-processed using the Discrete Wavelet Transform (DWT) with the aim of enhancing its visual quality. The precise detection of DR/DME is further supported by a suitable Artificial Neural Network (ANN) based segmentation technique. The segmented images are subsequently subjected to feature extraction using an Adaptive Gabor Filter (AGF) and feature selection using the Random Forest (RF) technique. The former has excellent retinal vein recognition capability, while the latter has exceptional generalization capability. The RF approach also improves the classification accuracy of the Deep Convolutional Neural Network (CNN) classifier. Moreover, the Chicken Swarm Algorithm (CSA) is used to further enhance classifier performance by optimizing the weights of both the convolutional and fully connected layers. The entire approach is validated for its accuracy in determining DR/DME grades using MATLAB software. The proposed DR/DME grade detection approach displays an excellent accuracy of 97.91%.
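The DWT pre-processing step decomposes the fundus image into frequency sub-bands. A minimal single-level 2-D Haar decomposition sketched in NumPy (the abstract does not name its wavelet; Haar is used here only as the simplest example):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2].astype(float)   # even rows, even cols
    b = img[0::2, 1::2].astype(float)   # even rows, odd cols
    c = img[1::2, 0::2].astype(float)   # odd rows, even cols
    d = img[1::2, 1::2].astype(float)   # odd rows, odd cols
    ll = (a + b + c + d) / 4.0          # low-frequency approximation
    lh = (a - b + c - d) / 4.0          # horizontal detail
    hl = (a + b - c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16).reshape(4, 4)       # toy 4x4 "image"
ll, lh, hl, hh = haar_dwt2(img)
```

Enhancement pipelines typically modify the sub-bands (e.g. attenuating noise in the detail bands) before reconstructing the image.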


Sujet(s)
Algorithmes , Rétinopathie diabétique , Oedème maculaire , 29935 , Rétinopathie diabétique/imagerie diagnostique , Rétinopathie diabétique/classification , Humains , Oedème maculaire/imagerie diagnostique , Oedème maculaire/classification , Analyse en ondelettes , Interprétation d'images assistée par ordinateur/méthodes
3.
ACM Trans Appl Percept ; 21(1)2024 Jan.
Article in English | MEDLINE | ID: mdl-39131565

ABSTRACT

Facial morphs created between two identities resemble both of the faces used to create the morph. Consequently, humans and machines are prone to mistake morphs made from two identities for either of the faces used to create the morph. This vulnerability has been exploited in "morph attacks" in security scenarios. Here, we asked whether the "other-race effect" (ORE)-the human advantage for identifying own- vs. other-race faces-exacerbates morph attack susceptibility for humans. We also asked whether face-identification performance in a deep convolutional neural network (DCNN) is affected by the race of morphed faces. Caucasian (CA) and East-Asian (EA) participants performed a face-identity matching task on pairs of CA and EA face images in two conditions. In the morph condition, different-identity pairs consisted of an image of identity "A" and a 50/50 morph between images of identity "A" and "B". In the baseline condition, morphs of different identities never appeared. As expected, morphs were identified mistakenly more often than original face images. Of primary interest, morph identification was substantially worse for cross-race faces than for own-race faces. Similar to humans, the DCNN performed more accurately for original face images than for morphed image pairs. Notably, the deep network proved substantially more accurate than humans in both cases. The results point to the possibility that DCNNs might be useful for improving face identification accuracy when morphed faces are presented. They also indicate the significance of the race of a face in morph attack susceptibility in applied settings.
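A 50/50 morph of the kind used in the morph condition can be approximated, at its simplest, as a pixel-wise average of two aligned images. A hedged sketch (real morphing pipelines also warp facial landmarks before blending, which this omits):

```python
import numpy as np

def simple_morph(face_a, face_b, alpha=0.5):
    """Pixel-wise blend of two geometrically aligned face images.
    alpha=0.5 gives the 50/50 morph described in the study."""
    assert face_a.shape == face_b.shape
    return ((1 - alpha) * face_a + alpha * face_b).astype(face_a.dtype)

# Toy grayscale "faces": uniform intensities 100 and 200
a = np.full((4, 4), 100, dtype=np.uint8)
b = np.full((4, 4), 200, dtype=np.uint8)
m = simple_morph(a, b)   # 50/50 morph
```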

4.
IEEE Open J Eng Med Biol ; 5: 514-523, 2024.
Article in English | MEDLINE | ID: mdl-39050971

ABSTRACT

Background: Deep learning models for patch classification in whole-slide images (WSIs) have shown promise in assisting follicular lymphoma grading. However, these models often require pathologists to identify centroblasts and manually provide refined labels for model optimization. Objective: To address this limitation, we propose PseudoCell, an object detection framework for automated centroblast detection in WSI, eliminating the need for extensive pathologist's refined labels. Methods: PseudoCell leverages a combination of pathologist-provided centroblast labels and pseudo-negative labels generated from undersampled false-positive predictions based on cell morphology features. This approach reduces the reliance on time-consuming manual annotations. Results: Our framework significantly reduces the workload for pathologists by accurately identifying and narrowing down areas of interest containing centroblasts. Depending on the confidence threshold, PseudoCell can eliminate 58.18-99.35% of irrelevant tissue areas on WSI, streamlining the diagnostic process. Conclusion: This study presents PseudoCell as a practical and efficient prescreening method for centroblast detection, eliminating the need for refined labels from pathologists. The discussion section provides detailed guidance for implementing PseudoCell in clinical practice.
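The pseudo-negative idea, generating negative labels by undersampling false-positive detections, can be sketched as follows (function and field names are illustrative, not PseudoCell's actual API):

```python
import random

def pseudo_negatives(false_positives, ratio, seed=0):
    """Undersample false-positive detections to build pseudo-negative
    labels, keeping roughly `ratio` of them for retraining."""
    rng = random.Random(seed)
    k = max(1, int(len(false_positives) * ratio))
    return rng.sample(false_positives, k)

# Hypothetical false-positive detections (box coords + confidence)
fps = [{"box": (i, i, i + 8, i + 8), "score": 0.9 - 0.01 * i} for i in range(20)]
negs = pseudo_negatives(fps, ratio=0.25)
```

In the paper the selection is additionally informed by cell-morphology features; this sketch shows only the undersampling step.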

5.
Sensors (Basel) ; 24(14)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39065902

ABSTRACT

Accurate prediction of scoliotic curve progression is crucial for guiding treatment decisions in adolescent idiopathic scoliosis (AIS). Traditional methods of assessing the likelihood of AIS progression are limited by variability and rely on static measurements. This study developed and validated machine learning models for classifying progressive and non-progressive scoliotic curves based on gait analysis using wearable inertial sensors. Gait data from 38 AIS patients were collected using seven inertial measurement unit (IMU) sensors, and hip-knee (HK) cyclograms representing inter-joint coordination were generated. Various machine learning algorithms, including support vector machine (SVM), random forest (RF), and novel deep convolutional neural network (DCNN) models utilizing multi-plane HK cyclograms, were developed and evaluated using 10-fold cross-validation. The DCNN model incorporating multi-plane HK cyclograms and clinical factors achieved an accuracy of 92% in predicting curve progression, outperforming SVM (55% accuracy) and RF (52% accuracy) models using handcrafted gait features. Gradient-based class activation mapping revealed that the DCNN model focused on the swing phase of the gait cycle to make predictions. This study demonstrates the potential of deep learning techniques, and DCNNs in particular, in accurately classifying scoliotic curve progression using gait data from wearable IMU sensors.
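An HK cyclogram is the closed curve traced by plotting hip angle against knee angle over one gait cycle. A minimal sketch of rasterising such a curve into a fixed-size image that a CNN can consume (angle ranges and grid size are invented for illustration):

```python
import numpy as np

def hk_cyclogram(hip_deg, knee_deg, bins=16, rng=(-20.0, 80.0)):
    """Rasterise the (hip angle, knee angle) trajectory of one gait
    cycle onto a bins x bins binary image."""
    img = np.zeros((bins, bins), dtype=np.float32)
    lo, hi = rng
    for h, k in zip(hip_deg, knee_deg):
        i = int((k - lo) / (hi - lo) * (bins - 1))
        j = int((h - lo) / (hi - lo) * (bins - 1))
        if 0 <= i < bins and 0 <= j < bins:
            img[i, j] = 1.0
    return img

t = np.linspace(0, 2 * np.pi, 100)
hip = 30 + 25 * np.sin(t)          # synthetic gait-like angles (degrees)
knee = 30 + 40 * np.sin(t - 1.0)   # phase-shifted knee trajectory
img = hk_cyclogram(hip, knee)
```

The shape of this closed loop encodes inter-joint coordination, which is what the DCNN in the study learns from.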


Sujet(s)
Apprentissage profond , Analyse de démarche , Scoliose , Humains , Scoliose/physiopathologie , Scoliose/diagnostic , Adolescent , Femelle , Analyse de démarche/méthodes , Mâle , Démarche/physiologie , Évolution de la maladie , Machine à vecteur de support , 29935 , Algorithmes , Enfant , Dispositifs électroniques portables , Genou/physiopathologie
6.
Sensors (Basel) ; 24(14)2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39065948

ABSTRACT

Over the past decades, drones have become more attainable by the public due to their widespread availability at affordable prices. Nevertheless, this situation sparks serious concerns in both the cyber and physical security domains, as drones can be employed for malicious activities that threaten public safety. However, detecting drones instantly and efficiently is a very difficult task due to their tiny size and swift flight. This paper presents a novel drone detection method using deep convolutional learning and deep transfer learning. The proposed algorithm employs a new feature extraction network, which is added to a modified You Only Look Once version 2 (YOLOv2) network. The feature extraction model uses bypass connections to learn features from the training sets and solves the "vanishing gradient" problem caused by the increasing depth of the network. The structure of YOLOv2 is modified by replacing the rectified linear unit (ReLU) with a leaky ReLU activation function and adding an extra convolutional layer with a stride of 2 to improve small object detection accuracy. Using leaky ReLU solves the "dying ReLU" problem. The additional convolutional layer with a stride of 2 reduces the spatial dimensions of the feature maps and helps the network focus on larger contextual information while still preserving the ability to detect small objects. The model is trained with a custom dataset that contains various types of drones, airplanes, birds, and helicopters under various weather conditions. The proposed model demonstrates notable performance, achieving an accuracy of 77% on the test images with only 5 million learnable parameters, in contrast to the Darknet53 + YOLOv3 model, which exhibits 54% accuracy on the same test set despite employing 62 million learnable parameters.
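The two architectural changes, leaky ReLU and a stride-2 convolution, are easy to state concretely. A sketch of the activation and of the standard output-size formula showing how a stride of 2 halves feature-map resolution (the slope 0.1 is illustrative; the paper's value is not given here):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Leaky ReLU keeps a small slope for x < 0, so negative units
    still receive gradient -- avoiding the 'dying ReLU' problem."""
    return np.where(x > 0, x, alpha * x)

def conv_out_size(n, kernel=3, stride=2, pad=1):
    """Spatial size after a convolution: floor((n + 2p - k)/s) + 1.
    With stride 2, the feature map is roughly halved."""
    return (n + 2 * pad - kernel) // stride + 1

x = np.array([-2.0, 0.0, 3.0])
y = leaky_relu(x)
size = conv_out_size(26)   # e.g. a 26x26 feature map -> 13x13
```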

7.
Front Oncol ; 14: 1346237, 2024.
Article in English | MEDLINE | ID: mdl-39035745

ABSTRACT

Pancreatic cancer is one of the most lethal cancers worldwide, with a 5-year survival rate of less than 5%, the lowest of all cancer types. Pancreatic ductal adenocarcinoma (PDAC) is the most common and aggressive pancreatic cancer and has been classified as a health emergency in the past few decades. The histopathological diagnosis and prognosis evaluation of PDAC is time-consuming, laborious, and challenging in current clinical practice conditions. Pathological artificial intelligence (AI) research has been actively conducted lately. However, accessing medical data is challenging; the amount of open pathology data is small, and the absence of open-annotation data drawn by medical staff makes it difficult to conduct pathology AI research. Here, we provide easily accessible high-quality annotation data to address the abovementioned obstacles. Data evaluation is performed by supervised learning using a deep convolutional neural network structure to segment 11 annotated PDAC histopathological whole slide images (WSIs) drawn by medical staff directly from an open WSI dataset. We visualized the segmentation results of the histopathological images with a Dice score of 73% on the WSIs, including PDAC areas, thus identifying areas important for PDAC diagnosis and demonstrating high data quality. Additionally, pathologists assisted by AI can significantly increase their work efficiency. The pathological AI guidelines we propose are effective in developing histopathological AI for PDAC and are significant in the clinical field.
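The Dice score used to evaluate the segmentation measures the overlap of predicted and reference masks. A minimal sketch:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

# Toy masks: two 4x4 squares offset by one pixel (overlap is 3x3)
pred = np.zeros((8, 8), dtype=np.uint8); pred[2:6, 2:6] = 1
gt   = np.zeros((8, 8), dtype=np.uint8); gt[3:7, 3:7] = 1
d = dice_score(pred, gt)
```

A Dice score of 1.0 means perfect overlap; the 73% reported above reflects partial agreement with the pathologist annotations on whole-slide images.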

8.
Int J Med Robot ; 20(4): e2664, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38994900

ABSTRACT

BACKGROUND: This study aimed to develop a novel deep convolutional neural network called Dual-path Double Attention Transformer (DDA-Transformer) designed to achieve precise and fast knee joint CT image segmentation and to validate it in robotic-assisted total knee arthroplasty (TKA). METHODS: The femoral, tibial, patellar, and fibular segmentation performance and speed were evaluated and the accuracy of component sizing, bone resection and alignment of the robotic-assisted TKA system constructed using this deep learning network was clinically validated. RESULTS: Overall, DDA-Transformer outperformed six other networks in terms of the Dice coefficient, intersection over union, average surface distance, and Hausdorff distance. DDA-Transformer exhibited significantly faster segmentation speeds than nnUnet, TransUnet and 3D-Unet (p < 0.01). Furthermore, the robotic-assisted TKA system outperforms the manual group in surgical accuracy. CONCLUSIONS: DDA-Transformer exhibited significantly improved accuracy and robustness in knee joint segmentation, and this convenient and stable knee joint CT image segmentation network significantly improved the accuracy of the TKA procedure.


Subject(s)
Arthroplasty, Replacement, Knee, Deep Learning, Knee Joint, Robotic Surgical Procedures, Tomography, X-Ray Computed, Humans, Arthroplasty, Replacement, Knee/methods, Robotic Surgical Procedures/methods, Tomography, X-Ray Computed/methods, Knee Joint/surgery, Knee Joint/diagnostic imaging, Male, 29935, Female, Image Processing, Computer-Assisted/methods, Surgery, Computer-Assisted/methods, Aged, Reproducibility of Results, Middle Aged, Tibia/surgery, Tibia/diagnostic imaging, Algorithms, Femur/surgery, Femur/diagnostic imaging, Imaging, Three-Dimensional/methods
9.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000965

ABSTRACT

Because the fault signal features of bearings are difficult to extract from vibration signals with strong background noise, and one-dimensional (1D) signals provide limited fault information, an optimal time-frequency fusion symmetric dot pattern (SDP) method for bearing fault feature enhancement and diagnosis is proposed. Firstly, the vibration signals are transformed into two-dimensional (2D) features by the time-frequency fusion SDP algorithm, which can analyse signal fluctuations at small scales across multiple scales and enhance bearing fault features. Secondly, the bat algorithm is employed to adaptively optimize the SDP parameters, which effectively improves the distinctions between various types of faults. Finally, the fault diagnosis model is constructed with a deep convolutional neural network (DCNN). To validate the effectiveness of the proposed method, Case Western Reserve University's (CWRU) bearing fault dataset and a laboratory bearing fault experimental platform were used. The experimental results illustrate that the fault diagnosis accuracy of the proposed method is 100%, which proves its feasibility and effectiveness. Compared with other 2D transformation methods, the proposed method achieves the highest accuracy in bearing fault diagnosis, validating its superiority.
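The SDP transform maps a 1-D signal to mirrored dots in polar coordinates; its lag and angular-gain parameters are what the bat algorithm tunes. A hedged sketch of one common SDP parameterisation (the constants here are illustrative, not the paper's optimized values):

```python
import numpy as np

def sdp_points(x, lag=1, gain=36.0, arms=6):
    """Symmetric dot pattern: normalize the signal to radii in [0, 1]
    and map the lagged signal to angles, mirrored over `arms`
    symmetry axes to form the characteristic snowflake pattern."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min() + 1e-12
    r = (x - x.min()) / span                    # radius of each dot
    shift = (np.roll(x, -lag) - x.min()) / span # lagged, normalized
    pts = []
    for m in range(arms):
        base = 360.0 / arms * m
        pts.append(np.stack([r[:-lag], base + gain * shift[:-lag]], axis=1))
        pts.append(np.stack([r[:-lag], base - gain * shift[:-lag]], axis=1))
    return np.concatenate(pts)   # (radius, angle in degrees) pairs

sig = np.sin(np.linspace(0, 4 * np.pi, 200))   # toy vibration signal
pts = sdp_points(sig)
```

Faults change the local fluctuation statistics of the signal, which shifts the dot pattern's shape; the DCNN then classifies these 2D patterns.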

10.
Network ; : 1-33, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39082422

ABSTRACT

The rapid advancements in Agriculture 4.0 have enabled continuous monitoring of soil parameters and crop recommendation based on soil fertility to improve crop yield. Accordingly, soil parameters such as pH, nitrogen, phosphorus, potassium, and soil moisture are exploited for irrigation control, followed by crop recommendation for the agricultural field. Smart irrigation control is performed using the Interactive guide optimizer-Deep Convolutional Neural Network (Interactive guide optimizer-DCNN), which supports decision-making regarding the soil nutrients. Specifically, the Interactive guide optimizer-DCNN classifier is designed to replace the standard ADAM algorithm with the modelled interactive guide optimizer, which borrows the alertness and guiding behaviour of a nature-inspired dog and cat population. In addition, the data is down-sampled to reduce redundancy and preserve important information, improving computing performance. The designed model attains an accuracy of 93.11% in predicting the minerals, pH value, and soil moisture, thereby exhibiting a higher recommendation accuracy of 97.12% when the model training is fixed at 90%. Further, the developed model attained F-score, specificity, sensitivity, and accuracy values of 90.30%, 92.12%, 89.56%, and 86.36% with 10-fold cross-validation in predicting the minerals, which reveals the efficacy of the model.

11.
Sci Rep ; 14(1): 16890, 2024 07 23.
Article in English | MEDLINE | ID: mdl-39043766

ABSTRACT

To quantitatively evaluate chronic kidney disease (CKD), a deep convolutional neural network-based segmentation model was applied to renal enhanced computed tomography (CT) images. A retrospective analysis was conducted on a cohort of 100 individuals diagnosed with CKD and 90 individuals with healthy kidneys, who underwent contrast-enhanced CT scans of the kidneys or abdomen. Demographic and clinical data were collected from all participants. The study consisted of two distinct stages: firstly, the development and validation of a three-dimensional (3D) nnU-Net model for segmenting the arterial phase of renal enhanced CT scans; secondly, the utilization of the 3D nnU-Net model for quantitative evaluation of CKD. The 3D nnU-Net model achieved a mean Dice Similarity Coefficient (DSC) of 93.53% for renal parenchyma and 81.48% for renal cortex. Statistically significant differences were observed among different stages of renal function for renal parenchyma volume (VRP), renal cortex volume (VRC), renal medulla volume (VRM), the CT values of renal parenchyma (HuRP), the CT values of renal cortex (HuRC), and the CT values of renal medulla (HuRM) (F = 93.476, 144.918, 9.637, 170.533, 216.616, and 94.283; p < 0.001). Pearson correlation analysis revealed significant positive associations between estimated glomerular filtration rate (eGFR) and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = 0.749, 0.818, 0.321, 0.819, 0.820, and 0.747, respectively, all p < 0.001). Similarly, negative correlations were observed between serum creatinine (Scr) levels and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = -0.759, -0.777, -0.420, -0.762, -0.771, and -0.726, respectively, all p < 0.001). For predicting CKD in males, VRP had an area under the curve (AUC) of 0.726, p < 0.001; VRC, AUC 0.765, p < 0.001; VRM, AUC 0.578, p = 0.018; HuRP, AUC 0.912, p < 0.001; HuRC, AUC 0.952, p < 0.001; and HuRM, AUC 0.772, p < 0.001. In females, VRP had an AUC of 0.813, p < 0.001; VRC, AUC 0.851, p < 0.001; VRM, AUC 0.623, p = 0.060; HuRP, AUC 0.904, p < 0.001; HuRC, AUC 0.934, p < 0.001; and HuRM, AUC 0.840, p < 0.001. The optimal HuRP cutoff values for predicting CKD are 99.9 Hu for males and 98.4 Hu for females, and the optimal HuRC cutoffs are 120.1 Hu for males and 111.8 Hu for females. The kidney was effectively segmented by our AI-based 3D nnU-Net model for enhanced renal CT images. For mild kidney injury, the CT values exhibited higher sensitivity than kidney volume. The correlation analysis revealed a stronger association of VRC, HuRP, and HuRC with renal function, while the associations of VRP and HuRM were weaker and that of VRM was the weakest. In particular, HuRP and HuRC demonstrated significant potential in predicting renal function. For diagnosing CKD, it is recommended to set the threshold values as follows: HuRP < 99.9 Hu and HuRC < 120.1 Hu in males, and HuRP < 98.4 Hu and HuRC < 111.8 Hu in females.
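The AUC values above can be understood via the Mann-Whitney interpretation: the probability that a randomly chosen case from one group scores higher than one from the other. A minimal sketch (the Hounsfield-unit values are invented for illustration):

```python
def auc_score(scores_hi, scores_lo):
    """AUC via the Mann-Whitney U statistic: the probability that a
    sample from `scores_hi` exceeds one from `scores_lo`
    (ties count one half)."""
    wins = 0.0
    for p in scores_hi:
        for n in scores_lo:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_hi) * len(scores_lo))

# Toy HuRP-like attenuation values: healthy kidneys enhance more
ckd     = [85.0, 90.0, 95.0, 99.0]
healthy = [98.0, 105.0, 110.0, 120.0]
auc = auc_score(healthy, ckd)
```

An AUC near 1.0, as with HuRC here (0.952 in males), indicates the measurement separates the two groups almost perfectly.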


Subject(s)
Kidney, Renal Insufficiency, Chronic, Tomography, X-Ray Computed, Humans, Renal Insufficiency, Chronic/diagnostic imaging, Male, Female, Tomography, X-Ray Computed/methods, Middle Aged, Retrospective Studies, Aged, Kidney/diagnostic imaging, Adult, 29935, Contrast Media, Imaging, Three-Dimensional/methods
12.
Gland Surg ; 13(5): 619-629, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38845827

ABSTRACT

Background: A deep convolutional neural network (DCNN) model was employed for the differentiation of thyroid nodules diagnosed as atypia of undetermined significance (AUS) according to the 2023 Bethesda System for Reporting Thyroid Cytopathology (TBSRTC). The aim of this study was to investigate the efficiency of ResNeSt in improving the diagnostic accuracy of fine-needle aspiration (FNA) biopsy. Methods: Fragmented images were used to train and test DCNN models. A training dataset was built from 1,330 samples diagnosed as papillary thyroid carcinoma (PTC) or benign nodules, and a test dataset was built from 173 samples diagnosed as AUS. ResNeSt was trained and tested to provide a differentiation. With regard to AUS samples, the characteristics of the cell nuclei were compared using the Wilcoxon test. Results: The ResNeSt model achieved an accuracy of 92.49% (160/173) on fragmented images and 84.78% (39/46) from a patient wise viewpoint in discrimination of PTC and benign nodules in AUS nodules. The sensitivity and specificity of ResNeSt model were 95.79% and 88.46%. The κ value between ResNeSt and the pathological results was 0.847 (P<0.001). With regard to the cell nuclei of AUS nodules, both area and perimeter of malignant nodules were larger than those of benign ones, which were 2,340.00 (1,769.00, 2,807.00) vs. 1,941.00 (1,567.50, 2,455.75), P<0.001 and 190.46 (167.64, 208.46) vs. 171.71 (154.95, 193.65), P<0.001, respectively. The grayscale (0 for black, 255 for white) of malignant lesions was lower than that of benign ones, which was 37.52 (31.41, 46.67) vs. 45.84 (31.88, 57.36), P <0.001, indicating nuclear staining of malignant lesions were deeper than benign ones. Conclusions: In summary, the DCNN model ResNeSt showed great potential in discriminating thyroid nodules diagnosed as AUS. Among those nodules, malignant nodules showed larger and more deeply stained nuclei than benign nodules.
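The reported κ of 0.847 is Cohen's kappa, agreement between the model and pathology corrected for chance agreement. A minimal sketch with invented labels:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_o = sum(1 for x, y in zip(a, b) if x == y) / n
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical model calls vs. pathology for six AUS nodules
model = ["PTC", "PTC", "benign", "benign", "PTC", "benign"]
truth = ["PTC", "PTC", "benign", "benign", "benign", "benign"]
kappa = cohens_kappa(model, truth)
```

Values above roughly 0.8, like the study's 0.847, are conventionally read as almost-perfect agreement.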

13.
Med Phys ; 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753975

ABSTRACT

BACKGROUND: Seed implant brachytherapy (SIBT) is a promising treatment modality for parotid gland cancers (PGCs). However, the current clinical standard dose calculation method based on the American Association of Physicists in Medicine (AAPM) Task Group 43 (TG-43) Report oversimplifies patient anatomy as a homogeneous water phantom medium, leading to significant dose calculation errors due to heterogeneity surrounding the parotid gland. Monte Carlo Simulation (MCS) can yield accurate dose distributions but the long computation time hinders its wide application in clinical practice. PURPOSE: This paper aims to develop an end-to-end deep convolutional neural network-based dose engine (DCNN-DE) to achieve fast and accurate dose calculation for PGC SIBT. METHODS: A DCNN model was trained using the patient's CT images and TG-43-based dose maps as inputs, with the corresponding MCS-based dose maps as the ground truth. The DCNN model was enhanced based on our previously proposed model by incorporating attention gates (AGs) and large kernel convolutions. Training and evaluation of the model were performed using a dataset comprising 188 PGC I-125 SIBT patient cases, and its transferability was tested on an additional 16 non-PGC head and neck cancers (HNCs) I-125 SIBT patient cases. Comparison studies were conducted to validate the superiority of the enhanced model over the original one and compare their overall performance. RESULTS: On the PGC testing dataset, the DCNN-DE demonstrated the ability to generate accurate dose maps, with percentage absolute errors (PAEs) of 0.67% ± 0.47% for clinical target volume (CTV) D90 and 1.04% ± 1.33% for skin D0.1cc. The comparison studies revealed that incorporating AGs and large kernel convolutions resulted in 8.2% (p < 0.001) and 3.1% (p < 0.001) accuracy improvement, respectively, as measured by dose mean absolute error. On the non-PGC HNC dataset, the DCNN-DE exhibited good transferability, achieving a CTV D90 PAE of 1.88% ± 1.73%. 
The DCNN-DE can generate a dose map in less than 10 ms. CONCLUSIONS: We have developed and validated an end-to-end DCNN-DE for PGC SIBT. The proposed DCNN-DE enables fast and accurate dose calculation, making it suitable for application in the plan optimization and evaluation process of PGC SIBT.
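The CTV D90 metric underlying the reported percentage absolute errors is the dose covering 90% of the target volume, i.e. the 10th percentile of the voxel doses. A hedged sketch with toy dose values:

```python
import numpy as np

def d90(doses):
    """D90: the minimum dose received by the hottest 90% of the
    target volume = the 10th percentile of the voxel doses."""
    return float(np.percentile(np.asarray(doses, dtype=float), 10))

def percent_abs_error(pred, ref):
    """Percentage absolute error of a predicted dose metric against
    the Monte Carlo reference."""
    return abs(pred - ref) / ref * 100.0

mc_doses  = np.linspace(80.0, 160.0, 101)  # toy CTV voxel doses (Gy)
dnn_doses = mc_doses * 1.01                # pretend 1% systematic offset
pae = percent_abs_error(d90(dnn_doses), d90(mc_doses))
```

The paper's 0.67% ± 0.47% CTV D90 error is computed in this spirit, with the MCS dose maps as ground truth.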

14.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective.This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population.Approach.Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials where other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise.Main results.Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area-under-curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks respectively with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications.Conclusion.Our DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks.Significance.Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
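The inter-trial versus intra-trial distinction comes down to whether the train/test split is made at the trial level or the window level. A minimal sketch of a trial-level (inter-trial) split; the data layout is invented for illustration:

```python
import random

def inter_trial_split(windows, test_frac=0.2, seed=0):
    """Split classification windows so that all windows from a given
    trial land on the same side (inter-trial). Splitting individual
    windows at random instead (intra-trial) leaks trial-specific
    signal into the test set and inflates accuracy."""
    trials = sorted({w["trial"] for w in windows})
    rng = random.Random(seed)
    rng.shuffle(trials)
    n_test = max(1, int(len(trials) * test_frac))
    test_trials = set(trials[:n_test])
    train = [w for w in windows if w["trial"] not in test_trials]
    test = [w for w in windows if w["trial"] in test_trials]
    return train, test

# 10 hypothetical trials, 50 one-second windows each
windows = [{"trial": t, "window": i} for t in range(10) for i in range(50)]
train, test = inter_trial_split(windows)
```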


Subject(s)
Attention, Auditory Perception, Deep Learning, Electroencephalography, Hearing Loss, Humans, Attention/physiology, Female, Electroencephalography/methods, Male, Middle Aged, Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Hearing Loss/diagnosis, Aged, Auditory Perception/physiology, Noise, Adult, Hearing Aids, Speech Perception/physiology, 29935
15.
Diagnostics (Basel) ; 14(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38786291

ABSTRACT

In computer-aided medical diagnosis, deep learning techniques have shown that it is possible to offer performance similar to that of experienced medical specialists in the diagnosis of knee osteoarthritis. In this study, a new deep learning (DL) software, called "MedKnee" is developed to assist physicians in the diagnosis process of knee osteoarthritis according to the Kellgren and Lawrence (KL) score. To accomplish this task, 5000 knee X-ray images obtained from the Osteoarthritis Initiative public dataset (OAI) were divided into train, valid, and test datasets in a ratio of 7:1:2 with a balanced distribution across each KL grade. The pre-trained Xception model is used for transfer learning and then deployed in a Graphical User Interface (GUI) developed with Tkinter and Python. The suggested software was validated on an external public database, Medical Expert, and compared with a rheumatologist's diagnosis on a local database, with the involvement of a radiologist for arbitration. The MedKnee achieved an accuracy of 95.36% when tested on Medical Expert-I and 94.94% on Medical Expert-II. In the local dataset, the developed tool and the rheumatologist agreed on 23 images out of 30 images (74%). The MedKnee's satisfactory performance makes it an effective assistant for doctors in the assessment of knee osteoarthritis.
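The balanced 7:1:2 split across KL grades described above is a stratified split. A minimal sketch (the grade labels and counts are illustrative, not the actual OAI metadata):

```python
import random
from collections import defaultdict

def stratified_split(items, labels, ratios=(0.7, 0.1, 0.2), seed=0):
    """Split items into train/valid/test with the given ratios while
    keeping each label's proportion balanced across the splits."""
    by_label = defaultdict(list)
    for item, lab in zip(items, labels):
        by_label[lab].append(item)
    rng = random.Random(seed)
    splits = ([], [], [])
    for lab, group in by_label.items():
        rng.shuffle(group)
        n = len(group)
        n_train = int(n * ratios[0])
        n_valid = int(n * ratios[1])
        splits[0].extend(group[:n_train])
        splits[1].extend(group[n_train:n_train + n_valid])
        splits[2].extend(group[n_train + n_valid:])
    return splits

# Hypothetical: 5 KL grades x 1000 images, mirroring the 5000-image setup
items = list(range(5000))
labels = [i % 5 for i in items]
train, valid, test = stratified_split(items, labels)
```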

16.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732936

ABSTRACT

Lung diseases are the third-leading cause of mortality in the world. Due to compromised lung function, respiratory difficulties, and physiological complications, lung disease brought on by toxic substances, pollution, infections, or smoking results in millions of deaths every year. Chest X-ray images pose a challenge for classification due to their visual similarity, leading to confusion among radiologists. To mitigate those issues, we created an automated system with a large data hub that contains 17 datasets of chest X-ray images, for a total of 71,096 images, and we aim to classify ten different disease classes. Because it combines various resources, our large dataset contains noise, annotations, class imbalances, data redundancy, etc. We conducted several image pre-processing techniques to eliminate noise and artifacts from images, such as resizing, de-annotation, CLAHE, and filtering. The elastic deformation augmentation technique also generates a balanced dataset. Then, we developed DeepChestGNN, a novel medical image classification model utilizing a deep convolutional neural network (DCNN) to extract 100 significant deep features indicative of various lung diseases. This model, incorporating Batch Normalization, MaxPooling, and Dropout layers, achieved a remarkable 99.74% accuracy in extensive trials. By combining graph neural networks (GNNs) with feedforward layers, the architecture is very flexible when it comes to working with graph data for accurate lung disease classification. This study highlights the significant impact of combining advanced research with clinical application potential in diagnosing lung diseases, providing an optimal framework for precise and efficient disease identification and classification.


Subject(s)
Lung Diseases, 29935, Humans, Lung Diseases/diagnostic imaging, Lung Diseases/diagnosis, Image Processing, Computer-Assisted/methods, Deep Learning, Algorithms, Lung/diagnostic imaging, Lung/pathology
17.
Genes (Basel) ; 15(4)2024 03 26.
Article in English | MEDLINE | ID: mdl-38674339

ABSTRACT

The precise identification of splice sites is essential for unraveling the structure and function of genes, constituting a pivotal step in the gene annotation process. In this study, we developed a novel deep learning model, DRANetSplicer, that integrates residual learning and attention mechanisms for enhanced accuracy in capturing the intricate features of splice sites. We constructed multiple datasets using the most recent versions of genomic data from three different organisms, Oryza sativa japonica, Arabidopsis thaliana and Homo sapiens. This approach allows us to train models with a richer set of high-quality data. DRANetSplicer outperformed benchmark methods on donor and acceptor splice site datasets, achieving an average accuracy of (96.57%, 95.82%) across the three organisms. Comparative analyses with benchmark methods, including SpliceFinder, Splice2Deep, Deep Splicer, EnsembleSplice, and DNABERT, revealed DRANetSplicer's superior predictive performance, resulting in at least a (4.2%, 11.6%) relative reduction in average error rate. We utilized the DRANetSplicer model trained on O. sativa japonica data to predict splice sites in A. thaliana, achieving accuracies for donor and acceptor sites of (94.89%, 94.25%). These results indicate that DRANetSplicer possesses excellent cross-organism predictive capabilities, with its performance in cross-organism predictions even surpassing that of benchmark methods in non-cross-organism predictions. Cross-organism validation showcased DRANetSplicer's excellence in predicting splice sites across similar organisms, supporting its applicability in gene annotation for understudied organisms. We employed multiple methods to visualize the decision-making process of the model. The visualization results indicate that DRANetSplicer can learn and interpret well-known biological features, further validating its overall performance. 
Our study systematically examined and confirmed the predictive ability of DRANetSplicer from various levels and perspectives, indicating that its practical application in gene annotation is justified.
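Splice-site classifiers such as DRANetSplicer take fixed-length DNA windows around candidate donor/acceptor sites as input; the standard representation is a one-hot matrix. As an illustration only (not the authors' code; the window content and length here are hypothetical), a minimal encoder might look like:

```python
import numpy as np

def one_hot_encode(seq: str) -> np.ndarray:
    """One-hot encode a DNA window (A, C, G, T) into a (len, 4) matrix.
    Ambiguous bases (e.g. N) become all-zero rows."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    encoded = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in mapping:
            encoded[i, mapping[base]] = 1.0
    return encoded

# A short window centred on a canonical donor site (the GT dinucleotide)
window = "CAGGTAAGT"
x = one_hot_encode(window)   # shape (9, 4); x[3] is the "G" of the GT motif
```

Matrices of this form are then stacked into batches and fed to the convolutional layers; residual and attention blocks operate on the resulting feature maps.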


Subject(s)
Arabidopsis , Oryza , RNA Splice Sites , Arabidopsis/genetics , RNA Splice Sites/genetics , Humans , Oryza/genetics , Deep Learning , Software , RNA Splicing , Computational Biology/methods
18.
Front Comput Neurosci ; 18: 1209082, 2024.
Article in English | MEDLINE | ID: mdl-38655070

ABSTRACT

Introduction: Face recognition has been a longstanding subject of interest in the fields of cognitive neuroscience and computer vision research. One key focus has been to understand the relative importance of different facial features in identifying individuals. Previous studies in humans have demonstrated the crucial role of eyebrows in face recognition, potentially even surpassing the importance of the eyes. However, eyebrows are not only vital for face recognition but also play a significant role in recognizing facial expressions and intentions, which might occur simultaneously and influence the face recognition process. Methods: To address these challenges, our current study aimed to leverage the power of deep convolutional neural networks (DCNNs), an artificial face recognition system, which can be specifically tailored for face recognition tasks. In this study, we investigated the relative importance of various facial features in face recognition by selectively blocking feature information from the input to the DCNN. Additionally, we conducted experiments in which we systematically blurred the information related to eyebrows to varying degrees. Results: Our findings aligned with previous human research, revealing that eyebrows are the most critical feature for face recognition, followed by eyes, mouth, and nose, in that order. The results demonstrated that the presence of eyebrows was more crucial than their specific high-frequency details, such as edges and textures, compared to other facial features, where the details also played a significant role. Furthermore, our results revealed that, unlike other facial features, the activation map indicated that the significance of eyebrows areas could not be readily adjusted to compensate for the absence of eyebrow information. This finding explains why masking eyebrows led to more significant deficits in face recognition performance. 
Additionally, we observed a synergistic relationship among facial features, providing evidence for holistic processing of faces within the DCNN. Discussion: Overall, our study sheds light on the underlying mechanisms of face recognition and underscores the potential of using DCNNs as valuable tools for further exploration in this field.
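The study's core manipulation, selectively blocking a facial feature from the DCNN's input, amounts to overwriting a region of the image with uninformative values before inference. A minimal sketch of that idea (the region coordinates and fill value are hypothetical, not taken from the paper):

```python
import numpy as np

def block_feature(img: np.ndarray, top: int, bottom: int,
                  left: int, right: int, fill: float = 0.5) -> np.ndarray:
    """Return a copy of img with a rectangular facial-feature region
    replaced by a uniform gray fill, removing its information content."""
    out = img.copy()
    out[top:bottom, left:right] = fill
    return out

face = np.random.rand(128, 128)                 # stand-in for an aligned face image
masked = block_feature(face, 30, 42, 20, 108)   # hypothetical eyebrow band
```

Comparing recognition accuracy on `masked` versus `face` across many identities, feature by feature, yields the importance ranking reported above; blurring instead of filling tests whether high-frequency detail, rather than mere presence, carries the information.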

19.
Article in English | MEDLINE | ID: mdl-38605999

ABSTRACT

Deep learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate assessment of their data uncertainty properties is essential to understand the stability of DLIR in response to noise. In this work, we aim to evaluate the data uncertainty of a DLIR method using real patient data and a virtual imaging trial framework and compare it with filtered-backprojection (FBP) and iterative reconstruction (IR). The ensemble of noise realizations was generated by using a realistic projection domain noise insertion technique. The impact of varying dose levels and denoising strengths was investigated for a ResNet-based deep convolutional neural network (DCNN) model trained using patient images. On the uncertainty maps, DCNN shows more detailed structures than IR although its bias map has less structural dependency, which implies that DCNN is more sensitive to small changes in the input. Both visual examples and histogram analysis demonstrated that hotspots of uncertainty in DCNN may be associated with a higher chance of distortion from the truth than IR, but may also correspond to better detection performance for some small structures.
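The uncertainty and bias maps described here follow from simple ensemble statistics: given many reconstructions of the same object under independent noise realizations, the per-pixel standard deviation measures data uncertainty and the deviation of the per-pixel mean from the truth measures bias. A toy sketch of that computation (the phantom, noise level, and ensemble size are arbitrary illustrations, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0        # stand-in anatomy (known ground truth)

# Ensemble of 50 "reconstructions": truth plus independent noise realizations
recons = truth + 0.1 * rng.standard_normal((50, 64, 64))

uncertainty_map = recons.std(axis=0)     # per-pixel data uncertainty
bias_map = recons.mean(axis=0) - truth   # per-pixel systematic error
```

In the actual study the noise realizations come from projection-domain noise insertion followed by FBP, IR, or DCNN reconstruction, so the maps reflect each algorithm's nonlinear response to input noise rather than the raw noise itself.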

20.
Comput Biol Med ; 174: 108466, 2024 May.
Article in English | MEDLINE | ID: mdl-38615462

ABSTRACT

Circular RNAs (circRNAs) have surfaced as important non-coding RNA molecules in biology. Understanding interactions between circRNAs and RNA-binding proteins (RBPs) is crucial in circRNA research. Existing prediction models suffer from limited availability and accuracy, necessitating advanced approaches. In this study, we propose CRIECNN (Circular RNA-RBP Interaction predictor using an Ensemble Convolutional Neural Network), a novel ensemble deep learning model that enhances circRNA-RBP binding site prediction accuracy. CRIECNN employs advanced feature extraction methods and evaluates four distinct sequence datasets and encoding techniques (BERT, Doc2Vec, KNF, EIIP). The model consists of an ensemble convolutional neural network, a BiLSTM, and a self-attention mechanism for feature refinement. Our results demonstrate that CRIECNN outperforms state-of-the-art methods in accuracy and performance, effectively predicting circRNA-RBP interactions from both full-length sequences and fragments. This novel strategy makes an enormous advancement in the prediction of circRNA-RBP interactions, improving our understanding of circRNAs and their regulatory roles.
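Of the four encodings CRIECNN evaluates, k-mer nucleotide frequency (KNF) is the simplest: a sequence is summarized by the relative frequencies of all 4^k possible k-mers. A minimal sketch of such an encoder (the input fragment is a made-up example; details of CRIECNN's exact KNF variant may differ):

```python
from collections import Counter
from itertools import product

def knf_encode(seq: str, k: int = 2) -> list[float]:
    """k-mer nucleotide frequency vector over the 4**k possible RNA k-mers,
    using overlapping windows, normalized to sum to 1."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts[m] / total for m in kmers]

vec = knf_encode("ACGUACGU", k=2)
# 7 overlapping 2-mers: AC, CG, GU, UA, AC, CG, GU
```

Vectors like this, concatenated with the learned (BERT, Doc2Vec) and physicochemical (EIIP) encodings, form the multi-view input to the ensemble CNN and BiLSTM layers.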


Subject(s)
Circular RNA , Circular RNA/genetics , Circular RNA/metabolism , Humans , Binding Sites , RNA-Binding Proteins/genetics , RNA-Binding Proteins/metabolism , Computational Biology/methods