1.
Article in English | MEDLINE | ID: mdl-39230611

ABSTRACT

PURPOSE: To assess the accuracy of deep learning models for the diagnosis of maxillary fungal ball rhinosinusitis (MFB) and to compare their accuracy, sensitivity, specificity, precision, and F1-score with those of a rhinologist. METHODS: Data from 1539 adult chronic rhinosinusitis (CRS) patients who underwent paranasal sinus computed tomography (CT) were collected. The overall dataset consisted of 254 MFB cases and 1285 non-MFB cases. The CT images were prepared and labeled to train the deep learning models; 70% of the images were used for training and 30% for testing. Whole-image analysis and instance segmentation analysis were each performed with three architectures: MobileNetV3, ResNet50, and ResNet101 for whole-image analysis, and YOLOv5X-SEG, YOLOv8X-SEG, and YOLOv9-C-SEG for instance segmentation. ROC curves were assessed, and accuracy, sensitivity (recall), specificity, precision, and F1-score were compared between the models and a rhinologist. Kappa agreement was evaluated. RESULTS: Whole-image analysis showed lower precision, recall, and F1-score than instance segmentation. The models exhibited an area under the ROC curve of 0.86 for whole-image analysis and 0.88 for instance segmentation. On the whole-image testing dataset, the MobileNetV3 model showed 81.00% accuracy, 47.40% sensitivity, 87.90% specificity, 66.80% precision, and a 67.20% F1-score. Instance segmentation yielded the best results, with YOLOv8X-SEG achieving 94.10% accuracy, 85.90% sensitivity, 95.80% specificity, 88.90% precision, and an 89.80% F1-score. The rhinologist achieved 93.5% accuracy, 84.6% sensitivity, 95.3% specificity, 78.6% precision, and an 81.5% F1-score. CONCLUSION: Deep learning models that apply localization and instance segmentation to paranasal sinus CT imaging are a practical and promising system for assisting physicians in diagnosing maxillary fungal ball.
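For reference, the per-model figures above follow directly from a binary confusion matrix. The sketch below, with placeholder counts rather than values from the study, shows how accuracy, sensitivity, specificity, precision, and F1-score are derived.

```python
# Minimal sketch: binary classification metrics as reported in the abstract.
# The confusion-matrix counts below are placeholders, not data from the study.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
    }

if __name__ == "__main__":
    # Hypothetical counts for a test split of MFB vs. non-MFB cases.
    print(classification_metrics(tp=66, fp=8, tn=378, fn=11))
```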

2.
Sensors (Basel) ; 22(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36236253

ABSTRACT

Thailand, like other countries worldwide, has experienced instability in recent years, and if current trends continue, the number of crimes endangering people or property will continue to grow. Closed-circuit television (CCTV) technology is now commonly used for surveillance and monitoring to ensure people's safety, and a weapon detection system can help understaffed police forces reduce their on-screen surveillance workload. Because CCTV footage captures the entire incident scene, weapons appear as small objects, which makes detection challenging. Since public datasets provide inadequate coverage of weapon detection in CCTV imagery, an Armed CCTV Footage (ACF) dataset, consisting of self-collected mock CCTV footage of pedestrians armed with pistols and knives, was assembled across different scenarios. This study presents an image tiling-based deep learning approach for detecting small weapon objects. Experiments were conducted on a public benchmark dataset (Mock Attack) to evaluate detection performance, where the proposed tiling approach improved mAP by a factor of 10.22. The tiling approach was then used to train different object detection models to analyze the improvement; on SSD MobileNet V2, training on the tiled ACF dataset achieved an mAP of 0.758 for the pistol and knife evaluation. Combining the tiling approach with the ACF dataset can therefore significantly enhance small-weapon detection performance. A tiling sketch is shown after the subject terms below.


Subjects
Crime, Television, Humans
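To illustrate the tiling idea described above, the sketch below splits a CCTV frame into overlapping patches so that small weapons occupy a larger fraction of the detector's input. The tile size and overlap are assumptions, not the study's settings.

```python
# Minimal sketch of image tiling for small-object detection.
# Tile size and overlap are assumptions, not the settings used in the study.
from typing import Iterator, Tuple
import numpy as np

def tile_image(frame: np.ndarray, tile: int = 640, overlap: int = 64) -> Iterator[Tuple[int, int, np.ndarray]]:
    """Yield (x, y, patch) tiles covering the frame with a small overlap."""
    h, w = frame.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = frame[y:y + tile, x:x + tile]
            yield x, y, patch

# Usage: run a detector on each patch, shift the predicted boxes back by
# (x, y), and merge the per-tile detections (e.g., with NMS).
```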
3.
PeerJ Comput Sci ; 10: e2100, 2024.
Article in English | MEDLINE | ID: mdl-38855220

ABSTRACT

Portable devices like accelerometers and physiological trackers capture movement and biometric data relevant to sports. This study uses data from wearable sensors to investigate deep learning techniques for recognizing human behaviors associated with sports and fitness. The proposed CNN-BiGRU-CBAM model, a hybrid architecture, combines convolutional neural networks (CNNs), bidirectional gated recurrent unit networks (BiGRUs), and convolutional block attention modules (CBAMs) for accurate activity recognition. CNN layers extract spatial patterns, the BiGRU captures temporal context, and CBAM focuses on informative BiGRU features, enabling precise identification of activity patterns. The novelty lies in integrating these components to learn spatial and temporal relationships while prioritizing the features most significant for activity detection. The model and baseline deep learning models were trained on the UCI-DSA dataset and evaluated with 5-fold cross-validation using multi-class classification accuracy, precision, recall, and F1-score. The CNN-BiGRU-CBAM model outperformed baseline models such as CNN, LSTM, BiLSTM, GRU, and BiGRU, achieving state-of-the-art results with 99.10% accuracy and F1-score across all activity classes. This enables accurate identification of sports and everyday activities using simple wearables and advanced deep learning techniques, facilitating athlete monitoring, technique feedback, and injury risk detection. The proposed model's design and thorough evaluation significantly advance human activity recognition for sports and fitness.
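As a rough illustration of the hybrid design described above, the sketch below combines 1D convolutions, a bidirectional GRU, and a simplified channel-attention block standing in for CBAM. All layer sizes, the attention simplification, and the class count are assumptions, not the paper's architecture.

```python
# Minimal sketch of a CNN + BiGRU + attention pipeline for wearable-sensor
# activity recognition. Layer sizes, the simplified attention block, and the
# number of classes are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Simplified stand-in for CBAM: channel attention only."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (batch, time, channels)
        weights = self.mlp(x.mean(dim=1))         # pool over time, score channels
        return x * weights.unsqueeze(1)

class CnnBiGruAttn(nn.Module):
    def __init__(self, n_sensors: int = 9, n_classes: int = 19):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bigru = nn.GRU(64, 64, batch_first=True, bidirectional=True)
        self.attn = ChannelAttention(128)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                                    # x: (batch, time, n_sensors)
        feats = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # spatial patterns
        seq, _ = self.bigru(feats)                           # temporal context
        seq = self.attn(seq)                                 # weight informative features
        return self.head(seq.mean(dim=1))                    # pooled class logits

# Usage: CnnBiGruAttn()(torch.randn(8, 125, 9)) -> logits of shape (8, 19)
```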

4.
Front Med (Lausanne) ; 11: 1303982, 2024.
Article in English | MEDLINE | ID: mdl-38384407

ABSTRACT

Introduction: Detection and counting of centroblast cells (CBs) in hematoxylin & eosin (H&E)-stained whole slide images (WSIs) is an important step in grading lymphoma. Each high-power field (HPF) patch of a WSI is inspected for the number of CBs and compared with the World Health Organization (WHO) guideline, which organizes lymphoma into three grades. Spotting and counting CBs is time-consuming and labor-intensive; moreover, there is often disagreement between different readers, and even a single reader may not perform consistently due to many factors. Method: We propose an artificial intelligence system that scans patches from a WSI and detects CBs automatically. The system works on the principle of object detection, with the CB as the single object class of interest. We trained the model on 1,669 example instances of CBs originating from the WSIs of 5 different patients, with the data split 80%/20% for training and validation, respectively. Result: The best performance came from the YOLOv5x6 model trained on the preprocessed CB dataset, which achieved a precision of 0.808, a recall of 0.776, an mAP at 0.5 IoU of 0.800, and an overall mAP of 0.647. Discussion: The results show that centroblast cells can be detected in WSIs with relatively high precision and recall.
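As an illustration of this single-class detection workflow, the sketch below runs a trained YOLOv5 model over HPF patches and counts detections per patch. The weights path, file names, and confidence threshold are hypothetical; loading custom weights through torch.hub is the usual ultralytics/yolov5 route.

```python
# Minimal sketch of counting centroblasts with a trained YOLOv5 detector.
# The weights path, threshold, and file names are hypothetical examples.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="cb_yolov5x6.pt")
model.conf = 0.25  # detection confidence threshold (assumed value)

def count_centroblasts(patch_paths):
    results = model(patch_paths)            # batched inference over HPF patches
    # Each per-image tensor holds rows of [x1, y1, x2, y2, conf, class].
    return {p: det.shape[0] for p, det in zip(patch_paths, results.xyxy)}

# Usage (hypothetical file names):
# counts = count_centroblasts(["hpf_001.png", "hpf_002.png"])
```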

5.
IEEE Open J Eng Med Biol ; 5: 514-523, 2024.
Article in English | MEDLINE | ID: mdl-39050971

ABSTRACT

Background: Deep learning models for patch classification in whole-slide images (WSIs) have shown promise in assisting follicular lymphoma grading, but these models often require pathologists to identify centroblasts and manually provide refined labels for model optimization. Objective: To address this limitation, we propose PseudoCell, an object detection framework for automated centroblast detection in WSIs that eliminates the need for extensive refined labels from pathologists. Methods: PseudoCell combines pathologist-provided centroblast labels with pseudo-negative labels generated from undersampled false-positive predictions selected using cell morphology features, reducing the reliance on time-consuming manual annotation. Results: The framework significantly reduces the pathologist's workload by accurately identifying and narrowing down the areas of interest that contain centroblasts. Depending on the confidence threshold, PseudoCell can eliminate 58.18-99.35% of irrelevant tissue area on a WSI, streamlining the diagnostic process. Conclusion: This study presents PseudoCell as a practical and efficient prescreening method for centroblast detection that does not require refined labels from pathologists. The discussion section provides detailed guidance for implementing PseudoCell in clinical practice.
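A rough sketch of the pseudo-negative idea is shown below: detections that overlap no pathologist-provided centroblast box are treated as false positives and undersampled into an explicit negative class. The IoU threshold and the random undersampling used here are assumptions; the framework itself undersamples based on cell morphology features.

```python
# Minimal sketch of building pseudo-negative labels from false-positive
# detections. IoU threshold and undersampling strategy are assumptions,
# not the framework's actual settings (which use cell morphology features).
import random

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def pseudo_negatives(predictions, centroblast_boxes, iou_thr=0.5, keep_ratio=0.3):
    """Keep an undersampled set of predictions that match no true centroblast."""
    false_positives = [
        p for p in predictions
        if all(iou(p, gt) < iou_thr for gt in centroblast_boxes)
    ]
    k = int(len(false_positives) * keep_ratio)
    return random.sample(false_positives, k)

# The sampled boxes are added back to training as an explicit negative class,
# so the detector learns to suppress confusers without extra pathologist labels.
```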

6.
Front Nutr ; 8: 732449, 2021.
Article in English | MEDLINE | ID: mdl-34733876

ABSTRACT

Carbohydrate counting is essential for well-controlled blood glucose in people with type 1 diabetes, but performing it precisely is challenging, especially for Thai foods. We therefore developed a deep learning-based system for automatic carbohydrate counting using Thai food images taken with smartphones. The newly constructed Thai food image dataset contained 256,178 ingredient objects with measured weights across 175 food categories in 75,232 images. These were used to train the object detector and weight estimator algorithms. After training, the system achieved a Top-1 accuracy of 80.9% and a root mean square error (RMSE) for carbohydrate estimation of <10 g on the test dataset. Another set of 20 images, containing 48 food items in total, was used to compare the accuracy of carbohydrate estimation between measured weight, the system's estimates, and eight experienced registered dietitians (RDs). The system's estimation error was 4%, while the nearest, lowest, and highest carbohydrate estimation errors among the RDs were 0.7%, 25.5%, and 7.6%, respectively. The RMSE for carbohydrate estimation was 9.4 for the system and 10.2 for the lowest RD. The system kept its estimation error below 10 g for 13 of the 20 images, placing it third behind only the two best-performing RDs: RD1 (15/20 images) and RD5 (14/20 images). Hence, the system estimated carbohydrate content with satisfactory accuracy, with results comparable to those of experienced dietitians.
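To illustrate the final counting step described above, the sketch below maps detected food categories and estimated weights to grams of carbohydrate and computes an RMSE against reference values. The category names and per-100 g carbohydrate values are illustrative, not the system's nutrient data.

```python
# Minimal sketch of the carbohydrate-counting step: detected food items and
# estimated weights are mapped to grams of carbohydrate via a lookup table.
# The per-100 g values below are illustrative, not the system's nutrient data.
CARB_PER_100G = {"steamed_rice": 28.0, "pad_thai": 22.0, "mango": 15.0}

def estimate_carbs(detections):
    """detections: list of (food_category, estimated_weight_in_grams)."""
    return sum(CARB_PER_100G[cat] * weight / 100.0 for cat, weight in detections)

def rmse(estimates, references):
    return (sum((e - r) ** 2 for e, r in zip(estimates, references)) / len(estimates)) ** 0.5

# Usage (hypothetical plate): rice + mango detected with estimated weights.
print(estimate_carbs([("steamed_rice", 150), ("mango", 80)]))  # grams of carbohydrate
```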
